Right. I think that, first of all, the estimates for the proportion of cases that we get are a little bit broader than that. Some people think it's more than 10 percent. But nonetheless, the general point is correct -- that we get a small proportion. We'd like to get more. But the system is imperfect. Without being able to require people to make those reports, if we don't have the authority to do that, it's hard to increase it beyond that. Again, we could do better if we had more case reports. But we're trying to do the best with the system that we have.
How much would it help if it were mandatory that doctors reported the adverse events that they picked up out in the real world?
In theory, it sounds like a great idea. The problem that we've got is that the medical profession really would not like that. They have a lot of paperwork requirements. It probably isn't really practical to expect that to happen. Of course, if it were mandatory, we'd get more case reports, and we'd probably be able to detect things a little bit earlier.
I just don't think it's really practical to expect that to happen with our current medical care system. But there are important improvements that we can make without actually mandating that the reports come in from every clinician.
Some of those ways are [that] we can do a better job of applying statistical and analytical tools to our large database. So improvements in computer science, computation, statistics, [and] epidemiology let us look at this data and pick up things earlier.
The other thing is that we can get reports electronically. Now, we sometimes get them electronically. But sometimes, they're paper, and they just get entered by hand. It's very time-consuming and resource-intensive. So if we got all of them electronically, we'd be able to manage the data much better.
There are other ways that we can improve the reports, like having information on adverse events come in automatically from the medical record, instead of having to be extracted from medical records by hand. There are already a lot of electronic medical records around the country. For example, when a patient is discharged from the hospital, the relevant records would automatically get transmitted to the FDA. That would increase the number of reports that we have. That's in the future.
So what's the consequence of not having such thorough reporting right now? What's the consequence of that?
You can't really say with any precision. But very generally, I think the consequences of an imperfect system are [that] sometimes we detect problems later than they're actually happening. We know that it takes a while for the reports to get in the system. If we got more, we might be able to detect things a little bit earlier.
The other imperfections in the system mean that it's very difficult for us to really attribute a set of specific adverse events to the drug. Sometimes there are other things that happen at the same time with the patients. It's really tough with the quality of the data we get to really attribute the adverse events to the drug if that's in fact what's happening.
How is the Safety Monitoring Group doing, in terms of personnel, the number of people? Does it need more people? Is it adequate?
We think it needs more people. Based on the estimates in our budget requests, and in the president's budget for the upcoming 2004 budget, we will get more people, assuming Congress gives it to us. As well, Congress passed the Prescription Drug User Fee Act in the last session. What that will do is give us many tens of additional employees in this part of the agency.
So we've got a plan, and we actually see where the resources are going to come from to beef up that part of the agency.
As long as you've raised the issue of the Prescription Drug User Fee Act, many of the critics of the Food and Drug Administration say that having industry pay for the work that's done at the Food and Drug Administration puts a lot of pressure on the agency to approve drugs and please industry. What do you have to say about that?
We don't really feel pressure to please the industry. We feel quite independent among the scientists. We have a large number of mechanisms for assuring high quality of the reports that we do to make decisions about new drugs. We just reject the idea that we're actually influenced by that. In fact, what's happened over the course of the User Fee activity is we've been able to hire a lot more people, improve the expertise of the people that we do have, and provide more tools to our employees. So we think it's really helped the review process.
What happens at the Food and Drug Administration if Congress doesn't renew the User Fee Act? What would happen to the employees here?
I don't think it's really going to happen. It's so important to the public health infrastructure of the country that we have a good strong drug review process. We just had it re-authorized for another five years. I just don't think it's going to happen.
In your personal opinion, wouldn't it make more sense to have taxpayer dollars fund this rather than have industry pay for it?
Well, there are lots of hypotheticals we could go into. You know what the federal budget pressures are. So I'd try to stick to the pragmatic reality. The reality is that, in terms of the management of FDA and of the center, we're really agnostic about where the money comes from. We think we can run a high-quality independent program, regardless of the source of the resources -- as long as, of course, the resources aren't linked to performance goals that are going to interfere with our independence. So far, that hasn't been an issue at all. ...
Let's talk about Baycol. What kind of awareness did the FDA have that Bayer had requested, under the Freedom of Information Act, adverse events information from the FDA to compare the different statin drugs?
The first thing you have to understand is that the staff that deals with Freedom of Information Act requests is totally different from the Drug Safety staff. We get many, many thousands of these requests every year. So getting a request from Bayer for a particular drug wouldn't be considered unusual at all. We have records of one request from Bayer among a large pool of requests for that drug, and the much larger pool of all the requests that came in during that period. So we weren't specifically aware. Even if we were, it wouldn't have really struck anyone as being unusual.
So these Freedom of Information Act inquiries never tip off the Safety Division that there might be a problem out there that a company is looking into?
No. I would hope that the mechanisms that we have in place provide much higher quality information than just the fact that a company is asking for something. ...
Bayer had a tabulation of the adverse events that were associated with their drug, Baycol, and the other statins. They did this tabulation specifically comparing Baycol and Lipitor. In their analysis, they found that Baycol with gemfibrozil was 855 times more likely to cause rhabdomyolysis than Lipitor with gemfibrozil, and Baycol without any other drug was 20 times more likely. So what I'm asking is, what did the FDA know about the results that Bayer got from this analysis?
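Comparative figures like "855 times" or "20 times more likely" boil down to ratios of reporting rates between drugs. As a minimal illustrative sketch -- the function and all counts below are invented for illustration and are not Bayer's actual data -- such a ratio can be computed like this:

```python
def reporting_rate_ratio(events_a: int, rx_a: int,
                         events_b: int, rx_b: int) -> float:
    """Ratio of adverse-event reports per prescription, drug A vs. drug B."""
    return (events_a / rx_a) / (events_b / rx_b)

# Invented counts: 40 reports per 100,000 prescriptions of drug A
# versus 2 reports per 100,000 prescriptions of drug B.
ratio = reporting_rate_ratio(40, 100_000, 2, 100_000)  # roughly 20-fold
```

A ratio like this only describes reporting, not causation; as the rest of the interview makes clear, reporting differences can reflect many things besides the drug itself.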
My understanding of what the agency went through at that point is that we weren't aware then of the difference between Baycol and similar drugs in the class with regard to the drug alone, without gemfibrozil. We were aware of the issue with gemfibrozil, because that's what resulted in the "Dear Doctor" letter.
The thing to keep in mind, generally, which I think helps you with this question, is our expectation. When a company becomes aware of a specific problem with their drug, they come to us; that's our expectation.
So how did Bayer do in this case? I mean, if Bayer had this information, what was their responsibility at that point in time?
You sent me one document. I can't really put myself in the mind of Bayer, because I don't know the totality of everything they were looking at. So I'd rather not comment specifically on this case, except to say, if they were aware that there was a problem, like with any other drug, we would expect them to come to us promptly. As you know, we did not meet with the company on this specific issue of monotherapy for another year.
I know it's an uncomfortable situation.
It's not so much that it's uncomfortable, in that it's hypothetical, because I don't have all of the data that may have been available to the company. It's just I can't put myself in their shoes.
Well, let's consider it hypothetically. If a company had data suggesting that their drug was 20 times more likely to cause an adverse event than one of their competitors, would it be a company's responsibility--
We would expect if a company became aware of a situation like that -- and they were convinced about the validity of the data, because there may be data that would argue that there's some problem with this analysis -- if they were convinced of the validity of the data, we would expect them to come to us promptly.
Does it surprise you to see the information that we've sent to you?
The issue is whether they had other information that contradicted it, or whether they thought the information was sufficiently strong to make a case.
Is it reasonable, with a difference like the one Bayer apparently found, to doubt that that's a significant signal that says you need to take action?
There are lots of flaws in the reports that come in. There are lots of ways that the data could be demonstrated to not be valid in a hypothetical situation. So again, if they reviewed the data and they were convinced that it was a valid comparison, then we would expect to be informed of it promptly. Now, they came to us a year later with this specific case study, which tended to minimize the problems with the drug. We didn't agree with them on this analysis. That's why the action occurred the following year.
They brought up comparisons that claimed the drug didn't have a huge difference from its competitors in the statin class. We didn't think that analysis was very strong. That's an example of a company making a case that there wasn't a problem. But in our independent view of it, we just didn't agree.
Doesn't the story of Baycol, in a sense, make an argument for why the system needs to be more reliable? Because in a way, Bayer was able to hide behind saying, "The system is imperfect. We doubt this data. We don't trust it."
Yes. Again, I can't put myself in the spot of knowing everything Bayer said. But we start out with admitting that the system is imperfect; that, with more resources and focus on new technologies, we can do better. We are holding ourselves to that standard, and we expect that we will do better.
Did Baycol turn out to be more dangerous than other statins, when used alone?
Yes. What the data shows is that this drug, used at the dose that achieved the same level of cholesterol reduction as other drugs in the class, had more adverse event reports. Now, in the clinical trial that we reviewed, we did not see this rhabdomyolysis, which is the muscle lysis side effect, which is so worrisome and so dangerous. In retrospect, going back, there was a signal there. There was a signal of myopathy, which is damage to the muscles that causes an increase in the blood level of an enzyme that is released when the muscles are destroyed.
In retrospect, we can see that that signal was there. But in the clinical trial, since we didn't have the next step -- which is the rhabdomyolysis occurring -- we didn't make the connection. ...
In terms of post-marketing surveillance, what lessons did the FDA learn from the whole Fen-Phen, Pondimin and Redux story?
Well, I think probably the major thing that becomes apparent looking at that story was not really a surprise, but it reiterated the fact that this is an issue. Our system isn't perfect. It sometimes detects things after other parts of the medical care system are able to detect them. In this case, what happened is that some very astute physicians out in Minnesota were lucky enough to see a series of similar cases where they saw heart valve abnormalities, and they put the link together.
Again, looking at our data after the initial case reports that came in from the Mayo Clinic, we could see the signal. I think the lesson is that there's a lot of room for us to go in terms of making improvements with the system. Again, we already knew that, and we continue to know it now.
Tell me how the FDA first learned about any possible association with Pondimin or Redux or Fen-Phen and damage to heart valves.
My understanding of what happened is that these clinicians at the Mayo Clinic contacted our staff as they began to make these observations and [were] preparing to publish the observations. We worked with the Mayo scientists to look at the extent of the problem. Then we worked with the companies to send out a letter to physicians all over the country, to gather more specific information about this problem, which eventually led -- less than nine months after the publication of the journal article -- to the drug being withdrawn, and the similar drugs, as well. ...
I came across some interesting documents that I wanted to ask you about. Apparently, the FDA heard from the manufacturers of Phentermine that there was a problem. In April 1997, manufacturers submitted 11 urgent 15-day reports about heart valve problems. They were received and reviewed by the FDA in May. Also in early May, the doctor from Fargo called the FDA and spoke with somebody in the Drug Safety Division and faxed in 15 reports, and never heard anything back from anybody. It was as if, when the phone call came in from the Mayo Clinic, it was the first time anybody had ever heard of this problem. Yet you had groups of reports just sitting here for months. [Editor's note: after this interview was conducted, FRONTLINE found out from Wyeth, the manufacturer of Pondimin and Redux, that they, too, submitted reports to the FDA that were received in mid-April 1997.]
You said you just saw the documents recently. I'd have to go back to the people who were working on this, and ask them what the story is with those case reports and what was going through their mind. I just can't comment on it. But again, it's not surprising that we had heard about some case reports before a definitive case is really made that it's a [problem], because that's a very frequent occurrence.
The one thing that struck me as kind of odd is that there was an official story told about this whole episode. It was published in the FDA's internal newsletter. It was written by one of the women in the Safety Division, who had been doing the investigation of this problem. The story begins that this whole event began when the Mayo Clinic called the FDA. It turns out that she was the person who had been receiving those reports two months earlier. What do you think about that?
I'd have to look into the situation in a little bit more detail. The fact is, it's not surprising that there are some cases that are reported before a connection is actually made.
So it takes time to look into these? If you get an adverse event, you're not necessarily going to raise a flag--
If you get a bunch of case reports that look similar, the way the system works is, there would be consultation between the people who were experts in that drug, the condition the drug is being treated for, and the Drug Safety people. They would look at a whole range of factors -- whether the adverse event occurred without use of the drug, or whether this was a unique sort of combination between the drug and the adverse event. It does take some analysis to be able to come to conclusions.
Since that time, we've developed some techniques, called data mining techniques, that would let us detect these connections earlier, at least from a hypothesis-generating perspective, before we have the time to do the detailed case look-back. So we get the cases. We have to call the physicians to find out what the actual situation is and get more detail. That is time-consuming. We would like to move toward a system where we're able to detect, identify, and connect adverse events with the specific drug earlier.
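The data mining techniques described here are typically disproportionality measures computed over the adverse-event database. As a minimal sketch -- not the FDA's actual method, with invented counts -- one standard measure, the proportional reporting ratio (PRR), compares how often an event appears among one drug's reports versus all other drugs' reports:

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio for one drug-event pair.

    a: reports mentioning the drug AND the event of interest
    b: reports mentioning the drug, with other events
    c: reports for all other drugs mentioning the event
    d: reports for all other drugs, with other events
    """
    return (a / (a + b)) / (c / (c + d))

# Invented counts: the event appears in 20 of 1,000 reports for the
# drug, but in only 50 of 100,000 reports for everything else.
signal = prr(20, 980, 50, 99_950)  # proportionally about 40x over-reported
```

A high PRR doesn't establish causation; as the speaker notes, it is hypothesis-generating, and flagged drug-event pairs still require the detailed case look-back.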
Would it surprise you that there's a doctor out there that says, "Hey, I submitted 15 reports, and nobody is getting back to me?"
What we strive for when we receive reports is that we can react to them very quickly. We can follow them up as quickly as possible. We can meet internally, analyze and detect whether the connection is really there between the drug and the adverse event. Then [we can] take regulatory action if that's necessary. We're constantly striving to improve that system, and shorten the timeframe.
So why would the FDA's official story--
I don't know what the story is with the story. I don't know if the story that you're referring to was meant to be a comprehensive, point-by-point examination of everything that happened, or whether it was a condensed sort of story that would have left out details, because there wasn't room. I'm not familiar with that particular description that you're talking about.
It would seem like you'd want to say, "We learned about this for the first time two months earlier, and we didn't make the connection." It seems like you'd want to be honest about that.
We really have nothing to hide. All of our data in this regard is public, except for data that can be identified with specific patients. We make it available to people who want to look at it. We wouldn't try, by means of a description like you're talking about, to try to conceal what actually happened. I don't know in this particular situation what you're talking about. Since the data is all public, and we're striving to improve and protect the public's health, there's no reason to hide anything having to do with adverse events and drugs.
Dr. Leo Lutwak reviewed the Redux applications. He says that, throughout the approval process for Redux and after approval, when there was consideration of removing Pondimin and Redux, that the recommendation of the reviewing division were continually ignored. How did that happen?
Let me speak in general about the approval process, because I think it will really help in understanding this. We have a number of different levels of scientific review staff. There's a primary medical reviewer. Then there are at least a couple of expert reviews in the other scientific disciplines -- pharmacologists, statisticians. There are more senior and more junior people. It is a compilation of the opinions of the entire review team and their supervisors that results in decisions being made to approve drugs, or to remove them from the market.
It is absolutely not unusual at all to find that there's someone in this process that has a dissenting view from others. In fact, it's something that we encourage, that we think is normal, that we expect. People wouldn't want to work here if everybody always agreed with each other.
So the fact that you can identify cases where a medical reviewer had an opinion that may have been contrary to a final decision that was made, for a whole range of drugs or different kinds of outcomes, is not surprising at all. In this particular case, what happened is the medical reviewer had an opinion. But the judgment of the center, after considering the broad range of risks and benefits and taking into account the opinions of the advisory committee, was that the approval and the continuation of the drug on the market was justified.
Dr. Lutwak claims that, after Pondimin and Redux were removed from the market, instead of being rewarded for his effort to help protect the American public, that he was actually marginalized and transferred to another division. The message was that finding out bad things about drugs wasn't wanted.
It's really not right of me to comment on specific individuals or specific cases. But we really can't condone that sort of behavior. We don't discourage people from coming up with negative information or negative hypotheses about drugs. Our primary goal is protecting public health. If that's, in fact, what happened to this fellow, and it can be documented that something adverse happened to him specifically because of his opinion, I think that's really wrong, and nobody would condone this in the management of the center.
We've interviewed a former FDA scientist, Paul Stolley. When he came to the FDA, he was assigned the job of reviewing the adverse events for this drug [Lotronex] that was already on the market. He argued that the drug should be taken off the market because of a number of adverse events that were occurring. It was his opinion that it would be futile to manage the risk of the drug, because the symptoms or the ischemic colitis were so similar to the symptoms of irritable bowel syndrome, and that the manufacturer had yet to identify a way to predict in advance who would be likely to develop this adverse effect.
He says that, after the drug was taken off the market, he was left out of any future discussions about the drug, basically told, "We don't want you there. We don't want your participation. You're not welcome." So how do you--
I really can't comment on the specific individual, but let me try to help answer the question of how we handle dissent. You have to distinguish between someone having a disagreement, and the agency not taking the action that the dissenting individual thinks is the best course. Just because we didn't do precisely what a particular individual wants us to do at any one moment doesn't mean that we didn't pay attention to them, or that we didn't take the concerns seriously.
With Lotronex, we knew about this potential adverse event. From the beginning it was labeled. So the contention that we didn't take this concern seriously is simply not true. When we got the increase in reports of this kind of problem after the drug was on the market, we initially took several risk management steps to try to communicate the risk to the medical community and to patients. When that didn't work, we met with the company, and the decision was made to remove the drug from the market. So exactly what is meant by "we didn't take the situation seriously and didn't act on it" is really not clear to us.
I think Stolley's complaint is not so much that the FDA didn't take seriously what he was saying, because the FDA acted on what he was just asking for. But then afterwards, he felt like he was punished for having pushed that position, for having spoken up; that it was made known to him that he couldn't come to more meetings about this drug; that he was going to be left out of future discussions, because "We don't want to hear that."
I really don't think it's right for me to comment about individual cases. Sometimes there are people, though, whose behavior in meetings, or whose reaction to the agency not taking precisely the action they recommend, makes them more disruptive to the process than helpful. These sorts of things can happen with individuals. Those individuals may interpret what we're doing in a very different way than a group of neutral people might interpret what's happening.
But we do not condone punishing people because of their opinions. The problem that we have as managers of the Center for Drugs, as managers of the FDA, is that in the end, we have to make the hard decisions. We have to sit down in a room with calm, neutral people and look at all the data and figure out what's best for this country's public health. At a certain point, if there are individuals who are being disruptive, or who are saying, "You must do A-B-C-D," it's very difficult to have that kind of neutral, calm discussion.
At a certain point, we have to get together with the people that we feel are most experienced, have the broadest perspective and make the decisions. If that means certain people not being in a room, it's not because those people are being devalued. It's because, in the end, we have to make a decision that's calm and neutral, and considers the broad range of the risks and the benefits.
Could you just tell me what sort of pressure the FDA was under to get Lotronex back on the market?
The only pressure that we're under -- which we're always under when there are unusual treatments available -- is from the patients who are suffering from the condition and don't have any other way to mitigate their symptoms. In the case of [irritable bowel syndrome], there is a group of individuals in the country who are really incapacitated by this. Some people can't leave the house, because of the problem of having to go to the bathroom so frequently.
So with conditions like that, if there's a unique treatment, we feel responsible to do whatever we can to make that treatment available. That's the only pressure that we felt.
How [did you get] that pressure from the public or from people who have irritable bowel syndrome? How did that play out here at the FDA? What was happening? Were you getting e-mails? Were you getting phone calls?
We got e-mails, we got phone calls, we got letters, which is what happens whenever people are trying to influence folks in the agency.
What were they saying?
They were saying a broad range of things. We knew this from talking to clinicians. We really didn't need the letters and e-mails to know this -- that irritable bowel syndrome can be very debilitating. There aren't other acceptable therapies that work in a certain subset of people, so they were dependent on this drug. We heard stories about the people who hoarded the medicine so that they could use it after it was withdrawn, and keep up their functional status. ...
How would you describe morale here at the FDA among people who review new drug applications, and among the epidemiologists who analyze information about drugs after they go on the market?
I think it's reasonably good. This is not an easy place to work. There is pressure. We have a lot of work to do. There's a lot of focus on the work that we do. It's very important to the nation. It's not a good place for people who like a low-stress, easy job. But the people that work here are challenged, they're gratified, and, by and large, they enjoy what they're doing.
A 2002 survey went across the Department of Health and Human Services, looking at all the agencies, and comparing how people felt about their job. Ninety-two percent of CDER's employees thought that this was a good place to work. That level of response compares favorably with other parts of the FDA, and other parts of the Department of Health and Human Services.
Of course, you can find people who are unhappy. But you'll find that in any organization. We're trying, using some pretty innovative tools -- within the federal government, [which] has a somewhat cumbersome personnel system -- to make the work life of our employees better, both in terms of compensating them well for what they're doing, and giving them the flexibility to work at home and to work around their families. We think we've made a lot of progress with this. [Judging] by the more recent surveys that have been administered by the government and ourselves, we think we're doing pretty well.
I was sent a survey that was done at the end of 2000, a quality assurance program survey. A couple of the items that were mentioned in that survey that speak to morale were looking at why scientists at the FDA seemed to be leaving at a fairly high rate. The survey was meant to look at whether this was a problem. Among the reasons were that scientific reviews of drugs were sometimes edited by higher-up administrators, [and] that reviewers were asked to change their opinions to something more favorable to the drug company applying for approval.
In addition, a third of the people surveyed said they were uncomfortable expressing their differing scientific opinions; that, within the FDA, scientists were stigmatized if they recommended that a drug did not get approved. In other words, if they discovered bad news about a drug and tried to make others aware, that they would get in trouble. What do you have to say about the survey?
I'm not happy to hear that even one person in the Center for Drug Evaluation and Research would have opinions like that. But I don't think it's characteristic of how our scientists feel. For one, this is a three-year-old survey. For two, it was not done as scientifically as some of the more recent surveys that we've seen in the last few years. Again, I'm not surprised that there are some people expressing those views. But I just don't think that it's characteristic of how our scientists feel about their jobs.
I'm not denying that there are issues. We're working on the question of how the center handles scientific disputes. We want to make it real clear to people that there's a path they can follow, an appeal process, when there are unresolvable scientific disputes in their organizations.
We wouldn't condone anyone being asked to change their review. The way that we generally handle that is, if people have a review, they issue it. If there are others that disagree with it, they may issue a review that contradicts it, or that explains why the original document was invalid. I think the idea of forcing someone to change something that they've written or something that they've analyzed is highly unusual. I certainly wouldn't condone it. ...
What was unscientific about the quality assurance program survey from 2000?
I think the response rate was quite low to begin with. That's the thing that worries me most about those surveys. Again, I'm not denying that people have those views. I'm not trying to say that that was made up somehow. We're certainly not proud to hear that there's anybody who has opinions like that. But we just don't think it's typical, or even reflective of what most people think.
We've interviewed a biostatistician. The guy's name is Michael Elashoff. He worked at the FDA for five years. When he was handed a file, the first thing that his supervisor would say to him is, "We don't see any reason not to approve this." This was before anybody had looked at the clinical trials. His job was to evaluate the clinical trials. He felt like there was a bias coming from people who hadn't reviewed any of this information saying, "You should approve this."
Well, again, I'm very sorry to hear that anybody feels that way. But one thing you have to keep in mind is we've got several thousand people in the drug center, several hundreds of those are Ph.D.'s and M.D.-level scientists with 10 to 20 years of education. There are a lot of people with very strong opinions.
Whenever you're working on complex scientific questions, you're going to have disagreements. So we're not at all surprised to hear that there are disagreements. Some people are better than others at managing disagreements. So again, we're not surprised to hear that some supervisors may not have managed certain situations well. We do the best we can. We try to set examples for our managers to encourage people to express their viewpoints, and to respect them.
But it's just not at all surprising that there are disagreements, and we think that's normal and positive.
What does "FDA approved" mean? Does it mean that a drug is safe and effective, and that we shouldn't be concerned about taking it?
What it means when a drug is approved is that the risks are outweighed by the benefits for the indication and under the conditions that are in the label. That just means that if the drug is used in the right patients, in the right way, at the right dose, and isn't taken with drugs that are contraindicated, the benefits outweigh the risks. There's a lot that can go wrong that doesn't fit under that definition. But the benefits outweigh the risks for the indication and under the conditions of use that we specify when we approve drugs, and the public should feel very comfortable with the review process.
So we should feel comfortable with that phrase "FDA approved," that it really means something?
Absolutely. A tremendous amount of expertise has been built up here. We apply the most up-to-date scientific knowledge and tools to the data that we get from responsive companies. The public should feel very comfortable. But they also need to keep in mind that, one, the system isn't foolproof, and two, there's no such thing as a totally safe drug. All drugs have risks, even over-the-counter drugs that are taken very commonly, like acetaminophen and aspirin. A lot of people don't understand that, and they need to [understand] that all drugs, over-the-counter [and] prescription, have risks.
What would you do to make the system better, more reliable?
The plans that we're talking about, we think, are going to make a big difference over the next period of time. That is, one, increasing the resources that the center spends on adverse event reports. Two, looking at the systems by which we analyze these large data sets that we have, to try to make sure that we're applying the best tools to let us detect things quickly. Third, moving toward technological improvements, such as electronic reporting and automatic reporting. Lastly, we want to improve the communication between our post-marketing people and the people that approve the drugs originally, so that they can talk quickly when there's a problem, and we can get to resolution and action more quickly than we can now.