
TIP OF THE ICEBERG: Independent reviewers of clinical trial data have access to just a minuscule percentage of the actual information. (Image: Pushart)

May 2012 Features

Data Diving

What lies untapped beneath the surface of published clinical trial analyses could rock the world of independent review.

By Kerry Grens | May 1, 2012

A few weeks before Christmas 2009, the world was in the grip of a flu pandemic. More than 10,000 people had died, and roughly half a million people had been hospitalized worldwide; tens of millions had been infected. In the United States, millions of doses of Tamiflu, an antiviral medication, had been released from national stockpiles. "December 2009 was a point in the H1N1 outbreak where there was a lot of talk about a second or third wave of this virus coming back and being more deadly," says Peter Doshi, now a postdoctoral researcher at Johns Hopkins University and a member of an independent team of researchers tasked with analyzing Tamiflu clinical trials. "Anxiety and concern were really peaking."

So it was no small blow when, that same month, Doshi and his colleagues released their assessment of Tamiflu showing that there was not enough evidence to merit a claim that the drug reduced the complications of influenza.1 Their report had been commissioned by the Cochrane Collaboration, which publishes independent reviews on health-care issues to aid providers, patients, and policy makers. The findings, published in the British Medical Journal, made headlines around the world.

Doshi's group arrived at this conclusion because they'd run into a lack of available data. Some of the widespread belief that Tamiflu could blunt pneumonia and other dangerous health consequences of flu was based on a meta-analysis of several clinical trials whose results had never been published. Because the data could not stand up to independent scrutiny by the researchers, these trials were tossed out of the Cochrane review; other published trials were disqualified because of possible bias or lack of information.

Just as the 2009 BMJ paper was to be published, Roche, the maker of Tamiflu, opted to do something unorthodox: the company agreed to hand over full clinical study reports of 10 trials, eight of which had not been published, so that independent researchers could do a proper analysis. Within a few weeks after the publication of its review, the Cochrane team was downloading thousands of pages of study files.


Clinical study reports are massive compilations of trial documents used by regulators to make approval decisions. Doshi says he had never heard of, let alone worked with, a clinical study report. "This is how in the dark most researchers are on the forms of data there are. Most people think if you want to know what happened in a trial, you look in the New England Journal of Medicine or JAMA."

And in fact, that is how many meta-analyses or systematic reviews of drugs are done. As publications amass, independent analysts gather up the results and publish their own findings. At times they might include unpublished results offered by the trial investigators, from the US Food and Drug Administration's website, or from conference abstracts or other grey literature, but for the most part, they rely simply on publications in peer-reviewed journals. Such reviews are valuable to clinicians and health agencies for recommending treatment. But as several recent studies illustrate, they can be grossly limited and misleading.

Doshi and his colleagues began poring over the reams of information from Roche, and realized that not only had their own previous reviews of Tamiflu relied on an extremely condensed fraction of the information, but that what was missing was actually important. For instance, they found that there was no standard definition of pneumonia, says Tom Jefferson of the Cochrane Collaboration and lead author of the 2009 review. And among people who had been infected with influenza, it appeared that the placebo and treatment groups were not on equal footing. "We realized that all of these [analyses] led to misleading results because the treatment groups [were] not comparable for that subpopulation," Doshi says.

In January of this year, the group published its latest review of Tamiflu, which included the unpublished evidence obtained from Roche in 2009.2 The authors concluded that Tamiflu falls short of claims: not just that it ameliorates flu complications, but also that the drug reduces the transmission of influenza. In an e-mail sent to The Scientist, Roche says the Cochrane review was not limited to people who had laboratory-confirmed flu, but encompassed people with influenza-like symptoms, thereby possibly underestimating Tamiflu's efficacy. "Independent and eminent scientists reviewed data from the Tamiflu trials, the inception and design of the studies which produced the data, and the assumptions made," the company states. "Roche stands behind the robustness and integrity of our data supporting the efficacy and safety of Tamiflu."

Jefferson is not convinced, and the experience has made him rethink his approach to systematic review, the Cochrane method of evaluating drugs. For 20 years, he has relied on medical journals for evidence, but now he's aware of an entire world of data that never sees the light of publication. "I have an evidence crisis," he says. "I'm not sure what to make of what I see in journals." He offers an example: one publication of a Tamiflu trial was seven pages long. The corresponding clinical study report was 8,545 pages.



"It just blows the mind," says Doshi. "A trial's an extraordinarily complex process, and what we see in the published literature is an extreme synthesis of what goes on." The big question is: What does that mean for the validity of independent reviews?


Unpublished data: Is it all bad news?

Clinical study reports like those provided by Roche are the most comprehensive descriptions of trials' methodology and results, says Doshi. They include details that might not make it into a published paper, such as the composition of the placebo used, the original protocol and any deviations from it, and descriptions of all the measures that were collected.

But even clinical study reports include some level of synthesis. At the finest level of resolution are the raw, unabridged, patient-level data. Getting access to either set of results, outside of being trial sponsors or drug regulators, is a rarity. Robert Gibbons, the director of the Center for Health Statistics at the University of Chicago, had never seen a reanalysis of raw data by an independent team until a few years ago, when he himself was staring at the full results from Eli Lilly's clinical trials of the blockbuster antidepressant Prozac.

For some time, Gibbons had questioned the belief that antidepressants are linked to an increased risk of suicide. Previous meta-analyses by independent reviewers on suicidal thoughts and behaviors among people taking the drugs had for the most part relied on summary data, Gibbons says. At a meeting at the Institute of Medicine a few years ago, Gibbons spoke with a senior investigator at Eli Lilly and brought up the idea of doing a full workup of the original data.


Much to his surprise, shortly after the meeting Gibbons was in possession of the numbers. "We haven't seen anybody get these kinds of data," he says. He decided to push his luck. Gibbons had served as an expert witness for Wyeth, and he approached attorneys for the pharmaceutical company to ask if they would also share data from trials of the company's antidepressant Effexor. "They got back to me, and they were agreeable to provide all their adult data," he recalls.

Gibbons and his colleagues went to work reanalyzing the data. "Everything was exquisitely well documented," he says. The raw data allowed them to take into account each person's depression severity and to determine individual outcomes rather than averages. Their results, published earlier this year, ended up bucking much of the published literature on antidepressants.3,4 For one, they found no link between Prozac and suicide risk among children and young adults. And secondly, they found that Prozac appeared to be more effective in youth, and antidepressants far less efficacious in the elderly, than previously thought. "I think these kinds of analyses and the discrepancies in the findings are good reason to be concerned about our reliance on traditional meta-analyses," Gibbons says.

Although some of his results reflect negatively on the drugs, others are clearly very positive. There's been an understanding for some time that publication bias is a real occurrence, and that it often favors the drug. Trials that show no efficacy are less likely to get into print than trials that demonstrate a positive effect.5 So when Lisa Bero at the University of California, San Francisco, decided to redo 42 published meta-analyses of drugs and include unpublished, but available, data, she suspected the drugs would fare poorly. "But that's not what we found," she says.

She and her colleagues analyzed nine drugs using unpublished data from the FDA. For any approved drug, the agency makes available a summary of data used to vet the medication. When Bero's group added these data to the meta-analyses, they found that all but three turned out to have a different result. Nineteen of the redone analyses showed a drug to be more efficacious, while 19 found a drug to be less efficacious.6 The one harm analysis that was reanalyzed showed more harm from the drug than had been reported. "We showed data that make a difference are not being published," Bero says.

While the FDA's summaries of trial data are available to any researcher, they're not necessarily easy to work with, and often researchers don't include them in meta-analyses. "I think the FDA reports are an extremely valuable data source, but they're not the full application [for drug approval], and they have redacted parts," Bero says. She's found that potentially important elements, such as patient characteristics or conflict-of-interest information, have been blacked out. The quality of the PDFs can also be poor, with crooked pages or light print; and sometimes there is no index for a document hundreds of pages long.

Such data files are quite different from the quality of the documents Gibbons was able to work with. While he urges independent researchers to try to access raw data, he notes that getting all the data is not a trivial problem.

Why aren't the data shared?

Although summaries of clinical trials are available from the FDA, unabridged clinical study reports or the raw data are hard to come by. Keri McGrath Happe, the communications manager at Lilly Bio-Medicines, wrote in an e-mail to The Scientist that the company has a committee that reviews requests to obtain unpublished clinical trial results. "I can tell you that it is not common to have a request filled for raw data," she says. Granting access to raw data isn't as easy as opening file cabinets and handing over documents: "A team has to go through each piece of data to find what specific data [are] needed to fulfill the request."

Aside from being an administrative burden, handing over clinical reports or raw data is considered hazardous to the integrity of a drug's worth. "The simple truth is that drug discovery is enormously expensive," says Jeff Francer, the assistant general counsel of the Pharmaceutical Research and Manufacturers of America (PhRMA). "In order for companies to engage in the immensely capital-intensive work to develop a medicine, there has to be some protection of the intellectual property. And the intellectual property is the trial data."

The FDA tends to concur. The agency receives much more information about a drug than it ever releases. According to Patricia El-Hinnawy, an FDA public affairs officer, "as a matter of law and regulation, patient-level clinical trial data has been historically regarded as confidential commercial and/or trade secret information."


The other route to obtaining unpublished results is through a Freedom of Information Act (FOIA) request, but just as with putting in a request to a company, there is no guarantee that the information will be released. Plus, FOIA requests take a long time, says Michelle Mello, a professor of law and public health at the Harvard School of Public Health. "In a world where we're concerned about being able to rapidly assess certain safety signals, this is not a route to producing timely information."

Gibbons says his studies on antidepressants make a strong case for greater data sharing. The other argument, says Sidney Wolfe, director of the health research group at the advocacy organization Public Citizen, is that it's "a moral and ethical thing too. People who are participating in clinical trials, aside from whatever possible benefit will happen to them . . . are doing it for the benefit of humanity. And if there is some lid put on some aspects of those trials, that is frustrating one important goal of research, which is sharing information."

The question of whether results from human experiments are private information or a public good has been debated for some time. In 2010, the European Medicines Agency (EMA), the European Union's equivalent of the FDA, finally made a decision. "We had resolved that clinical data is not commercial confidential," says Hans-Georg Eichler, the EMA's senior medical officer. "It doesn't belong to a company, it belongs to the patient who was in the trial."


Efforts to increase data sharing

The EMA's new policy is that if someone requests data from clinical trials of an approved medication, the agency will provide it. Doshi's group took advantage of this to obtain about 25,000 pages of information on Tamiflu, which they used for their 2012 Cochrane update.2

Eichler says there have only been a handful of requests to date, too few to know how the policy is working out. Fulfilling such requests can be cumbersome, he says; it takes time to carefully review the data and make sure patients cannot be identified. Eichler adds that in the future he'd like to see a system into which all clinical trial results are entered and made accessible to other researchers.

Under the FDA Amendments Act of 2007, the agency requires trial sponsors to post the summary results of registered trials on clinicaltrials.gov within one year of completing the trial. But few comply. A recent survey of the website found that of 738 trials that should have fallen within the mandate, just 163 had reported their results.7 In a statement sent to The Scientist, Congressman Henry Waxman (D-CA) says, "I was alarmed by the recent studies showing that compliance with this law has been sorely lacking and that industry is not reporting the required study results."
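
For readers who want to spot-check whether a particular registered trial has posted summary results, a minimal Python sketch along the following lines can query the registry directly. It assumes the third-party requests library and ClinicalTrials.gov's current v2 REST API with its hasResults field (neither of which existed when this article ran), and the NCT identifiers shown are placeholders; verify the endpoint and field names against the registry's API documentation before relying on it.

    # Minimal sketch: check whether registered trials report posted results on
    # ClinicalTrials.gov. The v2 endpoint and the "hasResults" field are
    # assumptions; confirm them against the current API documentation.
    import requests

    API_BASE = "https://clinicaltrials.gov/api/v2/studies"  # assumed endpoint

    def has_posted_results(nct_id: str) -> bool:
        """Return True if the registry record for nct_id indicates posted results."""
        resp = requests.get(f"{API_BASE}/{nct_id}", timeout=30)
        resp.raise_for_status()
        return bool(resp.json().get("hasResults", False))

    if __name__ == "__main__":
        for nct_id in ["NCT00000000", "NCT00000001"]:  # placeholder identifiers
            try:
                status = "results posted" if has_posted_results(nct_id) else "no results posted"
            except requests.HTTPError as err:
                status = f"lookup failed (HTTP {err.response.status_code})"
            print(f"{nct_id}: {status}")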


While companies are certainly part of the problem in this case, they were actually more likely to report results than were researchers whose clinical trials had no industry backing, but were funded by foundation or government money. "I think it's so important to acknowledge that is a huge problem throughout the clinical research enterprise," says Kenneth Getz, a professor at Tufts Center for the Study of Drug Development. And industry has made some moves to be more proactive about sharing data.

Last year, the medical device company Medtronic agreed to share all of its original data regarding Infuse, a bone growth product that had been facing considerable skepticism about its efficacy. Yale professor Harlan Krumholz approached the company with a challenge: if Medtronic thinks the Infuse data can stand up to external scrutiny, then let an external group have a look. The company agreed, and a Yale University group serves as the middleman between the company and the independent reviewers.

Joseph Ross, a Yale Medical School professor who's involved in the project, says two review teams have been selected, and they should have results by the summer. Medtronic is paying $2.5 million for the external reviews, a price Ross says is small compared to what gets invested in, and ultimately earned from, a successful drug. He says it's the first experiment of its kind. "In my most optimistic moments I think it has to be the way of the future. I don't think the public realized that this data isn't available for everybody to understand," says Ross. "In my most pessimistic moments, this only happens one other time: when a company gets in hot water."

Journals are also lighting a fire under trial sponsors to provide their results to independent reviewers more quickly and completely. In 2005, the International Committee of Medical Journal Editors initiated a requirement that trials had to be registered, say on clinicaltrials.gov, in order to be published. "That sent shock waves," says Elizabeth Loder, an editor at the British Medical Journal.

Since then, Loder's own publication has been digging into the effects of unpublished data. She says the BMJ asks independent reviewers and meta-analysts to what extent they tried to obtain unpublished results for their studies. And this January, the journal published a special issue of reports dedicated to missing clinical trial data.8 "I suppose you could say that publishing the original [2009] report on Tamiflu, we were newly sensitized to the dangers," says Loder. "I think we wanted to keep everybody focused on that problem."

For a special issue next year, Loder says BMJ is going to look at what exactly is the harm of having used incomplete data sets for so many meta-analyses and systematic reviews over the years. "Even though going forward new requirements for posting study results will probably improve the situation, we remain concerned about previously done studies that are unpublished and unavailable, and how that might affect the existing evidence base."

While Getz agrees that more data could improve meta-analyses, he cautions against "data dumping," completely opening the floodgates to unpublished results. "I think just the idea of making more information available misses the point. You reach a level of data overload that makes it very hard to draw meaningful and reasonable conclusions, unless you're an expert with a lot of time."

But the Cochrane Collaboration's Jefferson says bring it on. While the clinical study reports he received numbered in the thousands of pages, they were still incomplete. Roche says it provided as much as the researchers needed to answer their study questions. But accepting that response would require a trust that is clearly eroded. "We hold in the region of 30,000 pages. That's not a lot," Jefferson says. "We don't know what the total is. We're looking at the tip of the iceberg and we don't know what's below the waterline."


    References

1. T. Jefferson et al., "Neuraminidase inhibitors for preventing and treating influenza in healthy adults: systematic review and meta-analysis," BMJ, 339:b5106, 2009.
2. T. Jefferson et al., "Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children," Cochrane Database of Systematic Reviews, Issue 1, 2012.
3. R.D. Gibbons et al., "Suicidal thoughts and behavior with antidepressant treatment: reanalysis of the randomized placebo-controlled studies of fluoxetine and venlafaxine," Archives of General Psychiatry, online February 6, 2012.
4. R.D. Gibbons et al., "Benefits from antidepressants: synthesis of 6-week patient-level outcomes from double-blind placebo-controlled randomized trials of fluoxetine and venlafaxine," Archives of General Psychiatry, online March 5, 2012.
5. K. Lee et al., "Publication of clinical trials supporting successful new drug applications: a literature analysis," PLoS Medicine, September 2008.
6. B. Hart et al., "Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses," BMJ, 344:d7202, 2012.
7. A.P. Prayle et al., "Compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov: cross sectional study," BMJ, 344:d7373, 2012.
8. R. Lehman and E. Loder, "Missing clinical trial data," BMJ, 344:d8158, 2012.
