Tuesday, July 05, 2011

Should US Government Research Grants Support Meta-Analyses?

It was another headline-busting study this week: "Pfizer Drug Tied to Heart Risks" - a provocative title no doubt fed to the media by the publisher, the Canadian Medical Association Journal. The study was yet another meta-analysis that culled the world's literature in an attempt to determine whether a trend could be found implicating Chantix as a causative agent for an increased incidence of heart disease in smokers.

On its surface, the study sounds authoritative, analyzing "14 double-blind randomized controlled trials involving 8216 participants" ranging in duration from "7 to 52 weeks."

Never mind that 57% (25) of the adverse events came from a single study, and that every one of the 14 studies had an odds ratio whose confidence interval crossed the unity line - that is, not one was individually statistically significant.
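The point about the unity line is worth making concrete: under inverse-variance fixed-effect pooling, a combined odds ratio can reach nominal significance even when every contributing trial's confidence interval crosses 1. A minimal sketch in Python, using Woolf's log-odds-ratio method and purely hypothetical counts (none of these numbers come from the actual Chantix trials):

```python
import math

# Hypothetical 2x2 tables (a = treated events, b = treated non-events,
# c = control events, d = control non-events). Illustrative only --
# NOT data from the actual trials.
studies = [(8, 392, 4, 396)] * 5  # five identical small trials

def log_or_and_se(a, b, c, d):
    """Log odds ratio and its standard error (Woolf's method)."""
    lor = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return lor, se

# Each individual study's 95% CI crosses OR = 1 (log OR = 0)...
for a, b, c, d in studies:
    lor, se = log_or_and_se(a, b, c, d)
    assert lor - 1.96 * se < 0 < lor + 1.96 * se  # not individually significant

# ...but the inverse-variance pooled estimate is nominally significant.
pairs = [log_or_and_se(*s) for s in studies]
weights = [1 / se**2 for _, se in pairs]
pooled_lor = sum(w * lor for (lor, _), w in zip(pairs, weights)) / sum(weights)
pooled_se = 1 / math.sqrt(sum(weights))
pooled_lo = math.exp(pooled_lor - 1.96 * pooled_se)
pooled_hi = math.exp(pooled_lor + 1.96 * pooled_se)
print(f"pooled OR {math.exp(pooled_lor):.2f} "
      f"(95% CI {pooled_lo:.2f}-{pooled_hi:.2f})")
```

That pooled interval excludes 1 even though no single trial's does - which is precisely why the weighting (here, 57% from one study) and the quality of the inputs matter so much to the conclusion.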

Despite this, the lead author concluded:
Despite the limitations of our analysis, our findings have potential regulatory and clinical implications.

Sorry, this is not correct. There are no clinical implications of this trial. Like all meta-analyses, it simply cannot determine cause and effect. (Note to the mainstream media: are you folks listening?!?)

That being said, there's another concern I have with studies like this: our deficit and how we're spending our precious research dollars.

This Chantix study's lead author, Sonal Singh, MD MPH, was supported by a grant from the National Center for Research Resources (NCRR), a component of the US National Institutes of Health (NIH) and the NIH Roadmap for Medical Research (grant number 1KL2RR025006-03). This study came from that grant.

But so did meta-analyses on thiazolidinediones and inhaled corticosteroids and their possible risks for leg fractures or heart attacks.

With this grant, here's all you need: pick a drug, any drug. Then go to a computer, do literature searches of other people's work on a particular drug and side effect, then try to find a relationship to something.

That's it. No original ideas. No original hypotheses to test. And all funded by the American taxpayer.

Is this the "roadmap" for medical research that we want for our public scientific dollars?

Certainly we want our researchers to have the freedom to choose an area of research that they feel is important, but when should we insist on some form of accountability for the quality and cost of that research?


Addendum: It should be noted that at the time of the above publication, the authors cited the FDA's announcement that a warning would be added to the product label for Chantix (varenicline) on the basis of the same single 700-patient randomized controlled trial from which 57% of the meta-analysis's weight was drawn - even though the FDA admitted that "the trial was not designed to have statistical power to detect differences between the arms on the safety endpoints."

Disclaimer: I have no financial conflicts of interest to report regarding Chantix or its manufacturer.


Tim said...

It's really sad that so much change is happening in medical care that has no basis in fact. Most of these studies to which you refer only serve to provide fodder for the trial lawyers and the bureaucrats! Just look at the commercials for law firms taking on the pharmaceutical industry because, "you may be entitled to compensation." The list of drugs of concern is so long that it's difficult to get them all into one commercial! They may have to resort to Saturday morning infomercials. Real statisticians cringe at many of the conclusions drawn from medical studies, because the statistics aren't done by statisticians, and their conclusions are erroneous. Epidemiologists, evidently, are particularly egregious in this regard.
It's hard to know what to believe. We get hit twice by this problem. First, we have to deal with the medical implications. Then, we are forced to comply with government mandates based on possibly erroneous data!
In the OR, we are being forced to limit flash sterilization, a shortened cycle of steam sterilization used, for example, when you drop an instrument that you really need, and there aren't others readily available. This process is about as old as steam sterilization itself. Why? Because of the possibility that it might lead to increased risk of infection. Larger numbers of instruments are supposed to eliminate the need to flash. This is not based on even false data. It is based on no data at all! So, we are being asked to do something which is not cost effective when there is no suggestion in the medical literature that the problem even exists, based on someone's (with authority) "concern" that there "might" be an increased risk of infection with flash sterilization!
Where does the madness end? Soon the housekeeping supervisor or the mayor will be questioning my treatment plan! With over thirty years of practice under my belt, it is a blessing that I won't have that many more.

Anonymous said...

EBM is great...but so often the limitations of EBM are ignored. When government "guidelines" start being based on so-called EBM that is clearly incomplete and lacks the power to support the recommendations being proposed, it will make EBM into a bad word. To the public, it will look a whole lot like a shoddy attempt to justify rationing, and in many cases this is how it will be used (abused).

Michel Accad said...

The great Alvan Feinstein wrote a paper in 1995 entitled "Meta-analysis: statistical alchemy for the 21st century." So it is not surprising that US government grants have and will continue to support such "research." It's as good as gold! Great post, thanks.