Wednesday, February 18, 2009

My Reservations About Comparative Effectiveness Research

Do we need comparative effectiveness research? Lots and lots of my intelligent colleagues think so. But as usual, I am more skeptical.

That's because it will cost a bundle. $1.1 billion has been earmarked for this endeavor in the new Stimulus Bill.

And that's mere seed money.

Think about it: How many issues in medicine need "effectiveness" comparisons? Hundreds? Thousands? Tens of thousands? How much will each project cost? How many teams of "experts" will have to be assembled to tell us whether enteric-coated aspirin is as effective as plain aspirin for the treatment of arthritis? Oh sure, scoff at the notion, but with bureaucracies, there's virtually no limit to how detailed we can go with this.

And we might not even need it.

That's because comparative effectiveness research is being done, free of charge, right now, right on your computer, via the internet.

It's called t-r-a-n-s-p-a-r-e-n-c-y. Show us the technology, show us the price, show us the prospective, randomized trials, tell us what it'll cost, and let us decide. Plain and simple. Isn't that where we're already headed? Is there a better vetting body than the world? I mean, look at the drug costs now available on the internet. Look at the costs of procedures that hospitals are beginning to publish. More and more, this will be the norm. Why? Because you and I are having to pay a larger and larger proportion of our health care bill right now.

We'll demand it.

Or are we to be fear-mongered into believing that evil pharma and device companies will surely warp our tiny minds with their marketing schemes and exorbitant prices, so we have no choice but to accept comparative effectiveness research as our ultimate fiscal and medical savior?


One only has to recall the exposure of drug and device companies' marketing tactics in the realm of direct-to-consumer advertising (never mind that Congress and the FDA, beholden as they are to drug company funds, never read the public's tea leaves - or the Stimulus Bills - d'oh!). Or look to the remarkable migration of patients from pricey Vytorin to generic simvastatin after the ENHANCE study failed to show an advantage for the combination medication. Was comparative effectiveness research responsible for these epiphanies?


Finally, there are some real-life research issues with comparative effectiveness research that are concerning. First: "effectiveness research" relies on the history of competing technologies. New technologies will almost always be at a disadvantage to older technologies because they do not have a history of experience with which to compare. If doctors require a learning curve in the application of any new medical advance, might there be a bias to pull new technologies before they're understood? Perhaps.

Second, the term "effectiveness" is bothersome because it implies there must be one correct answer. Take, for instance, a therapy that prolongs life "effectively" but is expensive (many of the new cancer drugs come to mind). If cost-effectiveness is the goal, then using none of the drug and letting the patient succumb to cancer might be the most "effective" option for saving costs to our health care system. But if longevity is the primary effectiveness goal, then the best therapy might be incredibly costly in a younger patient. Which effectiveness goal will be chosen for each of the therapies tested? Cost or clinical outcome? If a "blended" goal is desired, who will decide how much of which goal will ultimately be chosen?

Finally, what about confounding factors? How can any of the millions of permutations of co-existing conditions be weighed in effectiveness research? Take coronary stents, for instance. Whereas a drug-eluting stent might be the perfect choice for limiting restenosis in a particular clinical situation, will comparative effectiveness research limit a cardiologist's ability to place a bare metal stent instead because he knows the patient will be undergoing hip replacement in four weeks? Can we really expect an algorithm mandated by researchers and bureaucrats to account for these situations? If so, how extensive will all the exclusion criteria become?

Suddenly, it seems to me, the crystal clear goals of Comparative Effectiveness Research become very clouded.

Instead, I think this $1.1 billion earmark for comparative effectiveness research is really about stimulating research budgets for the "Chosen Fifteen" connected research politicos rather than helping doctors know how best to treat their patients. No one study or group of individuals can apply such studies to the individual patient - I don't care how much money we dump into their research. Clinical guidelines have been careful not to supersede clinical judgment, and comparative effectiveness research shouldn't either. To do so invites liability claims and the potential for untoward health care delivery in the name of government mandates that might ultimately threaten the doctor-patient relationship.

Then what have we accomplished for our $1.1 billion?



Keith Sarpolis said...


What we need is fair and impartial research into new technology as to whether it offers significant improvement over older techniques or medications. The ALLHAT study was a clear example, where it was found that good old HCTZ is the most effective anti-hypertensive drug. Unfortunately, HCTZ lacks a flotilla of drug reps plying our offices, touting this study and suggesting we should use it more often, and such a study would never have been done by private industry.

You certainly would not start at the level of low-cost aspirin in terms of prioritizing studies. You would start with the biggies, like: do these Da Vinci Robotic systems really improve outcome and recovery from surgery? Is physical therapy for back pain really that helpful? Are epidural injections for back pain effective? Do we really need to do yearly stress testing on people with diagnosed CAD? Is that $100,000-per-year chemotherapeutic agent for breast cancer that gives you 2 months additional time to suffer really worth it?

As the medical industry thinks up things to make, it seems somewhat removed from the cost issue, since once companies gain credibility (FDA approval) that a treatment is effective, they can charge whatever they wish to cover their costs. This is very unlike any other industry, which has to work the cost of its product into the design and determine whether it makes sense to produce the product at all. An example is the upcoming electric cars that GM is talking about making. Sounds like a great idea, but the trick is whether GM will be able to produce them for less than 30 grand, because otherwise there won't be many folks who can afford them. This thought process does not apply to medicine, since the costs are buried in insurance premiums and we are willing to pay whatever it costs for our health.

So we need an independent arbiter, especially of these expensive new technologies, to decide as a society if it is worth paying for them. Do you want to depend on Da Vinci to do the research that shows the efficacy of their equipment? I personally would like a more independent arbitrator.

R. W. Donnell said...

Superb, Dr. Wes.

BTW, ALLHAT is an example of precisely what can be wrong with CER, and with guvment sponsored research.

There's been much public obfuscation on this topic, which I plan to address in a post of my own when time permits.

DrWes said...

Keith -

What makes the "Chosen Fifteen" more capable of evaluating the "effectiveness" of a therapy than those of us who are equally as capable? Why must we relinquish our independent thought process to some central body? Are these appointees not placed there based on a political agenda? Is that bias less influential than the marketing bias interjected by the pharma and device companies?

Who knows.

My point is this: I see comparative effectiveness research as nothing more than guidelines on steroids. Big money steroids. One only needs to look to the guidelines for antithrombotic therapies, which extend over 24 CHAPTERS and over 968 pages (see Chest, June 2008), to see how crazy this whole "effectiveness" idea can be taken.

Are such guidelines for therapy even read any longer?

I suspect only in court.

Dr. Val said...

Hmmm... I shared my concerns about Comparative Effectiveness here:

Very different reasoning, same conclusion: this is not black and white, but gray.


james gaulte said...

Dr. Wes,

A great post. My concerns for this latest panacea continue to grow. I, too, wonder where these "independent" analysts will come from. I have been posting comments as well (Retired Doc's thoughts), but your essay says it better. Thanks.

James Gaulte