If there were one place research should be easy to perform, it would be on a disease that's incredibly common.
Further, if there are two generally accepted strategies for treating symptomatic patients with that ailment – one invasive and the other not – it should be pretty easy to compare which is best, right?
Maybe. Maybe not.
Welcome to the real-life world of comparative effectiveness research: the politically and pundit-popular means of deciding which treatment approach doctors should use and, based on the results of these studies, which approach our government will fund.
But first, before starting the study, decide which way you’re leaning. Call that your “hypothesis”. Make sure your desired approach is the invasive one (this is very important) - that way, patients feel that at least you are trying to do something.
Good.
Now, be sure there are plenty of articles in the literature supporting your approach, but also discussing the substantial risks that might occur if that option is used and an accident happens.
Then have plenty of articles in the literature that talk about the other, non-invasive but potentially dangerous, treatment option.
Then go before your Institutional Review Board (IRB). Show them how cool it is and convince them this is the first prospective randomized trial comparing the two forms of treatment for this incredibly common disorder. Have a 15-page all-inclusive consent form for the patient describing the good, the bad, and the potentially ugly. No, make it 17 pages just to be sure. (They'll like that.) Get the IRB's blessing.
Then announce the trial to your colleagues and patients.
Then wait for the patient referrals from your colleagues who do not have the same vested interest in the trial as you, or wait for the Perfect Patient to enter your exam room.
Spend an hour with them telling them about the trial.
Then tell them that you really don't know which treatment option is best (and that's why you're doing the study), even though they came to you hoping you'd tell them exactly that.
Look at their confused faces.
Offer plenty of time for them to decide if they want to be in the trial or not.
When they don't call back, call them again to remind them about the importance of the trial. Talk to them for two more hours to answer their questions. Try to stay neutral and let them decide. Hear them looking up things on the internet. Clarify the purpose of the trial for them. Sense the pressure they feel.
Then watch them decline simply because they can’t decide whether to be in the trial or not.
Lather. Rinse. Repeat.
* * *
Sound familiar to others trying to do this work?
Now look at which topic ranked #1 among the stand-alone topics in the first quartile of the Institute of Medicine's Top 100 priorities for comparative effectiveness research.
Yep, atrial fibrillation.
Here's the sad reality: the first comprehensive NIH- and industry-sponsored comparative effectiveness trial studying the best approach to treating atrial fibrillation, the CABANA Trial, is having one hell of a time enrolling subjects.
No one knows why.
But I suspect there are several reasons:
1) CER is complicated. Perhaps too much is being asked of these trials and their investigating centers since not only are clinical endpoints being studied, but costs as well.
2) These trials cost more to perform than they are funded. People can only work so long out of the goodness of their hearts until they must turn to some income-producing endeavor to justify their existence. In our current cost-conscious era, resources are limited for any complex, underfunded study.
3) Patients are better informed about their treatment options than ever before. This affects recruitment of subjects in several ways: (a) patients may hold preconceived biases favoring one therapy over the other before they are even invited into a trial, and (b) a more educated subject population is warier of the risks of any proposed therapy.
The real question becomes: can we really expect to put all our health care reform financial eggs in the basket of comparative effectiveness research's unrealized promise when it's so damn hard to enroll patients in these trials?
-Wes
Just because it is hard doesn't mean it should not be done.
Unfortunately, the financial motivations of medicine run counter to the performance of these studies, since many of these treatments may show little benefit over standard medical treatment. If some of these expensive procedures are shown to be equivalent or inferior to other treatments, where does that leave cardiologists who are still reeling from the cuts to non-invasive cardiac testing and a significant drop in the frequency of, and indications for, cardiac catheterization? I can understand there might not be too much enthusiasm for conducting these studies.
I also see part of the problem as being the fact that these studies were not performed and required by the FDA in the first place. One could argue that the FDA gave up its supervisory role in the past by allowing patients to be subjected to medical procedures and treatments with no proven efficacy over older and potentially less risky equivalents. There will be a need to vet new treatments more thoroughly, comparing them not only to placebo but to existing available treatments as well. And there is a need to go back and substantiate the benefit of those treatments that have not undergone proper comparative testing.