Just have the Robert Wood Johnson Foundation pay for a survey!
We all know how great surveys are, especially when you design one to "raise awareness" for the low, low price of $604,454!
For that price, you get:
(1) A nice glossy white paper with a concerned patient on the cover, looking away while she's being examined by a doctor
(2) Lots and lots of numbers and scientific-looking charts.
(3) An NORC press release (after all, it was laundered through the Associated Press-NORC Center for Public Affairs Research!)
(4) A republishing of your "key points" by a few business-minded online health care journals eager to demonstrate the relevance of using "quality measures" to determine health care "value."
(5) An opportunity to collaborate "on all aspects of the study!"
See how easy it is to make sure you get your major points out there to the decision makers! (Never mind that a quality physician means many different things to many different people - stop being a perfectionist, okay?)
Look, these guys did a survey with a 25% response rate that totaled a whopping 1002 people - or about 0.00000317 of the current US population! Heck, no bias there, right? Then they add a few "sampling weights" and calculate the survey response rate using the important-sounding American Association for Public Opinion Research's Method 3!
What's that? You're not familiar with Method 3? What kind of scientist ARE YOU????
Here. Let me help: If Method 1, 2, or 4 doesn't get you the desired number, you use Method 3! The survey response rate for Method 3 is calculated from the handy, dandy Response Rate Method Calculator where:
I = Complete Interviews
P = Partial Interviews
R = Refusal and Break Offs
NC = Non-contacts
O = Other
e = the estimated proportion of cases of unknown eligibility that are eligible! (In other words, a guess)
UH = Unknown Household
UO = Unknown Other
Using these definitions, the "Method 3" calculation for the survey response rate becomes:
I / ((I + P) + (R + NC + O) + e(UH + UO))
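For the curious, that RR3 formula can be sketched in a few lines of Python. The disposition counts below are made up purely for illustration (they are not the survey's actual numbers); the point is how the "guess" e moves the headline rate.

```python
# A minimal sketch of the AAPOR Response Rate 3 (RR3) formula quoted above.
# Variable names follow the definitions in the post; all input values here
# are hypothetical, not the survey's real disposition counts.

def rr3(I, P, R, NC, O, e, UH, UO):
    """Complete interviews divided by all (estimated) eligible cases."""
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

# Shrink the guess `e` and watch the response rate climb:
print(round(rr3(I=1002, P=50, R=900, NC=1500, O=100, e=0.5, UH=800, UO=200), 3))  # → 0.247
print(round(rr3(I=1002, P=50, R=900, NC=1500, O=100, e=0.2, UH=800, UO=200), 3))  # → 0.267
```

Notice that e multiplies only the unknown-eligibility cases, so the smaller your estimate, the fewer cases sit in the denominator.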
See? And that response rate, according to the white paper, after applying "sampling weights" had "an overall margin of error" of "+/- 4.0 percentage points, including the design effect resulting from the complex sample design."
Heck ya, I'm seeing accuracy there, aren't you?
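Where might that "+/- 4.0" come from? The textbook margin of error is z * sqrt(deff) * sqrt(p(1-p)/n) at 95% confidence, with p = 0.5 as the worst case. The design effect value below is an assumption on my part (the white paper only says one was included), but it shows how a complex sample design inflates the simple-random-sample figure:

```python
# Back-of-the-envelope check on the paper's "+/- 4.0 percentage points."
# MOE = z * sqrt(deff) * sqrt(p * (1 - p) / n); the deff of 1.65 is a guess,
# not a number published in the white paper.
import math

def margin_of_error(n, deff=1.0, p=0.5, z=1.96):
    return z * math.sqrt(deff) * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(n=1002), 1))             # simple random sample → 3.1
print(round(100 * margin_of_error(n=1002, deff=1.65), 1))  # with assumed design effect → 4.0
```

In other words, n = 1002 alone buys you about plus or minus 3.1 points; the rest of the 4.0 is the design effect doing its work.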
These days, it's really important that lots of people see these data so policy makers (who have about as much scientific wattage as an LED) can turn to them to create controlling policy and regulations that benefit those who make - you got it - the policy and regulations! Especially in US health care. That's because doctors are getting a bit unruly and need to understand why they must fall in line on all this physician quality measurement stuff. Perhaps one of the introductory paragraphs of the published white paper says it best:
"Major investments are being made in health care systems like Accountable Care Organizations and in tools like Physician Compare. Similarly, health insurers and employers are exploring new benefits designs that incentivize consumers to select providers and hospitals that provide the highest-quality care while reducing costs through value-based provider networks and tiered health plans."
So there you have it!
It's important that we all understand just how critical these surveys paid for by political organizations will be to health care in the years ahead. Spin, you see, is everything. Thank goodness the Robert Wood Johnson Foundation (whose CEO, by the way, has also partnered on other publications about patient safety and medical professionalism with members of the American Board of Internal Medicine and National Quality Forum) can show us the way!
I feel so reassured that this is the caliber of science being used to shape US health care now.
What could go wrong?