I never understand this stuff. Why pick on just him? Oh sure, it was a big no-no to leak embargoed trial results early, but others hinted at tidbits from this big trial, too. This is not the first time scientific meetings have had to deal with leaks in the age of the internet. Why single him out? What about this from the Wall Street Journal Health Blog at 25 Mar 2007 @ 11:38 PM:
Interventional cardiologists the Health Blog spoke with – including Leon’s colleague Gregg Stone of Columbia University, who ran major Boston Scientific and Abbott Labs stent studies; Donald S. Baim, the chief scientist at Boston Scientific; and Barry F. Uretsky, who co-chairs part of the confab here – echoed this analysis. Hip replacements don’t decrease deaths either, Stone pointed out, but they’re still worth doing in many patients to improve quality of life.

Note that these comments were also made before the COURAGE trial was released. Not that I really care. But should they be reprimanded, too? They were big dogs in this trial, weren't they? Or was the reprimand less about Marty Leon and more about the New England Journal of Medicine?
Maybe the real reason Martin Leon, MD was singled out was something else entirely: the Journal's impact factor.
Dr. Leon is well known in Cardiology circles. Dr. Leon knows people and industry. He is likeable. When Dr. Leon speaks, people listen. And people write articles. And articles that reference the New England Journal of Medicine are what are needed to increase the Journal's impact factor.
(British Medical Journal - 3/07) The impact factor has become the global currency for a journal's scientific standing and, by implication, of the papers it publishes. Available at the click of a mouse (http://scientific.thomson.com/isi/) from the Institute of Scientific Information and updated every year, the impact factor has three decimal place precision and an impressive range from close to zero to over 30. Some journals delight in flaunting their impact factors, and when the big names such as Nature do this you could be forgiven for believing that the impact factor is both credible and important.

Sadly, this is not the case. Even superficial scratching beneath the hype shows this currency to be so seriously debased that only the naive could attach any value to it. A journal's impact factor is derived as the total number of citations of all its eligible articles (full papers and reviews) published during the previous two years, divided by the total number of eligible articles. The basic assumption that this ratio reflects the journal's scientific quality has been challenged on many counts, including the heavy citation of reviews, self citation, and period of measurement. It doesn't even matter if a paper turns out to be rubbish—or even if the only reason for citing it is to point this out—because all citations count and contribute equally to the journal's impact factor.

And the worst part of all of this is that the impact factor can be manipulated during a rebuttal process sanctioned by the ISI:

This system of negotiations—or, as (the Institute of Scientific Information) ISI's Ms McVeigh prefers it, "discussions or clarifications"—has made journals far more cognisant of how editorial decisions can affect impact factors. As well as monitoring cases in which ISI gets it wrong, editors are using this knowledge to their advantage. By keeping the numbers of scholarly articles as small as possible, journals can maximise their ranking. "Every time you get a number you get people working out how to make it work to their advantage", admits Dr (George) Lundberg (editor of JAMA). Several artefacts can influence a publication's ranking in journal lists. Review articles or letters are generally cited more than research papers, so boosting review content can make journals perform better in the ranking. Inclusion of news articles, editorials, and media reviews that are among articles considered "non-source" by ISI can win a journal citations without increasing the denominator. And journals can, of course, deliberately try to inflate self citations by asking authors to reference papers in their journal.

The need for inflating impact factors in journals that report clinical research cannot be overstated:

There has been a haemorrhage of clinical academic staff from universities during the past 10 years—mirroring the existence of the research assessment exercise—and wide ranging cuts in specialist teaching available in medical schools, with some subjects now completely absent. Professor Rees says 1000 members of staff have been lost from medical schools, most of them clinical researchers. He attributes this damaging decline to the fact that papers reporting laboratory based research get published in journals with generally higher impact factors than their clinical counterparts, so universities selectively return those sorts of papers for departmental evaluations in the research assessment exercise, and funding for clinical investigation decreases as a result.

What is clear is that in the age of the internet, print journals, like newspapers, are losing readership. The internet is fast becoming doctors' source for information. So journals are eager to keep up their relevance in such a wired world.
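The arithmetic behind the "non-source" trick described above is simple enough to sketch. A minimal illustration with hypothetical numbers (no real journal's figures are used here):

```python
def impact_factor(citations_to_source, citations_to_nonsource, source_items):
    """Impact factor as described by the BMJ piece: citations over the
    prior two years divided by eligible ("source") items. Citations to
    non-source items (news, editorials, media reviews) count in the
    numerator but add nothing to the denominator."""
    return (citations_to_source + citations_to_nonsource) / source_items

# A journal publishing 200 eligible papers cited 1000 times: 1000/200
plain = impact_factor(1000, 0, 200)    # 5.0

# Add editorials and news items that attract 300 citations of their own;
# the denominator stays at 200, so the ratio rises "for free": 1300/200
padded = impact_factor(1000, 300, 200) # 6.5

print(f"{plain:.3f} vs {padded:.3f}")
```

The same lever explains why keeping the count of scholarly articles small, or steering citations toward reviews, moves the number in the journal's favor.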
And nothing sells news - or drives up an impact factor - like bad news: the reprimand of one of their own.
23 Apr 2007 - Update: It seems others realize that Wall Street always seems to know the results of these trials before they're released:
But Dr. Kaul said doctors talking about the New Orleans incident were more concerned about whether medical companies or Wall Street analysts had been alerted to the medical study’s results well before Dr. Leon’s reported lapse. “It’s very common,” Dr. Kaul said, “to hear rumors that companies are in the know about trial results.”

-Wes