Meta-analysis: Science or a Tool for Advocacy?


Smoking bans are spreading across the world, and activists claim the science behind them is conclusive. But what is this so-called science, and is it really science? The bulk of the claimed evidence is based on meta-analysis, and the world-renowned John C. Bailar III had this to say about it.

The following is taken from his letter to The New England Journal of Medicine, 338 (1998), 62, written in response to letters regarding LeLorier et al. (1997), “Discrepancies Between Meta-Analyses and Subsequent Large Randomized, Controlled Trials,” NEJM, 337, 536-542, and his (Bailar’s) accompanying editorial, 559-561:

My objections to meta-analysis are purely pragmatic. It does not work nearly as well as we might want it to work. The problems are so deep and so numerous that the results are simply not reliable. The work of LeLorier et al. adds to the evidence that meta-analysis simply does not work very well in practice.

He is so renowned that he is quoted in the Reference Manual on Scientific Evidence. But why would he say such a thing? For one thing, there is no standard for how much weight to give similar studies, much less different ones. That gives the author far too much power to inject his or her personal bias or advocacy, as pointed out in “Beware of Meta-analysis Bearing False Gifts.”

Meta-analyses performed by strong advocates of a particular position in an ongoing controversy are at higher risk for bias. . . .The interpretation of a meta-analysis is potentially subject to an author’s bias by what inclusion and exclusion criteria is selected, the type of statistical evaluation performed, decisions made on how to deal with disparities between the trials, and how the subsequent results are presented.
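
To get a feel for how much those inclusion and weighting decisions matter, here is a minimal sketch in Python of a fixed-effect, inverse-variance pooled estimate. The study numbers are invented for illustration and do not come from any of the papers discussed here; the point is simply that dropping a single study from the inclusion list visibly moves the summary relative risk.

    # Minimal fixed-effect meta-analysis sketch (hypothetical numbers only).
    # Each study is (relative risk, standard error of the log relative risk).
    import math

    studies = [
        (1.30, 0.20),  # hypothetical study A
        (0.95, 0.15),  # hypothetical study B
        (1.10, 0.25),  # hypothetical study C
        (0.85, 0.30),  # hypothetical study D
    ]

    def pooled_rr(selected):
        """Inverse-variance fixed-effect pooling on the log scale."""
        weights = [1.0 / se ** 2 for _, se in selected]
        log_rrs = [math.log(rr) for rr, _ in selected]
        pooled_log = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
        return math.exp(pooled_log)

    print("All four studies included: RR =", round(pooled_rr(studies), 3))
    print("Study B excluded:          RR =", round(pooled_rr(studies[:1] + studies[2:]), 3))
    print("Study D excluded:          RR =", round(pooled_rr(studies[:3]), 3))

With these made-up numbers the pooled relative risk moves from roughly 1.04 to roughly 1.13 just by leaving out one null study, which is exactly the kind of discretion the quotation above warns about.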

Gerard E. Dallal, Ph.D., concurred with Dr. Bailar’s assessment and also brought up the problem of publication bias.

Meta analysis always struggles with two issues:

publication bias (also known as the file drawer problem) and
the varying quality of the studies.

Publication bias is “the systematic error introduced in a statistical inference by conditioning on publication status.” For example, studies showing an effect may be more likely to be published and written up and submitted for publication more promptly than studies showing no effect. (Studies showing no effect are often considered unpublishable and are just filed away, hence the name file drawer problem.) Publication bias can lead to misleading results when a statistical analysis is performed after assembling all of the published literature on some subject.
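
A rough illustration of the file drawer problem (a hypothetical simulation, not data from any real literature): suppose many small studies are run on an exposure with no true effect, but only the ones that reach statistical significance get written up. Averaging the published results then manufactures an apparent effect out of nothing.

    # Publication-bias ("file drawer") simulation with purely hypothetical numbers.
    # The true effect is zero; only "significant" positive results get published.
    import random
    import statistics

    random.seed(1)
    TRUE_EFFECT = 0.0
    SE = 0.25        # standard error of each small study
    N_STUDIES = 1000

    all_estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

    # A study is "published" only if its estimate is significantly above zero
    # at roughly the conventional level (z > 1.96).
    published = [est for est in all_estimates if est / SE > 1.96]

    print("Mean effect, all studies:      ", round(statistics.mean(all_estimates), 3))
    print("Studies that get published:    ", len(published), "of", N_STUDIES)
    print("Mean effect, published studies:", round(statistics.mean(published), 3))

The full set of studies averages out to essentially zero, while the handful that clear the significance bar average a substantial positive effect; a meta-analysis that can only see the published ones inherits that bias.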

Now publication bias became so prevalent in the drug industry that the medical journals took action.

In September 2004, editors of several prominent medical journals (including the New England Journal of Medicine, The Lancet, Annals of Internal Medicine, and JAMA) announced that they would no longer publish results of drug research sponsored by pharmaceutical companies unless that research was registered in a public database from the start.[13] Furthermore, some journals, e.g. Trials, encourage publication of study protocols in their journals.

In the case of smoking-ban advocates, the “publication bias” was intentional, not accidental, as pointed out by Dr. Enstrom in his discussion of the Surgeon General’s Report and the 1992 EPA Report:

One might wonder how omissions, distortions, and exaggerations like those pointed out above could occur in a document as important as a Surgeon General’s Report on ETS. To better understand this phenomena one must realize that Samet has dealt with the ETS issue in this manner for many years. In particular, he played a major role in the epidemiologic analysis for the December 1992 report on Health Effects of Passive Smoking: Lung Cancer and Other Disorders: The Report of the United States Environmental Protection Agency . . . . The epidemiologic methodology and conclusions of the EPA report have been severely criticized. One of the harshest critiques is the 92-page Decision issued by Federal Judge William L. Osteen on July 17, 1998, which overturned the report in the U.S. District Court [121]. For instance, in his conclusion Judge Osteen wrote: “In conducting the Assessment, EPA deemed it biologically plausible that ETS was a carcinogen. EPA’s theory was premised on the similarities between MS [mainstream smoke], SS [sidestream smoke], and ETS. In other chapters, the Agency used MS and ETS dissimilarities to justify methodology. Recognizing problems, EPA attempted to confirm the theory with epidemiologic studies. After choosing a portion of the studies, EPA did not find a statistically significant association. EPA then claimed the bioplausibility theory, renominated the a priori hypothesis, justified a more lenient methodology. With a new methodology, EPA demonstrated from the 88 selected studies a very low relative risk for lung cancer based on ETS exposure. Based on its original theory and the weak evidence of association, EPA concluded the evidence showed a causal relationship between cancer and ETS. The administrative record contains glaring deficiencies. . . .”

Jonathan M. Samet, M.D., was the lead author of the 2006 Surgeon General’s Report. Does he deny these claims? No! Buried on page 21 is an admission of these facts:

Judge William L. Osteen, Sr., in the North Carolina Federal District Court criticized the approach EPA had used to select studies for its meta-analysis and criticized the use of 90 percent rather than 95 percent confidence intervals for the summary estimates (Flue-Cured Tobacco Cooperative Stabilization Corp. v. United States Environmental Protection Agency, 857 F. Supp. 1137 [M.D.N.C. 1993]). In December 2002, the 4th U.S. Circuit Court of Appeals threw out the lawsuit on the basis that tobacco companies cannot sue the EPA over its secondhand smoke report because the report was not a final agency action and therefore not subject to court review (Flue-Cured Tobacco Cooperative Stabilization Corp. v. The United States Environmental Protection Agency, No. 98-2407 [4th Cir., December 11, 2002], cited in 17.7 TPLR 2.472 [2003]).

Recognizing that there is still an active discussion around the use of meta-analysis to pool data from observational studies (versus clinical trials), the authors of this Surgeon General’s report used this methodology to summarize the available data when deemed appropriate and useful, even while recognizing that the uncertainty around the meta-analytic estimates may exceed the uncertainty indicated by conventional statistical indices, because of biases either within the observational studies or produced by the manner of their selection.
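
The 90 versus 95 percent confidence interval point that Judge Osteen criticized is easy to illustrate with a small sketch. The relative risk and standard error below are invented, not the EPA’s figures; they simply show how the same point estimate can exclude 1.0 at the 90 percent level while the conventional 95 percent interval still includes it.

    # Hypothetical illustration of 90% vs. 95% confidence intervals for a relative risk.
    import math

    rr = 1.20        # invented point estimate
    se_log = 0.10    # invented standard error on the log scale

    for label, z in [("90% CI", 1.645), ("95% CI", 1.960)]:
        lower = math.exp(math.log(rr) - z * se_log)
        upper = math.exp(math.log(rr) + z * se_log)
        verdict = "excludes 1.0" if lower > 1.0 else "includes 1.0"
        print(f"{label}: {lower:.2f} to {upper:.2f}  ({verdict})")

With these invented numbers the 90 percent interval runs from about 1.02 to 1.41 and excludes 1.0, while the 95 percent interval runs from about 0.99 to 1.46 and does not; choosing the narrower interval is what makes an association look “statistically significant.”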

So, is meta-analysis science or a tool for advocacy?

About Marshall Keith

Broadcast Engineer, Scuba Diver, Photographer, Fisherman, Hunter, Libertarian
This entry was posted in Libertarian, Nanny State, Smoking Ban.

6 Responses to Meta-analysis: Science or a Tool for Advocacy?

  1. Warning: Anti-tobacco activism may be hazardous to epidemiologic science
    Carl V Phillips

    This commentary accompanies two articles submitted to Epidemiologic Perspectives & Innovations in response to a call for papers about threats to epidemiology or epidemiologists from organized political interests. Contrary to our expectations, we received no submissions that described threats from industry or government; all were about threats from anti-tobacco activists. The two we published, by James E. Enstrom and Michael Siegel, both deal with the issue of environmental tobacco smoke. This commentary adds a third story of attacks on legitimate science by anti-tobacco activists, the author’s own experience. These stories suggest a willingness of influential anti-tobacco activists, including academics, to hurt legitimate scientists and turn epidemiology into junk science in order to further their agendas. The willingness of epidemiologists to embrace such anti-scientific influences bodes ill for the field’s reputation as a legitimate science.

    http://www.epi-perspectives.com/content/4/1/13

    Let’s not forget this one:

    Warning: junk reporting of junk science threatens individual freedom
    Friday May 6, 2011
    Another day another scary health study, says Brian Monteith. But that’s the way of the bully state: junk science, junk reporting followed by junk laws.

    http://www.thefreesociety.org/Issues/Smoking/warning-junk-reporting-of-junk-science-threatens-individual-freedom

  2. http://members.iinet.com.au/~ray/TSSOASb.html

    Ray Johnstone’s The Scientific Scandal of AntiSmoking has a little about it:

    Over the next decade the results of other similar trials appeared. It had been argued that if an improvement in one life-style factor, smoking, were of benefit, then an improvement in several – eg smoking, diet and exercise – should produce even clearer benefits. And so appeared the results of the whimsically acronymed Multiple Risk Factor Intervention Trial or MRFIT, with its 12,886 American subjects. Similarly, in Europe 60,881 subjects in four countries took part in the WHO Collaborative Trial. In Sweden the Goteborg study had 30,022 subjects. These were enormously expensive, wide-spread and time-consuming experiments. In all, there were 6 such trials with a total of over a hundred thousand subjects each engaged for an average of 7.4 years, a grand total of nearly 800,000 subject-years. The results of all were uniform, forthright and unequivocal: giving up smoking, even when fortified by improved diet and exercise, produced no increase in life expectancy. Nor was there any change in the death rate for heart disease or for cancer. A decade of expensive and protracted research had produced a quite unexpected result.

  3. You may be interested in this.

    IARC 1998 is without any doubt a no-risk study, which becomes very clear when you examine the study’s Technical report with the raw figures – page 220 for workplace SHS:

    http://legacy.library.ucsf.edu/tid/wtp67e00/pdf

    Controlling for educational level and type of residence more than halved the excess SHS-workplace risk, from an unadjusted odds ratio of 1.17 in the article to OR 1.08 (page 220, row 1, column 3). Furthermore, eliminating 7 Portuguese cases (out of 650 cases in total) with incomplete data reduces the ratio to OR 1.02 (page 220, row 3, column 3).
    More in the comments on Snowdon’s blog:

    http://velvetgloveironfist.blogspot.com/2010/05/tale-of-two-studies.html?showComment=1275138882892#c4588064262918327442

    http://daveatherton.wordpress.com/2011/06/11/more-on-the-whoboffetta-paper-hat-tip-klaus-k/

  4. Pingback: Stanton Glantz, Professional at Ad Hominem | Tea Party Perspective

  5. Website Here says:

    What i do not understood is actually how you’re not really a lot more smartly-appreciated than you may be right now. You’re so intelligent.
    You realize thus considerably in relation to this matter, produced me
    personally believe it from a lot of numerous angles.
    Its like men and women aren’t fascinated unless it’s something to accomplish with Lady gaga!
    Your own stuffs excellent. All the time deal with it up!
