Smoking bans are spreading across the world, and activists claim the science behind them is conclusive. But what is this so-called science, and is it science at all? The bulk of the claimed evidence is based on meta-analysis, and the world-renowned John C. Bailar, III, had this to say about it, taken from his letter to The New England Journal of Medicine, 338 (1998), 62, in response to letters regarding LeLorier et al. (1997), “Discrepancies Between Meta-Analyses and Subsequent Large Randomized, Controlled Trials”, NEJM, 337, 536-542, and his (Bailar’s) accompanying editorial, 559-561:
My objections to meta-analysis are purely pragmatic. It does not work nearly as well as we might want it to work. The problems are so deep and so numerous that the results are simply not reliable. The work of LeLorier et al. adds to the evidence that meta-analysis simply does not work very well in practice.
He is so renowned that he is quoted in the Reference Manual on Scientific Evidence. But why would he say such a thing? For one thing, there is no standard for how much weight to give similar studies, much less different ones. This gives the author far too much power to inject his or her personal bias or advocacy, as pointed out in “Beware of Meta-analysis Bearing False Gifts”:
Meta-analyses performed by strong advocates of a particular position in an ongoing controversy are at higher risk for bias. . . .The interpretation of a meta-analysis is potentially subject to an author’s bias by what inclusion and exclusion criteria is selected, the type of statistical evaluation performed, decisions made on how to deal with disparities between the trials, and how the subsequent results are presented.
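To see how much leverage those inclusion and weighting choices give an author, here is a minimal sketch of fixed-effect (inverse-variance) pooling. All study numbers are entirely hypothetical, invented for illustration only; the point is simply that dropping one study from the inclusion set visibly moves the pooled estimate.

```python
import math

# Entirely hypothetical study results (log relative risk, standard error);
# none of these numbers come from any real ETS study.
studies = {
    "A": (0.10, 0.20),
    "B": (-0.05, 0.15),
    "C": (0.40, 0.30),
    "D": (0.02, 0.10),
}

def pooled_rr(results):
    """Fixed-effect (inverse-variance) pooled relative risk."""
    weights = {name: 1.0 / se ** 2 for name, (_, se) in results.items()}
    total = sum(weights.values())
    log_pooled = sum(w * results[name][0] for name, w in weights.items()) / total
    return math.exp(log_pooled)

all_studies = pooled_rr(studies)
# "Exclude" the one null-leaning study and the pooled estimate climbs:
trimmed = pooled_rr({k: v for k, v in studies.items() if k != "B"})
print(f"all four studies: RR = {all_studies:.3f}")
print(f"study B excluded: RR = {trimmed:.3f}")
```

Nothing in the arithmetic is wrong in either run; the difference comes entirely from the analyst's decision about which studies belong in the pool, which is exactly the discretion the critique above describes.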
Gerard E. Dallal, Ph.D., concurred with Dr. Bailar’s assessment and also raised the problem of publication bias:
Meta analysis always struggles with two issues:
publication bias (also known as the file drawer problem) and
the varying quality of the studies.
Publication bias is “the systematic error introduced in a statistical inference by conditioning on publication status.” For example, studies showing an effect may be more likely to be published and written up and submitted for publication more promptly than studies showing no effect. (Studies showing no effect are often considered unpublishable and are just filed away, hence the name file drawer problem.) Publication bias can lead to misleading results when a statistical analysis is performed after assembling all of the published literature on some subject.
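The file drawer effect is easy to simulate. The sketch below uses purely illustrative parameters: it generates many small studies of an exposure that has no true effect at all, "publishes" only the ones that happen to cross a one-sided significance threshold, and shows that the published record alone implies an effect that does not exist.

```python
import random
import statistics

random.seed(0)

# Simulate 500 small studies of an exposure with NO true effect
# (true mean difference = 0). Purely illustrative numbers, not real data.
def run_study(n=25):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    return mean, se

results = [run_study() for _ in range(500)]

# "File drawer": only studies whose effect lands roughly 1.64 standard
# errors above zero (one-sided p < .05) get written up and published.
published = [m for m, se in results if m > 1.64 * se]

all_mean = statistics.fmean(m for m, _ in results)
pub_mean = statistics.fmean(published) if published else 0.0
print(f"mean effect, all studies: {all_mean:+.3f}")
print(f"mean effect, published only: {pub_mean:+.3f}")
```

Averaged over every study that was run, the effect is essentially zero; averaged over only the "publishable" studies, it looks clearly positive. A meta-analysis assembled from the published literature inherits that distortion.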
In September 2004, editors of several prominent medical journals (including the New England Journal of Medicine, The Lancet, Annals of Internal Medicine, and JAMA) announced that they would no longer publish results of drug research sponsored by pharmaceutical companies unless that research was registered in a public database from the start. Furthermore, some journals, e.g. Trials, encourage publication of study protocols in their journals.
In the case of smoking ban advocates, the “publication bias” was intentional, not accidental, as pointed out by Dr. Enstrom with regard to the Surgeon General’s Report and the 1992 EPA Report:
One might wonder how omissions, distortions, and exaggerations like those pointed out above could occur in a document as important as a Surgeon General’s Report on ETS. To better understand this phenomena one must realize that Samet has dealt with the ETS issue in this manner for many years. In particular, he played a major role in the epidemiologic analysis for the December 1992 report on Health Effects of Passive Smoking: Lung Cancer and Other Disorders: The Report of the United States Environmental Protection Agency . . . . The epidemiologic methodology and conclusions of the EPA report have been severely criticized. One of the harshest critiques is the 92-page Decision issued by Federal Judge William L. Osteen on July 17, 1998, which overturned the report in the U.S. District Court . For instance, in his conclusion Judge Osteen wrote: “In conducting the Assessment, EPA deemed it biologically plausible that ETS was a carcinogen. EPA’s theory was premised on the similarities between MS [mainstream smoke], SS [sidestream smoke], and ETS. In other chapters, the Agency used MS and ETS dissimilarities to justify methodology. Recognizing problems, EPA attempted to confirm the theory with epidemiologic studies. After choosing a portion of the studies, EPA did not find a statistically significant association. EPA then claimed the bioplausibility theory, renominated the a priori hypothesis, justified a more lenient methodology. With a new methodology, EPA demonstrated from the 88 selected studies a very low relative risk for lung cancer based on ETS exposure. Based on its original theory and the weak evidence of association, EPA concluded the evidence showed a causal relationship between cancer and ETS. The administrative record contains glaring deficiencies. . . .”
Jonathan M. Samet, M.D., was the lead author of the 2006 Surgeon General’s Report. Does he deny these claims? No. Buried on page 21 is an admission of these facts:
Judge William L. Osteen, Sr., in the North Carolina Federal District Court criticized the approach EPA had used to select studies for its meta-analysis and criticized the use of 90 percent rather than 95 percent confidence intervals for the summary estimates (Flue-Cured Tobacco Cooperative Stabilization Corp. v. United States Environmental Protection Agency, 857 F. Supp. 1137 [M.D.N.C. 1993]). In December 2002, the 4th U.S. Circuit Court of Appeals threw out the lawsuit on the basis that tobacco companies cannot sue the EPA over its secondhand smoke report because the report was not a final agency action and therefore not subject to court review (Flue-Cured Tobacco Cooperative Stabilization Corp. v. The United States Environmental Protection Agency, No. 98-2407 [4th Cir., December 11, 2002], cited in 17.7 TPLR 2.472).

Recognizing that there is still an active discussion around the use of meta-analysis to pool data from observational studies (versus clinical trials), the authors of this Surgeon General’s report used this methodology to summarize the available data when deemed appropriate and useful, even while recognizing that the uncertainty around the meta-analytic estimates may exceed the uncertainty indicated by conventional statistical indices, because of biases either within the observational studies or produced by the manner of their selection.
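The confidence-interval point Judge Osteen raised is purely mechanical, and a short sketch shows it. The relative risk and log-scale standard error below are made-up illustrative values, not figures from the EPA report; they are chosen so that the same estimate looks "significant" with a 90 percent interval yet not with the conventional 95 percent interval.

```python
import math

# Illustrative values only: a weak relative risk whose interval
# straddles RR = 1.0 depending on the confidence level chosen.
rr = 1.19
log_rr = math.log(rr)
se = 0.10  # standard error of log(RR), assumed for illustration

def ci(z):
    """Confidence interval for RR at normal critical value z."""
    lo = math.exp(log_rr - z * se)
    hi = math.exp(log_rr + z * se)
    return round(lo, 2), round(hi, 2)

ci90 = ci(1.645)  # 90% interval: lower bound sits just above 1.0
ci95 = ci(1.960)  # 95% interval: lower bound dips below 1.0
print("90% CI:", ci90)
print("95% CI:", ci95)
```

Widening the interval from 90 to 95 percent does not change the estimate at all; it only changes whether the null value of 1.0 falls inside it, which is why the choice of confidence level mattered so much in the court's critique.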
So is meta-analysis science, or a tool for advocacy?