I have brought this up in several forums, and even brought it up here. The activists keep claiming the evidence is clear that there is no safe level of secondhand smoke. Well, since they use the term "evidence," let's see how the law views their evidence. For one thing, both the faked EPA report and the Surgeon General's report were not studies but meta-analyses of cherry-picked studies, so let's see what the courts have to say about that.
From the Reference Manual on Scientific Evidence:
Much has been written about meta-analysis recently, and some experts consider the problems
of meta-analysis to outweigh the benefits at the present time. For example, Bailar has written:
[P]roblems have been so frequent and so deep, and overstatements of the strength of conclusions so extreme, that one might well conclude there is something seriously and fundamentally wrong with the method. For the present . . . I still prefer the thoughtful, old-fashioned review of the literature by a knowledgeable expert who explains and defends the judgments that are presented. We have not yet reached a stage where these judgments can be passed on, even in part, to a formalized process such as meta-analysis.
John C. Bailar III, Assessing Assessments, 277 Science 528, 529 (1997) (reviewing Morton Hunt, How
Science Takes Stock (1997)); see also Point/Counterpoint: Meta-analysis of Observational Studies, 140 Am.
J. Epidemiology 770 (1994).
Now, this was written by a well-respected professor, John C. Bailar III, and in essence he is saying that meta-analysis of observational studies is unreliable; all of the studies in both reports fit that category.
Here are some of his credentials.
* Trends in cancer
* Assessing health risks (such as new chemicals)
* Misconduct in science
* Editorial Board, New England Journal of Medicine
* Board (Chair), National Institute of Statistical Sciences
Now, the anti-smoking activists will tell you that RRs (relative risks) of less than 2 are perfectly acceptable if they are repeated often enough. They will tell you that the number 2 was picked by the tobacco companies to brand the ETS studies as junk science. OK, again from the Reference Manual on Scientific Evidence:
The threshold for concluding that an agent was more likely than not the
cause of an individual’s disease is a relative risk greater than 2.0. Recall that a
relative risk of 1.0 means that the agent has no effect on the incidence of disease.
When the relative risk reaches 2.0, the agent is responsible for an equal number
of cases of disease as all other background causes. Thus, a relative risk of 2.0
(with certain qualifications noted below) implies a 50% likelihood that an exposed
individual’s disease was caused by the agent. A relative risk greater than
2.0 would permit an inference that an individual plaintiff’s disease was more
likely than not caused by the implicated agent.139 A substantial number of courts
in a variety of toxic substances cases have accepted this reasoning.
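The arithmetic behind that 2.0 threshold is straightforward and can be sketched in a few lines. This is a minimal illustration of the standard attributable-fraction formula, (RR − 1)/RR; the function name is my own, not from the Manual:

```python
def attributable_probability(rr):
    """Probability that an exposed individual's disease was caused by the
    agent, assuming the relative risk (rr) reflects a true causal effect
    with no confounding or bias -- the Manual's "certain qualifications"."""
    if rr <= 1.0:
        # RR of 1.0 means the agent has no effect on disease incidence.
        return 0.0
    return (rr - 1.0) / rr

# At RR = 2.0 the agent accounts for half the exposed cases (50%),
# which is exactly the "more likely than not" tipping point.
print(attributable_probability(2.0))
print(attributable_probability(1.25))
```

Note how quickly the probability falls below the threshold: an RR of 1.25, in the range reported for ETS, corresponds to only a 20% likelihood that the agent caused a given exposed individual's disease.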
So the courts won't even touch anything less than a 2. Should we be passing laws based on scientific evidence that couldn't even stand up in a court of law? Especially since the epidemiologists can't even agree on RRs this low. From an award-winning article in Science, "Epidemiology Faces Its Limits":
“If it’s a 1.5 relative risk, and it’s only one study and even a very good one, you scratch your chin and say maybe.” Some epidemiologists say that an association with an increased risk of tens of percent might be believed if it shows up consistently in many different studies.
That’s the rationale for meta-analysis — a technique for combining many ambiguous studies to see whether they tend in the same direction (Science, 3 August 1990, p. 476).
But when Science asked epidemiologists to identify weak associations that are now considered convincing because they show up repeatedly, opinions were divided — consistently.
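What that pooling does mechanically can be sketched in a few lines. This is a simplified fixed-effect, inverse-variance illustration of combining studies, using my own example numbers rather than anything from the article:

```python
import math

def pooled_fixed_effect(log_rrs, std_errors):
    """Fixed-effect meta-analysis: combine per-study log relative risks
    using inverse-variance weights. Returns (pooled estimate, pooled SE)."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Ten hypothetical studies, each finding RR = 1.2 with a standard error
# too wide to be significant on its own (1.96 * 0.3 > log 1.2).
studies = [math.log(1.2)] * 10
ses = [0.3] * 10

est, se = pooled_fixed_effect(studies, ses)
print(est, se)  # same point estimate, but the SE shrinks by sqrt(10)
```

This is exactly the double-edged sword the article is pointing at: pooling never moves the weak association itself, it only narrows the error bars around it, so ten ambiguous studies can be declared "significant" while any shared bias or confounding in them is pooled right along with the data.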
Again: should we be passing laws based on junk science that couldn't stand up in a court of law?