Most courts (but certainly, and unfortunately, not all of them) recognize that cherry-picking is a cardinal sin under Rule 702. Science generally demands a rigorous and conservative approach to evaluating cause-and-effect relationships. That ethos inherently clashes with litigation, an arena where parties prioritize results over neutral principles of process purity.
“Cherry-picking” involves the selective consideration of facts and data to support a desired or pre-determined result, rather than the analysis of all relevant facts and data to find a scientific truth (or to determine that the truth remains elusive based on the available facts and data). It forsakes the scrupulous adherence to principles of objectivity, rigor, and process validity that are the hallmark of the scientific method. In Daubert-speak, such a methodology does not produce “scientific knowledge.” Rather, cherry-picking represents a failure of methodology that cannot be waved off as a matter of weight rather than admissibility.
A federal court in the Southern District of New York recently reaffirmed these principles in Daniels-Feasel v. Forest Pharmaceuticals, Inc., 2021 WL 4037820 (S.D.N.Y. Sept. 3, 2021). Plaintiffs alleged that Lexapro use during pregnancy was capable of inducing fetal neurodevelopmental problems leading to autism spectrum disorder (ASD). They proffered three well-traveled experts to testify to this theory of general causation.
The first expert, Dr. Lemuel Moyé, purported to employ a weight-of-the-evidence methodology. Because of the necessarily subjective nature of this analytical approach, courts require the expert employing it to explain in some detail how they analyzed and “weighed” the evidence, and to rationally justify the variable weight assignments they made along the way. Quoting In re Abilify (Aripiprazole) Product Liability Litigation, 299 F. Supp. 3d 1291, 1311 (N.D. Fla. 2018), the court observed that this methodology can be found reliable only if “the expert considers all available evidence carefully and explains how the relative weight of the various pieces of evidence led to his conclusion.”
The court found multiple reasons to conclude that Moyé’s weight-of-the-evidence analysis fell short, but chief among them was cherry-picking: “He fails to adequately support his conclusions using the selectively favorable data he relies upon, unjustifiably disregards inconsistent data, and admittedly ignores categories of relevant evidence.” Among other things, Moyé “disregard[ed] the limitations expressed by the studies he cites in support of his conclusions, and dismissed inconsistent findings without explanation.” The court found this result-driven approach unreliable.
The second expert was the ubiquitous Dr. Laura Plunkett. Characteristically, she sought to evade full scrutiny by disclaiming that her opinion was one of general causation. The court found that this “does not justify her proffer of an incomplete, selective, misleading, and ultimately unreliable opinion. Indeed, she is obligated to utilize and explain ‘a scientific method of weighting’ to avoid rendering her opinion the product of a ‘mere conclusion-oriented selection process.’”
Substantively, Plunkett’s main misstep was her failure to adequately explain how and why she arrived at her causation conclusion based on the studies she cited. But the court was also critical of cherry-picking on her part. “[A] rigorous examination of Dr. Plunkett’s analysis reveals that she conducted a flawed and misleading Bradford Hill analysis where she selectively analyzed four of the nine factors, primarily relied on cherry-picked, favorable animal data that supports her conclusions within those analyses, and failed to mention, much less reconcile, other categories of relevant data constituting contrary authority.” The court continued, “Dr. Plunkett chooses to discuss only those studies, and findings within studies, that support her conclusions, and presents to the Court ‘what [s]he believes the final picture looks like’ rather than the entire ‘scientific landscape.’” The court again found this biased approach unreliable.
The third expert was Dr. Patricia Whitaker-Azmitia. She deployed a “face validity” methodology to decide how to weigh the studies she reviewed. The court found that this methodology inherently invited cherry-picking, because it considered only animal studies that supported her mechanistic causation thesis. Indeed, she “admitted that she deliberately disregarded those studies that showed no similarities and was only looking for those that supported her hypothesis.” Paradigm cherry-picking.
As with the other experts, cherry-picking was not her only sin. She also failed to explain adequately (or at all) how she weighed the various studies—not surprising, since she disregarded entirely the studies that did not support her theory. She also attempted to buttress her opinion with epidemiology studies, but candidly admitted she was not qualified to interpret epidemiological data and had relied on more knowledgeable colleagues for that part of her analysis. She was also guilty of mischaracterizing at least one study, which “further casts doubt on the reliability of her opinion.” The court concluded that her testimony too was unreliable and inadmissible under Rule 702.
The experts’ flaws in this case were a veritable stew, with the primary flavors being excessive subjectivity and lack of transparency. But the main and common ingredient was their disregard of or bias against studies that conflicted with their desired causation conclusion. The opinion excluding them reinforces the overarching requirement that experts adhere to fundamental principles of scientific rigor when they choose to enter the courtroom.