Experts and the limits of scientific knowledge

When your expert is out of control: Tips to spot unsupported and unsupportable opinions

Nathaniel Leeds
April 2017

I worked on a case a couple of years ago against a nurse who, we strongly believed, had killed my orphaned client’s mother by negligently introducing an air bubble into an IV line. The defendant’s pulmonologist tried to convince us that the cause of death was a sudden, fatal asthmatic attack that struck without warning and killed the patient within three minutes.

I called an epidemiologist specializing in non-infectious disease whom I knew to have an interest in asthma. Even though she had never seen or heard of a similarly sudden case, she refused to testify.

Here is why: medical science knows very little about the onset of fatal asthmatic episodes because they do not typically arise in hospital settings where someone can observe the patient as the attack progresses. The histories taken from bystanders are spotty and problematic, and the person who suffered the episode is, by definition, dead.

These problems of incomplete science confound many parts of litigation. The goal of this article is to offer readers some tools to spot unsupported and unsupportable opinions, and suggestions on what to do when the scientific evidence is spotty. Although most of the examples come out of medical-malpractice cases, the concepts can apply in many cases that require technical experts.

Jurors don’t know the difference between chi-squared and a chai, so why should I?

Many of you will have scanned this article and asked a basic question: if this stuff is too complicated for most jurors to ever understand, why should I waste my time? Or: I’m a lawyer, not a statistician; won’t a good expert have a handle on most of this stuff?

Unfortunately, many experts will offer best-practice opinions that rest on things that are unknowable. As advocates we are often gullible consumers of simple explanations that help our case. The danger of not recognizing that your expert’s basis is weak is that, when crossed, an expert relying on an unsupported opinion can crumble or, worse, flip.

The difference between doctors, scientists, and lawyers

I think unsupported opinions are endemic, not because experts are ethically compromised, but because of the difficulty of what we ask them to do. When hiring an expert we need to be conscious that they are often coming at the problems in our cases from the perspective of their professions – and that professional perspective can be very different from what is required in our profession.

To highlight this problem, it is worth observing how different our goals are from the goals of the fields in which our experts work and were trained. One way of framing the problem is to look at the questions typically posed to the professionals involved in litigation:

• Doctors: “What should I do now (before this person dies on me)?”

• Scientists: “How does it work?”

• Lawyers: “Why did it happen (and who was responsible)?”

In establishing a breach of the standard of care, the question “What should the defendant have done?” is an easy one for a practicing doctor to answer, but that does not mean they are necessarily prepared to answer the question, “Would it have helped?” For that, you need to look further, into research that would be of very little use in the fast-paced world of patient care.

Uncontrollable: Knowing when it is unknowable

It is important to remember that medicine, like law, struggles with the unknown and unknowable.

One of the most disturbing books I have read in recent years is the Pulitzer Prize-winning The Emperor of All Maladies: A Biography of Cancer by Siddhartha Mukherjee. The book tells the story of how the history of 20th-century cancer treatment contains long, and recent, episodes in which people received dangerous treatments that later research proved unnecessary or counterproductive.

These episodes of misguided cancer treatment were not due to medical indifference, but rather to the difficulty of devising a scientific methodology to study the efficacy of the regimen. The problem is obvious: when a treatment looks promising, nobody is going to refuse it; those who survive will credit the treatment; and the families of those who do not survive are likely to blame the disease before they blame the treatment.

When thinking about what is knowable, here are several issues to consider even before talking to an expert and reviewing the literature.

Noise and numbers

First, it is important to consider how varied the studied population is, and how many people were studied. There are some phenomena which attack healthy people, and some which attack people who have a lot of preexisting conditions. As a general rule, the more varied your population is, the more people you need to study to get a statistically significant result.
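For readers who want to see this concretely, here is a minimal Python simulation (all numbers are invented for illustration) of how a more varied population demands a much larger study before the same true difference reliably shows up as statistically significant:

# Illustrative sketch only: a fixed, real improvement of 5 points becomes
# harder to detect (lower "power") as the studied population gets more varied.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_IMPROVEMENT = 5.0  # hypothetical true effect, in units of the outcome

def power_at(n_per_arm, sd, trials=2000):
    """Fraction of simulated studies that reach p < 0.05 for the true effect."""
    hits = 0
    for _ in range(trials):
        control = rng.normal(50, sd, n_per_arm)
        treated = rng.normal(50 + TRUE_IMPROVEMENT, sd, n_per_arm)
        if stats.ttest_ind(treated, control).pvalue < 0.05:
            hits += 1
    return hits / trials

for sd in (5, 15, 30):            # increasingly varied populations
    for n in (20, 80, 320):       # subjects per study arm
        print(f"spread={sd:2d}  n per arm={n:3d}  chance of detection={power_at(n, sd):.2f}")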

Unfortunately, many phenomena, like fatal asthma attacks, are extremely rare, and it is hard to recruit people into the study population.

This problem is particularly difficult in the case of surgical studies. Typically, surgical studies have a very small number of research subjects, who often vary in age, in needs, and in the conditions under which the surgery was performed. As a result, it can be nearly impossible to get anything beyond anecdotal information that one procedure is better than another.

Statistical significance is not the same as legal causation

Just as you should be careful about small-population studies because they do not tend to offer statistically robust inferences, be careful of large-population studies: if a phenomenon is so subtle that you need to look at one million people to see it, it is probably too small to have made a difference in your case.

Often researchers are looking for very marginal improvements. An improvement in outcome of 3 percent for a cancer drug might be a big breakthrough (particularly if you are one of those 3 out of 100 who are now alive), but that does not mean it crosses the more-likely-than-not threshold of 50 percent that we need in litigation.
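A back-of-the-envelope calculation (with assumed numbers, not drawn from any real study) shows how a genuine 3-percent improvement can still fall far short of the more-likely-than-not threshold for an individual plaintiff:

# Assumed, illustrative numbers only.
baseline_survival = 0.20   # assumed survival rate without the drug
treated_survival = 0.23    # assumed survival rate with the drug (+3 points)

# Among patients who took the drug and survived, what fraction survived
# *because of* the drug rather than surviving anyway?
share_owed_to_drug = (treated_survival - baseline_survival) / treated_survival
print(f"{share_owed_to_drug:.0%}")   # roughly 13 percent, well below 50 percent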

My favorite example of this involves wave patterns in lakes. In addition to the waves coming to the shore, there are waves that run parallel to the shore – they just happen to be very small and very slow. Parallel waves exist, but if your feet just got wet, sue the perpendicular wave.

Selection bias

Selection bias is the phenomenon that occurs when the people who select themselves into a study behave differently from the population as a whole.

One amusing example is that average beer prices are lower over the Fourth of July weekend than during the rest of the year. That is not because beer goes on sale. The reason is that the people who buy beer for Fourth of July events like cheaper beer. Those who select themselves into the population of Fourth of July beer buyers are, as a group, behaving differently, and so you get a result (lower average prices paid) that has nothing to do with what you might actually be interested in (whether the price of beer changed).
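A toy simulation (with made-up prices) makes the mechanism plain: the shelf prices never change, but the mix of buyers does, so the average price paid drops:

# Illustrative only: two beers at fixed shelf prices; only the buyer mix shifts.
import numpy as np

rng = np.random.default_rng(1)
PRICES = {"budget": 6.0, "craft": 14.0}   # assumed shelf prices, constant all year

def average_price_paid(share_buying_budget, n_buyers=100_000):
    picks_budget = rng.random(n_buyers) < share_buying_budget
    return np.where(picks_budget, PRICES["budget"], PRICES["craft"]).mean()

print("rest of year:  ", round(average_price_paid(0.50), 2))
print("July 4 weekend:", round(average_price_paid(0.80), 2))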

Selection-bias problems tend to be common when it is hard to recruit people into a study. The harder it is to get people to participate, the more unusual those who do participate are. And sometimes the participants are unusual in similar ways that will skew the results so badly that the phenomenon cannot meaningfully be studied.

Unreliable reporting

There are some phenomena where people are unlikely to report accurate information, or will only report positive results. For example, nobody ever claims it was a mistake to have had children or a sex change.

This is one of the issues endemic in drug trials and academic studies: results are skewed because drug manufacturers and academic researchers do not publish studies reporting that nothing happened. In 2011 a Cornell professor, Daryl Bem, published a peer-reviewed study called “Feeling the Future” which claimed to establish that people had ESP. When statisticians looked at the study, they found that he was reporting those results because research trials that did not seem to show ESP were not published and therefore not included in his analysis.

You should be particularly careful when looking at any meta-analysis. By combining only published studies to reach a statistically significant result, the author of a meta-analysis may be unwittingly burying the studies that did not produce the desired result.
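A short simulation (illustrative only) shows the mechanics of this “file drawer” problem: even when a drug does nothing, pooling only the studies that happened to come out positive and significant makes the null effect look real:

# Illustrative sketch: 200 small studies of a drug with zero true effect.
# Only studies that look positive and "significant" get published and pooled.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
published_effects = []

for _ in range(200):
    control = rng.normal(0, 1, 30)
    treated = rng.normal(0, 1, 30)          # same distribution: true effect is zero
    diff = treated.mean() - control.mean()
    if diff > 0 and stats.ttest_ind(treated, control).pvalue < 0.05:
        published_effects.append(diff)       # the rest stay in the file drawer

print("published studies:", len(published_effects), "of 200")
print("pooled 'effect' among published studies:", round(float(np.mean(published_effects)), 2))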

Some of the hardest causation cases to study are those, like cancer, where nobody would be crazy enough to be in the control group and decline treatment. The same phenomenon occurs with C-sections – if there are problems in a birth, nobody is going to volunteer to be the “control” and wait to see if their child is born with a horrible brain injury.

When it is not easy to conduct controlled experiments, you need to look for epidemiological data, as discussed below.

Statistical tools: Questions to ask

The big question about a study is whether its results were just random chance, or whether they actually appear to explain a relationship. Often this is not an either/or proposition, and the role of statistical tools is to describe which conclusions can be confidently drawn from a series of observable events – and which cannot. Here, then, are some questions to ask when evaluating the literature an expert offers as support:

First, is there any attempt to calculate an error rate? Error rates come in a number of different forms: standard deviations, confidence intervals, and R-squared, to name a few of the more common ones. If these validations are not offered, it is likely because there is not enough information to support a statistically robust conclusion.
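As an illustration of what such a validation looks like, here is a minimal sketch (with made-up measurements) of a 95-percent confidence interval; an interval this wide, straddling zero, is a warning that the data cannot support a confident conclusion either way:

# Made-up per-patient improvements from a hypothetical small study.
import numpy as np
from scipy import stats

improvements = np.array([2.0, -1.5, 4.0, 0.5, -3.0, 6.0, 1.0, -0.5])

mean = improvements.mean()
sem = stats.sem(improvements)   # standard error of the mean
low, high = stats.t.interval(0.95, df=len(improvements) - 1, loc=mean, scale=sem)
print(f"mean improvement {mean:.2f}, 95% confidence interval ({low:.2f}, {high:.2f})")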

Second, how similar are the members of the test population to each other, and to your subject? Often, when there is not great data, studies will simply create arbitrary distinctions to test whether there is some general difference between two groups. For example, a study might ask whether the population that received a treatment within the first 24 hours did better than the population that received it afterwards. You should look at these studies with great care, because the over-24-hour group could be mostly people who did not get the treatment for 20 days; if your case involves someone who got treatment at hour 25, they are much more like the under-24-hour group than the over-24-hour group.
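A quick simulation (with an assumed, heavily skewed mix of treatment delays) shows why: the over-24-hour group can be dominated by patients treated weeks later, who look nothing like a patient treated at hour 25:

# Illustrative only: most patients are treated promptly, but a minority wait
# 10 to 30 days. The "over 24 hours" group is then mostly very-late patients.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
prompt_delays = rng.exponential(scale=12, size=n)     # hours; most under a day
very_late_delays = rng.uniform(240, 720, size=n)      # hours; 10 to 30 days
delays = np.where(rng.random(n) < 0.85, prompt_delays, very_late_delays)

late_group = delays[delays > 24]
print("average delay in the over-24-hour group:", round(late_group.mean() / 24, 1), "days")
print("share of that group treated 10+ days out:", round((late_group > 240).mean(), 2))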

Finally, is there enough information to extrapolate to your scenario? It is not uncommon that the population distinctions you care about are not reflected in the populations studied. If there is a lot of data, there may be a regression analysis, which will allow you to extrapolate. You will most often see regression analysis in cases, like automotive engineering, with large datasets from controlled environments. If not, you want to make sure that your scenario is similar to that of the studied populations.
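For the statistically inclined, here is a minimal sketch (with invented data points) of fitting a simple regression line and reading off an estimate for a scenario inside the studied range; extrapolating far outside that range is where trouble starts:

# Invented study data: survival rate versus hours of treatment delay.
import numpy as np

delay_hours   = np.array([6, 12, 24, 36, 48, 60, 72])
survival_rate = np.array([0.92, 0.90, 0.85, 0.78, 0.70, 0.66, 0.58])

slope, intercept = np.polyfit(delay_hours, survival_rate, deg=1)
# Interpolating at 30 hours (inside the studied 6-72 hour range) is reasonable;
# predicting at 300 hours from the same line would not be.
print("estimated survival at a 30-hour delay:", round(slope * 30 + intercept, 2))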

Without science: Narrative and outliers

So, what if you come to suspect that there’s no scientific support for the proposition you need?

First, don’t despair! If your expert does not have a scientific basis for their belief, the other side does not have one either. This can be rhetorically liberating, because it means your expert is free to offer anecdotal evidence and a narrative explanation of how the phenomenon works and why they believe what they do.

I was working on a case in which the standard of care required an operation within 60 hours. Our expert said there was a breach, and then went too far and claimed that the breach was the cause of death. But the studies supporting that standard of care were statistically flimsy and, to their credit, offered a statistical validation (a chi-squared test) which conceded that the 60-hour mark was arbitrary but appeared to have some utility.
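For the curious, a chi-squared test simply asks whether a two-by-two table of counts, like the sketch below (with invented numbers), shows an association between the cutoff and the outcome stronger than chance alone would explain:

# Invented counts, for illustration of what the test checks.
import numpy as np
from scipy.stats import chi2_contingency

#                  survived  died
table = np.array([[40,       10],    # operated within 60 hours
                  [25,       25]])   # operated after 60 hours

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.3f}")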

Fortunately, the timeline problem was easily fixed, because we were able to look at a narrative of the tell-tale signs that appear before this kind of death. Looking at that narrative, we were able to say that the fatal process could have been stopped earlier if the doctors had adhered to the 60-hour guideline.

A final approach is to ask whether your outcome would be such an outlier under normal circumstances that you can infer the breach caused the problem. When the Challenger space shuttle exploded, the legendary physicist Richard Feynman made news by dropping a piece of O-ring material into a cup of ice water, showing that the cold rubber lost its resilience, and then noting that the Challenger’s launch temperature was lower than that of previous launches.

Clearly, you can’t run experiments on space shuttles to see at what temperature they blow up. And when the Challenger did blow up, the debris was messy enough that it could be used to support a range of possible explanations. So there is no scientific way of showing why the Challenger blew up.

If there is no scientific support for your expert’s opinion, take a page from Richard Feynman’s book and build the facts to show that the breach of the standard of care made your scenario a dangerous outlier from normal operation.

Nathaniel Leeds

Nathaniel Leeds handles a broad range of civil cases on behalf of consumers and small businesses for Mitchell Leeds, LLP, including personal injury, medical malpractice, and business litigation. He started his legal career as a Deputy DA in Merced County where he tried numerous cases to a jury, including third-strike felonies, juvenile sexual assaults, and manslaughter. He brings this extensive trial experience to his civil practice. He is a University of Chicago graduate. He received his law degree from UC Hastings in San Francisco.

https://www.mitchelllawsf.com/

Copyright © 2023 by the author.
For reprint permission, contact the publisher: www.plaintiffmagazine.com