
Issue 5/2019

In your book End of Its Rope: How Killing the Death Penalty Can Revive Criminal Justice[1] you observe the drop in death sentences in the U.S. and the growing number of death row exonerations. Flawed forensic science seems to have played a prominent role in many such wrongful convictions. In your opinion, is there a full understanding among judges and prosecutors of the potential dangers posed by the misuse of forensic science?

In End of Its Rope, I describe how death sentences in the U.S. have plummeted by 90 percent since the 1990s. In states like Texas and Virginia, which used to lead the country in death sentencing, very few are imposed today. In fact, states that had been top death penalty states, like Virginia, have not imposed a single death sentence in many years. That decline may come as a real surprise to people who still think of the U.S. as a major death sentencing country. Today, only a few counties in a few states still impose such sentences. The chances that anyone sentenced to death is executed are also slight.

One reason why people have become concerned with death sentencing is that wrongful convictions continue to occur even in capital cases. Flawed forensic science is an important reason why. Many believe that a wrongful conviction occurred due to faulty forensic evidence in the case of Cameron Todd Willingham, a man who was executed in Texas. Nor are forensic errors uncommon in death penalty cases. I describe how twenty people sentenced to death have been exonerated by post-conviction DNA testing in the U.S. Ten of those cases involved microscopic hair-comparison evidence, a type so unreliable that the FBI and crime labs in several states have conducted full audits into thousands of cases based on such evidence. Two more had fiber comparisons. Two involved notoriously unreliable bite-mark comparisons, a type of forensics that the scientific community has stated should not be used to identify individuals until meaningful research is done to validate it.

Indeed, when I examined the trial testimony in the cases of the first 250 people exonerated by DNA testing for my book Convicting the Innocent, I found that flawed forensic evidence contributed to the convictions in over half of those cases.

In recent years, awareness of this crisis in forensics has grown. The FBI in its audit of hair cases found that in 95% of the cases, analysts testified erroneously. In Massachusetts over 40,000 cases have been ordered reopened due to lab errors. Crime labs in large and small cities, from Chicago, Illinois, to Cleveland, Ohio, to Houston, Texas, have been audited or closed, and entire state crime labs have had cases reopened. States have begun to create forensic science commissions to investigate these systematic errors in forensic evidence. However, the responses in the legal system have typically occurred only after a major crisis and serious errors have come to light. Little has been done over the years to ensure that reliable evidence is used in criminal cases.

The FBI in its audit of hair cases found that in 95% of the cases, analysts testified erroneously. In Massachusetts over 40,000 cases have been ordered reopened due to lab errors

The Italian Supreme Court adopted, in a renowned decision, a sort of “Daubert-style” checklist for scientific evidence. Did the U.S. Supreme Court’s Daubert standard succeed in improving the quality of judicial decisions in the U.S.?

After the U.S. Supreme Court issued its landmark ruling in Daubert v. Merrell Dow Pharmaceuticals in 1993, lawyers expected that forensic science would finally face scrutiny in court. For decades, judges had adopted an approach in which they did not inquire into the reliability of forensics: they took judicial notice of the admissibility of forensics, or they scrutinized only “novel” scientific evidence. Daubert could have changed those practices, since it calls for a multi-factor inquiry into any scientific expert evidence. Following the Daubert ruling, the federal rules were amended to detail a required reliability inquiry into expert evidence. Many states have also adopted Federal Rule of Evidence 702, following the federal approach.

However, in response, few judges have even raised reliability concerns. Judges have almost completely abdicated their role as gatekeepers: they have been largely silent in the face of this crisis in forensics. Chris Fabricant and I studied all state court opinions discussing reliability in states that have adopted Rule 702. We found very few opinions even discussing reliability, and almost none holding that it is appropriate to exclude evidence on reliability grounds.

Judges have almost completely abdicated their role as gatekeepers: they have been largely silent in the face of this crisis in forensics […] We found very few opinions even discussing reliability, and almost none holding that it is appropriate to exclude evidence on reliability grounds

It should also be a serious constitutional violation to have undocumented, unreliable forensics used in a courtroom. While the U.S. Supreme Court has regulated the right of the defense to confront a forensic witness under the Sixth Amendment, it has left reliability to the trial courts. To be sure, some trial judges, like Nancy Gertner and Jed Rakoff, have ruled that forensic experts cannot deliver overstated conclusions in their testimony, and have adopted procedures to better ensure discovery on forensic evidence. The National Commission on Forensic Science convened by the Department of Justice also issued guidelines regarding discovery and the statements that experts can make when they testify. All of those efforts are a useful first step. However, reliable evidence will not be provided unless judges ensure that only proficient experts and reliable techniques are used in court.


2019 marks the tenth anniversary of the NAS Report Strengthening Forensic Science in the United States. In the ten years since its release, what has its impact in court been?

This year marks the tenth anniversary of the 2009 National Research Council Report, Strengthening Forensic Science in the United States. That Committee, with leading scientists among its members, concluded that much of the forensic evidence used in criminal trials is «without any meaningful scientific validation»[2]. It described «major problems»[3] in forensics, including cases in which faulty forensic science leads to wrongful convictions. This was the most cited sentence of the report: «with the exception of nuclear DNA analysis, however, no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source»[4].


That sentence does not provide any answers, but it does pose a challenge: how can all of the non-nuclear DNA forensics be made reliable, or even tested to find out how reliable they are? In the years since that report was released, there have been some real efforts among scientists to test the reliability of several forensic techniques. Most notably, two black-box studies of the reliability of fingerprint comparison have been conducted. Statistical methods have been introduced in fingerprinting and firearms comparisons, although they are not yet widely adopted.
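To give a concrete sense of what such statistical reporting looks like, here is a minimal Python sketch of a likelihood-ratio calculation, the kind of quantitative statement being explored for fingerprint and firearms comparisons. The probabilities below are invented placeholders for illustration; real systems would estimate them from validation data.

```python
# A toy likelihood-ratio (LR) calculation: instead of declaring a categorical
# "match," an examiner reports how much more probable the observed
# correspondence is if the two items share a source than if they do not.
# The two probabilities are invented for illustration; real systems
# estimate them from validation data.

def likelihood_ratio(p_if_same_source: float, p_if_different_source: float) -> float:
    """LR = P(observed correspondence | same source) /
            P(observed correspondence | different source)."""
    return p_if_same_source / p_if_different_source

lr = likelihood_ratio(p_if_same_source=0.95, p_if_different_source=0.001)
print(f"LR = {lr:.0f}: the correspondence is about {lr:.0f} times more "
      "probable under the same-source hypothesis")
```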

Still, very little has changed to date. The President’s Council of Advisors on Science and Technology (PCAST) issued a report that said something more forceful: if we do not know how reliable a forensic technique is, then we should not use it until that basic question is answered[5]. These scientists said that some techniques, like firearms comparisons and bite-mark comparisons, should not be used in court until the research is done. Other techniques, like fingerprint evidence, were found valid, but with surprisingly high error rates. The PCAST report emphasized that jurors must be told about those error rates. Jurors must also be told how proficient a particular expert is. Again, forensic experts, prosecutors, and judges have largely ignored this report.


We need to be sure of the quality of the experts who take the stand in court. Nonetheless, the Italian legal system does not provide for any kind of real selection. Moreover, the vast majority of forensic labs are embedded in law enforcement agencies. What is the situation in the U.S.? How can the independence and competence of those experts be improved?

There is still no national regulation of forensics in the U.S., although there is no national regulation of forensics in any other country either. No person is infallible and any technique has an error rate. Moreover, error rates can be measured. For years no research was done on error rates, perhaps because of the fear that such research would uncover how unreliable some forensic techniques truly are. When lawyers tried to challenge fingerprint evidence, the FBI, in particular, was extremely aggressive in responding that the «error rate for the method is zero».
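Since the point that error rates can be measured is central here, a minimal Python sketch may help show how a black-box study turns examiner decisions into a measured error rate with an uncertainty interval. The counts below are hypothetical placeholders, not figures from any actual study.

```python
# A minimal sketch of how a black-box study measures an error rate and its
# uncertainty. The counts are hypothetical placeholders, not figures from
# any actual study.
import math

def wilson_interval(errors: int, trials: int, z: float = 1.96):
    """95% Wilson score confidence interval for an observed error proportion."""
    p = errors / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return max(0.0, center - half), min(1.0, center + half)

# Hypothetical: 6 false positives among 3,000 known non-matching comparisons.
low, high = wilson_interval(errors=6, trials=3000)
print(f"Observed false-positive rate: {6 / 3000:.3%}")
print(f"95% confidence interval: {low:.3%} to {high:.3%}")
```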

Today, studies are finally beginning to be done, including through the Center for Statistics and Applications in Forensic Evidence (CSAFE), in which I am a participating researcher. There was also the National Commission on Forensic Science (NCFS), convened by the DOJ and NIST, which was active from 2014 to 2017. It made a series of recommendations that were real steps forward.

No person is infallible and any technique has an error rate […]. For years no research was done on error rates, perhaps because of the fear that such research would uncover how unreliable some forensic techniques truly are

However, there is still no independent federal agency that can regulate forensics and ensure that all methods and experts used are sufficiently reliable. There are still no national standards for each of the forensic disciplines. There is voluntary accreditation in the U.S. and in other countries, but it focuses almost exclusively on standards and procedures, not on the underlying quality of the forensic work. The National Academy Report called for such an independent entity to regulate forensics and emphasized that this was the most crucial recommendation in its entire report. Such regulation does exist in medicine for clinical laboratories in the U.S., due to the concern that lab mistakes could result, for example, in erroneous cancer screenings. Nothing like that exists for the forensic evidence that we use to convict hundreds of thousands of people.


In 2016, Michael J. Saks and Barbara A. Spellman published a book on the psychological foundations of evidence law. They scrutinized, through the lens of cognitive psychology, the rationality (or non-rationality) of many procedural rules and practices. In your opinion, could a better understanding of the cognitive implications of assessing evidence improve the use of science in court?

Most forensic analysts are hardworking, well-intentioned, careful people. But all professionals are vulnerable to cognitive biases. Some mental shortcuts make our work more efficient, but others can lead to terrible mistakes. Forensic analysts do not typically work in a system that is suited to science. Like cops in lab coats, most work for police or prosecutors or both, and share the same culture: the goal is to get the bad guys. Police may tell them what to test and not test, or that the suspect already confessed, or that he had a long prior record. Experts use procedures that tend to push them towards a match. They face cognitive biases, as all people do when we reach judgments, and all the more so when we are given vague information to rely on. There are remarkable cognitive bias studies showing how easily top experts can alter and skew their results, whether it is in fingerprint or DNA work, based on extraneous information or pressure from the prosecution or defense. Dr. Itiel Dror has co-authored a number of these studies. There are also terrible real-world examples of scientists tailoring their results to help prosecutors. In our criminal courtrooms, the lawyers take sides, but we need to keep scientists independent of influence.

There are remarkable cognitive bias studies showing how easily top experts can alter and skew their results, whether it is in fingerprint or DNA work, based on extraneous information or pressure from the prosecution or defense

There is a simple way to ensure rigorous testing to detect errors if and when they happen, and that is proficiency testing. The Houston Forensic Science Center is conducting such tests in about five percent of all of its cases. The examiner does not know that those cases are in fact tests, but the results are later checked by quality staff to measure accuracy. The airport security authority in the U.S. routinely tests airport screeners with fake bombs to be sure they are attentive and know what to look for, since the stakes are extremely high. Similarly, we need rigorous proficiency testing in forensics: the stakes of mistakes are too high.
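As an illustration of how such blind proficiency testing yields a measurable accuracy figure, here is a small Python simulation, loosely modeled on the five-percent scheme described above. The test rate mirrors that description, but the assumed examiner accuracy is an invented number for the simulation, not an actual figure from any lab.

```python
# An illustrative simulation of blind proficiency testing: known-answer test
# cases are slipped into the normal casework stream, and only quality staff
# know which cases they are.
import random

random.seed(42)

TEST_RATE = 0.05           # fraction of casework that is secretly a blind test
ASSUMED_ACCURACY = 0.97    # hypothetical per-case examiner accuracy

def process_case(is_blind_test: bool) -> dict:
    """Simulate one case; the examiner cannot tell blind tests from real work."""
    return {"blind_test": is_blind_test,
            "correct": random.random() < ASSUMED_ACCURACY}

# A year of simulated casework with blind tests mixed in.
cases = [process_case(random.random() < TEST_RATE) for _ in range(2000)]

# Quality staff score only the blind tests, where ground truth is known.
tests = [c for c in cases if c["blind_test"]]
accuracy = sum(c["correct"] for c in tests) / len(tests)
print(f"Blind tests inserted: {len(tests)} of {len(cases)} cases")
print(f"Measured accuracy on blind tests: {accuracy:.1%}")
```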


We are in a time of improvement in many forensic sciences. DNA testing offers more powerful analytical techniques. Big data has come to forensics, with expanding forensic databases and machine learning approaches. The risk is that there will be new types of CSI effects if courtroom actors give credit to new, untested technologies that have not been subjected to rigorous empirical testing. What is the path forward?

As new forensic technologies have been developed, demand for lab services and backlogs in testing have only increased. Crime labs now process millions of requests every year, as more and more criminal cases depend on forensic testing. The FBI has led national efforts to expand increasingly massive forensic databases, which are searched to generate new leads. People do not need to have been arrested for or convicted of a crime to be included in a database search; for example, facial recognition databases contain millions of faces from airport footage, social media, and driver’s licenses. These databases create new leads, but also new risks. For example, Oregon lawyer Brandon Mayfield had his fingerprints erroneously matched by three FBI examiners to evidence from the scene of the Madrid train bombing. He had never even been to Spain, but his prints were linked through a large database.
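The Mayfield example reflects a simple statistical point: when millions of records are searched, even a tiny per-comparison false-positive rate produces a substantial chance of at least one coincidental hit. A back-of-the-envelope Python sketch, using illustrative rates and database sizes rather than measured values for any real system:

```python
# A back-of-the-envelope sketch of the database-search risk: with millions of
# records, even a tiny per-comparison false-positive rate yields a substantial
# chance of at least one coincidental hit. Rates and sizes are illustrative
# assumptions.

def p_at_least_one_false_hit(fp_rate: float, database_size: int) -> float:
    """Probability of one or more false matches in a full database search,
    assuming independent comparisons."""
    return 1 - (1 - fp_rate) ** database_size

for size in (10_000, 1_000_000, 50_000_000):
    p = p_at_least_one_false_hit(fp_rate=1e-6, database_size=size)
    print(f"Database of {size:>10,} records: P(>=1 false hit) = {p:.1%}")
```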

Law enforcement increasingly uses new databases and new computer programs to search through them and generate links, and purchases new technology from companies marketing rapid DNA machines, facial recognition algorithms, and other programs, many of which have unknown reliability. More and more matching is happening, but we still do not know how reliable it all is. Judges must ensure that this information is carefully tested and valid before it is used in court.

As technology creates more challenges for forensics, more scientists have taken up the challenge that the National Academy of Sciences Report posed to the world: to define, using statistics, what a “match” really means. Hopefully, new efforts to use blind proficiency testing and quantitative methods will continue to inform practices in crime labs and the courts.

_____________________________________


[1] For further details about the book, see B.L. Garrett, The End of the Rope for the American Death Penalty, in this journal, 17 April 2019.

[2] National Academy of Sciences, Strengthening Forensic Science in the United States: A Path Forward, 2009, p. 108.

[3] Idem, p. 5.

[4] Idem, p. 7.

[5] «Where there are not adequate empirical studies and/or statistical models to provide meaningful information about the accuracy of a forensic feature-comparison method, DOJ attorneys and examiners should not offer testimony based on the method» (Executive Office of the President, President’s Council of Advisors on Science and Technology, Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods, 2016, p. 19).
