
The Field of Forensic Firearms Examination Is Flawed


In 2003 Donald Kennedy, then editor-in-chief of the journal Science, wrote an editorial entitled “Forensic Science: Oxymoron?” His answer, in effect, was “yes.” Unfortunately, the answer remains the same today. Contrary to its popular reputation, firearms identification is built mostly on smoke and mirrors.

Firearms examiners suffer from what might be called “Sherlock Holmes syndrome.” They claim that they can “match” a cartridge case or a bullet to a specific gun and thereby solve the case. Science is not on their side, however. There are few studies of firearms identification, and those that exist show that examiners cannot reliably determine whether bullets or cartridge cases were fired by a particular gun. Firearms identification, like all purportedly scientific evidence, must conform to consistent, evidence-based standards. Fundamental justice requires no less. In the absence of such standards, the likelihood of convicting the innocent, and thus letting the guilty go free, is too high. Perhaps this awareness is why courts have slowly begun to pay attention and to limit firearms testimony.

In court, firearms examiners present themselves as experts. Indeed, they have expertise: the practitioner’s expertise of applying forensic methods, much as a nurse or physician has the practitioner’s expertise of administering medical interventions such as drugs or vaccines. But there is a key difference between this form of expertise and that of a research scientist, who is trained in experimental design, statistics and the scientific method, and who manipulates inputs and measures outputs to confirm that methods are valid. Both forms of expertise have value, but for different purposes. If you need a COVID vaccine, a nurse has the right kind of expertise. Conversely, if you want to know whether the vaccine is effective, you don’t ask a nurse; you ask the research scientists who understand how it was created and tested.

Unfortunately, courts have rarely heard testimony from classically trained research scientists who could scrutinize the claims of firearms examiners and explain the basic principles and methods of science. Only research scientists have the wherewithal to counter the claims of practitioner experts. We need counter-experts. Such experts are now increasingly appearing in courts across the country, and we are proud to count ourselves among them.

Skepticism about firearms identification is not new. A 2009 report by the National Research Council (NRC) criticized the field for lacking “a well-defined process.” The guidelines of the Association of Firearm and Tool Mark Examiners (AFTE) allow examiners to report a match between a bullet or cartridge case and a specific firearm “when the unique surface contours of two toolmarks are in ‘sufficient agreement.’” According to the guidelines, sufficient agreement is the condition in which the comparison “exceeds the best agreement demonstrated between tool marks known to have been made by different tools and is consistent with the agreement demonstrated by tool marks known to have been made by the same tool.” In other words, the criterion for a decision that can determine the course of a life rests not on quantitative standards but on the examiner’s subjective judgment.

A 2016 report by the President’s Council of Advisors on Science and Technology (PCAST) echoed the NRC’s conclusion that the firearms-identification process is “circular,” and it described the kinds of empirical studies needed to validate firearms identification. At the time, only one appropriately designed study had been completed, conducted by the Department of Energy’s Ames Laboratory and known colloquially as “Ames I.” PCAST concluded that more than one appropriately designed study was needed to validate the field of firearms examination and called for further research.

The NRC and PCAST reports came under fire from firearms examiners. Although the reports did not themselves sway court decisions, they prompted additional tests of the accuracy of firearms identification. These studies tend to report surprisingly low error rates, about 1 percent or less, which encourages examiners to testify that their methodology is all but infallible. But the way these studies arrive at their error rates is questionable, and without counter-experts to explain why the studies are flawed, courts and juries can be, and have been, persuaded to accept far-fetched claims.

In casework, firearms examiners typically reach one of three categorical conclusions: the bullets come from the same source (“identification”); they come from different sources (“elimination”); or the comparison is “inconclusive,” used when the examiner feels the quality of the samples is insufficient for an identification or elimination. Although an “I don’t know” category makes sense in casework, the way it is handled in validation studies, and presented in court, is flawed and deeply misleading.

The problem arises in how a study classifies an “inconclusive” response. Unlike in casework, researchers studying firearms identification in the laboratory create the bullets and cartridge cases used in their studies, so they know whether each comparison involves items fired by the same gun or by different guns. They know the “ground truth.” As in a true/false exam, there are only two correct answers in these studies; “I don’t know,” or “inconclusive,” is not one of them.

Existing studies, however, score inconclusive answers as correct (that is, as “not errors”) without any explanation or justification. These inconclusive responses have a huge impact on the reported error rates. For example, in the Ames I study, researchers reported a false-positive error rate of 1 percent. But here is how they arrived at it: of the 2,178 comparisons they made between nonmatching cartridge cases, 65 percent were correctly called “eliminations.” The remaining 34 percent of comparisons were called “inconclusive,” but instead of keeping these as their own category, the researchers lumped them in with the eliminations, leaving 1 percent as what they called their false-positive rate. If those inconclusive answers are instead counted as errors, the error rate becomes 35 percent. Seven years later, the Ames Laboratory conducted another study, known as Ames II, using the same methodology, and again reported false-positive error rates for bullet and cartridge-case comparisons of less than 1 percent. When inconclusive answers are scored as incorrect rather than correct, however, the overall error rate rockets to 52 percent.
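To make the arithmetic concrete, here is a minimal sketch in Python (not part of the original article) that recomputes the Ames I error rate under both scoring conventions. The comparison counts are re-derived from the percentages quoted above and are illustrative, not the study’s raw data.

```python
# Recompute the Ames I false-positive figures under two scoring conventions.
# Counts are re-derived from the article's percentages (2,178 nonmatching
# comparisons; ~65% eliminations, ~34% inconclusives, ~1% false positives)
# and are illustrative approximations, not the study's raw data.

def error_rate(false_positives: int, inconclusives: int, total: int,
               inconclusive_is_error: bool) -> float:
    """Error rate on ground-truth-nonmatching comparisons; the scoring
    convention decides whether 'inconclusive' counts as an error."""
    errors = false_positives + (inconclusives if inconclusive_is_error else 0)
    return errors / total

TOTAL = 2_178
INCONCLUSIVES = round(0.34 * TOTAL)    # ~741 comparisons called "inconclusive"
FALSE_POSITIVES = round(0.01 * TOTAL)  # ~22 comparisons wrongly called matches

# The study's convention: inconclusives scored as correct -> ~1 percent.
print(f"reported:  {error_rate(FALSE_POSITIVES, INCONCLUSIVES, TOTAL, False):.1%}")

# Alternative convention: inconclusives scored as errors -> ~35 percent.
print(f"recounted: {error_rate(FALSE_POSITIVES, INCONCLUSIVES, TOTAL, True):.1%}")
```

The point of the sketch is that nothing about the examiners’ answers changes between the two lines of output; only the bookkeeping convention for “inconclusive” does, and it moves the error rate from about 1 percent to about 35 percent.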

The most telling findings came from the later phases of the Ames II study, in which researchers sent the same items back to the same examiner for re-evaluation, and then to different examiners, to see whether the results could be repeated by the same examiner and reproduced by others. The findings were shocking: the same examiner, looking at the same bullets a second time, reached the same conclusion only two thirds of the time. Different examiners, looking at the same bullets, reached the same conclusion less than a third of the time. So much for getting a second opinion! Yet firearms examiners continue to appear in court claiming that firearms-identification studies show extremely low error rates.

The English biologist Thomas Huxley said that “science is nothing but trained and organized common sense.” In most contexts, judges exhibit an uncommon degree of common sense. But when it comes to translating science for use in the courtroom, judges need the help of scientists. And that help should come not only in the form of scientific reports and published papers. Scientists are needed in the courtroom itself, and one way to get them there is as counter-experts.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

