Racial disparities within the criminal legal system are widely known and critiqued. Calls for the use of more “objective” approaches in the courtroom have been made as a way to attempt to address these inequalities. However, in a new article published in Policy Insights from the Behavioral and Brain Sciences, the authors caution that the blind adoption of neuroscience technology in the criminal legal system could worsen systemic racism.
Looking more closely at these technologies reveals subtle and inherent racial biases that could be overlooked in a criminal legal setting. The authors conclude by urging that neuroscience data be held to a higher standard than other forms of evidence to prevent further racial discrimination against BIPOC individuals within the criminal legal system.
The authors, led by Emily R. Perkins of the Department of Psychology at the University of Pennsylvania, write:
“One well-intentioned way to reduce bias is to introduce presumed ‘objective’ approaches and technologies that are more robust to the potential of individual decision-makers’ racial biases. In recent years, neuroscience evidence has been gaining traction in the courtroom as a means to increase the ‘objectivity’ of psychological explanations for behavior. However, compelling evidence raises doubts that neuroscience will ultimately reduce racial biases in the criminal legal system as it currently exists. It may, in some cases, even exacerbate injustice as it can obscure the biases inherent in the measurement and reporting of neuroscience data.”
Black individuals are disproportionately targeted by the criminal legal system. In 2019, Black Americans comprised 14% of the United States population but represented 33% of the prison population. These disparities are attributed to a number of causes, including racial segregation; increased surveillance and profiling of Black communities by police; arrests, detainments, and charges of Black youth at undue rates; and discriminatory sentencing policies, among other factors.
The introduction of “objective” evidence and technologies, such as neuroscience evidence, has been presented as a way to reduce disparities within the criminal legal system. Neuroscience evidence is implemented in criminal legal settings in two main ways.
Neuroscientific research literature is used in the courtroom to establish norms for a given population. For example, in the 2005 Supreme Court case Roper v. Simmons, such research supported the determination that, because minors are immature and vulnerable to peer influence, executing them is unconstitutional.
Neuroscience evidence collected from a person involved in a particular criminal case, such as brain imaging, is also used in the courtroom, especially during the sentencing stage, to argue that brain injury or dysfunction is an extenuating factor. By the early 2010s, such evidence was used in 25% of death penalty trials.
Yet, while supporters of the use of brain data in criminal law argue that it is more objective than other psychological evidence, such as testimony from mental health professionals or the results of risk assessments, others have suggested that the assumption of objectivity is not only incorrect but potentially dangerous.
Science as a whole is assumed to be inherently “objective.” Some have disputed this belief, contending that the majority culture largely controls who is permitted to be a “knower” and what constitutes scientific knowledge and methods, leaving those in marginalized groups on the sidelines. Epistemic injustice, or injustice related to the production and distribution of knowledge, is a major issue within the medical and mental health fields and has been widely critiqued.
Additionally, the use of seemingly objective non-neuroscientific forensic assessment tools has been shown to perpetuate racial biases. For example, tools that focus mainly on criminal history have been deemed problematic, as people of color are more susceptible to over-policing. As such, they are likely to have more police encounters, which may result in lengthier criminal histories than those of white individuals. When we operate under the assumption that such tools are fundamentally “objective,” we ignore contextual factors that may be influencing their outcomes.
The authors draw attention to how similar problems may arise with neuroscience evidence:
“For example, research on the ‘violent brain’ provides a foundational framework that forensic psychologists rely upon during expert testimony when providing general information about brain function and crime (i.e., not referring to the defendant’s own brain scans). This information is gleaned from neuroscience studies with samples who were patrolled, arrested, and convicted within a racially biased criminal legal system, rather than a representative group of people who engage in violent behavior. The bias in this area of research is obscured by the propensity to tout the data as ‘objective’ because it comes from neuroscience technology.”
Complicating this issue is the amount of influence neuroscience evidence has on the public. Explanations of psychological phenomena that are grounded in neuroscience tend to be perceived as more satisfying by the general public. In mock-trial settings, neuroscience evidence has been found to influence the mock jury’s verdict and sentencing. Placing too much trust in neuroscience data without critically examining it could perpetuate racial biases.
Moreover, neuroscience is a complex field that requires a specialized level of knowledge and understanding, making it difficult for laypersons, including legal decision-makers, to critically appraise. The general public’s lack of understanding regarding how data is collected, processed, and interpreted obscures any possible bias.
Further, current methods used in neuroscience have been largely developed by and for white people, and have been found to be less valid and reliable with individuals who have characteristics associated with social constructions of Blackness, such as darker skin or coarse/curly hair. Even the tools themselves have been designed in such a way that makes assumptions about the characteristics of participants.
As an example, electroencephalography (EEG) is a common neuroscience tool that measures brain activity through electrodes attached to the scalp. Anything that interferes with contact between an electrode and the scalp introduces electrical noise, which degrades data quality. Part of the standard protocol involves pushing hair out of the way to ensure a secure connection. However, this is not always possible, particularly for individuals with curly, dense, or tightly coiled hair, which is common among people with African and/or Caribbean ancestry, or with hairstyles popular in the Black community, such as weaves or dreadlocks, which cannot easily be loosened and moved aside.
Similar issues have been noted with other neuroscience tools, such as functional near-infrared spectroscopy (fNIRS), which uses near-infrared light to identify changes in brain activity through the skin and has been shown to have lower reliability and validity for individuals with higher concentrations of melanin or darker skin.
The design flaws and fundamental assumptions in neuroscience tools, which disproportionately affect people of color, result in the underrepresentation of Black individuals in neuroscience research. This, in turn, leads to “healthy control” comparison samples that are predominantly white.
Perkins and colleagues write:
“. . . a recent review demonstrated that Black individuals are systemically underrepresented in clinical neuroscience research samples, meaning that much of what is thought to be known about the brain may be skewed.”
Legal interpretations of “normal” brain function are skewed by these comparison samples. Assuming that white-skewed data will generalize to other populations, without further testing to determine whether this is actually the case, is both inappropriate and unethical.
Given the ethical concerns associated with accepting neuroscience evidence uncritically, legal decision-makers must proceed with extreme caution when using it in criminal legal settings. The subtle biases and limitations of neuroscience data should be made clear to those in positions of legal power, and this evidence should be held to a higher standard than other forms of evidence.
Despite these ethical concerns, the authors argue that neuroscience evidence can be implemented ethically if it is used appropriately and supported by research. Done right, its use in criminal legal settings could help reduce racial biases in the system and contribute to personalized rehabilitation that better fits individuals’ needs than standardized approaches do.
The authors call for further exploration of the connections between the brain and behavior across populations, and of how to most effectively communicate those nuances to legal decision-makers. They also call for partnerships among neuroscientists, sociologists, other experts with knowledge of race and the brain, and individuals and communities affected by systemic racism, to ensure that sociocultural context is taken into consideration and that the voices of those most affected are represented in research, policymaking, and decisions about brain-data privacy.
Systemic racism is not only an issue in the criminal legal system but in the fields of psychology and psychiatry. The American Psychological Association recently apologized for its contributions to systemic racism within the field. Despite this apology, mainstream psychology has been critiqued for being slow to adopt the understanding that police brutality is a form of systemic racism.
Researchers have also critiqued the APA for failing to make meaningful progress on its pledge to dismantle systemic racism in the field since its apology. Clearly, a long road lies ahead in making genuine changes across disciplines to guarantee the dignity, respect, and equitable treatment of individuals of all racial and ethnic backgrounds, but Perkins and colleagues offer valuable recommendations for how to begin preventing further injustice.
Perkins, E. R., Bradford, D. E., Verona, E., Hamilton, R. H., & Joyner, K. J. (2023). The intersection of racism and neuroscience technology: A cautionary tale for the criminal legal system. Policy Insights from the Behavioral and Brain Sciences, 10(2), 279–286. https://doi.org/10.1177/23727322231196299