A group of researchers recently published a commentary in the Journal of the American Medical Association on one of the ways that implicit bias about persons diagnosed with ‘mental illness’ creeps into the seemingly neutral space of patient electronic health records. They focus on clients who tend to be high utilizers of services at emergency departments and psychiatric crisis centers and who “often have financial problems and present with chronic or untreated comorbid psychiatric and substance use disorders. These patients are often well known to clinical staff and are sometimes colloquially labeled ‘frequent flyers.'”
“A pejorative branding, ‘frequent flyers’ are often assumed to be problem patients. In psychiatric settings, these patients are sometimes said to be ‘borderlines,’ ‘drug seekers,’ ‘malingerers,’ or ‘treatment resistant.’”

Implicit bias refers to errors in judgment and/or behavior that result from attitudes and stereotypes operating below the level of direct awareness, often without particular intent. Conceptually, it is the opposite of explicit bias, which refers to attitudes and beliefs held at a conscious level, of which we are actively aware and which we often experience in response to a perceived threat.
The concept of implicit bias has recently become an important part of the discussion of race relations in the United States. In the present commentary, the authors note that bias is moving beyond constructions in spoken language, which are enough to inform implicit or even explicit bias on their own: symbols emblematic of bias and stereotypes are now entering the space of the electronic health record system. One system they cite as an example includes an airplane icon that administrators can use to flag for clinicians that a person is a “high utilizer” of services.
They write:
“This iconography is ethically and clinically inappropriate for 2 interdependent reasons. First, the icon reinforces and encourages the use of disrespectful and stigmatizing terminology. Second, the icon may frame the initial clinical interaction in a way that inhibits good diagnostic judgment, potentially placing the patient at increased risk of a poor outcome.”
The impact of implicit bias on clinical management has been found in other studies. Stull and her colleagues, for instance, found that implicit but not explicit bias led to the prescription of interventions that were more controlling of clients. They found that the prescription of such interventions is linked to stronger implicit endorsement of stigmatizing views of people with mental illness as being “bad” or “helpless.”
Implicit bias has also been found in the medical literature related to racial bias in treatment recommendations for black versus white patients. For example, unconscious bias was found by Green and his colleagues to contribute to racial disparities in the “use of medical procedures such as thrombolysis for myocardial infarction.” Moreover, Kopera et al. found that even professional, long-term contact with people impacted by ‘mental illness’ does not make providers immune to negative implicit attitudes.
Similarly, in the mental health literature, it has been found that black and Latino consumers of services are approximately three to four times more likely to receive a diagnosis of a psychotic disorder than their white counterparts. This trend is also found internationally, with immigrant consumers diagnosed more frequently with psychotic disorders than consumers from the majority racial background. In terms of gender bias, Peters et al. found that women are almost twice as likely as men to receive a benzodiazepine prescription at discharge from a psychiatric inpatient unit.
Moreover, racial bias in prescription rates has also been observed: black clients were less likely, and Asian clients slightly more likely, than white clients to be prescribed a benzodiazepine. These studies did not measure implicit bias directly, but their findings urge us to consider the impact of unconsciously held beliefs and attitudes on decisions that should be based on clinical need rather than on stereotypes about groups.
The authors of the present commentary highlight the stigmatizing clinical consequences of iconography that reinforces stereotypes of certain groups of clients. One such consequence is “diagnostic overshadowing,” a phenomenon in which physical symptoms reported by persons with mental illness are misinterpreted as part of their mental health concerns and are undertreated as a result. This is disturbing given that people with mental health issues are known to have significantly higher medical comorbidity, as well as earlier deaths, compared to the general population.
The authors write:
“These patients are less likely to receive appropriate medical care than patients without a mental health condition—their psychiatric conditions overshadow their other conditions, potentially biasing the clinician’s judgment about diagnosis and treatment such that the clinician may misattribute physical symptoms to mental health problems.”
The authors remind us that electronic medical records, big data, and social media offer exciting potential for healthcare practice and patient empowerment. However, they also carry the potential to reflect biased social and political values. To mitigate these effects, “electronic medical record systems and behavioral health care applications should be built and tested in collaboration with patients, consumers, clinicians, social scientists, and ethicists who are sensitive to the broader ramifications of iconography and language,” they conclude.
****
Joy M, Clement T, Sisti D. The Ethics of Behavioral Health Information Technology: Frequent Flyer Icons and Implicit Bias. JAMA. 2016;316(15):1539-1540. doi:10.1001/jama.2016.12534 (Abstract)