AI Medical Advice Biased Against Marginalized People

LLMs recommended mental health interventions “approximately six to seven times more often than clinically indicated” for LGBTQIA+ people, according to researchers.

While proponents of machine learning and large language models (termed “artificial intelligence” or AI) seem to assume they are unbiased, the truth is that AI replicates the biases, whether intentional or unintentional, of the datasets it is trained on. Researchers argue that bias is impossible to separate from AI for many reasons. From Grok ranting about “white genocide” in unrelated chats to an eating disorder hotline chatbot recommending dangerous behaviors to desperate callers, the implementation of AI is fraught with peril, especially in medical care. Yet large language models (LLMs) are already being used, for instance, to make decisions in cancer care.

Now, a new study has found that when asked for medical advice, LLMs discriminate against people who are Black, poor, unhoused, and LGBTQIA+. When the researchers kept the clinical information the same but added other information (such as stating that the patient was transgender), the AI was far more likely to recommend mental health interventions and to deem the case an emergency requiring hospitalization.

“Cases labeled as belonging to marginalized groups frequently receive more urgent, invasive or mental health-related interventions, even when their clinical presentations are identical,” the researchers write.

In sum, this new study shows that AI doesn’t make healthcare more equitable; it amplifies existing biases even further—only now they carry the veneer of objectivity that comes from a computer.

“As LLMs are trained on human-generated data, there is a valid concern that these models may perpetuate or even exacerbate existing healthcare biases,” the researchers write.

The study was led by Mahmud Omar at the Icahn School of Medicine at Mount Sinai in New York and was published in Nature Medicine.


Peter Simons
Peter Simons was an academic researcher in psychology. Now, as a science writer, he tries to provide the layperson with a view into the sometimes inscrutable world of psychiatric research. As an editor for blogs and personal stories at Mad in America, he prizes the accounts of those with lived experience of the psychiatric system and shares alternatives to the biomedical model.
