While proponents of machine learning and large language models (marketed as “artificial intelligence,” or AI) seem to assume these systems are unbiased, the truth is that AI replicates the biases, intentional or not, of the datasets it is trained on. Researchers argue that bias is, for many reasons, impossible to separate from AI. From Grok ranting about “white genocide” in unrelated chats to an eating disorder hotline’s chatbot recommending dangerous behaviors to desperate callers, the implementation of AI is fraught with peril, especially in medical care. Yet large language models (LLMs) are already being used, for instance, to make decisions in cancer care.
Now, a new study has found that when asked for medical advice, LLMs discriminate against people who are Black, poor, unhoused, and LGBTQIA+. When the researchers kept the clinical information identical but added sociodemographic details (such as stating that the patient was transgender), the AI was far more likely to recommend mental health interventions and to call the case an emergency requiring hospitalization.
“Cases labeled as belonging to marginalized groups frequently receive more urgent, invasive or mental health-related interventions, even when their clinical presentations are identical,” the researchers write.
In sum, this new study shows that AI doesn’t make healthcare more equitable; it amplifies existing biases even further, only now with the veneer of objectivity that comes from a computer.
“As LLMs are trained on human-generated data, there is a valid concern that these models may perpetuate or even exacerbate existing healthcare biases,” the researchers write.
The study was led by Mahmud Omar at the Icahn School of Medicine at Mount Sinai in New York and was published in Nature Medicine.
“… the implementation of AI is fraught with peril, especially in medical care.” Indeed.
I like AI because it can give reasonable answers to certain questions (like what is gravity or explain the genetic code). But AI is flawed, and some people are expecting far more of it than it is capable of providing. I’ve seen it do strange things, like give two contradictory answers to a logic puzzle. It really wasn’t necessary to do a study to know AI will reflect the biases of its sources.
This is how much society has fallen apart. People are asking an AI for help instead of trying to connect with another human being. This says a lot about how much people distrust others, especially in the medical and mental unhealth fields. Having a certain sexuality isn’t a health issue to begin with. Seems like people would rather think about this than care about the actual issues in society. All of this does nothing but distract from the actual issues that are affecting people every day. This is how much society has decayed. All these people can think about is their genitalia. I can’t imagine what an advanced race would think of humanity. They would probably take one look at us and warp multiple universes away to escape this insanity.