How AI Risks Deepening Health Inequality

AI is rapidly transforming healthcare, but a new study warns that without careful oversight, it could reinforce existing disparities in access, bias, and digital poverty.


Artificial intelligence (AI) and large language models (LLMs) are increasingly shaping the future of healthcare. Advocates claim that these technologies have the potential to enhance accessibility, streamline medical decision-making, and address social determinants of health (SDOH).

However, a recent study published in Cell Reports Medicine highlights the risks that come with integrating AI into digital health systems. Researchers Ong, Seng, Law, Low, Kwa, Giacomini, and Ting warn that without intentional efforts to mitigate bias and increase digital accessibility, AI could exacerbate existing healthcare inequalities rather than resolve them.

“Despite rapid technological advancements, a concerted effort to address barriers to digital health… is still lacking. Low digital literacy, unequal access to digital health, and biased AI algorithms have raised mounting concerns over health equity. As AI applications and LLM models become pervasive, we seek to understand the potential pitfalls of AI in driving health inequalities and identify key opportunities for AI in SDOH from a global perspective.”

This study highlights the ways in which emerging technologies, often framed as neutral or universally beneficial, can, in fact, deepen existing power imbalances within healthcare. By exposing how AI and large language models risk reinforcing structural inequities—through biased data, digital exclusion, and the privatization of health information—it raises urgent questions about who controls the tools that shape mental and physical well-being. As the push toward AI-driven health solutions accelerates, this research underscores the need for critical scrutiny of how these technologies are designed, whose interests they serve, and whether they truly empower individuals or merely automate and obscure existing systems of control.


What Are Social Determinants of Health?

The World Health Organization (WHO) defines social determinants of health as non-medical factors that influence health outcomes. These include environmental, economic, and social conditions such as access to clean water, stable housing, education, and employment. A growing body of research has shown that these factors play a greater role in health outcomes than medical care itself.

Common examples of SDOH include lead in water lines, which raises rates of lead-related conditions in children, and air pollution, which increases the share of children with asthma and related conditions. These examples, particularly those involving children, highlight conditions outside a person’s control, conditions that often outweigh an individual’s ability to overcome these determinants through “good choices.” Another example, one that intersects with the digital sphere, is access to medical practitioners in underprivileged areas.

SDOH initiatives have led to policies aimed at reducing health disparities, such as the “Healthy People 2030” program in the U.S., which seeks to eliminate barriers to care. Digital health technologies, including telemedicine and AI-driven diagnostic tools, have been introduced as potential solutions to these disparities. However, the very people these innovations aim to help—those in low-income or rural areas, for example—often lack the digital access needed to benefit from them.

The Digital Divide in Healthcare

The expansion of digital health tools has highlighted the emerging concept of digital determinants of health (DDOH)—the ways in which digital access and literacy impact health outcomes. While digital health tools can improve access to care, they also introduce new barriers, particularly for those without reliable internet access, smartphones, or basic digital literacy.

“Digital health tools helped to address major global health challenges through improved access to healthcare services, strengthened health promotion and disease prevention, and enhanced care experiences for professionals and patients.”

Yet, despite their potential, these tools remain out of reach for many. A report from the UK found that 11 million people lack the skills to fully participate in the digital economy, including digital healthcare services. The elderly, disabled individuals, and those in lower-income brackets are disproportionately affected. Telemedicine, for example, is often seen as a solution for those who lack access to in-person medical care, but it requires high-speed internet, digital literacy, and, in many cases, the ability to navigate complex healthcare portals—all of which can serve as barriers rather than solutions.

Despite these inherent challenges, digital platforms and tools continue to be deployed to address the gaps identified in SDOH and DDOH research. Many of these tools are now built with AI and LLMs such as ChatGPT, and they are being integrated into electronic health records (EHRs) to facilitate the data collection those systems need to operate effectively. This combination of technologies presents both opportunities and challenges.

Opportunities and Barriers of AI in Social Determinants of Health

Ong et al. highlight a first SDOH factor tied to the use of AI and LLMs: high-income countries (HICs) have an inherent advantage over low- and middle-income countries (LMICs). HICs are the primary source of the technology, giving them a competitive edge on a global scale.

Additionally, Ong et al. identify a significant challenge in the development of health-based AI: much of the data needed for these models will come from unstructured, uncoded qualitative sources, such as physician and practitioner notes. This understanding has spurred the creation of models capable of analyzing these notes to aggregate, synthesize, and utilize qualitative data effectively.

“[D]eep learning (DL) algorithms such as CNNs (convolutional neural networks) and BERT (bidirectional encoder representations from Transformers) have been applied to SDOH annotation from clinical unstructured [data].”

BERT models draw on the context surrounding words in a text to identify patterns, which they can then apply to a variety of designated tasks. In this instance, they are used to pull SDOH information out of the unstructured notes that physicians and other healthcare professionals routinely write.
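To make this concrete, below is a minimal sketch of what SDOH annotation from free-text notes can look like in practice. It is not the pipeline Ong et al. describe: it substitutes an off-the-shelf zero-shot classifier for their task-specific CNN and BERT models, and the SDOH labels and the clinical note are invented for illustration.

# Minimal sketch of SDOH annotation from an unstructured clinical note.
# NOT the method of Ong et al.: a generic zero-shot classifier stands in for
# a task-specific fine-tuned BERT/CNN model; labels and the note are invented.
from transformers import pipeline

SDOH_LABELS = ["housing instability", "food insecurity",
               "transportation barriers", "social isolation", "no SDOH concern"]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

note = ("Patient reports skipping insulin doses because she cannot afford "
        "groceries this month and has no reliable ride to the pharmacy.")

result = classifier(note, candidate_labels=SDOH_LABELS, multi_label=True)

# Keep only labels scored above an (arbitrary) threshold.
flags = [label for label, score in zip(result["labels"], result["scores"])
         if score > 0.5]
print(flags)  # expected to surface food- and transportation-related concerns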

AI and large language models (LLMs) can be particularly beneficial in addressing issues related to social determinants of health (SDOH), especially in the context of mental health and chronic health conditions. These AI systems can analyze individual data alongside predictive patterns to identify potential problems, such as those associated with diabetes. Furthermore, these models can provide an overview of the prevalence of mental health conditions and their related issues on a larger scale.
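As a rough illustration of that kind of prediction, the toy sketch below combines clinical measurements with SDOH features in a single risk model. The feature names, the synthetic data, and the simple logistic regression are all assumptions made for illustration, not the authors' method.

# Toy sketch: a risk model over clinical plus SDOH features.
# All data here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(28, 5, n),    # BMI (clinical)
    rng.normal(50, 12, n),   # age (clinical)
    rng.integers(0, 2, n),   # food insecurity flag (SDOH)
    rng.integers(0, 2, n),   # broadband access flag (SDOH/DDOH)
])
# Synthetic outcome loosely tied to the features above.
logits = 0.08 * (X[:, 0] - 28) + 0.03 * (X[:, 1] - 50) + 0.8 * X[:, 2] - 0.4 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", round(model.score(X_test, y_test), 3))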

“[T]emporal evolution of SDOH factors, geospatial relationships, and their impact on health outcomes have also been poorly characterized by conventional statistical methods.”

AI and LLMs can predict the rise and shift of conditions related to SDOH better than current models can. However, those predictions rest on existing data models built from existing qualitative data, and that underlying data may itself present a problem.

The Problems in AI-Driven Healthcare

Efforts to integrate artificial intelligence (AI) into healthcare face significant challenges, with experts pointing to the lack of standardization in data collection, storage, and modeling as one of the most pressing concerns. In a recent study, researchers highlight how disparities in electronic health records (EHR) and other data systems create major barriers to interoperability, making it difficult to effectively analyze and apply AI-driven insights.

“Data are often not harmonized nor interoperable across sectors, making data integration a challenge and subsequent analysis/modeling difficult,” the authors explain. “The lack of standardized terminologies and definitions will also limit semantic interoperability, posing a challenge to the implementation of algorithms.”

Some initiatives are attempting to address this issue. In the United States, for example, the American Medical Association’s Integrated Health Model Initiative (IHMI) and the Gravity Project aim to develop standardized protocols for collecting and transferring SDOH data. Their goal is to integrate these standards into medical coding systems such as Current Procedural Terminology (CPT) codes, potentially streamlining the way AI processes healthcare information.
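To picture what such standardization involves, the sketch below converts yes/no SDOH screening answers into coded, machine-readable observations. The question names, the data structure, and the code values are placeholders, not the actual value sets or codes published by IHMI or the Gravity Project.

# Illustrative sketch of coding SDOH screening answers; all codes are
# PLACEHOLDERS, not real CPT, ICD-10, or LOINC values.
from dataclasses import dataclass

@dataclass
class CodedSDOHObservation:
    domain: str        # e.g., "food insecurity"
    code_system: str   # e.g., "ICD-10-CM", "LOINC", "SNOMED CT"
    code: str          # placeholder value for illustration
    positive: bool     # whether the response indicates an unmet need

# Hypothetical mapping from screening questions to coded observations.
SCREENING_MAP = {
    "worried_food_would_run_out": ("food insecurity", "ICD-10-CM", "Zxx.x"),
    "no_stable_housing":          ("housing instability", "ICD-10-CM", "Zxx.x"),
}

def code_screening(responses):
    """Convert yes/no screening answers into coded SDOH observations."""
    coded = []
    for question, answer in responses.items():
        if question in SCREENING_MAP:
            domain, system, code = SCREENING_MAP[question]
            coded.append(CodedSDOHObservation(domain, system, code, answer))
    return coded

print(code_screening({"worried_food_would_run_out": True, "no_stable_housing": False}))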

Yet even if data standardization improves, AI still faces major hurdles related to accessibility and infrastructure. Physical platforms, telecommunications networks, and digital literacy all influence how effectively AI can be deployed in different regions. Researchers caution that disparities in internet access, language barriers, and technological skills could further entrench inequalities rather than alleviate them.

“Variances in internet connectivity, linguistic compatibility, operational skills, and hardware prerequisites may challenge access to benefits conferred by AI models and LLMs,” the study notes. “On the level of physical infrastructure, LMICs [low- and middle-income countries] and remote areas may not possess reliable telecommunication networks, stable internet connections, or sufficient equipment.”

Beyond logistical challenges, two larger concerns loom over the integration of AI into healthcare: bias and privacy.

Bias in AI: Reinforcing Systemic Inequalities

AI models are only as unbiased as the data they are trained on, and in healthcare, that data often reflects existing racial and socioeconomic disparities. Researchers warn that without intentional efforts to mitigate bias, AI could exacerbate health inequities rather than reduce them.

“Implementing algorithms trained using large datasets has been shown to be biased against individuals of different races and socioeconomic status,” the authors explain. “For instance, chest X-ray classifiers trained using datasets dominated by White patients performed poorly in Medicare patients, and a risk prediction algorithm trained using large datasets failed to triage African Americans for necessary care. This will remain a pervasive problem since more than half of the datasets used for clinical AI development […].”

This raises a critical issue: If AI is designed to process SDOH data but does not account for the structural biases embedded within that data, it risks perpetuating the very disparities it aims to address. In essence, systemic racism and social prejudices are themselves determinants of health—failing to address them in AI development means codifying them into medical decision-making systems.
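One basic safeguard implied by this critique is to audit a model's performance separately for each demographic group rather than reporting a single aggregate figure. The sketch below does this with invented data; a real audit would use clinically meaningful metrics, validated group definitions, and far larger samples.

# Minimal subgroup audit: per-group sensitivity instead of one overall score.
# Predictions, outcomes, and group labels below are synthetic stand-ins.
import numpy as np
from sklearn.metrics import recall_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "A",
                   "B", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    # Sensitivity (recall): how often truly at-risk patients in this group
    # are actually flagged by the model.
    sens = recall_score(y_true[mask], y_pred[mask])
    print(f"Group {g}: sensitivity = {sens:.2f} (n = {mask.sum()})")

A gap between the groups' sensitivities, as in this toy example, is exactly the kind of disparity that a single aggregate accuracy figure can hide.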

The Privacy Problem: Who Controls the Data?

Another unresolved issue is data privacy. With AI systems relying on vast amounts of patient information, questions remain about who is responsible for ensuring that data is handled ethically and securely.

“In the realm of data and model responsibility, the party that remains liable for the outcomes of AI and LLMs remains unclear,” the researchers note. “It is important to note that absence of autonomy and sentience renders a lack of moral agency in AI.”

This ambiguity raises difficult questions. If an AI system makes an incorrect diagnosis or fails to recommend necessary treatment, who is accountable? The healthcare provider relying on the AI’s recommendation? The developers who created the algorithm? Or is there a need for a new regulatory framework to oversee AI-driven healthcare decisions?

With AI development in healthcare advancing rapidly, these concerns remain unresolved. The technology is evolving faster than the ethical and legal frameworks designed to regulate it, leaving fundamental safety measures “up in the air.” Without clear policies addressing bias and accountability, AI risks becoming yet another tool that reinforces disparities rather than eliminating them.

Future Directions: Can AI Be Used Responsibly in Healthcare?

The study suggests that AI could still be a powerful tool for addressing social determinants of health—if it is developed and implemented responsibly. The authors argue that AI-driven health solutions should be designed with the most vulnerable populations in mind, rather than being developed in high-income countries and imposed on lower-income ones.

“Achieving health equity remains a challenge, and addressing SDOH is progressively becoming a global priority. In a post-pandemic world, the widening digital and health divide has thrown caution to the use of AI and digital health technologies. Significant challenges and barriers to equitable AI implementation remain, especially with regard to overcoming infrastructure limitations and algorithmic bias.”

As access to digital tools becomes increasingly central to healthcare, digital literacy and internet connectivity have emerged as “super” social determinants of health (SDOH). These factors influence nearly every aspect of modern healthcare delivery, from telemedicine to digital health records. However, disparities in digital access risk deepening existing health inequities rather than alleviating them.

“A key barrier to digital literacy lies in the digital divide, which afflicts disadvantaged populations more significantly,” the study notes. “For example, approximately one-sixth of low-income household families in the USA do not have broadband internet access.”

This digital gap highlights two key challenges for the equitable implementation of artificial intelligence (AI) in healthcare.

First, technology has the potential to either close or exacerbate existing disparities. If AI-driven healthcare tools are designed without considering who has access to them, they may reinforce inequalities rather than serve as a bridge to better care. For instance, AI-powered telehealth services may benefit those with stable broadband access while leaving behind those in rural or low-income communities who lack reliable internet.

Second, the people most affected by these digital disparities are often excluded from the development and implementation of AI-driven healthcare solutions. The researchers stress that those who have historically lacked access to digital health tools must be involved in shaping AI models to ensure these technologies address their needs rather than widen health gaps. Without inclusive design, AI risks reinforcing the very inequalities that social determinants research aims to dismantle.

The issue extends beyond national borders. Low- and middle-income countries (LMICs) face even greater obstacles, as high-income countries (HICs) dominate AI development. If global health technology continues to be shaped primarily by wealthier nations, it may create an additional SDOH barrier at an international level, further disadvantaging populations already struggling with access to care.

The authors emphasize that overcoming these challenges will require proactive efforts, including infrastructure improvements, investments in digital literacy programs, and the development of unbiased AI models. Without such measures, AI’s potential to transform healthcare could become another force entrenching disparities rather than alleviating them. However, with global collaboration and a commitment to equitable design, AI could serve as a powerful tool in reducing health inequalities—rather than reinforcing them.

****

Ong, C. L., Seng, B. J. J., Law, J. Z. F., Low, L. L., Kwa, A. L. H., Giacomini, K. M., & Ting, D. S. W. (2024). Artificial intelligence, ChatGPT, and other large language models for social determinants of health: Current state and future directions. Cell Reports Medicine, 5(1). (Link)


Kevin Gallagher
Dr. Kevin Gallagher is currently an Adjunct Professor of Psychology at Point Park University in Pittsburgh, PA, focusing on Critical Psychology. Over the past decade, he has worked in many different community mental and physical health settings, including four years with the award-winning street medicine program, Operation Safety Net, and supervising the Substance Use Disorder Program at Pittsburgh Mercy. Prior to completing his Doctorate in Critical Psychology, he worked with Gateway Health Plan on Clinical Quality Program Development and Management. His academic focus is on rethinking mental health, substance use, and addiction from alternative and burgeoning perspectives, including feminist, critical race, critical posthumanist, post-structuralist, and other cutting-edge theories.

6 COMMENTS

  1. About 6 months ago, I was in my doctor’s exam room. She came in, smiled, and asked if she had my permission to use AI to “listen in” and take notes, then transcribe the notes into my record which would be a part of my permanent file. “That way,” she soothed, “I can spend more time with you and other patients instead of taking notes and typing them up myself, and I won’t be looking at a computer screen while I am here in the room with you.” I remembered a time when there was no computer in the exam room, not so long ago. I remembered when I actually undressed and put on a gown to be examined, which never happens these days. Never. Anything covered by my clothing is now unseen, anything which might give a hint to undiagnosed health conditions. Like unexplained bruising, a rash, skin cancers, localized swelling, etc. So I was already one down.

    I reluctantly agreed to let AI listen in and take notes. BTW, she automatically assumed that I, a 66 yr old disabled woman diagnosed many years ago with schizophrenia would know what AI was. I did, but she assumed.

    Later, when I accessed my health portal online from my home computer, I was horrified to see all the assumptions AI had made. I expected a straightforward, word for word court reporter-style compilation of notes. No.

    AI reworded what I had said in the exam room, which I believe was about ongoing insomnia and anxiety and a new concern about dizziness. I am very much a non-emotional, straight talker, very much to the point because I don’t want to waste my time or the doctor’s. AI concocted a narrative about my being a neurotic, hysterical woman who was overly concerned about nothing much and that this would all doubtless blow over by my next appointment. WTF?

    Immediately I messaged my doctor (her preferred method of contact) and pointed out that AI had misinterpreted and misogynized everything I had said. I asked if AI was being used by her to diagnose and/or suggest treatment options. And that I could tell AI was a long, long way from being patient-friendly and was perhaps not ready for primetime viewing or doctors’ offices. One of her minions replied that of course AI was not used for anything but note-taking and they would be happy to note in my file that I preferred it not be used in subsequent visits.

    Of course, then I had to wonder if my messaging to the doctor was also being monitored and answered by AI. Possibly. Who knows. And I am not a paranoid schizophrenic, btw. But I am becoming a paranoid patient about what is being put in my permanent record. AI was a menace in the room of my healthcare provider, and she seemed totally unconcerned. I also wondered if there was an extra charge to Medicare for the AI.


  2. Using perplexity A.I.
    https://www.perplexity.ai/
    because it has references (links) to where it gets its ideas,

    I asked, “What percentage of patients’ electronic health records are in error?”

    That A.I. is reporting 21%, 25%, 50%, and 15% with associated details.

    I have no idea how anyone feels about this, but it seems to mean that millions of patients records are not accurate. With about nothing I can think of, that can be done about this.

    I once looked at my doctor’s notes – where he wrote down I had low potassium, when I had said I had had high potassium (with compromised kidneys) – and just shook my head. It’s amazing they don’t kill more of us!

    I am eagerly using this particular A.I. It is terrific for asking “list the common side effects of xxx pharmaceutical. Include percentages of affected patients”

    Often discussed on this website – the withdrawal effects of withdrawing from xxx pharmaceutical. Ask A.I. what percentage of patients experience these withdrawal effects?

    I’m just guessing, but it appears A.I. is pulling from multiple sources – and giving me a succinct answer. No more searching all those PubMed papers, and various “look up your drug” – websites.

    Ask the half life? Ask the organ clearing the drug (I care, my kidneys are compromised)?

    I am quickly becoming a fan.
    Think how different life will be, when we all have an A.I. doctor app on our phones – within 5 years.

    If the doctors hate “I googled it” what are they going to think when we bring A.I. answers to them?

    I am not excusing all the errors. From my point of view I find the entire industry offensive.


  3. When we have people as insane, stupid and evil as Donald Trump and Elon Musk running the world, all these talking points like the dangers of AI very rapidly lose all salience. Do you think you will be destroyed by a system of AI or a president who openly wants to be king who is conducting a full scale assault on all your connections with the outer world, all connections with truth and reality, and all remnants of democracy and decency in the United States? Of course this government could deploy AI and inflame the danger by ceding so much ground to technology corporations and business in general, but then it is these evil thugs governing the system that are the problem and AI could as easily have been deployed to more benevolent ends: and you know that. If AI came about during the time of Roosevelt and the League of Nations probably we’d have a marvellous world shithole utopia shit only because we’d still be boring people – until there was a global magic mushroom revolution, naturally. And while there is no use in me evoking parallel universes here, nonetheless in this Universe our best hope is still probably the same, a magic mushroom or hallucinogenic revolution, and doesn’t that sound fun. Many will go absolutely psychotic of course but if you consult your reason for a moment, see how necessarily the ones who go bizzerk will be the most socially conditioned or greedy – for example can you imagine Trump or Musk NOT going psychotic on magic mushrooms or ayahuasca? Hence the most problematic people will go totally nuts, so we can regard the annunciation of the magic mushroom revolution as the birth of our new spiritual and psychological immune system which filters out all the crazies for the whole of humanity. And sane by that enlightened time will be non-conformist creative freedom. Insane will be regarded as conformist and socially conditioned. That’s how it was in the garden of Eden hence the tree of the knowledge of good and evil, which breeds social conditioning and conformity, destroyed everything. And as a result part of Adam and Eve went off with the snake. That’s why eventually to redeem humanity we’ll have to eat the snake. Or we could just eat ourselves because we are the snake.


  4. I don’t know how artificial intelligence’s involvement in ‘mind (mental) health’ will shape up in the future. However… Probably… I would guess that it may not be any worse than the psychiatric drugs (and the psychiatry and pharmaceutical industries) that have turned out to be harmful and deadly. However, it is still necessary to be cautious. There is a saying, ‘What goes around comes around /What goes away makes people look for what comes’. To avoid encountering this, one must be cautious.

    In my opinion.. The mental (mind) health system should consist of behavioral therapies such as ‘nature, travel, work, health’ etc.. Behavioral therapies may vary depending on people’s ‘lifestyles and personalities’. But the one constant story here is probably the belief that mental health can be improved with behavioral therapies. As we always say…. We can say that the best examples of this are examples like ‘Norwegian and Storia houses’..

    ‘Artificial intelligence’ driven mental (mind) health system can help the mental (mind) health system. At least it would be better than the mental health system that focuses on ‘toxic psychiatric medication and harmful psychiatric treatment’, which is deadly. Of course, it is necessary to be cautious.. In order for artificial intelligence to make a significant contribution to the ‘mental health system’, it needs to develop and improve further.

    —-

    For now, artificial intelligence seems to be playing the role of ‘dumb AI’. When I first wrote about AI on my own blog (when it first came out), and specifically when I called Google’s AI ‘dumb AI’, there was a huge uproar. Because it was misinterpreting everything. And in fact, it still misinterprets most things. Nowadays, it is said that artificial intelligence is developing. But no..

    Artificial intelligence is not advanced or anything. Even today, it still cannot answer questions correctly. It misinterprets them in a very dangerous way. Could this misinterpretation by AI pose a danger, especially to children and young people? (This is debatable..)

    Should artificial intelligence be included in the mental (mind) health paradigm? I think it should be included, but as we said, we should approach it with caution. (In an AI-driven mental health system, if the AI also recommends poisonous psychiatric drugs to people… 🙁 We would understand that this AI is controlled by someone.)

    After all, even if artificial intelligence plays the role of ‘idiot’, it can have deadly consequences for every human in the future. The fact that artificial intelligence plays the role of an idiot probably lies in the human intelligence that controls it. Probably… 🙂

    With my best wishes.. Y.E. (Researcher blog writer (blogger))


  5. You were using artificial intelligence to produce this article because the intellect IS artificial intelligence, a socially conditioned mechanical thought machine if you like. Real intelligence is through silent perception of what is as it is, and the understanding thereof. Facts brov, facts! Not alternative facts like you have in America.

