As digital technologies gain traction in mental health care, it is increasingly urgent to investigate the biases embedded within them and their impacts. While there are high hopes that these technologies will lower costs and expand access, researchers often find they exacerbate existing inequalities.
A new article by interdisciplinary researchers Tomičić and Marija Adela Gjorgjioska from the ARETE Institute for Sustainable Prosperity critiques the biases present in psychological assessments and digital health tools, highlighting inequalities across technical, social, and political systems.
Drawing on Frantz Fanon’s analysis of colonial identity, the authors explore digital mental health technologies—specifically AI and machine learning—to identify critical challenges, including neoliberal and neocolonial economic structures, biased technologies, and the marginalization of alternative therapeutic methods. They conclude by proposing ethical solutions to these challenges.
“Our analysis aims to reflect on the discourse around mental e-health in the age of global connectivity and shed light on the ontological and political premises behind digital mental health technologies,” the authors write. “This article, drawing from science, technology, and society studies and postcolonial studies, examines the ecosystem of digital mental health technologies and challenges assumptions about psychological normality and algorithmic bias. We specifically explore AI and machine learning in mental health, differentiating them from broader digital health tools to highlight unique biases and ethical issues.”
By integrating postcolonial theory with an analysis of digital mental health technologies, Tomičić and Gjorgjioska offer a compelling critique of how these tools may inadvertently sustain global inequities. Their work highlights the need for more culturally sensitive and socially responsive approaches that go beyond technical solutions to address the root causes of psychological suffering. In doing so, they challenge the mental health field to rethink its reliance on AI-driven innovations and consider the broader social, cultural, and political contexts in which these technologies operate.
AI averages data, excluding anything that falls outside the chosen parameters, somewhat like the one-size-fits-all marketing concept. It is used under the guise of making things fairer (what things, and for whom?) and supposedly less biased, less “noisy.” In reality, it is an economic strategy to maximize cost savings and financial gains.
It would make sense to view apps in the same light: apps made available under the guise that something is better than nothing. But is it?
It is somewhat sad that good resources are available to the wealthy, while others, facing certain constraints, get to interact with a machine. Yippee…