A paper published recently in the International Journal of Bipolar Disorders reviews the literature on the current state of consumer technology use in psychiatry and provides recommendations for the future. The study was conducted by a group of ten authors led by Michael Bauer of the Medical Faculty at Technische Universität Dresden, an advisory board member for the International Bipolar Foundation.
According to these authors, a series of lofty goals for using smartphones as psychiatric tools to collect data for individuals’ electronic medical records (EMR) began surfacing around a decade ago. And yet, for a variety of reasons outlined below, these goals have not come anywhere close to being achieved. As they explain:
“The vision of data from apps seamlessly returned to, and integrated in, the electronic medical record (EMR) to assist both psychiatrists and patients has not been widely achieved, due in part to complex issues involved in the use of smartphone and other consumer technology in psychiatry. These issues include consumer technology usage, clinical utility, commercialization, and evolving consumer technology. Technological, legal, and commercial issues, as well as medical issues, will determine the role of consumer technology in psychiatry.”
From digital pills that report on when they have been ingested to a growing variety of new mental health and psychotherapy apps, uses of digital technology in relation to mental health are gaining hype both within and beyond psychiatry. This has become especially relevant in the wake of the COVID-19 pandemic, where mental health care has become almost entirely virtual due to mandatory social distancing guidelines.
At this point, it seems like such trends are only likely to increase, despite mounting concerns about efficacy, privacy, and ethics regarding these uses of digital technologies in psychiatry.
The authors begin the paper with a quick review of the history of smartphones and wearable technologies, like smartwatches. They describe such tools in terms of “transformational technology,” which means that they should, in theory, improve the lives of those who use them.
One smartphone, for instance, can replace the need for several different technologies (e.g., a phone, a camera, a calendar, etc.). And yet, they likewise serve as data-collection tools for those who sell them and those who produce the apps that run on them.
According to the article, the fact that new digital tools serve different purposes for users than for those who develop them has not at all been lost on psychiatrists. Since smartphone use began steadily increasing in the early 2010s, many psychiatrists have imagined a future in which these devices could be used in the following ways: 1) by patients, to access care virtually, and 2) by health professionals, to generate data on patients’ mental and behavioral states, some of which could be entered into their individual EMRs.
One set of reasons why smart technologies have not proven useful for the above purposes, the authors explain, can be connected to more general trends in how they are used. Although there were 5 billion worldwide mobile subscribers in 2017, with 57% of those using smartphone technology, relatively older populations and those with “serious mental health issues” appear to use smartphones at much lower rates than the broader community. Inconsistent uses of smart technologies thus make it challenging to research precisely what effects they have on which users.
Health-related apps, specifically those marketed for mental health purposes, present unique problems for researchers looking to evaluate them. The main problem the authors identify is an overarching lack of regulation regarding how such apps are marketed and used by developers. As they explain, “[t]he vast majority of medical apps that pose ‘minimal risk’ to a user are outside of FDA enforcement.”
And yet, according to the Federal Trade Commission (FTC), “minimal risk” simply means that an app neither diagnoses nor suggests treatment directly to users. Apps marketed directly for mental health purposes can thus operate beyond FDA regulation by offering only “self-help” advice and avoiding the use of specific clinical terms within the interface.
These apps also operate mostly outside of HIPAA protections. As the authors state:
“HIPAA does not cover patient-generated data from apps, wearables, or the Internet, which is collected by firms and services that receive, store, combine, analyze, and sell the data.”
Even where the FDA regulates apps, the process involves a form of “pre-certification” that reviews the general practices of the company rather than the specific details of the software itself. This means that once an organization is certified to create clinical apps, very little oversight occurs as its apps are rolled out onto the online market.
For these reasons, it is impossible to make any blanket statements about the uses of digital technology in psychiatry. Applications continue to grow in both number and variety. They are likewise developed by very different organizations to serve purposes that often extend beyond the goals of psychiatrists and academic researchers.
Nonetheless, Bauer et al. draw together what limited research exists on the topic to argue that users of most mental health apps report only marginal benefits, at best. While users describe some essential functions, like appointment reminders, as generally helpful, more advanced functions (from artificial-intelligence chatbots that respond to distress to sensors that measure physiological activity and speech patterns) are mired in a host of technical, conceptual, and ethical problems.
One problem with these more advanced functions is that sensors in different devices, and even in different generations of the same device, do not operate identically. This makes it difficult to draw valid conclusions from physiological or speech data generated across different devices. Confounding variables, such as processing speed, the size of the device, and the location of the sensor, make any interpretation of the data highly unreliable.
Beyond issues related to psychiatry and measurement, the authors underscore a series of issues regarding the commercialization of smart technologies within the always booming digital economy. Noting the permanency of internet data, and the glaring privacy concerns that go along with this, the authors describe how “[c]ommercial, academic and governmental organizations purchase data, combine data with other data from all aspects of daily life, and create algorithms that are routinely used to classify people.”
Not only are such algorithms refined mainly for commercial purposes rather than clinical ones, but the authors also cite several studies illustrating how they reflect human biases that perpetuate existing forms of social discrimination and stigma. As these problems become more evident to the users of such technologies, the relatively little trust still invested in them is likely to erode further.
Considered together, such problems put psychiatrists and other mental health professionals in a very precarious situation, something similar to what economist Guy Standing has aptly described as the precariat. This situation, though, is not all that different from the one professionals across most industries find themselves in today.
Clinical uses of digital technology are destined to proliferate beyond the expertise of mental health professionals in ways that disproportionately benefit private companies like Facebook, for instance, which has already started purchasing and repurposing data generated through the use of mental health apps.
Beyond this, artificial intelligence is already automating many services that had previously been provided by human clinicians. There are thus clear reasons to remain critical of the unbridled optimism with which mental health apps are too often packaged in the psychiatric literature.
While Bauer et al. do a good job outlining some important reasons to be concerned about how digital technology is being used in psychiatry, their suggestions for how to move forward more ethically leave a lot to be desired. For them, solutions to the full range of issues outlined so far essentially boil down to:
- maximizing patient choice regarding which technologies they can use,
- improving the digital skills of patients, and
- giving users a choice regarding how their data is shared with others.
However, the authors describe issues regarding the commercialization of services as if they are separate from psychiatry and the mental health field more generally. But as other researchers have illustrated, the conceptual underpinnings of Western mental health care are inseparable from neoliberal capitalism.
A genuinely critical analysis of the use of technology in psychiatry must, therefore, extend to practices of psychiatrists themselves, which have a long history of being co-opted by private interests ranging from pharmaceutical to insurance companies.
Bauer et al. also make no mention of ways activists and peer-to-peer communities are using digital technology outside of professional mental health settings. By operating beyond the control of service institutions, these uses of digital technology have the potential to sidestep many of the issues the authors outline above.
With psy-professionals now developing mental health apps and network theories modeled on the social media industry, they are entering unknown territory that extends beyond their educational training. It remains to be seen whether they will turn the critical lens back toward their own professions and acknowledge the extent to which they are complicit in sustaining capitalist markets that stamp out free and open uses of technology.
Bauer, M., Glenn, T., Geddes, J., Gitlin, M., Grof, P., Kessing, L. V., Monteith, S., Faurholt-Jepsen, M., Severus, E., & Whybrow, P. C. (2020). Smartphones in mental health: A critical review of background issues, current status, and future concerns. International Journal of Bipolar Disorders, 8(1), 2.