In a recent article published in Big Data & Society, “digital phenotyping”—using smartphones and other devices to assess mental health and provide treatment before symptoms even appear—has come under critical scrutiny. The authors contend that, rather than simply providing diagnoses or treatments, digital phenotyping shapes individualized pathways toward interpreting one’s experience in neuropsychological terms.
Despite its claims to objective data analysis, the authors caution, the method inherently incorporates human biases and cultural norms. They suggest it fosters a new type of self-knowledge and truth, one driven primarily by app-generated insights rather than personal experience.
According to the researchers Rodrigo De La Fabián, Álvaro Jiménez, and Francisco Pizarro Obaid, digital phenotyping does not provide an objective window into the self. It is grounded in population statistics and predefined datasets that were constructed using pre-existing neuropsychological assumptions and human categorization processes. The authors do not treat this as a misleading ideological stance but rather as an opportunity for productive exploration: the disparity between how we perceive ourselves and what digital phenotyping suggests can lead us to understand our lives through a neuropsychological lens.
“Digital phenotyping does not produce neutral mirrors for self-knowledge,” the authors write, “rather than providing personalized diagnoses and treatments, digital phenotyping produces individualized pathways to normalization and neuropsychologization.”
The study’s primary purpose was to examine the claims and contradictions of the personalized mental health evaluations and interventions offered by digital phenotyping. The researchers scrutinized scientific articles from the past four years and evaluated applications and platforms that use digital phenotyping, an approach that let them map both the scientific and popular understandings and uses of the technique. The focus was not on the “truths” it might reveal about individuals but on what it engenders: new ways of existing and of thinking about knowledge and truth.
The paper begins by examining the recent “neuro-turn” in science and society, a shift toward favoring biological data over personal narratives to understand who we are. In the past, mental health professionals relied on firsthand accounts to comprehend the mind. But over the course of the 20th century, objective biological evidence began to eclipse these personal narratives, which came to be viewed as unreliable.
The National Institute of Mental Health launched the Research Domain Criteria initiative in 2009, aiming to identify biological markers of mental illness akin to those used for physical diseases. Yet the expected results have not materialized. The authors highlight two main issues impeding the initiative.
First, when hunting for biomarkers of mental illness, researchers still had to rely on personal narratives to sort mental illnesses into specific diagnoses, such as depression, a reliance they considered unscientific. Second, our mental state is not static; it is heavily shaped by our surroundings. Conducting psychological research in artificial settings, such as labs or psychiatrists’ offices, can therefore distort how mental illnesses manifest and complicate the identification of biomarkers.
The spread of smart devices over the last decade has provided researchers with a wealth of data. Previously, mental health disciplines primarily depended on “active data”—information provided voluntarily, such as questionnaire responses. However, the digital age has ushered in a plethora of “passive data” generated by interacting with devices. Coupled with artificial intelligence, this passive data, mostly detached from subjective experience, can be analyzed by machines, sidestepping human biases.
However, the authors caution that it is impossible to fully exclude human, subjective, and unscientific elements from the equation. For example, they point to a smartphone app designed to alert users to potential manic or depressive episodes by analyzing voice changes during phone calls. Building such a system required human listeners to classify the voices of people with mania and depression, demonstrating that passive data still hinges on subjective perspectives. Moreover, because these apps are designed by people, they unintentionally guide users toward particular cultural norms and expectations.
Finally, the authors consider how such technologies can generate new forms of self-awareness and truth. This kind of passive data collection births a new type of individual—one who is constantly monitored and aware of that monitoring. They posit that this self-awareness might motivate us to amplify the physical and mental attributes considered important by the various apps we use.
This could also lead to a new kind of truth, where a person may not feel depressed, but their mental health app detects depressive behavior. With digital phenotyping deeming firsthand accounts as unreliable and favoring passive, AI-analyzed data, proponents might contend that it’s the app, not the person’s feelings, that we should trust.
“From this viewpoint,” the authors write, “the gap between how DP [digital phenotyping] produces the truth about us and how we perceive it becomes productive: it measures the distance between whom we think we are and who we actually are. Therefore, we conclude that even though DP still participates in pre-digital processes of neuropsychologization, it does so in a new way. Rather than providing personalized diagnoses and treatments—as DP’s techno-utopia claims—what has been individualized are the pathways to normalization and neuropsychologization.”
Previous research has pointed to some problems around digital phenotyping, such as the threat to privacy and autonomy. Others have also raised legal and ethical questions about digital mental health technologies. As with the current work, past research has argued that digital psychiatry may radically change how we think about mental health, including through the use of “digital pills” that track users.
Researchers have warned of abuse and coercion that can occur when private companies leverage suffering for profit with the “app-ification” of mental health services. Research has also found that mental health apps may lead to overdiagnosis.
De La Fabián, R., Jiménez-Molina, Á., & Pizarro Obaid, F. (2023). A critical analysis of digital phenotyping and the neuro-digital complex in psychiatry. Big Data & Society, 10(1). https://doi.org/10.1177/20539517221149097