Regulations Needed to Protect Privacy and Autonomy from Digitalized Psychiatric Tools

Researchers draw attention to the threats posed by neuromarketing and digital phenotyping in psychiatric systems worldwide.

While digitalized knowledge and information technology have improved efficiency in medical fields, the lack of regulatory processes around digital phenotyping and neuromarketing threatens individual privacy and autonomy and poses serious risks to democracy.

In a new paper published in Frontiers in Psychiatry, lead author Hossein Akbarialiabad and colleagues elaborate on the nature of these threats in the context of global psychiatric initiatives and offer suggestions for reducing the harms of digitalization’s unintended consequences.

Digital tools have been touted as a means to overcome systemic barriers in scaling up mental health care, closing the so-called “treatment gap.” Digital health care is cheaper and often more accessible, and it offers more flexibility and choice to patients and service users. Telehealth services expanded dramatically during the COVID-19 pandemic. As the authors note:

“Using such technologies has improved freedom, efficacy, and flexibility in communication for patients and physicians worldwide. On the flip side, several studies have shown that digital mental health applications may alienate some and raise anxiety and stress for some clients, including fear of relapse and even paranoid thinking.”

Concerns that remain unaddressed include heightened surveillance through the use of so-called mental health apps like Mindstrong; breaches of patients’ privacy, confidentiality, and autonomy; and the undermining of patients’ agency when their data are used for purposes other than those to which they consented.

Digital phenotyping and neuromarketing are two examples of how digitized mental health care services can harm service users.

Digital phenotyping aims to detect and categorize an individual’s behavior, activities, interests, and psychological features in order to customize future communications or mental health care for that individual.

Neuromarketing uses data from an individual’s neuronal responses to stimuli to steer that person toward purchasing merchandise. It is also used to shape an individual’s opinions in consumer, social, or political decision-making. Neuromarketing thus presents a clear threat both to personal autonomy and, more generally, to democracy. As the authors argue:

“The intersection of digital phenotyping and digital neuromarketing can be perilous and could potentially lead to what we may call ‘digital surveillance capitalism.’”

This heightened and more invasive surveillance poses obvious threats to privacy, freedom, autonomy, and democracy. To minimize these threats while maintaining the benefits of the digital turn in health care, the authors make the following recommendations:

First, we need technical and public evaluation of technologies and media before release: new technologies for use in health care should be studied and evaluated before widespread implementation.

Second, regulatory processes must be in place, with careful monitoring protocols, before implementing new technologies.

Third, policy measures should be in place to ensure each digital tool is transparent about the potential uses of user data.

Fourth, the authors suggest that we ensure public awareness and education about digital apps through, for example, education initiatives led by public health programs, NGOs, and human rights activists.

And finally, we should tax information-gathering by giant companies to modulate the current trend toward compiling vast amounts of consumer data. To this point, the authors suggest that “such taxes should be used for public education towards sustainable and ethical e-health solutions towards strengthening the health care system in low-resource settings.”

****

Akbarialiabad, H., Bastani, B., Taghrir, M. H., Paydar, S., Ghahramani, N., & Kumar, M. (2021). Threats to global mental health from unregulated digital phenotyping and neuromarketing: Recommendations for COVID-19 era and beyond. Frontiers in Psychiatry, 12, 713987. DOI: 10.3389/fpsyt.2021.713987

1 COMMENT

  1. “we should tax information-gathering by giant companies to modulate the current trend toward compiling vast amounts of consumer data.”

    I’ve never been much of a Facebook user, but I do have an account. Even so, I, for one, no longer even want to get on Facebook, since they’re finally required to tell you they are compiling vast amounts of consumer data on you, and they don’t seem to give the consumer an “opt out” option.

    Facebook does try to convince its users to continue using their service by claiming this will “help keep Facebook free” and let them “get ads that are more personalized,” which sound like benign benefits.

    But Facebook has apparently been “compiling vast amounts of consumer data” since its inception, illegally, without telling its users.

    So, if our government does decide to “tax information-gathering by giant companies to modulate the current trend toward compiling vast amounts of consumer data,” this tax should be retroactive to Facebook’s inception, since I was not told that this is Facebook’s primary source of income and purpose until very recently.

    And, as one who has never used Facebook much but did research into who they thought I would vote for, I know for a fact that Facebook knows nothing about my voting preferences, so it is not a credible source of information.

    And anyone who regularly reads MiA knows that Facebook has already “diagnosed” people with the “invalid” DSM disorders and sent the police to people’s homes, despite the fact that Facebook is not a doctor. Can you say abuse of corporate power?

    Absolutely, I do agree that “Regulations [are] Needed to Protect Privacy and Autonomy from Digitalized Psychiatric Tools,” which Facebook apparently considers itself to be.

