The impact of digital technologies on people with histories of mental health treatment is rarely addressed in the sweeping reports and recommendations that examine technology’s effects on society. In a submission to the Australian Human Rights Commission on promoting fair and equitable deployments of Artificial Intelligence, Piers Gooding addressed this gap, showing how infringements of privacy through data collection pose particular risks to people with disabilities and to mental health service users.

Digital technologies and AI-enabled changes to the provision of mental health services are ubiquitous today, facilitating ‘supported decision-making’ in healthcare, peer networking, face-to-face support, and crisis support. They are often instrumental in monitoring abuses in care provision too. Yet, with the rise of mental health apps and even AI-controlled brain implants undergoing testing, it has also become apparent that consumers of mental health services have become guinea pigs for testing invasive technologies that yield highly sensitive personal data.
Gooding aimed to make the Commission aware of how classic digital rights issues, such as freedom of expression, privacy, and data protection, apply to people with mental health and psychosocial issues, and how threats to these populations illustrate broader dangers that arise when egregious assumptions are made and information is taken out of context.
He argues that, because norms governing the appropriate flow of information in society are yet to be established in any context (mental health or otherwise), abuses of mental health data should be examined for what they tell us about technologically-prompted human rights abuses across society rather than be understood as exceptional.
He highlighted instances in which mental health data was used for non-care-related reasons, leading to the violation of traditional patient protections. One example was the mistaken release of hundreds of students’ personal digital records at a Melbourne high school, records that included information about their ‘mental health conditions, medications, and learning and behavioral difficulties.’
Beyond privacy infringements, the explicit repurposing of data, such as for preventing gun violence in U.S. high schools, threatens to normalize the collection and digitization of student mental health data for distribution through a statewide database.
“Advisors to the Trump Administration are reportedly promoting experimentation to determine ‘whether technology, including phones and smartwatches, can be used to detect when mentally ill people are about to turn violent.’”
This is one of many cases in which data-sharing technologies have been used by criminal justice agencies to circulate mental health-related data for predictive and preventative purposes.
“In 2017, the Office of the Privacy Commissioner of Canada, for example, found that the Toronto Police Service had released mental health and suicide data which led to Canadians with a documented history of suicide attempts or mental health hospitalizations being refused entry at the US border.”
He also cited the use of GPS technology to track forensic psychiatric patients, AI-based suicide alerts enabled by Facebook’s pattern-recognition software that operates entirely outside the healthcare system and its ethics, electronic monitoring of social service provision such as home visits, and psychiatric drugs with inbuilt sensors that track medication compliance. Gooding underlines the need to weigh the benefits to individuals who consent to treatment with leading-edge technologies against these technologies’ human rights implications for society.
In some of the cited instances, data collected in mental health contexts was used for extraneous purposes. In others, data collected outside mental health contexts is being used, or tested for use, in making judgments about mental health and behavioral dispositions more broadly.
This report shows how mental health data can be used to discriminate against former and present users of mental health services in ways that amount to disability-based discrimination prohibited under international human rights law. It emphasizes how surveillance is becoming a condition of service provision in many countries, and how, when taken out of context and shared between agencies, personal mental health information becomes weaponized (e.g., for policing, prediction, or denial of rights), even when its collection was premised on a non-infringing, perhaps beneficial, purpose.
****
Gooding, P. (2020). On Disability Discrimination, Mental Health, and Algorithmic Accountability: Submission to the Australian Human Rights Commission. (Link)