Your Mental Health Information Is for Sale

Data brokers are selling massive lists of psychiatric diagnoses, prescriptions, hospitalizations, and even lab results, all linked to identifiable contact information.

Millions of Americans use health tracking apps, including apps for mental health issues like depression and anxiety. We’re used to our private medical information (in hospitals and doctors’ offices, for instance) being protected by HIPAA (the Health Insurance Portability and Accountability Act). But HIPAA wasn’t designed for today’s technologies. Most consumer apps and devices, including health, mental health, and biometric trackers, don’t fall under HIPAA’s rules, which means the companies behind them can sell your private health data to third parties, with or without your consent.

In a new report from Duke University’s Technology Policy Lab, researcher Joanne Kim found that data brokers are selling huge datasets full of identifiable personal information—including psychiatric diagnoses and medication prescriptions, as well as many other identifiers, such as age, gender, ethnicity, religion, number of children, marital status, net worth, credit score, homeownership, profession, and date of birth—all matched with names, addresses, and phone numbers of individuals.

For instance, Kim was offered lists such as “Anxiety Sufferers” and “Consumers with Clinical Depression in the United States.” Even individual lab results appeared to be for sale.

The price? As little as $275 per 1,000 records.

Kim writes, “The largely unregulated and black-box nature of the data broker industry, its buying and selling of sensitive mental health data, and the lack of clear consumer privacy protections in the U.S. necessitate a comprehensive federal privacy law or, at the very least, an expansion of HIPAA’s privacy protections alongside bans on the sale of mental health data on the open market.”

Kim began by searching for data brokers online. She contacted 37 of them by email or via a form on their websites, identifying herself as a researcher in the initial contact. None of those contacted by email responded; some of those contacted via web form referred her to other data brokers. In total, 26 responded in some way (including some automated responses). Ultimately, only 10 data brokers maintained sustained contact with Kim by call or virtual meeting, so only those 10 were included in the study.

Ironically, although the brokers sell identifying information about others, they had very strict confidentiality rules regarding their own. As a result, Kim’s study does not include identifying information about who the brokers are. Kim writes that even on video calls, the brokers’ salespeople did not turn on their cameras, contrary to typical sales practice.

How protected was the information offered by these data brokers?

Kim writes that all 10 of the brokers she was in contact with asked how she planned to use the data. However, they generally did not appear to verify her responses or run any kind of background check. In one case, a company that had emphasized the sensitive nature of the data and asked how it would be used sent de-identified sample information before Kim had even responded.

Kim writes, “The lack of follow-up and vetting controls in place suggests that malicious actors or clients could use the data in unstated ways or easily lie about their intentions.”

One data broker refused to work with Kim, but would not explain why.

The price varied considerably by broker. One offered $275 per 1,000 records, with a minimum order of 5,000 records; another offered $2,000 for 10,000 records (the equivalent of $200 per 1,000). In both cases, the brokers offered discounts for larger orders.

Other brokers were more expensive, including annual license fees in the $15,000 to $20,000 range, with some offering prices of up to $100,000 for large datasets with detailed, specific information.

The data brokers, in general, did not provide clear information about whether their data was de-identified and aggregated or could be linked to specific people. In several cases, they offered clearer explanations, but only if Kim would sign a non-disclosure agreement that would have prevented her from reporting on it.

However, some blatantly advertised sensitive information, including names, addresses, emails, and phone numbers, linked to their datasets. In some cases, the data brokers advertised that their reports took de-identified, HIPAA-compliant data and re-identified it, matching it to specific people.
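The report does not detail how brokers perform this matching, but re-identification of this kind typically works by joining “de-identified” health records with an ordinary consumer marketing file on shared quasi-identifiers such as ZIP code, birth year, and gender. The sketch below is only an illustration of that idea; all data, field names, and the pandas-based approach are hypothetical and are not drawn from the report.

```python
# Hypothetical illustration of re-identification by joining on quasi-identifiers.
# All records and field names are invented; no real broker schema is implied.
import pandas as pd

# "De-identified" health records: no names, but quasi-identifiers remain.
health = pd.DataFrame([
    {"zip": "27701", "birth_year": 1984, "gender": "F", "diagnosis": "depression"},
    {"zip": "27705", "birth_year": 1990, "gender": "M", "diagnosis": "anxiety"},
])

# Consumer marketing file: names and contact info keyed to the same attributes.
consumer = pd.DataFrame([
    {"zip": "27701", "birth_year": 1984, "gender": "F",
     "name": "Jane Doe", "phone": "555-0100"},
])

# A simple inner join on the shared quasi-identifiers links a diagnosis
# back to a named, contactable individual.
reidentified = health.merge(consumer, on=["zip", "birth_year", "gender"])
print(reidentified[["name", "phone", "diagnosis"]])
```

Privacy researchers have long noted that a handful of such attributes can single out most individuals, which is why “de-identified” data offers weaker protection than the term suggests.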

“In general, the data brokers advertised nonmedical data elements on individuals including home market value, credit score, homeowner status, marital status, ethnicity/race, net worth, name, address, profession, and email/phone number, as well as information on food insecurity, transportation, and detailed purchasing habits. The data brokers also advertised medical data on individuals, including information on mental health facilities, anxiety, depression, PTSD, bipolar disorder, ability to pay for medical expenses, caregivers, annual exams, and biometric lab data,” Kim writes.

After an analysis of the data brokers’ privacy policies, Kim concluded that protections were, in general, either vague or nonexistent.

What is this information being used for?

The top uses of this data appear to be research and advertising. The data brokers seemed comfortable providing de-identified information for research, and some mentioned close relationships with the pharmaceutical industry for research purposes. However, much of the sales material seemed aimed at advertisers, with one broker asking Kim for samples of the mailings she intended to send using individual health information. One firm suggested targeting households dealing with specific ailments with advertisements for medical offers.

Moreover, Kim suggests, “malicious actors” could use this data for other purposes. According to Kim, we should consider whether insurance companies could secretly use this information to learn more about their clients before offering specific plans. Or whether scammers and con artists could look for vulnerable people to exploit.

“The unregulated collection, aggregation, sharing, and sale of data on individuals’ mental health conditions puts vulnerable populations at greater risk of discrimination, social isolation, and health complications. Health insurance providers—which already buy individuals’ race, education level, net worth, marital status, and other data without their knowledge or full consent to predict healthcare costs—could buy mental health data to discriminately charge individuals for care or discriminately target vulnerable populations with advertisements. Scammers could purchase mental health data from data brokers to exploit and steal from individuals living with mental health conditions, as scammers have done to steal from payday loan applicants,” Kim writes.

Or consider this: What’s to stop a stalker from buying access to a depressed woman’s health records, prescriptions, and contact information?

According to a study last year, the popular therapy apps BetterHelp and Talkspace were among the worst offenders in terms of privacy. BetterHelp has been caught up in various controversies, including a “bait and switch” scheme in which it advertised therapists who weren’t actually on its service, poor quality of care (including attempts to provide gay clients with conversion therapy), and paying YouTube influencers when their fans sign up for therapy through the app.

There’s little to no evidence that any mental health apps are effective; most don’t even claim to be evidence-based.

Attention to the privacy risks of health apps seems to be growing. Now that the Supreme Court has overturned Roe v. Wade, concerns have grown that data from period-tracking apps and apps that record users’ locations could be used by law enforcement to identify people seeking abortions (or abortion providers) in states where the procedure has been criminalized. Some legislators appear to be responding by floating legislation that would protect privacy from data brokers in certain situations.

Another example? On February 1, the Federal Trade Commission announced that GoodRx, a telehealth and prescription discount company, would pay a $1.5 million penalty for sharing users’ specific health data with various tech companies for targeted advertising, and would be prohibited from doing so in the future. One of the biggest strengths of the FTC’s case was that GoodRx had misled users by displaying a HIPAA seal on its telehealth website, falsely suggesting that their information was protected by that law. But what about companies that simply don’t mention the issue one way or the other?

Ultimately, Kim argues, strict federal regulations need to be enacted to protect the privacy of our health information. In the interim, she suggests, HIPAA needs to be expanded to cover the currently unregulated field of mental health apps, as well as health apps and biometric tracking devices in general.

 

****

Kim, J. (2023). Data Brokers and the Sale of Americans’ Mental Health Data: The Exchange of Our Most Sensitive Data and What It Means for Personal Privacy. Duke University. (Full text)
