Samaritans’ Online Suicide Surveillance App an Ethical Minefield

In Gigaom, privacy and security journalist David Meyer discusses the release of a new app from the UK Samaritans called “Radar.” The app monitors Twitter and automatically “alerts people to potentially suicidal tweets made by those they follow.” Meyer admits that he liked the app at first, but, citing a wide range of other commentators, he now describes it as an “ethical and legal minefield.”

According to CNN, the app flags phrases such as “help me,” “tired of being alone,” “hate myself” and “need someone to talk to.”

“Samaritans Radar is in its infancy and won’t get it right every time,” the Samaritans reportedly said in a statement. “It’s not good at sarcasm or jokes yet! But there’s a way for you to give feedback on whether a Samaritans Radar alert was correct, so the service improves for everyone.”
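The Samaritans have not published how Radar’s matching actually works, but the behaviour described, fixed trigger phrases and poor handling of sarcasm, is consistent with simple substring matching. A minimal sketch in Python, assuming naive phrase matching (the trigger phrases are those reported by CNN; the function name and structure are hypothetical, not Radar’s actual code):

```python
# Hypothetical sketch of naive phrase matching, NOT Radar's published code.
# The trigger phrases below are the examples reported by CNN.

TRIGGER_PHRASES = [
    "help me",
    "tired of being alone",
    "hate myself",
    "need someone to talk to",
]

def flags_tweet(text: str) -> bool:
    """Return True if the tweet contains any trigger phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TRIGGER_PHRASES)

# Substring matching cannot distinguish distress from sarcasm or jokes:
print(flags_tweet("I hate myself for eating the whole cake"))   # True (false positive)
print(flags_tweet("somebody help me, this album is too good"))  # True (false positive)
```

A matcher this crude would flag both examples above, which is exactly the failure mode the Samaritans’ statement concedes and the commenters below object to.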

Meyer quotes the blogger Delusions of Candour, who described the difference between personally responding to a friend’s distressed tweet and setting up an automated system for doing so: “The latter feels invasive and intrusive. It’s not dissimilar to the contrast between bumping into a friend in the high street, and following that friend down the high street so you can engineer an encounter. There is also a risk that this app could be used to target vulnerable individuals; I have at least one friend who is outspoken about her mental illness and receives all kinds of abuse as a result, even (sometimes especially) when she is in crisis.”

Meyer also quotes an email that a woman sent to the Samaritans: “Vulnerable people need to feel that they are calling the shots with regard to their own wellbeing. Samaritans Radar sweeps that away, simply by having no ‘opt-in’ facility…. By launching software that specifically purports to be activated ‘discreetly’, which a user’s followers will have no knowledge of until they receive a communication checking up on them, you are sending the message that vulnerable people cannot be trusted to make their own choices about who to communicate with.”

What’s wrong with automatically scanning Twitter for suicidal tweets? (Gigaom, October 30, 2014)

New app helps flag suicidal tweets (CNN, October 29, 2014)

3 COMMENTS

  1. Worse than creepy – decidedly dangerous!!!

    It could actively prevent people from being honest about how they’re feeling, or from reaching out, however tenuously, in times of need, in case someone who knows them overreacts and calls in the crisis team. It could also bring things crashing down around the ears of people who are in absolutely no need of help, but merely using some idiom that the app deems suspect or doesn’t recognise as a joke. This truly is offensive.

    Ethical nightmare or just plain nightmare?

    These are very bad samaritans indeed.


  2. WTF? Now you’ll have another awesome form of thought police – if you dare to share your feelings with other people and they are anything but rainbows and unicorns, you can get a knock on the door and spend some delightful time locked up in a psych ward. With attractions such as forced drugging, nude searches, shitty food, restraints and, if you get it really bad, involuntary ECT. What a lovely idea.

