Facebook Is Predicting If You’ll Kill Yourself. That’s Wrong

From The Guardian: “In 2017, Facebook started using artificial intelligence to predict when users might kill themselves. The program was limited to the US, but Facebook has expanded it globally. It scans nearly all user-generated content in most regions where Facebook operates. When it identifies users at high risk for suicide, Facebook’s team notifies police and helps them locate users. It has initiated more than 3,500 of these ‘wellness checks’ in the US and abroad.

“Though Facebook’s data practices have come under scrutiny from governments around the world, its suicide prediction program has flown under the radar, escaping the notice of lawmakers and public health agencies such as the Food and Drug Administration (FDA). By collecting data from users, calculating personalized suicide risk scores, and intervening in high-risk cases, Facebook is taking on the role of a healthcare provider; the suicide predictions are its diagnoses and the wellness checks are its treatments. But unlike healthcare providers, which are heavily regulated, Facebook’s program operates in a legal grey area with almost no oversight.”

Article →
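
To make the pipeline the article describes concrete (collect user text, compute a risk score, escalate above some threshold to human review and possibly a wellness check), here is a deliberately toy sketch. Everything in it is invented for illustration: the phrase list, weights, threshold, and function names are not Facebook's, and the company's actual system presumably uses trained machine-learning classifiers rather than a keyword lookup like this.

```python
# Hypothetical illustration only: a toy risk scorer and triage step.
# The phrases, weights, and cutoff below are all made up; Facebook's
# real classifier and escalation criteria are not public.

RISK_PHRASES = {
    "no reason to live": 0.8,
    "can't go on": 0.6,
    "goodbye everyone": 0.5,
}

ESCALATION_THRESHOLD = 0.7  # invented cutoff for human review


def risk_score(post: str) -> float:
    """Sum the weights of matched phrases, capped at 1.0."""
    text = post.lower()
    score = sum(w for phrase, w in RISK_PHRASES.items() if phrase in text)
    return min(score, 1.0)


def triage(posts: list[str]) -> list[tuple[str, float]]:
    """Return posts whose score crosses the threshold, highest first."""
    scored = [(p, risk_score(p)) for p in posts]
    flagged = [(p, s) for p, s in scored if s >= ESCALATION_THRESHOLD]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    sample = [
        "Great game last night!",
        "No reason to live. Goodbye everyone.",
    ]
    for post, score in triage(sample):
        print(f"{score:.2f} -> escalate for human review: {post!r}")
```

Even this toy version makes the article's point visible: whoever picks the threshold and the escalation step is effectively making a clinical triage decision, with none of the oversight a healthcare provider would face.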

5 COMMENTS

  1. Facebook is so creepy I am glad I never signed up for it.

    I thought about it, and putting my picture and information on Facebook, and on the internet in general, seemed very similar to posting them on the wall of a public bathroom in a train station or highway rest stop for anyone and everyone to look at.

    • After all, people don’t want to go against FB; it must know what it’s talking about, right?

      Could the shrinks who design these prognosticating programs for FB be held accountable for planting seeds in the minds of desperate and suggestible people?

      This is very much in sync with Sera Davidow’s blog running concurrently with this one.
