Brave New Apps: The Arrival of Surveillance Psychiatry

Dreams that the internet would foster civic engagement, democratic ideals and equality have been drenched with the ice-cold water of trolling, election tampering, and algorithmic bias. The history of technological systems repeatedly reveals that they do not usually deliver as promised. This reality reared its head in a rash of recent news stories celebrating the application of artificial intelligence to mental health. The coverage suggests that there is something in the zeitgeist, or perhaps some aggressive public relations work happening behind the scenes.

I realized the conversation had shifted when New York Times op-ed columnist David Brooks wrote a naive opinion piece, “How Artificial Intelligence Can Save Your Life: The Machines Know You Better Than You Know Yourself.” His op-ed uncritically promotes a dangerous new paradigm of diagnostic prediction and behavioral prevention. Brooks claims that Big Data will enable companies and mental health providers to “understand the most intimate details of our emotional life by observing the ways we communicate.” He also asserts that “you can be freaked out by the privacy-invading power of A.I. to know you, [but only] A.I. can gather the data necessary to do this.” He concludes with the prediction that “if it’s a matter of life and death, I suspect we’re going to go there.”

“Promises Abound, But So Do Potential Problems”

Seemingly oblivious to all the ways that utopian hopes for new technologies have horribly backfired in the past, Brooks imagines that artificial intelligence will succeed in helping to predict and prevent suicide, blatantly ignoring the challenges that Mindstrong Health, one of the tech startups he highlights, is now facing, even though their problems were covered the previous week in his own paper. Brooks says the company “is trying to measure mental health by how people use their smartphones: how they type and scroll, how frequently they delete characters.” The earlier article, by Benedict Carey, tempers expectations about Mindstrong’s progress, stating that “the road will be slow and winding, pitted with questions about effectiveness, privacy and user appeal.” Their trials have been riddled with “recruiting problems, questions about informed consent, and [concerns that] people won’t ‘tolerate’ it well, and quit.” The same article quotes Keris Myrick, a collaborator with Mindstrong and chief of peer services for Los Angeles County, who reminds us that “we need to understand both the cool and the creepy of tech.”

Crucially, Brooks also ignores the New York Times op-ed published more than a month prior, “The Empty Promise of Suicide Prevention.” In it, psychiatrist Amy Barnhorst argues that “suicide prevention is also difficult because family members rarely know someone they love is about to attempt suicide; often that person doesn’t know herself… almost half of people who try to kill themselves do so impulsively.” Worse, even when problems are identified, “the implication is that the help is there, just waiting to be sought out.” Unfortunately, “initiatives like crisis hotlines and anti-stigma campaigns focus on opening more portals into mental health services, but this is like cutting doorways into an empty building.” Access to care is often limited or nonexistent, and the mental health care we currently offer is sometimes worse than none at all.

Hidden Risks of Risk Detection

There are many problems with pathologizing risk, a trend that is part of a bigger pattern emerging around diagnosis and treatment. Large, centralized, digital social networks and data-gathering platforms have come to dominate our economy and our culture, and technology is being shaped by those in power to magnify their dominance. In the domain of mental health, huge pools of data are being used to train algorithms to identify signs of mental illness. I call this practice surveillance psychiatry.

Researchers are now claiming they can diagnose depression based on the color and saturation of photos in your Instagram feed and predict manic episodes based on your Facebook status updates. The growth of electronic health records, along with the ability to data-mine social networks and even algorithmically classify video surveillance footage, is likely to significantly amplify this approach. As the recent wave of press coverage demonstrates, corporations and governments are salivating at the prospect of identifying vulnerability and dissent among the populace. (For more examples of this trend, including Facebook’s packaging and selling of emotionally vulnerable users to advertisers and the Abilify Mycite ingestible sensor pill, see my blog post on “The Rise of Surveillance Psychiatry and the Mad Underground.”)
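
To make the Instagram claim concrete: studies of this kind reportedly reduce each photo to a few color statistics and train an off-the-shelf classifier on them. Here is a minimal sketch of that sort of pipeline; the file names, labels, and choice of classifier are hypothetical stand-ins, not the researchers’ actual code.

```python
# Minimal sketch of a photo-features-to-diagnosis pipeline of the kind
# such studies describe. File names and labels are hypothetical.
import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestClassifier

def photo_features(path):
    """Mean hue, saturation, and brightness of one image."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
    return hsv.reshape(-1, 3).mean(axis=0)

# Hypothetical training data: one photo per user, plus a label derived
# from a self-reported depression screening questionnaire.
paths = ["user1.jpg", "user2.jpg", "user3.jpg", "user4.jpg"]
labels = [1, 0, 1, 0]  # 1 = screened "depressed"

X = np.array([photo_features(p) for p in paths])
clf = RandomForestClassifier().fit(X, labels)
# The model now issues "diagnoses" from three averaged numbers per
# image -- exactly the thin behavioral proxy in question.
```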

The sociologist and New York Times columnist Zeynep Tufekci has also written about the risks posed when corporations get their hands on mental health and behavioral data, warning us to consider how this data will easily be used to target manipulative advertising, deny us insurance coverage, and discriminate against job applicants. Tufekci points out that it is likely machine-learning algorithms have already learned that people in a heightened or altered state are more responsive to ads for casinos and Vegas—even without humans intentionally targeting this behavioral demographic.

A deeper, less obvious threat than unethical marketing and unlawful discrimination is what this data looks like in the hands of the psychiatric-pharmaceutical complex. So-called “digital phenotyping” is poised to change the definition of normal itself and vastly expand psychiatry’s diagnostic net. Digital phenotyping is the “moment-by-moment quantification of the individual-level human phenotype in situ using data from personal digital devices” such as smartphones, as this article in Nature described it. These new tools for tracking behavior, and the computational approach to psychiatry that underlies them, are poised to displace the never-substantiated chemical-imbalance theory as the underlying rationale for diagnosis and treatment.
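
To give a concrete sense of what “moment-by-moment quantification” can mean, consider the typing signals Brooks mentions: speed, scrolling, deleted characters. Below is a minimal sketch; the event format and feature set are invented for illustration and only approximate what a company like Mindstrong might compute.

```python
# Minimal sketch of keystroke-log "digital phenotyping" features.
# The event format and feature set are invented for illustration.
from dataclasses import dataclass

@dataclass
class KeyEvent:
    timestamp: float  # seconds since the session started
    key: str          # e.g. "a", "BACKSPACE"

def typing_features(events):
    """Summarize one typing session: speed, pauses, deletion rate."""
    if len(events) < 2:
        return {}
    gaps = [b.timestamp - a.timestamp for a, b in zip(events, events[1:])]
    deletions = sum(1 for e in events if e.key == "BACKSPACE")
    duration = events[-1].timestamp - events[0].timestamp
    return {
        "keys_per_second": len(events) / duration,
        "mean_gap": sum(gaps) / len(gaps),
        "longest_pause": max(gaps),
        "deletion_rate": deletions / len(events),
    }

session = [KeyEvent(0.0, "h"), KeyEvent(0.3, "i"), KeyEvent(2.1, "BACKSPACE")]
print(typing_features(session))
# Downstream, shifts in these numbers get read as shifts in "mental health."
```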

Thomas Insel, former director of the National Institute of Mental Health and the co-founder and president of Mindstrong, advocates that psychiatric research should return to a behavioral focus instead of its current emphasis on pharmacology, genomics, and neuroscience. In practice, this likely means that you may someday be prescribed antipsychotics for posting on social media a few nights in a row at odd hours. And you had better brush up on your grammar if you want to avoid a schizophrenia diagnosis. Researchers were recently awarded a $2.7M NIMH grant to study how “nuances of language styles, like the way people use articles or pronouns, can say a lot about their psychological state.” These trends are especially troubling when you consider how racialized suspicion and the perception of threat have become in the United States. As New York City’s Stop and Frisk program demonstrated, not all behaviors or demographics are profiled equally aggressively.
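
For a sense of what “the way people use articles or pronouns” looks like as a machine-readable signal, here is a minimal sketch. The word lists and sample sentence are illustrative assumptions, not the funded study’s actual method.

```python
# Minimal sketch of function-word-rate features. The word lists and
# sample text are illustrative only, not the NIMH study's method.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
ARTICLES = {"a", "an", "the"}

def function_word_rates(text):
    words = re.findall(r"[a-z']+", text.lower())
    n = len(words) or 1
    return {
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / n,
        "article_rate": sum(w in ARTICLES for w in words) / n,
    }

print(function_word_rates("I feel like the world is closing in on me."))
# A downstream model treats small shifts in these rates as evidence of
# a "psychological state" -- the inferential leap at issue here.
```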

The emphasis on treating risk rather than disease predates the arrival of big data, but together they are now ushering in an era of algorithmic diagnosis based on the data mining of our social media and other digital trails. As Brooks’ op-ed illustrates, the language of suicide and violence prevention will be used to promote this paradigm, even though the lines between politics and harm reduction are not so clear. When algorithms are interpreting our tweets to determine who is “crazy,” it will become increasingly difficult to avoid a diagnosis even if we carefully watch what we say. This environment will severely inhibit people’s willingness to seek support and create an atmosphere where others are conditioned to report behaviors that appear different or abnormal.

What Could Possibly Go Wrong?

The mainstream reactions to the two mass shootings the weekend of August 3, which as usual place the blame for these tragedies on “mental illness,” only compound my concerns about surveillance psychiatry as a brave new paradigm. The President has been parroting the National Rifle Association’s talking points, including a renewed push for “Red Flag” or “Extreme Risk” laws, reinforcing the hubris that we can reliably predict risk.

One of the best counterpoints to these initiatives is an excellent investigative story by ProPublica on Aggression Detectors currently being deployed in schools and hospitals around the country. Reporters went into schools that purchased and deployed expensive audio-capture-and-analysis systems sold with the promise of preventing the next school shooting. Their manufacturer claimed that these special microphones would detect aggression, but ProPublica’s rigorous tests demonstrated that they tend to mix up laughter and anger and mistake locker doors slamming for gunshots. Tuning the system is fraught, and false positives and negatives abound. All sorts of implicit biases are likely to be baked in, as expressions of emotion are culturally conditioned. The entire premise of the aggression detectors is also flawed, as school shooters often display quiet rage before attacking rather than an audible outburst. To top it off, the systems also record all audio in the school and are being used by administrators to crack down on vaping in the bathroom.
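
The detectors’ actual models are proprietary, but a deliberately naive sketch of threshold-based audio flagging shows why loud-but-harmless sounds trip such systems. Every number and feature choice below is invented for illustration.

```python
# Deliberately naive sketch of threshold-based "aggression" flagging,
# illustrating why slammed lockers and laughter trip such systems.
# All thresholds and feature choices are invented; the commercial
# detectors' real models are proprietary.
import numpy as np

def flags_aggression(samples, rate=16000, loudness_db=-20.0):
    """Flag a clip if any half-second window is louder than a threshold."""
    window = int(0.5 * rate)
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        rms = np.sqrt(np.mean(chunk ** 2))
        if 20 * np.log10(rms + 1e-12) > loudness_db:
            return True  # a door slam and a scream are identical here
    return False

# Any loud burst -- a slam, a laugh, a shout -- flags the same way.
burst = np.random.uniform(-0.8, 0.8, 16000)
print(flags_aggression(burst))  # True, regardless of what made the sound
```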

The aggression detectors story is important because it demonstrates how such systems will be used to regulate all forms of affect, not just depression and anxiety. And it vividly shows how dangerous and expensive false positives are to society. For years I have feared that computational systems would be used to monitor and discipline emotions and that they would be first deployed in schools and prisons. Sucks to be proven right.

Theory Deprivation Disorder

A fundamental misunderstanding inherent in so many of these projects reminds me of Chris Anderson’s 2008 Wired essay, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.” Like Brooks, Anderson argues that with enough data, we will no longer need theory or hypotheses, as the torrents of data collected will soon be sufficient to fully describe reality, with the data seemingly speaking for itself. The polarizing essay was widely refuted: data cannot be interpreted without a framing theory, even if we don’t recognize that we are implicitly relying on one in our analysis.

Information must always be interpreted in context, and both machines and humans are notoriously fallible at doing that. Subjective assessments, such as determining if a work of art is beautiful, if a joke is funny, or if a person is exhibiting behavior within the normal range of human experience, will always require a value judgment. Despite what proponents of the biomedical and disease models of mental health would like you to believe, these kinds of assessments are never matters of fact, and behavioral data is not self-interpreting.

Brooks’ op-ed touts Crisis Text Line, an SMS-based support service that crowdsources counselors from a pool of trained volunteers. Crisis Text Line brags about applying data science to its crisis support sessions, although its findings are demonstrably weak and its inferences suspect. CTL’s tag-clouds of word frequencies capture what has long been obvious to crisis counselors and contribute few surprises or insights. Brooks is impressed by their analysis linking keywords to crisis; however, his conclusions presume that Crisis Text Line counselors always contact law enforcement appropriately, calling them when they should (if ever) and not calling them when they should not. Absent that assurance, dispatching emergency services can quickly become rampant, as automated systems will learn from previous examples and reproduce them. This is not simply a matter of erring on the side of caution: dispatching law enforcement to a person perceived to be in crisis often has lethal consequences, as one in four fatal police encounters involves someone with a mental health diagnosis.
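
To see how thin the tag-cloud analysis is, note that it amounts to counting tokens across transcripts. A minimal sketch (the sample transcripts below are invented):

```python
# Minimal sketch of the word counting behind a tag cloud of crisis
# conversations. The sample transcripts are invented.
import re
from collections import Counter

transcripts = [
    "i can't sleep and i feel hopeless",
    "everything hurts and i can't sleep",
]

counts = Counter(
    word
    for t in transcripts
    for word in re.findall(r"[a-z']+", t.lower())
)
print(counts.most_common(5))
# Frequent words ("sleep", "hopeless") restate what crisis counselors
# already know; counting them adds little insight on its own.
```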

The general public has a vague sense that Americans are over-diagnosed and over-medicated. And, in many cases, clinicians cannot even identify emotional distress effectively, as the expression of anxiety and depression tends to be culturally conditioned. A recent American Psychological Association report claimed that “even professional health care providers have trouble detecting depression among racial/ethnic minority patients. Men from these groups are diagnosed with depression less often than non-Hispanic white males, and depression may also present itself differently in males as irritability, anger, and discouragement rather than hopelessness and helplessness.” On top of that, a comprehensive review of suicide-prediction models found that current models “cannot overcome the statistical challenge of predicting this relatively rare event.”
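
The “statistical challenge” here is the base-rate problem, and it is worth seeing in numbers. A back-of-the-envelope sketch, with hypothetical and deliberately generous accuracy figures, shows why even a good model drowns in false positives:

```python
# Why rare-event prediction fails: positive predictive value under a
# low base rate. All figures are hypothetical and generous to the model.
base_rate = 0.0001    # suppose 1 in 10,000 people in a given window
sensitivity = 0.90    # fraction of true cases the model flags
specificity = 0.90    # fraction of non-cases it correctly clears

true_pos = base_rate * sensitivity
false_pos = (1 - base_rate) * (1 - specificity)
ppv = true_pos / (true_pos + false_pos)

print(f"PPV = {ppv:.4f}")  # ~0.0009
# Over 99.9% of the people this generous model flags are false
# positives, and each one is a candidate for unwanted intervention.
```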

With such large gaps and discrepancies in practice, how can we train machines to judge and categorize behavior when humans cannot agree on their interpretation?

Human and Machine Learning

The vast majority of these initiatives promise better “management” and “treatment,” although the details of their programs focus mainly on early detection and managing risk. When I talk to crisis counselors, they uniformly report that they do not have difficulty recognizing risk or identifying those in crisis; rather, they need better tools for supporting them.

I’m not a Luddite. I think it is possible to redirect this wizardly technology to help support people better. Doing this well starts with inclusive design—people with lived experience need to be involved in planning and shaping the systems meant to support them. Nothing about us without us.

Reducing suicide is generally a good thing, but remember that this same infrastructure will also be able to police the “normal,” proactively detecting all forms of deviance, dissent, and protest. A nuanced critique, once again informed by people with lived experience, needs to shape the development of these systems, because context is everything. Alongside the focus on short-term interventions, we also need to spend more resources on understanding how and why people become suicidal, and on the long-term consequences of treatment by our healthcare systems.

I also think that sophisticated technology—richer interactive training materials, recommendation engines, and networked collaboration—can significantly improve the training and development of providers offering support to those in crisis. Instead of focusing the diagnostic lens on the recipients of services, let’s start by developing better tools to help providers enhance their skills and empathetic understanding. I am imagining contextual help, immersive simulations and distributed role plays, just-in-time learning modules that caregivers could query or have recommended to them based on an automated analysis of the helping interaction. The field could also benefit from more intentional use of networked, interactive media to engage counselors in their clinical supervision and help them collectively to improve. Did that last crisis intervention go well? What could I do differently if I encounter a similar situation again? Do any of my peers have other ideas on how I could have handled that situation better?

Of course, as with so much else about mental health, little is known about what works well. These same systems could be used to gain more insight on successful interventions that have a positive impact. It is critical to have more confidence that an intervention is successful before we start using related data sets to create machines that magnify those approaches and deliver more of the same. Business-as-usual is failing. Let’s not amplify our current approaches with super-charged algorithms without reflecting critically on what helps and what harms.

***

Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.

***

20 COMMENTS

  1. Jonah, you have some true gems here among your thoughts.
    Tech is a tool and needs ethics and how-to’s and phases of implementation, and oh yes the profit motive and/or control motive, which gets left unchecked, unregulated, or ignored because it’s just not convenient to the bottom line.
    And as a professional who once worked with mothers involved in child protective services, we did think about using tech, because for those traumatized in childhood, or already traumatized by sheer environmental hardship, the ability to connect with their infant was sometimes compromised, not because of them, but because they had never been given the freedom to look eye to eye in a safe way with another human being. Using video to show the infant actively looking and desiring eye contact, to let them be safe enough to observe and feel “maybe someone wants to interact with me and dance into life,” is simply essential for human beings to continue to survive and for the earth to survive.
    However, I am not sure about your knowledge of this, or anyone else’s.
    This is where I depart from some other survivors, in that, as always, there are folks in any profession who break through. And here I am acknowledging the horrific history of psychiatry, which, like so many things, was so successfully well hidden.
    I still get triggered by the phrase mental health and just the topic, so my reading is compromised by my history of awfulness. See previous posts.
    And training? Trained by whom, with what funding sources, and who actually originated the idea? Who were its influencers? If shareholders, who really are they, and if an LLC, who is behind the mask and curtains? And we need to realize it’s not just one convoluted, serpentine, byzantine layer; it is, I fear, Leviathan. But even the nine-headed Hydra was defeated, so I think there is hope still.
    It’s not MH, it’s human beings in crisis and a world in crisis, and for those of us in the United States, a country on the edge of collapse because so many folks are asleep at the wheel.
    My mother grew up with a record album of just laughter, and I think tech could work here. If one can laugh and learn amazing cool facts on any subject, or walk, or swing on a swing, or see a piece of art, then with both human and tech positives, a sea change.
    Whenever secrecy, power and control, inequality, and yuck factors appear, then NO. Danger not only to the targeted but to the bystanders as well.

    • There is stuff to laugh at and educate us online. Without the internet I would still be in bondage to “mental health.” A lot of shrinks hate public access to the internet for this reason.

      One said the internet caused “anosognosia.” Then he had to backtrack and dance around, since, devout eugenicist that he was, he firmly believed only “biological defectives go crazy” and reading something online can’t alter DNA.

      The internet has brought to light a lot of ugly truths the psychiatrists longed to keep hidden. More and more are finding out now that that artful “Science” is based entirely on a lie dreamed up by ad marketers, and its “treatments” are nothing but random acts of brain damage.

  2. Since our beloved “thought leaders” aren’t quite certain themselves about what will make ill individuals potentially dangerous, the idea they can program a machine appropriately seems more than a trifle grandiose, since an AI machine can’t function properly without proper programming.

  3. You are talking about psychiatry. The DSM has nothing to do with science or scientific fact. You are talking about defining the human mind and how it should act. You will have to say goodbye to a lot of behaviors and things society enjoys. Einstein would be locked up. Steve Jobs labeled a madman. Do not even begin to discuss art: Picasso and van Gogh will never be tolerated in the future. In politics, there will have to be an override for abhorrent behavior. And the last will be humor and sarcasm, because everything will be scrutinized and may be taken out of context. I don’t even think Orwell could have dreamed up what we face, but he came pretty close. Too bad 1984 might be a novel that could not be published in the future. Wow, pretty bleak. I think I might run away to Mexico.

  4. “Unfortunately, ‘initiatives like crisis hotlines and anti-stigma campaigns focus on opening more portals into mental health services, but this is like cutting doorways into an empty building.’” And most “of the mental health care we currently offer is sometimes worse than none at all.” Absolutely, the psych drugs create the symptoms of the “serious mental illnesses.”

    https://www.alternet.org/2010/04/are_prozac_and_other_psychiatric_drugs_causing_the_astonishing_rise_of_mental_illness_in_america/
    https://en.wikipedia.org/wiki/Toxidrome (see anticholinergic toxidrome)
    https://en.wikipedia.org/wiki/Neuroleptic-induced_deficit_syndrome

    “Researchers are now claiming they can diagnose depression based on the color and saturation of photos in your Instagram feed and predict manic episodes based on your Facebook status updates.” This is quite disgusting, and sounds absurd – even “delusions of grandeur” filled – to me.

    “As the recent wave of press coverage demonstrates, corporations and governments are salivating at the prospect of identifying vulnerability and dissent among the populace.” Wouldn’t this be evidence that we have the wrong people in control of our government, since governments are supposed to regulate corporations, not collude with them, since that’s fascism?

    https://www.youtube.com/watch?v=ZKeaw7HPG04

    “These new tools for tracking behavior, and the computational approach to psychiatry that underlies them, are poised to displace the never-substantiated chemical-imbalance theory as the underlying rationale for diagnosis and treatment.” Will the psychiatrists ever stop trying to perpetuate their scientific fraud based system?

    https://www.nimh.nih.gov/about/directors/thomas-insel/blog/2013/transforming-diagnosis.shtml

    How sad that Thomas Insel’s work is now being done to further harm the psychiatrically misdiagnosed. I’m quite certain his belief “that psychiatric research should return to a behavioral focus instead of its current emphasis on pharmacology, genomics, and neuroscience” is due to the reality that it has now been medically proven that the serious DSM disorders are illnesses created with the psychiatric drugs, as I pointed out above.

    “Subjective assessments, such as determining if a work of art is beautiful, if a joke is funny, or if a person is exhibiting behavior within the normal range of human experience will always require a value judgment. Despite what proponents of the biomedical and disease models of mental health would like you to believe, these kinds of assessments are never matters of fact, and behavioral data is not self-interpreting.” It’s all “bullshit.”

    https://www.wired.com/2010/12/ff_dsmv/

    But our “mental health” workers are trying, desperately, to maintain their “dirty little secret of the two original educated professions,” which is that the primary actual societal function of our “mental health” industries is, and historically has always been, their systemic cover-ups of child abuse.

    https://www.indybay.org/newsitems/2019/01/23/18820633.php?fbclid=IwAR2-cgZPcEvbz7yFqMuUwneIuaqGleGiOzackY4N2sPeVXolwmEga5iKxdo

    https://www.madinamerica.com/2016/04/heal-for-life/

    “Of course, as with so much else about mental health, little is known about what works well.” For example, “the postdischarge suicide rate was approximately 100 times the global suicide rate during the first 3 months after discharge and patients admitted with suicidal thoughts or behaviors had rates near 200 times the global rate.” Forced psychiatric treatment is doing much more harm than good.

    https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2629522

    “It is critical to have more confidence that an intervention is successful before we start using related data sets to create machines that magnify those approaches and deliver more of the same. Business-as-usual is failing. Let’s not amplify our current approaches with super-charged algorithms without reflecting critically on what helps and what harms.” I couldn’t agree more. Our modern day psychiatric “holocaust” needs to be ended, not expanded.

    https://www.naturalnews.com/049860_psych_drugs_medical_holocaust_Big_Pharma.html

    • Trying to turn something unquantifiable into tidy facts and figures.

      Gone is the day of the Renaissance polymath who wrote beautiful poetry, painted, fenced, and made scientific discoveries.

      Today’s scientists and pseudo scientists can’t even enjoy art. Let alone make any.

    • “initiatives like crisis hotlines and anti-stigma campaigns focus on opening more portals into mental health services”

      Shit! But it is true. This is why I take exception to so many of the articles here: they are perpetuating a problem while they say they are making an improvement.

      What do other people have on this surveillance? There must be some other forum which deals with this.

  5. For those who haven’t seen it, please take a look at my more detailed illustration of what those in both technology and psychiatry are up to, the truly artificial intelligence that is rattling around in their brains. It’s called “Technology in Psychiatry: An Example of Artificial Intelligence” and is available at coleman.nyghtfalcon.com, and it is also cited as a reference by our very own James Moore in his regular podcast series.

    Thanks, Jonah, for an excellent summary. We’re gonna need to tell this story over and over….

    lee

  6. “Researchers are now claiming they can diagnose depression based on the color and saturation of photos in your Instagram feed and predict manic episodes based on your Facebook status updates”

    Just outrageous – they can’t even get inside people’s heads and find out if the problem you have with noisy neighbors is real. My situation lasted over a year. I signed myself in because I couldn’t take it any more. The doctor had a conflict of interest because he knew the landlord. Don’tcha know I HAD to be having psychoses so the landlord wouldn’t have to kick out the problem tenant even though other neighbors were having problems, too. They never ask WHY people do or say things. There is a story in the book by William Glasser about a girl others were convinced had one diagnosis – but the girl eventually started talking one word at a time. Turns out her father raped her. Like I say, no doctor can get inside your head and know what your experience was. They shouldn’t be handing out diagnoses like they do. There is an “assumption of guilt” and the psychiatrist, for the “mental patient” is judge and jury.

    • “they can’t even get inside people’s heads and find out if the problem you have with noisy neighbors is real.”

      It’s interesting you mention this. My mother-in-law is currently dealing with a similar situation with an extremely noisy neighbor who began targeting her with loud music in the middle of the night after she made noise complaints to her council rep. After almost a year of this, a couple of council reps visited her house a few weeks back during the heatwave that hit England and Europe. In northern England, air conditioning is rare, and my MIL had the curtains drawn during the day to try to keep out the 90 degree heat. What did they ask her? “How long have you been depressed?” They assumed she was depressed because she wasn’t running heat generating lights and she had the curtains drawn trying to survive the heat. Thankfully, she’s a firecracker and read them the riot act about how they were young and better able to tolerate the heat, and also how they had just exited their air conditioned car. But, we’ve seriously considered whether my husband will have to make an international flight home to England to straighten this out before she completely cracks up under the abuse she’s taking. It never fails to surprise me how people who are targeted by malicious actors end up being accused of being crazy. The elderly especially seem to get no empathy and find themselves diagnosed as depressed or demented and put in nursing care instead of receiving the legal assistance and advocacy they need and that is their legal right.

  7. This is such an interesting article, and of course very disturbing. It seems like a logical progression from what I experienced almost 20 years ago, when I was caught up in the system and it was the paper trail, distributed throughout the system, which “defined” the individual, thanks to the oppressive arrogance of psychiatry and their systemic ilk. So now it’s in techno-bytes and algorithms. Hmm, that seems rather soul-less.

    I wrote about this in an article recently published in 3 parts on Mad In Italy; part 3 just posted this morning. I think what I describe from my experience is the precursor to what is talked about here, regarding techno-sabotage of a person by having no concept of their humanity, and going by God-knows-what information. That’s all illusion-based reality, and it directly creates marginalization; it comes from the “mh industry” perspective, so skewed and false, all shadow projection.

    I believe it begins with self-delusions on the part of the clinician. Certainly in my case I know this was the truth, no two ways about it. Obviously, based on what is going on with these apps and remote diagnosing, it seems as though this is frighteningly common. How scary for the clients.

    Big problem in psychiatry: no regard for a person’s humanity. Where does this bogus information come from? I can only call them projections, because they seldom, if ever, have a basis in consensual reality (*consensual* being the keyword here). What happens is that these realities are made up about clients, by those who comprise “the system,” to favor those who consider themselves the “authority” on humanity, ironically enough, while acting as though a person’s humanity does not factor into their being-ness, aka psychiatry, et al. Tragic irony.

    https://mad-in-italy.com/2019/07/larte-dellessere-umani-parte-1/

    https://mad-in-italy.com/2019/08/larte-dellessere-umani-parte-2-sfidare-il-sistema-the-art-of-being-human-part-2-challenging-the-system/

    https://mad-in-italy.com/2019/08/larte-dellessere-umani-parte-3-the-art-of-being-human-part-3/

  8. I agree entirely with the OP, and it scares the **it out of me! Two comments, one personal and one which is probably boring and technical.

    1. “Subjective assessments, such as determining… if a person is exhibiting behavior within the normal range of human experience will always require a value judgment”

    OMG yes! I was diagnosed with Borderline PD years ago. A major contributor to my diagnosis:
    a) Psychologist was young (late 20’s?) and pretty mainstream, like many MH workers. She’d never even heard of polyamory, even though we were in a progressive university town with at least 150 out poly people;
    b) I’m polyamorous, had been for ~13 years, and was in 2 long-term (years!), committed, loving relationships (everybody knew and was happy with it).
    Result: “Promiscuous -> instability, impulsivity, victimisation.” Hmmm.

    Her experience of the “normal range of human experience” was very different from mine.
    Her “subjective assessment” was based on her mainstream point of view, and she made some pretty serious “value judgments” based on that:
    - Monogamy=THE WAY IT IS DONE, non-monogamy=WRONG, and *ahem* unstable.
    - Nobody could possibly *choose* to be poly, so I clearly couldn’t have done extensive reading, talking and thinking over years and chosen a lifestyle which, for me, is healthy and happy. I must have just impulsively fallen into bed with two men. (Over and over for years)?
    - Polyamory is only about sex (it really isn’t), and a woman having sex with 2 men=promiscuous and/or victimised.
    GRRRRRRR.

    2. I’ve been reading about data science a lot lately, as I’m trying to figure out what I might be able to do if I get well enough to get a job again *highly unlikely, but one has to have hope, right?*. A lot of it is very cool stuff, theoretically, but I find quite a few of the examples of how to apply it concerning.

    The scariest thing I’ve seen is that quite a few examples on educational sites are either done without paying any attention to the context, or don’t bother to discuss the context and how it affected the analysis. Get some data; clean it up; do x, y and z to it, and voila! Pretty graphs, a few numbers that are less than 0.01, and some comparisons that show that A is greater than B. Awesome!

    Except… If we use a drug trial as an example: How were the data collected? Were certain groups of people excluded from the trial? How many people were there originally, how many dropped out, and how did they deal with that? If all the patients had a certain diagnosis, how certain were they that it was correct? How did they make sure that none of them were, say, smoking pot here and there during the trial? Did any of them have a bereavement during the trial? Etc., etc. People often grab data sets off the web and have no idea how they were put together. Some of those data sets are essentially garbage, no matter how much you “clean” them.

    A lot of the courses in data science you can take on the web are pretty short, but they’re advertised as everything you need to become a data scientist. Hmmm. Statistics, as I’m well aware from my previous career, is very, very complicated. If you choose the wrong kind of test, the numbers you get from it might look reasonable but mean nothing. If you simplistically train people to use x, y and z whenever they have data that look a certain way, they may well do that in situations where t, q and g would be much more reasonable steps to take. You can’t just follow a series of steps without an in-depth understanding of what you’re doing. You may get results that seem useful, but…

    Even if we assume that the theory is good, the data were collected in the best way possible, and that, for example, tech *can* detect depression from the colours in your Instagram pics (sounds hinky to me, but I haven’t read the paper), how do we know that the people coding the tech definitely know what they’re doing? How do we know that, in the way of many coding projects, there hasn’t been a massive rush to get it out, with a few corners cut?
