“Joy’s smile is much closer to tears than laughter.” – Victor Hugo
It was July 19, 2016. St. Louis, Missouri recorded a mid-day high of 91 degrees. Las Vegas, Nevada was more than 10 degrees hotter at 102. A tornado warning was issued in parts of Iowa. Severe thunderstorm advisories were announced across much of the Southeast. And, Joy was born unto the world via that virtual birth canal we all know so well: Facebook.
Happy Birthday to… Who?
The bot was created by one Danny Freed, inspired by the suicide of his close friend a few years prior, and intermingled with his stereotypically limited understanding of the human condition.
“I had a friend, a really close friend, who I grew up with and went to University of Michigan with… and he was struggling with a mental illness, specifically depression and bipolar disorder… and he ended up taking his own life due to these diseases towards the start of my junior year in college… and so that really opened my eyes…” (Excerpt from a podcast interview with Danny on the Chat Bubble, about 2 minutes in)
Joy operates through Facebook Messenger, its brightly colored face, smiling incessantly (and somewhat blankly) away as it auto-chats you at least once a day to check in. In fairness, Danny reportedly balks at calling Joy a ‘bot,’ and refers to it instead as a “mental health journaling service.” And perhaps if Danny were focused on the design of the latter in a way that didn’t hopelessly blur itself with the former, this particular article would never have been written. But Danny is confused.
“The big piece about a conversation for Joy… and for just mental health products… is that the value in just the conversation itself… and just getting things that you maybe are keeping locked up inside your head and struggling with… getting them down onto paper or into a message… is I think valuable in itself… and so starting there is what I did… Just starting with this conversation.” (Excerpt from a podcast interview with Danny on the Chat Bubble, about 4 minutes in)
Danny seems to miss the point that one cannot have a conversation with a journal. A journal is a place to record one’s own thoughts, often privately and without immediate response. A conversation, on the other hand, takes place between a minimum of two at least semi-intelligent beings. However, he does hit on an important point here: Much of the actual help found in the mental health field (or anywhere) is rooted in simple, direct conversation and connection and the resultant space to share.
So, we have found common ground. Unfortunately, that common ground — that so much benefit comes from simply being present with one another — also offers a rather blatant contradiction to the essence of his creation. Because try as he may, Danny has no real connection to offer. In fact, what Joy brings to the table isn’t even a particularly good faked version of what human interaction is like.
“Be honest. You will never find joy if you pretend to be what you’re really not.” – Unknown
A Pretend Connection is Kind of Like No Connection at All
Now, I’m no computer programmer, but I am a pretty avid computer user and I have been for a long time. Long enough to remember interacting with Eliza back in the days of the Commodore 64. (Anyone else?) Truth is, some 30ish years later, I’m not too sure how much more advanced Joy is than its predecessor of the ’80s. Hell, I’m pretty sure the bots I put together on various MUSHs and MUDs in the ’90s were more advanced than both of them put together.
Joy’s responses span a wide range that includes both the inane and the invalidating. For example, one review of Joy written shortly after its release in August 2016 included the author’s frustration at being told to “let go” of anger. This is also a response I received today, June 7, 2017. (Specifically, Joy said, “Sometimes the hardest thing to do is let go, but usually it will free you from your anger.”) But, what of those of us who have damn good reasons to be angry? What of those of us who are energized to make change or take important actions because we’re mad?
Other invalidations I’ve seen fly by include telling me I’m “not alone” and that there are people who care about me. These pat little answers might seem like a good idea… to someone without too much of a clue of what it might actually feel like to be alone or unloved. Like, say, a fairly privileged college kid. But some people actually live in a reality without much in the way of family or other resources and really are pretty isolated. Denying that reality can be quite harmful, and is often driven by our own discomfort and desires to deny the harsh truths of our world.
Meanwhile, on the corner of the invalidating and the inane lies the downright dangerous, and Joy spends a shocking amount of time standing right there. Of course, there have been certain improvements over time (after much feedback)… if we are defining ‘improvement’ as at least openly acknowledging a complete lack of any ability to be helpful. Take, for example, rape. In the first image to the right, you’ll see Joy’s old way of responding to a disclosure of this nature.
However, hop down to the next image, and you’ll see that it has been updated to propose another avenue for support rather than responding directly. Problems with national hotlines aside, that certainly does seem a step in the right direction. Unless one reads on, and takes a gander at how Joy responds to domestic violence. Meanwhile, use language other than ‘rape’ (as so many survivors do, particularly when they’re first coming to terms with what happened), and Joy gets lost again.
For example, when I typed in that my husband “forced me to have sex last night,” Joy responded that it had “a few tips to help me feel happier,” and asked me if I wanted to hear one. When I said “Yes,” it offered me a quote from Lemony Snicket about the benefits of a good session of weeping.
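For anyone wondering how a bot can whiff that badly, here is a minimal Python sketch of the kind of Eliza-style, keyword-and-script matching Joy appears to rely on. To be clear, this is my own illustration, not Joy’s actual code, and the canned replies are paraphrased from the exchanges described above. The point is simply that the moment a survivor doesn’t use the exact word the programmer anticipated, the bot falls straight through to its generic happiness script.

```python
# A minimal sketch of Eliza-style keyword matching. This is my illustration,
# not Joy's actual code; the canned replies are paraphrased from the
# exchanges described in this article.

CANNED_RESPONSES = {
    "rape": "I'm so sorry. I'm not able to help with this directly. "
            "Please consider reaching out to a crisis hotline.",
    "anger": "Sometimes the hardest thing to do is let go, "
             "but usually it will free you from your anger.",
}

GENERIC_FALLBACK = "I have a few tips to help you feel happier. Want to hear one?"


def respond(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in CANNED_RESPONSES.items():
        if keyword in text:
            return reply
    # No keyword matched, so the bot falls back to its default script,
    # no matter how serious the disclosure actually was.
    return GENERIC_FALLBACK


print(respond("I was raped"))                       # referred to a hotline
print(respond("My husband forced me to have sex"))  # offered a happiness tip
```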
Also of note is what is promoted as the ‘centerpiece’ of Joy’s little journaling heart: the mood report. The current report style leaves you staring at a bunch of emoticons corresponding to the various dates of your recent entries. Now, part of me wants to rant about how limited and lacking in nuance this sea of little yellow smiley and frowny faces can be. Or about the fact that (in his Chat Bubble interview) Danny referenced Joy separating out our feelings into a “few emotion buckets.”
But, ultimately, I have to admit that, while I find the ‘bucket’ idea distasteful (as if our lives were a bean bag toss of emotions), this is indeed probably the most useful feature, at least to those who find it worthwhile to be prompted to track their mood from day to day and compare it to what might have been going on in that moment. (Wave your cursor over the dates on your mood report, and it also pops up the narrative comments you offered at that time.) If only it weren’t all buried under such a dishonest and confused interface that pretends to be something else entirely.
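And since I’ve credited the mood report as the one semi-useful piece, here is roughly what such a feature amounts to under the hood: a date-keyed log of ‘emotion buckets’ with the free-form journal text kept alongside for the hover pop-ups. Again, this is purely my own sketch of the concept, not Joy’s implementation, and the bucket names and emoji are hypothetical.

```python
# A purely illustrative sketch of a mood report: each day's entry is reduced
# to an "emotion bucket" (rendered as an emoji), with the narrative comment
# kept alongside so it can be shown on hover. Not Joy's implementation;
# bucket names and emoji are hypothetical.

from dataclasses import dataclass
from datetime import date

BUCKET_EMOJI = {"happy": "🙂", "okay": "😐", "sad": "🙁"}


@dataclass
class MoodEntry:
    day: date
    bucket: str  # one of the "few emotion buckets"
    note: str    # the narrative comment shown when hovering over the date


def mood_report(entries: list[MoodEntry]) -> str:
    """Render one line per entry: date, emoji, and the hover text."""
    lines = []
    for entry in sorted(entries, key=lambda e: e.day):
        emoji = BUCKET_EMOJI.get(entry.bucket, "?")
        lines.append(f"{entry.day.isoformat()}  {emoji}  ({entry.note})")
    return "\n".join(lines)


print(mood_report([
    MoodEntry(date(2017, 6, 6), "sad", "Rough day."),
    MoodEntry(date(2017, 6, 7), "okay", "Spent the afternoon testing Joy."),
]))
```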
“This is it. No more fun. The death of all joy has come.” – Jim Morrison
If a Disclaimer Gets Lost in the Forest, and No One’s There to See It, Does it Really Exist?
Danny does have a response to all this. In a Twitter conversation with a concerned member of the public, he offered the following:
“Joy is not marketed as a replacement for a therapist, psychologist, or a trained professional. Rather, it’s meant to be a supplement to these professionals and is geared towards more mild issues.”
Funny, this focus of his on ‘mild issues,’ given the decidedly unmild inspiration of his good friend’s death. Or the numerous articles in which Danny is cited as prioritizing getting “more people who are in need to see a trained professional.” (But, you know. Remember. Danny’s confused.)
However, if ‘mild issues’ (defined as what, I wonder?) is truly where it’s at, one would think that Joy might come with some sort of clear forewarning of its purpose and limitations. And, lo and behold, in an article in Venture Beat entitled, “The mental health tracker Joy wants to get more people professional help,” author Khari Johnson says that it does:
“A health-related tool like Joy comes with a boatload of disclaimers. Joy does not replace a therapist, is not FDA approved, and should not be used in an emergency.”
Well, apparently that ‘boat’ sailed off and got lost at sea, because I have no idea where those disclaimers are to be found. They certainly don’t seem to be on Joy’s Facebook page.
Oh, wait! Found it! If you merely:
- Go to Joy’s Facebook page, and then…
- Instead of initiating with Joy, you go to the Facebook ‘About’ page, and then…
- You click on the Joy website listed there, and then…
- You click on ‘Terms’ on the website, then you find all those aforementioned disclaimers
As easy to track down as any four-step process no one ever told you existed! And yes, the disclaimers are extensive, amounting to about seven pages’ worth. Here’s how they start out:
“Please read these Terms of Service (collectively with our Privacy Policy, which can be found on our Privacy Policy page, the “Terms of Service”) fully and carefully before using http://hellojoy.ai/ (the “Site”) and the services, features, content or applications offered by Hello Joy, LLC (“we”, “us” or “our”) (together with the Site, the “Services”). These Terms of Service set forth the legally binding terms and conditions for your use of the Site and the Services.”
So, these terms that are on an entirely different website than the one where I’m most likely to actually be interacting with Joy are legally binding? Huh. That makes perfect sense! I particularly like clause number 4 (capitalization is all theirs):
“IF YOU ARE CONSIDERING OR COMMITTING SUICIDE OR FEEL THAT YOU ARE A DANGER TO YOURSELF OR TO OTHERS, YOU MUST DISCONTINUE USE OF THE SERVICES IMMEDIATELY, CALL 911 OR NOTIFY APPROPRIATE POLICE OR EMERGENCY MEDICAL PERSONNEL”
And do heed the warning noted above to look at the privacy disclosures. They let you know that anyone who has access rights to the Joy page has access to your entire conversation. I mean… journal entry. The website also includes a menu item targeting therapists. Yes, therapists. Because Joy is now marketing itself as a communication ‘tool’ between the therapist and the therapized.
At least they offer full details on the pros and cons (and downright risks) of attaching licensed clinical professionals to your personal Joy account. Oh, wait. No, they totally don’t. Great.
“Your joy is your sorrow unmasked.” – Khalil Gibran
Last name Ful. First name Joy.
Now, you’d think that someone who’s signing up for Joy would be led directly to all the disclaimers and disclosure information before proceeding. Maybe even forced to at least pretend to read them before having a first conversation? But, you’d be wrong. Here’s what happens instead when you first sign up:
- You go to Joy’s Facebook page
- You click on ‘send message’
- Your chatbox opens, and you are greeted with the following: “Hi there! I’m Joy. You can think of me as your personal happiness assistant. I’ll check in on you once a day to see how your day is going and over time-hopefully make your days more enjoyable! Want to learn more?”
- You are offered the option to either click on ‘Get started’ or ‘Learn more’
- If you click on ‘Learn more’ you get this: “Ok…my first name is Joy. My last name is ful. 😉 But really, I am here to make your life a little more joyful. I’ll help you track your mood over time and keep journal entries for you to look back on.”
- At that point, there’s really nothing else to do but click ‘Get Started’ (unless you’d rather click ‘Learn more’ and get the same exact message over and over, which can indeed be vaguely entertaining on a short-term basis for reasons I can’t quite explain).
- Once you’ve clicked ‘Get started,’ Joy is going to ask you if you want to link your therapist up, too. Fun, fun! (Listen to the full Chat Bubble podcast to learn more about Danny’s vision of growth for his beloved one, up to and including a way to monitor your employees’ wellness.)
- Make your selection (which, if you have any sense, will mean clicking on the ‘Not Right Now’ button… because there simply isn’t a ‘Hell no, never!’ choice available), and now Joy is going to tell you that she wants to ask you some questions to learn about your “mental state.” Depending on just how well the system is working at that particular moment, she’ll ask you one or more questions about particular ‘symptoms’ you’ve experienced over the last 30 days.
Once that’s done, you’re good to go. Notice that at no point is the new user directed to any disclosures or warnings, or even a clear description of what the heck Joy is meant to offer or to whom.
“The secret of joy is the mastery of pain.” – Anais Nin
Meeting Joy’s Maker
Now, if you’re finding yourself a little irritated by all this, and wondering about contacting Danny directly to give him some feedback, please feel free. His email is [email protected]. But don’t expect a satisfying response.
I had a brief stint as one of Danny’s Facebook friends. He personally encouraged me to reach out to him with any questions. I sent him eleven, including:
- Why not just create an app that is much more frank about being nothing more than a robotic diary recording app? Why do you want Joy to suggest that it’s there to talk or listen?
- Several people have suggested that it’s a really bad idea to put Joy out when it’s so limited, and that putting it out to some beta testers would be much more responsible than sending it out to unsuspecting people who really may be struggling. What’s your response to that?
- Did you consult anyone that you’d have reason to think was an ‘expert’ as you were developing Joy? If so, who? Clinicians? People who’ve been suicidal or otherwise struggled themselves? Why’d you choose who you did, and if you didn’t consult anyone, why not?
- Do you think there may be something just fundamentally contradictory about even suggesting that a programmed bot that responds based on key words and simplistic ideas of emotions could really ‘listen’ or ‘be there’ for someone? Why do you think that that is better than not having someone to talk to?
- What are the risks you see in Joy being out there?
- What if you discover that Joy is doing more harm than good?
Here’s his response:
Hi Sera,
Thanks for your note and feedback. Great to hear you are also passionate about improving mental healthcare.
I hear your points about Joy’s missteps loud and clear. I have spent many hours since you posted on Facebook yesterday working to make Joy smarter so she can better handle the various cases you posted about. I will continue to do so moving forward and would love to have you provide more of these types of phrases/conversations so that I can make sure I’m training Joy on the right things. This type of feedback is extremely helpful and I welcome it via email anytime.
As you have pointed out, there is a lot that Joy is not good at yet, however I am optimistic for the future mostly because of the numerous emails I get every week from users who graciously thank me for creating Joy and share the positive impact it has had on their lives. I am also in the process of working with a few different academic institutions and their clinical psychology + computer science departments to help test, improve, validate, and build upon Joy.
I hope you can recognize that despite the current flaws and areas for improvement with Joy, I’m trying to do good here and that you will support and help me rather than bring me down. This world needs more kindness and to truly make a difference we must work together and encourage each other.
Best,
Danny
Normally, I wouldn’t post someone’s email to me publicly without permission, but I’m pretty sure this is little more than a form letter. Clearly, Danny skipped over the most challenging parts of my email that spoke to (and asked him to think about) the fundamental disconnect between his stated intent and his approach. And, no matter how nicely worded his message may be, I somehow do not feel compelled to ‘encourage’ his work, and would like nothing more than to ‘bring him down.’ Because ‘trying to do good’ is not nearly good enough.
Also discouraging is the fact that this exchange occurred in January, and several of the Joy screenshots included in this article are from six months later in June. This all leaves me with so many questions, including who on earth is contacting him and what they are reporting to have found useful about this bot gone wild. Is it the Chris Farley picture? (Joy also periodically offers up a perplexing pic of Pokemon…) Of course, it’d be easier to ask him if he hadn’t immediately defriended me following our communication.
“Dracula did bring a hell of a lot of joy to a hell of a lot of women.” – Terence Fisher
An Actual Computer Programmer Weighs In on Joy
Like I said earlier, I’m no computer programmer (nor am I convinced that Danny Freed qualifies either). So, I decided to talk to one. Meet Chris Hamper, a Software Engineer with over a decade of professional experience who is currently working on projects utilizing Machine Learning and Conversational Interfaces. Here’s how that went:
Chris, can you tell me what you think of the idea of an automated bot that attempts to be a ‘support’ of sorts for people who are struggling? Is it an idea even worth pursuing?
While I admire the intention that Joy’s creator appears to hold — to help people who are in emotional distress — I’m not sure I’d call it a good idea. My personal opinion is that people who are going through a difficult time have the greatest need of genuine, healthy human connection, not an attempt at emulating it through a piece of computer software.
Even if the overall concept were to be a good idea (which it doesn’t seem like it is), what do you think of Joy’s programming? How sophisticated is it?
While I’d need to delve deeper into Joy’s construction to accurately evaluate how sophisticated it is, the screen captures I’ve seen of absurd interactions with Joy show that it still needs a great deal of improvement to meet the creator’s objective. My first impression of Joy is that it is closer to the Eliza end of the bot spectrum. It appears to key off of specific words and follow a script in generating its responses. While that is fine for some tasks, it seems sorely lacking for fulfilling the image of Joy that is being represented in some articles: as an empathetic friend that will help you through times of distress. It felt, to me, more than a bit irresponsible for Joy’s creator to have released it for public use in its current state. It’s concerning that Joy took over 6 months to even accommodate simple human conversation quirks, like use of the word “nope” rather than “no.”
What do you think of the fact that Joy seems to essentially be getting ‘beta tested’ on unsuspecting Facebook users who might be in substantial emotional distress?
These are definitely treacherous ethical waters. For bots to develop and improve, they generally need additional data inputs and real-life testing. Training and tuning them using a body of end-users is pretty much a requirement. However, openly testing a buggy bot that replies in ways that can be perceived as uncaring or hurtful to people who may already be having a seriously difficult day seems irresponsible to me. There aren’t clear explanations that the bot is a work-in-progress, and that it could respond in ways that would be interpreted as hurtful or distressing by the user.
Of course, this isn’t the only effort in this direction. Have you read the article ‘When Robots Feel Your Pain’ about bots designed to do a number of things up to and including diagnosis? Thoughts?
An AI that is capable of carrying on an open-ended conversation has long been a “holy grail” of the Artificial Intelligence field. The Turing test was proposed all the way back in the early 1950s as a way of validating an AI’s ability to converse naturally with a human being. It is a problem that is unbelievably complex, and has not yet been truly solved.
With more recent developments in Deep Learning and Natural Language Processing, computers can be quite successful in responding to a specific body of questions. [For example, see this article about a professor who secretly used a bot as his teaching assistant through one semester.]
However, even the most sophisticated bot can struggle to interpret more subtle nuances involved in human conversation. Being able to respond to any topic that might come up in a person’s life is just not yet achievable.
The topic of using AI to recognize the emotional content and intent of facial expressions is quite interesting. However, I am concerned about the idea of using such a system to categorize and label people with a psychiatric diagnosis. First off, is this really what’s best for a person in distress who is approaching a clinician for support in a difficult time? The proliferation of the idea that assigning a psychiatric label is of prime importance seems harmful, to me.
On top of that, what happens if a computer states that a person “has schizophrenia” and the person disagrees? Do they get locked up in a psychiatric ward for “not being capable of understanding their ‘mental illness'” (something that already is happening in our current psychiatric system)? One big concern with AI, in general, is that people often view computers as infallible, and can extend that belief to an AI (See this article called ‘Machine Bias’ for an example of racial bias in AI). An Artificial Intelligence is no less fallible than a human one, and in many cases is inferior in that regard.
Returning to the topic of facial expression recognition, there is great variation in how different cultures express their emotions, and the degree to which they consider expression to be acceptable. Will we create psychiatrist AIs that are “racist” or shortsighted, too? And that doesn’t even touch on the broader question of whether or not assigning psychiatric labels to people actually has value.
“You pay for joy with pain.” – Taylor Swift
A Greedy God is Born
Creating Artificial Intelligence is a godlike act, at least in that the AI is typically made in the image of its maker. Joy basically is Danny. All his good intentions, hopes and dreams, naivete, ignorance, arrogance, and blind spots are there in equal measure. Unfortunately, that makes for a reckless product that Facebook is plainly irresponsible for promoting.
But playing god is addictive, and so now comes the greed. For the first time ever, while I was playing the ‘mess with Joy’ game, I was asked if I wanted to pay for its help. Nothing exorbitant (and only if I wanted access to a little something extra on the side). A mere $45 if I choose to pay by the year, and $5 per month otherwise. I suspect the expanded uses for therapists — and much of what else is to come — are aimed at making profits, as well.
“Joy could just present these manual mood options upfront, but the reason I’ve sort of shied away from doing that as the default experience is that I do think there is a lot of value in having a free form way to express yourself… It’s a bit more expressive and representative of how someone’s actually feeling versus just clicking a button and saying that they are feeling like this emotion…” (More from a podcast interview with Danny on the Chat Bubble)
Yes, Danny. Yes. All true. But speaking into what is essentially a black hole that tells you it’s “here to listen,” but isn’t really anywhere at all is serving your ego and sensibilities more than any actual purpose. Joy is a fake and an impostor. Another marketing tool for treatment. A money maker masquerading as a benevolent helper. We’ve already lost so many lives to those, but why would you even know that? This isn’t your world. You don’t belong here, at least not without some humility and a guide.
As usual, it’s the people who are most in need — upon whose backs your dollars are to be made — who are also the ones most susceptible and likely to be lost to your ugly game.
“Death is joy to me.” – A.J. Smith