Ian Tucker is a professor and director of impact and innovation in the School of Psychology at the University of East London. His expertise is in digital media, emotion, and mental health; he has published over 45 articles and book chapters and is the author of a monograph entitled Social Psychology of Emotion. He is currently writing a monograph, Emotion in the Digital Age, for Routledge’s Studies in Science, Technology, and Society series while working on several projects involving technology and mental health. 

In this interview, we discuss how Ian became interested in studying relationships between technology, emotion, and mental health. He addresses some limitations of traditional psychological approaches to these topics and overviews some of his main areas of concern with how digital technology is being used to track people’s emotions and regulate their mental health.

Drawing on philosophers like Gilbert Simondon and Henri Bergson, Ian also explores how digital technologies are being used within peer-to-peer communities to create information archives about experiences with distress and medication in ways that offer collective support. 

The transcript below has been edited for length and clarity. Listen to the audio of the interview here.

Tim Beck: I noticed you have a broad range of transdisciplinary research interests. Could you share a little bit of information about your background?

Ian Tucker: My background is in psychology. I think the interdisciplinarity of my work, or the transdisciplinarity of it, comes from my PhD work in the early 2000s at Loughborough University. It was a really interesting place at the time because there was a group in the social sciences there who had developed an approach to studying psychology as interaction, known as discourse analysis, which is now really well-known.

In a slightly different department where there was also a psychology element, there was a group of us working with Professors Steve Brown and John Connolly. We were interested in psychology as an embodied, material experience.

In a sense, although I’ve been trained in psychology, cognitive psychology, and neuropsychology and all those things as an undergraduate, it was very much the social elements of psychology that I was interested in. Studying psychology in the environments in which it’s experienced—an expanded view of psychology, fully acknowledging the stuff going on in the mind but also really interested in people’s psychological experiences as materially embodied and grounded.

I did a lot of discourse analytic work in my PhD, but then it broadened to include materiality and the environmental aspects of psychology. There were debates at the time about how language can do a lot of work psychologically, but what about the material environment, the body, and these sometimes ineffable elements that you can potentially incorporate? And I think that’s where philosophy potentially comes in.


Beck: Your research interests focus on relationships between emotions and technology and between mental health and technology. Are these interests that you’ve always had? Is this something that comes from any work that you’ve done in the mental health field?

Tucker: The interest in technology is more recent, so I’ll start with the mental health strand. I think that possibly comes from two places.

My late father was a psychiatric nurse. And in the UK—well, it would have been the same in the States as well—we used to have these grand Victorian mental health asylums out in the countryside. My father worked in one of them and he used to take me up there. You could just freely access it at that time. These trips were really evocative experiences as a young child, walking around this huge building. I can still remember the sights and the smells of the environment.

I’ve had an interest in mental health from an early age. Then, as an undergraduate, I did a final year module that was run by Professor Paula Reavey. She’s done a lot of work in mental health and the environment, as well. That was a really good critical take on mental health that raised quite a few issues around how we conceptualize mental distress and the issues around current diagnosis and medicalization.

As I say, experiences of mental distress are ongoing—they don’t just happen in the clinic or in mental health services. They’re also with people as they go through everyday life.

So I was also interested in that everyday life aspect of it: where people were experiencing distress, and how their experiences of distress are shaped by the places they inhabit and the environments they are in. I’ve done empirical work on the impact of community centers on notions of recovery. I’ve done work on people’s experience of their home environments and on organizing and managing their home spaces in relation to their distress. 

The interest in technology has emerged more in the last four, five, or six years when it became quite apparent that the environments and the locations of experiences of distress were becoming increasingly digitized. Environments being increasingly mediated by digital technology seemed to be an interesting area and an important area to start to study.

Beck: When I think about my own work, and I think back to where this work started, it was reading the anti-psychiatrists like Foucault and Basaglia. They’ve always had this interest in how institutions can regulate the emotions of the patients who are there. It’s not just their thoughts that become institutionalized, but people’s emotions become institutionalized and their bodies become disciplined in the way that Foucault talks about them. But with de-institutionalization things open up and [psychiatry] goes out into the community. It would make sense that we would see uses of digital technology in those areas to expand the reach of that a little bit. Is that how you see it as well?

Tucker: Yeah, definitely. I remember reading Basaglia and Foucault’s Madness and Civilization. Like you said, with deinstitutionalization, the move to community care was deemed to be a way of integrating people more into communities. Keeping them institutionalized in these out-of-town places meant they were removed, and that was an othering process.

Deinstitutionalization is about: what if we just bring people back to the community? Then it will all be okay. And of course, it wasn’t, so it became a real interest to see what community care means for people. Where is “the institution” located? 

The location of mental health care was so prominent when it was in institutions. It was so spatialized in these huge Victorian institutions. Where were the locations in community care?

What does it mean then for digital technologies to be increasingly used in mental health services? And how does that change the operation and the way that community care works?


Beck: I have noticed in your work you focus a lot on a few particular French philosophers like Gilles Deleuze, Gilbert Simondon, and Henri Bergson. And I’m curious if there is anything about their thought in particular that you’ve found useful for thinking about these issues.

Tucker: Halfway through my PhD I’d interviewed many people using health services in the Midlands of the UK. I was interviewing them in community centers but also talking to them about other spaces in which they’d spend their time. They were talking to me about their homes and how they organized them and what they did there. 

I remember one day with my supervisor at the time he introduced this notion of territorialization, from Deleuze and [Felix] Guattari. Territorialization is about the organization of space, and of course, Deleuze and Guattari were doing a bigger thing with it, but it still felt useful to think about this notion that space isn’t fixed. It’s something that requires an ongoing process of ordering between people and objects and the material environment. That was my way into Deleuze. 

Then I read around that and the overarching ideas around process spoke quite a lot to me, as they still do actually. The idea is that our philosophies and the way that we frame experiences are an ongoing process, something that needs to be continually enacted and achieved, and we can’t reduce particular behaviors or experiences to a computational model of the mind. 

I like what Bergson writes about the body as an image, and how we can know our bodies in relation to other bodies. Our bodies are something we know as a subject rather than just as an object. It’s a particular image for us. That was of interest in relation to how people talk about their bodies in relation to medication, how they try to make sense of the material impact of medication. I’ve written some stuff around medication and experiences of medication. So I was really interested in that.

The Simondon interest is a continuation of that. Simondon had a particular focus on technology, which I found really interesting. There is also this idea of psychological experience depending on a simultaneous relationship to ourselves and to others. There’s this notion of psychic and collective individuation that suggests we are simultaneously individual and collective. We’re carrying the collective with us. That resonated with me in terms of data and data practices. As we interact with digital technologies we are constantly adding to the collective as we generate data, or the technologies we engage with generate data, about our activities. 

This is feeding into the algorithms of big tech, et cetera, broadly speaking. I found process philosophies to be a rich resource for understanding the experiences of mental health and emotion. And there are particular concepts used by Deleuze, Simondon, and Bergson that I’ve found useful in relation to specific research questions in particular projects.


Beck: I have had the same thought with Simondon. We have the bio-psycho-social model in psychology—which is quickly becoming one of the most popular models—and I feel like it tends to be talked about as if these were separate realms that interact in a way that we don’t really understand. But Simondon does a great job of, like you said, talking about how these processes of individuation happen simultaneously. By focusing on one, we ignore the implications of the others. His thought, for me, was always such an interesting way of combining these different areas without reducing any one area to any of the others.

Tucker: Yeah. We’re so conditioned to think of the individual and start our analysis in terms of the individual. Certainly in psychology, it is “how does the individual work?”

Simondon wants to take us back from that because you can’t start with the individual, you start with the collective processes of trans-individuation before an individual has emerged, and then you try to understand that process of emergence. It’s that stepping back that I think is really useful, which isn’t something we always do in psychology.

We are often very focused on “the individual.” That’s our starting point: “how does this particular psychological phenomenon operate?”


Beck: There’s been discussion on Mad in America lately about new applications of digital technology to issues related to mental health and distress. The FDA, for instance, recently approved a version of a digital pill, which contains sensors that detect when it’s been ingested, and courts and various authorities can track this. There are new apps being released all the time that track the mental health of their users, collecting data on their activity and sometimes even their physiology and biology. Then there are the virtual therapy apps like Talkspace that are being used with great popularity now with social distancing guidelines. Then, of course, there are applications of artificial intelligence to making diagnoses and suggesting treatments. So the question that I wanted to ask you is: from your perspective, what do you see as some of the most concerning ways that digital technologies are being used in professional mental health?

Tucker: As you articulated really well, this is a huge area, and I think there are many things that need to be considered in relation to these technologies. I know the U.K. context better than I know the U.S. context.

First, I think these things are often being driven by commercial organizations. You can go to the app store and find 10,000 mental health-related apps, but very few are authorized and regulated by our National Health Service. To get something regulated by the National Health Service, there’s a whole process it needs to go through, including randomized control trials, which are seen as the “gold standard” of testing. So there are not many official apps, but there’s so much unofficial stuff out there. It’s almost as if the regulators can’t keep up because things are coming out so quickly.

One of my concerns is actually how these technologies are used by professional health services. For example, are they used as a supplement, or are they increasingly used in place of in-person care and support? I think they can be helpful as a supplement, but obviously there are concerns if the thought is that we no longer need in-person services. This is a really dynamic situation; we don’t have answers to any of these things yet, but this is how it is playing out.

Obviously, as you alluded to at the start with digital technologies, there is a really broad range of ways they’re being used. I don’t know too much about the digital pill. My immediate reaction to that is obviously, who’s in charge of the data? How is the data being used? Is it safe, physiologically?

Also, what model of mental health and distress is underpinning that technology? It’s the idea that what we need to know is the physiology of the body to understand how people are feeling. And, of course, the body plays a major role and so does physiology, but there’s also a whole realm of sociocultural influences around feelings of distress. There’s also a whole body of literature on the impact of trauma and negative life experiences and the correlations with developing mental health problems. It’s very much a medical model and an individualistic model that’s being used.

There are other areas in which digital technologies are being used, particularly in relation to peer support. I think that’s often happening in nonclinical settings, in non-formal mental health settings, maybe in the charity/non-profit sector. These can be particularly useful because we know that developing feelings of belonging and connecting with people that have had similar experiences can be really beneficial. There can be challenges with these as well, but they can be beneficial. And obviously one of technology’s greatest strengths is its power to connect.

I think the study of digital technology and mental health needs to be categorized into different areas, and each one judged on its merits: what’s the underlying model of mental health being drawn on, who’s in charge of it, and how is the data being used? Will it change an individual’s relationship with in-person services? How do individuals feel about and experience these new developments? We should talk to people who experience mental health problems, see things from their perspective, and hear their experiences. I see great value in people’s experiences and in focusing research on them.


Beck: On the one hand, one of the issues is just the collection of the data, how is the data being collected and stored? Who has access to that data? That’s an important issue because as more data get collected, it’s going to be harder to keep track of where this data is going and where it’s being managed. 
On the other hand, and maybe this speaks to some of your theoretical interests and research interests a little bit more, how do we interpret that data? What model are we applying to that data? Because you can collect all the data in the world, but if you don’t have a way of making sense of all of that data, then it’s hard to know where to start or even if it’s useful.
And it’s probably going to be the people who have the means of making sense of that data that can benefit from it the most, which requires pretty complex algorithms that not a lot of people have access to.

Tucker: Yeah, absolutely. The model underpinning the data collection or the data generation—actually I prefer to talk about data being generated rather than collected; there’s a different theoretical premise there—and then what it’s used for.

To pick up on the point about AI, there’s a real push towards that. You’ve got virtual chatbots that try to emulate the therapeutic relationship, and there’s huge potential there. I think people see huge potential in that because obviously it’d be much more cost-effective and much cheaper to deliver services that way, but there are huge questions regarding whether people feel they can develop an empathetic relationship with a virtual engine.

I did a project on empathy and artificial intelligence in relation to mental health. It did not involve developing any particular technology. It was more about exploring the potential to develop a chatbot that would utilize principles of peer support. But instead of interacting with a peer, you would be interacting with an AI agent that’s been trained in the principles of peer support. So a complex, interesting project. We ran some workshops and we worked with people who had experienced mental distress to try to understand what they felt about this.

Some people were really keen on it and felt that you could develop an empathetic relationship with an agent, a virtual agent. But there are certain things that they felt would be important to do that. For instance, they felt it would be important that you could personalize the bot, so it felt more individual to you and so that you could engage in a conversation. The immediacy of it could be really beneficial. If you need support at three o’clock in the morning, you can just click on it. Maybe it would learn. A key finding that came through is that it would be beneficial if it learned what worked for you, and it had this diary element to it. 

People also had concerns about it. They felt maybe it would work as a supplemental thing, but that it would be limited. They questioned whether it could be trained to respond to all the complexity and nuance of experience and emotion in the way a trained therapist would. But that’s definitely a significant area of research: developing the use of artificial intelligence and machine learning.


Beck: With the social distancing guidelines that have been imposed in all areas of the world recently, there are already reports coming out that more people are downloading mental health apps and turning to apps, even ones that don’t have another person on the other side, for, as you said, the immediacy of connection. Even if this is not a connection with a person, it creates a feeling of connection.
I wonder, moving forward, even after these social distancing guidelines ease a little bit, where do you see this going? Do you think that the use of mental health apps is going to continue to increase? Do you see this idea of interacting with an interface as coming to replace other forms of therapy? 
These apps are coming out so quickly that no one can really track what they’re doing. We don’t know, on the one hand, if they are useful—can they actually help people? On the other hand, we don’t know who is behind these technologies or who’s benefiting from the data that’s being generated via these interactions. What is your sense of where this could go in the future?

Tucker: It’s definitely only going to increase as far as I can tell. I think the quantity of mental health-related apps won’t diminish, as long as people are using them.

There are lots of mental health-related apps. A lot of them are focused on “wellbeing.” I know these are problematic concepts—but some focus on “wellbeing” as opposed to helping people with existing mental distress and those who have experienced “mental ill-health.” So there is a distinction there. 

In the US, the FDA, and in the UK, the NHS, are doing more research about these apps, and I think certain ones will be regulated. There’s a web-based app in the UK called Big White Wall. I think it may well be international, so you might have it in the States. Here, you can have online therapy, and it has a chat room in it; the NHS supports and regulates it, and they will start doing research on it. They’ll start doing “gold standard” randomized control trials. That takes a long time.

I think this proliferation of mental health-related apps that anyone with a smartphone can access will continue. I don’t think they will all become part of formal mental health services because there’s always going to be a lag until a mental health service authorizes the ones that are deemed to be effective.

Over time, if the therapy or whatever aspect of care and support an app focuses on is deemed to be effective, then there’s a real possibility that it’ll end up reducing the provision of that aspect that was previously supplied through in-person services, in the same way that many industries are using automation. Not that mental health is an industry, but if automation is deemed to be effective, then it will be used. And I think there are real concerns about that, but it’s something that needs ongoing research and ethical oversight. As I said before, I can see a lot more use of machine learning and artificial intelligence to provide mental health support.


Beck: We’ve had to deal with corporate influence in the mental health field since the beginning. It seems like this could open the door for some private interests to make a stake in the mental health market in the U.K., or is that a mischaracterization?

Tucker: It’s not my area of research, but the link between industry and the publicly funded NHS goes back quite some time. The anti-psychiatrists wrote a lot about the medicalization of distress, and there’s lots of work in critical psychology and psychiatry about the influence of big pharma. So yeah, there are people who would say there has been a link and influence.

I think that will continue because the NHS, for example, doesn’t have its own technology development company. It’s going to recruit and work in partnership with technology companies who are seeking to develop these things. These technology companies are largely commercial enterprises. In a way that’s no different from the pharmaceutical companies who for the last 60, 70, 80 years have been talking to the health services and saying, look, we’ve developed this medication. This can help you in X, Y, and Z.

I don’t see that much of a difference. I don’t think it’s a new thing; it’s just that now we’re maybe not talking about a medication, we’re talking about an app or something. There are key similarities there.

Beck: There’s been a lot of talk recently about ideas like surveillance capitalism, digital capitalism, and the attention economy. All of these terms get thrown out to try to make sense of the neo-liberalization of the market and the neo-liberalization of the world and of digital technology. 
With all of these ways of talking about it, do you have a particular frame that you use to make sense of this phenomenon? I saw that one of your studies talks about “surveillance society.” Is that something that draws on Zuboff’s concept of surveillance capitalism, or are you taking it in a different direction?

Tucker: Not so much Zuboff’s concept. I have done some work on surveillance, but not specifically in relation to mental health. We were interested in how people were experiencing surveillance.

At that time, it was about visual surveillance more than what’s known as “data-veillance” or digital surveillance. There’s a concept of “affective atmospheres” in social geography, cultural theory, and the social sciences that we used to try to understand the fluidity and the complexity of how surveillance can be experienced by people as they go through their everyday lives.

We just finished a book on emotion in the digital age that’s due out later this year. In that book, we did write about the challenges around this mass datafication of emotion. It’s such an issue in contemporary society and on a global scale at the moment, but there’s still so much we don’t know about how people experience it. The use of facial recognition has been in the press quite a lot recently. We were writing about it in terms of emotion and how it’s used by law enforcement.

Face recognition and AI are used by commercial enterprises to try to identify and interpret emotions, but a lot of it’s quite problematic. It goes back to this idea of being able to read emotion on the face. We know that there’s a lot of debate about the validity of the studies upon which some of these are developed. Others have critiqued whether the universality of emotions is a valid idea: that emotion is universally expressed. Obviously we do express emotions through our bodies, but that’s only one layer of an emotional experience.

Surveillance is happening a lot. In Piccadilly Circus, in London, they have face recognition technologies that try to pick up what people are feeling and to use that. This is always the trope that’s used, but it’s very much that Minority Report idea. I just don’t know how much people realize it’s happening.

I’ve got a PhD student that’s working on a project in relation to that, to try to see how people feel about [public surveillance], but it’s a huge thing. I think there’s quite a lag in terms of people’s awareness and understanding of it.


Beck: The concept of emotion has traditionally been such an X factor in the history of psychology and psychiatry. There have been so many attempts to try to understand it. One of the main issues coming up in regard to police brutality is the argument that there seems to be this breakdown in one person’s ability to read the other person’s emotions. There is a reaction on the basis of fear—perceiving an assumed threat without looking into whether there’s an actual threat. I think you started speaking to this idea that it’s not just going to be police officers that make these inferences, but there could be technology throughout our public spaces that end up making these interpretations of who’s a threat or who’s not, with implications for furthering racial injustice. 

Tucker: There are very prominent commercial organizations that have been spun out of academic institutions and research institutions that are looking at this and building huge databases of images that they then sell to commercial organizations.

It’s gold for commercial organizations. Do they know how you feel? Do they know how you feel about a product? That’s what they need and want, and they’ll pay a lot of money for it. So the interaction between technology and emotion is only going to increase.


Beck: There is a history in psychology of trying to avoid talking about emotion or theorizing it seriously, because it’s so difficult to understand. I think it says something that you have to go to philosophy in order to find frameworks that help make sense of these issues.

Tucker: Yeah, I think so. Maybe that’s one of the reasons why there’s been a move in affect studies since the late nineties across the social sciences and in cultural theory to focus on emotion and affect. Obviously, you can debate a lot about the specific definitions of those terms.

One of the things we wrote about in the book is this idea that technologies can know emotions better than the human, and that the human just gets in the way of it. It is not just that technologies can do what humans can do, but actually that technologies can do what humans can’t do, and you need to take consciousness out of it.

What I mean by that is that you could develop an algorithm that can identify emotion through patterns of micro-expressions far better than the human eye. That’s often what these new technologies are saying they can do.

This goes back to the problem in early psychology when people were doing introspection as a method, where the idea was that you’ve got to take out the self-report because the self-report isn’t valid. The technology can identify the physiological aspects more and that’s where the true phenomenon lies. These technologies are deemed to be better [than people]. You don’t ask people how they feel, you use technology to analyze their micro-expressions and that technology will tell you.


Beck: We have talked a lot about the dangers of using technology for mental health issues, but it seems like a lot of your work also focuses on creating community and the collective dimension that’s offered by technology. I’m curious, what do you think about how digital technology has been used within social movements to create a sense of solidarity and to provide a sense of belonging for groups that have traditionally been marginalized?

Tucker: With any phenomena, like with technology, we have to make sure we don’t homogenize it. There can be a lot of positives to technology too.

In terms of mental health, I’ve done a project with an online community called Elefriends that was run by one of the major UK mental health charities, called “Mind.” This was a few years ago now, and they had already identified the importance of peer support. People were effectively doing peer support on their Facebook page at the time, so they developed this specialist peer support site.

We looked at how peer support works, and we interviewed people who used it. As ever, there were pros and cons, but people found the immediacy of it really helpful. They found the anonymity of it quite helpful as well: being able to just talk anonymously about their experiences. Peer support often works because of the idea that the person you’re talking to has had similar experiences, which isn’t always a feeling that you’d get when you’re dealing with a mental health professional. 

The value of sharing your experience, and the advantage of something like an online community, is that it gives you access to a far greater number of people who might have had a similar experience than a small community center that might have 20 people. Actually, Elefriends may have 200,000 people.

There are several moves towards using technologies to bring people together and for people to connect, where notions of proximity and distance aren’t an obstacle to support. So there are benefits to that. There are also challenges to that.

I would like to talk a little bit as well about a project I’m hoping to start soon, which is also around [the idea of] community and actually has COVID-19-related elements. We have a series of government, publicly funded research councils, and one of these research councils is focused on social science research. It has funded a number of networks to look at mental health, and one of these networks looks at the impact of community and cultural assets on mental health. 

Many different things can be a community and cultural asset. There might be a local choir, it could be a walking group, it could be a painting group—any community activity. And they were really interested in understanding the impact of these community assets. But there was no digital element in any of this.

We’re proposing a project to look at the impact of the digital in relation to community assets: whether it enhances the benefits of these community practices, what the challenges are, whether using digital technology in relation to a community asset enhances those positive feelings of connection and belonging, or whether it acts as an obstacle to them. 

Then, as we were putting the bid together, COVID-19 happened. Now, two of the groups that we were going to look at—who didn’t really use digital technology before—have had to go completely online. So what does that mean? Can you deliver these community assets purely online? There’s a number of different ways that digital technology has been used in relation to communities, some formal and some informal. That can be a local creative painting group or singing group or something that has its own website. That’s quite formal. But it could also be a few users of that group who then set up a WhatsApp chat. 

So there are formal and informal uses of digital technology, and I’m interested in both of those aspects, and maybe even more in the informal ones. These don’t always get included because they’re not so visible, but then that’s why I think it’s important to talk to the users of these community groups and see how they’re using digital technologies.


Beck: This reminds me of your interest in Simondon and Deleuze and affect theory. The idea is that affect isn’t just an individual thing. It’s a public thing that links us together. And in that sense, it’s a resource, right? It’s a resource that you sometimes pay a therapist in order to regulate. But you’re looking at it from the way that it’s generated within a community rather than the way it is provided as a service?

Tucker: Absolutely. Yeah, it’s much more about the way that affect and emotion gets used and acts as a social adhesive that brings people together. 

As you’re talking now, it reminds me of something we wrote about how people try to seek support through technology. This was in relation to Elefriends, the online community, and the problems people were having with medication. When they have a new medication, or their dosage is increased, and they are having these different kinds of feelings, there’s a set of challenges to trying to seek support. One is that you might be having these feelings at three o’clock in the morning, when you can’t access your mental health services.

Second, the people in mental health services, even if you could access them, might not have taken this medication themselves, and you want to speak to people who have.

Third, there is a challenge in trying to describe these embodied feelings. How do you communicate them? You could do it in such a way that would facilitate some support, because digital technology gave you access to a large number of people who may well have had that particular medication: “Oh yeah, I’ve had that,” or “I actually found it affected me in this or that way.” Just hearing those stories from other people could be really supportive.


Beck: You seemed to be talking about this idea of creating a sense of collective knowledge on the site. As more users log on and share what’s going on for them, it continues to grow and it continues to provide different perspectives on the use of medication, which isn’t something you usually get when you just go to a doctor and get a prescription. You get the doctor’s perspective, but you don’t get to hear from other people who’ve taken that medication to know how they’ve responded to it.

Tucker: Yeah, there’s an archive effect, which is something the digital provides that you wouldn’t necessarily get through face-to-face peer support. Because it’s online, [affect] gets stored and other people can go on there and see it. That’s what people would say [in the studies].

There’s an interesting temporality about this system where you can go on there, look for support about something and maybe find a post from five years ago, but that speaks directly to your current experience. So the support doesn’t even have to always be synchronous; it can be asynchronous. And actually that can be a key advantage of digital technology because it can create this basic archive.


Beck: Earlier you mentioned that you’re currently working on a book about emotion and the digital age.

Tucker: I wrote a book with Darren Ellis, a colleague of mine at the University of East London, on the social psychology of emotion in 2015. That first book ended with a chapter on digital-related aspects of emotion. From that, we’ve written a book on emotion in the digital age. Obviously it’s highly selective, because it’s such a big area, but we were keen to look at uses of artificial intelligence in relation to emotion, which has been termed by some people “emotional AI.”

We did a chapter on social media and emotion, a chapter on mental health and emotion, and a chapter on surveillance. [Surveillance] was the final chapter because the danger with surveillance is that if you start talking about it too early it can become an all-encompassing concept: it tells you everything, and yet nothing. It was useful to build up to it rather than go in with surveillance as the starting concept, drawing instead on digital aspects of emotion in relation to technologies. That’s definitely an area of ongoing interest.



MIA Reports are supported, in part, by a grant from the Open Society Foundations

Tim Beck, PhD
MIA Research News Team: Tim Beck is an Instructor in psychology at the University of West Georgia, where he earned a PhD in Psychology: Consciousness and Society. For his dissertation, he traced a critical history of the biomedical model of mental health, focusing on diagnostic representations of autism, and became interested in the power of self-advocacy movements to reshape conventional assumptions about mental suffering. In fall 2019, he will start a new position as Assistant Professor at Landmark College, where he will collaborate with students and faculty at their Center for Neurodiversity.



    Thanks, Professor Ian Tucker and Dr Tim Beck, for the great article.

    I was reading about Jack Dorsey, CEO of ‘Twitter’ and ‘Square’, and how he lives.

    He walks five miles to work several days a week.

    He only eats one meal per day during the week (usually chicken and vegetables or salad), and doesn’t eat at all on Saturday.

    He does two hours of Vipassana meditation every day, and for his holiday he went to a ten-day, all-day meditation retreat.

    He’s one of the richest people in America but his main enjoyments could probably be practised by anyone.

    (He has a very nice house also, I believe, and he might date supermodels).


    • If Democracy is Institutionalized within Governments that institutionalize Institutions, what happens to the heart of the customer and mental health treatment providers when they “de-institutionalize the human”? It would seem that if one is twittering about, that incredible precision would become even more important for the institutionalization of the forms and outcomes of the DSM made operational through an institutional way of relating within the field.


  2. Here’s what people who are in danger of being sectioned or detained need to know. When they come for you, you can take your mobile with you, but as soon as you get to a psych ward your charger will be taken from you and you will be told it has to go for a PAT test. So when, in a day or so, your mobile needs charging, you will have to tap the window/door of the nurses’ station, wait (maybe a long time), and ask a nurse if they can charge it for you. They will take your mobile and it will be searched; if you’re an activist, it may well be given to the police. Psych are very, very interested in finding out what people are up to online.


    • Wow, when I was hospitalized by a psych loonie and her now-FBI-convicted criminal doctor friend, cell phones were completely taken away. You were lucky you got to keep your cell phone at all.

      Thanks for the interview. I must say, the concept of even more surveillance by an already way-too-intrusive “mental health” system sounds like a frightening, and dumb, idea to me.

      For goodness’ sake, you can’t stand against child abuse today without Lutheran psychologists attacking you because you have concern for children. Too intrusive.


      • Have witnessed a person going completely crazy when their mobile was taken away from them. They were OK the first day and then screaming, crying, and shouting day and night. Humans are now addicted to their mobiles; removal is yet another form of torture. This particular person is black and was being viciously abused by a thug patient, so he was using his mobile to call the police. They just came and took his phone from him and allowed the thug patient to continue, as did the ‘hospital’ staff.


    • Apparently the psychiatric “gold standard” is to choose to follow how the pharmaceutical industry decides to create an iatrogenic “bipolar epidemic,” with the ADHD drugs and antidepressants.


      Then that is covered up, for profit, by turning many of the “bipolar” misdiagnosed people, into “schizophrenia” patients, with the neuroleptic/antipsychotic drugs.

      Via both neuroleptic-induced deficit syndrome, which creates the negative symptoms of “schizophrenia,” and via neuroleptic/antipsychotic- and/or antidepressant-induced anticholinergic toxidrome.



      Anticholinergic toxidrome does create hallucinations, “psychosis,” and what looks like the other positive symptoms of “schizophrenia” to the pharmaceutical industry and DSM deluded psychiatrists and psychologists. And all the other DSM deluded, un-medically trained “mental health” and “social” workers.

      Such crimes against humanity are our “mental health” industry’s current “gold standard” of “care.”

      But I will point out many of our DSM deluded “mental health” lunatics are also systemic child abuse cover uppers, for the mainstream religions. This was confessed to me to be “the dirty little secret of the two original educated professions.”

      And that ethical pastor’s confession does make complete sense, given the medical evidence does show that the primary actual function of today’s so called “mental health professionals” is, indeed, covering up child abuse for the mainstream religions.


  3. Well, seeing as psychiatry only makes sense from the perspective of eugenics, when you study the important history of what happened in Germany before, during, and after WWII, and the fact that it wasn’t abolished and outlawed because America was planning the same, you can only really come to that conclusion. Therefore the gold standard would be as many dead as possible.


  4. Funny how this article seems devoid of “emotion,” lol.
    I still get a chuckle out of “AI.” I think its designers do too. Imagine AI not having an “emotional aspect” to its “facial expression recognition.” Does that mean there was no emotion involved, no human biases, in the design of “AI”?
    It rather seems impossible. So we might say that there is no such thing as “AI,” although some programmers might be a tad “superficial.”
