Editor’s Note: Over the next several months, Mad in America will publish a serialized version of Sami Timimi’s book, Insane Medicine. In this part, he discusses the history of the autism diagnosis and the expansion of autism into autism spectrum disorder. Each Monday, a new section of the book will be published, and all chapters will be archived here.
What is autism spectrum disorder (ASD)? The conventional answer to this question is that, like ADHD, it is a “neurodevelopmental disorder” that manifests primarily in deficits in the ability to understand other people’s emotions and, therefore, in difficulties with social communication. Autism is now used interchangeably with ASD and has become the rising star among childhood psychiatric disorders; like ADHD, it has burrowed its way into becoming an increasingly popular concept that can be applied to adults too. Like ADHD, autism and ASD are facts of culture rather than facts of nature.
Use of the positivist, hypothesis-testing, measurement-focussed pursuit of objective, value-free knowledge about the world “out there” (beyond our imaginations) works well for systems and phenomena governed by “laws of nature,” but it is not the most appropriate method for understanding subjective, meaning-generating conscious life. Science can be corrupted by methods such as the repetitive use of “scientific”-sounding language to provide an air of authority, whilst ignoring, failing to publish, data-fishing, and/or minimising facts or research that contradict the opinions expressed.
ASD has become shrouded in psychiatric scientism, where the idea of being scientific and doing science trumps what the actual science finds and marginalises non-empirical approaches to understanding the mental life of those who get this label. Many are seduced by the idea that science will eventually answer the “why” question that will lead us to be able to make diagnoses such as ASD (i.e. a classification based on causal explanations) in the same way we do in the rest of medicine.
Because the “scientists” who study, categorise, and establish guidelines for ASD can’t find anything definitive, they resort to scientism. Over time the language and concepts associated with this ideology (of ASD existing as a fact of nature) become part of institutions, books, trainings, and of course our broader cultural “common sense.” Once it diffuses into our cultural common sense, we think of concepts like autism as if they were already established scientific facts, whilst the actual facts and uncertainties fade into smaller cultural spaces (such as this book).
This mixture of scientism and junk science that has established autism as a cultural fact has been harder to critique than any other so-called psychiatric diagnosis. Its origins lie in its being a rare label applied to those who had marked learning difficulties, many of whom had evidence of neurological injury or genetic abnormalities. Most couldn’t hold any sort of meaningful conversation, and many had other neurological conditions such as epilepsy. Its expansion to include geniuses like Einstein (yes, he has been given a retrospective diagnosis of ASD), thereby spanning the whole spectrum of intellectual ability, has seemingly happened without a raised eyebrow in the academic circles studying it. Cultural phenomena like the film Rain Man and the MMR vaccine controversy turned this rarely spoken about or noticed condition into a centre-stage “disability.”
I am aware that there are many critics of the medicalisation of autism who, unlike me, see autism through a story of “neurodiversity” and who have done many positive things to empower some people given the autism label, enabling them to accept, rather than struggle against, who they are. I acknowledge and value the courage and insight these activists have.
But I struggle with the “neuro” bit of “neurodiversity”—the evidence just isn’t there. We are all neuro-diverse, so as a concept it’s meaningless in a biological sense. As a cultural construct it creates unnecessary divisions, eroding the multiplicity that makes up our mental lives and may trap people back into pigeonholes rather than free them from stereotyping.
It has also been harder to critique autism than labels like ADHD, as autism has no specific pharmaceutical treatments attached to it and hence the conflict of interest issue is not so easily apparent. Since autism’s expansion into ASD we have a real mixed bag of presentations, problems, and levels of functioning. When I see such “diagnostic” expansion, I get suspicious that we are not dealing with a diagnosis, but rather a branded commodity that has market appeal and so is vulnerable to what I call the “elastic band effect” where the boundaries can be stretched almost endlessly.
Descriptions for what ASD is have “fuzzy boundaries” that are open to subjective interpretation, given that there are no physical markers to help accurately measure and categorise any one individual.
Mainstream construction of autism
It’s easy to get confused about the different terms that get used. “Diagnostic” criteria are different in different systems and have changed over the years, widening to include terms like “Asperger syndrome” and, more recently, a term that doesn’t appear in any diagnostic manual, “pathological demand avoidance” (PDA)—the less said about this latest money spinner, the better.
According to the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10, the diagnostic manual we are meant to use in the UK), autism is listed in a group of disorders called “Pervasive developmental disorders.” These include:
Childhood Autism, which is defined as “a type of pervasive developmental disorder that is defined by: (a) the presence of abnormal or impaired development that is manifest before the age of three years, and (b) the characteristic type of abnormal functioning in all the three areas of psychopathology: reciprocal social interaction, communication, and restricted, stereotyped, repetitive behaviour. In addition to these specific diagnostic features, a range of other nonspecific problems are common, such as phobias, sleeping and eating disturbances, temper tantrums, and (self-directed) aggression.”
Atypical autism, which is defined as “a type of pervasive developmental disorder that differs from childhood autism either in age of onset or in failing to fulfil all three sets of diagnostic criteria. This subcategory should be used when there is abnormal and impaired development that is present only after age three years, and a lack of sufficient demonstrable abnormalities in one or two of the three areas of psychopathology required for the diagnosis of autism (namely, reciprocal social interactions, communication, and restricted, stereotyped, repetitive behaviour) in spite of characteristic abnormalities in the other area(s). Atypical autism arises most often in profoundly retarded individuals and in individuals with a severe specific developmental disorder of receptive language.”
Asperger Syndrome, which is defined as “a disorder of uncertain nosological validity, characterised by the same type of qualitative abnormalities of reciprocal social interaction that typify autism, together with a restricted, stereotyped, repetitive repertoire of interests and activities. It differs from autism primarily in the fact that there is no general delay or retardation in language or in cognitive development. This disorder is often associated with marked clumsiness. There is a strong tendency for the abnormalities to persist into adolescence and adult life. Psychotic episodes occasionally occur in early adult life.”
Although ICD-10 is the officially used manual in the UK, the American Diagnostic and Statistical Manual of Mental Disorders (DSM) is influential on practice worldwide and often referred to even by practitioners in the UK. Its 5th edition (DSM-5), published in 2013, revised the criteria for autism and includes “sensory behaviours” as part of the new definition.
DSM-5 has dispensed with subcategories like Asperger’s syndrome and defines ASD as “persistent difficulties with social communication and social interaction” and “restricted and repetitive patterns of behaviours, activities or interests” (this includes sensory behaviour), present since early childhood, to the extent that these “limit and impair everyday functioning.”
The above are the “official” definitions currently in use. You can already see how mixed up in semantics the family of ASDs gets. Broadly speaking, autism and ASD refer to a “disorder” that shows signs from early childhood and is characterised by “abnormalities” in social interactions, communication skills, and restricted, repetitive behaviours, interests, and activities. Who gets to decide that there are “abnormalities,” how they decide, and by what standards? The “expert,” of course.
In the typical maddening circularity that infects psychiatric knowledge, it is the expert who defines how to identify abnormalities in social communication, language, and behaviours, and the expert knows what they are, because it is the expert who defines what abnormalities in social communication, language, and behaviours are.
A brief history
The word “autism” was first used in psychiatry in 1911 by the psychiatrist Eugen Bleuler who used the term “autistic” to denote the state of mind of psychotic individuals who showed extreme withdrawal from the fabric of social life. It is probably the most accurate use of the term, as Bleuler used the word to describe a state of mind rather than as a diagnosis.
Then, in a paper published in 1943, the child psychiatrist Leo Kanner first proposed “autism” as a diagnosis, using the term to label a group of 11 children of middle-class parents who were emotionally and intellectually impaired and showed an “extreme aloneness,” plus other unusual features such as hand-flapping and echoing back what a speaker says to them. It has been suggested that Kanner coined this new diagnosis in order to have a different word to use after pressure from some parents who did not wish their child to be labelled with the more stigmatising label of “mental retardation.”
Autism then remained a rare diagnosis given to young people who had considerable impairments in day-to-day functioning and moderate to severe learning difficulties, with, according to the early epidemiological studies, an estimated prevalence rate of 4 in 10,000 (0.04%). The concept and descriptions that Kanner came up with formed the basis for diagnosing autism right up until the early 1990s in the UK.
The year after Kanner first proposed “autism” as a diagnosis, Viennese paediatrician Hans Asperger published a paper in 1944, largely ignored at the time, in which he described four children with no easily recognisable intellectual impairment, but with social communication problems. Asperger worked in Nazi-occupied Austria, in a society organised by Nazi ideology. As Nazis were preoccupied with the task of classifying human types, Asperger’s paper should be understood as part of that endeavour.
Asperger had managed to further his career under the Nazi regime. This was not least due to opportunities created by the political upheaval after Austria’s annexation to Germany in 1938, including the expulsion of several Jewish physicians from the profession. Asperger had joined the Vienna University Children’s Clinic in May 1931, which at the time was headed by Franz Hamburger, a fervent Nazi.
In 1935 Asperger took charge of the Heilpädagogik ward in the clinic. He had not yet obtained his specialist qualification in paediatrics and had published only a single work, raising the question of why his more experienced colleague Georg Frankl was not promoted to the position instead. Two years after Asperger’s promotion, Frankl emigrated to the USA, where, interestingly, he joined Leo Kanner at Johns Hopkins, leading some to speculate as to whether he introduced Kanner to the idea of autism as a diagnosis.
Austrian universities at this time were sites of virulent anti-Jewish agitation. Jewish doctors faced increasing difficulties in securing university positions, with some clinics and departments practically closed to Jews. With Hamburger’s appointment as chair in 1930, the children’s clinic in Vienna became a flagship of anti-Jewish policies long before the Nazi takeover.
Whatever the specific motivations for Hamburger’s decision to appoint Asperger as the head of the Heilpädagogik ward in 1935, Asperger’s promotion was aided by the anti-Jewish and misogynist tendencies then dominating Austria’s social and political life. Although Asperger did not join the Nazi party, he shared considerable ideological common ground with Hamburger and his network, allowing him to blend in without apparent frictions.
American historian Edith Sheffer, drawing on records discovered by Austrian researcher Herwig Czech, documents that Asperger wrote wholly damning descriptions of at least 42 of his patients, transferring them to the notorious Am Spiegelgrund clinic, where almost 800 children were deliberately allowed to die from neglect or lethal overdoses. Asperger actively endorsed the forced sterilisation laws, believing that some people were a burden on the community, and his actions imply that he supported the euthanasia of those considered to have “a life not worth living.”
One of Asperger’s tasks as a paediatrician in the children’s clinic was to sift out potentially educable children to prevent them from becoming victims of the covert euthanasia “T4 programme” (which would lead to the murder of hundreds of thousands of disabled and/or institutionalised people). The significance of his paper describing four young people as having “autistic psychopathology” was that he believed these young, troubled patients were potentially educable and could therefore be spared from being sent to the death hospital. The widening of autism into ASD therefore started in the Nazi child-murder hospitals and clinics.
By 1955, Kanner had reported a total of 120 cases of what he described as “infantile autism.” He differentiated this condition from childhood schizophrenia as he felt autism was evident almost from birth. Kanner, writing with Eisenberg in 1956, hypothesised about aetiology, and concluded that it was unhelpful to try to tie aetiology to solely biological or environmental causes, suggesting that arguments that counterposed “hereditary” versus “environmental” were unhelpful.
By the 1960s, Kanner’s diagnosis of infantile autism had become a recognised diagnosis for what was considered a rare disorder primarily found in children with moderate to severe intellectual impairments.
In the late 1970s, psychiatrist Lorna Wing saw a similarity between some people she was seeing and those described by Asperger. Dr Wing’s ideas intersected with those of another psychiatrist, Michael Rutter, and formed the basis for the expansion of the concept of autism into autism spectrum disorders (ASD).
Revisiting the seminal papers by Wing and Rutter reveals the extent to which this expansion of the concept of autism was not the result of any new scientific discoveries, but rather new ideologies. For example, in her 1981 paper proposing the “Asperger syndrome” diagnosis, Wing describes six case histories that appear to have little in common with the four cases Asperger described in his 1944 paper, beyond sharing a lack of social reciprocity.
Four of Wing’s cases were adults, whereas all of Asperger’s were children; two had some degree of learning disability, whereas none of Asperger’s did; most of Wing’s cases spoke late whereas most of Asperger’s spoke early; most of Wing’s cases were described as having little capacity for analytical thought whereas Asperger’s cases were described as highly analytical; and none of Wing’s cases were described as manipulative, mendacious, cheeky, confrontational, or vindictive (terms Asperger used about his cases) and so on.
In his seminal 1978 paper on the subject, well-known British child psychiatrist Michael Rutter suggested that autism likely exists on a spectrum, with a strong genetic contribution to its expression. He formulated the familiar triad of symptoms of impaired communication, impaired social skills, and a restricted imagination leading to narrow interests, that, together with Wing’s Asperger syndrome, formed the basis for a new “imagining” of an expanded autism spectrum.
None of these developments were accompanied by any new scientific discoveries about the bodies and brains of those now being thought to have autism, even though it was now spoken about as a genetically predetermined, lifelong, neurodevelopmental disorder.
Over the next couple of decades, the concept of autism started to attract more professional and public interest, boosted by popular media coverage such as the film Rain Man and the MMR vaccine controversies. More people were talking about this “thing” called autism. Soon there were courses, assessment tools, research, services, documentaries, experts, and institutions all dedicated to furthering our knowledge and understanding of autism, its causes, and how to identify, treat, or prevent it. Autism was now a fact of culture. Diagnosis rates expanded, leading to more services, more research, more talking about it (and so on).
Then a group of adults emerged who identified with the idea of autism but rejected the notion that this was a disorder. These activists started talking about autism as a difference—a different, but equally valid, way of viewing and interacting with the world as a result of a different neurological “wiring.” Tensions have sometimes emerged between this latter group, who spoke of themselves as part of the spectrum of “neurodiversity,” and those (often parents) who were struggling to cope with the behaviours of diagnosed children, were often desperate to find “treatments,” and felt the “disorder” side of things.
Autism had become a visible and lively discourse, by now simply assumed to represent a real, tangible, identifiable “thing” that could be differentiated from other potential problems (if you identified with the disorder side) or that produced something fundamentally different from “neurotypical” subjects (if you identified with the difference perspective). No one, it seemed to me, was asking the obvious question: On what evidential basis can you conclude that autism represents a natural category that can be differentiated from other natural categories, whether disorder or difference?
When I was training as a child psychiatrist in the early- to mid-1990s, I came across two children diagnosed with autism in the whole of my four years of training placements. Both had marked functional impairments and had to attend specialist schools. According to some recent local data I have seen, 1.6% of school-age children in my area have a diagnosis of autism. This means that in the space of two or three decades prevalence has gone from 0.04% to 1.6%, a phenomenal forty-fold increase.
Nowadays, I get the impression that any child who attends our Child and Adolescent Mental Health Services could end up getting a “diagnosis” of ASD. I often hear, particularly when the young person is not responding to what is considered the “correct” treatment, autism being suggested as a possible reason for the problems or lack of treatment response. So we end up in what I call “semantic games,” a kind of “what shall we call this” rather than an understanding of what might be contributing to their presentation or what might make a difference to them.
Naming is understandably popular with many, such as other professionals, teachers, parents, and some teenagers. But in my experience it can become a trap as people confuse (understandably) what has been sold to them as a diagnosis with it actually being a diagnosis. In other words, they imagine that because they “have autism” it helps them understand the reasons for their troubles and therefore professionals will now know how best to help them.
My clinics have many people who have gone down this route, but for whom things have gotten bad again and now they think there must be another diagnosis and therefore another treatment, and so they slip further onto the path of becoming a disempowered, helpless patient/parent at the mercy of being prescribed more, often useless, treatments (whether drugs or psychological) that further disempower. It’s a very hard cycle for all (professional, child and family) to step out of.
So where did all this ASD come from?
Given that the concept of autism arose out of a new proposal (initially by Kanner) without supporting scientific evidence, and has expanded exponentially in the last two to three decades, again without any supporting scientific evidence, a legitimate question to ponder is why this happened and what might be driving our fixation with our capacity to socialise and read others’ emotions. The following paragraphs are some of my speculations about the potential social, cultural, and political drivers.
A distinct medical/psychiatric disease called autism could not have emerged until standards of normality had been formalised and narrowed and concern about children’s development extended to a child’s earliest years so that children with ASD could be “identified.” This is not to say that there haven’t been people throughout history who have displayed the behaviours we now think of as being autistic, but to remind the reader that calling this autism is simply a “trick” of classification, as opposed to being the result of new scientific knowledge.
Childhood development and schools
As educational and psychological authorities were developed during the last century to meet the changing demands for social adjustment, the boundaries between what was considered normal and “pathological” were created and gradually expanded. They also changed as societal trends changed and new areas of emotion or behaviour became sites for concern. Psychologists, psychiatrists, and paediatricians have thus become increasingly involved in “discovering” apparent indicators of an ever-increasing range of disorders among the children they survey.
These developments in the way we think about childhood and its problems interact with the political, economic, and social changes seen in the last few decades in the West, some of the hallmarks of which are the movement into smaller family and social networks, decreasing amounts of time that parents spend around their children, aggressive consumerism preying on children’s desire for stimulation, greater involvement of professionals in child-rearing activities (and advice on child-rearing), and a sense of panic about boys’ development.
Psychiatry and psychology can easily become political tools, as they have in the past, not just in totalitarian societies but also in democratic ones. The needs of a service-based economy are different from that of a primarily manufacturing one. In service economies, poor socialisation skills (of the superficial variety) in the workforce are perceived as putting the economy at a disadvantage. The need to inculcate early social skills and “emotional intelligence” thus becomes a concern for the ruling classes, teachers, and ultimately for parents.
Although few schools in present Western society resemble the more rigid authoritarian schools of 19th-century Europe, mechanisms for disciplining children have not disappeared; they have simply taken on a subtler form. In the practice of diagnosing and medicating a child with ADHD, for example, we see surveillance and identification followed by an attempt to intervene to correct and “discipline” children who don’t live up to the expectations of teachers and/or parents, who have, understandably, become concerned that the child is not conforming to the socially expected standards of conduct.
Whilst schools may acknowledge the individuality of each child, they are unlikely to escape definitions of what is considered to be “normal” for children at a certain age, and this will shape what they expect from children in their classes and what they do when they identify an individual whom they fear is not accomplishing these age-based expectations. Teachers and parents, like the psychologists, psychiatrists, and therapists they refer these children to, then become part of imposing a different form of discipline to render a child docile and obedient enough for a teacher to carry out their job or parents to run a household, without breaking the law on children’s welfare and rights through more overt forms of punishment.
Western psychiatry and psychology have constructed a series of “normal” stages of development that children are meant to progress through. Teachers are then part of the systems of surveillance in place to pick up those who are deemed to have failed to adequately achieve any of these narrow, age-dependent stages and who are then referred to get extra “help” (a nicer word than “discipline”).
The types of professional and expert care you get will then be through the systems and services that have all the unscientific ideology I have been describing throughout this book. They are likely to enshrine and solidify the suspected “disorder” a child is thought to have and thus satisfy the suspicions of the teacher and parent. The unintended consequence of this is to leave the child with a potentially lifelong stamp that limits what they, their parents, and their teachers may now expect from them, at the same time as freeing the carers from relying on their own knowledge, skills, and intuitions, as it is now the job of these “experts” to know what is going on and what to do about it.
Our view of childhood changes over time. At one time, back in the Victorian era, when the economy needed large numbers of workers for manual tasks that required mentoring rather than extensive academic learning, child labour was viewed as a normal state for children, and something that taught them discipline, numeracy, and prepared them for the responsibilities of adulthood in an age of hierarchical relationships that were strongly class-based. We now look back with horror at the idea that children could have been sent to work down the pit or up the chimney, viewing such a life as “robbing” children of their “childhood.” Yet child labour was the normal expectation for children in Europe and North America about 150 years ago (not long ago at all in the scale of human history).
What will future generations look back and say about childhood today? Will they wonder at the cruelty of creating these compulsory institutions that children have to go to for most of the first 18 years of life, where they have test after test and are expected to conform to increasingly narrow expectations of age-based behaviour, etc.? At the very least, it seems legitimate to speculate on how current economic forces and lifestyle choices have influenced our own view of childhood, how this may affect the way we think about and raise today’s children, and how this, in turn, may impact their actual behaviour.
As parents deal with longer hours of work, both parents working, commutes of greater distances, and less family time, children who were previously seen in more ordinary ways as merely fidgety or restless, shy, or who talked too much are now viewed as suffering from psychiatric disease. An expectation that children should want to pay attention, cooperate, and demonstrate independence and empathy within structured group settings has come to be viewed as a more important “need” for our children than would have been the case even a couple of decades ago.
Changes in the concept of self
With the demise of “welfarism” in the post-Thatcher politics of the 1980s, and the growth of a more aggressively competitive free market ideology, modern Western governments promoted the idea of the “free” individual able to compete in the free marketplace for the best jobs. Societal-wide protections diminished, social solidarity was seen as suspect, and a narrative took hold that our communities were made of two main classes of people: the strivers and the skivers.
This division into individual angels or demons has been, and continues to be, a powerful way of distracting our collective attention away from the misery that structural inequalities bring—away from noticing the underlying class structure that becomes more visible at times of crisis, such as after the 2008 financial crash.
I am writing this sitting at home in the UK in the middle of the Covid-19 pandemic crisis. We are at it again. Whilst there is belatedly some recognition that the low-paid workforce turned out to be much more important for the running of society, much of the media coverage seems to be a round-the-clock airing of stories about individuals who are either “heroes” (battling on the front line, celebrating donating a bit of their millions, etc.) or “villains” (selfishly failing to observe the imprecise rules of the lockdown correctly).
Most frontline workers would rather have proper personal protective equipment than be heroes; most of the villains are just trying to stay sane in an insane world. I wait to see whether, after this crisis, the fragility and unfairness of our economic system, and the values that come out of it, will have become visible enough to make the endless distractions difficult to sustain.
This individualising, with its stories of shaming and/or valorising, means that policing no longer just involves the army, the law, and prisons. In systems that rule by consent, there is a greater emphasis on getting people to police themselves. A colleague of mine who grew up in Iron Curtain-era Poland commented that she felt she knew what to expect, and what the rules were to keep you out of trouble, in cold war socialist Poland. After many years of living and working in the UK, she came to feel that personal life was much more precarious in the UK.
Whether at work, in public, or at home, she felt there were many unwritten rules and expectations about how she should behave, her attitude, the words and expressions she used, and so on. She felt a much higher burden of self-monitoring in the UK than in Poland before the fall of the Iron Curtain. There is a pervasive sense that individuals are performing all the time, trying to keep their ordinary human fallibility from being seen.
Much of the work on defining who does and doesn’t fit into our social standards is done by the individuals themselves. In a capitalist, market-driven economy, mass consumption is vital to the maintenance of the system and therefore becomes an important part of our consciousness. In such a society, even personal relations become clouded by the “compare and compete” value system. Like the stereotypical consumer wife comparing the whiteness of her sheets with those of her neighbours, people in consumer societies constantly compare their own inadequacies with those of others.
This practice of self-examination creates a cult of self-awareness. In doing so, it can generate inner qualities, including whatever passes for personal growth, with one seeking every day to make oneself a better product: new, improved, the best and brightest yet. This internal monitoring can become as draconian as the secret police: either you monitor yourself, find yourself inadequate in some way, and so keep consuming to fill whatever hole you have discovered (and so keep the economy moving and fit in), or, if you don’t, you risk a variety of professionals becoming concerned about your well-being.
With the goal of self-fulfilment and gratification so hard to achieve, and with consumer culture promoting competitive mistrust in our personal relationships, it’s not difficult to see why more and more of the population become concerned about their psychological state and/or their children’s. As governments become aware of the perceived problem of a lack of empathy, so interest in conditions deemed to be based on or caused by this lack grows, and support grows too for those researchers and services that claim to be interested in its early detection, prevention, and treatment.
The emergence of the service economy has seen the harnessing and manipulation of human desires and sexuality, especially through advertising, in service of increasing the demand for a whole variety of products. The service economy is dependent upon selling, including selling one’s self. In such a framework, what place is there for “truth” or the inability to manipulate your facial expression and body language to sell a product? In such a society the inability to do this “properly” makes the person less productive and thus a potential problem for the smooth running of such an economic system.
The adoption of autism as a label of choice for such alienated and labelled “freaks,” “geeks,” and “weirdos” provides a way of turning this problem away from a human one generated in large part by the socio-political system people are trying to survive, towards a technical problem for the expert to turn into a commodity that can be branded and sold. Hence, we get an industry of experts, treatments, books, courses, research, institutes and so on growing up around popular “diagnoses” such as ADHD and ASD.
Individualised consumerism has created a heightened awareness of appearance and style. The invasion of images from media and advertising creates a dream world, a virtual reality to fantasise about, as commercials sell us images of ideal lifestyles attached to their products. Our culture has become so consumed by this perpetual imagery that we can now literally take off one identity and slip on another as we change our clothes, make-up, shoes, and so on. We are seduced into becoming so concerned with our surface identity that we submit ourselves to long surgical procedures to change the shape and appearance of our bodies.
In this world of consumer capitalism, everything becomes a potential object for exploitation and profit. Children get advertising targeted at them from a very young age. Advertising aimed specifically at children complements markets in toys, foods, educational equipment, fashion, sportswear, and so on. Indeed, the dominance of the idea of mental “health” is a product, at least in part, of market economy consumer capitalism.
Conceptualising problems as “health” individualises suffering (thereby absolving and mystifying the role of social factors) and creates new markets (for example through the pharmaceutical industry). It is within the ideology that creates such fractured, superficial identities that we discover the same superficial labelling of identities on those decreed by modern institutions as mentally ill or disordered in some way.
One of the outcomes of this cultural milieu is a move away from understanding based on depth, a connection with physical reality, and everyday functionality, toward a culture in which surface factors, such as image, appearance, the short term, and the immediate, have become the more enduring and characteristic features. These changes shape our view of children and their behaviour (which is thus more likely to be read through surface signs, such as ASD as an easy explanatory label) and have deeper effects on our consciousness in terms of what we see as important for bringing some sense of contentment to our lives.
The marketisation of our economies, in particular the growth of a separate financial economy, has led to a decline in the manufacturing sectors and the growth of the service industry. Embedded communities, such as those around the coal mines, withered and died. Communities of men who used their bodies in hard manual labour and then socialised together disappeared. The idea of solidarity and the working man’s camaraderie forming around the trade union and principles of social justice was replaced by the individualising of problems in the form of workplace “stress” that required counselling.
Companies have traded job security, stability, and a unionised workforce for employee well-being services, mindfulness classes, and mental health days. Anxiety, stress, and depression are things that happen to the worker, things our enlightened approach to mental health can now treat, so you can return without complaint to the bullshit, insecure jobs we offer.
This new world of the pseudo-emotionally aware language of mental health, with the requirement of having strong “people skills” in the workforce and the changing roles for men in the workplace, means that there is now a greater political and personal demand for men to have the sort of enhanced social and emotional flexibility they didn’t previously need.
In relation to autism this leads to an interesting paradox. One of the core features of the diagnosis implies a lack of empathy. However, improving the “emotional intelligence” of the workforce is for the purpose of using empathy to successfully exploit and manipulate your customers and workforce into doing what you wish for your own personal gain.
It seems strange that people who find it difficult to understand emotional nuances but who can be compassionate are pathologised, yet those who can use an understanding of others’ emotional states to manipulate them for selfish ends are rewarded. This is precisely what has been happening in banking and many other businesses, with legislation, economic regulation, and the value system underpinning this effectively encouraging the sort of narcissistic behaviour that brought whole economies down through the legalised pursuit of profit without regard for social responsibility.
Modern Western culture, particularly through advertising and the needs of the service industries to be (pseudo)friendly and welcoming in a smarmy way, demands more convoluted and complicated forms of socialising than in the past or in many other cultures. You now have to be good at selling yourself and putting the customer at ease so that they will buy the latest useless shite you are offering them.
In this culture of survival of the smarmiest, it is little wonder that those not particularly good at that skill might get marked out as having something “wrong” with them. Most of us deep down know that this is not a nice culture. It is a culture that leaves us open to being conned and so makes us suspicious of the motives of others. The social expectations that arise from this pseudo-feminisation of the macho neoliberal culture are more troubling to me than the diversity of socialising styles we potentially possess.
The problem with boys
As with most so-called psychiatric diagnoses, we cannot escape the one socially constructed class of people whose biological differences go deeper than the surface: sex. Psychiatric conditions in general follow a pattern in which boys with behaviour issues are the main customers amongst children; sex differences in customers then widen in adolescence as more girls present with mood issues; and women become the main customers once we move into adulthood. Whilst sex is of course a biological fact, the beliefs and expectations we build around males and females are socially shaped and much debated. Gender, therefore, is socially constructed.
ASD, like ADHD, is dominated by boys in childhood, with an increase in numbers of women who identify with being autistic as we get into late adolescence and adulthood. So what is it about boys and masculinity (the social construction of boyhood) more broadly?
Whilst the majority of societies around the globe remain patriarchal, the behaviour of boys as a societal and medical concern is relatively recent and largely confined to the West, though the export of Western values means that the numbers identified with these childhood “disorders,” such as ASD, are rising elsewhere too.
In some cultures, boys are more highly prized than girls for a variety of reasons. Boys then grow up in a more privileged position and often with a view of themselves that reflects the preferential treatment they have received. Parents then have less concern about policing or worrying about these boys’ behaviour. Instead there may be greater concern about emergent female sexuality and girls and young women are then more likely to be targets of gaze and control.
Such culturally institutionalised sexism that favours boys will obviously have an impact on the way boys and men view themselves. But before we in the West get smug about Western culture being more advanced and liberated in its sexual politics, I would argue that Western culture is more covertly driven by masculine (macho) ideals and that it sometimes provides an even worse image of what it is to be a man.
Models of “what it means to be a man” are present in all cultures. In most cultures there is a differentiation between expectations for boys and girls from early childhood, often from birth (thus, boys get blue clothes, girls pink, etc.). In many Western cultures (unlike most other cultures) boys then enter institutions (particularly schools) that have non-gendered expectations with regards most things (such as behaviour, style of learning, teaching methods etc.). However, within the playground of peer group sub-cultures, gendered beliefs and expectations continue to be constructed.
We live in an era where children are often characterised by polarised anxieties about the risks they face and the risks they pose. These anxieties often have a gender bias, with girls being viewed as “at risk” and boys as posing risk (through unruly, violent, and impulsive behaviours). This concern about the potential for boys to become un-empathic thieves and thugs gets played out in the media and in homes up and down the country.
It starts very young. You now hear it in conversations between parents and the carers or teachers of kids at nursery or just starting school. It is nearly always the parents of boys with whom concerns about behaviour are raised. Institutional carers (such as nursery and school teachers) have so many demands on them, and so many rules about what they can and can’t do, that questions about boys’ socialisation skills and aggressive behaviours start before the boys can even string a sentence together.
And autism seems the current favourite potential explanation. Not that they are young, develop at different speeds, that they are more energetic, or curious, or just boys, no, they are behaving like this maybe because they have autism. Plant that seed in the head of a parent with a young child and watch it grow. Even if you don’t believe it, can you let go of that thought? How will it subsequently shape your anxiety about your child, and how will that affect your interactions with them?
Once these fretted about boys are in the school system, they will experience different pressures and expectations that they have to learn about and negotiate. In the playground they will be exposed to varieties of ways in which “what it means to be a man” are available, but there will be a dominant model, a main way of understanding what boys and men should be like. In the West generally, that dominant model that we see in films, stories, and everyday situations is built around the idea that men display power through their bodily abilities (skills in sports and athleticism), non-display of emotions (apart from rage), ability to be in control, and to be a competitive performer.
This is the model associated with what is sometimes referred to as “the patriarchal dividend,” that is, the societal expectation of being in a more powerful and influential position than women. Boys who stray from this dominant model can become targets for bullying, teasing, and exclusion by their male peers.
So far then we have a picture emerging where boys are the main customers for an ASD diagnosis as a child, where concern and scrutiny of behaviours from parents and other carers (like teachers) starts young, and where they encounter models of masculinity in the playground and peer groups that emphasise a “hyper-masculinity” of strength, power, competitive performance, and control as the main model to aspire to. But this is not what the mainly female carers and the institutions they work in want them to aspire to.
Free-market capitalism can be seen as the most complete and organised example of a political, social, and economic system based on the values of masculinity. Its social and psychological values are based on aggressive competitiveness, putting the needs of the individual above those of social responsibility, an emphasis on control (rather than harmony), the use of rational (scientific and empirical) analysis, and the constant pushing of boundaries. Such a system produces gross inequalities (both within and between nations), has reduced the status and importance of nurture, and therefore the esteem attached to the role of mother.
As more and more women are brought into the workplace—an economic necessity to increase the workforce needed to service the market economy’s demand for continuous growth—new forms of selfhood need to be developed in order for such a shift in women’s social role to be sustainable. As a result, the professional career of women now has more esteem attached to it than the role of motherhood, which has increasingly lost its status as a culturally valued role within an individualistic society. This movement out of the family sphere and into the public and work sphere has not been equally matched by a corresponding reverse movement of men out of the public and work sphere into more family and nurturing roles.
At the same time as there has been a movement of adults out of the family, there has been a movement towards childcare becoming a professional (mainly female, low-paid worker) activity. Thus, what appears to be happening in the psychological space of childhood is an increasing feminisation of some aspects, particularly educational ones, and a professionalisation of the task of raising children.
There is now a body of literature that suggests that the educational methods currently used in most Western schools (such as continuous assessment and socially orientated work sheets) are favoured more by girls than boys. This is then mirrored in national exam results, where girls now often achieve higher grades than boys across most subjects. Boys also dominate the special needs provision, where they are marked out as having a disproportionately high number of problems with poor reading and poor behaviour.
With schools under market economy political pressure to compete in national league tables, and with boys hindering schools’ performance more than girls, boys are at greater risk of exclusion and poor scholastic outcomes. It is hardly surprising that boys have come to be the “failed” gender, provoking anxiety in their (primarily female) carers and teachers.
The feminisation of certain aspects of the masculine capitalist culture we live in has also had an impact on the working environments our education is preparing us for. Ideas such as cultivating “emotional intelligence” in management and working relations started to become more popular in the 1990s.
Far from an enlightened move toward a nurturing and caring society, this is part of developing “better” ways to motivate the workforce and manipulate the consumer. Thus, modern Western culture demands more convoluted and complicated forms of socialising (in an image-obsessed age) than in the past (or in many other cultures), in the context of the diminishing size of families resulting in more intense emotional contact between members of these smaller units, and less opportunity for contact with a wider range of people.
The search for the technological solution
One of the features of modern, economically developed, consumer societies is the continuous advance of technologies and our ever-greater reliance on them in modern life. When technologies are functioning properly, they operate in the background and their efficiency, function, and use, are thus taken for granted. The better the technology, the less we have to think about it—it’s there, functioning just outside our awareness and making life easier for us.
Thus, in our efforts to get from A to B we first had the bicycle, then the car which made the journey easier and more efficient. The car then evolved to become faster, safer, smoother, and more comfortable and the technology continues to evolve, so we get the automatic car, satellite navigation, lights that come on and off automatically, a climate-controlled atmosphere, and so on.
The attraction of technological advance has had a large impact on our day-to-day life and, indeed, our consciousness. So attractive are the appeals of developing technologies that apparently make life easier, more efficient, and streamlined, that hardly a discipline can be found that has not turned, to some extent, to technology to find new innovative solutions.
In this respect, medicine is a good example of a profession whose core value system has shifted from a primary focus on the care ethic toward a primary focus on a more technologically orientated ethic, which revolves around efficiency, accuracy, efficacy, and economy. The focus is now on more technical aspects with news of breakthroughs and advances being given a higher status than the human aspects of the job.
This general technicalisation of life has encouraged us to search for simple solutions where we rely on the technical expertise of various technicians in their trade. These experts bring to bear their scientific knowledge and devise a simple technical solution which requires minimum thought by the user and which, when applied, will deal with the problem and render it into the background as all good technologies should.
It’s easy to see the appeal of the idea that the interpersonal problems that life inevitably brings can be reduced to a simple underlying disorder (such as ASD) which can be fixed by the expert who has diagnosed the nature of the problem. It is also easy to see why, in such a cultural context, more time-consuming approaches that require thought, reflection, mental effort, and greater engagement with subject matter that evolves and changes over time, have receded in popularity.
But, perhaps there is good reason to believe that the science has led to breakthroughs that justify this technicalisation. Perhaps we can justify the use of ASD as a category on scientific grounds?
We will explore the scientific basis of ASD next week, in Part 2 of Chapter 4.
Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.