I didn’t use the word statistical in the title because some of you might have backed away immediately. This is the first of several pieces about measurement and numbers in mental health research and practice, and about the scientific sheen they lend to the so-called ‘evidence-based’ movement currently in vogue – a sheen that makes it seem somehow irrefutable. After all, who wants to argue with using ‘evidence’ when we set out to work on or solve life problems?
But what are the philosophical underpinnings for what constitutes evidence and how have quantitative approaches so effectively trumped qualitative approaches in applied psychiatry, psychology, and the like? Furthermore, is it possible that quantitative ways of studying human experience may actually promote constricted, myopic views that hurt or oppress human beings?
In future pieces, I’m going to aim my analysis at the Western mental health movement’s entry into indigenous communities, but the points I make here are intended to pertain to all people.
I’m into how history affects us now, so I ask: what historical factors led us to believe we can quantify the experience of people?
When I think about that question, I remember my terrifyingly-brilliant experimental psychology professor in my first year of graduate school. I’d gotten off to a bad start with her by pointing out she had a black mark on her forehead on Ash Wednesday. Having revealed my utter ignorance of all things Catholic, she brought forth her Jesuit-informed wrath by assigning our class a succession of four ‘quizzes’ to complete by next class period. We were usually assigned but one.
These ‘quizzes’ were a kind of intellectual waterboarding with multiple essay questions like this: ‘Compare and contrast E.G. Boring’s theories regarding figure ambiguity with those of Georg Elias Müller.’ Most of us were struck by Dr. Boring’s unfortunate moniker. There was only one way to complete four of these, and that was to work together in groups night and day until the next class period. My fellow students were not pleased at all with me.
And yet I’ll bet most of them still remember the Weber-Fechner Law.
Developed by the 19th-century German philosopher-psychologists Ernst Heinrich Weber and Gustav Fechner, this law states that the ‘just noticeable difference’ (j.n.d.) between two sequential stimuli grows in proportion to the physical magnitude of those stimuli, so that perceived sensation can be scaled logarithmically against stimulus intensity.
Let me try again: “Subjective sensation is proportional to the logarithm of stimulus intensity.”1
I do apologize. A demonstration might help. It takes a smaller incremental change in the volume of your music listening device for you to ‘just notice a difference’ when it starts at low volume than when it is already turned up high. This constant ratio of change to intensity is what can be scaled logarithmically.
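For readers who like to see the arithmetic, the volume demonstration can be sketched in a few lines of Python. The 10% Weber fraction below is purely an illustrative assumption, not an empirical constant for hearing:

```python
import math

# Weber's law: the just-noticeable difference (JND) is a constant
# fraction k of the current stimulus intensity I:  delta_I = k * I.
# The 10% fraction here is an illustrative assumption.
K = 0.10

def jnd(intensity):
    """Smallest change in intensity you would 'just notice'."""
    return K * intensity

def sensation(intensity, threshold=1.0):
    """Fechner's version: sensation grows as the logarithm of
    intensity relative to the detection threshold, S = (1/k) ln(I/I0)."""
    return (1.0 / K) * math.log(intensity / threshold)

# At low volume, a small absolute change is noticeable;
# at high volume, the same listener needs a much larger change.
print(jnd(10), jnd(100))

# Equal *ratios* of intensity produce equal *steps* of sensation:
print(sensation(10) - sensation(1), sensation(100) - sensation(10))
```

Going from intensity 1 to 10 feels like the same ‘step up’ as going from 10 to 100: that is the logarithmic scaling in a nutshell.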
Well, that may seem a factoid useless to all but audio engineers and disc jockeys. Besides, it turns out that this mathematical relationship doesn’t really hold together at high levels of stimulus intensity.
I’m only mentioning it at all because early psychophysical discoveries like the Weber-Fechner Law were revolutionary in suggesting that Western scientists could quantify human subjective perception. Prior to that, nobody considered such a possibility. As with many a revolutionary idea, what made a lot of sense to apply in experimental psychology gradually devolved into the quantification of all sorts of subjective experience across the entire field of psychology.
This quantification of subjective inquiry got linked in with what is known as the phenomenological approach to studying internal experience and consciousness. I could mention Wilhelm Wundt, Husserl, Freud, Jung, Sartre, and all sorts of other seminal white guys in regards to that stuff, but I believe Weber and Fechner deserve much of the credit for the quantification of subjectivity.
Most psychological tests and procedures used today can be considered phenomenological methods and are philosophically tied to self-report and subjectivity. Even the psychologist as a trained observer works phenomenologically.
Some of us may be plagued by the philosophical debate as to whether what we call ‘reality’ should be based on our ‘sense impressions’ (empiricism) or upon the internal organizing and conceptualizing process of sense impressions as ‘phenomena’ within the mind (phenomenalism). Others of us may be busy texting about Kim Kardashian’s latest pose. I’m not making value judgments.
As for me, I’ve subscribed to the phenomenalist party for many years, and you may note how this philosophy clashes quite readily with today’s fad – so-called ‘evidence-based practice.’ Yet it’s my perspective that ‘evidence-based practice’ in mental health studiously disregards its own roots in phenomenology.
Are you still there? I’m changing the subject to the bell-shaped curve. It’s also called the normal curve, or Gaussian curve, and it’s a big part of the extension of the phenomenological approach into quantitative research in psychology and psychiatry. If you don’t remember or never saw it before, the bell curve is a symmetric hump: most values cluster in the middle, tapering off into thin tails on either side.
This statistically-derived holy grail of phenomenology has deeply affected the lives of human beings all over the world. It has served as the Oracle of Delphi of normality and abnormality in many facets of Western mental health practice. It is the nucleus of the ‘evidence’ of the ‘evidence-based practice’ of which we speak in mental health research.
And who brought it to us? Galileo.
Well, he was the one to first notice that errors in telescopic observations vary in a predictable manner around the correct observation. Super-bright French white-guy scholars like Abraham de Moivre, Pierre-Simon Laplace, and Lambert Adolphe Jacques Quetelet blew people’s minds at the British Royal Society when they were able to mathematically formulate these error deviations and apply them to other branches of physics and astronomy. Quetelet even started looking into criminality and developed certain early actuarial approaches suggesting that the bell-shaped curve could tell us some things about human beings.
Whoa! And we were pretty okay for a while using the bell curve in regards to height, weight, and certain other simple human characteristics. Then along came Sir Francis Galton; his protégé and biographer, mathematical genius Karl Pearson; and the international eugenics movement. More about that in an upcoming piece.
The bell-shaped curve has become a much-generalized tool for demonstrating that most people exhibit a middling amount of a particular characteristic or quality, while some have less, some more, some very little, and some a great deal. No rocket science there, in my opinion.
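That ‘middling amount’ claim is easy to see by simulation. A minimal sketch, using Python’s standard library and the mean-100, standard-deviation-15 convention familiar from IQ-style measures (the numbers are purely illustrative):

```python
import random

random.seed(0)

# Draw 100,000 simulated 'scores' from a normal (Gaussian)
# distribution with mean 100 and standard deviation 15.
scores = [random.gauss(100, 15) for _ in range(100_000)]

# Count how many scores land within one standard deviation
# of the mean (between 85 and 115).
middling = sum(1 for s in scores if 85 <= s <= 115)
print(middling / len(scores))  # roughly 0.68 -- the familiar '68% rule'
```

About two-thirds of simulated people score ‘in the middle,’ and the shares thin out symmetrically toward the extremes, which is all the bell curve really asserts.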
But in mental health we assume this ‘tendency’ holds across many socially-constructed characteristics, relying on sometimes complex information-gathering tools like psychological tests and measures and large-scale research designs. It’s still important to remind you that every bit of that work basically boils down to what people report about themselves, how they react, or how they answer questions compared to other people. We use a quantitative phenomenological approach to construct what is considered normative.
One significant and even scary assumption therein is that findings from a group of people can be applied to understanding and evaluating an individual group member, as long as they were derived using adequate criteria for being statistically ordered on a bell curve (a random, representative research sample of a population is one well-known criterion).
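The group-to-individual move described above can be made concrete: an individual’s raw score is converted into a position on the normative group’s bell curve. A minimal sketch with entirely made-up numbers, where `norm_sample` stands in for a hypothetical questionnaire’s norming data:

```python
import statistics
from math import erf, sqrt

# A hypothetical 'normative sample' of questionnaire scores
# (invented for illustration only).
norm_sample = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14, 8, 13, 15, 11, 12]
mu = statistics.mean(norm_sample)
sigma = statistics.stdev(norm_sample)

def z_score(raw):
    """How many standard deviations a raw score sits from the group mean."""
    return (raw - mu) / sigma

def percentile(raw):
    """Share of the (assumed-normal) population scoring at or below raw,
    via the standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z_score(raw) / sqrt(2)))

# The nomothetic claim in one line: this one person's score of 18
# 'means' they sit far out on the group's curve.
print(round(z_score(18), 2), round(percentile(18), 3))
```

Every step here is arithmetic on the *group*; the leap from that percentile to a claim about the *individual* is exactly the assumption the nomothetic-idiographic debate contests.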
More cocktail party words for you: the nomothetic versus idiographic debate. I don’t drink (alcohol) but these are the kinds of issues to bring up if you feel intoxicated people are crowding you. This is a philosophical debate that has never really been resolved and can be summed up with the question, can or should an individual be meaningfully compared to a normative group? Unfortunately, this ‘debate’ has become more invisible than ever under our current mental health technocracy.
If you want to know more, click here (thanks, Mr. Crane, International School of Prague). What you’ll notice in his table behind the click is that the DSM and other diagnostic systems like the ICD fit under the nomothetic approach. That’s true.
I’m strongly allied with an idiographic stance. I expand my idiographic views to getting to know unique communities in-depth (including indigenous communities). I do read normative research but don’t consider it directly applicable to unique individuals or communities in any particular way.
I take an idiographic position because I believe it’s more conservative ethically: nomothetic categories in clinical psychology are generally socially-contrived, faddish, culturally-nested, and/or pseudo-scientific. This is especially true of any ‘normative research’ derived using DSM or ICD categories. That’s why I consider the entire field of so-called ‘abnormal psychology’ to be mostly rubbish.
I’m preoccupied with the moral implications for personal liberty and freedom in imposing a statistical derivation pertaining to a group of people (bell curve) on to the lived experience of a unique and autonomous individual. I fundamentally disbelieve that group generalizations in psychology (at least) can be applied to individuals. Refer back to my phenomenalistic orientation to understand more.
I also don’t accept the idea that characteristics of a ‘random, representative sample’ from a society I view as massively dysfunctional should be used to define normality for any particular individual citizen.
At the risk of entirely monopolizing your time (well, maybe you’re on a long bus or subway ride or I’ve distracted you from less important things), I’d like to offer for your consideration (à la Rod Serling) the broadly-accepted ‘mental health disorder’ concept of depression.
We all know what depression is, right?
It’s that very popular cultural word for describing a chronic negative emotional state in somebody. Depression is also a ‘major health care target’ of the Western world with its innumerable medication ad campaigns, new-fangled breakthrough psychotherapies, self-help books, webpages, blogs, magazine articles, and media guru talk shows.
And depression is the prolonged experience of sadness, right? Well, maybe it’s more than that. Okay, it’s chronic sadness and a sense of futility and self-isolation all combined, right? There’s also loss of productivity, disruption in activities of daily living, and poor social skills. I too suffer from poor social skills.
And people get sad the world over in the same way, correct? Hey, what about the existential issues pertaining to ‘what is my purpose’ and ‘why is there so much suffering in life’? Do we just relapse to unfiltered cigs and read Sartre? What about those religious and spiritual concerns in regards to depression? Watch The Bells of St. Mary’s? (At 43:55: “It’s like a world inside us and it’s up to us what we make of it.”)
I mean, depression is not grief exactly, that’s different. Well, not in DSM-5, really. Let’s just agree that depression could start with grief, but then the person carries on with it for much too long.
Dave, you say, when you see depression, you’ll definitely know it. We all know what we mean when we say depression.
Well, maybe we don’t. At least, there seems to be a lot of room for individual variance there. Could it be that understanding depression means taking an idiographic stance and asking the depressed person what he or she means?
No, instead, let’s make the American Psychiatric Association the final arbiter of what is meant by depression.
And here’s the plot twist: What might it mean if people suffering from oppression are relabeled2 by a global biopharmaceutical research enterprise as instead suffering from depression?
It means that the political and social perpetration of oppression gets obscured, legitimate and understandable reactions are reframed as individual pathology, and mental and emotional affliction directly traceable to the social problems of racism, poverty, displacement, intolerance, war, and hatred are ascribed to deficiencies and impairments of the victims themselves.
Over the next few posts, I’ll be pulling together more specifics for you to delve into regarding how the bell curve, the nomothetic approach, and other quantitative methods inside the Western mental health movement have been used historically and contemporarily to oppress and disempower indigenous people.
* * * * *
Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.