So, the media are currently trumpeting that man flu is Definitely Real. I made a thread explaining why their journalism is shitty.
Content note: this post discusses mental illness, mentions self-harm, suicide and sexual violence
It’s been a while since I’ve considered the Guardian a decent source of news, but sometimes things get egregious. Yesterday, an article entitled “Mental illness soars among young women in England – survey” was put out, and their reporting… wasn’t very good.
A study was released finding that young women aged 16-24 are at very high risk of mental illness, with more than a quarter of the group experiencing a condition, and almost 20% screening positive for PTSD symptoms. These figures have all risen since 2007: not just for young women, but across genders and age groups. What, according to the heavy focus of the Guardian’s article, is to blame?
Social media, apparently.
The Guardian’s reporting focuses heavily on how social media is to blame, selectively quoting researchers mentioning social media to the extent that I would love to see what questions they were asked (my personal favourite: “There are some studies that have found those who spend time on the internet or using social media are more likely to [experience] depression, but correlation doesn’t imply causality.”)
Then there’s the case study: a woman telling the story of her experience with PTSD and triggers. She talks a lot about film and TV, and the stress of university, and yet somehow her case study is titled “Social media makes it harder to tune out things that are traumatic”. She mentions social media briefly in the last paragraph–while still mostly focusing on film and TV!
Now, the reason the Guardian’s twisting of this survey for its own ends is so problematic is the importance of the research. You can download the whole report here, or read a summary here.
It’s quite a well-done survey, a very robust look at mental illness in England, laying out which groups are most at risk. You know me, and how quibbly I can get about published research. This one is actually good. However, it’s worth noting something they didn’t measure in the survey: social media use. This means, of course, that it’s absolutely impossible to draw conclusions about social media and mental illness from this research. The survey authors mention that their young cohort is the first to come of age in the social media age, which is true to a certain extent, although I am in an older cohort and came of age in a world where I constantly chatted to friends online, whether I knew them in the meatspace or not. Again, it would be nice if they’d consistently measured online behaviour across studies.
I’ll quote one of the other key research findings here, because again it’s crucial, and if you read the Guardian you’d never know about it.
Most mental disorders were more common in people living alone, in poor physical health, and not employed. Claimants of Employment and Support Allowance (ESA), a benefit aimed at those unable to work due to poor health or disability, experienced particularly high rates of all the disorders assessed.
So. Let’s speculate with the results, then. What else happened between 2007 and 2014 that might have had a negative impact on people, especially those who are on disability benefits?
I’ll give you a clue. It happened quite soon after 2007, and the young cohort would have come of age into it, just as they came of age into a world of more people using Facebook.
One more clue: it rhymes with wobal winancial wisis wand wausterity.
These are young people who have grown into a world with no prospects, with a hugely gendered impact. Of course, once again, it’s just speculation, but it’s slightly more robust speculation than the Guardian’s because they measured benefit receipt and employment status.
As women, a lot of us would have chorused “no shit, Sherlock” upon seeing the results and how gendered they are. We deal with more, and it’s even worse if we’re poor.
The Guardian has a bit of a hateboner for social media, and, unfortunately, this has completely blurred its analysis and reporting of what is an important survey that actually found some interesting trends over time, as well as a bleak snapshot of the current realities.
Today, I am mostly furious about a particular capitalist value: lack of sleep. So I made some twitter threads.
Firstly, about Jeremy Corbyn and leaders. Worth noting, as an addendum, that Margaret Thatcher bragged about sleeping 4 hours a night and Definitely Never Made A Bad Decision Ever. Also, Hitler, who used stimulants to stay awake.
The public health double standard: smoking, drinking, eating sugar, etc are frowned upon, and people who do some of these things are deprived of medical treatment. Why is it, then, that an equally dangerous health behaviour–voluntary sleep deprivation–is considered all right… if not actively valued and encouraged? (and, certainly, medical professionals are subjected to hugely dangerous sleep disruption)
What do I envisage? As a transitional demand, I’d like “That’s too early for me” to be a valid and accepted reason not to attend work engagements. I’d like for homeworking and flexible hours to be the norm, and if sleep disruption is necessary for a job, for “danger money” to be paid: we are, after all, ruining our health. And, ultimately, I’d like for work as we understand it under capitalism to be abolished, but I get that that one’s a big ask, and I’d be all right with the other two demands being implemented within my lifetime.
Content note: this post discusses gatekeeping healthcare, and structural oppressions
Various NHS commissioning groups have decided to cut costs by blocking access to surgery for people deemed to be obese, and for smokers. To the terminally naive, this can seem an intuitive, common-sense solution, one which would encourage people to make better healthcare choices. The rest of us know that choice is, for the most part, an illusion, and that such barriers to healthcare access affect certain groups disproportionately–coincidentally, the same groups who make for convenient scapegoats.
First, let’s look at who’s more likely to smoke. LGBT people are much more likely to smoke than straights, and less likely to try to quit. People with mental illness are also far more likely to smoke–up to 2 in 5 cigarettes smoked will be by a mentally ill person. And of course, these groups are not mutually exclusive, with LGBT people at a higher risk of mental illness. Also, poor people are more likely to smoke, and deprivation makes it harder to stop.
When it comes to obesity, let’s first have a look at what’s deemed obese: some CCGs are using a BMI of 30 as a cut-off, which is an absolutely terrible idea. BMI is a nonsense statistic, particularly when used to calculate how fat an individual is. A substantial portion of Olympic athletes, returning after their heroes’ welcome and perhaps needing an operation on injuries, would be turned away by the NHS because their BMI marks them as too “obese” for surgery–among other issues, BMI does not distinguish between muscle and fat. It’s also statistically dodgy when someone is particularly tall or short, so Usain Bolt and Simone Biles should be glad they’re not going to find themselves at the mercy of the NHS.
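To see why this cut-off misfires, it helps to run the arithmetic. Here’s a minimal sketch in Python–the figures are hypothetical, picked purely for illustration:

```python
# BMI = weight (kg) / height (m) squared. The formula knows nothing about
# body composition, so a heavily muscled person lands in the "obese" band.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms over height in metres, squared."""
    return weight_kg / height_m ** 2

# Hypothetical rugby player: 106 kg of mostly muscle at 1.85 m.
print(round(bmi(106, 1.85), 1))  # 31.0 -- over the cut-off, refused surgery

# Hypothetical sedentary person, same height, far more body fat.
print(round(bmi(95, 1.85), 1))   # 27.8 -- merely "overweight", operated on
```

Same height, radically different bodies, and the number cannot tell them apart.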
As well as the muscular and the otherwise usually celebrated, who else is likely to be considered obese? Certain minority ethnic groups are more likely to have BMIs over 30–in the UK, particularly Black Caribbean, Black African, Bangladeshi, Pakistani, Indian and Irish people. Again, mentally ill people are more likely to be at risk, both as a result of the illness itself and as a result of medication side effects. And once again, poor people are more likely to be considered obese. People with physical disabilities are also more likely to be obese. Incidentally, one of the surgeries “obese” people are blocked from accessing is hip or knee replacement–exactly how the NHS expects them to exercise to lose weight while unable to move, it has not yet explained.
So, NHS trusts with these policies will be disproportionately picking on groups who have been historically and currently disproportionately picked on and blamed for their own misfortune. It is yet another manifestation of the general state approach to behaviour change, which goes like this:
Step 1: Deprive marginalised people of a basic need
Step 2: ??????
Step 3: BEHAVIOUR CHANGE!
Unsurprisingly, there’s no evidence that this works, but it’s a nice little bedtime story for fascists-in-denial to tell themselves, that people are being refused healthcare because they made poor life choices.
At this point, the terminally naive might pipe up that obese people and smokers are at a greater risk of surgical complications than non-smokers or thin people. Yes. That’s true. However, there are also lots of other groups at greater risk of surgical complications. Like the elderly. Or the very young. Or the malnourished. Or even moderate drinkers. Or people who get a bit cold around the time of their operation. Think of the billions that could be saved if they stopped operating on moderate drinkers: suddenly, there’d be barely any operations, especially if they also stopped operating on kids!
Of course that would be absurd: another myth in play here is that healthcare needs to be rationed at all. The NHS is in crisis, but this crisis isn’t caused by obese people, or smokers, or immigrants, or striking junior doctors, or whichever scapegoat you want to pick. This crisis has been manufactured by years of butchering the NHS. Hospitals are not given enough money to function, and are set unrealistic targets to meet on these shoestring budgets, along with a hefty dose of bloated private sector provider inefficiency. In truth, with adequate money, the NHS could happily accommodate everyone who needed treatment.
Given that the government would be perfectly happy for the NHS to go tits-up so the private sector could further cannibalise it, that’s unlikely to happen–that harm comes to the most marginalised people is simply a welcome bonus.
Content note: this post describes in detail the Prime Minister having sex with a dead animal, and relates an encounter in therapy
It hit me recently: I realised I probably have aphantasia.
I’ve gone for thirty years of my life, gladly ticking along, thinking about things, writing about things, describing and remembering things, without ever having questioned that maybe I wasn’t doing it in the same way as everyone else. When I think about things, it’s never visual. I’d sort of assumed it was much like this for everybody else.
For my entire life, I’ve thought that phrases such as “mental image” or “visualisation” were metaphorical, and it was absolutely fascinating to learn that not only was this not true, but this is not true for the majority of people. After all, how can one describe one’s own chatter of thoughts outside of the realms of analogy and metaphor? To me, “mental image” has always meant something that you’re thinking about a lot, while “visualise” meant “think really hard about doing this”. It’s hard to even express how my own thinking works, because language is so set up–as I now understand it–to reflect thinking in images.
To me, my thoughts mostly come in the form of words, feelings and sounds. I can “think out loud”: an inner monologue, often in my own voice. There’s also more passive stuff going on, almost like reading a book that you don’t need to concentrate on–except I don’t see the words, and I don’t exactly “hear” it like a muffled radio. It’s just, I don’t know, a set of background processes which I could concentrate on if I wanted to, and which I cannot really articulate as I perceive them, but it’s definitely a mush of words, feelings and sometimes even music. Maybe a little like people chatting in a coffee shop, where you occasionally pick up snatches of the conversation going on?
If pushed–for example, by tests checking your ability to visualise–I can bring up a picture in my mind to some extent. For example, the test in this article starts by asking you to picture someone you see frequently. I concentrated hard on visualising my friend, A, and I could eventually bring up something. But it was almost useless to me, telling me little about what A really looks like. What I conjured up was like a passport photo: arguably a decent likeness, but impersonal and ultimately not much like how the person really looks. A dim and useless picture, devoid of any connection with the actual person. When I think about what A looks like without being forced to visualise it, I instead think of something almost like the description of a character in a novel, with little personal quirks, like the way A always walks with purpose, as if she has to be somewhere very important.
The same is true for, say, visualising somewhere I’ve been before. It takes a lot of effort, though, like I’m squinting with my brain. I can call up a picture, but it is static and flawed, like a photocopy of a photocopy of a photocopy of an already-blurry photograph. Again, though, this information is mostly useless to me. If I wanted to remember somewhere I’d been, it’s far more vivid for me to remember specific incidents that happened in that place, as though they were scenes in a script that I was reading. It also helps a hell of a lot to create a mental image if I’m thinking about a photo I’ve seen of the place, ideally a photo that I myself took.
If I have difficulty visualising, it explains a lot that a round of therapy which leaned heavily on visualisation did not work for me. We did a lot of guided visualisation: my therapist would encourage me to visualise my anxiety as though it were a display in a museum, to visualise taking my anxiety and unknotting it. During these sessions, I would write these scenes in my head, describing in detail what I could “see”, even though I could not see anything. I would add set dressing: the whole thing read like Stieg Larsson in its levels of unnecessary detail. Nonetheless, I actually saw nothing, and didn’t find that therapy particularly helpful.
Luckily for me, my brain seems to do a lot of stuff without me knowing it that helps me function. I don’t know how I’m capable of recognising familiar faces without necessarily being able to visualise them, but I usually am.
I am aware, after reading Blake Ross’s account, that I am less impaired than him: I may not have a mind’s eye, but I have a mind’s ear and can hear music or noises or the sound of someone’s voice. I have the other senses, too, and can summon up a smell or a taste, or a brush against the skin. I also don’t have issues with memory: I can remember incidents, and so forth: they come up almost like diary entries but richer, accompanied by a host of feelings. I also don’t skip the bits in books where there’s lots of description of the setting, because I find it nice to know what a character looks like or the colour of the wallpaper in a cafe, even if I can’t see it myself.
I can appreciate visual art, but I unsurprisingly suck at creating it. Absolutely fucking suck at it, and I always have. At school, I was asked not to do GCSE Art because I am so fucking shit at art. I didn’t really mind: I always hated drawing and sculpting anyway. I knew it was something I couldn’t do and others could, but I never realised that could be down to my inadequate visualisation ability: I put it down to being dyspraxic, since I am also crap at things like learning sequences of movements (which meant PE was an unmitigated nightmare) and fine motor skills (my handwriting was fucking dreadful, too).
I know I do have visuals without the effort of the mental squint, under certain circumstances. When I dream, I dream with vision as well as the other things that happen in my head. I can even experience visual thoughts when I am on the brink of sleep, and mix it in with my conscious thought.
Sadly, I am not necessarily spared the horror of a bad “mental image”: I may not see anything, but it’s nonetheless horrible. Take, for example, that time David Cameron fucked a pig. I thought about it, almost as if it were a particularly unpleasant scene in a book. I could think about the braying of the crowd, wonder about whose lap the pig whose head he fucked was in, ask endless questions, like was it to completion, and generally feel a sense of utter disgust.
Even as I write this, I realise I am probably doing a godawful job of explaining how things happen in my mind without images, because it’s very difficult in the first place to explain what thought feels like. So what does thinking with pictures feel like? I’m kind of imagining, if my own thought processes are like a bunch of cue cards with salient information on them, that yours are like a Buzzfeed article: mostly pictures, with a couple of words around them.
For me, basically, mental images are not something I find particularly useful, even if I can access them under certain circumstances. I suppose I must have learned to work around not visualising very early on in my life, and it’s something I just don’t rely on at all. As far as I’m aware, this hasn’t affected me adversely particularly. In fact, I wonder if it was a gift all along: I find it quite easy to put things into words, most of the time. A lot of my writing practically writes itself, words tripping from me. Perhaps it’s so easy for me because that’s how my thoughts always looked anyway.
This is the final piece in a short series on engagement, avoidance and trigger warnings.
Part 1: A trip to the dentist
Part 2: The banality of trigger warnings
Part 3: Exposing the true nature of exposure therapy
Content note: this post mentions food and eating disorders.
I’ve gone on, for thousands of words, about trigger warnings, but I have not yet addressed a very salient point: that I am a massive raging hypocrite.
I often don’t put trigger warnings where they’re necessary.
I want to say it’s because I forget, and at least in part, that is completely true. But there’s a reason I forget: because I am not thinking about my audience, only about myself.
A few months ago, I baked some cuntroversial bread for the first time (yes, I’m still doing it; the starter’s still alive and well, and I have a batch in the oven as I write). This provoked rather a strong outpouring of the entire internet telling me I’m disgusting, and so, for my part, I got a little bit defensive. Not really thinking of anything but my own emotional defences, I went off on one about food hygiene generally.
I probably should have included some content warnings somewhere along the way.
I did lose a fair few followers over that, and it was only when some kind pal did something they needn’t have done, and asked me to please pop content warnings on the various food rants I was off on, that I realised I was being a bit of a dick.
I’d been thinking about myself rather than thinking about other people. And because of that, a bunch of people who had been previously engaged with me on a personal and a political level, disengaged.
I had not been considerate of people with eating disorders, and therefore I had lost some of my audience.
And that was nobody’s fault but my own.
I had, once upon a time, been of the school of thought that trigger warnings might perhaps reduce engagement. I didn’t use them at all. For the most part, I didn’t see much of a necessity: I myself seldom needed them, and only under very specific circumstances.
I only began using trigger warnings because people asked me to, and I like a quiet life.
It was close to zero effort on my part to include a little warning at the top of a post. Just typing a couple of key words, briefly summarising content.
Incidentally, having looked at my own stats, I haven’t lost any traffic on posts that include a trigger warning: in fact, if anything, engagement goes up.
Here’s a funny thing that happened as I started to incorporate trigger warnings into my own writing: I myself became more conscious. I thought more about how my writing would be received by certain groups of people; I thought more about people who had previously barely been on my radar.
I had sleepwalked along for much of my life, and the doors opened up and I began to think of other people who have historically been swept under the carpet.
Perhaps this is what those who resist trigger warnings fear most: ending up shifting, ever so slightly, away from dominant narratives centring people who were born lucky and stayed lucky.
You may have noticed I tend to use “content note” or “content warning” rather than “trigger warning” in my own writing. This is once again due to my desire for a quiet life.
A lot of tedious bores just love to weigh in whenever they so much as see a trigger warning: no wonder they think people disengage at trigger warnings, when they themselves tend to use them as an excuse not to bother.
There are also legitimate criticisms of the term “trigger warning”: it’s loaded with assumptions, specifically about PTSD. “Content note” is both more neutral and more inclusive: it encompasses the many things which people might wish to engage with on their own terms, such as other mental illnesses, spoilers, and generally things people might not want to deal with without being forewarned.
A trigger warning, though, is another member of this family of textual warnings, and one I simply find it bizarre that so many people are working themselves into such a frenzy over.
Trigger warnings seem like a strange hill to die on: they take all of fifteen seconds, probably won’t harm anyone and save you a bit of strife. Even having built a model for why one might resist trigger warnings, I still struggle to understand the visceral dislike of something so utterly banal.
When I adopted trigger warnings, I expected little to change, and for the most part the only thing that changed was within me: I became a slightly better writer and–I hope–a slightly better person.
This series was made possible by my patrons on Patreon, who give me the motivation to keep on writing. If you found this series helpful, please consider becoming a patron.
Content note: this post discusses mental illness and psychiatry, PTSD, phobias and snakes, mentions rape.
A daytime chat show: the topic is phobias. The host promises that his guest therapists will cure these phobias, right in front of our eyes, using exposure therapy. A guest, a young woman, talks about her phobia of snakes, and how it prevents her going outside.
The host then calls a man to the stage. He enters from the back, walking up the aisle between the audience for maximum effect. They whoop and cheer, because he is carrying a large snake on his shoulders. The woman on stage pales and begins to shake as she sees him coming towards her. As he gets closer, she vibrates more and more.
The man plonks the snake around her shoulders and she screams and cries, because she has a phobia of snakes. The audience is delighted by this spectacle. Their whooping intensifies with her screaming: there is something almost medieval about it. She screams until she can scream no longer. I turn off the TV, disgusted.
The scene described above is what too many people think is meant by the term “exposure therapy”, which is usually the justification given to lend a scientific veneer to the argument against trigger warnings.
Trigger warnings, it is argued, are unhealthy. The main source for this argument is the infamous Atlantic article, which was written by a psychologist. Yes, it was written by a psychologist, but not one who specialises in anything clinical–or even one who fully understands the behavioural model on which exposure therapy is based. He’s a “moral psychologist”, who naturally therefore views these things through a moral lens, rather than anything else. As the old saying goes, when all you’ve got is a hammer, everything looks like a nail.
Exposure therapy forms the core of the supposedly scientific argument against trigger warnings, but the people putting this across misunderstand what exposure therapy actually is.
Exposure therapy isn’t simply randomly exposing people with anxiety disorders to their anxiety triggers, and assuming they’ll eventually get better and grow some resilience. Exposure therapy is a wide term for a number of different approaches which all involve exposing the client to the thing that causes their anxiety under controlled circumstances. In some approaches, the person might be trained in coping mechanisms before being exposed to their trigger in a safe space. In others, there might be a stepped exposure to the trigger with the support of the therapist–using the example of snakes, that might be first looking at a picture of a snake, then touching a bit of snake skin, eventually working up to holding a snake over the course of the therapy. Some approaches might even use virtual reality or visualising the trigger, and so forth. Crucially, though, exposure therapy isn’t just exposing someone to their trigger and assuming they’ll just get over it and become a stronger person: the exposure happens in controlled circumstances–and usually in a manner which the person controls (indeed, in PTSD, exposure therapy is more effective when it’s self-controlled rather than therapist-controlled).
Exposure therapy is more commonly used in treating phobias: when used for PTSD, the most common form involves a combination of visualising and processing traumatic memories with the help of a therapist, and taking a hierarchical approach to exposing oneself to triggers in real life. Again, trigger warnings are not at odds with this: hell, providing information about content could help someone undergoing exposure therapy undertake their week’s task of, say, watching a rape scene in a film, by telling them in advance that the rape scene is there!
The fundamental lack of understanding of exposure therapy is perhaps a driving force in the peculiar belief that not allowing survivors control over their engagement with triggering material is somehow for their own good.
Far from being at odds with various therapeutic models, trigger warnings can be congruent. It means that exposure occurs in circumstances which are controlled by the person rather than just at random. Exposure therapy is hardly the only model for treating PTSD, and may not necessarily be the best: however, I have not managed to identify a treatment for PTSD which is incompatible with trigger warnings.
Of course, the other primary conjecture used against trigger warnings is that they cause avoidance. The only attempt to systematically research this that I’ve found is an abstract for an unpublished undergraduate dissertation, with a tiny sample size of twenty and rather a lot of tests run on that very small data set (including dividing it into subsets!). If there’s any evidence of the effect of trigger warnings on avoidant behaviour, I’d love to see it. Note exactly what I asked for: evidence. I am not asking you to leave a screed in the comments about how your feelings suggest this is so (you can call it “common sense” if you like, but it isn’t).
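For a sense of why that matters: run enough significance tests on a sample of twenty and some will come up positive by chance alone. Here’s a minimal sketch in Python–a generic illustration of the multiple-comparisons problem, not a reconstruction of the dissertation’s actual, unpublished analysis:

```python
# Simulate a "study" that runs many comparisons on pure noise, with n = 20.
# Even with no real effect anywhere, "significant" results turn up routinely.
import random
from math import sqrt
from statistics import mean, stdev

random.seed(42)

def noise_study(n=20, n_tests=15, t_crit=2.1):
    """Draw n random scores, split them into two arbitrary subsets, repeat
    n_tests times, and count comparisons crossing |t| > t_crit (roughly
    p < .05 at ~18 degrees of freedom)."""
    hits = 0
    for _ in range(n_tests):
        scores = [random.gauss(0, 1) for _ in range(n)]
        a, b = scores[:n // 2], scores[n // 2:]
        se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
        if abs(mean(a) - mean(b)) / se > t_crit:
            hits += 1
    return hits

studies = [noise_study() for _ in range(1000)]
print(mean(studies))                       # ~0.75 spurious "findings" per study
print(sum(h > 0 for h in studies) / 1000)  # ~0.54: over half find "something"
```

Slice a tiny data set enough ways and the noise will oblige.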
If warnings about content were actually harmful, we would expect to see psychologists coming out against the banal, everyday content warnings that you see on TV, or before films. We don’t.
So, trigger warnings aren’t going to harm anyone. Are they actually helping anyone?
Sadly, we don’t know, because there is an unwillingness to collect the data which could tell us whether they’re effective. Given how politically charged the issue is, there aren’t any large-scale studies on the impact of using trigger warnings in higher education, which is a primary battleground in this debate. There don’t seem to be any quality studies at all.
This is likely because very few institutions have tried, despite it being fairly easy to pilot. What we do know is that students who have experienced violence are more likely to drop out of college. We also know that a frighteningly large portion of the population has experienced sexual violence. With all this in mind, what exactly do lecturers have to lose by piloting trigger warnings to see whether they improve retention rates?
Of course, some may wonder how this all fits into practice when teaching traumatic content. An article in The Criminologist, the American Society of Criminology’s newsletter, offers some evidence-based suggestions. Criminology, of course, necessarily involves teaching subject matter which can be heavily traumatic. Trigger warnings are recommended as one aspect of teaching about victimisation:
Warning early and often via multiple mediums provides students maximum opportunity to engage in informed decision-making and feel that they are in control. The first trigger warning should be on the first day of any course that includes information with the potential to emotionally trigger students. Trigger warnings should be given in at least the two classes before the presentation of potentially triggering material (or engagement with it outside of class, if that is the case), as well as at the beginning of the day when the material is presented. If an assignment is going to be shared with others, include that detail ahead of time (e.g., Hollander, 2000), so students can control how much of their experiences they share. These steps allow students time to think about what they need to do for self-care (see below) and give them an opportunity to talk to the instructor about their concerns and possible alternate arrangements.
Meanwhile, an article in the American Psychological Association’s Monitor on Psychology suggests the following guidance, emphasising that trigger warnings actually involve those who need them taking responsibility:
Some professors, including Zurbriggen, encourage their psychology students to start doing so by taking responsibility for their reactions at the beginning of a course. She asks students to create a list of coping practices and people they can consult if they are affected by course material.
“The way the story is framed [in the media] sometimes is that students are so vulnerable or that they need to toughen up, and that’s not the issue,” says Zurbriggen. “Most trauma survivors have a lot of resilience. Providing information to students always makes the class a better experience and prepares them to dive into the material in a way that promotes learning.”
Despite all this, the evidence is sparse: the question becomes political, and therefore the objections, too, are political, and largely driven by emotion. It’s therefore only fitting that tomorrow’s conclusion to this series will also be political and largely driven by emotion and my own experiences.
Part 4: A strange hill to die on
This series was made possible by my patrons on Patreon, who give me the motivation to keep on writing. If you found this series helpful, please consider becoming a patron.