Liz Jones, spunk-heists and adult egocentrism


Oh dear. The professional trolling from the Daily Mail has today produced for us this eggy wet fart: “THE CRAVING FOR A BABY THAT DRIVES WOMEN TO THE ULTIMATE DECEPTION”, by Liz Jones. The link is clean; it won’t give the Mail the traffic they desperately crave from printing such utter cock.

Liz Jones’s thesis is that women in their late 30s and early 40s are so desperate to have babies that most of them deceive men into getting them pregnant by stealing their sperm. Jones’s evidence for this assertion? She’s done it herself.

Because he wouldn’t give me what I wanted, I decided to steal it from him. I resolved to steal his sperm from him in the middle of the night. I thought it was my right, given that he was living with me and I had bought him many, many M&S ready meals.

The ‘theft’ itself was alarmingly easy to carry out. One night, after sex, I took the used condom and, in the privacy of the bathroom, I did what I had to do. Bingo.

Further evidence for Jones’s statement comes from anecdotes about friends, who may or may not exist, who apparently sneakily pretended to be on the Pill, and an unreferenced survey which suggested that 42% of women said they “would” do it*. Curiously, even the examples she cites of possibly imaginary friends conducting clandestine jizz-burglary seem to be more like examples of women longing for babies without nicking any semen. The survey cited also provides rather poor evidence for her claim: it says “would” do it, not “have done”. That’s quite a difference, there.

The only real evidence for the phenomenon provided by Jones is that she herself did it. Somehow, this is extrapolated into a dire warning that men should be careful when sleeping with women over the age of 37, as the women are probably only after their gametes. This argument has been used countless times by MRAs, but this is perhaps the first documentation of it actually happening. In one instance. By Liz Jones.

Liz Jones is probably not a well woman. Steven Baxter writes eloquently about why she probably deserves our pity: this is not the first instance of Jones owning up to erratic behaviour. She has run herself into debt, and here she expresses a desperation for children so strong that she resorts to downright immoral methods. It’s worrying and kind of tragic. She probably needs help rather than having her myriad psychological issues played out in a national newspaper.

In her piece on spunk-heists, Jones displays a psychological effect called adult egocentrism. Egocentrism is a cognitive bias wherein we fail to differentiate our own thoughts from those of others, and assume everyone else thinks the same way as us. In other words, it’s kind of the opposite of absorbing group norms: rather than internalising the opinions of others, we project our thoughts onto them. From a developmental perspective, we usually grow out of thinking egocentrically by adulthood.

Not everyone grows out of it, though, and for some it certainly persists into adulthood. For example, research by Kruger and colleagues found that people are hugely overconfident in expecting others to identify sarcasm in text-based communication, apparently because they “hear” their own sarcastic tone as they write the message. For my undergraduate dissertation, I replicated this research, finding something similar with politeness. The major limitation in these studies was that the participants were college students, who tend to show similar levels of egocentrism to adolescents. Furthermore, it may be that the university environment brings out egocentrism, rather than it being a pervasive trait.

The level of egocentrism shown in Jones’s piece goes far beyond assuming someone will guess you’re being polite in an email. If she isn’t just trolling us, Jones genuinely seems to believe that because she’s done something, every other woman in her demographic bracket will do the same. Now, I’m hardly one to armchair diagnose, but it’s worth saying that this is a characteristic of the Cluster B personality disorders, which include antisocial personality disorder (“psychopathy”), borderline personality disorder (a dubious classification: it may be pathologising extreme femininity, and the label is often slapped on patients the doctors don’t like), histrionic personality disorder and narcissistic personality disorder. Essentially, high levels of adult egocentrism are thought to be somewhat pathological.

We must take Jones’s warning to all men about nasty women and their spunk-stealing ways with a whole mine of salt, then. She is projecting her own experience and behaviour onto everyone else, and it is neither healthy nor factually correct.

And of course, spunk-stealing behaviour is abhorrent. Deception takes away the other person’s capacity to consent to the sexual encounter, and in my mind it verges on sexual assault. It is serious, it is thoroughly wrong, and I am glad that this is something most women wouldn’t dream of doing.

Or perhaps I’m wrong here. Maybe we’re jizz-robbing harlots, and I’m too egocentric to notice.


__

*A little bit of research leads me to discover that the results were published in the reputable scientific journal That’s Life! magazine.


Are all coppers bastards?

I know a song
And it isn’t very long
It goes
All Coppers Are Bastards!

-Traditional protest song

ACAB. It doesn’t mean Always Carry A Bible, which explains why many who have the letters tattooed across their knuckles do not have any religious texts about their person.

Some people hate the police. Really fucking hate the police, usually following a negative experience like a beating, repeated racially motivated stop-and-searches or other violations of human rights. Others mistrust the police thoroughly, feeling it might be better not to get them involved. Many more have a neutral opinion of law enforcement: they would call the cops after a mugging but otherwise display indifference. Some even love the police. Usually it’s people from the first group who believe that all coppers are bastards.

All of these evaluations of the police, though, are based on anecdote and experience. A good experience with the police will lead to a higher personal evaluation of the police, a bad experience the opposite.

Where does the truth lie? Are all coppers bastards?

It is time to do some science.

The police personality

The first question we need to ask ourselves is, is “being a bastard” a personality trait? There are certainly some kinds of personality which seem obnoxious and unpleasant, such as right-wing authoritarianism or the “dark triad”, a personality type which includes narcissism, Machiavellianism and psychopathy. Fortunately for us, there’s no evidence to suggest that police tend to be narcissistic, Machiavellian psychopaths or right-wing authoritarians, although there certainly do seem to be some personality traits which are common to police.

Police applicants are subjected to tests at the “interview” stage, and studying the difference between those who make the cut and those who do not can provide insight into the police personality. In one comparison, it was found that successful applicants to join the police were more dominant, more independent, more intelligent, more masculine and more empathic than the unsuccessful applicants. Presumably, the unsuccessful applicants went on to become bailiffs.

A problem with this method is that those who apply to join the police may well be different to the general population. To better see if there is a “police personality”, it may be prudent to compare police to the non-porcine population. In such comparisons, police emerge as more conservative, “tough-minded” and extraverted than general population norms (matched to the police sample by socio-economic status, the statistical way of describing class). One study compared new recruits, police with less than two years of experience and the general population. Both groups of police were found to be more conservative and authoritarian than the general population, although training seemed to temporarily lower levels of conservatism and authoritarianism. However, time spent in service also led to a more intolerant view of immigration and more support for the death penalty: the authors concluded that the police attracts people who are conservative and authoritarian, and while training has a temporary “liberalising effect”, service results in greater levels of racial intolerance.

There are several issues with police personality research. Sample sizes in studies are often fairly small, and it is difficult to choose a representative comparison group. Furthermore, it is difficult to tease out whether personality differences are a product of an internal trait or of socialisation within the police. Certainly, the shifts in test measurements over time would suggest some effect of being in the police. If all coppers are bastards, it may be a product of their social environment rather than their having been immutable bastards all along.

Police culture

Several attempts have been made to study the “culture” in which police are socialised: in other words, police norms and what police as a group believe to be acceptable behaviour and beliefs. Syntheses of insights from psychology, sociology and anthropology suggest that police culture is characterised by an “us and them” mentality; an ethos of bravery, autonomy and secrecy; and authoritarianism. Furthermore, there is a strong sense of hierarchy among police: they respect taking orders from their superiors. These cultural aspects may alienate them from the rest of the population, thus reinforcing their own social norms.

Testing the effects of police culture is a difficult task: it is tough to investigate something so comprehensive empirically. One study investigated whether police brutality was related to police social norms. This was done by a survey methodology: police officers were asked how severe they thought deviant behaviours such as corruption (in the form of accepting gifts), excessive force and theft to be. They were also asked how serious they believed their peers thought these behaviours were. Of the three types of deviant behaviour, corruption was thought to be least serious, while theft was thought more serious than excessive force. The perceived opinions of their peers were an important predictor: police officers who believed their colleagues thought excessive force not serious were significantly more likely to have been complained about by the public. This suggests that the opinions of fellow police are important: in an environment where violence is thought to be acceptable, police may be more violent. A flaw in this study is its self-reported nature. A better test would be to use a network approach and identify whether more violent police socialise more with other violent police.

Culturally, acceptance of the Human Rights Act is low among police. A qualitative interview study found that the introduction of the HRA has done little to raise awareness of human rights. Instead, the legislation has become a kind of bureaucratic paperwork used by officers to justify and legitimise their existing practices: the authors conclude it is used as a way of “protecting officers from criticism and blame”. As police culture values secrecy, this is hardly surprising.

One very important aspect of police culture is that police wear a uniform. A uniform sets the police apart from everyone else: it is hardly normal to wander round in a tit-shaped hat and a high-vis stab-proof vest, after all. This increases the isolation of the police from everyone else and may reinforce the culture they have created. The uniform itself produces interesting psychological effects: it can create a strong sense of identity which can lead to the negative effects discussed in the next section. The colour of the uniform, trivial as it may seem, also matters. Most police wear dark colours–the Met wear black. The problem with black uniforms is that they lead to aggression. No, really. One study identified a clear link between aggression and sports teams wearing black, which has obvious implications for the kind of policing we may see. One wonders, then, whether the high-visibility vests and jaunty powder blue baseball caps we see on the Territorial Support Group in crowd situations are actually a measure to stop them from beating the fuck out of people indiscriminately.

The scary 60s social psychology effects

The 60s was an interesting decade for psychology: social psychology had taken off, and ethics boards had not yet clamped down on doing really disturbing research. One of the most famous of these studies is the Stanford Prison Experiment. In this study, twelve participants were randomly allocated the role of prisoners, and another twelve that of guards. The guards were given khaki uniforms and mirrored sunglasses to prevent eye contact. They also carried wooden batons, which they were not allowed to use on the prisoners–they were just props. The prisoners were dressed in smocks and stocking caps to cover their hair. After a mock arrest, the prisoners were interned in the basement of a university building. The guards were instructed that they were not allowed to physically harm the prisoners. Everyone was psychologically healthy when the study began.

Despite all of these safeguards, it went to shit pretty quickly. The guards began using psychological methods of torture: removing prisoners’ mattresses, forcing them to repeat their prisoner numbers over and over, refusing to allow prisoners to use the toilet or empty the toilet-bucket, and punishing prisoners by removing their clothes. A prison riot broke out on the second night. After six days, the experiment was called off. Some of the guards expressed disappointment at this: they were enjoying themselves.

This shocking study demonstrates an effect called deindividuation: the loss of a sense of personal identity in a crowd or role-play. In this situation, merely putting on a uniform and being given power led to horrifying instances of sadism. The implications of this for the police are terrifying.

Deindividuation often appears to lead to very negative psychological effects, as demonstrated in a recent Derren Brown experiment, The Gameshow. Derren gave the participants masks and the power to make decisions which would affect another person’s life. By the end of the hour, they had had this random person falsely accused, arrested, kidnapped and run over by a car. I would recommend watching the show: he gives a brilliant account of deindividuation. Of course, Derren Brown being Derren Brown, he is somewhat misleading–he uses another phenomenon on top of deindividuation, and eggs the crowd on. That other phenomenon, obedience, can have consequences just as dire as deindividuation.

Police forces have a hierarchical structure: orders come down from above. At our most wishful, we tend to hope that the police are moral human beings who will disobey the orders they are given if those orders are horrific. Stanley Milgram believed something similar–that people, after the Second World War, would no longer “just follow orders”. He designed an experiment to test this.

In the Milgram experiment, there was one participant and two stooges. The participant was told they were taking part in a memory test. One stooge played the learner, the other the researcher. Every time the learner got a question wrong, the participant was told to give them an electric shock. Each time, the shock was of a greater voltage. The learner would scream, and eventually go silent. When participants wavered, the researcher prompted them to give another electric shock.

65% of participants delivered the highest level of electric shock, 450 volts. This was delivered after the learner had gone silent, and was beyond lethal. Our good friend Derren Brown demonstrates the phenomenon with none of his typical misdirection. This is actually how it happens.

The capacity to “just follow orders” is within most of us. When combined with the orders the police may receive, this is frightening.

So, are all coppers bastards?

Some police officers may be lovely people. They might be the nicest people in the world when off duty. While at work and in their uniform, though, they are unlikely to be on your side. Combine a culture which can legitimise and reinforce violence with racial intolerance and the basic human capacity to become sadistic in a uniform and obey horrific orders, and a terrifying picture emerges. Can we trust a police officer on duty? Probably not. The capacity for being a bastard is in all of us, and the job brings it out in coppers.

Magic numbers

Let me start with a pop cultural mathematical axiom: the rule of three. In order to best calculate the number of people a man has slept with, divide the figure he gives by three. For a woman, multiply the figure she gives by three. Essentially, people lie about their magic number. And there are gender differences in the format of this lie.
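
Taken literally (and it really shouldn’t be), the “axiom” is just this bit of arithmetic. A throwaway sketch, nothing more:

```python
# The "rule of three", taken literally. A joke heuristic, not science:
# men allegedly inflate their count threefold, women deflate it threefold.
def estimated_partners(reported, gender):
    return reported / 3 if gender == "man" else reported * 3

print(estimated_partners(9, "man"))    # 3.0
print(estimated_partners(3, "woman"))  # 9.0
```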

It seems that science has uncovered some truth behind the notion. In this [unfortunately paywalled] study, the researchers made gender differences in sexual behaviour disappear using a nifty trick: the bogus pipeline methodology. In bogus pipeline studies, participants are hooked up to something they are told is an infallible lie detector machine. This method has been widely used in psychology, and the answers it elicits seem to be consistent with the truth–for example, in drug studies, it correlates with physical measures of drug use. A similar method was used in The Wire, in the famous photocopier scene.

Thinking that a magical machine could tell whether they were lying, participants were suddenly far more willing to be honest about their magic numbers and other aspects of their sexual behaviour. Contrary to societal expectations, there were no gender differences. Magic numbers and other experiences were the same.

It was not quite as dramatic as the rule of three would predict, but the pattern was clear: without the fake lie detector, men say they have had sex with more people; women, fewer.

The rule of three states that this effect takes place because women don’t want to seem like sluts, while men want to seem like players, and this was largely similar to the conclusion the authors of the study drew: people exaggerate or downplay their level of sexual experience due to expectations of their gender. In general, our societal expectation of sex and sexuality is that sex is something men want and women put up with to maintain a relationship. Throw in a hefty dose of slut-shaming levelled at women and it’s easy to see why people might feel a little uncomfortable with being truthful about the sex they have been having.

On a personal level, I hate it when someone asks my magic number, because I honestly don’t know an exact total. I’ve never really bothered counting.

What exactly ‘counts’ anyway? In order to calculate one’s magic number, one needs to define sex somehow. Some consider sex with a man to count if it involves a penis penetrating something: this is heteronormative and phenomenally narrow. Sex can be mindblowing without any dick-in-a-hole contact. And what of sex between two women? There is still a pervasive view in the mainstream that it’s not really sex, and if it is, what makes it become sex? The answer here, when I’ve asked, is generally ‘oral sex counts’. Once again, creativity is lost.

And what of group sex? Sometimes you can share a profound connection with another person, without ever touching each other.

In short, it is remarkably difficult to quantify ‘what counts’ after any shift away from the monogamous, heteronormative model of sex. So even if I wanted to, I couldn’t count my number of sexual partners.

When I am asked, it is almost always by heterosexual men. Often I decline to comment, as it’s not a particularly polite question and its answer should be of no consequence. Sometimes, when pushed, I lie, pulling an imaginary figure out of thin air just to make the conversation stop. Only once when directly asked did I honestly answer: I don’t know. Only once was I asked by someone I felt could handle the truth.

I still can’t understand the fixation with reducing an individual’s wealth of fucking and fingering and frotting and filth to a bare, basic number. It’s so much more than that.

And yet, heteronormativity adopts this approach and people share imaginary figures they think someone else wants to hear. Can we not just abandon the nonsensical concept entirely?

Unlucky and lucky

Today is World Mental Health Day, and I mark it with the revelation that I have depression. One in four people will be affected by mental health problems at some point in their lives, and I am on the wrong end of those odds. Still, I am not alone: I know dozens of people who are affected by a rainbow of mental health problems. Sometimes, given my social circle, I forget that in our general culture, mental illness is still massively stigmatised.

And it is. There are many who do not believe that mental illness is “real”: being “all in the head” is treated as somehow distinct from physical illness. This is not true: many mental health problems require treatment, mental illnesses can be disabling, and the distinction between diseases of the mind and diseases of the body is a false one anyway. Despite this, when I go through bouts of depression, I am harangued by work colleagues about when I’ll be “over it” and back. Most days, I see tabloid newspapers screaming about how people are claiming disability benefits for depression. Of course they are. It can be debilitating.

Then there’s the treatment. I waited ages before I got any treatment. One dear friend of mine was twice referred to the wrong sort of counselling–only discovering this after having waited to receive this treatment for months. Another friend asked for bereavement counselling and was curtly informed there is a nine month waiting list for that. Treatment of mental illness leaves a lot to be desired.

Then there’s the having to explain to people that sometimes I won’t get out of bed all day, or I might run off in tears, or react strangely to something, and it’s not like there’s a magic wand to cure this problem. I’m different, basically, and that’s sometimes a little difficult.

Despite all of this, maybe I’m lucky–just a little bit lucky. As I mentioned, today is World Mental Health Day, and I have just given a run-down of the experience of a not-impoverished person living in the capital city of a developed country.

If I suffered from mental health problems somewhere else in the world, I’d probably be a lot worse off. Stigma is higher than it is in a reasonably aware society. Four in five people in developing countries receive no treatment at all, even though treating a condition like depression is as successful as treating HIV with antiretrovirals. Mental health problems interact with other problems people face: people with HIV, cancer or other chronic conditions are more likely to experience depression, and as a result of their depression less likely to adhere to treatment regimens for their physical conditions.

And, of course, the elephant in the room: mental illness is a killer. Every 40 seconds, someone commits suicide.

There’s a lot to be done, and it needs to happen globally. Morally, we cannot let people continue to suffer from illness, and we need to get better at supporting people, both through treatment and through destigmatisation. Beyond morals, even to a cold capitalist it makes sense: improving mental health provides a big, happy workforce and a bunch of cheery consumers.

This is what World Mental Health Day is for: let us be aware of the vast public health problem in front of us, and give us the will to fix it.

Man-flu: is it a real thing?

As I write this, I have tonsillitis. So does a male friend of mine. It came on at around the same time two nights ago, and we’ve both been taking the same medication. As I write this, he is curled up in a little ball, unable to swallow. Me, I’m full of soft food and blogging. So what is the difference here? Does my friend have man-flu? Is man-flu even a real thing?

Man-flu is the term used to refer to how men always seem to be iller than women. With a cold, men are more likely than women to label it flu. Apparently.

Note the distinct lack of hyperlinks in the above paragraph. This is because the idea of man-flu is based on anecdotal evidence and a web survey of readers of Nuts magazine. Nuts has a distinctly male target demographic, and the survey asked some rather leading questions. From self-report, then, there is little reason to believe man-flu exists at all.

But then there’s the science–the actual, sciency-evidence-stuff that means man-flu must exist, and that men do get sicker than women. A study came out showing that men have weaker immune systems than women, because female hormones improve the immune system. It seems so clear-cut when viewed like that. Man-flu exists.

Except it doesn’t. That study was conducted on mice which were given a gene that generally doesn’t exist in humans. I don’t think any more needs to be said about how thoroughly inapplicable that finding is to real human beings.

Then there’s the evolutionary explanation, which sets my teeth right on edge, as anything attempting to explain differences between men and women by the medium of “we were made this way” does. This explanation goes back to the hormones again: testosterone makes men more vulnerable, apparently. It all comes down to sex, and men have swapped the ability not to get knocked out by a little sniffle for greater reproductive success. There’s also another study which suggests women’s immunity drops to “male” levels after the menopause. Once again, the evidence to support these claims is shaky at best: it comes from single studies.

In terms of single studies, there are also some which suggest that women are worse off. For example, women tend to take more sick days, and tend to perceive more pain. Of course, these studies do not prove the existence of “woman-flu”; they provide roughly the same level of evidence as the studies “proving” man-flu.

In short, then, man-flu probably doesn’t exist, at least not in any way which has been scientifically detected. Perhaps the effect is down to socialisation: men may tend to milk their illness more than women because they have been taught to do so by the pervasive man-flu myth. Or perhaps it is down to stress: in one study of man-flu, the results were found to be explicable entirely by stress. It is entirely possible that the way men are taught to cope with stress (suck it up!) impacts badly on their immune systems and makes them more ill.

At any rate, men do not seem to be sicker than women purely as a function of gender. If man-flu exists, it is a social phenomenon.

So my friend, the poorly friend, is not more ill because he is a man. I am not feeling better because I’m a woman. It’s likely to be down to individual differences: I am the sort of person who takes illness with a lot of stoicism. Once, while pissing blood from my head and in a post-seizure daze, I tried to send an ambulance away as I had decided I was completely fine and I could handle it myself. Apparently, my grandmother was much the same, and once tried to hide the fact she was having a heart attack as she didn’t fancy being ill at that time.

Individual differences. Socialisation. These are what make certain people sicker than others. Our gender is probably thoroughly irrelevant.

Rats and levers: how to smash capitalism with behavioural psychology

Almost eighty years ago, rats in boxes led to a new paradigm in our understanding of human learning. The famous B. F. Skinner trained rats to press a lever to receive food. Later, with the same methodology, he taught pigeons to play table tennis.

The phenomenon is called “operant conditioning”, and it is pervasive. It is the ability to connect a behaviour with its consequences: press a lever, receive food; press a lever, avoid pain. It is one of our primal impulses: behaviour leads to an effect. Without that ability, we would get very little indeed done.

For operant conditioning to happen, we need to feel pleasure when we receive a good stimulus and an unpleasant feeling when we receive a bad one. We can see this when the brain goes wrong: when the ability to feel a sweet little dopamine kick at a pleasurable stimulus is impaired, learning too is impaired. To learn to associate our behaviour with something pleasant, we need to be able to feel good.
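
To make that loop concrete, here is a toy sketch of the idea: a hypothetical simulated rat whose tendency to press the lever gets nudged up every time a press is followed by food. It is an illustration of the principle only, not a model from any study mentioned here.

```python
import random

# Toy illustration of operant conditioning (not from any cited study):
# a simulated "rat" whose tendency to press a lever strengthens every
# time the press is followed by a reward.

press_strength = 0.1   # initial tendency to press the lever (0 to 1)
learning_rate = 0.2    # how much each rewarded press nudges the tendency

for _ in range(50):
    pressed = random.random() < press_strength  # does the rat press this trial?
    if pressed:
        reward = 1.0                            # food pellet drops
        # strengthen the behaviour towards the reward it produced
        press_strength += learning_rate * (reward - press_strength)

print(f"Tendency to press after 50 trials: {press_strength:.2f}")
```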

It is hardly surprising, then, that the system which we have in place taps into this basic system so well. We spend money, we receive something nice, we feel very good about that. It’s nice to have nice things, and so it’s nice to spend money. We learn to become consumers, because ultimately the stimulus-response effect is positive. Spending money is all too simple. We walk into the shop, bung our money down, and in return we get our nice new handbag or book or delicious burrito. A sweet little dopamine kick tickles our mesolimbic pathways. The response gets carved in deeper.

It goes slightly deeper than this, though. Money is an interesting reinforcer: on its own it has no value whatsoever. It is purely symbolic. You can’t eat a fiver; you can’t play with it much beyond folding it in a way to give the Queen an amusing sadface; a fiver is not entertaining or useful in any tangible way. It is only by exchanging that fiver for the real reward that it has value. This is called a secondary reinforcer. It is comparable to training a pet with a clicker: the animal responds to the clicker because it associates the click with rewards.

We perform all sorts of actions which are reinforced with this essentially valueless stimulus: we sell our labour, we exchange goods for money, we fill in forms. Some of the money is spent on actual rewards: that shiny new handbag, that book, that telly, all wrapped up with the bow of a sweet little dopamine kick.

The thing with Skinner, though, is that the rats weren’t always working for nice things. Sometimes those rats were starving and they were stuck in a box, pressing that lever so they could eat. This is, usually, how we exchange our tokens: basic food to keep going, shelter, warmth and even water. We are sitting in that box frantically pressing that lever just to stay alive.

This is how the system feeds. We need the money, so we perform the actions. Every so often, we’re rewarded with something to make us feel good. It’s smart. It’s instant. It taps into a basic learning system: even a rat can do it.

And that’s why it’s so hard to dismantle. Alternatives to the system, and working against the whole shitty thing, do not always tap into that instantaneous stimulus-response system. We press the lever and nothing happens. Perhaps ten minutes after the lever press, the pellet of food drops down, but by this point the association is not there. The adage “good things come to those who wait” applies here: those who can build associations and put off an immediate reward in favour of a bigger one in the future tend to do better out of life. For most of us, though, this ability involves a cognitive struggle. And sometimes it’s easier to just play on the immediate stimulus-response reactions.
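
As a rough illustration of why the delay matters (again, just a toy sketch with made-up numbers, not anything from the learning literature): if the credit a behaviour gets from a reward decays with the time between the two, a pellet that arrives ten minutes late barely moves the association at all.

```python
import math

# Toy sketch: the same reward, arriving later, produces a much weaker
# update to the learned association. The decay constant is made up.

def association_update(reward, delay_seconds, learning_rate=0.2, decay=0.05):
    """Credit assigned to the behaviour decays exponentially with the delay."""
    credit = math.exp(-decay * delay_seconds)
    return learning_rate * reward * credit

print(association_update(reward=1.0, delay_seconds=1))    # near-immediate: ~0.19
print(association_update(reward=1.0, delay_seconds=600))  # ten minutes later: ~0.0
```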

In activism, a lot of the time we find ourselves bored and standing in the miserable drizzle until we finally fuck off to the pub. Nothing is achieved. In part, this is because our goals are too vast: we will hardly dismantle capitalism by standing in the rain feeling cross and handing out leaflets. What if, though, our goals were smaller? What if, for each action, we set a simple goal: to change one mind, to block a road for an hour, to disrupt a bank so it loses a certain amount of business that day? These goals are achievable, and the trip to the pub with comrades suddenly feels like a little treat, combined with a fizz of dopamine. This method is called mastery, an offshoot of operant learning: measurable behaviours, measurable and achievable goals, slowly building.

Satisfaction can come from sources other than buying, as many in the left wing community will know. I take more joy from a scarf I have knitted than one I have bought. I feel happier sharing a meal cooked with friends than eating something pricier in a restaurant. Gratification is possible, and consumerism is not the only way to get that sweet little dopamine kick. It is simply the most salient one.

While this works for activists, it is preaching to the converted. How can this rat-and-lever response be used to help those who are currently buying wholesale into the system? What we want is for people to know about the problems and act to become part of the solution. The bad news is, those leaflets we hand out in the rain are only useful for awareness-raising. Providing information does not tend to lead to a magical change in behaviour; for people to act, something more is needed.

One way is to negate the reinforcing value of money and the things bought with money. There are few legal ways of achieving this, and it is not necessarily a feasible course of action–and for our own morale, pursuit of the feasible is important. The other option is gradual: start by helping people to do simple tasks which are rewarding, things that make them feel good. Simplicity, at first, is crucial: start off with an e-petition, perhaps. E-petitions are largely pointless, but the signers tend to feel good about themselves afterwards. From the petition, progress to a slightly larger task–such as writing to an MP. Escalate slowly and gently, helping people move to increasingly larger tasks until eventually they, too, are ready for revolution.

This is, essentially, why movements such as UK Uncut have been so successful, with mass appeal. UK Uncut actions involve performing a simple behaviour (sitting down in a shop) with measurable results (the shop loses business). It is hardly surprising that this movement has been a gateway for many into activism: it taps into that simple stimulus-response system.

Awareness of this basic response can help us shape the world. It can help us achieve the ultimate reward: liberation.

Right wing authoritarianism: you’ll probably recognise this personality trait

Ever found yourself trapped in an argument that is going nowhere because the other person is so dogmatically right wing that reasoning is impossible? Perhaps they’re cheerfully bellowing “hang ’em all!”, and you want to point out that perhaps the death penalty is a bad idea. Maybe they’re griping about immigrants “coming over here and taking our jobs”, or suggesting that gay marriage is wrong as marriage can only exist between a man and a woman. It might be best to just down tools. That person is likely to be a right wing authoritarian, and you probably won’t change their mind.

What is right wing authoritarianism?

Right wing authoritarianism (RWA) is a personality trait, conceived by psychologist Bob Altemeyer. The right wing authoritarian personality consists of three attributes:

  1. Authoritarian submission: submissiveness and acceptance of authorities which are perceived to be legitimate and established in society, such as government or the police.
  2. Authoritarian aggression: aggression against outgroups and “deviants”–people who the established authority marks as targets. Examples of this include travellers, immigrants, Muslims and other kinds of scapegoats.
  3. Conventionalism: high adherence to traditions and established social norms. This can manifest in a respect for “traditional family values”, for example.

RWA is measured using a scale consisting of 20 items, with scores ranging from 20 (no RWA) to 180 (high RWA). I scored 22; try it for yourself. Depending upon the sample, university students often score around 75, while a large-scale American study found the average to be about 90.
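
For the curious, scoring a scale like this is mechanical. Here is a minimal sketch, assuming each of the 20 items is rated from 1 to 9 and that items worded in the opposite direction are flipped before summing; the positions of the reverse-worded items below are made up for illustration, not taken from the actual scale.

```python
# Minimal sketch of scoring a 20-item scale like the RWA scale, assuming
# each item is rated 1-9 and reverse-worded items are flipped before
# summing. The reversed-item positions below are made up for illustration.

REVERSED_ITEMS = {1, 4, 7, 10, 13, 16}  # hypothetical reverse-worded items

def rwa_score(responses):
    """Sum 20 responses (each rated 1-9), flipping reverse-worded items."""
    assert len(responses) == 20 and all(1 <= r <= 9 for r in responses)
    return sum((10 - r) if i in REVERSED_ITEMS else r
               for i, r in enumerate(responses))

# The least authoritarian possible answers score 20, the most score 180.
least = [9 if i in REVERSED_ITEMS else 1 for i in range(20)]
most = [1 if i in REVERSED_ITEMS else 9 for i in range(20)]
print(rwa_score(least), rwa_score(most))  # 20 180
```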

Correlates of right wing authoritarianism

First of all, right wing authoritarianism is called such because it tends to correlate strongly with endorsement of political conservatism. Furthermore, while attempts have been made to investigate “left wing authoritarianism”–high adherence to left wing party lines and aggression towards those who do not endorse left wing values–these attempts have fallen flat, suggesting that perhaps such a thing does not exist. When one measures submission to authority using different scales, it is still found to correlate with right wing ideology; it is likely, therefore, that authoritarianism and being right wing go hand in hand.

Following a great deal of research, Altemeyer has identified a range of attitudes and ideologies which correlate with right wing authoritarianism. The right wing authoritarian is likely to oppose abortion, and to support nationalistic ideas and behaviours, capital punishment, capitalism, religion and conservative economic policies. They believe the world to be a dangerous place. They also put less value on social equality, and are far more accepting of infringements on civil liberties–Altemeyer found that high RWA people were often not fazed by the Watergate scandal. Unsurprisingly, given this set of correlates, high RWA people are also more likely to be prejudiced against ethnic minorities and gay people, and more likely to have been bullies or friends with bullies in childhood.

RWA is not correlated with intelligence, but arguing with a person who is high in RWA may be difficult, as they have been found to uncritically accept poor evidence–how many times have you found yourself arguing with someone who will not listen to reason and instead clings fervently to a story they were once told by a friend of a friend? High RWA people often hold the perception that they are right, with less ability to accept their own limitations. They are also less creative than people low in RWA, and have less tolerance for ambiguity: they are less able to accept change, and jump to conclusions in ambiguous situations.

What can be done about right wing authoritarianism?

Some critics have suggested that RWA is not an immutable personality trait but rather a response to an external “threat”, and that some people have a disposition to manifest RWA beliefs when they perceive they are threatened. This threat can come in the form of an economic crisis or an event like 9/11, for example. As high RWAs make the best followers for a right wing authoritarian regime, a somewhat frightening implication arises: by ramping up the threat level, a larger number of followers who are willing to accept undemocratic ideas appears. On the other hand, by reducing the threat level, RWA can be decreased.

Due to the reverence for authoritative sources of information and poor assessment of evidence, though, reducing the threat level may prove challenging. An anecdote, which non-RWAs will probably see as poor evidence: I have tried to do this on several occasions. It is incredibly frustrating and ultimately fruitless.

In truth, though, there is very little evidence as to whether RWA can be changed: the bulk of the research focuses on correlates and on whether it is a personality trait with a genetic basis, a trait with a social basis, or a reaction to circumstances. This is an area which sorely needs research, as RWA is a somewhat dangerous ideology, given that it is so closely related to prejudice and violence and can lead to worrying policies such as capital punishment.

For now, though, I would recommend, for the sake of your own sanity, disengaging from the high-RWAs. It’s an argument you won’t win.


Implicit prejudice: the “everyone’s a little bit racist” test

I’m slightly racist and moderately sexist. I’m probably also a little bit ableist and weightist and goodness knows what else, but I didn’t have time to try the tests. How about you?

The Implicit Association Test

These tests are called Implicit Association Tests (IATs), and they have been used for a variety of purposes, including assessing unconscious favouritism towards one’s own group and bias against people outside it. The IAT measures unconscious associations: for example, associating typically Muslim names with bad concepts such as hate and war. In the first test I took, I first had to sort Muslim names from non-Muslim names by pressing two buttons on a keyboard. Then I had to sort “good” concepts such as love and peace from “bad” concepts. After this, it got a little harder: “good” shared a button with Muslim names, and “bad” with non-Muslim names. Then the keys switched around, so that “bad” and Muslim names shared a button, while “good” and non-Muslim names shared the other. All the while, the computer measured my reaction times. I was quicker at sorting “bad” and Muslim names when they shared a button, and slower when Muslim names shared a button with “good”.

In the second test, where I discovered I’m also a little bit sexist, I had to sort men’s and women’s names, and words pertaining to either career or family. I was a little faster when women’s names and family words shared a button, indicating that unconsciously I associate women with family.
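
For a rough sense of how those reaction times become a score, here is a simplified sketch. The real IAT scoring procedure also trims outliers and penalises errors, which I have skipped, and the numbers below are invented; the idea is simply to compare average speed across the two pairings, scaled by how variable the responses are.

```python
from statistics import mean, stdev

# Simplified IAT-style scoring sketch: compare reaction times (ms) when the
# "incompatible" pairing shares a key versus the "compatible" pairing,
# scaled by overall variability. The full procedure also trims outliers and
# penalises errors; that is omitted here. All data below are invented.

compatible = [612, 587, 640, 598, 575, 630]    # e.g. women + family share a key
incompatible = [702, 688, 745, 671, 710, 690]  # e.g. women + career share a key

d = (mean(incompatible) - mean(compatible)) / stdev(compatible + incompatible)
print(f"D = {d:.2f}")  # positive means slower on the incompatible pairing
```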

If you try one of the tests, you’ll likely discover that you display unconscious biases against marginalised groups. Almost everyone does, and it’s very difficult to fake the test and appear unbiased.

Ingroup and outgroup favouritism

The IAT taps into a psychological mechanism which we all display to some extent or another: favouritism towards people in our own group. This is why, when a white person takes the IAT, they will be more likely to favour “white” names. Even if a person is assigned to a group where they do not know any of the other members and do not have a strong preference for the factor which unites them all, these biases are apparent [paywalled]. Even in minimal groups, people favour the ingroup.

The exception to this rule is for people in marginalised groups [paywalled]. While some people in marginalised groups will show the usual pattern of ingroup favouritism, at other times the pattern is reversed: they show a more positive implicit attitude towards the “outgroup” and a more negative implicit attitude towards their own group–for example, a black person might be quicker to associate black names with “bad” concepts. This is thought to be a form of system justification: a cognitive loop-the-loop so that disadvantaged people can believe that the world is fair and just.

Is it really prejudice?

Are these unconscious associations genuinely prejudice? There is some evidence [paywalled] to suggest that they may be due to familiarity rather than bias towards one’s ingroup: when participants had to sort insects (typically something people have a negative attitude towards) and non-words in an IAT task, they showed a more negative implicit attitude towards the non-words. Given this familiarity effect, IAT results could be down to absorption of societal beliefs: the test may measure cultural knowledge rather than prejudice. Perhaps, therefore, I associate women with home and family more readily than with career because I am more familiar with this idea, bombarded as I am on a daily basis with media and other people’s attitudes expressing this sentiment.

Although the evidence that IAT scores equal prejudice is equivocal, IAT scores do predict behaviour [paywalled]: generally, this behaviour is non-verbal. For example, a person with a high negative implicit attitude towards black people is more likely to sit further away from a black person and less likely to smile at them. Implicit attitudes can also affect voting behaviour and performance on exams. There are real-world implications to unconscious associations. Whether implicit attitudes are genuine prejudice or a result of familiarity with stereotypes, they can affect behaviour.

Can implicit attitudes be changed?

The good news is, implicit attitudes are malleable. In one study [paywalled], implicit prejudice towards black people was reduced through education, particularly if participants liked the (black) educator. Likewise, familiarity seems to be a factor: after presenting people with familiar faces of admired black people (such as Michael Jordan), negative implicit attitudes towards black people were lower. Taking the IAT may itself influence implicit attitudes [paywalled]: it may cause participants to build associations. A modified IAT could therefore function as a tool to change implicit prejudice.

By having an awareness of one’s own implicit prejudices, one can work towards changing them and breaking a habit. My area of research–behaviour change–often uses the IAT to measure implicit attitudes towards a habitual behaviour such as smoking, as this is precisely what a habit is: an unconscious association. With awareness of the habit, the habit can be broken. Just as it is possible to stop smoking, it is possible to stop being prejudiced.

Limitations of the IAT

One of the biggest problems with the IAT is that it can only measure binaries: for example, men and women, black and white, Asian or not Asian. Because of this, it is limited in its scope. It is not possible to study prejudice against several different races at once using the IAT; nor is it possible to explore beyond binary notions of gender.

Despite this weakness, though, it is a fairly robust measure: more than a decade of study has established that it is very reliable and that its results are difficult to fake. Put simply, it is currently the best that we have.

So what if I’m racist?

Acknowledgment of one’s own unconscious prejudices is crucial. It does not make you a bad person. My own results were enlightening and show me where there is work to be done. I am angry that I have absorbed some of the messages I see daily, and it gives me the resolve to fight all the harder. It is possible to choose to change.


Why the government is making bad decisions after the riots

Following the riots, the government have made decisions which are likely to lead to more, rather than less, rioting. These decisions are welcomed by a sizeable chunk of the British public, who are suddenly developing a bloodthirsty yearning for water cannons, rubber bullets and live ammunition–again, despite the fact that such tactics are likely to lead to more riots. The government are also pursuing a vindictive policy of evicting the families of those charged with or convicted of rioting from their social housing. How this measure is supposed to help is thoroughly unclear.

The collective lack of good judgment is hardly surprising: indeed, it is a natural consequence of decision making under stress. For once, this may not be entirely a consequence of the fact that we have a government who have nothing but contempt for anyone who is not rich and white.

It is important to acknowledge the context in which these decisions have been made: there is a climate of fear, stress and anger, and demands that Something Must Be Done Immediately. In this kind of context, good decisions are rarely made.

On an individual level, decision making is greatly affected by emotion. The ability to feel emotions is necessary for a person to make decisions: people with brain damage that removes the ability to feel emotion suffer severe impairments in their ability to make decisions in day-to-day life. Strong negative emotions are problematic for decision making, though.

In stressed decision making, the decision maker tends to focus their attention very narrowly and fails to examine all the possible alternative analyses of the situation and courses of action. Instead, a hasty solution is proposed, one which may not be particularly fit for purpose in solving the problem. Making a good decision requires clear thinking about possible solutions to a problem and the consequences of those solutions. In their response to the riots, the government have not thought through the possible consequences of their decisions.

The type of emotion experienced also impacts decision making. In general, being in a good mood improves decision making and problem solving: thinking is more creative, flexible, thorough and efficient. The decisions made by the government were not made in a good mood: on top of the stress, most of the senior members of government had to come back from their holidays, which is likely to have put a further dampener on their moods.

The type of negative mood has also been shown to affect decision making differentially. In a state of anxiety, decision makers are biased towards making “safe” decisions: ones which are low-risk and low-reward. In contrast, when sad, a high-risk, high-reward option is more likely to be chosen. The findings of this study may not be particularly pertinent to the situation at hand, though, as it used a “gambling” methodology where participants were aware of the risks and rewards available from each course of action. In a more nuanced setting such as responses to the riots, such information was unlikely to be available, and, more importantly, unlikely to be fully surveyed by the decision makers.

When a group makes a decision under stress, they are no more likely to make a good decision than an individual. In fact, group processes may make the decision even worse. This is due to a phenomenon called groupthink, which I touched upon in my discussion of consensus decision making.

The word “groupthink” is loaded, melodramatic, reminiscent of an Orwellian dystopia, but this does not mean it does not happen. Through analysis of historical decision-making, and observations of group decision-making, a well-documented effect emerges: cohesive groups, particularly those under pressure, often make poor decisions. Crucially, this tends to happen when the group is attempting to reach a consensus.

The theory behind groupthink proposes eight “symptoms”:

  1. Illusions of invulnerability creating excessive optimism and encouraging risk taking.
  2. Rationalizing warnings that might challenge the group’s assumptions.
  3. Unquestioned belief in the morality of the group, causing members to ignore the consequences of their actions.
  4. Stereotyping those who are opposed to the group as weak, evil, biased, spiteful, impotent, or stupid.
  5. Direct pressure to conform placed on any member who questions the group, couched in terms of “disloyalty”.
  6. Self-censorship of ideas that deviate from the apparent group consensus.
  7. Illusions of unanimity among group members: silence is viewed as agreement.
  8. Mind guards: self-appointed members who shield the group from dissenting information.

Groupthink is facilitated by stressful conditions. The phenomenon of groupthink impairs decision making in a number of ways:

  1. Incomplete survey of alternatives
  2. Incomplete survey of objectives
  3. Failure to examine risks of preferred choice
  4. Failure to reevaluate previously rejected alternatives
  5. Poor information search
  6. Selection bias in collecting information
  7. Failure to work out contingency plans.

When the ability to survey alternative courses of action is already impaired, decision making as a group can further narrow the available options, leading to convergence on a solution which is inadequate at the very best. This appears to be what happened in the COBRA meetings held to plan the response to the riots.

The type of leader is an important factor in times of stress, and it is an area where followers themselves make poor decisions. This is because in times of crisis, people are drawn to a charismatic leader over any other type of leader. In one lab study, it was found that people primed with thoughts of death were more likely to vote for an imagined charismatic political candidate than for one who was task-oriented, or one who focused on compassion and appreciation of followers. Preference for charisma has also been identified in real-world observational studies, such as in the aftermath of 9/11.

David Cameron is something of a style-over-substance leader, and a crisis like this can be beneficial to him in this respect, as he is nothing if not charismatic. Indeed, his approval rating in the last week has improved (although it is still currently negative), and 45% of surveyed people believe he responded well by coming back from holiday and making some thoroughly dangerous decisions. Right now, if Cameron continues to act charismatically, giving perfectly-written speeches and flashing his Oxford grin, the consequences for him will not be negative. It could well improve his standing in a climate of fear and stress.

I wrote this post assuming the best of our current government. A large part of me believes that much of their response is deliberate, an escalation in their war against the poor. The decisions will have long-term implications for public order situations such as demonstrations; they will make people homeless and clog up our prison and justice system with people who need to see a future rather than a barred window.

However, decision making in a crisis is always going to be problematic. We need to be aware of its shortcomings and avoid swift reactions without thinking through implications and consequences. This is a lesson that we must all learn so we can respond better in emergencies and to sudden, horrifying scenarios.

The government reaction frightens me. I cannot see a way that things will get better.

Evidence-based public order policing: The Met are Doing It Wrong.

The reaction to the riots has been what can most kindly be described as knee-jerk, though “absolutely bloody ridiculous and terrifyingly driven by a desire for revenge” is more apt. The police have now been given the power to use what are essentially lethal, dangerous weapons against crowds. Morally this is completely wrong. It is also likely to be ineffective, if not actively counterproductive.

The thing is, the standard police approach to policing crowds is already completely wrong. I have been on a lot of protests and have been unlucky enough to end up kettled twice. On neither of those occasions did the kettles cool everything down and quell anger: quite the opposite happened. It’s not fun to have to endure a debate with oneself about whether to piss on the statue of Churchill or the statue of Lloyd George (in the end, I went for sneaky option C, and fashioned a toilet cubicle from metal fencing and tarpaulin. When I got out, there was a queue for the ersatz facilities). While I built, all around me people took poles and smashed in the windows of the Treasury. Horses charged, batons rained down on skulls and the people fought back.

There is evidence behind the idea that crowd control and public order policing is taking completely the wrong approach. This report provides theory, evidence and recommendations, and I would thoroughly recommend you read the whole thing.

Public order policing subscribes to a theory of crowd psychology that has very little evidence behind it. It assumes that once a sufficient number of people are assembled, they will become irrational and easily open to agitation. Crowds, by this theory, are dangerous, a hive mind which must be controlled: “the crowd is a barbarian”. Police are trained in this model, and taught to disperse or contain crowds where they form. This approach is demonstrably ineffective, and has about as much evidence behind it as classic crowd psychology itself.

A better approach to describing crowd psychology is the Elaborated Social Identity Model (ESIM). This theory has roots in Social Identity Theory and Self-Categorisation Theory: our behaviour is influenced by our identification as a member of a group and the roles we take on. We divide the world into “us” and “them”. In a crowd situation, this becomes “police” and “protesters” or “football fans” or “people who fucking hate the police”. As a member of a crowd, one identifies with this group. The police are “outsiders”. When police use indiscriminate, coercive tactics such as baton charges or kettling, the crowd will start to see everyone within it as posing very little threat, and the police use of force as illegitimate. This strengthens the division: the crowd as “us”, the police as “them”. It can empower people to confront the police in a way they would not have done had they been left to their own devices. This can escalate to rioting, caused, inadvertently, by the very tactics the police are using to avert rioting.

The us-and-them mentality extends to the police themselves. The police tend to view crowds as a homogenous, dangerous mass that requires controlling, partly as an effect of their training, but partly as an effect of their social identity as police officers. A friend of mine, while kettled, once ended up in conversation with a police officer. She asked to get out. “I’m sorry,” he said, “you’re all the same to us. It could have been you who graffitied Nelson’s Column.”

With the weight of evidence suggesting classic police tactics make things worse rather than better, it is clear that police tactics need to change. Fortunately, there is a much better way of policing. It involves taking a graded approach, and, crucially, treating people as individuals rather than members of a crowd. There are four phases to this approach:

  1. Understanding the crowd and their motivations. Understand the culture and the context. Communicate in advance what is and is not acceptable.
  2. On the day, visibility of the police should initially be low-impact: they should move in pairs, and wear standard uniforms rather than riot gear. No helmets, shields, or visible batons. They should interact with the crowd positively: smiling and adopting a friendly posture, being helpful with directions when they can. Communication is key.
  3. If trouble arises, target only the trouble. It is made clear in the report that this does not mean arresting “known” people, the go-to technique for public order policing. Instead, it means targeting those who are causing the trouble, and only those. Communication, once again, is crucial. No acting against the whole crowd.
  4. If there is still a problem and a riot breaks out, go back to usual police tactics of beating up everyone.

Three interesting things emerge where this approach is used in practice. First of all, self-policing tends to start happening: members of the crowd will be less likely to accept violent behaviour. Secondly, the police are perceived as far more legitimate: the “all coppers are bastards” effect dissipates. Finally, and most importantly, situations do not escalate. When the approach was tested at Euro 2004, in the zones where police were using it the riot gear never came out, and only one England fan out of the 150,000 present was arrested. The approach, it seems, averts riots.

There are two things in the report that bother me. The first, and most important, is the assertion that using this approach will facilitate intelligence gathering. As a believer in the right to privacy, I am not particularly comfortable with this. Secondly, as an anarchist, I do not really believe in the necessity of the police in the first place. The report, though, shows they are not hugely necessary at a mass gathering: it is gratifying to see evidence that, when left alone, a crowd will tend to self-organise and decide on non-violence. This is one of the reasons I am so annoyed to see rioting described as anarchy: anarchy is the state of order naturally emerging, and people working together.

Police tactics for crowd control, as currently used, are provocative. Bringing in bigger, more dangerous weapons which will hit anyone indiscriminately will not make anything any better. If anything, it will escalate the situation, provoking a war between the police and anyone who is not the police.

It is, of course, the government’s traditional approach to evidence: they ignore it in favour of populist political point-scoring. It will endanger us all.