Can you steal an election with targeted ads? You don’t need to!

After writing about the digital advertising grift yesterday, I got some really good follow-up questions regarding agencies using data during election campaigns and the Brexit referendum. Weren’t Russia and Cambridge Analytica and all those bad-faith actors using microtargeted online content to shift elections in the direction of their guy?

There was certainly manipulation going on. But the significance of microtargeting was not how the sorcery happened.

Consultancy firms like Cambridge Analytica can mostly be categorised as companies whose product is the belief that they could spy on you using your social media data alone. They don’t actually do this. Even in the data scandal, the profiles Cambridge Analytica had constructed were built using personality quizzes and a hefty dollop of pseudopsychology. They then extrapolated this to other users with absolutely no evidence that any of these segments held up. It was lucrative and it sounded very fancy, and a lot of the worst people in the world paid good money for it.

Cambridge Analytica’s brand is cartoonish villainy, and it is very popular among cartoonish villains. They were paid vast sums to run microtargeted ads based on the clever profiles they’d built. To do this, they’d have had to translate those clever profiles into the same demographic details everyone else uses to target ads.

So ultimately, what they were doing after all this was… “hey let’s show this ad about how Brexit will save the NHS to 55 year old dental receptionists in Stoke-on-Trent”. In other words, the same way anyone else targets ads, despite the science-y veneer.

How to actually steal an election using segmentation

Segmentation is a great buzzword for sorting people into categories, and its importance is wildly overstated in order to sell tools for audience analysis. Nevertheless, it is helpful, when stealing an election, to segment your audience into three groups. You don’t even need to do much analysis on these groups: two out of the three will reveal themselves pretty quickly.

The three segments required for stealing an election are: people who are voting for your guy anyway; people who will never in a million years vote for your guy; and people who are on the fence.
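As a toy sketch – the leaning score, the labels, and the thresholds here are entirely invented for illustration – that three-way split might look like:

```python
# Toy sketch of the three-segment split described above.
# The leaning score and the 0.5 cut-offs are made up for illustration.

def segment(leaning):
    """leaning: -1.0 (will never vote for your guy) .. +1.0 (voting for him anyway)."""
    if leaning > 0.5:
        return "base"   # voting for your guy anyway
    if leaning < -0.5:
        return "never"  # never in a million years
    return "fence"      # the swing voters the whole exercise is about

print([segment(x) for x in (0.9, -0.9, 0.1)])  # ['base', 'never', 'fence']
```

As the text says, two of the three groups reveal themselves without any clever analysis; only the middle bucket matters.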

You don’t need to microtarget these audiences in any way; in fact, you need to target all of them. You need a message which resonates with the people who will vote for your guy, winds up the people who will never in a million years vote for your guy to the point of insufferability, and ensures that the thing swing voters notice most is the insufferability of that second group.

A good example of this is Brexit. Pro-Brexit messaging was incoherent, but resonated by positioning the anti-Brexit advocates as elite and out of touch. Unfortunately for the most prominent anti-Brexit voices, they were pretty out of touch, reacted with complete insufferability, and continued to be patronising, snotty, cliquey and generally awful until the swing voters were sufficiently icked out by this behaviour. I very grudgingly voted Remain, but considered appending a short essay to my ballot explaining that my vote should not be counted as an endorsement of the Remain campaign.

Why use microtargeting, when you can just wind your rivals up to the point of being so fucking annoying that they become an electoral bonerkill?

You can apply this to most elections. There’s a side that’s awful, and a side that, by positioning, appears less awful. The swing voters will probably come down on the lesser of two awfuls. It’s a tale as old as time, and it works. And it doesn’t require any real data analysis: you just have to make the other side look worse.

So what are they actually doing to steal an election?

We actually know a lot about how the sausage was made at Cambridge Analytica, because their CEO was caught on camera talking about how they stole elections. For all the attention on the data breaches, Cambridge Analytica favoured doing things the old-fashioned way: bribery, honey traps, information gathering using sex workers, entrapment. Even in his bragging, the unusually candid CEO in question couldn’t provide evidence that they’d used social media data to steal the 2016 US election – he said it was “self-destructing”.

The old dark arts are tried and tested, and these are what have been so successful at stealing elections in recent years. It’s time to take off the mask and do a Scooby Doo reveal of the real villain, hiding underneath the concerns about digital microtargeting…

It was journalists all along.

In the 2016 US election, Donald Trump received more media coverage than any other candidate in the race. Hillary Clinton, meanwhile, got some coverage too, and it was pretty much all about the emails scandal. Nigel Farage’s stupid frog face was all over the news, all the time, during the Brexit referendum. And don’t even get me started on the absolute state of the four-year campaign of hit jobs on Jeremy Corbyn.

Mass media is more than sufficient to tip the scales. You just need to throw in a good wedge, and this is easily done the old-fashioned way: by throwing shit at the wall and seeing what sticks, then watching everyone fight each other. Sure, you can supplement the fight with some bots, if you want, but the humans will be more than sufficient.

But we need a bogeyman here, especially as journalism is getting markedly worse and worse. And so the bogeyman becomes social media witchcraft. It’s not the carefully-curated PR attacks that are the villain; it’s Facebook ads, which know exactly when you’re constipated and will trick you into voting Lib Dem while you’re on the toilet.

_

Enjoyed what you read? Consider becoming a Patron or leaving a tip

Scams upon scams: The data-driven advertising grift

Digital advertising is a scam from top to bottom. In fact, it’s several scams stacked on top of each other, wearing a trenchcoat, and some of the foundations of fibs are so effective that otherwise reasonable people entirely buy into them.

Data-driven ads are anything but

I’ll start with a few examples of the data which is definitely held on me, and just how entirely bad my targeted advertising is.

Facebook know my age and date of birth. They have had this data since I signed up for the website, 15 years ago. They know exactly how old I am. They also know where I live. Hell, sometimes I used to check into places with my location on. Despite knowing I am way north of 30 and way south of Birmingham, they are incredibly keen on advertising me events explicitly limited to people under the age of 30 in the Birmingham area.

Google knew I wanted to buy a mattress. They knew this because I googled it. And I clicked through to a brand selling mattresses, and I bought myself a mattress. The brand know I googled said mattress. Google know I clicked through. From Google’s own analytics, they ought to know I bought the mattress. Since buying that mattress, I’ve been constantly advertised mattresses, especially the one I already own and they know I already own.

Some might claim that in fact the advertisers are being incredibly smart and they’re advertising me activities for women under 30 in Birmingham so I go and tell my friends who are under 30 in Birmingham to go and do that. But of course, Facebook would also know that I don’t have any friends in that demographic. Or maybe that mattress seller is trying to tell me to refer a friend to buy that mattress by reminding me that I own a very nice mattress. In which case, why isn’t it advertising the referral programme, which I know they have because I received several emails and a physical leaflet about it with the fucking mattress?

The simpler answer is that the advertisers aren’t being data-driven at all. They’re ticking default boxes or casting wider nets. I’m getting advertised mattresses because I have ~an interest in mattresses~. I’m getting activities for women under 30 in Birmingham because I’m under 40 and on the same island as Birmingham.
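A hedged sketch of what that “wider net” looks like in practice – every field name, bucket and threshold here is invented for illustration, and bears no relation to any platform’s real targeting API:

```python
# Hypothetical sketch of "data-driven" targeting collapsing precise data
# into buckets so coarse they stop meaning anything.

def coarse_match(user, campaign):
    """Return True if the user falls in the campaign's (very wide) net."""
    age_ok = user["age"] < campaign["max_age"]          # "under 30" becomes "under 40"
    region_ok = user["country"] == campaign["country"]  # "Birmingham" becomes "same island"
    return age_ok and region_ok

user = {"age": 34, "country": "UK", "city": "Not Birmingham"}
campaign = {"max_age": 40, "country": "UK", "ad": "Under-30s events in Birmingham"}

print(coarse_match(user, campaign))  # True: the net is wide enough to catch her
```

The platform holds the exact age and location; the campaign just never uses them.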

For all the buzzwords about “data-driven” and “smart” and whatever else you want to call it, the advertisers are just going “eh, sounds about right” and letting a robot automate their job.

This, then, is the first grift in the chain. Despite telling their boss that they’re using “data-driven” advertising, they’re targeting their ads less precisely than a quarter page in the local newspaper would.

The product: they could spy on you (but don’t)

Everyone is rightly nervy about the sheer quantity of data that big companies hold on us. Social media companies know all about your demographic information, social connections and interests. Amazon knows exactly when you have an outbreak of aphids because you buy things to kill the nasty little beasties, and it probably also knows when you’ve had a nasty breakup because nobody listens to Fleetwood Mac’s Rumours on repeat at 3am when they’re in a good place. Google basically knows everything about you.

At least that’s the theory. And that’s the product that they’re selling to advertisers. They have an enormous dataset from which everything an advertiser could ever dream of about a person can be garnered. They’re the world’s biggest, bestest spy network, which means they have quality data to help your business be the biggest, bestest business reaching the biggest, bestest customers.

At least that’s what they say.

Actual spying requires actual spies. There’s a reason intelligence agencies are such big employers: they have all of their fancy spy computers, but they know they need to hire humans to actually deduce patterns and sort signal from noise. They’re aware that a human brain is always superior to a computer in figuring this out, so they get humans to do the work.

Meanwhile, tech companies break out in hives at the thought of getting a human to do a job. Their ethos is that if a human can do a task, a machine can do that task better, and not cost them anything such as a salary, a pension or a basic level of respect. Tech companies are fatally allergic to getting a human to do a human job, so content moderation is largely an algorithm looking for the word “boobies”. A tech company would go into anaphylactic shock at the very notion of employing a human to analyse their vast dataset.

So it’s all machine learning, and the machines are very, very stupid. Have you ever looked at your list of inferred interests on a social media platform? If you ever tweeted “I don’t like Game of Thrones, it’s not for me,” you’ll be classified as interested in Game of Thrones and possibly get served ads for it. These machines may also attempt to deduce your age, gender and so forth from the half-baked crap fed into them, and they seldom come up right. Maybe that’s why it thinks I’m under 30 and in Birmingham. Perhaps I internet in a Brummie accent.
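The Game of Thrones failure mode above is what naive keyword-based inference looks like. This sketch is hypothetical – the keyword list and function are invented, not any platform’s real classifier – but it shows why a negative post still gets you tagged:

```python
# Hypothetical sketch of naive keyword-based interest inference:
# a substring match with no sentiment handling, so a post saying you
# DISLIKE something still tags you as interested in it.

INTEREST_KEYWORDS = {"game of thrones": "Game of Thrones", "football": "Football"}

def infer_interests(post):
    """Tag any interest whose keyword appears in the post, context be damned."""
    post_lower = post.lower()
    return {interest for keyword, interest in INTEREST_KEYWORDS.items()
            if keyword in post_lower}

print(infer_interests("I don't like Game of Thrones, it's not for me"))
# {'Game of Thrones'} – the negation is invisible to a substring match
```

Handling “I don’t like X” properly would require sentiment analysis, or a human – and we’ve covered how tech companies feel about humans.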

It’s no wonder that on multiple occasions, big tech has been caught out completely making things up when communicating with advertisers, and they continue to do so. Facebook was famously found to have inflated or outright fabricated video metrics. GA4 very quietly admits that its data is padded out with machine learning. The data is a lie, and a lot of that is because they literally haven’t the first clue what to do with it; they just need to steeple their fingers and act all evil so advertisers think they do.

Advertisers, then, are getting served a steaming turd on a plate rather than the medium-rare filet mignon they were promised.

And meanwhile, the spies don’t even need that data, because your posts are public anyway.

But enough about that. The problem is that this grift, too, is built upon a grift.

Marketing science is a grift

I work in marketing, for my sins. This is mostly why I’m so entirely down on the marketing industry and many of the people who work in it. I also happen to have an MSc in psychology – actual psychology! – with a focus on behaviour change.

On day 1 of your class about behaviour change in a science course, you learn that behaviour change is not a simple matter of information in, behaviour out. Human behaviour, and changing it, is big and complex.

Meanwhile, on your marketing courses, which I have had the misfortune to attend, the model of changing behaviour is pretty much this: information in, behaviour out.

The thing with the entire “science” of marketing is that the underpinning theory base is basic common sense which has been treated to a bit of a brand makeover, turned into a couple of overcomplicated diagrams with some neologisms obscuring the meaning. Digital marketing has become very popular because baked into it are a whole bunch of metrics, so you have something to show your manager that you’re not spending the entire day tending your geraniums – but do the metrics really mean anything?

The metrics that marketers are told they need are marketed to them by the marketing department of a company that specialises in making products for marketers. And that company was probably started up by someone who worked in marketing.

Marketing theory is never tested rigorously. The common sense (sorry, “incredibly sound scientific view based on heaps of evidence”) – that showing your ads to people more likely to buy your product is more efficient, because they’re more likely to buy it anyway – is entirely untested.

There’s an anecdote that a glitch at Facebook meant ads were no longer targeted for a period of several weeks. And absolutely nobody noticed, because the metrics all looked normal: engagement and purchasing were just the same.

There isn’t any evidence to suggest that an ad targeted at 35-year-old men with children and an interest in football is any more likely to result in sales of Football Dad socks than a poster for Football Dad socks at a bus stop. But an entire industry is based on pretending that this is the case.

tl;dr

Facebook will try to sell you Football Dad socks even if you’re a 55-year-old childfree woman who posted once about hating football, because that data is utterly useless.

Spies are probably reading your posts though, no matter how boring.
