The METAmorphosis of behavioural economics and fast data.
How do we stop a $56bn gorilla climbing into all our windows and hurting our children?
[Quick trigger warning: I’ll be discussing a case study and topics related to mental health, self-harm and death by suicide in this article.]
In the beginning, there was novelty
Originating as an internet novelty 15 years ago, social media platforms have altered everything from democracy to human relationships in ways that are hard to overstate. Designed by profit-oriented companies, social media has reshaped what it means to be human. Many people have tried to bend it to our human advantage and create good, but sustaining the scale of these businesses calls for different tactics. That’s the moral friction.
At the heart of this money-generating machine are algorithms (algos): you hear about them, you know they’re the power behind your screens, and you know they’re helpful. But do you really understand what they contribute to this story and your lives?
In the context of social media, think of algorithms as the various neural pathways that power the brain. The brain-food is data, and the algorithms tell the body (the technology) which direction to lead the herd. Can we influence which direction those algorithms take us? It’s a complicated question and one the world has been facing up to recently.
Now suppose you understand how to apply mass attention and behavioural economics algorithmically. In that case, you can understand and influence human behaviour on a huge scale, and that’s exactly what the powers-that-be at most of the platform companies did. Embedded deep in those lines of code are behavioural and attention-economic models used to improve engagement with their audiences. It wasn’t that long ago that scientists and computer programmers developed self-improving algorithms, and those technological leaps made modelling and predicting your digital behaviour possible. That capability has been ramped up over the years to identify biases and beliefs and adapt content and advertising accordingly. And because the market has become so competitive, the metric of stickiness has to trump the metric of registration, which is where the vanguard of algorithmically fuelled attention economics crept into the business model and became the true battleground.
Social media was the first mass-consumed technology to deploy attention economics at scale, and also the first place it went wrong at scale.
For a long time, I wondered if the mistakes were just unfortunate accidents because the people deploying the technology just misunderstood the potential implications and impacts. But the more I’ve heard the leading players in the field defend the harm, the more I’ve started to think that they’re more than aware of the damages, and actually, it’s been very good for business.
The METAphorical gorilla in the Room
For a decade now, Facebook has been in perpetual crisis mode, confronted by wave after wave of critical scrutiny on issues caused or exacerbated by their various platforms. They have enjoyed unrestricted operation and expansion, which has in many ways been our fault as consumers because we allowed their tools to seep so seamlessly into the fabric of our existence. Until they operate in a properly regulated market, they will be allowed to continuously grow and increase the array of harms they create.
The company doesn’t just have a problem; it is THE problem.
Facebook, founded in 2004, finally found its real growth trajectory in 2012. It bought Instagram for $1bn (£760m) that same year and then acquired WhatsApp for $19bn two years later. True to its original informal motto, “Move fast and break things”, the company wasted no time carving a well-documented path of destruction once the big bucks from this new three-headed beast started rolling in.
Since those early acquisitions, a quarter of the world (1.9 billion people) has used Facebook daily, looking for interesting, relevant content. So it should surprise no one that over 200 million businesses buy ad space on Facebook, and that 93% of marketers worldwide are on the platform, aiming to grow their brands and reach new audiences by promoting content.
How they reached that scale has been anything but smooth.
You are the experiment.
To continue to reach that scale, they‘ve utilised a lot of very high-impact test-and-learn strategies to try and create deeper engagement on those platforms, and during this experimentation they discovered just how much more valuable negative content was for their revenue targets.
News organisations at both ends of the political spectrum have always leveraged this tendency, and there’s an old broadcast news adage, “If it bleeds, it leads.” To prove this point, Harvard Business School professor Amit Goldenberg and colleagues Nathan Young and Andrea Bellovary of DePaul University analysed 140,358 posts by 44 news agencies in early 2020. An automated sentiment analysis tool confirmed their hunch: negativity was about 15 per cent more prevalent than positivity, and negative posts engaged more people.
“Although people produce much more positive content on social media in general, negative content is much more likely to spread,” Amit Goldenberg.
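For readers curious what that kind of automated sentiment analysis looks like in practice, here is a minimal sketch using a generic lexicon-based tool (NLTK’s VADER). It is not the researchers’ actual pipeline; the example posts and thresholds are placeholders for illustration only.

```python
# Minimal sketch of lexicon-based sentiment tagging over a set of posts.
# Uses NLTK's VADER analyser as a stand-in; the study's actual tooling,
# data and thresholds are not described here, so treat this as illustrative.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

posts = [
    "Markets rally as new jobs report beats expectations",
    "Horrific crash leaves dozens injured",
    "Local charity celebrates record donations",
]  # placeholder headlines, not the study's data

analyser = SentimentIntensityAnalyzer()

def label(text: str) -> str:
    """Classify a post as positive, negative or neutral by compound score."""
    score = analyser.polarity_scores(text)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

counts = {"positive": 0, "negative": 0, "neutral": 0}
for post in posts:
    counts[label(post)] += 1

print(counts)  # compare prevalence of negative vs positive posts
```

Run over a real corpus of news-page posts, the same counting step is what lets researchers compare how prevalent negative content is relative to positive content, and how each performs.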
High-arousal emotions, such as anger and excitement, are more engaging than low-arousal emotions, like sadness or calmness. Facebook understood this and tweaked its algorithms to push content from the extremes, create more engagement, and ultimately expose people to more related advertising, which they’re more likely to click on, generating more revenue.

But the biggest problem with social media tools like Facebook and Instagram is that almost overnight, we empowered everyday people, not just journalists, to live and breathe the “If it bleeds, it leads” mantra. And once people realised that what they posted rewarded them with little dopamine hits, especially when that content was more extreme, a considerable problem emerged: people got really good at posting a lot of bad stuff, tagged and bagged to attract the maximum audience attention and the biggest dopamine rush. It was the perfect storm.
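To show why that storm is structural rather than accidental, here is a deliberately simplified, hypothetical feed-ranking sketch. It is not Facebook’s algorithm; the field names, weights and the “arousal” signal are my own illustrative assumptions. The point is that a ranker optimising a single engagement objective will surface the highest-arousal content, whatever that content happens to be.

```python
# Hypothetical toy ranker: NOT Facebook's algorithm. Field names, weights
# and the "arousal" signal are illustrative assumptions only. It shows how
# optimising a single engagement score surfaces whatever provokes the
# strongest reactions, regardless of whether that content is harmful.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # modelled likelihood of a click
    predicted_comments: float  # modelled likelihood of a comment/reply
    arousal: float             # 0..1 proxy for anger/excitement intensity

def engagement_score(post: Post) -> float:
    """Single objective: expected engagement, boosted by emotional arousal."""
    base = 1.0 * post.predicted_clicks + 2.0 * post.predicted_comments
    return base * (1.0 + post.arousal)  # high-arousal posts get amplified

feed = [
    Post("Calm gardening tips", 0.10, 0.02, 0.1),
    Post("Outrageous claim about a rival group", 0.12, 0.30, 0.9),
    Post("Friend's holiday photos", 0.15, 0.05, 0.3),
]

# The most inflammatory post wins the top slot purely on this objective.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 3), post.text)
```

Swap in real engagement predictions and the logic stays the same: nothing in the objective distinguishes outrage from anything else, so outrage wins whenever it engages more.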
Even before the case of Molly Russell broke into mainstream consciousness, Meta’s team went on the defensive about research that demonstrated the platform’s ill effects on teenage girls. The company downplayed findings that using Instagram can have significant impacts on the mental health of teenage girls and instead started to implement strategies to attract more preteen humans to Instagram because of audience attrition to Snapchat and TikTok. In effect, they ramped up the algorithmic call to prayer rather than temporarily locking up the shop to explore and ethically fix the issues because their own internal research gave them permission to keep calm and carry on.
They realised that to win back market share and make sure their messages spread, they would need to amplify emotional content and keep people on their platforms longer.
While people started posting more and more extreme content chasing their hit of instant gratification, behavioural science teams at Facebook were stretching the platform’s proprietary algorithms — designed to foster more human engagement in any way possible — and rewarding outrage in echo chambers where the most inflammatory content achieved the greatest visibility and the highest levels of attention even if that content meant the proliferation of extremism, bullying, hate speech, disinformation, self-harm and suicide glamorisation, conspiracy theory, and rhetorical violence.
Why? Because it was great for business.
We could argue that the content posted and amplified is just a mirror of the world’s psyche. But because the platform’s role is to amplify what gets posted to drive more ad clicks, and the worst of the stuff is amplified, the owners of a platform like this must take a certain amount of responsibility.
This is where it gets genuinely murky because they’ve shown again this week during the Molly Russell inquest that they aren’t prepared to acknowledge their problems or admit their risks.
The METAphysical
Just days before one of Facebook’s congressional hearings, this time on the mental health impacts of Instagram on teenagers, Adam Mosseri, the head of Instagram, announced his team was pausing the development of Instagram Kids, a service aimed at people under 13 years old, and developing “parental supervision tools” instead. To the untrained eye, that might seem like a good idea, but in reality it was a deliberate deflection of responsibility away from the company and onto parents.
Another example came when Facebook’s Chief AI Scientist, Yann LeCun, was questioned about the algorithms’ role in driving people into interest bubbles and, by extension, into harm. He made the following claims:
“Critics who argue that Facebook is profiting from the spread of misinformation are factually wrong. Facebook uses AI-based technology to filter out hate speech, calls to violence, bullying, and disinformation that endangers public safety or the integrity of the democratic process.”
He also claimed that Facebook is not an “arbiter of political truth” and that having Facebook “arbitrate political truth would raise serious questions about anyone’s idea of ethics and liberal democracy.”
But absent from his rebuff is any acknowledgement that the company’s profitability depends substantially upon the polarisation LeCun insists does not exist. The guy either isn’t telling the truth or doesn’t understand how the company he works for makes money. I believe it’s the latter: people like him work for the greater good of the company without truly understanding its core business model.
LeCun’s comments confirm the concerns that many of us have held for a long time: Facebook has declined to resolve its systemic problems, choosing instead to paper over these deep philosophical flaws with advanced, though insufficient, technological solutions and smart people. Even when Facebook takes occasion to announce its triumphs in the ethical use of AI, such as its excellent work detecting suicidal tendencies (whilst also allowing huge quantities of suicide-related posts to be shared), its advancements pale in comparison to the inherent problems written into its algorithms.
This is a reality acknowledged even by those who have worked in senior-level positions at Facebook. As the former director of monetisation, Tim Kendall, explained in his Congressional testimony: “social media services that I and others have built have torn people apart with alarming speed and intensity. At the very least we have eroded our collective understanding — at worst, I fear we are pushing ourselves to the brink of a civil war.”
Set against LeCun’s defence of the platform is the enormous backlash Facebook saw over an experiment it ran, in which it varied how much people were exposed to happy or sad content in their feeds to see whether that affected people’s moods. The results, published in 2014, showed that social media content can stoke emotional contagion, where one person’s emotions transfer to another. Moreover, those who had seen mostly negative content produced more negative posts and fewer positive posts. For those who had seen mostly positive posts, the opposite occurred.
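The design behind that study boils down to a simple two-arm experiment: randomly assign users to a feed with either the positive or the negative content dialled down, then compare the sentiment of what each group goes on to post. The toy simulation below illustrates only that design; the contagion strength, noise levels and numbers are invented and bear no relation to the study’s actual data or effect sizes.

```python
# Toy simulation of a two-arm "emotional contagion" style experiment:
# users are randomly assigned to see fewer positive or fewer negative posts,
# then we compare the average sentiment of what each group posts afterwards.
# Purely illustrative; not the 2014 study's code, data or effect sizes.
import random

random.seed(42)
CONTAGION = 0.3  # assumed strength of mood transfer (illustrative)

def exposed_feed_sentiment(condition: str) -> float:
    """Mean sentiment of the manipulated feed, in [-1, 1]."""
    return -0.4 if condition == "reduced_positive" else 0.4

def simulate_user(condition: str) -> float:
    """Sentiment of a user's own posts after exposure (with noise)."""
    baseline = random.gauss(0.0, 0.2)
    return baseline + CONTAGION * exposed_feed_sentiment(condition)

groups = {"reduced_positive": [], "reduced_negative": []}
for _ in range(10_000):
    condition = random.choice(list(groups))
    groups[condition].append(simulate_user(condition))

for condition, outcomes in groups.items():
    mean = sum(outcomes) / len(outcomes)
    print(f"{condition}: mean post sentiment {mean:+.3f}")
```

Under these invented parameters the group shown less positive content ends up posting more negatively, and vice versa, which is the direction of effect the published results described.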
During the recent inquest into the death of Molly Russell, child psychiatrists made it clear that teenage girls and boys are incredibly susceptible to extremes. They also categorically stated that the content Molly was shown was not safe for children and vulnerable adults to be exposed to. The lawyers played many videos viewed by Molly on Instagram, many of which were ‘instructional’ in nature, showing Molly efficient ways to end her life, coupled with messages of doom and depression — in very large quantities, repetitively. Was she part of a really twisted experiment gone horrifically wrong? It’s looking more and more likely.
But when Elizabeth ‘Liz’ Lagone, Head of Health & Well-Being Policy at Meta, took the witness stand and was told that, out of the 16,300 posts Molly saved, shared or liked on Instagram in the six months before her death, 2,100 were depression, self-harm or suicide-related, she toed the party line that the platforms were safe and did not harm people, much like Yann LeCun did when asked about the technology.
The barrister for the Molly Russell family asked Ms Lagone, “Do you think this type of material is safe for children?” She replied, “I think it is safe for people to be able to express themselves.” The coroner, Andrew Walker, interjected and asked: “So you are saying yes, it is safe or no, it isn’t safe?” She replied, “Yes, it is safe.”
(Ms Lagone has a background in public health and a B.A. in Political Science. She does not appear qualified to assess the potential impact of suicide- and self-harm-related content on the minds of impressionable teenage girls.)
I don’t believe these people are evil. But they’ve been indoctrinated into a world where ethics falls off a cliff when business results are so heavily weighted towards getting likes or sales that there’s no time to stop and reflect. They don’t view the victims here as people but as users, numbers, and metrics. Ms Lagone did not even attempt to attend the testimony of Molly’s parents during the inquest, choosing instead to appear only for her own section, despite being ordered to fly over from the U.S. to take part. Are those the actions of someone who sees Molly as anything other than a user gone wrong? An error.
Ordinary people do not generally perceive a problem until something goes wrong, but people like Mosseri, LeCun, and Lagone don’t even think anything is wrong, even when the evidence says otherwise.
Just process that last sentence for a minute.
(Please also note in my three examples above I haven’t mentioned the founder's name once. We always associate the harm with the founder. But it’s becoming clear this is a company of senior executives who have all sworn their allegiance to the algorithm-gods.)
What Next?
I believe the Molly Russell inquest might be the moment where real change happens.
An inquest is different to other courts because there are no formal allegations or accusations and no power to blame anyone directly for the death. At the end of an inquest, the coroner will give their conclusion, which will appear on the final death certificate. The death can then be officially registered. But the conclusion cannot (in most circumstances) include any suggestion of blame. So the family aren’t trying to pin their loss on Facebook; let’s make that very clear.
At the end of the inquest, the coroner will give a conclusion about the cause of death. The possible conclusions are:
- Natural causes
- Accident or misadventure
- Suicide
- Narrative (which enables the coroner to describe briefly the circumstances by which the death came about)
- Unlawful killing (or lawful killing)
- Alcohol or drug related
- Industrial disease
- Road traffic collision
- Neglect (usually contributing to another conclusion, e.g. natural causes)
- Open, meaning that there is insufficient evidence to decide how the death came about
In the case of Molly, the coroner may record a narrative conclusion describing the circumstances, which in this case show that social media played a significant part in the build-up to her suicide. That means Meta / Facebook and Pinterest (which I haven’t mentioned because they held their hands up and admitted they’d failed… which also made Ms Lagone from Meta look pretty arrogant) are on the hook even if they’re not actually allowed to be cited in the write-up. If that is the case, then the coroner’s conclusion will undoubtedly draw attention to the harms of social media in a much broader field of vision, including among those who legislate and make policy decisions in government departments like DCMS. It could also trigger much deeper scrutiny from governments and public health bodies into the harms of some social media, and into what platforms are proactively doing to prevent them.
What will need to happen to get companies like Meta to reassess their working practices is a massive dent in their revenues; otherwise they simply won’t change. Hit them where it hurts, and it might start to break the spell. It has already happened to them for breaking GDPR: the Irish Data Protection Commission (DPC) fined them €17 million (~$18.6 million) over a string of data breaches dating back to 2018. It’s still only a drop in the ocean for a company that, so far in 2022, has made $56.7 billion in revenue. Is fining them enough? I’m not sure.
But there is another hope… of that $56.7 billion, $55.1 billion came directly from advertising, which constitutes 97.2% of its total revenue this year.
Perhaps the sins of the past weren’t enough to put off the brands pumping millions into the machine, but how about the death of innocent young people sucked into an algorithmically generated artificial vortex of behavioural-science-created self-harm hell? That’s not a good look, is it? Would you like your company associated by proxy with that? The way to clip their wings isn’t by policing them; it’s by starving them. Our move.