Engineered Estrangement | An Interview with Cory Doctorow
The author, activist, and technology critic Cory Doctorow on reckless broligarchs and what their 'enshittified' platforms are doing to our future
Perhaps Keir Starmer accidentally hit on an important point when he said that Britain risks becoming an “island of strangers.” The spaces we use to convene and communicate – which are increasingly found online – are being enclosed and distorted by oligarchs outside of our borders. While Starmer attributed social alienation to a “squalid chapter” of mass migration under the Conservatives, little has been said about Britain’s sordid era of monopoly social media and regulatory capture, where the public sphere has been quietly handed over to monopolists who profit from our disconnection.
Cory Doctorow is a digital rights activist, journalist, and award-winning science fiction author who argues that the UK’s regulatory environment is enshittogenic – incentivising platforms to undergo enshittification, the progressive degradation of the user experience on social media, including our ability to use it to connect meaningfully with each other.
The Musks and Zuckerbergs of this world are not evil wizards with mind control rays, Doctorow says, but simply unethical and reckless people who were abetted and enabled by a state unwilling to hold them accountable.
I spoke to Doctorow about the big problems with Big Tech today, what we can do to hold them accountable, and how envisioning science-fiction utopias can help us on the way to something better.
Matt Gallagher: In layman's terms, could you explain how today’s tech companies, particularly social media giants, can make people feel like strangers to one another?
Cory Doctorow: I don't think there's anything intrinsic to the idea of connecting with people online that makes us feel like strangers. I don’t buy the hypothesis that forever and always, when you meet someone online, your empathy goes away because you can't see the real person on the other side of the keyboard. There are probably people for whom that happens. I've had occasion to confront people who’ve been horrifically rude to me online, and I've discovered that they're horrifically rude in person too.
Many of us found incredible intimacy, friendship, support, love, camaraderie and political activation through online conversations. A lot of it happened on social media. At its best, social media is a great way to find people who are interested in the things you're interested in, oriented around the priorities you have – not just by searching for people, but also by having people suggested to you.
Social networks, in their modern form, are organised around advertising. They’re organised around finding people with hard-to-find traits. The person who's in the market for a piano is very rare, as is the person who shares your sexual kink, or your love for a certain obscure author, or your musical tastes, or going out to the same kinds of weird clubs on a Friday.
Social media, at its best, has been extraordinarily good at connecting people. And I don't want to lose sight of that. Because I think there’s a reactionary account of social media that says it just drives us apart intrinsically, and I don’t think that’s true.
However, social media has a booby trap, or a pitfall. Economists call it the collective action problem. It's very hard for people to agree to do things together. It's why you and your mates can't even agree on what movie to see on Friday. It’s why we’re all in grave danger of roasting because of climate change. It’s arguably the hard problem of the human race.
When we pile into social media, we experience what economists call the network effect. The more people there are using it, the more valuable it is. Once you find people there, it’s hard to leave. As much as you hate Mark Zuckerberg, you love each other more. It’s very hard to agree to leave, to figure out how you’re going to re-establish contact somewhere else.
I sometimes call this the Anatevka problem. In Fiddler on the Roof, the Cossacks ride through this sleepy Jewish village and kick six kinds of shit out of everyone. When the Tsar ultimately exiles all the Jews from the Pale of Settlement, their community is shattered. They’re all going to different places – Krakow, New York, Chicago – and they’re never going to see each other again. The only thing they had was their social capital. Ironically, the less of that you have, the harder it is to leave social media because it is the place where your social capital is located, and that's all you've got. It becomes a very important place for you.
Social media bosses know this. In the Federal Trade Commission’s case against Facebook, there are memos where the product manager for Facebook Photos is emailing Zuckerberg and saying: “Mark, this product is valuable – not because it's a great way to organise your photos, but because people won’t want to leave their family photographs behind.”
Facebook can impose – and this is another term from economics – a high switching cost on people who leave. They can make it more expensive to leave than it is to stay.
The expense can take many forms. One is to evacuate your feed of the things you want to see – the people you’re following – and fill that vacuum with boosted posts and ads. Another way is to spend less money on moderation, less money fighting spam, shock images, harassment – all the things that make it unpleasant. That’s a way of shifting the social cost to end users.
The more expensive it is for you to leave, the worse they can treat you – and you'll stay. We know what this phenomenon looks like. Once you get through security at Heathrow, a bottle of water is suddenly five pounds, because they know you can’t leave.
So meeting online doesn’t inherently destroy human relationships. But once platforms control you through high switching costs, they can erode the user experience – enshittify – for profit.
Matt Gallagher: So it’s not about the technology itself, more about the people controlling it and their imperatives. How are people like Mark Zuckerberg and Elon Musk using their platforms to reshape the world in their own image?
Cory Doctorow: These guys have very bad ideas. Musk is slightly different in that he doesn’t start companies – he buys other people’s companies and puts his name on them. But Mark Zuckerberg made Facebook in his image. He founded Facebook in his dorm room so he could non-consensually rate the fuckability of his fellow undergraduates. Without any hyperbole at all, that is what the Facebook was originally created for.
Zuckerberg is someone who visibly and consistently does not see other people as real. You know the categorical imperative: ‘don't treat people like things’. Mark Zuckerberg sees people as things, as a means to an end. I think Musk does too – he calls people ‘NPCs’ [non-player characters] when he's angry at them or when he can’t answer their questions.
But here’s the important thing: as bad as that ideology is, Mark Zuckerberg ran a decent service for a long time. Not perfect, by any means, but when there were conflicts between his priorities and the happiness of the people on the platform, they usually cashed out in favour of the users.
This raises a really interesting question: why did this guy, who’s just a terrible person, treat people well – and what changed so that he started treating people badly? And the reason I’m answering this question this way is that I think it’s a mistake to locate the crisis of social media solely in the ideology of Silicon Valley billionaires.
Sarah Wynn-Williams’s book about Facebook – Careless People – tells the story well. She was an idealistic member of the New Zealand Foreign Service who became convinced that Facebook was going to be globally important, and she wanted to help them shape how to relate to the rest of the world.
What she encountered when she got the job were people who were careless – and that’s the title of the book – careless in the sense that they are reckless. Mark Zuckerberg, when he harms his users, does so not because he’s sadistic, but because he doesn’t even care to find out what his actions are likely to cause. That carelessness runs throughout his behaviour. Zuckerberg refuses to be briefed on matters longer than a text message. There was a case where his inaction jeopardised peace talks in Colombia because he wouldn’t get out of bed before noon. It’s carelessness in the sense of dropping a lit match in a haystack.
By the end of the book, they’re a different kind of careless. They’re careless in the sense of not giving a damn. They’re careless in the sense of not caring what we might do to them, because all of the constraints that Facebook once operated under had gone away.
First was competition. Meta is currently on trial in the US for antitrust violations. They’ve bought up firms like WhatsApp and Instagram explicitly to extinguish their competitors. In one memo, Zuckerberg told his CFO that “people like Instagram better than Facebook, so I’m going to ensure that if they leave Facebook the platform, they’ll still be captured by Facebook the company.”
This is a confession, in writing. It’s the kind of thing that should have prevented that merger. The Obama administration waved it through at the time – as it did with all mergers, and as forty years of administrations before it had done. In Canada, the Competition Bureau has, in its entire history, challenged three mergers and successfully challenged zero. This has been the norm since the Thatcher era. We let companies buy out their competitors. It means they don’t have to worry about users leaving for a competitor.
But it also means that firms capture their regulators. When there are 200 companies in a sector, they can’t agree on anything – they’re a rabble. But when there are four companies, they’re a cartel, they’re a conspiracy. They can easily mobilise their policy preferences, and they have lots of money because they’re not competing with one another. Peter Thiel famously said that “competition is for losers.” He says it’s “wasteful.”
In the UK, Keir Starmer just fired the head of the Competition and Markets Authority (CMA) and replaced him with the former head of Amazon UK. That’s regulatory capture.
These companies also used to have to worry about their workforce. If you read Careless People, you’ll see that again and again, tech workers cared about users. Tech workers had power, but it wasn’t because of solidarity – it was because of scarcity. They never unionised when they had the chance.
So as we’ve seen mass layoffs in tech, the redundant workers have acted not as a disciplining force on tech bosses, but on the workers who remain. If you tell your boss to go to hell because you won’t do something poisonous to users, your boss fires you knowing there are ten people behind you ready to take the job.
And finally, there’s this idea of interoperability. Facebook’s original problem was MySpace. Their promise was “we’ll never spy on you” – that was their original pitch. But if your friends wouldn’t leave MySpace, are you really going to sit on Facebook rereading the privacy policy waiting for your friends to come to their senses? So Zuckerberg gave them a bot. You gave the bot your login and password, and it would go to MySpace, scrape your messaging inbox, pull it into Facebook, let you reply, and push the replies back.
But since then, IP law has metastasised. Anti-circumvention law has expanded so that reverse engineering, modding, and making plugins for technology without permission has become a literal felony. Software engineer and hacker Jay Freeman calls it ‘felony contempt of business model’.
So you’ve got these guys with these terrible ideologies – narcissists, power-mad, disconnected. But despite their ideological failings, we were able to discipline them. Not perfectly, and we could have done better. It wasn’t their ideology that poisoned social media. It was the policy environment – an environment that was, to use a term I coined, enshittogenic. It encouraged enshittification, and we got enshittification.
In these antitrust trials, and in tell-all memoirs like Wynn-Williams’s, you see the internal struggles at the firms – people who used to win arguments against doing terrible things by saying, “If we do that, not only will it make me feel bad, but it’ll lose us money, or get us fined, or cost us key employees, or someone will find a work-around.”
And then one by one, those arguments stopped working because the consequences went away. The firms realised: we can be as horrible as we want to our users, and nothing bad will happen to us.
Some mix of sadism and greed triumphed at every juncture in the firm, over and over again. Eventually, you get the modern Facebook, which is just a pile of shit. As is Twitter.
That’s the actual crux here: that our policymakers, having been warned that this was the likely outcome, took specific decisions – actual things we can point to – and the thing that was predicted happened.
By all means, let’s make Mark Zuckerberg’s name a curse for a thousand years. But let’s not forget that he was able to do what he did because policymakers over and over again deliberately failed us.
Matt Gallagher: Robert Putnam’s book Bowling Alone talks about the collapse of civic life in the US – it seems like a similar process has transpired online. You’ve written about reclaiming the ‘commons’ – spaces where people act as citizens rather than consumers. Can you talk about what that looks like in the digital space and how we get there?
Cory Doctorow: Putnam is onto something. One of the reasons Americans don't belong to bowling leagues anymore is because we've seen 40 years of wage stagnation and the elimination of public spaces, and so for the most part, you have to work three side hustles to pay the rent. Even if you do have a moment free, there's nowhere to go.
In the UK we still deploy the Mosquito – a noise device that emits a tone only young people can hear, so they don’t gather in front of shops. We ask why people are staying indoors, why they’re not hanging out anymore. The same phenomenon that gave us Big Tech first gave us the privatisation of public spaces, the elimination of leisure, the growth of side-hustles and the collapse of unions and a living wage.
As to what we can do to bring the commons back – I think the most valid part of the libertarian argument against totalised public provision of services is that sometimes the government is wrong. I like the idea of the Government providing fibre-optic broadband, but I don’t want Donald Trump deciding what I can see on the internet. I don’t want Nigel Farage deciding either.
The reality is that there’s a lot of space between the total public provision of a service and the public provision of aspects of a service, such that monopolisation is harder. You can clear away the barriers to entry for co-operatives, individuals, loose collectives, small firms and large firms.
At the basic level, we could just have the Government own the fibre. Councils could put it in. The national Government could connect city lines. That’s one way to do it: have the Government become your internet service provider. Arguably, replacing BT Openreach with anything is good. I’ve never encountered a worse firm.
But that’s not where our imagination has to stop. Ultimately, the government is always the one laying the fibre, in the sense of permitting it. Tearing up all the pavement in Hackney to get fibre to every premises as a private-sector matter, paying people for the trouble, would cost a trillion pounds and you still wouldn’t finish. Ultimately Hackney council creates the space for the private sector to come in and do it, and at that point they might as well cut out the middleman and do it themselves.
But at the end points, where the data centres and the network controls are, you could just share the facilities. Anyone who wants to start up an internet service provider could show up, put their own equipment in – or lease it from the government – and connect subscribers who have paid them for internet access.
In this system, you would have a choice: the government can provide your internet, anyone else can provide your internet, or you could provide your own internet. That’s a situation where you’d be pretty well insulated from Nigel Farage deciding what your internet service looks like. You’d also have hedges against monopoly, because you’d lower the cost of entry to peanuts.
Now scale that principle up to social media. Facebook and Twitter are retrograde in that they don’t connect to anything else. It’s a very weird idea – as if Vodafone wouldn’t let you call someone using Three Mobile, or Gmail wouldn’t let you contact someone on Outlook.
In contrast, two new services among many, Bluesky and Mastodon, both run on open systems, with public protocols that anyone can connect to. They want to be interoperable – meaning you can take your messages, friends, and posts with you if you leave.
Governments could be contributing to building that alternative. They could spend money on the standardisation effort. They could throw security researchers at it, or subsidise security research. You could imagine government-funded ‘bug bounties’, paying people to find bugs in the software.
Bluesky also does something very interesting called ‘composable moderation’, meaning you can moderate the things in your feed in multiple different ways. People on the website label everything that’s AI-generated, for example, allowing you to filter that content out. I don’t want the Government doing age verification on my kids, which is essentially gathering data about all the stuff my kids see, but I would welcome a well-curated feed of things that kids shouldn’t see, and I might toggle that on or review it myself.
You can imagine that infrastructure being publicly provided in a way that would allow co-ops, individuals, tinkerers, loosely formed groups, churches, libraries, councils, the national government, for-profits and non-profits, political parties – everything under the sun – to be part of the social media universe, but without hegemonic control, and without the risk of overreaching state intervention in speech forums.
Matt Gallagher: These seem like quite common sense ideas, but with tech policy the Government – and the public – is very much trapped in Margaret Thatcher’s notion that “there is no alternative.”
You’re a science fiction writer, you spend time imagining alternative futures. In books like Walkaway, people step outside of a broken system and build something new. How does utopian thinking help us break these intellectual barriers down?
Cory Doctorow: It’s funny you refer to Walkaway as a utopia. I personally think it’s a utopian novel, but many people have called it dystopian. I think the mistake they make is they confuse dystopian furniture with dystopian stories.
It’s not dystopian to imagine that things will break. I think someone who comports themselves as though nothing can go wrong isn’t an optimist – they’re an asshole. That’s the person who says, "Why should we put lifeboats on the Titanic? It’s unsinkable." You don’t want to be around people who don’t think about how things can break.
If you're thinking about how things can break, and you’re thinking usefully, you’re also thinking about how we fix them when they break – how when the second law of thermodynamics asserts itself, we will once again temporarily push it back. Walkaway is a story about people struggling free of a bad situation.
Yes, there’s lots of dystopian stuff going on in Walkaway – but they’re dealing with it. The machine has stopped, but it didn’t explode in a shower of white-hot shrapnel. They can get it started again and set it on a better course.
And you’re right. Thatcher’s maxim, "There is no alternative," has really colonised our ideas about technology. Big Tech certainly wants you to believe it. And what’s interesting is that Big Tech and its least sophisticated critics tell the same story with different emotional tones.
Mark Zuckerberg says you can’t talk to your friends without him spying on you, because that’s how the technology works. And his critics say you can’t talk to your friends without Zuckerberg spying on you, because that’s how the technology works.
But if you actually look at the history of Facebook, it started off as the alternative to MySpace – and the promise was no surveillance. We know it’s possible to make a Facebook that doesn’t spy on you, because Facebook was a Facebook that didn’t spy on you – until it outgrew the consequences of spying.
Recovering that imagination – that critical imagination – is really important. We need to not just reflexively buy into Tech’s own story about what it does.
The tech bros claim all kinds of outlandish things. They claim they’ve perfected a mind control ray to sell your nephew fidget spinners. And their critics say that Robert Mercer stole the mind control ray and turned your uncle into a QAnon believer.
But if you believe that, you’re actually helping them sell ads, because when they go on a sales call, they say, "I don’t know if you’ve heard, but even my critics think I’ve built a mind control ray." And that’s a great sales pitch.
The evidence for mind control rays is pretty thin. Every person who’s ever claimed to have invented a mind control ray was lying. Rasputin, MK Ultra, Mesmer, pickup artists, NLP weirdos – all of them.
And I think Zuckerberg is lying too. The evidence for his mind control ray is very thin.
They did that voting experiment where they exposed 61 million people to a stimulus they thought would increase voter turnout. They saw an effect size of about 0.39% – real, sure, a few hundred thousand votes, but very small. One or two votes per precinct. And importantly, we know that persuasive stimulus regresses to the mean.
The first time you saw "£1.99" at a shop, you didn’t realise that it was two quid – but now you do. These tricks don’t keep working. If they did that experiment again, the 0.39% would probably shrink, not grow.
So we need to be sceptical about these mind control claims – whether they come from the tech bros or their critics.
Science fiction can help us do that. It can liberate our imagination from the overarching narrative that both tech and its critics conspire to sell – the narrative that says that what exists now is inevitable.
It can help us imagine more realistic, more achievable futures.
And I think we’re falling into this trap again with AI right now. I don’t believe that AI can do your job. But I firmly believe that an AI salesman can convince your boss to think it can do your job, and to fire you and replace you with an AI that can’t do your job.
We really need to distinguish between those things.
If you go around saying, “I’m angry at AI because it can do my job and it’s going to make me unemployed,” you’re helping the AI salesman close the deal.
But if you focus on the fact that AI can’t do your job, that firing people and replacing them with AI is filling our society with technological debt, with something like the asbestos or the Grenfell cladding of the future – then you’re doing criticism that can have a material effect.
Science fiction can help us break out of fatalism. It can show us that the future isn’t something that happens to us – it’s something we build.
"AI is ... like the asbestos or the Grenfell cladding of the future". Probably the most profound sentence in the whole piece. We have been warned!
The problem with Facebook, Google and some others is that they are basically data scrapers, harvesting people’s activities and selling it on to advertisers. See Surveillance Capitalism by Shoshana Zuboff.
I joined a couple of Facebook Groups for discussion of 2 things I was interested in. It primitive, Stone Age a complete waste of time. I hardly ever use it, nor Google.