The Spy in Your Pocket: How Big Tech is Stealing Your Data
Kyle Taylor on how Big Tech companies are harvesting your private information
In this extract from the Little Black Book of Data and Democracy, published by Byline Books, author Kyle Taylor examines the motives of the three big Internet companies and finds they are far from innocent…
It’s important to fully understand how messed up Big Tech companies are at their core. Let’s start with a Google search.
93% of all online experiences start with a search engine.
Since we say “Google it” and not “Bing it” or “DuckDuckGo it”, it will come as no surprise to learn that 88% of all searches in the USA and a whopping 92% of all searches in the UK were done on Google. Once people have searched, 75% of the time they don’t scroll past the first page of results.
That means that almost all of the time, a person’s entire Internet experience is determined by what Google and its algorithm decide to put on the first page of search results. Since Google makes money from selling ad space, it is financially incentivised to prioritise the highest-paying advertisers and its own products and services.
Their interest isn’t in offering you the best results for you but the most profitable results for them. Your search is their product, not their service.
Onward now to my winner for absolute worst, and we’ll move backwards (or sideways) from there.
FACEBOOK IS TAKING YOUR DATA EVEN IF YOU AREN’T ON FACEBOOK
Facebook bought WhatsApp for $19 billion, and it did so for the same reason Google paid $1.65 billion for the loss-making YouTube: data. Tech giants buy seemingly unprofitable companies to monopolise their access to behavioural data that they can then incorporate into their advertising business model.
Interestingly, the European Commission – the executive branch of the European Union – only gave Facebook permission to buy WhatsApp because it received assurances that data flows between the two businesses would remain separate.
Unfortunately – yet predictably – Facebook has since reneged on that assurance. In 2016, the company linked WhatsApp phone numbers with profiles on Facebook. This was challenged in Brussels and eventually led to a €110 million fine, which is basically nothing for a company worth over $500 billion.
To put that into context, let’s say you make $50,000 a year. That would be like getting fined $13. Yes, THIRTEEN DOLLARS. It’s basically nothing.
While WhatsApp messages are indeed “end-to-end” encrypted (meaning not even WhatsApp can view your messages), Facebook can and does collect other sorts of what’s called “metadata” from the app: your phone number, your contact list, your personal information, who you message, when you message them, and how often.
You know, all that stuff you probably thought was private information. All of this is used to personalise and increase the information Facebook holds about you and then sells to advertisers. Don’t just take our word for it: a since-deleted FAQ on WhatsApp’s own website told users how to “choose not to share my account information with Facebook to improve my Facebook ads and products experiences”.
This isn’t super surprising. Mark Zuckerberg himself, who founded Facebook and serves as its CEO, admitted to the United States Congress that the company collects data on non-Facebook users. Take the “People You May Know” feature, which asks you to invite people to Facebook. Ever wonder how Facebook knows who you may know? Facebook is also working on plans to connect the messaging services of Instagram – a public social media platform that it also owns – and WhatsApp. How can you keep the latter encrypted if it’s merged with something that’s public?
YOUTUBE IS A HOT-BED OF RADICALISM
YouTube is perhaps the perfect case study of the way the internet creates echo chambers with what are called ‘reconfirming algorithms’.
As a reminder, an algorithm is a set of rules that analyses data – like yours – to work something out, such as which video you might watch next. That sounds harmless, but it is far more damaging than you might think. In this context, it reconfirms that you might be, for example, someone who believes Hillary Clinton is running a paedophile sex ring from the basement of a pizza place.
In 2006, Google paid $1.65 billion for YouTube, a company that had never made a profit and was inundated with copyright infringement lawsuits. Why? To increase and streamline data harvesting. YouTube was, and still is, massively popular. In the game of data harvesting, how many users you have really matters. Now, all those YouTube users are on a Google platform and Google can track and record their behaviour. But in recent years it has become increasingly clear that YouTube is a radicalising machine.
Soon after Google purchased YouTube, it introduced advertisements on the website. Not long after that, it started giving people who put videos on YouTube a cut of the money companies like Nike were paying to advertise on their videos. The more views you get, the more you are paid. It’s sort of like a pyramid scheme.
Google gets YouTubers to use you – the product – to get more advertisers – their actual customers – and gives the content creators a cut. This, of course, incentivises provocative and sensationalistic content because the longer people spend on YouTube, the more money Google can make.
The goal is to sell as many ads as possible, so Google wants to keep you on the platform as long as possible. Google does that by only showing you stuff you’ll definitely like, whether it’s cat videos or “lock her up” Hillary Clinton mash-ups.
For example, let’s say you’re slightly interested in this “Pizzagate” thing you’ve heard about. So you watch a video made by some random person on YouTube. (Why we’re inclined to believe unverified, total strangers on social media when we wouldn’t do that in real life is another dangerous phenomenon of the internet age.) YouTube’s algorithm goes: “Hey, you liked that? Here’s something a bit more extreme about how the Clintons are actually running a global paedophile ring. But first, watch this short ad (cha-ching!).” Keep them happy, keep them watching, make more money. “Did that grab your attention? Well, here’s one about ANOTHER conspiracy involving the Clintons. But first, another ad (cha-ching!).”
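To see why this escalation happens mechanically rather than maliciously, here is a deliberately crude toy model – not YouTube’s actual system, and every title, score and function in it is invented for illustration. It always recommends whatever the viewer is predicted to watch longest; because engagement in this made-up catalogue peaks just above the viewer’s current level of extremeness, the recommendations ratchet upwards on their own:

```python
# Toy sketch of an engagement-maximising recommender (purely illustrative;
# all titles, numbers and the "watch time" model are invented).

# Hypothetical catalogue: (title, extremeness on a 0-10 scale, topic)
CATALOGUE = [
    ("Pizza restaurant review", 0, "pizza"),
    ("Strange questions about a pizza place", 3, "pizzagate"),
    ("What 'they' don't want you to know", 6, "pizzagate"),
    ("The global conspiracy, explained", 9, "pizzagate"),
]

def predicted_watch_time(video, viewer):
    """Toy model: viewers watch longest when content matches their topic
    and is slightly more provocative than the last thing they saw."""
    title, extremeness, topic = video
    if topic not in viewer["topics"]:
        return 0.0
    # Predicted engagement peaks just above the viewer's current level.
    return 10.0 - abs(extremeness - (viewer["level"] + 2))

def recommend_next(viewer):
    """Pick whatever is predicted to be watched longest -- nothing here
    asks whether the content is true or healthy."""
    return max(CATALOGUE, key=lambda v: predicted_watch_time(v, viewer))

def watch_session(start_level, steps):
    """Simulate a viewing session: each watched video drags the viewer's
    'level' towards its own extremeness, so recommendations escalate."""
    viewer = {"topics": {"pizzagate"}, "level": start_level}
    history = []
    for _ in range(steps):
        title, extremeness, _ = recommend_next(viewer)
        history.append(title)
        viewer["level"] = extremeness
    return history
```

Run with a mildly curious viewer (`watch_session(0, 3)`) and the session walks straight up the catalogue from “Strange questions” to “The global conspiracy, explained”. No line of the code sets out to radicalise anyone; maximising predicted watch time does it by itself.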
Before you know it, you’re watching clips promoting QAnon, the fringe group mentioned earlier that’s organising a global movement to “free the children from a Satanist Liberal Hollywood cult” that doesn’t exist. That’s the one the FBI – even under the Trump administration – says has the markers of a terrorist organisation. So someone who’s curious about a thing they heard – possibly on Facebook or Twitter – becomes a totally radicalised conspiracy theorist living in an alternate reality that was incentivised by YouTube’s business model. Not ideal.
This combination of monetisation and personalisation has been a boon for radical and fringe ideologies, particularly on the far right. In January 2020, research presented at the ACM Conference on Fairness, Accountability and Transparency in Barcelona confirmed what many had been saying for years: people who watched “moderate” content on YouTube were likely to migrate towards increasingly radical videos.
Promoters of extreme right-wing ideology (who evidence suggests are more inclined towards shocking, provocative and emotionally charged content) have exploited YouTube’s design to entice larger and larger audiences. And YouTube has allowed this because larger audiences mean more ads sold (cha-ching!). That incentive – making as much money as possible in an information business – is maybe, kinda (read: TOTALLY) not in the best interest of democracy.
To function, a democracy requires an accurately informed society that lives in a shared reality. As I have explained, this does not exist on the internet. On a platform like YouTube, the highly sophisticated (and secret) algorithm determines what videos appear on your homepage and on your “up next” section. Everyone’s YouTube is a highly personalised place. Just log out of your Google account and check: the new YouTube you see will be very different.
AMAZON ALEXA - YOUR OTHER ROOMMATE
As for Amazon, in many ways it might seem different to the other tech giants. After all, it has a recognisably pre-internet business model. “But it is just a shop online,” you say. “How could it be bad? I can get everything I need all in one place!” More stuff, more data.
In recent years Amazon has been moving into the world of Google and Facebook-style data harvesting and surveillance. Like these two companies, Amazon sells the extensive data it holds on its users (in this case, their shopping and viewing habits) to advertisers.
Amazon’s Alexa, however, is taking this practice into new territory. Alexa is an AI voice assistant device that you keep in your home and can speak to; a portal to the internet that can provide information, order you a product, turn on your music and even read you a recipe. Convenient, right? Don’t forget what convenient usually means. More data.
Alexa is more than just a device. Amazon has made it very easy for third-party developers (that’s other companies that create stuff for Amazon’s products) to connect to Alexa. The aim is to build an integrated environment that unifies all “smart” devices in your home, workplace, or wherever else, from lights to the thermostat to your refrigerator.
Dave Limp, Alexa’s senior vice president, explained the goal as being to “create a kind of open, neutral ecosystem for Alexa… and make it as pervasive as we possibly can”.
They don’t even pretend they care about your privacy. With Alexa, Amazon is leading the way in bringing online data harvesting practices into the offline world. Amazon claims the device only listens to you when you have activated it with a request, but the company’s patent applications in the U.S. show that Amazon is planning how Alexa and connected devices could be used to monitor customers’ private behaviour and then nudge them towards personalised advertisements for products.
For example, it might hear you talking to your partner about how it has been raining a lot lately and start delivering ads to you across the internet for umbrellas. That may seem innocent enough but what if you’re talking about being HIV positive and then start to have trouble getting health insurance? Or you express gratitude that your criminal record was expunged and won’t show up on your background check then suddenly have job interviews cancelled?
This all-invasive future may already be here as there is mounting evidence that Alexa and devices like it are already listening to you even when they’re turned “off”.
Also owned by Amazon is the Ring in-home security camera system, which has been sold to “bring protection” to your house. Unless, of course, somebody hacks into the one you’ve placed in your children’s bedroom and begins speaking to them through the built-in speaker while they watch, which happened to a family in Tennessee, USA. Nothing says security like discovering a stranger may have been watching you for days without your knowledge.
‘SMART’ SPEAKERS AND THAT SUPERCOMPUTER IN YOUR POCKET
Has this ever happened to you: you’re talking about something, then you open Instagram and start seeing ads for that very thing? Your first thought is “OMG, my phone is listening to me.” Mention that thought to your friends, and you’re often shut down as being “paranoid”.
You’re not paranoid. The biggest player in the “listening” market is Amazon’s Echo devices. These devices listen out for the keyword “Alexa” so they can make your life “more convenient” and do something for you. You’ll remember that if something is selling convenience, it’s likely taking your data, and that’s as true as ever with Alexa. But to listen out for that one word, it has to listen to all your words.
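The point that wake-word detection requires processing everything can be sketched in a few lines. This is a hypothetical simplification, not Amazon’s actual implementation – real devices match audio patterns on-device rather than text – but the structural point is the same: every chunk of what you say passes through the detector, keyword or not:

```python
# Toy sketch of wake-word detection (hypothetical; real devices match
# audio on-device, but the structure is the same: everything is examined).

WAKE_WORD = "alexa"

def hears_wake_word(transcribed_chunk):
    # Every chunk must be checked -- the detector cannot know in advance
    # which ones will contain the keyword.
    return WAKE_WORD in transcribed_chunk.lower()

def run_device(audio_chunks):
    """Return the chunks that woke the device, plus a count showing that
    every single chunk was processed along the way."""
    processed = 0
    activations = []
    for chunk in audio_chunks:
        processed += 1  # no chunk skips the detector
        if hears_wake_word(chunk):
            activations.append(chunk)
    return activations, processed
```

Feed it three snippets of household conversation where only one begins with “Alexa”, and it activates once – but the processed count is three. The device “only responds” to the keyword, yet nothing reached the keyword check without being listened to first.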
Amazon itself admits this is true (again, they aren’t even hiding it so why don’t we believe them?). Amazon claims it is always listening so the artificial intelligence can “learn.”
Google got caught up in a bit of a whoopsy daisy when an “accidental upgrade” enabled auto-listening on devices belonging to people who hadn’t signed up for its “home security” offering. You read that right. The “home security” offering that you pay for keeps the speaker listening all the time so it can hear fire alarms and warn you. It’s just helpful, see?
A Berlin-based company even managed to build eight “smart spies” apps that received approval in one form but were subsequently updated to spy through Amazon Echo and Google Home devices.
If this small Berlin-based company can do it, imagine what mega corporations like Amazon and Google themselves can do. Or governments for that matter. Do you really want a private company – or your government – to be able to listen to every word you say? If you leave your Android device or iPhone on the default settings it came with, everything you say could be recorded and used for marketing purposes not just by the company that made the phone but by third-party apps (that’s almost EVERY app) that you install.
This was tested recently. A Vice reporter started saying key words and phrases several times a day for five days. Within 24 hours of starting, he was getting ads for the stuff he’d been saying. He said “back to University” and started getting ads for summer courses. He said “cheap shirts” and started getting ads for – you guessed it – cheap shirts. Some companies deny this while others – like Google – are open about it.
But for most, their terms and conditions – and the law – don’t prohibit it. This entire, all-encompassing system of treating information about you as nothing more than a commodity for profit – called “surveillance capitalism” by Professor Shoshana Zuboff of Harvard Business School – now extends way beyond advertising. From watches that track your heart rate around the clock to “smart fridges” that know when you’re low on milk, every bit of data about you is stored and used with the sole objective of making money, regardless of the human implications.
This handy guide to how your data is used and abused, its implications for democracy and what to do about it is available for £9.99 from our Byline online shop