Colourful Beads and Cheap Trinkets

Charles Jameson
6 min read · Mar 12, 2021

Privacy is a strange idea.

Unlike a right to free speech or a right to equality, it has no clear definition in legislation or in society. However, we all seem to ‘know’ what it means, one way or another. I call this the ickiness scale.

Security cameras in the street? A bit icky.

Security cameras with facial recognition? Ickier.

A government database that tracks your location at any time based on those cameras? Ickiest by far.

Privacy has taken on a whole new meaning in the 21st century. More and more of our lives are shifting online, and the current pandemic has only accelerated this trend.

However, the dialogue around privacy seems to be lagging behind. When we discuss privacy, we think about it in physical terms. We imagine Kate the police officer looking at our ‘file’ at the police station. We imagine Dave the advertiser scanning over a list of our previous purchases.

We instinctively use these structures to frame our thoughts and turn vague, principled arguments into tangible questions about human relationships — “Am I comfortable with Kate or Dave knowing this thing about me?”.

However, the world is rapidly becoming far more inhuman, and the way we approach these questions must change with it.

A Surge in Data

It’s difficult to overstate the profound value of data in the modern world. Historian Yuval Noah Harari sums this up nicely:

“At present, people are happy to give away their most valuable asset — their personal data — in exchange for free email services and funny cat videos. It’s a bit like African and Native American tribes who unwittingly sold entire countries to European imperialists in exchange for colorful beads and cheap trinkets.” — Yuval Noah Harari, 21 Lessons for the 21st Century

Privacy no longer just refers to your phone number or a past purchase on Amazon. It refers to every website you visit and every link you click. Every search query, every photo, and every mouse movement. Every 4G tower you connect to, every café Wi-Fi network you use, every GPS signal you emit, and every text you send.

To make sense of all this data, Kate and Dave have made way for algorithms, machine learning, AI, and countless other technologies with equally buzzword-y names. And while it’s easy to dismiss this data as just banal facts, they can be used to predict deeply personal information — your politics, your sexuality, your race, your medical issues, your religious beliefs, your wealth and more.

A Decline in Transparency

As more of our world shifts online, it becomes easier by the day for companies and governments to hide their ickiness behind jargon and code.

Consider, for example, the security cameras from above.

Curiously, all three levels of surveillance look identical. After all, it’s just a camera on a pole. However, the software driving these systems could turn from harmless to terrifying in a single update, and unless the owners disclosed the change, no one would ever know.

Privacy was not always this way. If paparazzi stuck a camera in your face, you could be fairly certain how your information was being collected and for what purpose. It was clear where your data was going (i.e. to the newspapers), and it was clear how you could avoid the situation in the future.

However, the rules have changed. The ease with which companies and governments can track and analyse the oblivious masses is frightening. They do not need to disclose how they collect your data, how they store it, or how they analyse it and profit from it. And even when they act responsibly with that data, there is a constant risk of it being stolen, and of someone else doing something even worse.

Case Study — Cambridge Analytica

In 2013, researchers at Cambridge University’s Psychometrics Centre published a paper that found the following:

We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender… The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases.
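The kind of model the paper describes can be illustrated with a small sketch: a classifier trained on a binary user-by-Like matrix. The data below is entirely synthetic (the real study used millions of actual Likes and a dimensionality-reduction step), so this is a toy illustration of the technique, not a reproduction of the research.

```python
# Toy sketch: predicting a sensitive attribute from a binary Likes matrix.
# All data here is synthetic and the setup is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_users, n_pages = 1000, 50
# Each row records which of 50 pages a user has Liked (1) or not (0).
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Synthetic "sensitive attribute", deliberately correlated with Likes
# of pages 0-4 so the classifier has a genuine signal to pick up.
signal = likes[:, :5].sum(axis=1)
attribute = (signal + rng.normal(0, 1, n_users) > 2.5).astype(int)

# Train on 800 users, evaluate on the held-out 200.
model = LogisticRegression().fit(likes[:800], attribute[:800])
accuracy = model.score(likes[800:], attribute[800:])
print(f"held-out accuracy: {accuracy:.2f}")
```

Even this crude setup recovers the hidden attribute far better than chance, which is the core of the paper’s point: seemingly banal signals, aggregated at scale, become predictive of things people never chose to disclose.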

With this research in mind (and with one of its authors on board), the company Cambridge Analytica began collecting huge tranches of personal data harvested from Facebook and other sources.

This data was acquired primarily through a Facebook app called This is Your Digital Life. To use the app, users had to give it permission to see all of their personal information. But crucially, it also automatically received all of each user’s friends’ information.

(Facebook very helpfully let users give away all of their personal data to apps in just one easy click.)

And so, while it had only roughly 270,000 actual users, the app was able to collect data on over 87 million people. And while selling this data to Cambridge Analytica was against the app’s terms of service with Facebook, Facebook had no way of knowing the exchange had occurred.
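The scale of that amplification is worth pausing on. A back-of-the-envelope calculation from the figures above shows how many non-consenting people each app user exposed on average:

```python
# Back-of-the-envelope amplification, using the figures quoted above.
app_users = 270_000        # people who installed the app
people_affected = 87_000_000  # total profiles harvested

exposed_per_user = people_affected / app_users
print(f"~{exposed_per_user:.0f} people exposed per consenting user")
```

In other words, each click of “allow” handed over the data of roughly three hundred people who never saw the app at all.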

Cambridge Analytica started creating ‘profiles’ for millions of potential voters in the United States in the run-up to the 2016 US Republican primaries (for Ted Cruz and Ben Carson), then subsequently in the presidential election (for Donald Trump).

It also worked for Leave.EU, a Brexit campaign group.

Cambridge Analytica’s goal in each case was to identify and analyse voters. These insights could then be used to create bespoke ads, playing on each person’s individual fears and beliefs, to try to influence their vote or turnout in key geographic areas.

While the extent of their success is still debated, the outcome is alarming in any case. Facebook’s lax data policies made it all too easy for political players to influence elections and a referendum in unprecedented ways by advertising on Facebook itself. Furthermore, most of the data they used was obtained without the consent of the people it described, in a process facilitated by Facebook.

Resisting Apathy

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” — Edward Snowden

You’ll find very few people who actively argue against a right to privacy. Instead, the biggest opponent of privacy in the modern world is simply the sheer effort it takes to defend it.

Why is this? Unlike with free speech or equality, many companies have a powerful financial incentive to wear away at this fundamental right to privacy.

Besides, Facebook and Google are undeniably useful services, so it is often easier to simply ignore the lingering ickiness than upend your digital life and move to alternatives.

But the fact remains –

Yes, Facebook reads all of your messages, predicts your likes and dislikes, and tracks what websites you visit.

And yes, Google reads all of your emails, analyses your photos, and tracks your location.

And yes, both companies are worth hundreds of billions because of it.

Ultimately, if you are not being sold anything, you are the one being sold.

And so, when you encounter something new on the internet, or your government is making changes that seem a bit ickier than usual, pause for a moment. There are new considerations to be had when thinking about privacy — the power of fine-grained data, the lack of transparency, and the risks of outside attack.

These changes make it more important than ever to understand what you are about to give away with a single click.
