Cambridge Analytica Swindled Facebook Data From 50 Million People – It’s Time We Know What We’re Worth Online

The controversial data mining and consumer-profiling research firm Cambridge Analytica just got two black eyes and a swift kick to the groin. Over the last few days, three major stories broke about the secretive company, which the Trump campaign hired to inform its social media efforts during the 2016 election cycle. This all obviously isn’t good news for Cambridge, and it’s also not good news for Trump.

The reports are also very bad news for private citizens around the world, and they demand we take on a serious and overdue reckoning of how internet-based companies collect, quantify, and distribute consumer data. Let’s take these three stories together and try to understand what’s going on here, and what, if anything, can be done about it. Bottom line: It’s time we come up with a way for consumers to understand the exchange of data as intuitively as we understand the exchange of money.

Who is Cambridge Analytica?

Cambridge Analytica is a data mining firm partly owned by Robert Mercer, the main financial backer of Trump’s super PAC. Mercer also owns Breitbart, and at one time employed senior Trump campaign officials Steve Bannon and Kellyanne Conway. (It’s probably worth noting that the campaign hired both of them at the same time.) In fact, Bannon was at one point Vice President of C.A., and the billionaire Mercer family, right-wing conspiracy theorists and major Trump backers, are the tradewinds that guide this data-heisting pirate ship.

Here’s the Cambridge website. Here’s an article from The Guardian about how the firm exploits social media data. And here’s a piece about Robert Mercer, the Cambridge-Trump investor who believes nuclear bombs are a net gain for the people who get bombed.

For an insider’s idea of what the company is like, consider Christopher Wylie, the Cambridge co-founder who worked there until late 2014 and blew the whistle on the Facebook heist. He described the company’s 2016 campaign activity as exploiting the “mental vulnerabilities” of millions of Americans, and said that, more broadly, the company is rabidly pursuing a culture war. “Rules don’t matter for them,” Wylie said. “For them, this is a war, and it’s all fair.”

Anyway, the Trump campaign hired Cambridge Analytica to help it refine and micro-target digital ads. Cambridge also had a hand in the Brexit social media campaign. Last December it was reported that Special Counsel Robert Mueller had requested internal documents from the company.

Not only did the Trump team hire Cambridge, it also gave the data firm office space in its San Antonio digital headquarters, and it (like anyone would) made sure those Cambridge employees were favorable to Trump’s message. What’s more, though, those Cambridge employees worked next to employees from Facebook, Google, and YouTube—whom the campaign housed from time to time in temporary offices.

This isn’t quite as sinister as it sounds on the surface, but in light of this recent news it’s pretty troubling. We’ll come back to it later. First, the news itself.

The Heist

Over the weekend the New York Times and U.K. periodical The Observer reported that Cambridge Analytica had paid to harvest the data of 50 million Facebook users, dishonestly using a British academic as an intermediary so it would all appear legit. In response to the reports, Facebook banned the company from its platform, but in a follow-up the next day The Guardian revealed that Facebook knew about Cambridge’s Trojan-horse heist back in 2015. Obviously Facebook didn’t tell anyone.

The social media company apparently did contact Cambridge, informing them in a letter that they knew “this data was obtained and used without permission.” The letter added that this data “cannot be used legitimately in the future and must be deleted immediately.”

A Facebook legal rep said that in response to the notice Cambridge “certified to us that they destroyed the data in question.” It’s unclear, however, how Cambridge proved it destroyed the data, or how anyone could prove such a thing. The firm almost certainly backed the data up five minutes after receiving it. And indeed, former C.A. employees cited in this weekend’s reports say the firm likely still has it.

Of course it does.

The reports go on to describe the theft as a “leak” and a “breach” of Facebook, but those terms, which portray Facebook as the victim, don’t go far enough. We might more accurately describe it as “Facebook getting hosed.” After all, though the ultimate destination for the data was obscured behind a frontman, that frontman acquired the data legally and with Facebook’s full permission. Facebook cites this fact as part of its defense, but it’s really an admission of the pitiful inadequacy of the company’s privacy protections. Who knows how many entities have discovered and successfully exploited this loophole over the years; the policy vulnerability is so obvious, it’s all but certain the company has known about the scheme for a long time.

So yes, Facebook bears part of the responsibility. The company neglected to inform its customers of this and who knows how many other similar heists, nor did it publicly address this ridiculous loophole or say what it planned to do to close it. It’s also grossly negligent that Facebook banned Cambridge from its ad platform only after these reports surfaced.

Beyond this, though, the Trump digital team (headed by Jared “Oh, You Meant Those Forms” Kushner) worked hand-in-hand with Facebook during the election cycle. As mentioned above, the campaign also gave Facebook employees temporary offices in its “Project Alamo” social media headquarters, offices which were, surprise surprise, right next to the offices of the campaign’s Cambridge Analytica employees.

Again, this isn’t in itself necessarily sinister. It’s normal enough for internet giants such as Facebook and Google to work hand-in-hand with many political campaigns and high-profile (and high-paying) companies in order to help tailor ad campaigns. But of course, that targeting relies on data. And what data did the Trump campaign have for Facebook to work with? Cambridge’s data.

It’s inconceivable to me that Facebook, which was aware of the Cambridge theft one year earlier and shared office space with them during the campaign, didn’t recognize this. And it’s grossly negligent that Facebook employees worked in the same office as Cambridge employees on a major political campaign, knowing Cambridge had essentially stolen the personal data of 50 million Americans just one year before.

But that’s not where this shadowy operation ends.

The Russians

Surprise, surprise! The third story limns a direct connection between Cambridge Analytica and Russia. And, yes, it involves mining voter data.

On the same day the Times reported that Cambridge had stolen all that data from Facebook, it also dropped the story that Cambridge Analytica executives allegedly met three times with executives from the Russian oil giant Lukoil, in 2014 and 2015. This was according to Lukoil’s own documents. The Russians reportedly wanted to discuss how social media data was used to target and manipulate American voters. At the time, Cambridge had at its disposal the personal social media data of 50 million Americans.

Of course Mueller is interested.

How Can We Regain Control of Our Identities?

This question has simmered for years, occasionally boiling over only to be wiped clean. Maybe this time will be different, though, because the consequences are at once so very public and so very personal.

We have evidence of data exploitation in the form of the ads in front of our faces every time we visit practically any website. And we know, in an abstract way, that even if we don’t like to admit it, we can be deeply understood, targeted, and gradually (and often successfully) manipulated to act a certain way based solely on the record of our activity online. But hey, we like discounts on contact lenses and concert tickets. That’s perhaps why it’s taken until the 2016 election for many people, including me, to grasp the extent of the power that data manipulators wield, which includes the ability to influence and shape government and policy at the highest levels. And to do it so easily it’s insulting.

Cambridge got data from 50 million Americans through a personality quiz. The “academic researcher” it hired paid Facebook users to download an app and take the quiz. The app collected not only each user’s profile information but also that of their friends, which at the time was fine with Facebook. This meant that the 270,000 or so people who actually downloaded the app, thereby consenting to share their data, yielded information on more than 50 million Americans to Cambridge. According to the Times, that number was confirmed by several sources, including a company email. The Times also reported that roughly 30 million of those profiles contained enough information to be matched against other public records, such as marital status and place of residence, and from that Cambridge could build out psychographic profiles. The company paid its “academic researcher” about $800,000 for the data.
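
To make the scale of that fan-out concrete, here’s a rough back-of-envelope sketch in Python. The 270,000 installs and the roughly 50 million total are the reported figures; the average friend count and the overlap factor are purely illustrative assumptions, not numbers from the reporting.

```python
# Back-of-envelope estimate of how ~270,000 quiz takers could expose ~50 million profiles
# when an app is allowed to read each installer's friends as well as the installer.
# AVG_FRIENDS and OVERLAP are illustrative assumptions, not reported figures.

APP_INSTALLS = 270_000   # users who actually downloaded the app (reported figure)
AVG_FRIENDS = 340        # assumed average friend count per installer
OVERLAP = 0.45           # assumed share of friends already reached via another installer

unique_friends = APP_INSTALLS * AVG_FRIENDS * (1 - OVERLAP)
total_profiles = APP_INSTALLS + unique_friends

print(f"Installers:             {APP_INSTALLS:,}")
print(f"Friends swept up:       {unique_friends:,.0f}")
print(f"Total profiles exposed: {total_profiles:,.0f}")  # on the order of 50 million
```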

This, obviously, raises some questions. For instance, how much is our personal information worth to us? And how much is it worth to companies and data brokers? More specifically: How much is the personal data generated by our social media activity worth to us?

Most importantly, though, what do we even know about it?

Let’s return to that quote from the Cambridge Analytica whistleblower about the company’s 2016 campaign activity: They exploited the “mental vulnerabilities” of millions of Americans.

I want to distinguish between “mental vulnerabilities” and “mental deficiencies.” We don’t like to admit it, but we’re all susceptible, to varying degrees, to psychological manipulation. That’s not a deficiency; it’s a universally human vulnerability. Still, when it happens to you, it does mean you’ve been in some way influenced, even controlled, to do what someone else wants you to do, which is insulting. So if you voted for Trump and interacted with a bunch of material online that nudged you a little toward supporting him, hardened your support, or inspired you to increase your political activity online, I don’t intend this to sound like an attack on your intelligence. But I understand if you take it that way, and I want you to know I experience the same thing all the time and sympathize.

Still, though we can’t quantify how many votes these efforts influenced, we must be honest: They influenced votes. Otherwise what’s the point of a social media campaign? Why, for instance, would Trump push those stolen WikiLeaks stories on Twitter? Just for the hell of it? Why spend all this time and money on architecting a massive micro-targeted digital campaign (based on stolen data) if it wasn’t an effective way to persuade voters?

So no, we can’t quantify the net effect these campaigns had on the election. But that also means we can’t say they had no effect on the vote. And in fact, no one in the intelligence community has said that; they’ve only said vote tallies weren’t physically altered.

We can’t see our personal data, at least not the majority of it. It’s invisible, and it’s tough to gauge how much we’re really worth. But there’s a reason platforms like Facebook and Google and Instagram don’t cost you money: Your data is worth more to them than their platform is worth to you. In other words, they make far more from your data than they’d ever make from your wallet. Cows don’t pay a cover to get into Club Abattoir.
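
As a toy illustration of that point, here’s a sketch with entirely made-up numbers: the user count, per-user ad revenue, subscription price, and willingness to pay below are assumptions for the sake of arithmetic, not Facebook’s disclosed figures.

```python
# Toy comparison of an ad/data-funded model vs. a hypothetical subscription model.
# Every number here is an illustrative assumption, not an actual company figure.

users = 2_000_000_000               # assumed total user base
ad_revenue_per_user_year = 20.0     # assumed average ad revenue per user per year ($)
subscription_price_year = 60.0      # assumed subscription price ($5/month)
willing_to_pay = 0.10               # assumed fraction of users who would actually pay

ad_model = users * ad_revenue_per_user_year
subscription_model = users * willing_to_pay * subscription_price_year

print(f"Ad/data-funded model: ${ad_model / 1e9:.0f}B per year")
print(f"Subscription model:   ${subscription_model / 1e9:.0f}B per year")
```

Under those assumptions the data-funded model brings in several times what a paywall plausibly could, which is the whole reason the paywall doesn’t exist.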

Also, we don’t, and can’t, handle data the same way we handle money. We don’t seem to have the same level of control over our social profiles as we do over our bank account. And even though banks do lend our money, they don’t sell it. We can get it back whenever we want. Not so with data. Data is forever.

This is why I feel we need to figure out a way to quantify data as a type of currency that everyone can understand as intuitively as we understand money.

For instance, we understand how prices work in every other type of market, and almost all of us understand the basics of supply and demand. This means we can more or less always see why one good or service has a higher or lower price than its competition, and we get upset when we discover hidden fees. Even more importantly, we use prices to weigh one type of good or service against not only competitors but also completely different products. A gallon of milk isn’t as expensive as, say, a government employee’s taxpayer-funded first-class trips on a private plane. Importantly, we also know why the values of those things are different.

Almost none of the above applies to how we understand and exchange data.

But how do we price something like data? To some people our data is worth a BOGO sale on shoes. To others it’s worth a President of the United States. To extend an earlier metaphor, cows don’t set the market price for beef.

But maybe they could go on a hunger strike? It’s in this vein of thinking that some people have proposed that regulators require social media companies to switch to a pay model. That likely wouldn’t do much to stop the data bleed: the companies would have access to the data of slightly fewer people, but the people who stayed would be the ones with disposable income. It’s also too little, too late: Many people depend on sites like Facebook for communication or for organizing things such as community, company, or political events. Most obviously, such a plan would also shut out many poor people.

Organizing an actually effective social media strike is ridiculous for similar reasons.

Maybe, though, it wouldn’t be so ridiculous if we knew exactly what we were worth. Each packet itself isn’t worth much, but we’re each a bottomless well. Imagine, for instance, if we got notifications from Facebook every time it passed our data to a third party. Our phones would overheat and brand our hands.

Not only that, but the value of our privacy online includes all the information about us that our friends, acquaintances, enemies, and even the strangers we connect with on, say, LinkedIn, share unwittingly (though not unwittingly according to those “terms and conditions”).

We’re worth a hell of a lot to a hell of a lot of people, but we literally don’t know our own value. It’s as unrealistic to expect people to withdraw en masse from social media as it would be to expect us to withdraw from the financial system. Participation in the virtual world has become an inalterable reality, at least for the foreseeable future. But that data-driven world demands a publicly exchanged data-driven currency. And I don’t mean bitcoin.
