The Problem of Fake News and Why Facebook and Google Can’t Fix It

Throughout our digital lives, we’ve all seen (and read) some pretty bogus stories on the Internet. They’ve reeled us in with their clickbait titles and baffling photos we just need to know more about, or they’ve appealed to our emotions with headlines that make us share posts before we’ve actually read them.

While some stories might sound like a post from The Onion (and surprisingly turn out to be true), the reality is that there is a ton of fake news on the Internet. And both before and after the 2016 election, you’ve probably seen and read a lot of false stories on your Facebook page.

Facebook CEO Mark Zuckerberg has been questioned about the role his company played in the 2016 election, because reports claim Facebook failed to remove false stories (primarily posted by hyper-partisan websites) that may have helped sway voters for or against a particular candidate.

Did fake news on Facebook help Trump win the election? Are right-wing people the only ones who believe fake news? How do we know which news sources to trust and which to distrust?

An even bigger question we should be asking ourselves: do we really want tech giants to be the ones telling us what is “fake” and what is “real” news?

Does this matter? Does fake news on social media actually impact us?

No one actually believes that fake news is the sole reason Donald Trump won the election. But by appealing to the emotions of people with strong views, these hoaxes could have simply added fuel to the fire.

The more a person read fake stories about violence, or pieces with headlines such as “Pope Francis Shocks World, Endorses Donald Trump” (which I definitely saw people share on Facebook with comments like “God bless”), the more that person might lean toward a candidate.

And when you realize that almost half the U.S. population gets its news from a platform where fake news can easily circulate, yeah, this problem is concerning.

In an interview with The Washington Post, Paul Horner, a fake news writer, spoke out saying he feels he’s at fault for helping Donald Trump win because of the ridiculous fake stories he pushed out on Facebook.

“His campaign manager posted my story about a protester getting paid $3,500 as fact. Like, I made that up. I posted a fake ad on Craigslist,” explains Horner, who goes on to note that no one needs to be paid to protest at a Trump rally, so he wrote the piece to make fun of such an insane notion.

Unfortunately, many people believed (and still believe) that story because some of us are swayed by what we see on social media. Part of the problem may be that social media is meant to be read quickly and responded to right then and there. Facebook in particular is the primary platform under fire for the way it’s built to support the circulation of fake news.

Take, for example, Facebook’s reactions feature, a tool the platform recently implemented that lets you read someone’s post and summarize your response with a little emoji face telling the poster whether you’re sad, angry, happy, or laughing. So when someone encounters an article on Facebook, regardless of whether it’s fake or real, they’re really only encouraged to read the headline, maybe browse what other people are saying about it, and then decide on the spot what they think, even though they may never have actually read the piece.

44% of Americans get their news on Facebook. Let that sink in for a moment. A number that high makes Facebook the most common platform Americans use to get their news. And while some people can tell the difference between legitimate and fake news sources, others can’t.

Research published by the Pew Research Center reports that 20% of social media users have changed their views on a social or political issue because of something they saw or read on social media. While the finding isn’t specific to fake news, it’s worth considering that people are swayed by what they see on social media, even if only a small share of them.

Pew also found that 17% of users said social media changed their views about a specific political candidate. Though most users have never changed their minds this way (79% on social or political issues, 82% on candidates), the findings still indicate that the content we consume online can go so far as to convince us to change our opinions.

Zuckerberg has defended Facebook by arguing that fake news and hoax stories are not as plentiful on his platform as we think. In a statement on his personal page, he wrote: “Of all the content on Facebook, more than 99 percent of what people see is authentic. Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics.”

Even though the overall quantity of fake news is low, a fake story has a good chance of being shared and read thousands of times before anyone debunks it. Why? Because it appeals to your emotions more quickly than a legitimate article might. A fake news piece shows you the words and faces of the things and people you already dislike, so your gut reaction will likely be to believe it and share it with a frustrated comment.

Google is part of the problem too

Facebook isn’t the only platform being blamed for fake news or accused of helping Trump win the presidency. Google also came under fire post-election because fake news articles appear in Google searches, some even listed in the top stories section. Most notably, a fake news story claiming that Trump had won the popular vote (he did not) sat at the top of the results for the search “final election results.”

In response to this mishap, Google changed its policy and announced that it will bar sites that misrepresent information from using its ad network. After Google’s announcement, Facebook made a similar one, saying it, too, would not let fake news sites use Facebook ads to promote their stories.

But wait—does that mean both sites are still letting fake news even be considered “news” at all on their platforms?

That’s part of what’s scary about the prospect of Google and Facebook using their algorithms and policies to censor our news. How will they determine what is actually “fake” and what is actually “real”? Don’t forget that Facebook has previously been caught injecting specific stories (with a political bias) into its Trending section, placing them above stories that were actually trending.

Furthermore, let’s consider that both companies make a lot of money off fake news ads, as Horner points out.

“I make most of my money from AdSense—like, you wouldn’t believe how much money I make from it. Right now I make like $10,000 a month from AdSense,” explains Horner. “Facebook and AdSense make a lot of money for them to just get rid of it. They’d lose a lot of money.”

Defining “fake news” and “real news”

So we’ve got a problem with fake news making money off of Facebook and Google ads, but both platforms have committed to putting a stop to it. Problem fixed? Not exactly. Our “fake news problem” is also rooted in our distrust of the mainstream media, and of sources in general. So who are we going to trust, and how are we going to determine what’s real and what’s fake on the Internet?

The real problem is that the line between “fake news” and legitimate news is increasingly blurry. Publications such as The Huffington Post and Vox are known for using misleading headlines to sell news stories. While looking at what we define as “fake news” on the Internet, shouldn’t we also discuss those headlines “That You Won’t Believe Don’t Actually Tell You Anything Important”?

BuzzFeed analyzed engagement with fake news on Facebook and compared it to the engagement real news received. It found that the top fake news pieces received over 8.7 million engagements (shares, reactions, and comments), compared with 7.3 million engagements for the top real news stories. However, it’s important to note that this doesn’t measure actual traffic to websites; rather, it captures our snap reactions to both fake and real news.

The trouble with this BuzzFeed report, though, is that some of the “real” news sources in its analysis are opinion-heavy publications like The Huffington Post and Vox.

And that’s part of the problem too: many of us have abandoned traditional news platforms for sites like BuzzFeed, Upworthy, and The Huffington Post because we don’t trust a publication with different views from ours; we don’t trust mainstream media anymore because we find it corrupt; and at the end of the day, most of us don’t want to read a 3,000-word piece (thank you, if you’ve read this far) and would rather read a listicle filled with GIFs that will make us (and our friends, when we share it) laugh.

It’s hard to decide who can, and who should, determine what counts as “fake” and “real” news, because we don’t all judge it the same way. We all have different opinions, and while one person might read something and think, “This is obviously fake,” another might read it and think, “Finally, someone telling it like it is!”

It’s the same with the sources we consider reputable: a liberal might watch Fox News and say it’s terrible in the same way a conservative might watch CNN and denounce everything on it.

The Pew Research Center studied this back in 2014 and found that, not surprisingly, trust and distrust in news media vary with political ideology. I highly recommend reading through the study if you’re interested, because the answer is complicated, and the data can’t crown any one outlet the “most trusted of all.”

One thing we should agree on, though, is that we cannot rely on Google and Facebook to separate fake news from real news. Doing so would give both companies the power to block certain stories by simply changing their algorithms.

As we previously learned, Facebook has the ability to inject and remove stories, à la its Trending section debacle, so human judgment does play a role in how the platform’s algorithm surfaces or blocks articles. Plus, remember this summer, when Facebook changed its algorithm to prioritize posts from your friends and family so you’d see fewer news stories and unrelated posts?

In his statement about the accusations against Facebook, Zuckerberg points out that the issue doesn’t lie only within Facebook’s algorithm. It also lies in how each of us defines the “truth.”

“Identifying the ‘truth’ is complicated. While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted,” wrote Zuckerberg. “Even greater volumes of stories express an opinion that many will disagree with and flag as incorrect even when factual. I am confident we can find ways for our community to tell us what content is most meaningful, but I believe we must be extremely cautious about becoming arbiters of truth ourselves.”

It might seem like Zuckerberg is trying to get out of the spotlight here, but his point remains. Do we really want Facebook to have that much power over the news? Shouldn’t that be our job as the educated public?

There have been other attempts to solve the problem, such as a list of websites compiled by Professor Zimdars of Merrimack College. Her document catalogs 139 websites, noting which she considers legitimate purveyors of news and which she doesn’t. While this is a great start, and her document includes helpful tips on how to determine whether a source is credible, one might argue that a site labeled false on the document is only there because of political bias. Similarly, a person might suggest The New York Times be added to the list, a debate already playing out on Twitter and Facebook. PolitiFact, a Pulitzer Prize-winning fact-checking site, has started an entire section dedicated to fake news.
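
To make the idea of vetting concrete, here is a minimal sketch in Python of how a reader (or a platform) might check a story’s domain against a curated labeling document in the spirit of Zimdars’ list. The entries in SOURCE_LABELS and the label_source helper are hypothetical illustrations for this article, not her actual data.

```python
from urllib.parse import urlparse

# A tiny, hypothetical excerpt of a curated labeling document.
# These entries are illustrative only, not Zimdars' actual list.
SOURCE_LABELS = {
    "abcnews.com.co": "fake",   # imposter domain mimicking a real outlet
    "theonion.com": "satire",   # satirical site, not meant as factual news
}

def label_source(url: str) -> str:
    """Return the curated label for a story's domain, or 'unlisted'."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]  # normalize away a leading "www."
    return SOURCE_LABELS.get(domain, "unlisted")

print(label_source("http://abcnews.com.co/some-story"))       # -> fake
print(label_source("https://www.nytimes.com/2016/politics"))  # -> unlisted
```

Even a toy check like this exposes the hard part: a human still has to decide which domains belong on the list and what label each one gets, which is exactly the judgment call we may not want to hand over to Facebook or Google.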

There’s no easy answer here. The public needs to be educated, news sites need to rethink how they write headlines, websites need to be vetted, and new platforms need to be developed for getting news. It’s an Internet problem that isn’t going to be fixed overnight. But here’s what we know for sure: we can’t resolve the issue simply by allowing platforms like Facebook, Twitter, and Google to censor content and use algorithms to wipe out whatever they determine is fake.
