Social Science: Using AI to Fight Fake News

The spread of misinformation was largely blamed for the surprising (upsetting) results of the 2016 presidential election. Facebook in particular caught heat for facilitating this spread by making it easy for baseless conspiracy theories to be shared the same way as reputable news stories.

After the election, Facebook added a new feature that flags suspicious stories as “disputed by fact checkers,” but that’s just a start. Misinformation still spreads rapidly across the internet. This is an unfortunate side effect of the non-stop, instant-gratification news cycle. The reasons are layered: Stories are posted online minutes after a reporter gets their hands on them; most outlets can’t afford fact-checkers; and propagandists can slide right in and join the fray unnoticed.

West Virginia University wants to change that. Students from the Reed College of Media and the Benjamin M. Statler College of Engineering and Mineral Resources are teaming up in an artificial intelligence course at the university’s Media Innovation Center that includes two projects focused on using AI to “detect and combat” fake news.

In the course, led by Don McLaughlin, research associate and retired faculty member of the Lane Department of Computer Science and Electrical Engineering, students work in teams to develop and implement their own AI programs to fight the spread of misinformation.

“Artificial intelligence can have all the same information as people, but it can address the volume of news and decipher validity without getting tired,” said Stephen Woerner, a senior in the course whose team is using a machine learning system to analyze text and score the likelihood that the claims made in the text are false. “People tend to get political or emotional, but AI doesn’t. It just addresses the problem it’s trained to combat.”
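The article doesn’t detail how the team’s system works, but a common approach to this kind of claim-scoring is supervised text classification. What follows is a minimal sketch of that general technique, not the WVU team’s actual code; it assumes Python with scikit-learn, and the tiny labeled dataset is invented for illustration (a real system would train on a large corpus of fact-checked claims):

# A minimal sketch of the general approach described above: a supervised
# text classifier that scores how likely a story's claims are to be false.
# This is an illustrative assumption, NOT the WVU team's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = false claim, 0 = accurate claim.
train_texts = [
    "Pope endorses candidate in shocking secret letter",
    "City council approves new budget after public hearing",
    "Miracle cure doctors don't want you to know about",
    "Unemployment rate fell 0.2 points last quarter, agency reports",
]
train_labels = [1, 0, 1, 0]

# Bag-of-words features weighted by TF-IDF, fed into logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

# Score a new story: the model's estimated probability that it is false.
story = "Secret memo reveals election was decided in advance"
falsity_score = model.predict_proba([story])[0][1]
print(f"Estimated probability the claim is false: {falsity_score:.2f}")

The appeal of this setup is exactly what Woerner describes: once trained, the model applies the same criteria to every story, at any volume, without fatigue or political mood.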

In other words, AI won’t choose to believe something it hopes is true, regardless of cues that it might be false, because AI doesn’t have a political agenda (not yet, anyway, but that’s a discussion for another time).

This is obviously an important pursuit—democracy doesn’t work with an uninformed (or worse, misinformed) populace, and there can be no accountability without transparency. But is simply detecting what is fake going to be enough to turn the tide away from ignorance?

Part of the brilliance of the Trump administration’s propaganda strategy is that they not only feed their supporters fake news, they also teach them not to trust anyone else—they insist that mainstream, reputable news sources with fact-checkers and ethical standards are not to be trusted. If you’re taught not to believe anything you’re told by outsiders, why would you believe an outside program, whether run by humans or AI, telling you that your trusted sources are lying to you? Even if fake news is flagged, the truly brainwashed could easily shrug it off with “that’s what they want you to think,” mistaking misinformation for truth and truth for misinformation.

Identifying fake news for those who care to know whether or not what they’re reading is true is a step. It’s an important step. But if we really want to combat ignorance in this country, we’ll have to do more than provide facts; we’ll have to find a way to make people care about facts again. This means not only investing in journalism and journalistic tools like this fake-news-detecting AI, but investing in education, bringing back a culture that values intelligence over ignorance, and shutting down the pejorative uses of words like “smart,” “brainy,” and “intellectual.”

The current Secretary of Education is obviously not going to lead the charge toward an informed populace; we’ll have to find a way to get there ourselves.

Top photo courtesy of Pixabay

Lilly Dancyger is Deputy Editor of Narratively, and a freelance journalist based in New York City.
