Facebook Launches a New “Disputed” Tag to Combat Fake News

You can’t buy a pack of cigarettes without a giant warning label informing you that this product will slowly destroy (at least) your lungs, and now in that same vein, Facebook is rolling out a “disputed” label for stories that have a similar effect on your brain. Per Facebook:

News stories that are reported as fake by people on Facebook may be reviewed by independent third-party fact-checkers. These fact-checkers will be signatories of the non-partisan Poynter Code of Principles. A story may be marked as disputed if these fact-checkers find the story to be fake.

Any user can mark a story as fake by clicking the downward-pointing arrow next to it, clicking “Report post,” selecting “It’s a fake news story,” and finally choosing “Mark this post as fake news.” This does not automatically tag the post as fake news. If it did, an entire army of Putinbots could scrub CNN from Facebook in the span of a few hours. Instead, the report gets submitted to a signatory of the Poynter Code of Principles, which makes the final ruling as to whether the story is indeed “fake.”
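For the technically inclined, the routing logic might look something like this minimal Python sketch. To be clear, the threshold, the queue, and every name here are assumptions made for illustration; Facebook has not published its internals:

```python
from collections import defaultdict

REPORT_THRESHOLD = 5  # hypothetical: distinct reports needed before review

reports = defaultdict(set)  # post_id -> set of user_ids who flagged it
review_queue = []           # posts awaiting Poynter-signatory fact-checkers

def report_as_fake(post_id: str, user_id: str) -> None:
    """A user report never tags the post directly; it only accumulates
    until the post is routed to independent fact-checkers for a ruling."""
    reports[post_id].add(user_id)  # dedupe: a Putinbot army counts once each
    if len(reports[post_id]) >= REPORT_THRESHOLD and post_id not in review_queue:
        review_queue.append(post_id)
```

The key design point is that user reports only gate entry to the queue; the verdict itself never comes from the crowd.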

The Poynter Code of Principles rests on five central planks:

1. A Commitment to Nonpartisanship and Fairness
2. A Commitment to Transparency of Sources
3. A Commitment to Transparency of Funding & Organization
4. A Commitment to Transparency of Methodology
5. A Commitment to Open and Honest Corrections

If your eyes glazed over reading those seemingly repetitive points, don’t worry: so did this writer’s. However, the blurb underneath each of the five points carries one consistent theme, best exemplified by the text beneath #2, A Commitment to Transparency of Sources:

We want our readers to be able to verify our findings themselves. We provide all sources in enough detail that readers can replicate our work, except in cases where a source’s personal security could be compromised. In such cases, we provide as much detail as possible.

Facebook is employing the scientific method here: nothing is true unless an independent party can replicate the results, and that is what Facebook and the Poynter Code of Principles aim to accomplish with this new feature. However, not just anyone can sign up and start vetting whatever they deem to be nonsense. The International Fact-Checking Network at Poynter debuted on September 16 of last year, and it describes its admissions process in three steps.

Application

First, anyone interested in becoming a fact-checker must pay $200 and complete this online form, which requires verifiable evidence of compliance with the five tenets highlighted above. This means random bloggers without a seriously proven track record are basically off-limits; your friend’s Medium page likely does not have public links demonstrating things like “Transparency of funding & organization,” which requires you to “link to the section where you publicly list your sources of funding (including, if they exist, any rules around which types of funding you do or don’t accept), or a statement on ownership if you are the branch of an established media organization or research institution.”

Assessment

Second, the application is vetted by “external assessors” who are “selected for their expertise in journalism and disciplines related to fact-checking in the region where the aspiring signatory operates. They are paid $350 per assessment by the IFCN.” The assessors gauge each application against this checklist. The IFCN interim Board is then responsible for confirming or denying the application based on the assessor’s findings. The Board is chaired by Peter Cunliffe-Jones of Africa Check, and its members include Angie Holan of PolitiFact, Baybars Örsek of Dogruluk Payi, FactChecker.in founder Govindraj Ethiraj, The Washington Post’s fact-checker Glenn Kessler, Chequeado executive director Laura Zommer, and Phoebe Arnold of Full Fact.

Verification

The final step is a simple Board vote based on the assessor’s findings. If an applicant gets four votes in favor, they are approved and receive this badge to display on their site as proof of legitimacy. If the assessor rules against them, the applicant has the opportunity to correct the issues in their application or appeal the case to the Board. Once verified, signatories must regularly demonstrate the legitimacy behind their membership or lose it altogether, per the IFCN:

Within a year from being approved and every year thereafter, approved signatories need to publish a report detailing how they respected the code. This is reviewed by the external assessor for re-verification. Violations of the code in between assessments will be evaluated by the Board and could result in the verified signatory’s removal from the list.
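To make that membership lifecycle concrete, here is a minimal Python sketch. The four-vote threshold and the yearly re-verification come straight from the process described above; the class and field names are invented for the example:

```python
from dataclasses import dataclass

VOTES_TO_APPROVE = 4  # Board votes required, per the IFCN process above

@dataclass
class Signatory:
    name: str
    assessor_approved: bool            # external assessor's checklist finding
    board_votes_in_favor: int = 0
    passed_annual_review: bool = True  # yearly compliance report, re-verified

    def holds_badge(self) -> bool:
        # The badge is kept only while the original approval conditions
        # and each yearly re-verification continue to hold.
        return (self.assessor_approved
                and self.board_votes_in_favor >= VOTES_TO_APPROVE
                and self.passed_annual_review)
```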

The process by which stories are vetted is actually quite thorough. An experienced Board that holds itself to an open and clear set of standards certifies assessors through an application process anyone can inspect, and that process rests on the same standards legitimizing the Board in the first place. If two of the assessors rule a story to be false, the “disputed” tag goes on the story. Additionally, anything with a disputed tag cannot be promoted on Facebook.
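In code, the decision rule the article describes reduces to a few lines. The names below are illustrative, not Facebook’s actual API:

```python
ASSESSORS_REQUIRED = 2  # two fact-checkers must independently rule "false"

def apply_dispute_rule(post: dict, rulings: list[bool]) -> dict:
    """rulings holds each fact-checker's verdict: True means 'this story is false'."""
    if sum(rulings) >= ASSESSORS_REQUIRED:
        post["disputed"] = True
        post["promotable"] = False  # disputed stories cannot be promoted
    return post
```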

However, how a fake story comes to Facebook’s attention in the first place is still an issue, because there are only two ways for it to be tagged: either Facebook’s software picks it up, or a user flags it. Neither can happen until someone shares the story, meaning a fake news story will still appear legitimate until it is caught. As Adam Mosseri, VP of Facebook’s News Feed, wrote on the company’s blog: “We’ve relied heavily on our community for help on this issue, and this can help us detect more fake news.”

Mosseri also wrote something that seems to go against what many people believe about this problem, with President Donald Trump serving as the ultimate proof:

We’ve found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way. We’re going to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it.
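A back-of-the-envelope version of that signal is easy to sketch. The z-score cutoff and the per-article counters below are assumptions made for illustration, not Facebook’s actual ranking code:

```python
import statistics

def share_rate(article: dict) -> float:
    """Fraction of readers who went on to share the article."""
    return article["shares"] / article["reads"] if article["reads"] else 0.0

def flag_outliers(articles: list[dict], z_cutoff: float = -2.0) -> list[dict]:
    """Flag articles whose read-to-share rate is an unusually low outlier,
    the pattern Mosseri describes as a possible sign of misleading content."""
    rates = [share_rate(a) for a in articles]
    mean, stdev = statistics.mean(rates), statistics.pstdev(rates)
    if stdev == 0:
        return []
    return [a for a, r in zip(articles, rates)
            if (r - mean) / stdev <= z_cutoff]
```

Note that the logic only targets outliers: a low share rate by itself means nothing, but being dramatically below the norm for comparable articles is the tell.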

Peter Kafka of Recode reached out to Mosseri about this specific issue, writing:

Here’s a slightly longer summary of my conversation with Mosseri:

When Facebook decides to show me something in my News Feed, it’s doing so because it thinks I’ll want to engage with it.

Facebook has a good sense of how any given piece of content will perform when it shows it to me.

If I click on/read a story and then don’t share it with my friends—and lots of other people also read it and then don’t share it—that’s a sign it may be “problematic content.”

One reason I may not share it, Mosseri says, is that the content of the story isn’t believable, or doesn’t sync with the headline and other descriptions of the story I saw before I clicked on it.

This feature is already unnerving people over at Donald Trump’s second-favorite fake news source, which doubles as one of my favorite sources of unintentional comedy: Infowars. Here is an Editor-at-Large complaining about his story, which takes the tale of Juan Thompson, a man fired by The Intercept for fabricating news stories, and uses Thompson’s support for Bernie Sanders, plus his arrest last week for alleged threats against Jewish Community Centers across the country, to paint the entire anti-Trump crowd and all of Islam with the same brush.

Sadly, this is about as factual a story as you’ll see out of a website whose founder once said:

“Yeah, so, Sandy Hook is a synthetic completely fake with actors, in my view, manufactured. I couldn’t believe it at first. I knew they had actors there, clearly, but I thought they killed some real kids. And it just shows how bold they are, that they clearly used actors. I mean they even ended up using photos of kids killed in mass shootings here in a fake mass shooting in Turkey — so yeah, or Pakistan. The sky is now the limit. I appreciate your call.”

This has no doubt ruffled feathers over at the Kremlin’s propaganda network: RT’s story about the “crackdown,” as it calls it, devotes 163 of its 402 words to raising “concerns” about the feature’s effects on freedom of speech. RT interviews Nolan Higdon, faculty advisor at Project Censored, a somewhat discredited far-left group that has done things like downplay Serbian atrocities in Bosnia and Kosovo. Project Censored also bizarrely claimed that the 1997 Cassini mission to Saturn should be cancelled over the danger of plutonium being dispersed through the atmosphere upon reentry, even though Voyager 1 and 2 carried the same radioisotope thermoelectric generators in the 1970s that Project Censored’s founder, State University of New York journalism professor Karl Grossman, so desperately feared.

It’s far too early to measure the effectiveness of this new feature, and there are some inherent issues with its setup, namely that fake news can only come to Facebook’s attention after it has been shared. But if RT and Infowars are upset, that suggests Facebook is on the right track, and we all may finally have some real ammo to use against the fake news propagated by the drunk uncles of the world.

Jacob Weindling is Paste’s business and media editor, as well as a staff writer for politics. Follow him on Twitter at @Jakeweindling.
