The Facebook Papers Won’t Disappear Because of a Name Change
Facebook’s announcement of a new name at the end of October has overshadowed some of its recent turmoil. The company has been in hot water due to legal challenges and the leak of internal documents revealing ethical concerns about how it is run. It was fun to watch the jokes roll in about the umbrella company’s new name, Meta, and to see thoughtful satirists and cultural commentators weigh in on emerging technological concepts like “the metaverse,” a term coined by author Neal Stephenson in the seminal cyberpunk novel Snow Crash. But the rebrand may also have diverted attention from the bigger story: Facebook is being sued on multiple fronts for antitrust violations, and the company may be inadvertently fostering a contentious internal culture, because it does a decent job of measuring the negative effects it has on individuals and societies while doing a woeful job of mitigating them.
On Oct. 25, three weeks after all of Facebook’s apps suffered an outage of several hours, Nik Popli at Time highlighted what they considered the five biggest bombshells in the wake of whistleblower Frances Haugen’s testimony in front of the British Parliament. As Haugen mentioned in her testimony, stewardship of the userbase in the global south (“the third world”) has not been a priority in Facebook’s decisions. In fact, no market outside of the U.S. is. The United States receives 87% of the “global budget for time spent on classifying misinformation,” even though North America makes up just 10% of daily users.
The company’s largest market is India, with 340 million users and 22 official languages that neither Facebook’s human reviewers nor its content-flagging bots are adequately trained to moderate. Internal studies show the company “moves into countries without fully understanding its potential impact on local communities, particularly with regard to culture and politics, and then fails to provide adequate resources to mitigate those effects.” Put simply, Facebook’s content moderation is not handled in an equal, much less equitable, fashion. Two researchers who set up a dummy test account saw “more images of dead people in the past three weeks than I’ve seen in my entire life total.”
Haugen also said in her testimony that, while Facebook officially supports 50 languages, most get “a tiny fraction of the safety systems that English gets.” She even posited that the dialectal differences between U.S. and U.K. English may be sufficient to make the American-built safety tools inadequate for use across the Atlantic.
Those American-built safety tools catch what Haugen believes to be around “3 to 5% of all hate speech” and 0.8% of violence-inciting content, while, she says, the company is very good at “dancing with data,” presenting figures that make it appear to catch about 97%. The formula used to arrive at that 97% of hate speech removed is “all AI-detected hate speech divided by all AI-detected hate speech plus what is reported by users,” as opposed to dividing by the actual total of what is being said on its services.
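To see why that denominator flatters the result, consider a back-of-the-envelope example (the figures below are purely hypothetical, chosen for illustration and not taken from the Facebook Papers): suppose 25,000 pieces of hate speech are posted, the AI flags 970 of them, and users report another 30 that the AI missed.

\[
\text{Facebook's metric} = \frac{970}{970 + 30} = 97\%,
\qquad
\text{actual detection rate} = \frac{970}{25{,}000} \approx 3.9\%
\]

Under these made-up numbers, the company’s formula reports 97% even though the share of all hate speech actually caught sits squarely in Haugen’s estimated 3 to 5% range, because anything that neither the AI nor a user flags never enters the denominator at all.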
Bill Chappell at NPR also addressed Facebook employees’ internal assessment of the company’s responsibility for political violence. While Facebook spokesman Andy Stone told NPR that Facebook didn’t cause the Jan. 6 Capitol siege (why would they?), at least one employee stated that the company had “been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control,” while another said, “I came here hoping to effect change and improve society, but all I’ve seen is atrophy and abdication of responsibility.”
Facebook has a label for “harmful, non-violating” content: basically, things that could have negative effects on the mental, social, and physical health of individuals or society, but that don’t technically violate its terms and conditions. This includes, but is not limited to, “false narratives about election fraud,” “conspiracy theories,” and “vaccine hesitancy.” The Time piece doesn’t elaborate on what constitutes a conspiracy theory or vaccine hesitancy, but those are two very broad categories. As for “false narratives about election fraud,” the Stop the Steal group had already garnered 360,000 members by the time it was banned on Nov. 5 for calling for violence after claiming the 2020 U.S. presidential election was rigged. Employees voiced concerns on internal message boards about the company’s role in spurring the Jan. 6 Capitol insurrection/coup attempt.
These disparate but related experiences—hate speech across the world, circulating misinformation and disinformation, channeling right-wing radicals together—reflect what Haugen said in her testimony. The cost of promoted material reflects the likelihood that it will be seen and engaged with, so ads that spark outrage are less expensive to run. As she put it, the company is “literally subsidizing hate on these platforms.”
This includes self-hatred. Haugen testified that Instagram is not just harmful for teenagers, but more harmful than other forms of social media. As she put it, “TikTok is about doing fun activities with your friends; it’s about performance. Snapchat is about faces and augmented reality. Reddit is at least vaguely about ideas. But Instagram is about social comparison and about bodies. It’s about people’s lifestyles and that’s what ends up being the worst for kids.” She argued that this is compounded by constant digital connection: where young people who had a tough day at school once got the reprieve of home life, today there is no break from other people, including bullies.
Haugen went on to testify before the U.S. Congress, saying in her opening statement that she believes “Facebook’s products harm children, stoke division, and weaken our democracy. The company’s leadership knows how to make Facebook safer but won’t make the necessary changes because they have put their astronomical profits before people.”
One more viscerally disturbing detail from the Time piece is that Facebook admitted in internal documents that it was “under-enforcing on confirmed abusive activity” when “Filipina maids complained of being abused and sold on the platform.” Facebook finally “removed accounts linked to the sale of maids” only after Apple threatened to pull Facebook and Instagram from the App Store. Human rights activists have said that “images of maids with their age and price can still be found on the platform.” In other words, domestic workers are being treated like slaves, with Facebook inadvertently serving as an auction block, and the company has offered no real explanation for dragging its feet on fixing the problem.
Jon Gambrell and Jim Gomez at the Associated Press report that, even though the Philippine government now has a team that spends all of its time scouring Facebook for such ads, “khadima” (“maids” in Arabic) can still be found for sale across the Middle East: roughly three-quarters of the picture and video advertisements are posted to Instagram, with links to maid-selling sites hosted primarily on Facebook proper, and about 60% of the offending posts originate in Saudi Arabia, with “about a quarter” coming from Egypt.
Facebook’s internal reporting noted that, while domestic labor in Northwest Asia remains a vital source of income for some women coming from other parts of Asia and Africa, many of these domestic laborers have been locked inside homes, starved, gone unpaid, faced indefinite contract extensions, and been sold to other employers without their consent. The recruitment agencies these women contract with, when told of the abuse, commonly responded that the women should “be more agreeable.” Facebook made note of this, which implies it sees the abuse as a problem, but there is no clear declaration of what it will do to fix any of it. In 2018, the Philippines put a temporary ban on domestic workers going to Kuwait after a missing Filipina woman was found dead in a refrigerator.
All of this information came from the Facebook Papers, documents Haugen initially leaked to The Wall Street Journal. On Nov. 5, Andrew Marantz at The New Yorker published an article about the Facebook Papers and how journalists from competing outlets, organized into a Slack group, were leaked the same documents that The Wall Street Journal had been publishing as “The Facebook Files.” Apparently We’re a Consortium Now (the Slack group’s real name) published over 100 stories across its more than a dozen outlets, with titles like “How Facebook Users Wield Multiple Accounts to Spread Toxic Politics” and “How Facebook Neglected the Rest of the World,” leading right up to the name change.
One thing Marantz points out, which also comes across in the explainers from Time and NPR, is that there is at least a tolerance of “candor and constructive disagreement” internally, yet little appears to be done to create solutions for the problems elucidated in these disagreements and internal memos. The company is as likely as not to simply wash its hands of, or back away from, responsibility for these issues. Last August, an executive named Andrew Bosworth made a post called “Demand Side Problems” on Workplace, Meta’s internal social network, arguing that, while Facebook should try to moderate hate speech, “we should temper our expectations for results” because the problem was not one of supply but of demand. This leads me to believe the thinking is “people are going to find their hate speech somewhere, so why not Facebook?” While the majority of responses were positive, the few dissenters went unaddressed.
In at least one instance, a fact-checking label from Science Feedback was removed from a Daily Wire article it had rated “partially false” after the author complained about being censored and a Republican congressperson intervened on his behalf. The article argued that climate change is “not the end of the world. It’s not even our most serious environmental problem.” In defense of removing the label, an Environmental Program Manager argued that Facebook should be an open platform for all beliefs where people can “make up their own minds for themselves,” and was met with responses arguing, among other things, that climate denialism should fall under Facebook’s policies prohibiting “content that poses an immediate threat to human health or safety.”
This is in line with something NPR mentioned in their analysis of the documents: “Content standards were contorted, often out of fear of riling high-profile accounts.” A literal separate set of rules exists for people listed in the VIP system called “XCheck,” created to let politicians and celebrities say what they want without facing the consequences regular people would, and to spare Facebook the reprisals of their fans and followers. Facebook’s own Oversight Board disapproves of this built-in double standard, which covers a list of almost 6 million people.
Since August, Facebook has been the focus of an antitrust lawsuit by the FTC. CNBC reports that on Oct. 4, the same day the outage showed that having so many technologies under one umbrella and on one server network might not be a great idea, Facebook filed a second motion to have the FTC’s lawsuit thrown out. The FTC’s complaint is itself an amended version of an earlier complaint alleging anticompetitive practices by Facebook, which a judge threw out earlier this year.
The day before the New Yorker piece ran, on Nov. 4, The New York Times reported that Facebook, now under the umbrella of Meta, was sued for antitrust violations in the U.S. District Court for the Eastern District of New York. Seven years ago, in August of 2014, the social network “made overtures to integrate [upstart photography app] Phhhoto” after downloading and using the app, before suppressing its content within Facebook’s own photo-sharing app, Instagram. In March of 2015, Instagram changed its settings so that Phhhoto users couldn’t find their friends on Instagram, and when Phhhoto’s team reached out, Facebook strategic partnership manager Bryan Hurren told them “that Instagram was apparently upset that Phhhoto was growing in users through its relationship with Instagram.” Gary L. Reback, the lawyer who persuaded the Department of Justice to sue Microsoft for violating antitrust laws in the 1990s, is representing Phhhoto in the suit.
Among the commentators on social media in the wake of the name change was writer Zito Madu, who linked three articles around a similar message: technological advancement is not inevitable, and the way it advances is a product not of natural selection but of deliberate selection. Choices are made not just by us as individual consumers and citizens, but also by the business leaders in charge of social media corporations and by the political leaders who are supposed to protect the common interest of the people rather than serve the private interests of big business. To embrace speculative currencies and NFTs that harm the environment while producing nothing of value is a choice. To allow children to develop an addiction to a social media platform where they are relentlessly bullied is a choice. To allow private companies to collect everyone’s private information to sell to other private companies is a choice.
We don’t have to live in a cyberpunk future. We don’t have to live in a world that countries subsidize corporations to destroy. We don’t have to live in a world where businesses that wreak havoc on the planet’s environment, the world’s economies, and the inner lives of the young and old, profiting off of collective and individual pain, go unpunished for their amorality and lack of ethics. None of this is inevitable. Of the articles Madu shared, L. M. Sacasas’s “Resistance is Futile: The Myth of Tech Inevitability” may be my favorite, because it ends with a quote from Margaret Heffernan: “Anyone claiming to know the future is just trying to own it.”
Kevin Fox, Jr. is a freelance writer and Paste intern. He loves videogames, film, history, pop culture, sports, and human rights, and can be found on Twitter @kevinfoxjr.