The Facebook Papers Won’t Disappear Because of a Name Change

Facebook’s announcement of a new name at the end of October has overshadowed some of its recent turmoil. The company has been in hot water over legal challenges and the leak of internal documents showing ethical concerns about how it is run. It was good and pleasant for jokes to fly about the umbrella company’s new name, Meta, and for thoughtful satirists and cultural commentators to remark on emerging technological concepts like “the metaverse,” a term coined by author Neal Stephenson in the seminal cyberpunk novel Snow Crash. But it may also have diverted attention from the bigger story: Facebook is being sued on multiple fronts for antitrust violations, and the company may be inadvertently fostering a contentious internal culture, because it does a decent job of measuring the negative effects Facebook has on individuals and societies while doing a woeful job of mitigating them.
On Oct. 25, three weeks after all Facebook apps suffered an outage of several hours, Nik Popli at Time highlighted what they considered the five biggest bombshells from whistleblower Frances Haugen’s testimony before the British Parliament. As Haugen mentioned in her testimony, stewardship of the user base in the global south (“the third world”) has not been a priority in Facebook’s decisions. In fact, nowhere outside of the U.S. is. The United States receives 87% of the “global budget for time spent on classifying misinformation,” even though North America makes up just 10% of daily users.
The company’s largest market is India, with 340 million users and 22 official languages that neither Facebook’s human researchers nor its content-flagging bots are adequately trained to moderate. Internal studies show the company “moves into countries without fully understanding its potential impact on local communities, particularly with regard to culture and politics, and then fails to provide adequate resources to mitigate those effects.” Put simply, Facebook content moderation is not handled in an equal, much less equitable, fashion. Two researchers who set up a dummy test account reported seeing “more images of dead people in the past three weeks than I’ve seen in my entire life total.”
Haugen also said in her testimony that, while Facebook officially supports 50 languages, most get “a tiny fraction of the safety systems that English gets.” She even posited that the dialectal differences between U.S. and U.K. English may be sufficient to make the American-built safety tools inadequate for use across the Atlantic.
Those American-built safety tools catch what Haugen believes to be around “3 to 5% of all hate speech” and 0.8% of violence-inciting content. Meanwhile, she says, the company is very good at “dancing with data,” presenting figures that suggest it catches about 97%. The formulation used to arrive at that 97% figure for hate speech removed is “all AI-detected hate speech divided by all AI-detected hate speech plus what is reported by users,” as opposed to dividing by the actual total of what is being said on its services.
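To see how the choice of denominator does the heavy lifting here, consider a small illustrative calculation. The volumes below are entirely hypothetical, picked only so the percentages line up with the figures Haugen cites; they are not Facebook’s actual numbers, just a sketch of the arithmetic she describes.

```python
# Hypothetical volumes, chosen only to illustrate the arithmetic.
ai_detected = 9_700          # hate-speech posts the AI catches and removes
user_reported = 300          # posts users flag that the AI missed
total_hate_speech = 250_000  # everything actually posted, most of it never flagged

# The metric Haugen says Facebook reports:
# AI catches / (AI catches + user reports)
reported_rate = ai_detected / (ai_detected + user_reported)

# The rate she argues matters:
# AI catches / everything actually posted
actual_rate = ai_detected / total_hate_speech

print(f"Reported takedown rate: {reported_rate:.0%}")        # 97%
print(f"Share of all hate speech caught: {actual_rate:.1%}")  # 3.9%
```

The numerator never changes; only the denominator does. Because the reported metric counts only content that was detected one way or the other, everything the systems never see simply drops out of the calculation, which is how a roughly 4% catch rate can be presented as 97%.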
Bill Chappell at NPR also addressed Facebook employees’ internal assessment of the company’s responsibility for political violence. While Facebook spokesman Andy Stone told NPR that Facebook didn’t cause the Jan. 6 Capitol siege (why would they?), at least one employee stated that the company had “been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control,” while another said, “I came here hoping to effect change and improve society, but all I’ve seen is atrophy and abdication of responsibility.”
Facebook has a label for “harmful, non-violating” content: basically, things that could have negative effects on the mental, social, and physical health of individuals or society but don’t technically violate its terms and conditions. This includes, but is not limited to, “false narratives about election fraud,” “conspiracy theories,” and “vaccine hesitancy.” The Time piece doesn’t elaborate on what constitutes a conspiracy theory or vaccine hesitancy, but those are two very broad categories. As for “false narratives about election fraud,” the Stop the Steal group had already garnered 360,000 members by the time it was banned on Nov. 5 for calling for violence after claiming the 2020 U.S. presidential election was rigged. Employees voiced concerns on internal message boards about the company’s role in spurring the Jan. 6 Capitol insurrection/coup attempt.
These disparate but related experiences—hate speech across the world, circulating misinformation and disinformation, channeling right-wing radicals together—reflect what Haugen said in her testimony. The cost of promoted material reflects the likelihood it will be seen and engaged with, and since content that provokes a reaction gets more engagement, ads that spark outrage end up cheaper to run. As she put it, the company is “literally subsidizing hate on these platforms.”
This includes self-hatred. Haugen testified that Instagram is not just harmful for teenagers, but more harmful than other forms of social media. As she put it, “TikTok is about doing fun activities with your friends; it’s about performance. Snapchat is about faces and augmented reality. Reddit is at least vaguely about ideas. But Instagram is about social comparison and about bodies. It’s about people’s lifestyles and that’s what ends up being the worst for kids.” She argued that this is compounded by the fact that—while, once upon a time, young people who had a tough day at school got the reprieve of home life—today, there is no break because of the constant digital connection to people, including bullies.
Haugen went on to testify in the U.S. Congress, positing in her opening statement that she believes “Facebook’s products harm children, stoke division, and weaken our democracy. The company’s leadership knows how to make Facebook safer but won’t make the necessary changes because they have put their astronomical profits before people.”
One more viscerally disturbing detail from the Time piece: Facebook admitted in internal documents that it was “under-enforcing on confirmed abusive activity” when “Filipina maids complained of being abused and sold on the platform.” Facebook finally “removed accounts linked to the sale of maids” from the site only after Apple threatened to remove Facebook and Instagram from the App Store. Human rights activists have said that “images of maids with their age and price can still be found on the platform.” In short, domestic workers were being treated like slaves, with Facebook inadvertently serving as an auction block, and the company has offered no particular explanation for dragging its feet on fixing the problem.