In spring 2018, Mark Zuckerberg invited more than a dozen professors and academics to a series of dinners at his home to discuss how Facebook could better keep its platforms safe from election disinformation, violent content, child sexual abuse material, and hate speech. Alongside these secret meetings, Facebook was regularly making pronouncements that it was spending hundreds of millions of dollars and hiring thousands of human content moderators to make its platforms safer. After Facebook was widely blamed for the rise of "fake news" that supposedly helped Trump win the 2016 election, Facebook repeatedly brought in reporters to examine its election "war room" and explained what it was doing to police its platform, which famously included a new "Oversight Board," a sort of Supreme Court for hard Facebook decisions.
At the time, Joseph and I published a deep dive into how Facebook does content moderation, an astoundingly difficult task considering the scale of Facebook's userbase, the differing countries and legal regimes it operates under, and the dizzying array of borderline cases it would need to make policies for and litigate against. As part of that article, I went to Facebook's Menlo Park headquarters and had a series of on-the-record interviews with policymakers and executives about how important content moderation is and how seriously the company takes it. In 2018, Zuckerberg published a manifesto stating that "the most important thing we at Facebook can do is develop the social infrastructure to build a global community," and that one of the most important aspects of this would be to "build a safe community that prevents harm [and] helps during crisis" and to build an "informed community" and an "inclusive community."
Several years later, Facebook has been overrun by AI-generated spam and outright scams. Many of the "people" engaging with this content are bots who themselves spam the platform. Porn and nonconsensual imagery are easy to find on Facebook and Instagram. We have reported endlessly on the proliferation of paid advertisements for drugs, stolen credit cards, hacked accounts, and ads for electricians and roofers who appear to be soliciting potential customers with sex work. Its own verified influencers have their bodies regularly stolen by "AI influencers" in the service of promoting OnlyFans pages also full of stolen content.
Meta still regularly publishes updates that explain what it is doing to keep its platforms safe. In April, it launched "new tools to help protect against extortion and intimate image abuse," and in February it explained how it was "helping teens avoid sextortion scams" and that it would begin "labeling AI-generated images on Facebook, Instagram, and Threads," though the overwhelming majority of AI-generated images on the platform are still not labeled. Meta also still publishes a "Community Standards Enforcement Report," where it explains things like "in August 2023 alone, we disabled more than 500,000 accounts for violating our child sexual exploitation policies." There are still people working on content moderation at Meta. But experts I spoke to who once had great insight into how Facebook makes its decisions say that they no longer know what is happening at the platform, and I've repeatedly found entire communities dedicated to posting porn, grotesque AI, spam, and scams operating openly on the platform.
Meta now at best inconsistently responds to our questions about these problems, and has declined repeated requests for on-the-record interviews for this and other investigations. Several of the professors who used to consult directly or indirectly with the company say they have not engaged with Meta in years. Some of the people I spoke to said that they are unsure whether their previous contacts still work at the company or, if they do, what they are doing there. Others have switched their academic focus after years of feeling ignored or harassed by right-wing activists who have accused them of being people who just want to censor the internet.
Meanwhile, several groups that have done very important research on content moderation are falling apart or being actively targeted by critics. Last week, Platformer reported that the Stanford Internet Observatory, which runs the Journal of Online Trust & Safety, is "being dismantled" and that several key researchers, including Renee DiResta, who did critical work on Facebook's AI spam problem, have left. In a statement, the Stanford Internet Observatory said "Stanford has not shut down or dismantled SIO as a result of outside pressure. SIO does, however, face funding challenges as its founding grants will soon be exhausted." (Stanford has an endowment of $36 billion.)
Following her departure, DiResta wrote for The Atlantic that conspiracy theorists regularly claim she is a CIA shill and one of the leaders of a "Censorship Industrial Complex." Media Matters is being sued by Elon Musk for pointing out that ads for major brands were appearing next to antisemitic and pro-Nazi content on Twitter, and recently had to do mass layoffs.
"You go from having dinner at Zuckerberg's house to them being like, yeah, we don't need you anymore," Danielle Citron, a professor at the University of Virginia's School of Law who previously consulted with Facebook on trust and safety issues, told me. "So yeah, it's disheartening."
It is not a good time to be in the content moderation industry. Republicans and the right wing of American politics more broadly see this as a deserved reckoning for liberal-leaning, California-based social media companies that have taken away their free speech. Elon Musk bought an entire social media platform in part to dismantle its content moderation team and its rules. And yet, what we are seeing on Facebook is not a free speech haven. It is a zombified platform full of bots, scammers, malware, bloated features, horrific AI-generated images, abandoned accounts, and dead people that has become a laughingstock on other platforms. Meta has fucked around with Facebook, and now it is finding out.
"I believe we're in a time of experimentation where platforms are willing to gamble and roll the dice and say, 'How little content moderation can we get away with?'" Sarah T. Roberts, a UCLA professor and author of Behind the Screen: Content Moderation in the Shadows of Social Media, told me.
In November, Elon Musk sat on stage with a New York Times reporter, and was asked about the Media Matters report that caused several major companies to pull advertising from X: "I hope they stop. Don't advertise," Musk said. "If somebody is going to try to blackmail me with advertising, blackmail me with money, go fuck yourself. Go fuck yourself. Is that clear? I hope it is."
There was a brief moment last year where many large companies pulled advertising from X, ostensibly because they did not want their brands associated with antisemitic or white nationalist content and did not want to be associated with Musk, who has not only allowed this type of content but has often espoused it himself. But X has told employees that 65 percent of advertisers have returned to the platform, and the death of X has thus far been greatly exaggerated. Musk spent much of last week doing damage control, and Xâs revenue is down significantly, according to Bloomberg. But the comments did not fully tank the platform, and Musk continues to float it with his enormous wealth.
This was an important moment not just for X, but for other social media companies, too. In order for Meta's platforms to be seen as a safer alternative for advertisers, Zuckerberg had to meet the extremely low bar of "not overtly platforming Nazis" and "didn't tell advertisers to 'go fuck yourself.'"
UCLA's Roberts has always argued that content moderation is about keeping platforms that make almost all of their money on advertising "brand safe" for those advertisers, not about keeping their users "safe" or censoring content. Musk's apology tour has highlighted Roberts's point that content moderation is for advertisers, not users.
"After he said 'Go fuck yourself,' Meta can just kind of sit back and let the ball roll downhill toward Musk," Roberts said. "And any backlash there has been to those brands or to X has been very fleeting. Companies keep coming back and are advertising on all of these sites, so there have been no consequences."
Meta's content moderation workforce, which it once talked endlessly about, is now rarely discussed publicly by the company (Accenture was at one point making $500 million a year from its Meta content moderation contract). Meta did not answer a series of detailed questions for this piece, including ones about its relationship with academia, its philosophical approach to content moderation, what it thinks of AI spam and scams, and whether there has been a shift in its overall content moderation strategy. It also declined a request to make anyone on its trust and safety teams available for an on-the-record interview. It did say, however, that it has many more human content moderators today than it did in 2018.
"The truth is we have only invested more in the content moderation and trust and safety spaces," a Meta spokesperson said. "We have around 40,000 people globally working on safety and security today, compared to 20,000 in 2018."
Roberts said content moderation is expensive, and that, after years of speaking about the topic openly, perhaps Meta now believes it is better to operate primarily under the radar.
"Content moderation, from the perspective of the C-suite, is considered to be a cost center, and they see no financial upside in providing that service. They're not compelled by the obvious and true argument that, over the long term, having a hospitable platform is going to engender users who come on and stay for a longer period of time in aggregate," Roberts said. "And so I think [Meta] has reverted to secrecy around these matters because it suits them to be able to do whatever they want, including ramping back up if there's a need, or, you know, abdicating their responsibilities by diminishing the teams they may have once had. The whole point of having offshore, third-party contractors is they can spin these teams up and spin them down pretty much with a phone call."
Roberts added, "I personally haven't heard from Facebook in probably four years."
Citron, who worked directly with Facebook on nonconsensual imagery being shared on the platform and on a system that automatically flags nonconsensual intimate imagery and CSAM based on a hash database of abusive images, which was adopted by Facebook and then YouTube, said that what happened to Facebook is "definitely devastating."
"There was a period where they understood the issue, and it was very rewarding to see the hash database adopted, like, 'We have this possible technological way to address a very serious social problem,'" she said. "And now I have not worked with Facebook in any meaningful way since 2018. We've seen the dismantling of content moderation teams [not just at Meta] but at Twitch, too. I worked with Twitch and then I didn't work with Twitch. My people got fired in April."
"There was a period of time where companies were quite concerned that their content moderation decisions would have consequences. But those consequences have not materialized. X shows that the PR loss leading to advertisers fleeing is temporary," Citron added. "It's an experiment. It's like 'What happens when you don't have content moderation?' If the answer is, 'You have a little bit of a backlash, but it's temporary and it all comes back,' well, you know what the answer is? You don't have to do anything. 100 percent."
I told everyone I spoke to that, anecdotally, it felt to me like Facebook has become a disastrous, zombified cesspool. All of the researchers I spoke to said that this is not just a vibe.
"It's not anecdotal, it's a fact," Citron said. In November, she published a paper in the Yale Law Journal about women who have faced gendered abuse and sexual harassment in Meta's Horizon Worlds virtual reality platform, which found that the company is ignoring user reports and expects the targets of this abuse to simply use a "personal boundary" feature to ignore it. The paper notes that "Meta is following the nonrecognition playbook in refusing to address sexual harassment on its VR platforms in a meaningful manner."
"The response from leadership was like 'Well, we can't do anything,'" Citron said. "But having worked with them since 2010, it's like 'You know you can do something!' The idea that they think that this is a hard problem given that people are actually reporting this to them, it's gobsmacking to me."
Another researcher I spoke to, who I am not naming because they have been subjected to harassment for their work, said "I also have very little visibility into what's happening at Facebook around content moderation these days. I'm honestly not sure who does have that visibility at the moment. And perhaps both of these are at least partially explained by the political backlash against moderation and researchers in this space." Another researcher said "it's a shitshow seeing what's happening to Facebook. I don't know if my contacts on the moderation teams are even still there at this point." A third said Facebook did not respond to their emails anymore.
Not all of this can be explained by Elon Musk or by direct political backlash from the right. The existence of Section 230 of the Communications Decency Act means that social media platforms have wide latitude to do nothing. And, perhaps more importantly, two state-level cases alleging social media censorship have made their way to the Supreme Court, which means Meta and other social media platforms may be calculating that they could be putting themselves at more risk if they do content moderation. The Supreme Court's decision on these cases is expected later this week.
The reason I have been so interested in what is happening on Facebook right now is not because I am particularly offended by the content I see there. It's because Facebook's present, a dying, decaying colossus taken over by AI content and more or less left to rot by its owner, feels like the future, or the inevitable outcome, of other social platforms and of an AI-dominated internet. I have been likening zombie Facebook to a dead mall. There are people there, but they don't know why, and most of what's being shown to them is scammy or weird.
"It's important to note that Facebook is Meta now, but the metaverse play has really fizzled. They don't know what the future is, but they do know that 'Facebook' is absolutely not the future," Roberts said. "So there's a level of disinvestment in Facebook because they don't know what the next thing exactly is going to be, but they know it's not going to be this. So you might liken it to the deindustrialization of a manufacturing city that loses its base. There's not a lot of financial gain to be had in propping up Facebook with new stuff, but it's not like it disappears or its footprint shrinks. It just gets filled with crypto scams, phishing, hacking, romance scams."
"And then poor content moderation begets scammers begets this useless crap content, AI-generated stuff, uncanny valley stuff that people don't enjoy, and it just gets worse and worse," Roberts said. "So more of that will proliferate in lieu of anything that you actually want to spend time on."