
5 Shocking Truths from the Meta Lawsuit Filings


Introduction

For millions of young people, Instagram and Facebook are the digital equivalent of the public square—essential spaces for connection, identity, and social life. Yet for just as long, parents, educators, and users themselves have harbored a nagging concern about the platforms’ impact on mental health, safety, and well-being. Are these fears justified, or are they simply the predictable anxieties that accompany any new, transformative technology?

Newly unsealed court filings in a major lawsuit against Meta allege a far more disturbing reality: these concerns were not abstract fears but known, quantified harms that the company actively chose to tolerate. Drawing on thousands of pages of internal research, executive testimony, and employee communications, the documents describe a consistent pattern: Meta was repeatedly made aware of serious harms to its young users and, in many cases, chose not to act for fear of hurting growth and engagement.

This post distills the five most surprising and impactful allegations from these filings, revealing what Meta allegedly knew and the choices it made behind closed doors.

The List: 5 Shocking Allegations from the Meta Filings

  1. The Company Had a Shockingly High Tolerance for Sex Trafficking

Vaishnavi Jayakumar, Instagram’s former head of safety and well-being, testified that she was shocked to learn in 2020 that the company had a “17x” strike policy for accounts reported for the “trafficking of humans for sex.” In other words, an account could rack up 16 separate violations and would be suspended only on the 17th offense.

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.”

This high tolerance for trafficking existed alongside a stated “zero tolerance” policy for child sexual abuse material—for which, the filings allege, Meta failed to provide a simple reporting tool, even while offering easy reporting options for lesser violations like spam.

  2. Safety Was Delayed to Avoid Losing 1.5 Million Teen Users a Year

Around 2019, internal researchers recommended a significant safety change: making all teen accounts private by default to prevent unwanted contact from adults. The potential impact was huge. According to the filings, this single fix would have eliminated an estimated “5.4 million unwanted interactions a day.”

However, Meta’s growth team projected a major downside: the change would result in a loss of “1.5 million monthly active teens a year.” The safety feature was not launched that year, prompting dismay from some employees. One safety researcher allegedly grumbled: “Isn’t safety the whole point of this team?”

The consequences of this delay were allegedly stark. An internal 2022 audit found that Instagram’s “Accounts You May Follow” feature recommended 1.4 million potentially inappropriate adults to teenage users in a single day. The filings claim these “inappropriate interactions with children” were so common they earned an internal acronym: “IIC.” Although the default-private recommendation was made internally in 2019, Meta did not make the setting standard for all teen accounts until 2024; at the estimated rate of 5.4 million unwanted interactions a day, that five-year delay adds up to billions of such interactions.

  3. Profitable Features Were Kept, Even When Known to Be Toxic

In 2019, Meta launched an initiative, code-named Project Daisy, to test “hiding” likes on posts. Internal researchers had found that doing so would make users “significantly less likely to feel worse about themselves.” But the company backtracked after determining the feature was “pretty negative to FB metrics,” including ad revenue. The filings quote an unnamed employee on the growth team insisting:

“It’s a social comparison app, fucking get used to it.”

A similar debate reportedly occurred over beauty filters. An internal review found they exacerbated mental health issues like body dissatisfaction and eating disorders. Meta banned them, only to bring them back the next year after the company realized the ban would have a “negative growth impact.”

  4. A Secret Study on Mental Health Was Buried, and Congress Was Allegedly Misled

In late 2019, Meta conducted a “deactivation study” that found users who stopped using Facebook and Instagram for just one week showed lower rates of anxiety, depression, and loneliness. According to the lawsuit, Meta halted the study and never publicly disclosed its findings. An employee, expressing concern over the decision to bury the results, drew a stark comparison.

“If the results are bad and we don’t publish and they leak,” the employee wrote, “is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”

The core allegation is that Meta then misled lawmakers. When the Senate Judiciary Committee asked in a written question in 2020 if the company could determine a correlation between increased platform use and teen anxiety or depression, Meta’s official answer was a single word: “No.”

  5. Employees Privately Admitted Their Products Were Addictive ‘Drugs’

While Meta publicly refers to addictive behavior as “problematic use,” employees allegedly spoke in much starker terms internally. One user-experience researcher’s message to a colleague is particularly damning.

“Oh my gosh yall IG is a drug,” the researcher allegedly wrote. “We’re basically pushers.”

The filings also describe a 2018 internal survey that found 58% of users experienced some level of “problematic use.” However, when Meta published a report on this research, it allegedly mentioned only the 3.1% of users with “severe” problematic use, omitting the other 55% with “mild” problematic use. Another researcher recommended that the company should “alert people to the effect that the product has on their brain” because it “exploits weaknesses in the human psychology,” a recommendation Meta did not adopt.

Conclusion

These filings allege more than isolated lapses in judgment; they describe a systematic corporate practice of quantifying children’s safety in terms of acceptable user loss and of shelving known solutions when they threatened engagement. The evidence presented suggests a pattern in which, faced with a choice between protecting young users and protecting metrics, the company consistently prioritized its bottom line.

As the internal calculus of our most dominant platforms is laid bare, the question becomes not just what we demand of them, but what we are willing to accept in exchange for their use.