There’s a new Meta whistleblower in town: Arturo Béjar, a longtime employee who came forward to the Wall Street Journal last week to accuse the social media giant of knowing, through its own research, that its platforms were hurting children. Not only did Meta refuse to act on that information, Béjar asserted, but it also tried to cover it up. On Tuesday, he repeated those allegations to a very receptive Senate subcommittee.
This comes two years after another whistleblower made similar claims, and at a time when state and federal governments are still trying to figure out how to regulate the platforms they believe are harmful to children’s mental health, safety, and privacy. Béjar’s allegations may be the boost needed to get stalled children’s online safety bills passed — just as the previous whistleblower was a catalyst for some of those bills’ introduction.
Béjar worked at Meta, then called Facebook, as an engineering director on its Protect and Care Team from 2009 to 2015 and as a consultant on Instagram’s Well-Being Team from 2019 to 2021. He says that Meta knew, through research that he conducted and conversations he had with some of the company’s most powerful executives, that kids were frequently exposed to things like unwanted sexual advances and harmful content — sometimes recommended by Meta’s own algorithms. But the company chose not to implement measures Béjar believed would help, ultimately ending the research he was involved in and laying off many members of his team.
Before a Senate Judiciary subcommittee on privacy, technology, and the law, Béjar called for legislation to address children’s online safety; he described an “urgent crisis” and argued companies should no longer be trusted to regulate themselves.
“Meta knows the harm that kids experience on their platform, and the executives know that their measures fail to address it,” he said, adding: “Social media companies must be required to become more transparent so that parents and the public can hold them accountable.”
If it is an urgent crisis, it’s one that Congress doesn’t seem to be moving with that same urgency to fix. Watching Béjar’s hearing was one Frances Haugen, whose story mirrors his. Two years ago, she was the Meta whistleblower testifying before a Senate subcommittee, claiming that the company had research showing its platforms harmed children but that it chose not to act. Haugen’s revelations, also first reported by the Wall Street Journal, kicked off one of the biggest scandals in the company’s history. (Meta did not respond to a request for comment but has elsewhere denied both whistleblowers’ accusations, saying, broadly, that the internal research they cited was only part of the company’s many continuing efforts to keep young users safe.)
Haugen praised Béjar’s testimony, noting that his revelations include evidence that people at the very top of the company, including Mark Zuckerberg, knew about the issues he raised and at times told him only to present them internally and as hypotheticals.
“That shows an active coverup,” she told Vox. “I think, with those kinds of things, it gets very hard to defend the status quo. I hope we get a floor vote this year because members should be on the record on whether or not they support these bills.”
The fact that we’re here, again, watching another whistleblower tell another Senate subcommittee that Meta knowingly harms kids because it isn’t legally required to provide a safer environment for them may be the final push Congress needs to actually pass something. Or it may be an indication that social media companies know they’ll never be held accountable for user harms beyond a few rounds of bad press. Two years after Haugen came forward, we’re still waiting for Congress to even get close to passing a law addressing online harms to children.
Despite a somewhat promising beginning — even passing bills out of committee is a lot more than Congress had done on children’s online safety in the previous two decades — 2023 has been a weird year for social media regulation in the United States. It was clear that lawmakers would take up the cause of children’s online safety in the new session. Previous efforts to pass digital privacy and antitrust bills have failed, but children’s safety is something both parties tend to agree on, even if they sometimes disagree on what the dangers are. Few lawmakers these days want to be seen as siding with Big Tech and against children, which is one reason why child-focused digital laws have had more success than those that apply to all ages.
Though President Joe Biden and several prominent members of Congress repeatedly demanded laws to protect children online, none of the resulting bills — several of which have considerable bipartisan support and passed out of committee — have so far gotten a floor vote. That’s left the US with a mishmash of executive actions, a surgeon general’s advisory, possibly unconstitutional state laws, laws in other countries that we may see incidental benefits from, and lawsuits from state governments, private citizens, and even school districts that may or may not succeed.
This is not an ideal situation. The ever-growing patchwork of state laws has done everything from requiring age verification and age-appropriate design to just forbidding children from using social media at all. Montana banned TikTok entirely while we wait to see what, if any, action the federal government decides to take against the China-based company, which has most recently been accused of propping up pro-Palestinian content (TikTok denies this). Some states are requiring porn sites to verify user identities and ages to ensure they’re not giving children access to sexual content, which has had the secondary effect of preventing many adults from using them, too. We’ll see how these laws fare when they’re inevitably challenged in court. Injunctions preventing them from taking effect have already been issued in some cases, which is a good indicator that they will be struck down.
Speaking of courts, a slew of notable social media harm-related cases were filed this year, including lawsuits against Meta from 42 attorneys general that accuse the company of knowingly making its platforms addictive and harmful to children and of violating the privacy rights of children under 13, as well as hundreds of class action lawsuits against Meta and other platforms that make similar accusations, brought on behalf of children and even school districts. The lawsuits by attorneys general came out of an investigation prompted by Haugen’s revelations, and Béjar is consulting with them on their cases, the Wall Street Journal said. A recently unredacted filing in Massachusetts’ case against Meta appears to show internal documents in which top executives raised alarms about its products’ effects on youth mental health, even as the company claimed to the outside world that those products were safe.
This, Haugen says, is the key argument. “The issue is not, ‘Do we have the right to produce addictive digital products,’” she said. “The question is, ‘Do we have the right to lie about it?’”
International efforts have been more fruitful, most notably the European Union’s Digital Services Act, which went into effect this year. Under that law, platforms must follow certain rules to mitigate harmful content, and they can be held accountable if their efforts are insufficient. If the US doesn’t act, it will watch other governments take the lead — a position the country increasingly finds itself in when it comes to regulating Big Tech.
“I think things like the Digital Services Act are really powerful,” Haugen said. “They come in there and they say, ‘Hey, we do not know how to run your business, we’re not pretending to. But you do have to engage with the public in a way that at least feigns that you have respect for our needs, or that you recognize that you are part of society.’”
Haugen also supports the Kids Online Safety Act (KOSA), a bipartisan bill from Sens. Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) that would require platforms to implement various safeguards for underage users, including a controversial “duty of care” provision. KOSA passed out of committee in July, but no floor vote has been scheduled. Last session, it also passed out of committee and never got a vote. The two senators are currently pushing a longshot effort to get KOSA passed in the Senate before 2024. They met with Béjar