OpenAI Fires Executive Who Warned About 'Adult Mode'


OpenAI quietly fired a senior policy executive in January after a male co-worker accused her of sexual discrimination. Her firing came after she had repeatedly warned about the company’s plan to allow ChatGPT, its artificial intelligence chatbot, to generate erotic content through a new “adult mode.”

The Wall Street Journal and several other outlets reported that OpenAI fired Ryan Beiermeister, its vice president of product policy, shortly after she returned from a leave of absence.

OpenAI’s leadership justified Beiermeister’s firing by citing an internal complaint from a male employee who said she discriminated against him because he is male. Beiermeister denied the allegation, saying it is “entirely untrue.”

The company claims the decision was not influenced by the concerns she raised about "adult mode" while working there. Yet the timing is still suspect, given that her firing came right after she raised alarms about the feature's potential impact on vulnerable users, especially children.

The “adult mode” feature would allow verified adults to access sexual content that ChatGPT typically blocks. OpenAI CEO Sam Altman framed the policy change as part of an effort to “treat adult users like adults.” He told the BBC that once the company fully implements age-gating, it will “permit even more content, including erotica for verified adults.”

However, Beiermeister and others worried that normalizing AI erotica could create a cascade of problems: deepening users' emotional dependence on AI companions and exposing minors to sexual content.

Allowing erotic content in a mainstream chatbot introduces a complex safety stack. Guardrails would have to be robust enough to block illegal or non-consensual content, prevent any depiction involving minors, and manage edge cases such as user roleplay that could drift into prohibited territory. Large language models are probabilistic and can produce unexpected outputs; that unpredictability raises the bar for pre-deployment testing, ongoing monitoring, and red-teaming.
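To illustrate what such a guardrail layer involves, here is a minimal, purely hypothetical sketch. The category names, thresholds, and the toy keyword classifier are placeholders for illustration only, not OpenAI's (or any vendor's) actual system; a real deployment would use trained safety classifiers rather than string matching.

```python
# Hypothetical guardrail sketch: screen model output against prohibited
# categories before it reaches the user. All names and scores are
# illustrative placeholders, not a real vendor's moderation system.

PROHIBITED = {"minors", "non_consensual", "illegal"}   # always blocked
ADULT_ONLY = {"erotica"}                               # blocked unless age-verified

def classify(text: str) -> dict[str, float]:
    """Stand-in for a trained safety classifier returning per-category scores."""
    scores = {c: 0.0 for c in PROHIBITED | ADULT_ONLY}
    lowered = text.lower()
    if "minor" in lowered:
        scores["minors"] = 0.99
    if "erotic" in lowered:
        scores["erotica"] = 0.9
    return scores

def gate_output(text: str, user_is_verified_adult: bool,
                threshold: float = 0.5) -> str:
    """Block prohibited content outright; gate adult content on verification."""
    scores = classify(text)
    if any(scores[c] >= threshold for c in PROHIBITED):
        return "[blocked: prohibited content]"
    if any(scores[c] >= threshold for c in ADULT_ONLY) and not user_is_verified_adult:
        return "[blocked: adult content requires a verified adult account]"
    return text
```

Even this toy version shows why the bar is high: the gate is only as good as the classifier behind it, and probabilistic model outputs mean edge cases will slip past any fixed rule set without continuous red-teaming.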

Distribution constraints add another layer. Apple and Google enforce strict policies on sexual content in consumer apps, and age-gating must be meaningful, not perfunctory. Several U.S. states have passed laws requiring adult sites to verify user age, while the EU’s Digital Services Act and the UK’s Online Safety Act both emphasize child safety and systemic risk mitigation. A chatbot that can generate erotica would need documented risk assessments, transparent controls, and effective user reporting tools to satisfy regulators and platform gatekeepers.
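The difference between "meaningful" and "perfunctory" age-gating can be made concrete with a short sketch. The credential structure and issuer field below are hypothetical, assumed for illustration; real systems rely on government-ID or payment-card verification services.

```python
# Hypothetical sketch contrasting perfunctory vs. meaningful age-gating.
# The AgeCredential fields are illustrative assumptions, not a real API.
from dataclasses import dataclass
from datetime import date

@dataclass
class AgeCredential:
    birth_date: date
    issuer: str          # e.g., a government-ID or payment-card verifier
    self_attested: bool  # True if the user merely typed in a birthday

def is_verified_adult(cred: AgeCredential, today: date, min_age: int = 18) -> bool:
    """A meaningful gate rejects self-attestation outright; only
    issuer-verified credentials count toward the age check."""
    if cred.self_attested:
        return False
    age = today.year - cred.birth_date.year - (
        (today.month, today.day) < (cred.birth_date.month, cred.birth_date.day)
    )
    return age >= min_age
```

The design point is the first branch: a checkbox or typed birthday, which regulators increasingly treat as perfunctory, fails the gate regardless of the date entered.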

Operationally, content moderation at scale is expensive and sensitive. Social platforms have learned this the hard way, employing thousands of moderators and investing heavily in detection systems for harmful material. Generative AI can multiply the volume and variety of content, making automated classifiers, safety fine-tuning, and post hoc review pipelines essential. Any failure involving sexual content is likely to draw swift scrutiny from regulators and app stores and could trigger trust erosion among mainstream users.

Recent developments appear to show that folks like Beiermeister have reason to be concerned. A 2025 investigation conducted by ParentsTogether Action and the Heat Initiative found that adult-persona chatbots on Character.AI engaged accounts registered as children in romantic and sexual "relationships."

The chatbots simulated sexual acts and even encouraged children to conceal their conversations from their parents, according to the Transparency Coalition.

Researchers reported 669 harmful interactions with children in only 50 hours of testing. These included nearly 300 instances of grooming behavior, such as flirting, explicit roleplay, and pressuring children to keep the conversations secret.

These findings come from just one study. There are plenty more showing what can happen when children are allowed to use artificial intelligence, especially without supervision.
