OpenAI quietly fired a senior policy executive in January after a male co-worker accused her of sexual discrimination. Her firing came after she had repeatedly warned about the company’s plan to allow ChatGPT, its artificial intelligence chatbot, to generate erotic content through a new “adult mode.”
The Wall Street Journal and several other outlets reported that OpenAI fired Ryan Beiermeister, its vice president of product policy, shortly after she returned to work after a leave of absence.
OpenAI’s leadership justified Beiermeister’s firing by citing an internal complaint from a male employee who said she discriminated against him because he is male. Beiermeister denied the allegation, saying it is “entirely untrue.”
Ryan Beiermeister, OpenAI’s former vice president of product policy, was dismissed in January following allegations of sex discrimination raised by a male colleague, according to a report by The Wall Street Journal.
— Biz Tech Echoes (@biztechechoes) February 11, 2026
https://t.co/Y9p3rShSQD pic.twitter.com/BiT0LjbHon
However, the company claims the decision was not influenced by issues she raised with “adult mode” while working there. Yet, the timing is still suspect given that it happened right after she raised alarms about the impact of “adult mode” on vulnerable users — especially children.
The “adult mode” feature would allow verified adults to access sexual content that ChatGPT typically blocks. OpenAI CEO Sam Altman framed the policy change as part of an effort to “treat adult users like adults.” He told the BBC that once the company fully implements age-gating, it will “permit even more content, including erotica for verified adults.”
Recommended
However, Beiermeister and others worried that normalizing AI erotica could result in a series of problems. It could increase users’ emotional dependence on AI companions while also affecting minor children.
Allowing erotic content in a mainstream chatbot introduces a complex safety stack. Guardrails would have to be robust enough to block illegal or non-consensual content, prevent any depiction involving minors, and manage edge cases such as user roleplay that could drift into prohibited territory. Large language models are probabilistic and can produce unexpected outputs; that unpredictability raises the bar for pre-deployment testing, ongoing monitoring, and red-teaming.
Distribution constraints add another layer. Apple and Google enforce strict policies on sexual content in consumer apps, and age-gating must be meaningful, not perfunctory. Several U.S. states have passed laws requiring adult sites to verify user age, while the EU’s Digital Services Act and the UK’s Online Safety Act both emphasize child safety and systemic risk mitigation. A chatbot that can generate erotica would need documented risk assessments, transparent controls, and effective user reporting tools to satisfy regulators and platform gatekeepers.
Operationally, content moderation at scale is expensive and sensitive. Social platforms have learned this the hard way, employing thousands of moderators and investing heavily in detection systems for harmful material. Generative AI can multiply the volume and variety of content, making automated classifiers, safety fine-tuning, and post hoc review pipelines essential. Any failure involving sexual content is likely to draw swift scrutiny from regulators and app stores and could trigger trust erosion among mainstream users.
BLACK MIRROR – Adult words to adults@OpenAI has been promising the introduction of "Adult Mode" for months now. Their slogan is that adults should be treated as adults. In other words, they believe that as adults, we are capable of deciding what is good for us.
— Lumen & Astra (@LumenOfAstra) February 11, 2026
At the same… pic.twitter.com/SYNGdNqPkD
Recent developments appear to show that folks like Beiermeister have reason to be concerned. A 2025 investigation conducted by ParentsTogetherAction and the Heat Initiative found that adult-age chatbots on Character.AI engaged with accounts registered as children in romantic and sexual “relationships.”
The chatbots simulated sexual acts and even encouraged children to conceal their conversations from their parents, according to the Transparency Coalition.
Researchers reported 669 harmful interactions with children in only 50 hours of testing. This includes almost 300 instances of grooming behavior such as flirting, explicit roleplay, and pressure to persuade children to keep the conversations secret.
These findings represent just one study. There are plenty more showing what can happen when children are allowed to use artificial intelligence — especially without supervision.

