OpenAI is set to introduce significant changes to the capabilities and guardrails of its flagship chatbot, ChatGPT, including allowing “erotica” for users who verify their age on the platform.
OpenAI CEO Sam Altman announced the policy change in a recent X post, confirming that as the company “roll[s] out age-gating more fully,” it will permit mature content, such as erotica, for verified adult users beginning in December. This shift, which Altman refers to as the principle of “treat[ing] adult users like adults,” signals a cultural change for the company, which had previously maintained strict content filters.
This move follows earlier hints from OpenAI about allowing developers to create “mature” ChatGPT applications once appropriate age verification controls were implemented. The decision also aligns OpenAI more closely with competitors, such as Elon Musk’s xAI, which previously launched “flirty AI companions” (appearing as 3D anime models within its Grok app).
In addition to expanding content options, OpenAI plans to address user feedback by launching a new version of ChatGPT designed to feel “more like what people liked about 4o.” This comes after the initial rollout of GPT-5 as the default model led to complaints that the new version felt less personable and more restrictive than its predecessor, GPT-4o.
Altman explained that ChatGPT was initially made “pretty restrictive to make sure we were being careful with mental health issues.” The company acknowledged that this stance made the chatbot “less useful/enjoyable to many users who had no mental health problems.”
OpenAI asserts that it can now “safely relax the restrictions in most cases” due to improvements in its safety infrastructure. The company has launched new tools to “better detect” when a user is in mental distress.
To further guide its approach to sensitive issues, OpenAI also announced the formation of a council on “well-being and AI,” made up of eight researchers and experts who study the impact of AI on mental health. However, the council has faced scrutiny, notably from Ars Technica, which pointed out the absence of dedicated suicide prevention experts, a group that recently called for additional safeguards for users experiencing suicidal thoughts.

