OpenAI has indefinitely shelved its plans to add an erotic “adult mode” to ChatGPT, the Financial Times reported on Wednesday, ending a five-month saga in which the feature was confidently announced, delayed twice and ultimately abandoned following opposition from staff, advisors and investors. The reversal is the third major product change for OpenAI in a single week, following the shutdown of its video-generating app Sora on Monday and the subsequent collapse of a planned $1 billion investment by Disney.
Adult mode was first announced by CEO Sam Altman in October 2025, when he wrote on X that OpenAI was confident it could gate sexually explicit conversations by age and that the move aligned with the company’s principle of “treat adult users like adults.” It was initially scheduled for December 2025, then delayed to the first quarter of 2026, and has now been postponed without a release timeline. OpenAI told the Financial Times that it plans to conduct “long-term research into the effects of sexually explicit talk and emotional bonding” before making a decision on the product.
What went wrong?
The problems were technical, ethical and commercial, and they compounded one another. Engineers working on the feature found that training models, which had been built to avoid sexual content for safety reasons, to reliably produce explicit material was harder than anticipated. When trained on data sets that included sexual content, the models also generated results involving illegal scenarios, including bestiality and incest, which were difficult to filter out. The feature was not simply controversial; it resisted being built safely.
OpenAI’s own advisory board raised concerns that went beyond content moderation. Advisors warned that sexually explicit ChatGPT interactions could foster unhealthy emotional attachments with serious mental health consequences. One advisor described the risk of turning ChatGPT into a “sexy suicide coach,” a phrase that resonates darkly given the company’s existing legal exposure. OpenAI is currently facing at least eight lawsuits alleging ChatGPT contributed to user deaths, including the case of Adam Raine, a 16-year-old from Southern California whose family alleges the chatbot discussed suicide methods with him more than 200 times before he took his own life in April 2025. Earlier this week, OpenAI flagged these lawsuits as one of the main risks to its business in a financial document disclosed to investors.
Staff also began to question whether the feature fit OpenAI’s stated mission. The company’s charter commits it to building artificial general intelligence that benefits humanity. Some employees found it difficult to reconcile that ambition with the engineering effort required to make a chatbot talk dirty without breaking the law.
The investor calculation
Investors raised what may have been the decisive objection: the economics did not justify the risk. Two people familiar with the matter told the Financial Times that some investors questioned why OpenAI would jeopardize its reputation for a product with a “relatively small advantage.” The AI-generated adult content market exists, but it is served by a constellation of smaller, less scrutinized companies. For a company raising capital at a $300 billion valuation and courting business clients, the brand damage from association with explicit content outweighed the potential revenue.
The age verification problem exacerbated this concern. OpenAI’s approach relied on AI-based age prediction rather than strict identity checks, and internal testing revealed an error rate of about 10 percent, meaning roughly one in ten users could be misclassified. For a product designed to keep explicit content away from minors, that margin is not a rounding error; it is a regulatory and reputational catastrophe waiting to happen, particularly in a legal environment where several US states have passed or proposed laws requiring platforms to verify users’ ages before granting access to adult material.
A week of retreats
The adult mode decision does not exist in isolation. On Monday, OpenAI announced that it would discontinue Sora, the AI video generation tool that it had positioned as a creative platform for filmmakers and content creators. Sora consumed enormous computing resources relative to its revenue, and its most prominent business partnership, a three-year licensing deal with Disney that would have allowed users to generate videos featuring characters from Disney, Marvel, Pixar and Star Wars, collapsed after the shutdown was announced. Disney had planned to invest $1 billion in OpenAI as part of the deal. No money had changed hands.
Taken together, the three pullbacks paint a picture of a company retreating from consumer product experiments and refocusing on its core business. The Financial Times reported that investors are most interested in seeing OpenAI combine ChatGPT with coding assistants to develop a “super app” aimed at transforming the way businesses operate, a vision with clearer monetization and fewer reputational risks than video generation or erotic chatbots.
OpenAI has said it will reallocate resources to robotics and autonomous software agents, areas where the path from research to commercial value is more direct and the regulatory landscape, while complex, does not involve the specific toxicity of sexualized AI and failures in child safety.
The pattern
There’s a recurring dynamic to OpenAI’s product strategy: announce ambitiously, encounter real-world complications that less confident organizations might have anticipated, and then retreat while framing the rollback as prudent research. Adult mode was announced before technical issues with safe content generation were resolved, before the age verification system could reach acceptable accuracy, and before advisory board concerns about mental health harms were addressed. Sora’s partnership with Disney was announced before the product demonstrated commercial viability. In both cases, the announcement generated coverage and signaled ambition, but follow-up revealed gaps between what was promised and what could be delivered.
It’s worth noting the company’s willingness to shelve the feature rather than launch it despite the risks. It suggests that pressure from lawsuits, investors and internal dissent is beginning to function as a corrective mechanism, pushing OpenAI away from the limits of what is technically possible and toward what is commercially and ethically sustainable. Whether that mechanism is reliable or simply responds to the most visible crises is a question the next product announcement will answer.