
OpenAI is taking steps to try to woo more vibe developers and coders (those who create software using AI models and natural language) away from rivals like Anthropic.
Today, the company arguably most synonymous with the rise of generative AI announced it will begin offering a new mid-range subscription tier, a $100 ChatGPT Pro plan, which joins its existing Free, Go ($8 monthly), Plus ($20 monthly), and Pro ($200 monthly) plans for people using ChatGPT and related OpenAI products.
OpenAI also currently offers Edu, Business ($25 per user monthly, formerly known as Team), and Enterprise (variable pricing) plans for organizations in those sectors.
Why offer a $100 monthly ChatGPT Pro plan?
So why introduce a new $100 ChatGPT Pro plan?
OpenAI’s big selling point is that the new plan offers five times the usage limits on Codex, the company’s agentic vibe coding app/harness (the name is shared by both, as well as by a line of coding-specific models), compared to the existing $20 monthly Plus plan, which seems fair considering the math ($20 × 5 = $100).
As OpenAI co-founder and CEO Sam Altman wrote in a post on X: "It’s really nice to see Codex getting so much love. By popular demand, we are launching a $100 ChatGPT Pro tier."
However, alongside this, OpenAI’s official account on X noted that "We’re rebalancing the use of Codex in (ChatGPT) Plus to support more sessions throughout the week, rather than longer sessions in a single day."
That sounds a lot like OpenAI is also simultaneously reducing how much ChatGPT Plus users can use its Codex harness and app per day.
What are the new usage limits for the new $100 ChatGPT Pro plan vs. the $20 Plus plan?
So what are the current limits for the $20 Plus plan? The new Pro plan gives you 5x more than… what?
It turns out this is more complicated to calculate than you might think, because the limits actually vary depending on the underlying AI model powering the Codex app or harness, and on whether you’re working on code stored in the cloud or locally on your own machine or servers.
OpenAI’s developer website underwent several updates today, so we are reflecting only the most recent pricing structure and offers below, as of Thursday, April 1 at 10:45 pm ET. It notes that for individual users, Codex usage is categorized as “Local Messages” (tasks running on the user’s machine) and “Cloud Tasks” (tasks running on OpenAI infrastructure), and that those limits share a five-hour rolling window.
It also says additional weekly limits may apply. The current Codex pricing page now shows lower usage ranges than the previous version and measures code reviews over a five-hour period instead of per week. Specifically for Pro 5x, OpenAI says the limits currently shown include a temporary 2x usage increase ending May 31, 2026.
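OpenAI does not say how the five-hour rolling window is actually enforced, but the concept is easy to illustrate: each message consumes one slot, and slots free up only as earlier messages age out of the trailing window. Here is a minimal sketch in Python (the class, method names, and numbers are our own illustration, not OpenAI's implementation):

```python
from collections import deque
import time


class RollingWindowLimiter:
    """Allow at most `limit` events in any trailing `window_seconds` span."""

    def __init__(self, limit: int, window_seconds: float = 5 * 3600):
        self.limit = limit
        self.window = window_seconds
        self.events: deque = deque()  # timestamps of accepted events

    def try_consume(self, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop events that have aged out of the trailing window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.limit:
            return False  # blocked until older events expire
        self.events.append(now)
        return True
```

With `limit=100`, this would mirror the lower bound of the Plus tier's GPT-5.4 allowance: the 101st message in any trailing five-hour span is rejected until earlier ones roll out of the window.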
ChatGPT Plus ($20/month)

- GPT-5.4: Between 20 and 100 local messages every 5 hours.
- GPT-5.4-mini: Between 60 and 350 local messages every 5 hours.
- GPT-5.3 Codex: Between 30 and 150 local messages and between 10 and 60 cloud tasks every 5 hours.
- Code reviews: 20 to 50 every 5 hours.

ChatGPT Pro 5x ($100/month)

- GPT-5.4: Between 200 and 1000 local messages every 5 hours.
- GPT-5.4-mini: Between 600 and 3500 local messages every 5 hours.
- GPT-5.3 Codex: Between 300 and 1500 local messages and between 100 and 600 cloud tasks every 5 hours.
- Code reviews: 200 to 500 every 5 hours.

Note: Limits shown for Pro 5x include a temporary 2x usage increase ending May 31, 2026.

ChatGPT Pro 20x ($200/month)

- GPT-5.4: Between 400 and 2000 local messages every 5 hours.
- GPT-5.4-mini: Between 1200 and 7000 local messages every 5 hours.
- GPT-5.3 Codex: Between 600 and 3000 local messages and between 200 and 1200 cloud tasks every 5 hours.
- Code reviews: 400 to 1000 every 5 hours.
- Exclusive access: Includes GPT-5.3-Codex-Spark in a research preview for ChatGPT Pro users only. OpenAI says it has its own independent usage limit, which can be adjusted based on demand.
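Using the GPT-5.4 local-message ranges from the table above, a quick back-of-the-envelope check (our arithmetic, not OpenAI's) shows the tier names line up once the temporary boost is accounted for: Pro 5x currently shows 10x the Plus range, which drops to exactly 5x if the 2x increase lapses after May 31, 2026, while Pro 20x is already exactly 20x Plus.

```python
# GPT-5.4 local-message ranges per 5-hour window, (low, high), from OpenAI's table.
PLUS = (20, 100)       # ChatGPT Plus, $20/month
PRO_5X = (200, 1000)   # ChatGPT Pro 5x, $100/month (includes temporary 2x boost)
PRO_20X = (400, 2000)  # ChatGPT Pro 20x, $200/month

# With the temporary 2x increase, Pro 5x is currently 10x the Plus range.
assert all(p5 == 10 * p for p5, p in zip(PRO_5X, PLUS))

# If the boost lapses on May 31, 2026, halving Pro 5x yields exactly 5x Plus.
post_boost = tuple(v // 2 for v in PRO_5X)
assert post_boost == (100, 500)
assert all(v == 5 * p for v, p in zip(post_boost, PLUS))

# Pro 20x is already exactly 20x Plus as published.
assert all(p20 == 20 * p for p20, p in zip(PRO_20X, PLUS))
```

In other words, on the published numbers a Pro 5x subscriber would see a real reduction of half their current allowance (e.g., 200-1000 local GPT-5.4 messages falling to 100-500) once the promotional period ends.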
And as OpenAI’s help documentation states:
"The number of Codex messages you can send within these limits varies depending on the size and complexity of your coding tasks and where you run them. Small scripts or simple functions may only consume a fraction of your allocation, while larger codebases, long-running tasks, or extended sessions that require Codex to maintain more context will use significantly more per message."
The broader strategic implications and context
OpenAI’s sudden move toward $100 pricing and expanded agentic capabilities comes amid the unprecedented financial rise of its main rival, Anthropic.
Just a few days ago, Anthropic revealed that its annualized recurring revenue (ARR) has exceeded $30 billion, surpassing OpenAI’s latest reported ARR of approximately $24-$25 billion.
This growth has been driven by the mass adoption of Claude Code and Claude Cowork, products that have set the benchmark for enterprise-grade headless coding.
Competitive friction intensified on April 4, 2026, when Anthropic officially blocked Claude’s subscriptions from being used to provide intelligence for third-party AI harnesses like OpenClaw.
To be clear, Anthropic’s Claude models can still be used with OpenClaw; users now simply have to pay for access to them through Anthropic’s application programming interface (API) or additional usage credits, rather than as part of Claude’s monthly subscription tiers (which some have likened to an "all-you-can-eat" buffet, one whose economics become a challenge for Anthropic when power users and third-party harnesses like OpenClaw consume more in tokens than the $20 or $200 users spend monthly on those plans).
OpenClaw creator Peter Steinberger, who was notably hired by OpenAI in February 2026 to lead its personal agent strategy, has since joining actively spoken out against Anthropic’s limitations, warning that OpenAI’s Codex and models generally don’t carry the same restrictions Anthropic now imposes.
By hiring Steinberger and subsequently launching a Pro tier that provides the high-volume capability recently restricted by Anthropic, OpenAI is effectively courting the displaced OpenClaw community to recapture the professional developer market.





