
GitHub has announced that it will switch to a usage-based billing model for its GitHub Copilot AI service starting June 1. The move is billed as a way to “better align pricing with actual usage” and a necessary step to keep Copilot financially sustainable amid growing demand for limited AI computing resources.
GitHub Copilot subscribers currently receive an allocation of monthly “requests” and “premium requests” that are spent every time they ask Copilot for help from an AI model. But those broad categories cover many different AI tasks with a wide range of backend computing costs, GitHub says.
“Today, a quick question in a chat and a self-contained coding session lasting several hours can cost the user the same amount,” the Microsoft-owned company wrote in its announcement. And while GitHub says it has “absorbed much of the growing inference cost behind that usage” up to this point, bundling all “premium requests” together “is no longer sustainable.”
Under the new pricing system, GitHub Copilot subscribers will receive a monthly allocation of “AI Credits” that matches their monthly subscription payment. The price for additional AI usage beyond those credits “will be calculated based on token consumption, including input, output, and cached tokens, using the API fees listed for each model.”
Those API rates can vary greatly depending on the sophistication of the model being used; pricing for OpenAI’s high-end GPT models currently ranges from $4.50 per million output tokens (GPT-5.4 Mini) to $30 per million output tokens (GPT-5.5), for example. The total number of tokens used for an individual AI message can also vary greatly depending on how much “thinking” time the model needs to come up with its result.
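To illustrate the token-based math described above, here is a minimal sketch of how a per-request cost could be computed from token counts and per-million-token rates. The function, the non-output rates, and the token counts are illustrative assumptions; only the two output rates quoted above come from the article, and this is not GitHub's actual billing logic.

```python
def request_cost_usd(input_tokens, output_tokens, cached_tokens,
                     input_rate, output_rate, cached_rate):
    """Cost of one request, given per-million-token rates for each token type.

    GitHub says overage pricing "will be calculated based on token
    consumption, including input, output, and cached tokens" at each
    model's listed API fees; the exact rates here are assumptions.
    """
    return (input_tokens * input_rate
            + output_tokens * output_rate
            + cached_tokens * cached_rate) / 1_000_000

# A long agentic session billed at the article's $30-per-million-output-token
# high-end rate (input/cached rates assumed for illustration):
long_session = request_cost_usd(200_000, 50_000, 100_000,
                                input_rate=10.0, output_rate=30.0,
                                cached_rate=1.0)

# A quick chat question at the article's $4.50-per-million rate:
quick_chat = request_cost_usd(2_000, 500, 0,
                              input_rate=1.0, output_rate=4.50,
                              cached_rate=0.50)

print(f"long session: ${long_session:.4f}")  # → long session: $3.6000
print(f"quick chat: ${quick_chat:.4f}")      # → quick chat: $0.0043
```

The two results show why GitHub objects to flat "premium requests": under token pricing, the hours-long session above costs nearly a thousand times more than the quick question.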
GitHub Copilot subscribers will still be able to use simple AI suggestions, such as code completion and Next Edit, without consuming AI credits. But Copilot code reviews will cost extra in the form of GitHub Actions minutes.





