Codex rate card | OpenAI Help Center


Learn how Codex credit rates work across Plus, Pro, Business, and Enterprise/Edu plans.

Overview

This article outlines the current credit rates for Codex, under the flexible pricing structure for Plus, Pro, Business, and Enterprise/Edu plans.

Learn more about credits in ChatGPT Plus and Pro.

Learn more about credits in ChatGPT Business, Enterprise, and Edu.

Codex rate card: token-based pricing

Codex usage is priced based on API token usage, calculated as credits per million input tokens, cached input tokens, and output tokens. Learn more about tokens here.

This format replaces average per-message estimates with a direct mapping between token usage and credits. It is most useful when you want a clearer view of how input, cached input, and output affect credit consumption.

Under this model, actual credit usage depends on the mix of input, cached input, and output tokens in each task. The table below displays credits per 1M tokens for each token type.

Note:

  • Fast mode consumes 2x as many credits.

  • Code review uses GPT-5.3-Codex.

  • GPT-5.3-Codex-Spark may be available in Codex as a research preview; credit rates for this model are not final.

  • Read about Codex usage rate limits.
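As an illustration of how token-based billing composes, the sketch below applies per-million-token rates to a task's token counts and the 2x fast-mode multiplier noted above. The rate values are hypothetical placeholders; the actual credits per 1M tokens for each model come from the rate card table.

```python
# Hypothetical sketch of token-based credit math.
# RATES values are placeholders, NOT actual Codex rates --
# see the rate card table for the real credits per 1M tokens.
RATES = {
    "input": 10.0,         # credits per 1M input tokens (placeholder)
    "cached_input": 1.0,   # credits per 1M cached input tokens (placeholder)
    "output": 40.0,        # credits per 1M output tokens (placeholder)
}

FAST_MODE_MULTIPLIER = 2   # fast mode consumes 2x as many credits

def credits_used(input_tokens, cached_input_tokens, output_tokens, fast_mode=False):
    """Estimate credits for one task from its token counts."""
    credits = (
        input_tokens / 1_000_000 * RATES["input"]
        + cached_input_tokens / 1_000_000 * RATES["cached_input"]
        + output_tokens / 1_000_000 * RATES["output"]
    )
    if fast_mode:
        credits *= FAST_MODE_MULTIPLIER
    return credits

# A task with 200k input, 500k cached input, and 50k output tokens:
print(round(credits_used(200_000, 500_000, 50_000), 2))                  # → 4.5
print(round(credits_used(200_000, 500_000, 50_000, fast_mode=True), 2))  # → 9.0
```

The same token counts cost twice as many credits in fast mode, which is why monitoring fast mode usage matters for managing consumption.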

On average, Codex costs ~$100-$200 per developer per month, though there is large variance depending on the model used, the number of instances a user is running, automations, and fast mode usage. Read more about best practices for maximizing your rate limits and managing token consumption.

You can monitor your workspace's token usage in Codex settings > Usage panel. 

Legacy rate card

Existing Plus/Pro and Enterprise/Edu customers should continue to use the legacy rate card displayed below until we migrate you to the new rates in the future.

Plus/Pro and Edu users should monitor this rate card and our release notes pages for information on when the new rates apply.

Specifics of the migration, including timelines, will be provided to Enterprise admins and owners by email. Contact your OpenAI sales representative if you have questions about the migration.

The legacy rate card expresses Codex usage as approximate average credits per message or pull request. These averages are useful for rough planning, but actual credit usage can vary based on task size, model choice, and reasoning requirements.

These averages also apply to legacy GPT-5.2, GPT-5.2-Codex, GPT-5.1, GPT-5.1-Codex-Max, GPT-5, GPT-5-Codex, and GPT-5-Codex-Mini.

FAQ

Why are there two Codex rate cards?

We’ve moved our pricing from credits per message to credits per token type consumed. OpenAI supports both the legacy rate card and the updated token-based rate card; the applicable version depends on your workspace's migration status.

Which rate card should I use?

New and existing ChatGPT Business customers, and new ChatGPT Enterprise customers should use the token-based pricing rate card. Customers on all other plans should use the legacy rate card. We’ll continue to update this page over time as we migrate your plan to the new rates.

What changed in the updated token-based rate card?

The legacy rate card shows approximate average credits per message or pull request. The updated token-based rate card shows credits by token type and converts API-priced usage into credits.

Why is the rate card being changed?

Credits remain the core pricing unit that customers purchase and consume. The updated token-based format makes credit usage easier to map to actual model activity, aligns Codex pricing more closely with token-based metering, and gives clearer visibility into how input, cached input, and output contribute to total usage.

How does this affect my pricing?

The impact depends on your workload mix. Some users may see higher credit consumption, while others may see lower credit consumption, depending on how much input, cached input, and output their tasks use. Output-heavy tasks and fast mode generally consume more credits than lighter tasks.
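To make the workload-mix point concrete, the sketch below compares two hypothetical tasks with the same total token count but different mixes, using placeholder per-million-token rates (not actual Codex rates):

```python
# Placeholder credits per 1M tokens -- illustrative only,
# NOT actual Codex rates; see the rate card table.
RATES = {"input": 10.0, "cached_input": 1.0, "output": 40.0}

def credits(mix):
    """Total credits for a task given its token counts by type."""
    return sum(mix[kind] / 1_000_000 * RATES[kind] for kind in RATES)

# Two hypothetical tasks, each consuming 1M tokens in total:
cached_heavy = {"input": 100_000, "cached_input": 850_000, "output": 50_000}
output_heavy = {"input": 100_000, "cached_input": 400_000, "output": 500_000}

print(round(credits(cached_heavy), 2))  # → 3.85
print(round(credits(output_heavy), 2))  # → 21.4
```

Under these placeholder rates, the output-heavy task consumes several times more credits than the cache-heavy one despite identical total token counts, which is why per-token-type rates can shift costs up or down relative to per-message averages.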