GPT-5.2-Codex Model | OpenAI API

gpt-5.2-codex

Our most intelligent coding model, optimized for long-horizon, agentic coding tasks.

GPT-5.2-Codex is an upgraded version of GPT-5.2 optimized for agentic coding tasks in Codex or similar environments. GPT-5.2-Codex supports low, medium, high, and xhigh reasoning effort settings. If you want to learn more about prompting GPT-5.2-Codex, refer to our dedicated guide.
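As a minimal sketch of selecting a reasoning effort, the example below uses the OpenAI Python SDK's Responses API; the model name and effort levels come from this page, while the prompt is illustrative.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask for more deliberate reasoning on a long-horizon coding task.
# "xhigh" is one of the effort levels listed above; low, medium, and high also work.
response = client.responses.create(
    model="gpt-5.2-codex",
    reasoning={"effort": "xhigh"},
    input="Refactor this module to remove the circular import and add unit tests.",
)

print(response.output_text)
```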

128,000 max output tokens

Aug 31, 2025 knowledge cutoff

Pricing

Pricing is based on the number of tokens used, or on other metrics depending on the model type. For tool-specific models, such as search and computer use, there is a fee per tool call. See details on the pricing page.
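As a rough illustration of token-based billing, the sketch below estimates a request's cost from its token counts. The per-million-token rates are placeholders, not GPT-5.2-Codex's actual prices; see the pricing page for the real numbers.

```python
# Hypothetical per-million-token rates, used only to illustrate the arithmetic.
INPUT_PRICE_PER_M = 1.25    # USD per 1M input tokens (placeholder)
OUTPUT_PRICE_PER_M = 10.00  # USD per 1M output tokens (placeholder)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single request from its token usage."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    )

# Example: a request that used 12,000 input tokens and 3,500 output tokens.
print(f"${estimate_cost(12_000, 3_500):.4f}")
```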

Endpoints

Chat Completions: v1/chat/completions
Fine-tuning: v1/fine-tuning
Image generation: v1/images/generations
Image edit: v1/images/edits
Speech generation: v1/audio/speech
Transcription: v1/audio/transcriptions
Translation: v1/audio/translations
Completions (legacy): v1/completions
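For the Chat Completions endpoint, a minimal request with the OpenAI Python SDK might look like the sketch below; the messages are illustrative, and this page does not state which of the other endpoints accept this model.

```python
from openai import OpenAI

client = OpenAI()

# Minimal call to the v1/chat/completions endpoint listed above.
completion = client.chat.completions.create(
    model="gpt-5.2-codex",
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Write a Python function that parses RFC 3339 timestamps."},
    ],
)

print(completion.choices[0].message.content)
```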

Features

Function calling: Supported
Structured outputs: Supported
Distillation: Not supported
Predicted outputs: Not supported
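Since function calling is listed as supported, here is a hedged sketch of declaring a tool through the Chat Completions API; the get_file_diff tool and its schema are made up for illustration.

```python
import json
from openai import OpenAI

client = OpenAI()

# A made-up tool the model can choose to call; replace with your own functions.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_file_diff",
            "description": "Return the unified diff for a file in the working tree.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Path to the file."},
                },
                "required": ["path"],
            },
        },
    }
]

completion = client.chat.completions.create(
    model="gpt-5.2-codex",
    messages=[{"role": "user", "content": "What changed in src/app.py?"}],
    tools=tools,
)

# If the model decided to call the tool, its arguments arrive as a JSON string.
for call in completion.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```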

Snapshots

Snapshots let you lock in a specific version of the model so that performance and behavior remain consistent. Below is a list of all available snapshots and aliases for GPT-5.2-Codex.

gpt-5.2-codex

Rate limits

Rate limits ensure fair and reliable access to the API by placing caps on the requests or tokens you can use within a given time period. Your usage tier determines how high these limits are set; your tier increases automatically as you send more requests and spend more on the API.

Tier      RPM       TPM           Batch queue limit
Free      Not supported
Tier 1    500       500,000       1,500,000
Tier 2    5,000     1,000,000     3,000,000
Tier 3    5,000     2,000,000     100,000,000
Tier 4    10,000    4,000,000     200,000,000
Tier 5    15,000    40,000,000    15,000,000,000
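When a request exceeds your tier's RPM or TPM cap, the API returns a rate-limit error. A common pattern is to retry with exponential backoff, as in the sketch below; the delays, attempt count, and prompt are arbitrary.

```python
import time
from openai import OpenAI, RateLimitError

client = OpenAI()

def create_with_backoff(max_attempts: int = 5, base_delay: float = 1.0):
    """Retry a request with exponential backoff when a rate limit is hit."""
    for attempt in range(max_attempts):
        try:
            return client.responses.create(
                model="gpt-5.2-codex",
                input="Summarize the failing test output in tests/test_api.py.",
            )
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

response = create_with_backoff()
print(response.output_text)
```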