Models

GPT-5.5

TL;DR

Premium

OpenAI's latest frontier model for general complex work.

Available on

Pro, Max, Ultra, Teams, Enterprise

Not on the Go plan ($1/mo, open-source models only).

Switch with

/model

Pick GPT-5.5 from the selector.

Intelligence Index

GPT-5.5 vs the OpenAI lineup:

GPT-5.5: 60
GPT-5.4: 57
GPT-5.3 Codex: 54

Speed: ~65 tokens / sec
Input: $5 per M tokens
Output: $30 per M tokens

GPT-5.5 in Command Code

GPT-5.5 is OpenAI's latest frontier model and the highest-Intelligence-Index model in Command Code at 60. It is tuned for general complex work where reasoning quality is the dominant factor, and it is the model to reach for when GPT-5.4 hits its ceiling.

GPT-5.5 vs the Command Code lineup

Quality, speed, and pricing for GPT-5.5 alongside the most relevant alternatives.

| Model | Intelligence | Speed | Input $/M | Output $/M |
| --- | --- | --- | --- | --- |
| GPT-5.5 | 60 | ~65 tok/s | $5.00 | $30.00 |
| GPT-5.4 | 57 | ~84 tok/s | $2.50 | $15.00 |
| Claude Opus 4.7 | 57 | ~49 tok/s | $5.00 | $25.00 |
| GPT-5.3 Codex | 54 | ~72 tok/s | $2.00 | $8.00 |
| Kimi K2.6 | 54 | ~40 tok/s | $0.95 | $4.00 |
| Claude Sonnet 4.6 | 52 | ~62 tok/s | $3.00 | $15.00 |
| GPT-5.4 Mini | 49 | ~164 tok/s | $0.75 | $4.50 |

What GPT-5.5 is best for

The hardest reasoning work, ambiguous specs, multi-step agent runs that depend on getting each step exactly right, and senior code review on critical changes.

When to switch away from GPT-5.5

Switch to GPT-5.4

For everyday OpenAI work at half the input price ($2.50 vs $5) and roughly 30% faster output (~84 vs ~65 tok/s). Intelligence drops from 60 to 57.

Switch to Claude Opus 4.7

For the Claude family on the hardest work. Intelligence 57 vs 60, similar input price ($5), cheaper output ($25 vs $30 / M).

Switch to GPT-5.3 Codex

For long coding sessions where output economics dominate ($8 vs $30 / M output).

Switch to GPT-5.4 Mini

For fast everyday lookups and small edits. Mini runs ~164 tok/s at $0.75 / M input.

In Command Code: caching and taste-1

Two things change the experience of using this model inside Command Code versus calling it directly through the upstream API.

First, prompt caching is on by default. In an agent loop the same context is read across many steps; cache reads are billed at $0.50 per million tokens versus $5.00 for fresh input.
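As a rough illustration of what default caching does to input cost, the sketch below prices a multi-step agent run at the rates above ($5.00/M fresh input, $0.50/M cache reads). The step count and context size are hypothetical.

```python
# Rough input-cost sketch for an agent loop that re-reads the same context.
# Rates from this page: $5.00/M fresh input, $0.50/M cache reads.
FRESH_PER_M = 5.00
CACHED_PER_M = 0.50

def input_cost(steps: int, context_tokens: int, cached: bool = True) -> float:
    """Dollar cost of reading the same context on every step of the loop."""
    first = context_tokens / 1e6 * FRESH_PER_M          # first read is always fresh
    rate = CACHED_PER_M if cached else FRESH_PER_M
    rest = (steps - 1) * context_tokens / 1e6 * rate    # later reads can hit the cache
    return first + rest

# Hypothetical 20-step run over a 100k-token context:
print(f"${input_cost(20, 100_000, cached=False):.2f}")  # every read billed fresh
print(f"${input_cost(20, 100_000, cached=True):.2f}")   # cache reads after step 1
```

With these assumed numbers, caching cuts the input bill from $10.00 to $1.45, which is why it matters most on long agent loops.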

Second, taste-1 sits between the model and the agent loop, rewriting and reranking candidate edits to match your codebase conventions. Each plan ships with a taste-1 usage allowance that scales by tier (Go $100 → Ultra $10,000).

Plan availability

Premium model. Available on Pro ($15/mo), Max ($100/mo), Ultra ($200/mo), Teams ($40/mo per seat), and Enterprise. Not on the Go plan.

All Command Code models, ranked by quality and speed

Quality is the Intelligence Index — an aggregate score across reasoning, math, coding, and knowledge evaluations. Speed is reported output tokens per second. Models without a published score are noted.

| Model | Tier | Intelligence Index | Output speed |
| --- | --- | --- | --- |
| GPT-5.5 | Premium | 60 | ~65 tok/s |
| Claude Opus 4.7 | Premium | 57 | ~49 tok/s |
| GPT-5.4 | Premium | 57 | ~84 tok/s |
| GPT-5.3 Codex | Premium | 54 | ~72 tok/s |
| Kimi K2.6 | Open-source | 54 | ~40 tok/s |
| Claude Sonnet 4.6 | Premium | 52 | ~62 tok/s |
| DeepSeek V4 Pro | Open-source | 52 | ~35 tok/s |
| GLM-5 | Open-source | 50 | ~61 tok/s |
| GPT-5.4 Mini | Premium | 49 | ~164 tok/s |
| DeepSeek V4 Flash | Open-source | 47 | ~82 tok/s |
| Claude Haiku 4.5 | Premium | 37 | ~97 tok/s |
| Kimi K2.5 | Open-source | 37 | ~35 tok/s |
| Claude Opus 4.6 | Premium | Not yet scored | |
| MiniMax M2.5 | Open-source | Not yet scored | |

Switching models with /model

In an interactive Command Code session, run /model to open the model selector. Pick the model you want and it applies to this session and to future sessions until you change it again. Premium models require Pro or higher; open-source models are available on every plan, including Go.

```shell
cmd               # start an interactive session
/model            # open the selector and pick a model
```

Plans and pricing

Command Code is a subscription with model usage at API rates. Each plan ships with monthly LLM credits and a separate taste-1 usage allowance that scales by tier. Credits roll over and never expire. Auto top-up keeps you running if you go over.

| Plan | Price/mo | LLM credits | taste-1 usage | Models |
| --- | --- | --- | --- | --- |
| Go | $1 | $10 | $100 | Open-source only |
| Pro | $15 | $30 | $500 | Open-source + premium |
| Max | $100 | $150 | $5,000 | Open-source + premium |
| Ultra | $200 | $300 | $10,000 | Open-source + premium |
| Teams | $40 / seat | Pooled | $1,000 | Open-source + premium |
| Enterprise | Custom | Custom | Custom | Custom pool, SSO, audit logs |

Frequently asked questions

GPT-5.5 or GPT-5.4?

GPT-5.5 scores 60 vs GPT-5.4 at 57, at twice the input price ($5 vs $2.50) and twice the output price ($30 vs $15). 5.4 also runs faster (~84 vs ~65 tok/s). Use 5.5 for the hardest work and switch back to 5.4 for everyday tasks.
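To make the trade concrete, here is a back-of-the-envelope per-task cost at the rates quoted above; the token counts are hypothetical.

```python
# Per-task cost at the quoted per-million rates. Token counts are hypothetical.
RATES = {
    "GPT-5.5": {"input": 5.00, "output": 30.00},
    "GPT-5.4": {"input": 2.50, "output": 15.00},
}

def task_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request: tokens scaled to millions times the rate."""
    r = RATES[model]
    return input_tokens / 1e6 * r["input"] + output_tokens / 1e6 * r["output"]

# A task reading 30k tokens of context and writing a 4k-token answer:
for model in RATES:
    print(f"{model}: ${task_cost(model, 30_000, 4_000):.4f}")
```

With these assumed token counts the gap tracks the rate sheet exactly: GPT-5.5 comes to $0.27 per task against $0.135 for GPT-5.4, a clean 2x.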

GPT-5.5 or Claude Opus 4.7?

GPT-5.5 leads at Intelligence 60 vs Opus 4.7 at 57. Same $5 input, but Opus is cheaper on output ($25 vs $30 / M). Switch with /model based on which family you trust on the task.

Is GPT-5.5 worth the price premium?

On the hardest agent runs and reasoning-heavy tasks, yes. On commodity coding, GPT-5.4 or Claude Sonnet 4.6 offers a better cost-quality trade-off.

Which Command Code model should I use?

Claude Sonnet 4.6 is the recommended default. Switch to GPT-5.5 (Intelligence Index 60) for the absolute hardest reasoning, or Claude Opus 4.7 / GPT-5.4 (both 57) for top-tier work at lower cost. For fast lookups, Claude Haiku 4.5 or GPT-5.4 Mini. For open-source, Kimi K2.6 leads the open-weights tier (Intelligence Index 54).

Can I mix GPT-5.5 with other models in a workflow?

Yes. Switch per session using /model. Common pattern: keep Sonnet 4.6 as the default and switch up to Opus 4.7 or down to Haiku 4.5 as the task calls for it.

Are open-source model prices fixed?

Open-source models are routed across multiple upstream providers for high availability. The price listed for each is the mean per-provider rate. Actual cost on a given request may vary slightly. The Usage page reflects the price charged.
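As a sketch of how a listed open-source price is derived under that policy (the provider names and per-provider rates below are hypothetical):

```python
# The listed price for an open-source model is the mean of per-provider rates.
# Provider names and rates here are hypothetical illustrations.
provider_rates = {"provider-a": 0.90, "provider-b": 1.00, "provider-c": 0.95}

listed_price = sum(provider_rates.values()) / len(provider_rates)  # $/M tokens
print(f"${listed_price:.2f} / M tokens")
```

A single request routed to provider-b would bill at its own rate, slightly above the listed mean, which is the "may vary slightly" caveat above.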

Is Command Code free to try?

The Go plan starts at $1/mo with $10 in LLM credits and $100 of taste-1 usage. It covers open-source models only. Pro at $15/mo unlocks premium models with $30 in LLM credits and $500 of taste-1 usage.

Does Command Code train on my code?

No. Command Code does not train on your code or store your code snippets. taste-1 data is stored locally in your project directory.

Where can I track my usage?

The Usage page in Studio shows per-request cost, token counts, and which model ran. Settings > Billing lets you change plans, buy credits, or enable auto top-up.

Does Command Code replace my editor?

No. Command Code is editor-agnostic — it runs as a CLI and works alongside any editor (Cursor, VS Code, Zed, JetBrains, Neovim, etc.).

Ship code that matches your taste

Command Code is the AI coding agent that continuously learns your taste. Start for $1.