Claude Sonnet 4.6
TL;DR
Tier: Premium. The recommended default in Command Code for day-to-day coding.
Available on: Pro, Max, Ultra, Teams, and Enterprise. Not on the Go plan ($1/mo, open-source models only).
Switch with: /model. Pick Claude Sonnet 4.6 from the selector.
Intelligence Index, Sonnet 4.6 vs the Claude 4 family: Opus 4.7 at 57, Sonnet 4.6 at 52, Haiku 4.5 at 37.
Speed: ~62 tokens/sec
Input: $3 per M tokens
Output: $15 per M tokens
Claude Sonnet 4.6 in Command Code
Claude Sonnet 4.6 is the mid-tier model in Anthropic's Claude 4 family. It sits between Claude Opus 4.7 (the most intelligent Claude for agents and coding) and Claude Haiku 4.5 (the fastest, cheapest option), and is positioned as the day-to-day workhorse. In Command Code it is the recommended default — strong enough for most tasks without paying Opus prices on every step.
Sonnet 4.6 vs the Command Code lineup
Quality (Intelligence Index), output speed, and per-million-token pricing for Sonnet 4.6 alongside the most relevant alternatives.
| Model | Intelligence | Speed | Input $/M | Output $/M |
|---|---|---|---|---|
| Claude Sonnet 4.6 | 52 | ~62 tok/s | $3.00 | $15.00 |
| Claude Opus 4.7 | 57 | ~49 tok/s | $5.00 | $25.00 |
| Claude Haiku 4.5 | 37 | ~97 tok/s | $1.00 | $5.00 |
| GPT-5.4 | 57 | ~84 tok/s | $2.50 | $15.00 |
| GPT-5.3 Codex | 54 | ~72 tok/s | $2.00 | $8.00 |
| DeepSeek V4 Pro | 52 | ~35 tok/s | $1.74 | $3.48 |
| Kimi K2.6 | 54 | ~40 tok/s | $0.95 | $4.00 |
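To make the pricing trade-offs in the table concrete, here is a minimal sketch that estimates per-task cost from those per-million-token rates. The task size (100k input, 10k output tokens) is a hypothetical example, not a measured workload:

```python
# Estimate cost of one hypothetical coding task: 100k input + 10k output tokens.
# Prices ($ per million tokens) are taken from the comparison table above.
PRICES = {
    "Claude Sonnet 4.6": (3.00, 15.00),
    "Claude Opus 4.7":   (5.00, 25.00),
    "Claude Haiku 4.5":  (1.00, 5.00),
    "GPT-5.4":           (2.50, 15.00),
    "DeepSeek V4 Pro":   (1.74, 3.48),
}

def task_cost(input_tokens, output_tokens, input_price, output_price):
    """Dollar cost of one request at per-million-token rates."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

for model, (inp, out) in PRICES.items():
    print(f"{model}: ${task_cost(100_000, 10_000, inp, out):.3f}")
```

At this task shape, Sonnet 4.6 comes out to $0.45 versus $0.75 for Opus 4.7, which is where the "roughly 1.7× the price" figure below comes from.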
What Sonnet 4.6 is best for
Feature work, bug fixes, medium refactors, test writing, and code review — the everyday coding tasks that make up most Command Code sessions.
When to switch away from Sonnet 4.6
Switch to Claude Opus 4.7
When the task is reasoning-heavy: ambiguous specs, large multi-file refactors, or complex multi-step agent runs. Opus 4.7 scores 57 versus Sonnet's 52, at roughly 1.7× the price.
Switch to Claude Haiku 4.5
For quick lookups, small edits, or high-volume mechanical work where latency dominates value. Haiku runs at ~97 tok/s vs Sonnet's ~62, and its input price is a third of Sonnet's ($1 vs $3 per M tokens).
Switch to GPT-5.4
When you want a second opinion or prefer the OpenAI side. GPT-5.4 ties Opus at 57 and runs faster than Sonnet at ~84 tok/s, with a slightly lower input price.
Switch to DeepSeek V4 Pro
For cost-sensitive work that still needs strong reasoning. V4 Pro matches Sonnet at 52 and costs about half on input ($1.74 vs $3.00). Available on every plan including Go.
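The speed differences above translate directly into wall-clock time on longer responses. A rough sketch, using the ~tok/s figures quoted in this section (the 2,000-token response length is a hypothetical example, and time-to-first-token is ignored):

```python
# Rough latency estimate for streaming a 2,000-token response,
# using the approximate output speeds quoted above.
SPEEDS = {"Claude Sonnet 4.6": 62, "Claude Haiku 4.5": 97, "GPT-5.4": 84}

def generation_seconds(output_tokens, tokens_per_second):
    """Output streaming time only; ignores time-to-first-token."""
    return output_tokens / tokens_per_second

for model, tps in SPEEDS.items():
    print(f"{model}: ~{generation_seconds(2_000, tps):.0f}s")
```

On this shape, Haiku finishes in roughly 21 seconds versus about 32 for Sonnet, which is the gap that makes it worth switching down for mechanical work.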
In Command Code: caching and taste-1
Two things change the experience of using this model inside Command Code versus calling it directly through the upstream API.
First, prompt caching is on by default. In an agent loop the same context is read across many steps; cache reads are billed at $0.30 per million tokens versus $3.00 for fresh input.
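The caching arithmetic can be sketched as follows. The loop shape (a 50k-token context re-read across 20 steps) is hypothetical, and this ignores any cache-write premium, which the text does not price:

```python
# Sketch of prompt-caching savings in an agent loop. Rates from the text:
# $3.00/M for fresh input, $0.30/M for cache reads. Loop shape is hypothetical.
FRESH_RATE = 3.00 / 1_000_000   # dollars per fresh input token
CACHED_RATE = 0.30 / 1_000_000  # dollars per cache-read token

context_tokens = 50_000
steps = 20

# Without caching, the full context is billed as fresh input on every step.
uncached = context_tokens * steps * FRESH_RATE
# With caching, the context is billed fresh once, then at the cache-read rate.
cached = context_tokens * FRESH_RATE + context_tokens * (steps - 1) * CACHED_RATE

print(f"uncached: ${uncached:.2f}, cached: ${cached:.2f}")
```

Under these assumptions the context cost drops from $3.00 to about $0.44 over the run, which is why caching matters more inside an agent loop than for one-off API calls.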
Second, taste-1 sits between the model and the agent loop, rewriting and reranking candidate edits to match your codebase conventions. Each plan ships with a taste-1 usage allowance that scales by tier (Go $100 → Ultra $10,000).
Plan availability
Premium model. Available on Pro ($15/mo), Max ($100/mo), Ultra ($200/mo), Teams ($40/mo per seat), and Enterprise. Not on the Go plan.
All Command Code models, ranked by quality and speed
Quality is the Intelligence Index — an aggregate score across reasoning, math, coding, and knowledge evaluations. Speed is reported output tokens per second. Models without a published score are noted.
| Model | Tier | Intelligence Index | Output speed |
|---|---|---|---|
| GPT-5.5 | Premium | 60 | ~65 tok/s |
| Claude Opus 4.7 | Premium | 57 | ~49 tok/s |
| GPT-5.4 | Premium | 57 | ~84 tok/s |
| GPT-5.3 Codex | Premium | 54 | ~72 tok/s |
| Kimi K2.6 | Open-source | 54 | ~40 tok/s |
| Claude Sonnet 4.6 | Premium | 52 | ~62 tok/s |
| DeepSeek V4 Pro | Open-source | 52 | ~35 tok/s |
| GLM-5 | Open-source | 50 | ~61 tok/s |
| GPT-5.4 Mini | Premium | 49 | ~164 tok/s |
| DeepSeek V4 Flash | Open-source | 47 | ~82 tok/s |
| Claude Haiku 4.5 | Premium | 37 | ~97 tok/s |
| Kimi K2.5 | Open-source | 37 | ~35 tok/s |
| Claude Opus 4.6 | Premium | Not yet scored | — |
| MiniMax M2.5 | Open-source | Not yet scored | — |
Switching models with /model
In an interactive Command Code session, run /model to open the model selector. Pick the model you want and it applies to this session and to future sessions until you change it again. Premium models require Pro or higher; open-source models are available on every plan, including Go.
cmd # start an interactive session
/model # open the selector and pick a model
Plans and pricing
Command Code is a subscription with model usage at API rates. Each plan ships with monthly LLM credits and a separate taste-1 usage allowance that scales by tier. Credits roll over and never expire. Auto top-up keeps you running if you go over.
| Plan | Price/mo | LLM credits | taste-1 usage | Models |
|---|---|---|---|---|
| Go | $1 | $10 | $100 | Open-source only |
| Pro | $15 | $30 | $500 | Open-source + premium |
| Max | $100 | $150 | $5,000 | Open-source + premium |
| Ultra | $200 | $300 | $10,000 | Open-source + premium |
| Teams | $40 / seat | Pooled | $1,000 | Open-source + premium |
| Enterprise | Custom | Custom | Custom | Custom pool, SSO, audit logs |
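As a rough sketch of how far a plan's LLM credits go on Sonnet 4.6 (not available on Go), using the $3/$15 rates above. The 10:1 input:output token mix is a hypothetical assumption, not a published figure:

```python
# How many total tokens a plan's monthly LLM credits buy at Sonnet 4.6 rates,
# assuming a hypothetical 10:1 input:output token mix.
INPUT_RATE, OUTPUT_RATE = 3.00, 15.00  # $ per million tokens

def tokens_for_budget(budget, in_frac=10 / 11, out_frac=1 / 11):
    """Total tokens (input + output) purchasable for `budget` dollars."""
    blended = in_frac * INPUT_RATE + out_frac * OUTPUT_RATE  # $ per M tokens
    return budget / blended * 1_000_000

for plan, credits in [("Pro", 30), ("Max", 150), ("Ultra", 300)]:
    print(f"{plan}: ~{tokens_for_budget(credits) / 1e6:.1f}M tokens")
```

Under this mix, Pro's $30 in credits works out to roughly 7.3M Sonnet tokens per month before prompt-caching discounts, which stretch the budget further in agent loops.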
Frequently asked questions
Is Sonnet 4.6 the right default in Command Code?
For most users, yes. It is the strongest balance of quality and speed in the Claude family. Switch up to Opus 4.7 when reasoning quality matters more than cost, or down to Haiku 4.5 when latency wins.
How does Sonnet 4.6 compare to Claude Opus 4.7?
Opus 4.7 scores 57 versus Sonnet 4.6 at 52, at roughly 1.7× the price ($5 vs $3 input, $25 vs $15 output). Sonnet runs faster (~62 vs ~49 tok/s). For most everyday work, Sonnet is the better cost-quality trade.
How does Sonnet 4.6 compare to GPT-5.4?
GPT-5.4 scores higher (57 vs 52) and runs faster (~84 vs ~62 tok/s) at a slightly lower input price ($2.50 vs $3.00). Sonnet 4.6 has stronger prompt caching that Command Code activates automatically.
Which Command Code model should I use?
Claude Sonnet 4.6 is the recommended default. Switch to GPT-5.5 (Intelligence Index 60) for the absolute hardest reasoning, or Claude Opus 4.7 / GPT-5.4 (both 57) for top-tier work at lower cost. For fast lookups, Claude Haiku 4.5 or GPT-5.4 Mini. For open-source, Kimi K2.6 leads the open-weights tier (Intelligence Index 54).
Can I mix Claude Sonnet 4.6 with other models in a workflow?
Yes. Switch per session using /model. Common pattern: keep Sonnet 4.6 as the default and switch up to Opus 4.7 or down to Haiku 4.5 as the task calls for it.
Are open-source model prices fixed?
Open-source models are routed across multiple upstream providers for high availability. The price listed for each is the mean per-provider rate. Actual cost on a given request may vary slightly. The Usage page reflects the price charged.
Is Command Code free to try?
The Go plan starts at $1/mo with $10 in LLM credits and $100 of taste-1 usage. It covers open-source models only. Pro at $15/mo unlocks premium models with $30 in LLM credits and $500 of taste-1 usage.
Does Command Code train on my code?
No. Command Code does not train on your code or store your code snippets. taste-1 data is stored locally in your project directory.
Where can I track my usage?
The Usage page in Studio shows per-request cost, token counts, and which model ran. Settings > Billing lets you change plans, buy credits, or enable auto top-up.
Does Command Code replace my editor?
No. Command Code is editor-agnostic — it runs as a CLI and works alongside any editor (Cursor, VS Code, Zed, JetBrains, Neovim, etc.).
Ship code that matches your taste
Command Code is the AI coding agent that continuously learns your taste. Start for $1.