Kimi K2.6
TL;DR
Open-source. The top-ranked open-weights model in Command Code — long-horizon coding with vision and design.
Available on
Routed across multiple upstream providers; mean per-provider price.
Switch with
Pick Kimi K2.6 from the selector.
Intelligence Index
K2.6 among open-weights leaders
Kimi K2.6
DeepSeek V4 Pro
GLM-5
Speed
~40
tokens / sec
Input
$0.95
per M tokens
Output
$4.00
per M tokens
Kimi K2.6 in Command Code
Kimi K2.6 is Moonshot's open-weights flagship — the top-ranked open-weights model in the Command Code lineup at Intelligence Index 54. Tuned for long-horizon coding tasks with vision and design inputs.
K2.6 vs the Command Code lineup
Quality, speed, and pricing for K2.6 alongside open-source siblings and premium peers.
| Model | Intelligence | Speed | Input $/M | Output $/M |
|---|---|---|---|---|
| Kimi K2.6 | 54 | ~40 tok/s | $0.95 | $4.00 |
| DeepSeek V4 Pro | 52 | ~35 tok/s | $1.74 | $3.48 |
| GLM-5 | 50 | ~61 tok/s | $1.00 | $3.20 |
| Kimi K2.5 | 37 | ~35 tok/s | $0.60 | $3.00 |
| Claude Sonnet 4.6 | 52 | ~62 tok/s | $3.00 | $15.00 |
| GPT-5.3 Codex | 54 | ~72 tok/s | $2.00 | $8.00 |
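As a rough sketch of what the per-million rates above mean per task, the following estimates cost from token counts. The token counts in the example are illustrative assumptions, not benchmarks:

```python
# Per-task cost estimate from the table's per-million-token prices.
# Token counts used in the example are illustrative assumptions.
PRICES = {  # model: (input $/M tokens, output $/M tokens)
    "Kimi K2.6": (0.95, 4.00),
    "DeepSeek V4 Pro": (1.74, 3.48),
    "GLM-5": (1.00, 3.20),
}

def task_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one task at the listed per-million-token rates."""
    inp, out = PRICES[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

# Example: an agent run consuming 200k input tokens and emitting 20k output tokens.
for model in PRICES:
    print(f"{model}: ${task_cost(model, 200_000, 20_000):.4f}")
```

At these assumed token counts, K2.6's lower input rate roughly offsets its higher output rate against DeepSeek V4 Pro; input-heavy agent runs tilt the comparison further toward K2.6.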
What K2.6 is best for
Long-horizon coding agent runs, multi-turn workflows with persistent context, and tasks that include vision or design inputs.
When to switch away from K2.6
Switch to Claude Opus 4.7 or GPT-5.4
For harder reasoning near the top of the Intelligence Index (57 vs 54); GPT-5.5 leads outright at 60.
Switch to DeepSeek V4 Pro
For long-context reasoning at slightly lower output price ($3.48 vs $4.00).
Switch to Kimi K2.5
For multimodal frontend work at lower cost — but K2.5 scores lower (37 vs 54).
Switch to DeepSeek V4 Flash
For high-volume work where per-task cost dominates.
In Command Code: caching and taste-1
Open-source models are routed across multiple upstream providers for high availability. The price you see is the mean per-provider rate; the Usage page reflects what was actually charged.
Where supported by the upstream, prompt caching is on by default — cache reads are billed at $0.16 per million tokens versus $0.95 for fresh input.
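A quick sketch of what those cache rates imply for input cost, using the listed K2.6 prices ($0.16/M cache reads, $0.95/M fresh input); the session sizes and hit rate are illustrative assumptions:

```python
# Effective input cost under prompt caching, at the listed K2.6 rates.
FRESH_PER_M = 0.95   # $/M tokens for fresh (uncached) input
CACHED_PER_M = 0.16  # $/M tokens for cache reads

def input_cost(fresh_tokens: int, cached_tokens: int) -> float:
    """Dollar cost of one request's input, split into fresh and cached tokens."""
    return (fresh_tokens * FRESH_PER_M + cached_tokens * CACHED_PER_M) / 1_000_000

# Example: a 500k-token prompt, first with no cache, then with 90% cache hits.
no_cache = input_cost(500_000, 0)
mostly_cached = input_cost(50_000, 450_000)
print(f"no cache: ${no_cache:.4f}, 90% cached: ${mostly_cached:.4f}")
```

At a 90% hit rate the input cost drops by roughly three quarters, which is why multi-turn agent runs over a persistent context benefit most.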
taste-1 sits between the model and the agent loop, rewriting and reranking candidate edits to match your codebase conventions. Each plan ships with a taste-1 usage allowance that scales by tier (Go $100 → Ultra $10,000).
Plan availability
Open-source model. Available on every plan, including Go ($1/mo). Routed across multiple upstream providers; listed price is the mean per-provider rate.
All Command Code models, ranked by quality and speed
Quality is the Intelligence Index — an aggregate score across reasoning, math, coding, and knowledge evaluations. Speed is reported output tokens per second. Models without a published score are noted.
| Model | Tier | Intelligence Index | Output speed |
|---|---|---|---|
| GPT-5.5 | Premium | 60 | ~65 tok/s |
| Claude Opus 4.7 | Premium | 57 | ~49 tok/s |
| GPT-5.4 | Premium | 57 | ~84 tok/s |
| GPT-5.3 Codex | Premium | 54 | ~72 tok/s |
| Kimi K2.6 | Open-source | 54 | ~40 tok/s |
| Claude Sonnet 4.6 | Premium | 52 | ~62 tok/s |
| DeepSeek V4 Pro | Open-source | 52 | ~35 tok/s |
| GLM-5 | Open-source | 50 | ~61 tok/s |
| GPT-5.4 Mini | Premium | 49 | ~164 tok/s |
| DeepSeek V4 Flash | Open-source | 47 | ~82 tok/s |
| Claude Haiku 4.5 | Premium | 37 | ~97 tok/s |
| Kimi K2.5 | Open-source | 37 | ~35 tok/s |
| Claude Opus 4.6 | Premium | Not yet scored | — |
| MiniMax M2.5 | Open-source | Not yet scored | — |
Switching models with /model
In an interactive Command Code session, run /model to open the model selector. Pick the model you want and it applies to this session and to future sessions until you change it again. Premium models require Pro or higher; open-source models are available on every plan, including Go.
cmd # start an interactive session
/model # open the selector and pick a model
Plans and pricing
Command Code is a subscription with model usage at API rates. Each plan ships with monthly LLM credits and a separate taste-1 usage allowance that scales by tier. Credits roll over and never expire. Auto top-up keeps you running if you go over.
| Plan | Price/mo | LLM credits | taste-1 usage | Models |
|---|---|---|---|---|
| Go | $1 | $10 | $100 | Open-source only |
| Pro | $15 | $30 | $500 | Open-source + premium |
| Max | $100 | $150 | $5,000 | Open-source + premium |
| Ultra | $200 | $300 | $10,000 | Open-source + premium |
| Teams | $40 / seat | Pooled | $1,000 | Open-source + premium |
| Enterprise | Custom | Custom | Custom | Custom pool, SSO, audit logs |
Frequently asked questions
Why pick K2.6 over closed-source models?
Open-source economics with the top open-weights Intelligence Index (54). For premium reasoning, Claude Opus 4.7 and GPT-5.4 still lead at 57.
K2.6 or DeepSeek V4 Pro?
K2.6 scores higher (54 vs 52) and is the top-ranked open-weights model. V4 Pro is cheaper on output ($3.48 vs $4.00) and tuned for hybrid-attention long-context work.
Which Command Code model should I use?
Claude Sonnet 4.6 is the recommended default. Switch to GPT-5.5 (Intelligence Index 60) for the absolute hardest reasoning, or Claude Opus 4.7 / GPT-5.4 (both 57) for top-tier work at lower cost. For fast lookups, Claude Haiku 4.5 or GPT-5.4 Mini. For open-source, Kimi K2.6 leads the open-weights tier (Intelligence Index 54).
Can I mix Kimi K2.6 with other models in a workflow?
Yes. Switch per session using /model. Common pattern: keep Sonnet 4.6 as the default and switch up to Opus 4.7 or down to Haiku 4.5 as the task calls for it.
Are open-source model prices fixed?
Open-source models are routed across multiple upstream providers for high availability. The price listed for each is the mean per-provider rate. Actual cost on a given request may vary slightly. The Usage page reflects the price charged.
Is Command Code free to try?
The Go plan starts at $1/mo with $10 in LLM credits and $100 of taste-1 usage. It covers open-source models only. Pro at $15/mo unlocks premium models with $30 in LLM credits and $500 of taste-1 usage.
Does Command Code train on my code?
No. Command Code does not train on your code or store your code snippets. taste-1 data is stored locally in your project directory.
Where can I track my usage?
The Usage page in Studio shows per-request cost, token counts, and which model ran. Settings > Billing lets you change plans, buy credits, or enable auto top-up.
Does Command Code replace my editor?
No. Command Code is editor-agnostic — it runs as a CLI and works alongside any editor (Cursor, VS Code, Zed, JetBrains, Neovim, etc.).
Ship code that matches your taste
Command Code is the AI coding agent that continuously learns your taste. Start for $1.