GPT models are now available in Command Code.
Use OpenAI's GPT models alongside Claude and open-source models, without juggling API keys, rate limits, or provider accounts. Command Code handles model access, usage tracking, and billing in one place.
In internal use and early user testing, GPT models have been a strong option for real coding tasks inside Command Code, with faster responses than Codex. We have seen good results on larger edits, review passes, and follow-up fixes.
Currently, the lineup includes the following:
| Model | Best for |
|---|---|
| GPT-5.5 | Frontier reasoning, hard refactors, architecture-level review |
| GPT-5.4 | General-purpose coding and complex problem solving |
| GPT-5.4 Mini | Fast everyday edits, lookups, and tight feedback loops |
| GPT-5.3 Codex | Older-generation model for heavy code generation |
You can switch to a GPT model in interactive mode:
```
/model gpt-5.5
```

## Ship faster with GPT models in Command Code
Two things shape the experience here: a coding-focused harness, and a continuously learned taste that steers the model toward your codebase's patterns instead of generic defaults.
## A harness built for code
The harness around a model affects how much useful work you get from each turn. Command Code is built for writing and reviewing code, with tools the model can plan against, a prompt cache that is warm by default, and compact context that spends tokens on your code instead of harness overhead.
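To make that concrete, here is a purely illustrative sketch of the compact-context idea. None of the names below (`ToolSpec`, `buildContext`) are Command Code internals, and the real harness is certainly more involved; this only shows why a stable prefix and a code-heavy budget matter.

```ts
// Purely illustrative sketch: not Command Code's actual architecture.
interface ToolSpec {
  name: string;
  description: string; // kept terse so tokens go to code, not harness prose
}

function buildContext(
  tools: ToolSpec[],
  files: string[],
  codeBudgetChars: number,
): string {
  // A stable prefix (tool specs in a fixed order) keeps the prompt cache
  // warm: identical leading tokens across turns can be served from cache.
  const stablePrefix = tools
    .map((t) => `${t.name}: ${t.description}`)
    .join("\n");
  // The rest of the window is spent on the user's code.
  const codeSection = files.join("\n\n").slice(0, codeBudgetChars);
  return `${stablePrefix}\n\n${codeSection}`;
}

// Usage: most of the budget goes to the files under edit.
const prompt = buildContext(
  [{ name: "edit_file", description: "Apply a patch to a file" }],
  ["// src/index.ts ..."],
  12_000,
);
```

Keeping the prefix identical across turns is what lets the prompt cache do its work; the variable part of the prompt is mostly your code.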
## Taste reduces cleanup work
When you write code, you make a steady stream of small decisions: naming, when to extract a helper, named or default exports, `commander` or `process.argv`, how you wrap errors. Those decisions add up to the conventions of your codebase. LLMs default to broad patterns from public training data. Your project usually needs something narrower.
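As a concrete (and purely illustrative) example, here are two versions of the same route: the broad default a model might produce, and the version a hypothetical codebase's conventions call for, mirroring the profile shown below. `getUser` is a stand-in; neither version comes from a real project.

```ts
// Two alternative takes on the same route, side by side for contrast.
declare function getUser(id: string): Promise<unknown>;

// --- The broad public-code default: Express, default export ---
import express from "express";
const genericApp = express();
genericApp.get("/users/:id", async (req, res) => {
  res.json(await getUser(req.params.id));
});
export default genericApp;

// --- This codebase's conventions: Hono, named exports, Zod with
// --- .strict(), schema extracted outside the handler
import { Hono } from "hono";
import { z } from "zod";

const paramsSchema = z.object({ id: z.string() }).strict();

export const app = new Hono();
app.get("/users/:id", async (c) => {
  const { id } = paramsSchema.parse(c.req.param());
  return c.json(await getUser(id));
});
```

Both versions work. Only one of them lands in your codebase without a cleanup pass.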
Command Code's taste-1 layer treats every accept, reject, and edit as supervision, and continuously distills a profile of constraints with confidence scores:
```
## Frameworks
- Use Hono over Express. Confidence: 0.90
- Use native fetch (no axios). Confidence: 0.88
- Use cli-alerts for CLI output. Confidence: 0.85
- Use shadcn/ui components. Confidence: 0.90

## Exports
- Use named exports. Confidence: 0.88
- Avoid default exports except for page components. Confidence: 0.85

## Validation
- Use Zod with .strict() on object schemas. Confidence: 0.82
- Extract schemas outside handlers. Confidence: 0.82
```

You do not write any of this manually. Command Code's taste-1 generates and maintains it from your accepts, rejects, and edits. Every accept reinforces a constraint. Every reject weakens one. Edits you make after accepting are diffed and can become new constraints. There is no separate training step or manual setup. Run `cmd` and it starts learning from the first session.
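Mechanically, you can picture each signal as a small confidence update per constraint. This is a minimal sketch under assumed behavior; taste-1's actual update rule is not published, and every name here is hypothetical.

```ts
// Hypothetical sketch of taste-1 style supervision. The real update rule
// is not documented; this only illustrates the accept/reject dynamic.
interface Constraint {
  text: string;       // e.g. "Use named exports"
  confidence: number; // 0..1, as shown in the taste profile
}

const LEARNING_RATE = 0.1;

// An accept nudges confidence toward 1; a reject nudges it toward 0.
function update(c: Constraint, signal: "accept" | "reject"): Constraint {
  const target = signal === "accept" ? 1 : 0;
  return {
    ...c,
    confidence: c.confidence + LEARNING_RATE * (target - c.confidence),
  };
}

// Edits made after accepting are diffed; a recurring diff pattern can seed
// a brand-new constraint at low initial confidence.
function seedFromEdit(diffSummary: string): Constraint {
  return { text: diffSummary, confidence: 0.5 };
}
```

Under a rule like this, repeated accepts push a constraint toward the high confidences shown above, while a run of rejects is enough to sideline it.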
Command Code splits the taste profile into packages by area, such as APIs, frontend, and CLIs, and keeps each one current. That gives GPT models a better shot at matching your patterns on the first pass. You can inspect or edit anything it has learned at `.commandcode/taste/`.
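Since the learned profile lives on disk, inspecting it can be as simple as reading the directory. A minimal sketch, assuming the packages are plain markdown files under `.commandcode/taste/` (the exact file layout is an assumption):

```ts
// Hypothetical: dump each per-area taste package to the console.
// Assumes plain markdown files; actual names and format may differ.
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";

const dir = ".commandcode/taste";
for (const entry of await readdir(dir)) {
  console.log(`=== ${entry} ===`);
  console.log(await readFile(join(dir, entry), "utf8"));
}
```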
The result is less cleanup work and a more ship-ready PR, which leads naturally to the next step: review.
## GPT models with Code Review tools in Command Code
On our team, we prefer cross-model review. The model that wrote the code carries the assumptions that produced it. A different model in a fresh session reads the diff cold, with no attachment to the choices it sees. In our internal use, GPT models have worked well for reviewing code produced by Claude or open-source models inside Command Code.
Two commands drive the workflow:
`/review` fetches the current branch or a specific PR, reads the full diff, applies your taste profile, and returns a structured score instead of a wall of comments.
```
~/project
────────────────────────────────────────────

BRANCH (PR #142)
└─ 12 files changed, +340 -89

∴ Score

┌─────────────────┬───────┐
│ Dimension       │ Score │
├─────────────────┼───────┤
│ Correctness     │ 4/5   │
│ Conventions     │ 3/5   │
│ Test Coverage   │ 2/5   │
│ Overall         │ 4/5   │
└─────────────────┴───────┘
```

`/pr-comments` pulls every open comment on the PR (issue comments and inline review comments), explains what each one is asking for, and flags what has already been addressed.
The workflow is simple: implement with one model, then review with GPT in a fresh session.
```
# Build with whichever model fits the task
/model kimi-k2.6
> implement the new endpoint and tests

# New session, switch to GPT for the review pass
/model gpt-5.5
/review
> apply the fixes for the three blocking items
/pr-comments
```

Your taste profile carries across the model swap, so GPT reviews your code against your patterns instead of generic defaults. That makes the review more consistent with the way your team already works.
## Start using GPT models in Command Code
Get started by installing Command Code:
```
npm i -g command-code
```

Run `cmd` in your project, then pick a GPT model:
```
/model gpt-5.5
```

Next steps:
- Sign up at commandcode.ai
- Read the taste docs to see how your coding taste is learned and applied
- Explore pricing and the full docs

