There is something interesting happening with AI coding agents that I think most developers haven’t fully noticed yet.
Most agents write correct code, but very few write your code.
I ran the same 3 prompts through Claude Code (Sonnet 4.6) and Command Code. Same machine. Same codebase. Same afternoon.
Claude Code wrote correct code every time. Command Code wrote my code every time.
The difference is worth seeing.
The Setup
For about 9 days, I’ve been using Command Code (`cmd`) for everyday development.
During that time it has been watching how I work: which suggestions I accept, which I reject, and what I edit after accepting. Every interaction feeds its taste engine.
To test how much this actually matters, I picked three tasks I needed for a side project:
- A CLI that fetches weather data
- A REST API endpoint for user preferences
- A React component for a settings panel
I gave Claude Code (Sonnet 4.6) and Command Code identical one-line prompts. No extra context. No rules files. No CLAUDE.md. Just the prompt.
Then I compared the results.
Test 1: "Build a CLI that fetches and displays current weather for a given city"
Claude Code (Sonnet 4.6) Output
```javascript
const axios = require('axios');

async function getWeather(city) {
  console.log(`Fetching weather for ${city}...`);
  try {
    const response = await axios.get(
      `https://api.openweathermap.org/data/2.5/weather?q=${city}&appid=${process.env.API_KEY}&units=metric`
    );
    const data = response.data;
    console.log(`\nWeather in ${data.name}:`);
    console.log(`Temperature: ${data.main.temp}°C`);
    console.log(`Conditions: ${data.weather[0].description}`);
    console.log(`Humidity: ${data.main.humidity}%`);
  } catch (error) {
    console.error('Error fetching weather:', error.message);
  }
}

const city = process.argv[2] || 'London';
getWeather(city);
```

The output worked. But it immediately revealed something interesting.
- It used `require()` even though I haven’t used CommonJS in years.
- It imported `axios` even though native `fetch` exists.
- It printed results with `console.log`, which I don't use for CLI output.
- It hardcoded a fallback city of `"London"`, which I never do (I'm in San Francisco).
All perfectly reasonable decisions, but just not my decisions.
Command Code Output
```typescript
import { Command } from 'commander';
import alert from 'cli-alerts';

const program = new Command()
  .name('wx')
  .version('0.0.1')
  .description('Fetch current weather for a city')
  .argument('<city>', 'city name')
  .option('-u, --units <type>', 'temperature units', 'metric')
  .action(async (city, opts) => {
    try {
      const res = await fetch(
        `https://api.openweathermap.org/data/2.5/weather?q=${city}&appid=${process.env.WX_API_KEY}&units=${opts.units}`
      );
      const data = await res.json();

      alert({ type: 'success', msg: `${data.name}: ${data.main.temp}° | ${data.weather[0].description}` });
    } catch (err) {
      alert({ type: 'error', msg: `Failed to fetch weather for "${city}"` });
      process.exit(1);
    }
  });

program.parse();
```

This is the interesting part: Command Code's output looked like something I would have written.
- TypeScript, not CommonJS JavaScript.
- Commander.js with a proper argument (not `process.argv[2]`).
- Native `fetch` instead of pulling in `axios`.
- `cli-alerts` for structured, color-coded output.
- Version starting at `0.0.1`.
- Lowercase short flags and `process.exit(1)` on failure.
I didn't tell it any of this. It learned from watching me build 4 other CLIs over the past week.
Test 2: "Create an API endpoint for updating user preferences"
Claude Code (Sonnet 4.6) Output
```javascript
const express = require('express');
const router = express.Router();

router.put('/api/preferences', async (req, res) => {
  try {
    const { userId, preferences } = req.body;

    if (!userId || !preferences) {
      return res.status(400).json({ error: 'Missing required fields' });
    }

    // Update preferences in database
    const updated = await db.users.update({
      where: { id: userId },
      data: { preferences }
    });

    res.json({ success: true, data: updated });
  } catch (error) {
    console.error('Error updating preferences:', error);
    res.status(500).json({ error: 'Internal server error' });
  }
});

module.exports = router;
```

The output here is clean, correct, and pretty much what you’d expect.
- Standard Express, `require()` again, validation sitting inside the handler, generic error messages, and `console.error` for logging.
- Nothing wrong with it, just very typical. It's the code you'd find in the first tutorial result on Google.
Command Code Output
```typescript
import { Hono } from 'hono';
import { zValidator } from '@hono/zod-validator';
import { z } from 'zod';
import { HTTPException } from 'hono/http-exception';
import { logger } from '../lib/logger.js';

const preferencesSchema = z.object({
  theme: z.enum(['light', 'dark', 'system']).optional(),
  language: z.string().min(2).max(5).optional(),
  notifications: z.boolean().optional(),
}).strict();

export const preferencesRoute = new Hono()
  .put('/:userId', zValidator('json', preferencesSchema), async (c) => {
    const userId = c.req.param('userId');
    const prefs = c.req.valid('json');

    const updated = await c.var.db.users.update({
      where: { id: userId },
      data: { preferences: prefs },
    });

    if (!updated) {
      throw new HTTPException(404, { message: `User ${userId} not found` });
    }

    return c.json({ data: updated.preferences });
  });
```

This time the output matched how I have been building APIs recently.
- Hono instead of Express (I switched 6 months ago).
- Zod for validation, extracted into a schema.
- `.strict()` to reject unknown fields.
- Typed HTTP exceptions.
- `userId` in the URL path, not the body.
- Named exports.
- No `try/catch` wrapping the whole thing (Hono's error handler catches it).
Every single one of those choices matches how I've been building APIs in my other projects. Command Code picked up on all of it. Again, none of that appeared in the prompt.
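That last point is a structural difference, not just a stylistic one: with a centralized error handler, route code simply throws and one boundary converts errors to responses. Here's the pattern reduced to a dependency-free sketch (Hono implements this via its own `app.onError` mechanism; `HTTPError` and `withErrorBoundary` below are invented names for illustration):

```typescript
// Dependency-free sketch of the centralized error-handler pattern.
// Illustration only -- not Hono's actual implementation.
class HTTPError extends Error {
  constructor(public status: number, message: string) {
    super(message);
  }
}

type Handler = () => { status: number; body: string };

// One boundary catches everything, so individual handlers skip
// local try/catch and simply throw typed errors.
function withErrorBoundary(handler: Handler): { status: number; body: string } {
  try {
    return handler();
  } catch (err) {
    if (err instanceof HTTPError) return { status: err.status, body: err.message };
    return { status: 500, body: 'Internal Server Error' };
  }
}

const notFound = withErrorBoundary(() => {
  throw new HTTPError(404, 'User 42 not found');
});
console.log(notFound); // { status: 404, body: 'User 42 not found' }
```

The payoff is the same as in the Hono output above: handler bodies stay linear, and error-to-response mapping lives in exactly one place.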
Test 3: "Build a settings panel component with theme toggle and notification preferences"
Claude Code (Sonnet 4.6) Output
```jsx
import React, { useState } from 'react';

function SettingsPanel() {
  const [theme, setTheme] = useState('light');
  const [notifications, setNotifications] = useState(true);

  return (
    <div style={{ padding: '20px', maxWidth: '400px' }}>
      <h2>Settings</h2>

      <div style={{ marginBottom: '16px' }}>
        <label>Theme: </label>
        <select value={theme} onChange={(e) => setTheme(e.target.value)}>
          <option value="light">Light</option>
          <option value="dark">Dark</option>
        </select>
      </div>

      <div style={{ marginBottom: '16px' }}>
        <label>
          <input
            type="checkbox"
            checked={notifications}
            onChange={(e) => setNotifications(e.target.checked)}
          />
          Enable Notifications
        </label>
      </div>

      <button onClick={() => alert('Settings saved!')}>
        Save Settings
      </button>
    </div>
  );
}

export default SettingsPanel;
```

Claude Code's component works, and it leans on the usual defaults:
- Default export.
- Inline styles.
- `alert()` for feedback.
- `function` declaration.
- `onChange` handlers inline in JSX.
But stylistically it was far from what I usually write.
Command Code Output
```tsx
import { useState, useCallback } from 'react';
import { cn } from '@/lib/utils';
import { Switch } from '@/components/ui/switch';
import { Label } from '@/components/ui/label';
import {
  Select,
  SelectContent,
  SelectItem,
  SelectTrigger,
  SelectValue,
} from '@/components/ui/select';
import { toast } from 'sonner';

type Theme = 'light' | 'dark' | 'system';

interface SettingsFormState {
  theme: Theme;
  emailNotifs: boolean;
  pushNotifs: boolean;
}

const DEFAULT_SETTINGS: SettingsFormState = {
  theme: 'system',
  emailNotifs: true,
  pushNotifs: false,
};

export const SettingsPanel = () => {
  const [settings, setSettings] = useState<SettingsFormState>(DEFAULT_SETTINGS);
  const [saving, setSaving] = useState(false);

  const updateField = useCallback(
    <K extends keyof SettingsFormState>(key: K, value: SettingsFormState[K]) => {
      setSettings((prev) => ({ ...prev, [key]: value }));
    },
    []
  );

  const handleSave = useCallback(async () => {
    setSaving(true);
    try {
      await updatePreferences(settings);
      toast.success('Preferences saved');
    } catch {
      toast.error('Failed to save preferences');
    } finally {
      setSaving(false);
    }
  }, [settings]);

  return (
    <section className="mx-auto max-w-md space-y-6 p-6">
      <h2 className="text-lg font-semibold tracking-tight">Settings</h2>

      <div className="space-y-4">
        <div className="flex items-center justify-between">
          <Label htmlFor="theme">Theme</Label>
          <Select
            value={settings.theme}
            onValueChange={(v: Theme) => updateField('theme', v)}
          >
            <SelectTrigger id="theme" className="w-36">
              <SelectValue />
            </SelectTrigger>
            <SelectContent>
              <SelectItem value="light">Light</SelectItem>
              <SelectItem value="dark">Dark</SelectItem>
              <SelectItem value="system">System</SelectItem>
            </SelectContent>
          </Select>
        </div>

        <div className="flex items-center justify-between">
          <Label htmlFor="email-notifs">Email notifications</Label>
          <Switch
            id="email-notifs"
            checked={settings.emailNotifs}
            onCheckedChange={(v) => updateField('emailNotifs', v)}
          />
        </div>

        <div className="flex items-center justify-between">
          <Label htmlFor="push-notifs">Push notifications</Label>
          <Switch
            id="push-notifs"
            checked={settings.pushNotifs}
            onCheckedChange={(v) => updateField('pushNotifs', v)}
          />
        </div>
      </div>

      <button
        onClick={handleSave}
        disabled={saving}
        className="w-full rounded-md bg-zinc-900 px-4 py-2 text-sm text-white hover:bg-zinc-800 disabled:opacity-50"
      >
        {saving ? 'Saving...' : 'Save preferences'}
      </button>
    </section>
  );
};
```

Command Code's generated component followed patterns I use across my React projects.
- TypeScript with proper interfaces.
- shadcn/ui components.
- Tailwind (not inline styles).
- `sonner` for toast notifications.
- A `system` theme option.
- `useCallback` for handlers.
- Loading state on save.
- Named export, arrow-function component.
Command Code knows all of my preferences because it watched me build a dozen of them.
What's Actually Happening Here
Command Code starts from the same LLM foundation. Our secret sauce is taste-1, a neuro-symbolic layer that conditions every generation on patterns it has extracted from your behavior.
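The internals of taste-1 aren't public, but "conditioning on learned patterns" can be pictured as filtering learned rules by confidence and injecting the survivors into the generation context. A purely hypothetical sketch; every name here is invented:

```typescript
// Hypothetical shape of a learned taste rule -- invented for illustration,
// not Command Code's actual data model.
interface TasteRule {
  category: string;   // e.g. "Frameworks"
  rule: string;       // e.g. "Use Hono over Express"
  confidence: number; // 0..1, accumulated from accept/reject/edit signals
}

// Keep only rules the engine is reasonably sure about, highest first,
// and render them as a preamble for the model's context.
function buildTastePreamble(rules: TasteRule[], threshold = 0.75): string {
  return rules
    .filter((r) => r.confidence >= threshold)
    .sort((a, b) => b.confidence - a.confidence)
    .map((r) => `- [${r.category}] ${r.rule}`)
    .join('\n');
}

const preamble = buildTastePreamble([
  { category: 'Frameworks', rule: 'Use Hono over Express', confidence: 0.9 },
  { category: 'Exports', rule: 'Use named exports', confidence: 0.88 },
  { category: 'Style', rule: 'Always use semicolons', confidence: 0.4 }, // below threshold, dropped
]);
console.log(preamble);
```

The interesting property is that the base model never changes; only the constraints around it do, which is why the same underlying LLM can produce very different code for different users.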
When I accept a suggestion that uses `cli-alerts`, that's a signal. When I reject one that uses `console.log`, that's a signal. When I edit `require()` to `import`, that's a signal. When I strip out `axios` and replace it with native `fetch`, that's a signal.
Over 9 days, these micro-signals compound into a detailed model of my preferences. You can see exactly what it learned:
```markdown
# .commandcode/taste/taste.md

## TypeScript
- Use strict mode. Confidence: 0.92
- Prefer explicit return types on exported functions. Confidence: 0.78
- Use type imports for type-only imports. Confidence: 0.88

## Frameworks
- Use Hono over Express. Confidence: 0.90
- Use native fetch (no axios, no ofetch). Confidence: 0.88
- Use cli-alerts for CLI output. Confidence: 0.85
- Use sonner for toast notifications. Confidence: 0.82
- Use shadcn/ui components. Confidence: 0.90

## Exports
- Use named exports. Confidence: 0.88
- Avoid default exports except for page components. Confidence: 0.85

## CLI Conventions
- Use Commander.js. Confidence: 0.92
- Lowercase single-letter flags. Confidence: 0.90
- Version format: 0.0.1 starting point. Confidence: 0.88

## Validation
- Use Zod for runtime validation. Confidence: 0.90
- Extract schemas outside handlers. Confidence: 0.82
- Use .strict() on object schemas. Confidence: 0.78
```

That's learned, not written. I never typed any of it. And I can edit it if it picked up something wrong.
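Command Code hasn't published how taste-1 turns the signal stream into those confidence scores, but the accept/reject/edit loop maps naturally onto something like an exponential moving average. A purely hypothetical sketch, not the actual algorithm:

```typescript
// Hypothetical confidence update -- invented for illustration.
// Each observation nudges the score toward 1 (accepted) or
// 0 (rejected, or edited away after acceptance), EMA-style.
type Signal = 'accept' | 'reject' | 'edit-away';

function updateConfidence(current: number, signal: Signal, rate = 0.2): number {
  const target = signal === 'accept' ? 1 : 0;
  return current + rate * (target - current);
}

// A week of mostly-accepts drives a rule like "Use Hono over Express"
// upward from a neutral prior, with one rejection pulling it back.
let hono = 0.5;
const week: Signal[] = ['accept', 'accept', 'accept', 'reject', 'accept', 'accept'];
for (const s of week) hono = updateConfidence(hono, s);
console.log(hono.toFixed(2)); // 0.74
```

The appeal of a scheme like this is that no single interaction dominates: a one-off rejection dents the score, but a consistent pattern of accepts keeps pushing it toward high confidence.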
The Numbers
After 9 days, here's where my correction loops landed:
| Task | Claude Code Sonnet 4.6 (edits to ship) | Command Code (edits to ship) |
|---|---|---|
| CLI weather tool | 6 | 1 |
| API endpoint | 5 | 0 |
| React settings panel | 7 | 1 |
The API endpoint shipped with zero edits. Yes, zero. It matched my patterns so closely that I reviewed it, nodded, and committed.
The CLI needed one edit (I wanted a `--json` output flag it didn't anticipate). The React component needed one edit (I wanted a "reset to defaults" link).
Claude Code needed 18 edits in total across the three tasks. Command Code needed 2.
The Part That Surprised Me
Here's what really messed with my assumptions. For most of my Command Code usage, I've been running it on Haiku 4.5. The cheapest, fastest Claude model. The one most people skip for serious coding work.
And it's still beating Sonnet 4.6 on my code.
That sounds wrong until you think about what's actually happening. Sonnet is a stronger model in raw capability. Better reasoning, better edge case handling, more nuanced code generation. On a benchmark, it wins every time.
But benchmarks measure correctness against a generic standard. My projects don't have a generic standard. They have my standard: Commander.js, not `process.argv`. `cli-alerts`, not `console.log`. Hono, not Express. Zod schemas extracted and `.strict()`.
When Command Code runs Haiku 4.5 conditioned on my taste profile, the model doesn't need to be brilliant. It needs to be pointed in the right direction. taste-1 handles the pointing. The model handles the generation. And because Haiku is fast and cheap, I get results back in seconds at a fraction of the cost.
A $0.25/MTok model that knows your patterns will outship a $3/MTok model that doesn't.
I tested this across 2 weeks and the pattern held consistently. Haiku + taste beat Sonnet without taste on 14 out of 17 tasks I tracked. The 3 exceptions were complex architectural decisions where raw reasoning power mattered more than style. For everything else (endpoints, components, CLIs, utils, tests), the taste-conditioned Haiku output was closer to my code than the unconditioned Sonnet output. Every time.
Think about what that means for your bill. You're paying 12x more per token for Sonnet, and still spending time correcting the output. With Command Code on Haiku, the model is cheaper and the output needs fewer corrections. The economics flip completely.
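The bill can be sketched as rough arithmetic using the article's prices and edit counts; the token volume and minutes-per-edit below are invented assumptions purely for illustration:

```typescript
// Rough cost sketch. Per-MTok prices and edit counts come from the
// article; token counts and minutes-per-edit are invented assumptions.
const MTOK = 1_000_000;

function runCost(pricePerMTok: number, tokens: number, edits: number, minutesPerEdit = 3) {
  return {
    apiCost: (tokens / MTOK) * pricePerMTok, // dollars spent on tokens
    editMinutes: edits * minutesPerEdit,     // human time spent correcting
  };
}

// Three tasks at an assumed ~50k tokens each; edit totals from the table.
const sonnet = runCost(3.0, 150_000, 18); // $0.45 in tokens, 54 min of edits
const haiku = runCost(0.25, 150_000, 2);  // $0.0375 in tokens, 6 min of edits

console.log(sonnet.apiCost / haiku.apiCost); // 12x token-price gap
```

Under these assumptions the cheaper model wins on both axes at once, which is the "economics flip" in a nutshell: the expensive line item was never the tokens, it was the correction time.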
That's the compounding effect of taste. The model matters less when the constraints are right.
Try It
```shell
npm i -g command-code
```

Run `cmd` in your project. Use it for a week. Watch the `taste.md` file grow. Then run the same prompt through your current agent and Command Code side by side.
The diffs will tell you everything.
Sign up for Command Code to see for yourself. Just code that's actually yours.

