Every AI coding tool seems to converge on the same idea: "Just write a rules file." Cursor has .cursorrules. Claude has CLAUDE.md. Copilot has AGENTS.md. Xcode just launched reusable skills for Swift developers.
The premise is simple: document how you want code written, and the AI will follow those instructions. It sounds reasonable. But over time, it breaks.
## The 6-Month Graveyard
Go open your .cursorrules right now. I'll wait.
If you're like most developers, one of 3 things happened. You wrote it 6 months ago and haven't touched it since. You've rewritten your error handling strategy twice since then, but the rules still describe version one. Or you crammed in so many rules they contradict each other, and the model picks whichever one it feels like.
Rules are code comments for your AI. And they rot the same way.
You refactor your codebase 3 times. The rules still describe the first pass. New developers join. They read the rules. They read the code. The two disagree. Nobody knows which one to trust.
## Skills Have the Same Problem
Xcode's new skills system is genuinely cool. Scoped behaviors, domain-specific mini-agents, reusable workflows you can share across teams. But skills encode what to do, not how you'd do it.
Hand 2 senior engineers the same "Generate REST API endpoint" skill. One extracts validators into separate files, uses named exports, and throws typed domain errors. The other co-locates schema and handler, uses default exports, and returns structured error objects.
Same skill. Same knowledge. Completely different code.
Skills are tutorials. Tutorials don't capture the 1,000 micro-decisions you've sanded down over years of building, breaking, and shipping. Which flag format you prefer. Whether you start versions at 0.0.1 or 1.0.0. How you name your test files.
That's taste. And nobody's encoding it.
## The Decay Curve
Here's what actually happens over time with static approaches:
Week 1: You write .cursorrules. The AI follows them. Things feel great.
Month 1: You've evolved your patterns. The rules haven't. You're fixing the same 3 things the AI keeps getting wrong, but you haven't updated the file because (let's be honest) you forgot it existed.
Month 3: New team member adds their own rules. Some conflict with yours. The model averages them out, which means nobody's preferences get respected.
Month 6: The rules file is a graveyard. 40 lines of outdated opinions. The AI has reverted to writing average internet code. You're back to the fix-it-again loop.
Here's the thing. Rules decay because they require you to do something humans are terrible at: maintaining documentation that has no runtime consequences.
## What If the Agent Just Watched?
To solve the problem, Command Code took a different approach. Instead of asking you to write down your preferences, it observes them.
Every accept is a signal. Every reject is a signal. And every edit you make after accepting is the most valuable signal of all: the delta between what the AI generated and what you actually wanted.
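The three signals above can be sketched as a data model. This is a hypothetical illustration, not Command Code's actual internals; the type names and weights are my own.

```typescript
// Hypothetical sketch of the three signal types: accept, reject,
// and post-accept edit. Names and weights are illustrative only.
type Signal =
  | { kind: "accept"; suggestion: string }
  | { kind: "reject"; suggestion: string }
  | { kind: "edit"; suggestion: string; finalCode: string };

// The edit signal carries the most information: it contains the
// delta between what was generated and what was actually kept.
export function signalWeight(s: Signal): number {
  switch (s.kind) {
    case "accept":
      return 1; // mild positive evidence
    case "reject":
      return 1; // mild negative evidence
    case "edit":
      return 3; // strongest: the delta itself is the preference
  }
}
```

The asymmetry is the point: accepts and rejects only say yes or no, while an edit shows exactly which direction your taste pulls.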
I gave Claude Code and Command Code the same prompt: write a CLI that tells today's date.
Claude Code produced vanilla JavaScript with console.log and string concatenation.
Command Code produced TypeScript with Commander.js, semantic versioning at 0.0.1, lowercase flags, and ISO date formatting. Not because someone wrote a rule, but because it had watched me build CLIs before and picked up my taste.
## The Comparison That Matters
| | Rules / Skills | Learned Taste |
|---|---|---|
| Source | What you remember to write down | What you actually do |
| Updates | When you remember (you won't) | Every session, automatically |
| Granularity | Broad guidelines | Micro-decisions per keystroke |
| Over time | Drifts from reality | Compounds accuracy |
| Team scaling | Copy-paste and pray | `npx taste push`, `npx taste pull` |
The compounding part is what kills static approaches. Command Code's benchmarks show the correction loop (how many times you edit AI output before it's acceptable) dropping from 4.2 edits on CLI scaffolding to 0.4 after a month. API endpoints go from 3.1 to 0.3. React components from 3.8 to 0.5.
That's not a marginal improvement. That's a different workflow.
## How It Works Under the Hood
Command Code runs on taste-1, a neuro-symbolic architecture. Standard LLM generation looks like this:
```
output = LLM(prompt)
```

Generation conditioned on taste looks like this:

```
output = LLM(prompt | taste(user))
```

The symbolic layer extracts constraints from your behavior and enforces them during generation. It's lightweight, updates in real time, and (this part matters) it's transparent. Your learned preferences live in `.commandcode/taste/taste.md`, human-readable, editable, and deletable:
```md
## TypeScript
- Use strict mode. Confidence: 0.80
- Prefer explicit return types on exported functions. Confidence: 0.65

## CLI Conventions
- Use lowercase single-letter flags (-v, -h, -q). Confidence: 0.90
- Version format: 0.0.1 starting point. Confidence: 0.90
```

No black box. If it learned something wrong, you fix it. But you don't have to maintain it. That's the difference.
## The Team Angle
Individual learning is interesting. Team learning is where it gets genuinely useful.
```sh
# Senior engineer pushes their CLI patterns
npx taste push --all

# New contributor pulls them
npx taste pull ahmadawais/cli
```

Senior engineers encode their patterns once (well, they don't even do that; the system learns it). New team members inherit conventions without reading a 40-page style guide nobody maintains. Open source maintainers publish project-specific taste that contributors automatically adopt.
Compare that to "please read CONTRIBUTING.md before submitting a PR" (narrator: they didn't read it).
## The Uncomfortable Bit
Rules and skills aren't useless. They're useful for coarse-grained, stable conventions. "We use TypeScript." "We deploy to AWS." "Tests go in `__tests__`." Those don't change often. A rules file handles them fine.
But the granular stuff, the stuff that makes code feel like yours, changes constantly and lives below the level of conscious articulation. You can't write it down because you don't fully know it yourself until you see the AI get it wrong.
That's the layer where static approaches break and continuous learning wins.
## Try It
```sh
npm i -g command-code
```

Sign up for Command Code. Install it, run `cmd`, write some code, accept and reject a few suggestions. Check `.commandcode/taste/taste.md` after a day.
You'll see your own patterns reflected back at you. Patterns you never bothered to document because you didn't think they mattered.
They mattered. The AI just couldn't see them until now.

