//Why this isn't just another rules file
Developers have tried solving this with rules files: .cursorrules, AGENTS.md, and similar approaches. They help, but they share a fundamental limitation: rules decay.
Rules are a snapshot of what you remembered to write down. Your codebase evolves. Your preferences shift. The rules stay frozen.
| | Rules | Taste |
|---|---|---|
| Source | What you write down | Continuously learned from you |
| Updates | When you remember | Every session |
| Granularity | Broad guidelines | Micro-decisions |
| Trajectory | Decays | Compounds |
| Over time | Drifts from reality | Compounds accuracy |
We needed something that could learn continuously from signals rather than requiring explicit documentation. Coding taste is too granular and too dynamic to maintain manually.
The future of coding is personal: a coding agent that observes how you ship, then ships like you.
//The Architecture: Neuro-Symbolic AI
Pure transformer architectures learn through training. You can fine-tune them on your code, but fine-tuning is expensive, requires significant data, and doesn't adapt in real-time.
We took a different approach: a meta neuro-symbolic architecture we call taste-1.
The core insight is that your interactions with an AI coding agent generate continuous signal (sketched in code after this list):
- Accepts signal pattern approval
- Rejects signal pattern disapproval
- Edits signal the delta between what was generated and what you wanted
- Prompts signal intent and framing preferences
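One way to picture this stream is as a typed event log. A minimal TypeScript sketch, assuming a simple discriminated union; these names are illustrative, not Command Code's internals:

```ts
// Illustrative sketch only; these type names are not Command Code's internals.
// Each interaction becomes a typed signal the symbolic layer can learn from.
type TasteSignal =
  | { kind: "accept"; generated: string }              // pattern approval
  | { kind: "reject"; generated: string }              // pattern disapproval
  | { kind: "edit"; generated: string; final: string } // the delta is the preference
  | { kind: "prompt"; text: string };                  // intent and framing

// The edit delta is the richest signal: the lines a user added or changed
// reveal the micro-decision the generation missed.
function editDelta(s: Extract<TasteSignal, { kind: "edit" }>): string[] {
  const before = new Set(s.generated.split("\n"));
  // Naive line-level diff for illustration; a real system would use a proper diff.
  return s.final.split("\n").filter((line) => !before.has(line));
}
```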
In a pure LLM system, those learnings are discarded after each session. In our architecture, they're encoded into a symbolic constraint system that conditions future generation.
Standard LLM generation:

output = LLM(prompt)

The output is sampled from the model's learned distribution, shaped by internet-scale training data.

Conditioned generation with taste-1:

output = LLM(prompt | taste(user))

The output is sampled from a distribution conditioned on user-specific constraints. The symbolic layer encodes patterns as explicit structures the generation must follow.
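As a minimal sketch of what that conditioning could look like in practice: the `TasteConstraint` shape, `loadTaste` loader, and `callModel` stand-in below are assumptions for illustration, not Command Code's actual internals.

```ts
// Hypothetical sketch of output = LLM(prompt | taste(user)).
// The constraint format and loader are illustrative assumptions.
interface TasteConstraint {
  rule: string;       // e.g. "Use named exports"
  confidence: number; // how strongly the signal history supports the rule
}

// Stand-in for reading the learned symbolic store,
// e.g. .commandcode/taste/taste.md.
function loadTaste(user: string): TasteConstraint[] {
  return [
    { rule: "Prefer explicit return types on exported functions", confidence: 0.92 },
    { rule: "Use named exports", confidence: 0.88 },
  ];
}

declare function callModel(prompt: string): Promise<string>; // generic LLM call

// The symbolic layer narrows the model's search space: high-confidence
// constraints are injected ahead of the user's prompt.
async function generate(prompt: string, user: string): Promise<string> {
  const constraints = loadTaste(user)
    .filter((c) => c.confidence > 0.8)
    .map((c) => `- ${c.rule}`)
    .join("\n");
  return callModel(`Follow these learned preferences:\n${constraints}\n\n${prompt}`);
}
```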
This is the foundation of our frontier meta neuro-symbolic AI model architecture, taste-1. While this is an early-stage research direction, the results we're seeing internally have been strong enough that we decided to bet on it. Everything you do, including your edits, commits, accepts/rejects, comments, patterns, corrections, and even the things you consistently ignore, becomes a signal. taste-1 learns your coding style and encodes it as symbolic constraints, heuristics, and preferences. These act as a "personalized prior" for the LLM, guiding generation and reducing the model's search space to patterns that match how you design and structure code. This reduces AI slop and produces more consistent outputs with fewer correction loops.
//Transparent & Interpretable
The learned taste & preferences are transparently stored in a human-readable format. You can inspect them in .commandcode/taste/taste.md, edit them directly, or reset them entirely. We think this interpretability matters. You should be able to understand why Command made a particular choice, and correct it if it learned something wrong.
.commandcode/taste/taste.md
## TypeScript
- Use strict mode
- Prefer explicit return types on exported functions
- Use type imports for type-only imports
## Exports
- Use named exports
- Group related exports in barrel files
- Avoid default exports except for page components
## CLI Conventions
- Use lowercase single-letter flags (-v, -h, -q)
- Use full words for less common options (--output-dir)
- Version format: 0.0.1 starting point
- Include ASCII art welcome banner
## Error Handling
- Use typed error classes
- Always include error codes
- Log to stderr, not stdout
This is learned, not written. You never have to maintain it. But you can override it if the system learned something wrong.
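To make that concrete, here's the kind of output those constraints would steer generation toward. A hypothetical sketch, not actual Command output:

```ts
// Hypothetical output shaped by the taste.md rules above:
// typed error class, explicit error code, named exports,
// explicit return types, and stderr (not stdout) for logs.
export class ConfigNotFoundError extends Error {
  readonly code = "E_CONFIG_NOT_FOUND";

  constructor(path: string) {
    super(`Config file not found: ${path}`);
    this.name = "ConfigNotFoundError";
  }
}

export function logError(error: ConfigNotFoundError): void {
  process.stderr.write(`[${error.code}] ${error.message}\n`);
}
```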
//Sharing Taste Across Projects
Individual learning is useful. Team learning is more powerful. We built a registry for sharing taste profiles:
Terminal
# Push your CLI taste to the registry
npx taste push --all
# Pull someone else's CLI taste into your project
npx taste pull ahmadawais/cli
Check out my live CLI taste profile: ahmadawais/cli
This enables a new workflow. Senior engineers can encode their patterns. Teams can share conventions without maintaining documentation. Open source maintainers can publish project-specific taste that contributors automatically adopt.
//Benchmarks
We measured correction loops, the number of times you need to edit AI-generated code before it's acceptable, across a set of common coding tasks.
| Task Type | Without taste | Week 1 | Month 1 |
|---|---|---|---|
| CLI scaffolding | 4.2 edits | 1.8 edits | 0.4 edits |
| API endpoint | 3.1 edits | 1.2 edits | 0.3 edits |
| React component | 3.8 edits | 1.5 edits | 0.5 edits |
| Test file | 2.9 edits | 0.9 edits | 0.2 edits |
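The metric itself is simple: every edit a generation accumulates before its first clean accept counts as one correction loop. A minimal sketch of that counting, not the actual benchmark harness:

```ts
// Minimal sketch of the metric, not the actual benchmark harness.
type Signal = { kind: "accept" } | { kind: "reject" } | { kind: "edit" };

// Count how many edits a generation accumulates before it's accepted.
function correctionLoops(signals: Signal[]): number {
  let edits = 0;
  for (const s of signals) {
    if (s.kind === "edit") edits++;
    if (s.kind === "accept") break; // accepted; the correction loop ends
  }
  return edits;
}

// Example: edit, edit, accept => 2 correction loops.
correctionLoops([{ kind: "edit" }, { kind: "edit" }, { kind: "accept" }]);
```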
The improvement compounds. More usage means better constraints. Better constraints mean fewer corrections. Fewer corrections mean faster iteration.
//The Compounding Effect
Day 1: First suggestion with your taste
Install Command Code and start coding. It picks up your micro-decisions immediately, starting with your prompts.
Week 1: Less slop, fewer correction loops
Every accept/reject teaches Command why, and it transparently records that learning in .commandcode/taste/taste.md.
Month 1: Code 10x faster, review 2x quicker, 5x fewer bugs
Command starts anticipating. Having acquired your coding taste, it writes the code you'd have written in the first place.
//Who am I and why do I care?

Hello, I'm Ahmad Awais. 25 years of writing code: contributing code that flew on Mars with NASA, building hundreds of open source packages used by millions of developers, and pushing Meta to open-source React.
I've learned 30+ programming languages. Countless coding patterns. My brain has built an invisible architecture of choices and micro-decisions, the intuition for writing code that I call my coding taste.
I built a coding agent five years ago. When Greg Brockman (co-founder of OpenAI) gave me early GPT-3 access in July 2020, the first thing I built was a CLI coding agent called clai. Two and a half years before ChatGPT. A year before Copilot.
I recorded 30 hours of content on building CLIs in 2020. Cut it to 10. Thought: who gives a shit about DX this much? I was wrong. 50,000 developers took my courses.
Developers care about how they write code. Deeply.
//Langbase
Frustration with bad AI dev-tooling led me to leave my cushy VP job at an SF unicorn and launch Langbase.
Our mission: build the best AI engineering DX. We started with Pipes (the original MCP), then Memory (agentic RAG on billion-QPS object storage).
Langbase is the OG AI cloud. One line of code to ship agents with memories and tools. We're serving 750 TB of AI memory and 1.2 billion agent runs monthly.
LLMs write correct code. That's the easy part. The hard part: code that doesn't make you want to refactor everything afterward.
That's why we built Command Code.