ANNOUNCEMENT/$5M Seed Round

Command Code raises $5M to build the first coding agent that continuously learns your coding taste

Code 10x faster. Review 2x quicker. Bugs 5x slashed.

Ahmad Awais (@_AhmadAwais)
Founder & CEO of Command Code

AI-assisted coding has a paradox. The code is usually correct. It's rarely yours.

Every developer builds up years of micro-decisions: how they name variables, when they extract helpers, which patterns they reach for, how they structure tests. This accumulated intuition, what we've been calling "coding taste," is invisible to language models. They optimize for statistical plausibility, not for the specific choices that make code feel right to you.

The result is a frustrating loop. The agent writes code. You fix it. The agent doesn't learn. You fix it again.

We wanted to change that.

The problem with AI coding

AI learns nothing from you

Endless corrections: AI writes sloppy code. You fix it. AI doesn't learn. You fix it again. And again.

Your preferences are ignored

Edits ignored, rules decay: you learn from every fix, so why doesn't your coding agent?

Code without taste is slop

Average code: LLMs default to the internet's average developer. Not your patterns. Not your taste.

//Introducing Command Code

We're launching Command Code with taste: a coding agent that observes how you write code and adapts to your preferences over time. Every accept is a signal. Every rejection is a signal. Every edit you make after accepting is a signal. Over time, it builds an ever-improving model of your coding taste and applies it automatically.

cmd comes as a CLI and works anywhere you do. Run `npm i -g command-code`, then `cmd` to get started. Here are the quick start docs.

This release is powered by taste-1, a new model architecture that combines large language models with meta neuro-symbolic reasoning to capture and apply your individual coding patterns. The more you use it, the smarter it gets: over time it understands your coding taste and helps you and your team ship more ambitious software in half the time, with code that actually looks like yours.

$5M Seed Funding

We're also announcing $5M in seed funding from world-class investors and founders who believe in our mission: building the best AI engineering developer experience.

PWV by Tom Preston-Werner (co-founder & former CEO of GitHub) first invested in our pre-seed and is now leading our seed round, doubling down alongside Firststreak, ENEA, Mento, Banyan, Alumni, AltaIR, many a16z scouts, and a group of incredible angel investors and founders:

Tom Preston-Werner · Founder & Former CEO, GitHub
Luca Maestri · CFO, Apple
Dane Knecht · CTO, Cloudflare
Paul Copplestone · CEO, Supabase
Amjad Masad · CEO, Replit
Guy Podjarny · Founder, Snyk; CEO, Tessl
Feross Aboukhadijeh · CEO, Socket
Zeno Rocha · CEO, Resend
David Mytton · CEO, Arcjet
Logan Kilpatrick · Google, Ex-OpenAI
Theo Browne · CEO, T3 Chat
Full list of investors →

//The problem we set out to solve

AI is sloppy by default.

LLMs write correct code. Not good code. You fix it. They forget. You fix it again.

The promise was "AI writes code for you."
The reality is "AI writes code you have to rewrite, again and again."

Why doesn't my coding agent learn from me? It's stuck generating average internet code instead of adapting to how I actually work: my style, my experience, the design patterns I've learned.

Coding agents have an odd failure mode. The code works. It just doesn't feel like yours. You fix it. The model doesn't learn. You fix it again tomorrow. The corrections never compound.

Rules were supposed to solve this. They don't. They're a snapshot of what you remembered to write down six months ago. Your codebase evolves; your rules stay frozen. Add more rules, the system gets more brittle.

We've been exploring a different approach: learning preferences directly from behavior; not from what you document, but from what you actually do.

//What we built

Command Code observes how you interact with its suggestions. Every accept/reject/edit action is transformed into a meaningful signal, used to better model your taste.

Over time, Command builds a model of your preferences and coding taste, not as static rules you have to maintain, but as learned constraints it applies automatically.

Without Taste

Your experience with other coding agents.

> Build a cli to tell date

Building

Interrupted

> hey use typescript

Blabbering… Adding tsc

Interrupted

> no, use tsup

Stackflowing… Adding mocha

Interrupted

> i prefer vitest

✦ Done!

> f&5k, use lowercased -v for version

Updated, now -v for version.

You can run the app using `npm run dev`

> s#!t, i always prefer pnpm

> leave it, i'll do it myself!

> learn something from me for a change

With Taste

When Command has learned your coding taste.

> Build a cli to tell date

Building cli, let me check your taste…

Using your taste, I see you prefer:

- TypeScript for CLI
- Commander and tsup
- Vitest for tests
- pnpm, but `npm link` for linking
- lowercased `-v` for cli version

Using taste, learning, building…

Done!

Built a date cli with TypeScript, tsup, vitest

Also linked using `npm link`

Run `date-cli` to try it out.

> oh wow, awesome! just what I wanted.

> i just made an api route, can you add a /health route?

//Why this isn't just another rules file

Developers have tried solving this with rules files. This is what .cursorrules, AGENTS.md, and similar approaches attempt. These help, but they have a fundamental limitation: Rules decay.

Rules are a snapshot of what you remembered to write down. Your codebase evolves. Your preferences shift. The rules stay frozen.

|             | Rules               | Taste                         |
|-------------|---------------------|-------------------------------|
| Source      | What you write down | Continuously learned from you |
| Updates     | When you remember   | Every session                 |
| Granularity | Broad guidelines    | Micro-decisions               |
| Trajectory  | Decays              | Compounds                     |
| Over time   | Drifts from reality | Compounds accuracy            |

We needed something that could learn continuously from signals rather than requiring explicit documentation. Coding taste is too granular and too dynamic to maintain manually.

The future of coding is personal: a coding agent that observes how you ship, then ships like you.

//The Architecture: Neuro-Symbolic AI

Pure transformer architectures learn through training. You can fine-tune them on your code, but fine-tuning is expensive, requires significant data, and doesn't adapt in real-time.

We took a different approach: a meta neuro-symbolic architecture we call taste-1.

The core insight is that your interactions with an AI coding agent generate continuous signal:

  • Accepts signal pattern approval
  • Rejects signal pattern disapproval
  • Edits signal the delta between what was generated and what you wanted
  • Prompts signal intent and framing preferences
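To make the signal taxonomy concrete, here is a minimal sketch of how accept, reject, and edit interactions might be distilled into weighted preference signals. The event and signal shapes, names, and weights are illustrative assumptions, not Command Code's actual internals:

```typescript
// Hypothetical sketch: distilling editor interactions into taste signals.
// All type names and weights here are assumptions for illustration.

type InteractionEvent =
  | { kind: "accept"; pattern: string }
  | { kind: "reject"; pattern: string }
  | { kind: "edit"; before: string; after: string };

interface TasteSignal {
  pattern: string;
  weight: number; // positive = preferred, negative = avoided
}

function toSignals(events: InteractionEvent[]): TasteSignal[] {
  const signals: TasteSignal[] = [];
  for (const e of events) {
    if (e.kind === "accept") {
      signals.push({ pattern: e.pattern, weight: 1 });
    } else if (e.kind === "reject") {
      signals.push({ pattern: e.pattern, weight: -1 });
    } else {
      // An edit is the strongest signal: it rejects what was generated
      // and approves what you replaced it with.
      signals.push({ pattern: e.before, weight: -2 });
      signals.push({ pattern: e.after, weight: 2 });
    }
  }
  return signals;
}

// Example: one correction loop — reject mocha, accept vitest, edit -V to -v.
const signals = toSignals([
  { kind: "reject", pattern: "test-runner:mocha" },
  { kind: "accept", pattern: "test-runner:vitest" },
  { kind: "edit", before: "version-flag:-V", after: "version-flag:-v" },
]);
console.log(signals.length); // 4 signals from 3 interactions
```

The key design point is that an edit carries more information than an accept or reject alone: it encodes both what was wrong and what was wanted.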

In a pure LLM system, these learnings are discarded after each session. In our architecture, they're encoded into a symbolic constraint system that conditions future generation.

Standard LLM generation:
output = LLM(prompt)

The output is sampled from the model's learned distribution, shaped by internet-scale training data.

Conditioned generation with taste-1:
output = LLM(prompt | taste(user))

The output is sampled from a distribution conditioned on user-specific constraints. The symbolic layer encodes patterns as explicit structures the generation must follow.
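One simple way to picture `output = LLM(prompt | taste(user))` is rendering the learned constraints into an explicit structure that conditions every generation. This sketch uses a stand-in `llm` function and an assumed `Taste` shape, purely for illustration of the conditioning step:

```typescript
// Illustrative sketch of conditioned generation. `Taste`, `llm`, and
// `generate` are hypothetical stand-ins, not Command Code's real API.

interface Taste {
  [topic: string]: string[]; // e.g. { TypeScript: ["Use strict mode"] }
}

// Render the symbolic taste layer as explicit constraint text.
function renderConstraints(taste: Taste): string {
  return Object.entries(taste)
    .map(([topic, rules]) => `${topic}:\n${rules.map((r) => `- ${r}`).join("\n")}`)
    .join("\n");
}

// Stand-in for a model call: a real system would send prompt + constraints
// to an LLM and sample from the conditioned distribution.
function llm(prompt: string, system: string): string {
  return `[conditioned on ${system.split("\n").length} constraint lines] ${prompt}`;
}

function generate(prompt: string, taste: Taste): string {
  // Conditioning narrows the model's search space to patterns
  // that match how this user writes code.
  return llm(prompt, renderConstraints(taste));
}

const out = generate("Build a cli to tell date", {
  CLI: ["Use lowercase -v for version", "Use tsup for bundling"],
  Tests: ["Use vitest"],
});
console.log(out);
```

The point of the sketch: without the constraint argument, the output falls back to the model's internet-average prior; with it, every generation is filtered through the user's learned preferences.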

This is the foundation of our frontier meta neuro-symbolic AI model architecture, taste-1. While this is an early-stage research direction, the results we're seeing internally have been strong enough that we decided to bet on it. Everything you do, including your edits, commits, accepts/rejects, comments, patterns, corrections, and even the things you consistently ignore, becomes a signal. taste-1 learns your coding style and encodes it as symbolic constraints, heuristics, and preferences. These act as a "personalized prior" for the LLM, guiding generation and reducing the model's search space to patterns that match how you design and structure code. This reduces AI slop and produces more consistent outputs with fewer correction loops.

//Transparent & Interpretable

The learned taste & preferences are transparently stored in a human-readable format. You can inspect them in .commandcode/taste/taste.md, edit them directly, or reset them entirely. We think this interpretability matters. You should be able to understand why Command made a particular choice, and correct it if it learned something wrong.

.commandcode/taste/taste.md
## TypeScript
- Use strict mode
- Prefer explicit return types on exported functions
- Use type imports for type-only imports

## Exports
- Use named exports
- Group related exports in barrel files
- Avoid default exports except for page components

## CLI Conventions
- Use lowercase single-letter flags (-v, -h, -q)
- Use full words for less common options (--output-dir)
- Version format: 0.0.1 starting point
- Include ASCII art welcome banner

## Error Handling
- Use typed error classes
- Always include error codes
- Log to stderr, not stdout

This is learned, not written. You never have to maintain it. But you can override it if the system learned something wrong.
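Because the taste file is plain markdown, reading it back into structured constraints is straightforward. Here is a minimal sketch of one way to parse it, where `## Section` headings become topics and `- ` lines become preferences; the parsing details are assumptions, not Command Code's actual loader:

```typescript
// Hypothetical sketch: parsing a human-readable taste.md file
// into a topic → preferences map.

function parseTaste(md: string): Map<string, string[]> {
  const taste = new Map<string, string[]>();
  let current = "";
  for (const line of md.split("\n")) {
    if (line.startsWith("## ")) {
      // A new section heading starts a new topic.
      current = line.slice(3).trim();
      taste.set(current, []);
    } else if (line.startsWith("- ") && current) {
      // A bullet under the current topic is one learned preference.
      taste.get(current)!.push(line.slice(2).trim());
    }
  }
  return taste;
}

const taste = parseTaste(`## TypeScript
- Use strict mode
- Prefer explicit return types on exported functions

## Error Handling
- Log to stderr, not stdout
`);
console.log(taste.get("TypeScript"));
// → ["Use strict mode", "Prefer explicit return types on exported functions"]
```

Keeping the storage format this simple is what makes it inspectable and hand-editable: the same file a human reads is the file the system loads.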

//Sharing Taste Across Projects

Individual learning is useful. Team learning is more powerful. We built a registry for sharing taste profiles:

Terminal
# Push your CLI taste to the registry
npx taste push --all

# Pull someone else's CLI taste into your project
npx taste pull ahmadawais/cli

Check out my live CLI taste profile: ahmadawais/cli

This enables a new workflow. Senior engineers can encode their patterns. Teams can share conventions without maintaining documentation. Open source maintainers can publish project-specific taste that contributors automatically adopt.

//Benchmarks

We measured correction loops, the number of times you need to edit AI-generated code before it's acceptable, across a set of common coding tasks.

| Task Type       | Without   | Week 1    | Month 1   |
|-----------------|-----------|-----------|-----------|
| CLI scaffolding | 4.2 edits | 1.8 edits | 0.4 edits |
| API endpoint    | 3.1 edits | 1.2 edits | 0.3 edits |
| React component | 3.8 edits | 1.5 edits | 0.5 edits |
| Test file       | 2.9 edits | 0.9 edits | 0.2 edits |

The improvement compounds. More usage means better constraints. Better constraints mean fewer corrections. Fewer corrections mean faster iteration.

//The Compounding Effect

Day 1
First suggestion with your taste

Install Command Code and start coding. It picks up your micro-decisions immediately, starting with your prompts.

Week 1
Less slop. Fewer correction loops.

Every accept/reject action teaches cmd why, and it transparently records that in the .commandcode/taste/taste.md file.

Month 1
Code 10x faster. Review 2x quicker. Bugs 5x slashed.

Command will start anticipating. Having acquired your coding taste, it'll write the code you'd have written in the first place.

//Who am I and why do I care?


Hello, I'm Ahmad Awais. 25 years writing code, from contributing code that flew on Mars with NASA, to building hundreds of open source software packages used by millions of developers, and pushing Meta to open-source React.

I've learned more than 30 programming languages. Countless coding patterns. My brain has built an invisible architecture of choices and micro-decisions, the intuition of writing code that I call my coding taste.

I built a coding agent five years ago, when Greg Brockman (co-founder of OpenAI) gave me early GPT-3 access in July 2020. The first thing I built: a CLI coding agent called clai. Three years before ChatGPT. A year before Copilot.

I recorded 30 hours of content on building CLIs in 2020. Cut it to 10. Thought: who gives a shit about DX this much? I was wrong. 50,000 developers took my courses.

Developers care about how they write code. Deeply.

//Langbase

Frustration with bad AI dev-tooling led me to leave my cushy VP job at an SF unicorn and launch Langbase.

Our Mission: build the best AI engineering DX. Started with Pipes (the original MCP), then Memory (agentic RAG on billion-QPS object storage).

Langbase is the OG AI cloud. One line of code, ship agents with memories and tools. We're doing 750 TB of AI memory. 1.2 Billion agent runs monthly.

LLMs write correct code. That's the easy part. The hard part: code that doesn't make you want to refactor everything afterward.

That's why we built Command Code.


How it actually works

[ taste-1 ]

//Continuously Learning

Learns the taste of your code (explicit & implicit feedback).

//Meta Neuro-Symbolic AI

Our taste-1 enforces the invisible logic of your choices and taste.

//Share with your team

Share your taste to build consistent code using npx taste push/pull.


Ready to code with your taste?

Join 2K+ developers who stopped fixing AI code and started shipping with their coding preferences.

$10 free credits included. No credit card required.

"Command Code is continuously learning my coding taste. After a week, it stopped making the mistakes I kept fixing with other coding agents. It learns from what you keep and what you delete."

Zeno Rocha
Founder · Resend

//Community

What developers and founders are saying about Command Code.

10x faster shipping

"Command learns my coding patterns and now I ship features 10x faster. It stopped suggesting React class components after just 3 interactions. Other AI tools never figured that out."

Logan Kilpatrick
Google · OpenAI · Harvard
80% fewer corrections

"After a week, I stopped fixing AI code manually. Command learned my conventions. The most important thing is taste. It's the real difference between a junior and senior developer. Command gets it."

Anand Chowdhary
CTO · FirstQuadrant AI
GitHub Star · Forbes 30U30

Zero repeated instructions

"I never explain my preferences twice anymore. Push once, use everywhere. Creating the taste automatically is a natural addition to the dev experience. It benefits team projects massively."

Elio Struyf
GitHub Star · Google Developer Expert

What You Get

Continuous Learning

AI that learns from everything you do. It gets better with every session instead of resetting.

Evolving Taste

Coding agent that adapts to you, not the other way around. Your style becomes the default.

10x Speed Boost

Move faster because it understands how you work, not how an average developer works.

Ships like you ship

Outputs code that fits your architecture, your preferences. No generic random slop.

Start building
with your taste

No obligation, no credit card required.