MARKETING

Replacing a $299/mo Influencer Tool With a 200-Line Command Code Skill That Runs for 3 Cents

Learn how our PMM team built a creator-discovery pipeline as a Command Code skill that finds dev creators better than SaaS tools like Modash and Upfluence. It runs in 10 minutes and costs about 3 cents per run.

Farhan Malik
9 min read
May 30, 2026



TL;DR

| Metric | Influencer SaaS Tools | creator-scout Skill |
|---|---|---|
| Monthly cost | $299+ | ~$1 (Command Code Go plan) |
| Cost per run | n/a (flat fee) | ~3 cents from one $1 plan |
| Runtime | Hours of manual filtering | ~10 minutes end-to-end |
| Dev-niche fit | Shallow keyword matching | LLM-scored against your seed creators |
| Code to maintain | None (closed SaaS) | ~200 lines of markdown + scripts |
| Coverage | Reach-ranked, surface-level | 80 candidates, ~70% worth contacting |

GitHub Repo: https://github.com/FarhanMalik082/creator-scout

The Problem

As a Product Marketing Manager at Command Code, part of my job is finding collaboration opportunities with YouTube creators. The incumbent SaaS tools (Modash, Upfluence, and friends) start around $299/mo. They're built for beauty and lifestyle brands, where keyword matching on captions like "lipstick" or "fitness" works fine. Their dev-niche filtering is shallow, and you still spend most of your time qualifying the list yourself.

I needed something that knew the difference between Fireship and Andrej Karpathy. So I built one.

The Solution: A Command Code Skill

creator-scout is a Command Code skill. You give it a seeds.json file with creators you already trust. It returns a ranked CSV of around 80 candidate channels with niche scores, tags, and notes.

The whole thing is a 200-line SKILL.md file in the .commandcode/skills/creator-scout/ directory. No queue, no state machine, no orchestration framework. Command Code reads the skill, runs each stage in order, and saves intermediate state to JSON files in /tmp. That's it.
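
To make that concrete, here is a sketch of what one stage of a skill file like this might look like. The headings and wording are illustrative, written by me for this post; they are not copied from the repo:

```markdown
## Stage 1: Pull seed titles (script)

For each channel in seeds.json, run:

    yt-dlp --flat-playlist --playlist-end 30 --print "%(title)s" "<channel_url>/videos"

Save the titles to /tmp/creator-scout/titles.json, keyed by channel.

## Stage 2: Extract topics (LLM)

Read /tmp/creator-scout/titles.json and extract 10 to 15 specific search
topics from the titles. Write them to /tmp/creator-scout/topics.json,
then STOP, show the topics to the user, and wait for approval or edits.
```

The point is that this is the entire "program": plain English instructions plus the occasional shell command, and the runtime decides when to execute, when to reason, and when to pause.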

How It Works

The pipeline has six stages: two LLM calls, three scripts hitting the YouTube data layer through yt-dlp, and one human checkpoint in the middle.

| Stage | Type | What it does |
|---|---|---|
| 1 | script | Pull last 30 video titles per seed (yt-dlp) |
| 2 | LLM | Extract 10 to 15 specific topics from titles |
| 3 | human | Review and approve or edit the topics |
| 4 | script | ytsearch each topic, dedupe, enrich top 80 |
| 5 | LLM | Score each candidate 0 to 100 with niche tag and note |
| 6 | script | Combine LLM score with topic overlap and log(subs), write CSV |
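
Stage 1 is small enough to sketch in a few lines of Python around yt-dlp. This is a minimal illustration under my own assumptions, not the repo's actual script; error handling and channel-URL normalization are simplified:

```python
import subprocess

def build_cmd(channel_url: str, n: int = 30) -> list[str]:
    """yt-dlp invocation for a channel's public /videos page.

    Flat-playlist mode reads listing metadata only, so nothing is
    downloaded and no YouTube Data API quota is spent.
    """
    return [
        "yt-dlp",
        "--flat-playlist",          # metadata only, no video downloads
        "--playlist-end", str(n),   # stop after the n most recent entries
        "--print", "%(title)s",     # emit one title per line on stdout
        f"{channel_url}/videos",
    ]

def parse_titles(stdout: str) -> list[str]:
    """yt-dlp prints one title per line; drop blank lines."""
    return [line.strip() for line in stdout.splitlines() if line.strip()]

def recent_titles(channel_url: str, n: int = 30) -> list[str]:
    out = subprocess.run(build_cmd(channel_url, n),
                         capture_output=True, text=True, check=True)
    return parse_titles(out.stdout)
```

Stage 4 is the same trick in reverse: instead of a channel URL, you hand yt-dlp a `ytsearch` query per topic and collect the channels behind the results.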

Three architectural decisions that paid off

  1. The skill is the orchestrator. I didn't write a workflow engine. The SKILL.md file tells Command Code when to run a script, when to think, and when to wait for input. This is the part that surprised me most about building on Command Code: the markdown file is the program. About 200 lines of plain English plus some bash invocations, and the runtime handles the rest.

  2. No YouTube Data API. The official API caps at 10,000 quota units/day, and listing channel videos burns 5 units per call. That's a 2,000-channel daily ceiling before you're locked out for 24 hours. yt-dlp reads the same public pages the YouTube web client renders. Slower per call, but no limits.

  3. One LLM call scores all 80 candidates. Stage 5 sends every candidate's sample titles in a single structured prompt and asks for a JSON array back. Roughly 3k input tokens and 2k output tokens for the whole batch. Same quality as scoring one at a time, much cheaper. This is the single biggest cost lever in the pipeline.
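
The batching in Stage 5 is easy to sketch: one prompt carrying every candidate, one JSON array back. The prompt wording and field names below are my own illustration, not the skill's actual prompt:

```python
import json

def build_batch_prompt(candidates: list[dict]) -> str:
    """One prompt that scores all candidates at once.

    Sending the whole batch in a single call amortizes the instructions
    across ~80 candidates, which is where the token savings come from.
    """
    lines = [
        "Score each YouTube channel 0-100 for fit with a developer-tools",
        "audience. Reply with a JSON array of objects shaped like:",
        '{"channel": ..., "score": ..., "niche": ..., "note": ...}',
        "",
    ]
    for c in candidates:
        lines.append(f"- {c['channel']}: {'; '.join(c['sample_titles'])}")
    return "\n".join(lines)

def parse_scores(reply: str) -> list[dict]:
    """Parse the model's JSON array, tolerating a fenced code block."""
    cleaned = reply.strip().removeprefix("```json").strip("` \n")
    return json.loads(cleaned)
```

Scoring one candidate per call would repeat the instruction preamble 80 times; batching pays for it once.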

The human checkpoint at Stage 3 was an afterthought that turned out to be the most important step. More on that below.

Results: The Output

Here's the top 15 from a real run on 8 seed channels (Python, terminal, AI-coding, and data-engineering creators):

| Rank | Channel | Subs | Niche | Score |
|---|---|---|---|---|
| 1 | Fireship | 4.19M | programming-education | 91.7 |
| 2 | ThePrimeagen | 541k | dev-tools | 84.9 |
| 3 | Josean Martinez | 66.9k | dev-tools | 78.6 |
| 4 | DevOps Toolbox | 116k | dev-tools | 78.1 |
| 5 | typecraft | 227k | dev-tools | 71.2 |
| 6 | TJ DeVries | 114k | dev-tools | 70.3 |
| 7 | linkarzu | 12.3k | dev-tools | 69.8 |
| 8 | AI LABS | 131k | ai-ml-dev | 68.7 |
| 9 | Tech With Tim | 2M | programming-education | 68.6 |
| 10 | The PrimeTime | 1.1M | tech-commentary | 65.5 |
| 11 | freeCodeCamp | 11.6M | programming-education | 65.3 |
| 12 | Bread on Penguins | 162k | dev-tools | 64.8 |
| 13 | IBM Technology | 1.66M | tech-commentary | 64.7 |
| 14 | CodeOps HQ | 13.4k | dev-tools | 64.5 |
| 15 | Programming with Mosh | 5.03M | programming-education | 64.4 |

The full 80-row CSV is in the repo. A few things in this data caught me off guard.

Insight 1: Subscriber Count Is The Wrong Sort Key

Fireship sits at the top of my list with 4.19M subs and a niche score of 91. By every metric the influencer SaaS tools optimize for, he's the obvious pitch. He's also the worst fit for Command Code right now. Fireship integrations start north of $20k. That math only works for big launch moments, not steady-state acquisition.

Three rows down, Josean Martinez has 66.9k subs and a niche score that's actually higher in raw terms (96 before sub-count weighting). His entire audience is terminal devs. One sponsored video from him probably drives 400 to 800 trials at a fraction of the cost.
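
That gap between the raw score and the final rank comes from Stage 6's blend of LLM score, topic overlap, and log(subs). The weights and normalization below are illustrative guesses, not the repo's actual values:

```python
import math

def combined_score(llm_score: float, topic_overlap: float, subs: int,
                   w_llm: float = 0.7, w_overlap: float = 0.15,
                   w_subs: float = 0.15) -> float:
    """Blend three signals into one 0-100 ranking score.

    llm_score:     0-100 niche fit from the batch LLM call
    topic_overlap: 0-1 fraction of seed topics the channel covers
    subs:          raw subscriber count, log-scaled so 4M subs is only
                   a few points better than 20k, not 200x better
    """
    subs_signal = min(math.log10(max(subs, 1)) / 7, 1.0) * 100  # saturates at 10M subs
    return round(w_llm * llm_score
                 + w_overlap * topic_overlap * 100
                 + w_subs * subs_signal, 1)
```

The log term is the key design choice: reach nudges the ranking without dominating it, which is exactly the opposite of how the reach-ranked SaaS tools sort.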

The sweet spot for B2B dev tools sits in the 20k to 200k range:

  • Big enough that the audience trusts the creator
  • Small enough that the rates fit a lean budget
  • Niche enough that viewers actually convert

Most dev-tool companies skip this range entirely because they don't recognize the names. The SaaS tools rank by reach, which actively buries it.

I'm adding a --max-subs flag in v0.3 to make it easier to surface the mid-range creators directly.
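
Until that flag lands, the same filter is a few lines over the output CSV. The filename and column names here are assumptions about the output format, not guaranteed by the repo:

```python
import csv

def filter_max_subs(path: str, max_subs: int) -> list[dict]:
    """Keep only candidate rows at or under a subscriber ceiling.

    Assumes the CSV has a numeric 'subs' column (a hypothetical
    layout for illustration).
    """
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if int(row["subs"]) <= max_subs]
```

Capping at 200,000 surfaces exactly the mid-range band discussed above.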

Insight 2: The Long Tail That Modash Never Surfaces

Manual research is the part of this job I'd quietly given up on optimizing. A few hours of YouTube searches, scrolling through "best dev YouTubers" lists, asking around. Most evenings I'd come away with 4 or 5 channels that felt unique, and most of those I already half-knew about.

The tools didn't help. Modash and Upfluence kept surfacing the same names I'd already considered, ranked by reach.

creator-scout returned 80 candidates in 10 minutes. Around 70% of them were worth reaching out to. More than half were channels I'd never heard of that had never appeared in any tool I'd searched.

A 13k-sub Linux channel I'd never seen. A Rust systems guy I should have known about but didn't. Two India-market creators already running sponsor reads who never showed up in any tool I've used.

That's the part that mattered. The pipeline isn't just saving time on searches. It's finding people I'd never have found at all. For a dev tool on a lean budget, that's where the actual pilots come from.

Insight 3: The Thing That Actually Surprised Me

I ran the same seeds twice. Same prompts, same code, same model.

  • Run 1: Josean Martinez, ThePrimeagen, typecraft, TJ DeVries (terminal and Vim creators)
  • Run 2: freeCodeCamp, Tech With Tim, CodeWithHarry, Mosh (general programming education)

Two of the top 10 overlapped.

I checked the intermediate output from both runs to figure out where they diverged. It was Stage 2, the topic-extraction step.

  • Run 1 topics: specific (python dependency injection, neovim configuration)
  • Run 2 topics: generic (fastapi backend tutorial, sql database tutorial)

Generic topics drag in every big education channel on YouTube. Specific ones pull the narrow experts. Same seeds, same prompt, the LLM just picked different words on different runs.

Lesson: the first LLM step in a pipeline has the most leverage and the most variance. If nothing checks it, the tool silently gives you different answers on different days and you won't know unless you compare.

I added a human checkpoint at Stage 3 the same day I caught this. The skill now stops after topic extraction, shows the topics, and waits for continue or a pasted correction. Ten seconds saves seven minutes of running the wrong query.
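
The checkpoint logic itself is tiny. Here is a sketch with the decision factored out of the I/O so it's testable; the function name and accepted replies are my own, not the skill's:

```python
def apply_review(topics: list[str], reply: str) -> list[str]:
    """Interpret the user's reply at the Stage 3 checkpoint.

    'continue' (or an empty reply) approves the extracted topics as-is;
    anything else is treated as a pasted replacement list, one topic
    per line.
    """
    reply = reply.strip()
    if reply.lower() in ("", "continue", "y", "yes"):
        return topics
    return [line.strip() for line in reply.splitlines() if line.strip()]
```

Because the approved topics are written back to the intermediate JSON, the rest of the pipeline never knows whether a human edited them.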

This is the kind of thing the SaaS tools can't do, by the way. They're black boxes. When the matching feels off, you have no surface to inspect or correct. With a skill, the intermediate state is a JSON file on disk.

What This Cost

For one full run on 8 seeds:

| Component | Cost |
|---|---|
| LLM tokens (Stage 2 + Stage 5) | ~3 cents of a $1 Go plan |
| YouTube data | $0 (yt-dlp) |
| Compute | $0 (runs locally) |
| Total per run | ~$0.03 |

A monthly subscription to Modash starts at $299. That's roughly 10,000 runs of creator-scout per month for the same money, on a tool that finds better candidates for my niche.

Why This Pattern Generalizes

The headline is creator discovery, but the pattern is what's interesting:

  • Take a workflow you currently pay a SaaS to do badly for your specific case.
  • Find the public data source the SaaS is also reading from.
  • Write a SKILL.md that orchestrates: scripts for the deterministic parts, LLM calls for the judgment parts, human checkpoints where variance is highest.
  • Run it for 3 cents instead of 300 dollars.

If your job involves a vertical-specific workflow that horizontal SaaS does poorly, this is probably reproducible.

Try It

```shell
git clone https://github.com/FarhanMalik082/creator-scout.git
cd creator-scout
pip install -r requirements.txt
# edit .commandcode/skills/creator-scout/seeds.json with your seeds
cmd
> run creator-scout
```

You need Command Code to run the skill. The $1 Go plan handles all the LLM work comfortably. Get it at commandcode.ai.

Farhan Malik
