Claude Code Max 5x is a flat-rate $100/month plan that runs my entire content factory at 3-5 articles per day. Compared to per-token Anthropic API billing for the same volume, it saves roughly 70% — and, crucially, it caps the unpredictable token spikes that happen when an agent loops on a hard sub-task. This article walks you through the exact 9-stage pipeline I use, the prompts at each stage, and the rules that keep the output safe from Google’s Helpful Content Update.
If you want to skim the pipeline without reading the prose, jump to the diagram. Everything else is the explanation behind it.
Why Claude Code, not the API
Three reasons.
- Flat-rate economics. A single Sonnet 4.6 generation pass costs roughly $0.06–$0.12 per 1,500-word article when you include the inputs (system prompt + SERP context + outline), but a full multi-stage pipeline run with retries lands closer to $1.80–$3.60 per article. Five per day = $9–$18/day = $270–$540/month. Max 5x is $100 flat. Once you’re past ~30 articles/month, Max wins.
- Agentic-loop safety. Real factories occasionally have a sub-task that loops 8 times before settling (e.g., “write the FAQ; the FAQ failed schema validation; write it again with stricter constraints”). On the API that can quietly burn $5 in 30 seconds. On Max it’s just minutes.
- Filesystem + git native. Claude Code can read your repo, write articles to /content, commit, push, and trigger CI. The API can’t — you’d glue it to a custom orchestrator, and that orchestrator is most of the engineering work.
What “at scale” means here
I’m not talking about 100 articles a day. That’s a spam-farm pattern, and it’s the one Google’s Helpful Content Update was built to nuke. At scale in this article means 3-5 well-edited articles a day, which compounds to 90-150 a month, which is the sweet spot the 2024–2026 case study data points to: sites editing AI-assisted content perform substantially better than fully-unedited operators (one widely-cited Ahrefs case study put the spread at +40% to -90% on traffic).
The factory below does the heavy lifting. The human keeps the steering wheel.
The 9-stage pipeline
1. Keyword pick → 2. SERP scan → 3. Outline draft
                                        ↓
6. Quality audit ← 5. Visual enrich ← 4. Article draft
        ↓
7. Schema injection → 8. Publish (git) → 9. IndexNow ping
Each stage is a Claude Code subagent with a tight scope and a strict input/output contract. The orchestrator is the main Claude Code session, holding the article ID and routing between agents.
Stage 1 — Keyword pick
Input: the cluster you’re writing for this week (e.g., claude-code).
Output: one keyword, one format, one priority — written to a single file /.factory/today.json.
The agent reads keyword-map.csv (your version of the 215-row map), filters by cluster + priority, picks the highest-value unwritten one, and tags it with the format the map prescribes (tutorial, listicle, etc.).
This stage is boring on purpose. Don’t let the agent invent keywords — it will, and they will be the same five generic phrases every other AI site is also writing.
Stage 2 — SERP scan
Input: the keyword from stage 1.
Output: /.factory/serp.md — top-10 organic results, top-3 PAA boxes, top-3 AI Overview citations (where present), shared headings, and one hot-take counter-position.
I use Bright Data MCP because their MCP server is first-party and the free 5K-req/month tier covers a single-author factory. Apify is the drop-in alternative if Bright Data hits friction.
The hot-take counter-position is the single most important field. “Everyone is saying X. Here’s the case for ¬X.” That’s the one paragraph that earns the article its citation in ChatGPT instead of ranking-but-invisible.
Stage 3 — Outline draft
Input: SERP scan. Output: outline as a YAML tree.
The outline must:
- Lead with the direct answer in the first paragraph (≤100 words).
- Include a TLDR with 3-5 bullets at the top.
- Use H2 question patterns (“What is X?”, “How does X work?”, “Is X worth Y?”) — these mirror PAA queries and boost AI Overview citation chance per the 2026 research.
- Have ≥1 comparison table for any tool/topic comparison.
- End with a FAQ section (5+ Q&As).
I keep this prompt in prompts/outline-tutorial.md (one per format).
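For a sense of shape, a hypothetical outline tree for a tutorial-format article (field names are illustrative, not the factory's actual contract):

```yaml
title: "How to X with Y"
format: tutorial
direct_answer: "<=100-word answer to the query, placed in the first paragraph"
tldr:
  - "bullet 1"
  - "bullet 2"
  - "bullet 3"
sections:
  - h2: "What is X?"
  - h2: "How does X work?"
    table: comparison        # >=1 table for any tool/topic comparison
  - h2: "Is X worth Y?"
faq:
  - q: "PAA-anchored question 1"
  - q: "PAA-anchored question 2"
```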
Stage 4 — Article draft
Input: outline, SERP scan, voice rules.
Output: MDX file under /content/blog/[slug].mdx with frontmatter + body.
Three guardrails make or break this stage:
- Voice rules. A 50-line voice.md with the do’s (specific, opinionated, founder-to-founder) and don’ts (banned phrases, hedge words, “in today’s fast-paced world”).
- 30%-original rule. At least 30% of the article must be your own data, screenshots, opinion, or counter-position. The agent flags itself when it can’t hit this — that’s a queue item.
- Length tolerance. Tutorials 1,500-2,500 words. Listicles 2,500-4,000. Comparatifs 2,000-3,500. The agent rewrites if it under- or over-shoots by more than 25%.
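The length guardrail is easy to express in code. A minimal sketch; applying the ±25% tolerance to the band edges is my interpretation:

```python
# Target word-count bands per format, from the guardrails above.
BANDS = {
    "tutorial": (1_500, 2_500),
    "listicle": (2_500, 4_000),
    "comparatif": (2_000, 3_500),
}

def needs_rewrite(word_count: int, fmt: str, tolerance: float = 0.25) -> bool:
    """True when the draft misses its format's band by more than the tolerance."""
    lo, hi = BANDS[fmt]
    return word_count < lo * (1 - tolerance) or word_count > hi * (1 + tolerance)
```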
Stage 5 — Visual enrichment
Input: finished draft.
Output: hero image at /og/[slug].webp, plus any in-article diagrams.
Replicate Flux Schnell at $0.003/image is the cheapest reliable option. The prompt is fixed style across the site — see our brand guidelines. For diagrams, Mermaid renders server-side at build; the agent just writes the Mermaid code into the MDX.
Stage 6 — Quality audit
Input: finished draft. Output: score 0-100 + reasons.
A separate Claude Opus call (the Auditor) scores the draft against a rubric:
| Dimension | Weight |
|---|---|
| Direct answer present in first 100 words | 10 |
| TLDR present, 3-5 bullets | 8 |
| ≥3 specific numbers / cited sources | 12 |
| ≥5 H2 sections, question-format where natural | 10 |
| FAQ section with ≥5 Q&As | 10 |
| At least one strong opinion / counter-position | 12 |
| Format-specific rules (table count, schema type, etc.) | 14 |
| Voice compliance (forbidden phrases, length variation) | 12 |
| Originality estimate (≥30% original) | 12 |
≥85 auto-publishes. 75-84 goes to a queue. Below 75 returns to Stage 4 with the audit reasons fed back as repair instructions.
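The rubric weights and routing thresholds translate directly to code. A sketch; the boolean-check interface is my simplification of whatever the Auditor actually returns:

```python
# Rubric weights from the table above; they sum to 100.
WEIGHTS = {
    "direct_answer": 10,
    "tldr": 8,
    "specific_numbers": 12,
    "h2_sections": 10,
    "faq": 10,
    "strong_opinion": 12,
    "format_rules": 14,
    "voice": 12,
    "originality": 12,
}

def audit_score(checks: dict[str, bool]) -> int:
    """Sum the weights of every rubric dimension the draft passes."""
    return sum(w for dim, w in WEIGHTS.items() if checks.get(dim, False))

def route(score: int) -> str:
    """>=85 publishes, 75-84 queues for review, <75 loops back to Stage 4."""
    if score >= 85:
        return "publish"
    if score >= 75:
        return "queue"
    return "repair"
```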
Stage 7 — Schema injection
Input: approved draft.
Output: JSON-LD blocks injected into the MDX <Layout schema={...}> prop.
Per page type:
| Format | Required schema |
|---|---|
| Tutorial | Article + HowTo + BreadcrumbList + Person + FAQPage |
| Listicle | Article + ItemList + BreadcrumbList + FAQPage |
| Comparatif | Article + ItemList + Product (×N) + BreadcrumbList + FAQPage |
| Tool review | Article + Product + Review + AggregateRating + FAQPage |
I run every page through validator.schema.org’s API in CI — any invalid block fails the build.
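For illustration, the FAQPage block (common to every format in the table) can be generated like this; the other schema types follow the same pattern:

```python
def faq_schema(qas: list[tuple[str, str]]) -> dict:
    """Build a schema.org FAQPage JSON-LD block from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in qas
        ],
    }
```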
Stage 8 — Publish
Input: approved + enriched draft. Output: a git commit, a push, a Cloudflare Pages deploy.
The agent commits with a structured message: factory: publish [slug] (score 87, format: tutorial). That makes it trivial to audit factory output via git log.
Stage 9 — IndexNow ping
Input: the new URL. Output: the URL submitted to Bing, Yandex, and Naver.
POST https://api.indexnow.org/indexnow with the URL list and your IndexNow key. Bing’s index feeds ChatGPT Search visibility and Perplexity’s Brave-backed index. This is a 10-line script that saves days of discovery time on AI search.
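The whole stage is roughly this. Building the payload is shown runnable; the actual POST needs your real key and live URLs:

```python
import json
import urllib.request

def indexnow_payload(host: str, key: str, urls: list[str]) -> dict:
    """Build an IndexNow submission body per the protocol's JSON format."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

def ping(host: str, key: str, urls: list[str]) -> None:
    """POST the payload to the shared IndexNow endpoint."""
    req = urllib.request.Request(
        "https://api.indexnow.org/indexnow",
        data=json.dumps(indexnow_payload(host, key, urls)).encode(),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    urllib.request.urlopen(req)  # 200/202 means accepted
```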
How to set this up in 4 steps
Step 1 — Static site + content collection
If you don’t already have one, start with Astro + the astro:content collection. This article is published from one. The schema for blog frontmatter is a 30-line zod object that catches malformed agent output before it ships.
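For a sense of what the schema guards, here is the kind of frontmatter such a zod object would validate (fields are illustrative; your schema will differ):

```yaml
---
title: "Claude Code SEO content factory"
description: "How the 9-stage pipeline works"
pubDate: 2026-01-15          # must be ISO 8601; zod coerces to Date
format: tutorial             # a z.enum of your format inventory
draft: false
ogImage: /og/claude-code-seo.webp
---
```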
Step 2 — CLAUDE.md master brief
A CLAUDE.md at repo root tells Claude Code:
- What this site is.
- The voice rules.
- The locked decisions (no re-debating brand, color, format inventory).
- The pipeline scope.
- The Definition of Done checklist (everything that must be true before a publish counts).
Keep it under 600 lines. Mine is ~580.
Step 3 — Subagents
Each stage is a Claude Code subagent with its own system prompt. Define them in .claude/agents/ as Markdown files with frontmatter. Names map 1:1 to pipeline stages.
---
name: outliner-tutorial
description: Builds a tutorial outline from SERP scan + keyword. Strict YAML output.
tools: [Read, Write]
---
The orchestrator calls them with Agent(subagent_type: "outliner-tutorial", prompt: "...").
Step 4 — Cron
I trigger the pipeline via Cloudflare Cron at 06:00, 11:00, 14:00, 17:00, 20:00 UTC. Each cron call is a tiny webhook that posts to a queue; a worker pops the queue and runs Claude Code.
Locally, you can just npm run factory:run from the terminal. It works the same way.
Common errors and fixes
| Error | Cause | Fix |
|---|---|---|
| Agent picks the same keyword twice | The factory writes the article but never marks the keyword as used | Maintain a status column in keyword-map.csv; an updater agent flips it to published after Stage 8 |
| FAQs are generic | Outline didn’t seed the FAQ Qs from PAA | Stage 3 must require ≥3 PAA-anchored Qs; reject outline if missing |
| Audit score gets stuck at 80-84 | Voice rule bias | Add explicit examples of good paragraphs to voice.md (positive, not just negative) |
| Schema fails Rich Results Test | Date format | All dates must be ISO 8601 in JSON-LD; the schema-injection agent must re-format |
| IndexNow returns 422 | Bad host key | Generate a fresh IndexNow key, host the txt file at /[key].txt |
Verification
After your first end-to-end run, you should see:
- One MDX file in /content/blog/.
- One image in /public/og/.
- One commit on main with the structured message.
- Cloudflare Pages deploy success.
- Bing Webmaster Tools showing the URL within ~10 minutes.
- (For approved articles) Article live at /blog/[slug] with valid schema (run a Rich Results Test).
If any of those is missing, the failure is in the corresponding stage. The factory log at /.factory/log.jsonl tells you which one.
Going further
- Best AI tools for solopreneurs in 2026 — the rest of the stack that pairs with Claude Code.
- Claude Code vs Cursor in 2026 — pick the right base tool before scaling.
- Beehiiv review 2026 — the newsletter platform we wire to the factory’s “publish” stage.
FAQ
Can Claude Code generate SEO content at scale without getting penalized?
Yes — if you (1) auto-publish only drafts scoring ≥85 in a quality gate, (2) keep at least 30% original analysis per article, and (3) rotate through 8 formats. The penalty story is correlation; the cause is thin, repetitive output.
Why Claude Code Max instead of the API?
Flat $100/mo covers ~5 articles/day. Equivalent API spend at Sonnet 4.6 rates would be $270-540/mo for the same volume. Max also caps token spikes during agentic loops.
Do I need any other tools beyond Claude Code?
Bright Data MCP for SERP scraping (free 5K req/mo), a static site (Astro/Next), and Bing/Google indexing APIs. Optional but recommended: Replicate for hero images, IndexNow for Bing pings.
How long does the setup take?
About 4-6 hours if you already have a content site. The factory itself runs in 8-15 minutes per article once the pipeline is wired.
What's the realistic output ceiling per day?
5 articles/day with Max 5x is comfortable. Past 5/day, you start hitting the 5-hour usage windows; spread runs across the day. Max 20x scales to 15-20/day if you ever need it.