
Highlights

  • I’ve written about why having your AI coding assistant plan before it codes lets you ship faster than jumping straight to code. It’s my method for making my AI smarter with every feature. (View Highlight)
  • When you’re planning with AI, you’re running parallel research operations—each one a specialized agent gathering different kinds of knowledge. Then you work together: The agents bring findings, you make decisions, and together you combine and distill everything into one coherent plan. (View Highlight)
  • I use eight research strategies, depending on the fidelity level, which refers to the degree of difficulty. Fidelity One is quick fixes like one-line changes, obvious bugs, and copy updates. Fidelity Two covers features spanning multiple files with clear scope but non-obvious implementation. Fidelity Three covers major features where you don’t even know what you’re building yet. (View Highlight)
  • How to make this compound: To make sure this issue wouldn’t happen again, I updated my @kieran-rails-reviewer agent—one of the specialized reviewers that automatically checks plans and code as part of my compounding engineering flow. I added to its checklist: “For any background job that calls external APIs—does it handle rate limits? Does it retry? Does it leave users in partial states?” We forgot to retry once. The system won’t let us forget again. (View Highlight)
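The checklist items above (rate limits, retries, partial states) map to a small amount of code. A minimal Ruby sketch, assuming a hypothetical `SyncJob` and `RateLimitError` rather than the author’s actual Rails job:

```ruby
# Hedged sketch of a background job that retries external API calls on
# rate limits instead of leaving the user in a partial state.
# SyncJob and RateLimitError are illustrative names, not real code
# from the article.
class RateLimitError < StandardError; end

class SyncJob
  MAX_ATTEMPTS = 3

  # Runs the given API call, retrying with exponential backoff when
  # the external service rate-limits us. After MAX_ATTEMPTS the error
  # propagates so the job system can surface the failure.
  def perform(&api_call)
    attempts = 0
    begin
      attempts += 1
      api_call.call
    rescue RateLimitError
      raise if attempts >= MAX_ATTEMPTS
      sleep(2**attempts * 0.01) # backoff kept short for demonstration
      retry
    end
  end
end
```

In a real Rails app the same idea is usually expressed declaratively (e.g., ActiveJob’s `retry_on` with a `wait` option) rather than with a hand-rolled loop.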
  • How to make this compound: When the agent finds a particularly useful pattern, I have it automatically save the key findings to docs/*.md files in my project. For instance, I’ve saved “docs/pay-gem-upgrades.md” for migration patterns and “docs/pricing-research.md” for pricing insights. Next time a similar question comes up, the agent checks these documents first before searching the web. My knowledge base is constantly growing and improving. (View Highlight)
  • Why this compounds: Every time you update a dependency (a library your code relies on), the knowledge auto-updates. You’re never working with stale information. (View Highlight)
  • You don’t need to build everything from scratch. I’ve open-sourced my planning system on Every’s GitHub marketplace. Install it in Claude Code, and you’ll have a working /plan slash command and research agents immediately. You can also use my plugin in Claude Code or Droid. (View Highlight)
  • Strategy 1: Reproduce and document
    What it does: Attempts to reproduce bugs or issues before planning fixes
    When to use it: Fidelity One and Two, especially bug fixes
    The agent’s job: Create a step-by-step reproduction guide
    Prompt: “Reproduce this bug, don’t fix it, just gather all the logs and info you need.” (View Highlight)
  • Strategy 2: Ground in best practices
    What it does: Searches the web for how others solved similar problems
    When to use it: All fidelities, especially unfamiliar patterns
    The agent’s job: Find and summarize relevant blog posts, documentation, and solutions
    Agent: “@agent-best-practices-researcher”
    This strategy works for anything where someone else has already solved your problem—things like technical architecture, copywriting patterns, pricing research, or upgrade paths. When I needed to upgrade a gem—a pre-built code library I use—that was two versions behind, I had an agent search: “upgrade path from version X to Y,” “breaking changes between versions,” “common migration issues.” It found the official upgrade guide, plus three blog posts from engineers who’d done the same upgrade and hit edge cases. That research took three minutes and prevented hours of trial-and-error debugging. (View Highlight)
  • How to make this compound: I created an “@event-tracking-expert” agent that distills everything about how we do tracking—our helper methods, our event format, when to track versus when not to. Now when it’s planning any feature that needs tracking, that specialist agent runs automatically. I don’t search the codebase from scratch anymore—the expert already knows our patterns. (View Highlight)
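What an event-tracking expert distills can be as small as one helper that enforces the event format. A hypothetical Ruby sketch (the module name, fields, and naming convention are illustrative, not the author’s actual helpers):

```ruby
# Hedged sketch of the kind of convention an "@event-tracking-expert"
# agent would encode: every event gets a namespaced name, a timestamp,
# and flat properties. EVENTS stands in for a real analytics backend.
module EventTracking
  EVENTS = []

  # Tracks an event as "object.action" (e.g. "subscription.upgraded").
  # Rejects nested property hashes so every event stays queryable.
  def self.track(object, action, properties = {})
    if properties.values.any? { |v| v.is_a?(Hash) }
      raise ArgumentError, "event properties must be a flat hash"
    end
    EVENTS << {
      name: "#{object}.#{action}",
      properties: properties,
      tracked_at: Time.now.utc
    }
  end
end
```

Once a convention like this exists, the specialist agent’s job is mostly pointing new features at it rather than re-deriving the format each time.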
  • Strategy 4: Ground in your libraries
    What it does: Reads source code of installed packages and gems
    When to use it: When using fast-moving or poorly documented libraries
    The agent’s job: Analyze the source code to understand what’s possible (View Highlight)
  • Strategy 5: Study git history
    What it does: Analyzes commit history (the log of all past changes to your code) to understand intent
    When to use it: Refactors, continuing work, understanding “why”
    The agent’s job: Research past decisions and their context (View Highlight)
  • Strategy 6: Vibe prototype for clarity
    What it does: Rapid prototyping in a separate environment to clarify requirements
    When to use it: Fidelity Three, UX uncertainty, exploratory work
    The agent’s job: Quickly build throwaway versions you can interact with
    Prompt: “Create a working prototype, in the style of a mockup using React and Next, grayscale of XYZ” (View Highlight)
  • Strategy 7: Synthesize with options
    What it does: Combines all research into one plan showing multiple approaches with tradeoffs
    When to use it: End of the research phase, before implementation
    The agent’s job: Present 2-3 solution paths with honest pros and cons
    After running strategies 1-6, I have an agent synthesize everything: “Based on all this research, show me three ways to solve this problem. For each approach, tell me: implementation complexity, performance impact, maintenance burden, and which existing patterns it matches.” For syncing (View Highlight)
  • Strategy 8: Review with style agents
    What it does: Runs the completed plan through specialized reviewers that check for your preferences
    When to use it: Final planning step, before implementation
    The agent’s job: Catch misalignments with your coding style and architecture preferences
    I have three review agents that run automatically:
    Simplification agent: Flags over-engineering. “Do we really need three database tables for this? Could one table with a type field work?” (View Highlight)
  • Before prompting Claude Code or Cursor to build it, spend 15-20 minutes researching:
    1. Best practices: How have others solved similar problems? Search the web for blog posts, Stack Overflow discussions, and documentation.
    2. Your patterns: How have you solved similar problems? Search your existing codebase for comparable features.
    3. Library capabilities: What do your tools actually support? If you’re using a specific code library, have AI read its documentation or source code. (View Highlight)
  • Have AI synthesize this research into a plan showing:
    1. The problem being solved (one clear sentence)
    2. Two or three solution approaches (with honest pros and cons of each)
    3. Which existing code patterns this should match
    4. Any edge cases or security considerations (View Highlight)
  • Ship the feature based on the plan, then compare the final implementation to the original plan. Where did you diverge? Why? What would have made the plan better? Take 10 minutes to codify one learning. The simplest way: Add it to your CLAUDE.md file. Write one rule: “When doing X type of work, remember to check Y,” or “I prefer approach A over approach B because of reason C.” (View Highlight)
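As an illustration, a codified learning in CLAUDE.md might look like this (the wording is hypothetical; the rules echo lessons from earlier in the piece):

```markdown
## Learned rules

- When planning background jobs that call external APIs, check rate
  limits, retries, and partial user states before approving the plan.
- Prefer one table with a `type` field over several near-identical
  tables unless the schemas genuinely diverge.
- After upgrading a gem, save the migration notes to `docs/` so the
  next upgrade starts from what we already learned.
```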