Beyond Basic Prompting
Everything in chapters 1-11 is foundational — essential, but still the basics. Advanced AI programming isn't about more complex prompts; it's about more strategic use of AI across your entire workflow. This chapter introduces seven strategies that developers use to get dramatically better results.
Multi-Persona Prompting
Instead of asking AI to be a generic "senior developer," assign it specific expert personas for different tasks. Different perspectives surface different insights — the same way a team of specialists outperforms a single generalist.
Step 1 — As a React performance expert:
Analyze this component for unnecessary re-renders and
expensive computations.
Step 2 — As a security auditor:
Review the same code for vulnerabilities and data exposure.
Step 3 — As a UX engineer:
Evaluate the user-facing behavior. Are there edge cases
that would confuse users?
Step 4 — Synthesize all three perspectives into a
prioritized list of improvements.
Each persona activates a different "lens" in AI's analysis. A performance expert focuses on rendering cycles and memoization. A security auditor looks for input validation and data leaks. A UX engineer thinks about user behavior and error states. The synthesis step combines all three into actionable priorities.
Structured Reasoning Chains
For complex problems, don't ask AI for the answer directly. Instead, ask it to reason through the problem step by step. This produces dramatically better results for anything involving multiple decisions or trade-offs.
I need to add real-time collaboration to the family planner.
Before proposing a solution, think through these steps:
1. What are the technical options? (WebSocket, SSE, polling)
2. What are the trade-offs of each for my use case?
3. What's the simplest option that handles 10 concurrent users?
4. What changes to my current architecture are needed?
5. What are the main risks or failure modes?
Show your reasoning for each step, then give your
recommendation with justification.
Explicit reasoning chains force AI to think before answering. Without this structure, AI jumps to the first plausible solution. With it, AI explores the problem space systematically and arrives at better-justified conclusions.
AI language models generate tokens sequentially — each word influences the next. When you force structured reasoning, early analysis tokens (trade-offs, constraints, risks) directly influence later conclusion tokens. The reasoning itself improves the answer.
Adversarial Self-Review
After AI generates code, ask it to attack its own output. This technique consistently finds issues that a standard "review this code" prompt misses.
You just wrote this code. Now switch roles.
You are a hostile code reviewer whose job is to find
every possible issue. Be ruthless. Look for:
- Bugs that would only appear under unusual conditions
- Performance problems that emerge at scale
- Security vulnerabilities an attacker could exploit
- Assumptions that might not hold in production
- Race conditions or timing issues
Don't be diplomatic. List every problem you can find.
The instruction "don't be diplomatic" is important. AI has a tendency to be polite about its own output, hedging criticism with "this is mostly fine but..." Forcing a hostile stance produces far more thorough analysis.
Progressive Complexity
Instead of asking for the final complex version immediately, build complexity incrementally. Each stage adds one layer of sophistication, and you verify correctness at each level.
Build a data table component. We'll add complexity in stages.
Stage 1: Static table that renders an array of objects.
Just rows and columns, nothing else.
Stage 2: Add sorting (click column headers to sort).
Stage 3: Add filtering (search input that filters rows).
Stage 4: Add pagination (10 rows per page with controls).
Stage 5: Add row selection with checkboxes.
Start with Stage 1 only. I'll tell you when to proceed.
This strategy prevents the "complexity explosion" where AI tries to build everything at once and produces tangled code. Each stage is simple, testable, and builds cleanly on the previous one. By Stage 5, you have a sophisticated component with a clean architecture.
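One reason staging works is that the early stages can stay pure and testable. A sketch of what Stages 2-4 might produce as standalone helpers (illustrative only — names like `sortRows` and the exact signatures are assumptions, not output the chapter prescribes):

```typescript
// Illustrative helpers for Stages 2-4 of the data table (hypothetical names).
// Keeping each stage's logic in a pure function means every stage can be
// verified on its own before the next layer is added.

type Row = Record<string, string | number>;

// Stage 2: sort by a column without mutating the input array.
function sortRows(rows: Row[], column: string): Row[] {
  return [...rows].sort((a, b) =>
    a[column] < b[column] ? -1 : a[column] > b[column] ? 1 : 0
  );
}

// Stage 3: keep rows where any cell contains the search text (case-insensitive).
function filterRows(rows: Row[], search: string): Row[] {
  const needle = search.toLowerCase();
  return rows.filter((row) =>
    Object.values(row).some((cell) => String(cell).toLowerCase().includes(needle))
  );
}

// Stage 4: slice out one page (pages are 1-indexed).
function paginate(rows: Row[], page: number, pageSize: number): Row[] {
  return rows.slice((page - 1) * pageSize, page * pageSize);
}
```

Stage 5's row selection then only touches component state; the data logic above never has to change, which is exactly the clean layering the strategy aims for.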
Comparative Analysis
Instead of asking "what's the best way to do X," ask AI to compare multiple approaches with explicit criteria. This produces richer analysis and better-justified decisions.
Compare these three state management approaches for my app:
1. React useState + context
2. Zustand
3. Redux Toolkit
Evaluate each on these dimensions:
- Learning curve (for a mid-level React developer)
- Bundle size impact
- Boilerplate required
- DevTools quality
- Suitability for my app size (~15 components, ~5 data entities)
Present as a comparison, then recommend one with reasoning.
The explicit evaluation dimensions prevent AI from defaulting to the most popular option. It forces genuine analysis across criteria that matter for your specific situation.
Rubber Duck Debugging with AI
Traditional rubber duck debugging works by forcing you to explain the problem out loud. AI takes this further — it actually responds, asks clarifying questions, and proposes hypotheses.
I'm stuck on a problem and need to think it through.
Situation: My activity filter works for single-member
selection but breaks with multi-select. The filtered
list shows the right count but the wrong activities.
I've checked:
- The filter function logic (seems correct)
- The state updates (members array updates correctly)
- The component re-renders (it does)
Something is wrong but I can't see what.
Ask me questions to help me narrow down the issue.
Don't guess the answer — help me find it myself.
The instruction "help me find it myself" is key. It tells AI to ask diagnostic questions rather than jumping to solutions. This approach builds your own debugging skills while leveraging AI's ability to ask the right questions — questions you might not think to ask yourself.
Prompt Templates as Code
Treat your best prompts like code: save them, version them, and reuse them. Over time, you build a personal library of high-quality prompts that consistently produce excellent results.
A prompt library isn't just a convenience — it's a compound advantage. Each time you refine a prompt, every future use benefits. After six months, your library of battle-tested prompts gives you consistently better output than someone writing prompts from scratch every time.
Building Your Prompt Library
Build a starter library organized by task type — debugging, refactoring, code review, new features. Save the prompts that work, customize them for your stack, and add your own as you discover prompts that produce consistently good results.
Pro Tip: Version Your Prompts
Store your prompt library in a Markdown file or a dedicated folder in your project. When you find a prompt that consistently produces great results, save it with a name, a description of when to use it, and an example of the output it produces. Over time, this becomes one of your most valuable development assets.
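One way such a saved entry might look (the format is a suggestion, not prescribed by the chapter; the prompt text is the adversarial-review prompt from earlier):

```markdown
## Adversarial Review
**When to use:** Immediately after AI generates any non-trivial code.
**Prompt:**
> You just wrote this code. Now switch roles. You are a hostile code
> reviewer whose job is to find every possible issue. Be ruthless.
> Don't be diplomatic. List every problem you can find.
**Example output:** A blunt, prioritized list of bugs, edge cases,
and security issues, without the hedging of a standard review.
```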
Working Across Multiple AI Sessions
Real projects span days or weeks — far longer than any single AI conversation. Managing context across sessions is an advanced skill that most developers neglect.
- Start each session with a context summary — Paste a brief description of the project, current state, and what you're working on today. This re-establishes context instantly.
- Keep a "project context" document — A Markdown file containing your architecture decisions, current data types, component list, and API endpoints. Paste relevant sections at the start of each session.
- Share code incrementally — Don't dump your entire codebase. Share the specific files relevant to today's task, plus interfaces and types that define the contract.
- Reference previous decisions — "In our last session, we decided to use Zustand for state management. Continue with that approach."
PROJECT CONTEXT:
Family schedule planner — React/TypeScript frontend,
Express/MySQL backend.
CURRENT STATE:
- Milestones 1-5 complete (calendar, CRUD, filtering)
- Working on: milestone 6 (real-time sync)
- Stack: React 18, TypeScript, Zustand, Express, MySQL
KEY TYPES:
[paste Activity and FamilyMember interfaces]
TODAY'S TASK:
Add WebSocket-based sync so changes on one client
appear on others within 2 seconds.
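Whatever transport that session settles on, the client side largely reduces to folding remote change messages into local state. A hedged sketch of that piece (the `SyncMessage` shape and `applySync` helper are assumptions for illustration, not the project's actual code):

```typescript
// Hypothetical sync protocol for the planner: the server broadcasts one
// message per change, and each client folds it into its local activity list.
interface Activity {
  id: string;
  title: string;
  memberId: string;
}

type SyncMessage =
  | { type: "upsert"; activity: Activity }  // created or updated elsewhere
  | { type: "delete"; id: string };         // removed elsewhere

// Pure reducer: unit-testable without any WebSocket plumbing.
function applySync(activities: Activity[], msg: SyncMessage): Activity[] {
  switch (msg.type) {
    case "upsert": {
      const exists = activities.some((a) => a.id === msg.activity.id);
      return exists
        ? activities.map((a) => (a.id === msg.activity.id ? msg.activity : a))
        : [...activities, msg.activity];
    }
    case "delete":
      return activities.filter((a) => a.id !== msg.id);
  }
}
```

On the wire, each client would parse incoming WebSocket frames with `JSON.parse` and hand them to `applySync`; keeping the reducer pure is what makes the two-second sync requirement straightforward to verify in tests.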
Systematic Experimentation
Advanced users don't just accept AI's output — they experiment systematically to find the prompting patterns that work best for their specific workflow.
A/B Test Prompts
Try two versions of a prompt for the same task. Compare output quality. Keep the better one.
Track What Works
Keep a simple log: prompt approach → output quality → lessons learned. Patterns emerge quickly.
Iterate on Structure
Does role-first or goal-first produce better results? Does listing constraints help or hurt? Test and measure.
Refine Over Time
Your best prompts after 6 months will be dramatically better than your best prompts today. The skill compounds.
Put three advanced strategies into practice:
- Strategy 1 — Multi-persona: Take a component you've built and run it through three different AI personas (performance expert, security auditor, UX engineer). Synthesize the findings.
- Strategy 3 — Adversarial review: After generating any piece of code, immediately ask AI to attack it. Fix everything it finds.
- Strategy 7 — Prompt library: Create a Markdown file and save your 5 best prompts from the exercises in this book. Format each with a name, when-to-use description, and the full prompt text.
Key Takeaways
- Multi-persona prompting surfaces different insights by activating different analytical frames
- Structured reasoning chains force AI to think before answering, producing better-justified solutions
- Adversarial self-review catches issues that standard review prompts miss
- Progressive complexity builds sophisticated features cleanly — one layer at a time
- Comparative analysis with explicit criteria prevents AI from defaulting to the most popular option
- Build and maintain a prompt library — it's a compound advantage that grows over time
- Manage context across sessions with project summaries, type definitions, and decision references
- Experiment systematically: A/B test prompts, track results, refine continuously