Chapter 12

Advanced Strategies

You've mastered the fundamentals. Now it's time to go deeper. These advanced strategies are the techniques that 10x developers use daily — the ones that turn AI from a helpful assistant into a force multiplier.


Beyond Basic Prompting

Everything in chapters 1-11 is foundational — essential, but still the basics. Advanced AI programming isn't about more complex prompts; it's about more strategic use of AI across your entire workflow. This chapter introduces seven strategies that developers use to get dramatically better results.


01

Multi-Persona Prompting

Instead of asking AI to be a generic "senior developer," assign it specific expert personas for different tasks. Different perspectives surface different insights — the same way a team of specialists outperforms a single generalist.

Step 1 — As a React performance expert:
Analyze this component for unnecessary re-renders and
expensive computations.

Step 2 — As a security auditor:
Review the same code for vulnerabilities and data exposure.

Step 3 — As a UX engineer:
Evaluate the user-facing behavior. Are there edge cases
that would confuse users?

Step 4 — Synthesize all three perspectives into a
prioritized list of improvements.

Each persona activates a different "lens" in AI's analysis. A performance expert focuses on rendering cycles and memoization. A security auditor looks for input validation and data leaks. A UX engineer thinks about user behavior and error states. The synthesis step combines all three into actionable priorities.
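If you find yourself running the same personas repeatedly, the pattern can be captured as a small helper. A sketch in TypeScript — the persona list and the prompt wording are illustrative, not a fixed recipe:

```typescript
// Sketch: the multi-persona pattern as a reusable prompt builder.
// Persona names and focus areas are illustrative; adapt them to your stack.
interface Persona {
  role: string;
  focus: string;
}

const reviewPersonas: Persona[] = [
  { role: "React performance expert", focus: "unnecessary re-renders and expensive computations" },
  { role: "security auditor", focus: "vulnerabilities and data exposure" },
  { role: "UX engineer", focus: "edge cases that would confuse users" },
];

// Returns one prompt per persona, plus a final synthesis prompt.
function buildPersonaPrompts(code: string, personas: Persona[] = reviewPersonas): string[] {
  const passes = personas.map(
    (p, i) => `Step ${i + 1}: As a ${p.role}, analyze this code for ${p.focus}.\n\n${code}`
  );
  passes.push(
    `Step ${personas.length + 1}: Synthesize all ${personas.length} perspectives into a prioritized list of improvements.`
  );
  return passes;
}
```

Run each returned prompt as its own turn (or its own session), then feed the answers into the synthesis prompt.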

02

Structured Reasoning Chains

For complex problems, don't ask AI for the answer directly. Instead, ask it to reason through the problem step by step. This produces dramatically better results for anything involving multiple decisions or trade-offs.

I need to add real-time collaboration to the family planner.

Before proposing a solution, think through these steps:

1. What are the technical options? (WebSocket, SSE, polling)
2. What are the trade-offs of each for my use case?
3. What's the simplest option that handles 10 concurrent users?
4. What changes to my current architecture are needed?
5. What are the main risks or failure modes?

Show your reasoning for each step, then give your
recommendation with justification.

Explicit reasoning chains force AI to think before answering. Without this structure, AI jumps to the first plausible solution. With it, AI explores the problem space systematically and arrives at better-justified conclusions.

Why This Works

AI language models generate tokens sequentially — each word influences the next. When you force structured reasoning, early analysis tokens (trade-offs, constraints, risks) directly influence later conclusion tokens. The reasoning itself improves the answer.

03

Adversarial Self-Review

After AI generates code, ask it to attack its own output. This technique consistently finds issues that a standard "review this code" prompt misses.

You just wrote this code. Now switch roles.

You are a hostile code reviewer whose job is to find
every possible issue. Be ruthless. Look for:

- Bugs that would only appear under unusual conditions
- Performance problems that emerge at scale
- Security vulnerabilities an attacker could exploit
- Assumptions that might not hold in production
- Race conditions or timing issues

Don't be diplomatic. List every problem you can find.

The instruction "don't be diplomatic" is important. AI has a tendency to be polite about its own output, hedging criticism with "this is mostly fine but..." Forcing a hostile stance produces far more thorough analysis.
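To see the difference in practice, here is a hypothetical snippet (not from the chapter) with exactly the kind of bug a hostile pass catches and a polite "this is mostly fine" pass waves through:

```typescript
// Hypothetical example: looks fine in a quick demo, breaks under
// "unusual conditions", which is what the hostile reviewer hunts for.
function topScoresBuggy(scores: number[]): number[] {
  // Bug 1: sort() without a comparator compares numbers as strings,
  // so [9, 80, 10] sorts to [10, 80, 9].
  // Bug 2: sort() mutates the caller's array in place.
  return scores.sort().slice(0, 3);
}

function topScores(scores: number[]): number[] {
  // Copy first, then sort with a numeric comparator, descending.
  return [...scores].sort((a, b) => b - a).slice(0, 3);
}
```

Both versions pass a casual test with small, similar-magnitude numbers; the adversarial prompt is what surfaces the string comparison and the hidden mutation.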

04

Progressive Complexity

Instead of asking for the final complex version immediately, build complexity incrementally. Each stage adds one layer of sophistication, and you verify correctness at each level.

Build a data table component. We'll add complexity in stages.

Stage 1: Static table that renders an array of objects.
         Just rows and columns, nothing else.

Stage 2: Add sorting (click column headers to sort).

Stage 3: Add filtering (search input that filters rows).

Stage 4: Add pagination (10 rows per page with controls).

Stage 5: Add row selection with checkboxes.

Start with Stage 1 only. I'll tell you when to proceed.

This strategy prevents the "complexity explosion" where AI tries to build everything at once and produces tangled code. Each stage is simple, testable, and builds cleanly on the previous one. By Stage 5, you have a sophisticated component with a clean architecture.
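One way to keep each stage verifiable is to ask for the stage logic as pure functions before any UI wiring. A framework-free sketch of what Stages 2-4 might look like:

```typescript
// Sketch: Stage 2-4 behavior as pure functions, testable in isolation.
type Row = Record<string, string | number>;

// Stage 2: sort by one column without mutating the input array.
function sortRows(rows: Row[], key: string, dir: "asc" | "desc" = "asc"): Row[] {
  const sign = dir === "asc" ? 1 : -1;
  return [...rows].sort((a, b) => (a[key] < b[key] ? -sign : a[key] > b[key] ? sign : 0));
}

// Stage 3: keep rows where any cell contains the query, case-insensitively.
function filterRows(rows: Row[], query: string): Row[] {
  const q = query.toLowerCase();
  return rows.filter((r) =>
    Object.values(r).some((v) => String(v).toLowerCase().includes(q))
  );
}

// Stage 4: slice out one page (page numbers start at 1).
function paginate(rows: Row[], page: number, perPage = 10): Row[] {
  return rows.slice((page - 1) * perPage, page * perPage);
}
```

The component then just composes filter → sort → paginate over its data, and each stage can be checked before the next one is requested.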

05

Comparative Analysis

Instead of asking "what's the best way to do X," ask AI to compare multiple approaches with explicit criteria. This produces richer analysis and better-justified decisions.

Compare these three state management approaches for my app:

1. React useState + context
2. Zustand
3. Redux Toolkit

Evaluate each on these dimensions:
- Learning curve (for a mid-level React developer)
- Bundle size impact
- Boilerplate required
- DevTools quality
- Suitability for my app size (~15 components, ~5 data entities)

Present as a comparison, then recommend one with reasoning.

The explicit evaluation dimensions prevent AI from defaulting to the most popular option. It forces genuine analysis across criteria that matter for your specific situation.
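A dimension like "boilerplate required" also becomes concrete once you see what a store fundamentally is. A hand-rolled sketch in the spirit of option 2 — the names here are illustrative, not Zustand's actual API:

```typescript
// Sketch: the core of a subscribe/set store, minus any React bindings.
// Illustrative only; a real library adds selectors, hooks, and devtools.
type Listener<T> = (state: T) => void;

function createStore<T extends object>(initial: T) {
  let state = initial;
  const listeners = new Set<Listener<T>>();
  return {
    getState: () => state,
    setState: (partial: Partial<T>) => {
      state = { ...state, ...partial };
      listeners.forEach((l) => l(state)); // notify subscribers
    },
    subscribe: (l: Listener<T>) => {
      listeners.add(l);
      return () => listeners.delete(l); // returns an unsubscribe function
    },
  };
}
```

The comparison in the prompt above is really asking how much of this plumbing, and the tooling around it, each option makes you own yourself.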

06

Rubber Duck Debugging with AI

Traditional rubber duck debugging works by forcing you to explain the problem out loud. AI takes this further — it actually responds, asks clarifying questions, and proposes hypotheses.

I'm stuck on a problem and need to think it through.

Situation: My activity filter works for single-member
selection but breaks with multi-select. The filtered
list shows the right count but the wrong activities.

I've checked:
- The filter function logic (seems correct)
- The state updates (members array updates correctly)
- The component re-renders (it does)

Something is wrong but I can't see what.

Ask me questions to help me narrow down the issue.
Don't guess the answer — help me find it myself.

The instruction "help me find it myself" is key. It tells AI to ask diagnostic questions rather than jumping to solutions. This approach builds your own debugging skills while leveraging AI's ability to ask the right questions — questions you might not think to ask yourself.
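For the symptom described above, the dialogue often converges on a question like "does your filter require all selected members, or any of them?" A hypothetical reproduction of that classic single-vs-multi-select bug (the chapter doesn't show the real code):

```typescript
// Hypothetical: behaves identically when one member is selected,
// diverges as soon as two or more are selected.
interface Activity {
  id: number;
  memberIds: number[];
}

// Buggy: shows an activity only if EVERY selected member is on it.
function filterBuggy(activities: Activity[], selected: number[]): Activity[] {
  return activities.filter((a) => selected.every((m) => a.memberIds.includes(m)));
}

// Fixed: shows an activity if ANY selected member is on it.
function filterFixed(activities: Activity[], selected: number[]): Activity[] {
  return activities.filter((a) => selected.some((m) => a.memberIds.includes(m)));
}
```

Because `every` and `some` agree on a one-element array, every single-select test passes, which is exactly why a good diagnostic question beats another guess.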

07

Prompt Templates as Code

Treat your best prompts like code: save them, version them, and reuse them. Over time, you build a personal library of high-quality prompts that consistently produce excellent results.

A prompt library isn't just a convenience — it's a compound advantage. Each time you refine a prompt, every future use benefits. After six months, your library of battle-tested prompts gives you consistently better output than someone writing prompts from scratch every time.


Building Your Prompt Library

Here's a starter library organized by task type. Save these, customize them for your stack, and add your own as you discover prompts that work well.

Category         Template               Description
--------         --------               -----------
Architecture     System Design          Full architecture request with alternatives, trade-offs, and stress-testing
Architecture     Database Schema        Schema design with indexes, constraints, and query validation
Implementation   Component Builder      React/TS component with role, requirements, context, output format
Implementation   API Endpoint           Express endpoint with validation, error handling, security
Quality          Code Review            Multi-dimensional review: bugs, quality, performance, security
Quality          Security Audit         Focused vulnerability scan with exploit descriptions and fixes
Testing          Unit Test Generator    Full test suite: normal cases, edge cases, error cases
Testing          Edge Case Hunter       Adversarial testing to find inputs that break functions
Debug            Bug Diagnosis          Code + error + intent + behavior + tried — the debug template
Debug            Root Cause Analysis    Three ranked hypotheses with verification steps
Git              Commit Message         Conventional commit from diff with scope and description
Refactor         Code Smell Detector    Diagnose → prioritize → fix one at a time

Pro Tip: Version Your Prompts

Store your prompt library in a Markdown file or a dedicated folder in your project. When you find a prompt that consistently produces great results, save it with a name, a description of when to use it, and an example of the output it produces. Over time, this becomes one of your most valuable development assets.
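If your project is already TypeScript, templates can live next to your code with typed slots, so filling a prompt with a missing field fails to compile. A sketch — the template text and field names are illustrative:

```typescript
// Sketch: prompt templates with named, type-checked slots. A tagged template
// records its slot names; omitting a field at fill time is a compile error.
function template<K extends string>(parts: TemplateStringsArray, ...keys: K[]) {
  return (values: Record<K, string>): string =>
    parts.reduce(
      (out, part, i) => out + part + (i < keys.length ? values[keys[i]] : ""),
      ""
    );
}

// The "Bug Diagnosis" entry from the library above, expressed as code.
const bugDiagnosis = template`Code:
${"code"}

Error:
${"error"}

Intent: ${"intent"}
Actual behavior: ${"behavior"}
Already tried: ${"tried"}`;
```

Usage is then `bugDiagnosis({ code, error, intent, behavior, tried })`, and refinements to the wording automatically flow to every future use.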


Working Across Multiple AI Sessions

Real projects span days or weeks — far longer than any single AI conversation. Managing context across sessions is an advanced skill that most developers neglect.

PROJECT CONTEXT:
Family schedule planner — React/TypeScript frontend,
Express/MySQL backend.

CURRENT STATE:
- Milestones 1-5 complete (calendar, CRUD, filtering)
- Working on: milestone 6 (real-time sync)
- Stack: React 18, TypeScript, Zustand, Express, MySQL

KEY TYPES:
[paste Activity and FamilyMember interfaces]

TODAY'S TASK:
Add WebSocket-based sync so changes on one client
appear on others within 2 seconds.

Systematic Experimentation

Advanced users don't just accept AI's output — they experiment systematically to find the prompting patterns that work best for their specific workflow.

🧪 A/B Test Prompts

Try two versions of a prompt for the same task. Compare output quality. Keep the better one.

📈 Track What Works

Keep a simple log: prompt approach → output quality → lessons learned. Patterns emerge quickly.

🧩 Iterate on Structure

Does role-first or goal-first produce better results? Does listing constraints help or hurt? Test and measure.

📚 Refine Over Time

Your best prompts after 6 months will be dramatically better than your best prompts today. The skill compounds.


🧪 Practical Exercise

Put three advanced strategies into practice:


Key Takeaways
