Debugging with AI

Debugging is the most common thing developers do — and one of the weakest areas of most AI workflows. The problem isn't that AI can't help debug. It's that developers hand AI a broken function and ask "what's wrong?" instead of running a real investigation. This guide covers the investigation workflow that actually works.

Last reviewed: Apr 22 2026


Part 1: Why AI Debugging Goes Wrong

The most common AI debugging failure mode: you paste an error message, AI suggests a fix, you apply it, a different error appears, you paste that, AI suggests another fix — and you're in a loop of patches that never solves the root cause. Twenty minutes later, the code is worse and you don't understand why.

This happens because AI is optimizing for "here's a plausible fix" rather than "here's the root cause." AI sees what you show it. If you only show it the error, it can only work with the error. The fix may silence the symptom while leaving the underlying problem — or create three new ones.

The developers who debug effectively with AI aren't using AI differently — they're approaching debugging differently. They run an investigation first, then bring AI in with enough context to give a meaningful answer.

The Debugging Principle

Understand before you fix. Before asking AI for a solution, understand what the error is actually telling you, where in the code it originates, and what the last working state was. AI with this context gives correct diagnoses. AI without it gives guesses.

The Three Debugging Failure Patterns

  1. The fix loop. Each new error gets pasted into the chat, each suggested patch gets applied, and the root cause is never identified.
  2. Symptom silencing. A fix makes the error disappear while leaving the underlying problem in place, or even creating new ones.
  3. Context starvation. The AI only sees the error message, so it can only guess at causes it was never shown.

Part 2: Reading Error Messages Before Asking AI

Before pasting an error to AI, spend 60 seconds reading it yourself. This sounds obvious, but most developers glance at the error, feel confused, and immediately reach for AI. A minute of actual reading changes what you ask — and dramatically improves the quality of the answer you get.

The Four Parts of an Error Message

Most error messages have four components, even if they're not clearly labeled:

  1. Error type. TypeError, NullPointerException, ECONNREFUSED. This tells you the category of failure — type mismatch, null reference, network connection refused. It constrains the possible causes significantly.
  2. Error message. Cannot read properties of undefined (reading 'map'). This is the specific symptom. Read it literally — "cannot read property 'map' of undefined" means something is undefined that you expected to be an array.
  3. Stack trace. The chain of function calls that led to the error. The top frame is where it exploded. The frames below it show how you got there. Your code is usually one of those frames.
  4. Context. HTTP status codes, request paths, line numbers, timestamps. These narrow the search space before you've looked at a single line of code.
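The first two parts are worth reading completely literally. A minimal Node sketch (the `users` variable is hypothetical) of how the error type and message map onto code:

```javascript
// The error type and message, read literally: something expected to be an
// array is undefined, and the property being read ('map') names the call site.
const users = undefined; // e.g. state that was never initialized

let caught;
try {
  users.map((u) => u.name);
} catch (err) {
  caught = err;
}

console.log(caught.name);    // "TypeError": the error type
console.log(caught.message); // the specific symptom, naming 'map'
```

The `name` tells you the category (type mismatch), and the `message` tells you which property access failed, which usually pins down the exact expression in your code.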

What to Extract Before You Prompt

You (internal monologue before prompting)

Error: TypeError: Cannot read properties of undefined (reading 'filter')

Stack trace says: UserList.jsx:47, called from Dashboard.jsx:23

Line 47 of UserList.jsx is: const active = users.filter(u => u.isActive)

So users is undefined. Why would users be undefined? I'm fetching it in a useEffect. What if the component renders before the fetch completes? Yes — users starts as undefined, not an empty array. The fetch is async, so on first render, users is undefined and calling .filter() crashes.

In 60 seconds of reading, you've identified the root cause: users isn't initialized to []. You don't even need to ask AI — but if you do, you can ask a targeted question:

You

In this React component, users is initialized as undefined and set by a useEffect fetch. On first render before the fetch completes, calling users.filter() crashes. Here's the component:

[paste the component]

What's the cleanest fix — initializing users as [], adding a null check, or something else? And should I also add a loading state?

This prompt gets a specific, correct answer on the first try because you diagnosed the problem yourself and you're now asking a targeted question about the fix.
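The fix itself is small. A plain-JavaScript sketch of the diagnosis (here `renderUserList` is a hypothetical stand-in for the component body, so the behavior can be shown outside React):

```javascript
// Stand-in for the component body: it assumes `users` is always an array.
function renderUserList(users) {
  const active = users.filter((u) => u.isActive);
  return active.length;
}

// First render, before the fetch resolves: state is undefined, so this throws
// "Cannot read properties of undefined (reading 'filter')".
let firstRenderError;
try {
  renderUserList(undefined);
} catch (err) {
  firstRenderError = err;
}

// The fix: initialize state to an empty array (in React, useState([])),
// so the first render filters an empty list instead of crashing.
const safeFirstRender = renderUserList([]); // 0 items until the data arrives
```

In the actual component the change is `useState([])` instead of `useState()`; a separate loading flag keeps "no data yet" distinguishable from "fetched an empty list".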

The Error You See Is Not Always the Error You Have

Sometimes an error surfaces three function calls away from its origin. A null value returned from a database layer causes a crash in a rendering component — and the error points to the rendering component, not the database layer. Read the full stack trace, not just the top frame.


Part 3: The Investigation Workflow

Debugging is a structured investigation, not a guessing game. Before asking AI anything, run through these steps. They take 2–5 minutes. They prevent 30-minute fix loops.

Step 1: Reproduce It Reliably

A bug you can't reproduce reliably is much harder to fix than one you can trigger every time. Before anything else, figure out the exact conditions that produce the error:

  • What input or data triggers it?
  • What sequence of actions leads to it?
  • Does it happen every time, or only sometimes?
  • Does it depend on the environment: browser, OS, logged-in state, data volume?

Intermittent bugs are almost always race conditions, timing issues, or state mutation. Consistent bugs are almost always logic errors or missing null checks. Knowing which category you're in shapes the entire investigation.

You

This bug only happens when the user's list has more than 100 items, and only on the second time they open the modal. It works fine the first time. Here's the component and the data fetching logic:

[paste component]
[paste fetch function]

The second open, the data seems stale. Could this be a caching issue, or is the component state not being reset when the modal closes?

The reproduction conditions — "100+ items, second open" — immediately narrow the hypothesis space for AI. The answer will be about stale state or cache, not about rendering logic.

Step 2: Establish the Last Working State

The most underused debugging technique: figure out when it started happening. If the bug is new, something changed. Use git:

You

This started failing after my last commit. Here's the diff:

[git diff output]

The error is: [error message]. What in this diff could cause that error?

AI with a diff can reason about causation precisely. It's not guessing what changed — you're showing it exactly what changed. This is one of the highest-signal debugging prompts you can write.

Step 3: Isolate the Failure

If you can't find the cause in the stack trace, bisect the problem. Comment out half the code. Add console logs at the boundaries. Simplify to the smallest example that still fails:

You

I've isolated the bug to this function — when I replace the call to processOrder() with a hardcoded value, the error goes away. So the bug is in processOrder. Here it is:

[paste processOrder function]

Input that fails: { items: [], total: 0, userId: null }. What's the bug?

You've done the isolation work. AI now has a specific function and a specific input that reproduces the failure. The answer will be correct.
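For illustration, here is one plausible bug consistent with that failing input. This `processOrder` is hypothetical, not from the original scenario, and the real function could fail differently:

```javascript
// Hypothetical: crashes whenever userId is null, regardless of items or total.
function processOrder(order) {
  const reference = "ORD-" + order.userId.toString(); // throws when userId is null
  const total = order.items.reduce((sum, item) => sum + item.price, 0);
  return { reference, total };
}

// The failing input from the isolation step throws a TypeError,
// while any order with a non-null userId processes fine:
// processOrder({ items: [], total: 0, userId: null })
```

Note that the empty `items` array is harmless here (`reduce` with an initial value handles it); the isolation plus the specific input is what lets the null `userId` jump out.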

Step 4: Form a Hypothesis First

Before asking AI, write down your best guess. Even if it's wrong, having a hypothesis changes how you read the AI's answer — you're evaluating it rather than accepting it. And sometimes forming the hypothesis solves the problem before you even ask.

You

My hypothesis: calculateDiscount() is returning NaN when the coupon code is empty string instead of null. The empty string fails the coupon != null check but then causes a math error downstream. Here's the function:

[paste function]

Is my hypothesis correct, and if so, what's the cleanest fix?

This prompt is efficient — AI can confirm or correct your hypothesis in one sentence, then give you the fix. No back-and-forth needed.
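The empty-string trap in that hypothesis is easy to demonstrate. A sketch with a hypothetical lookup table (`lookupDiscount` and the coupon codes are made up):

```javascript
const discounts = { SAVE10: 10 }; // known coupon codes

function lookupDiscount(code) {
  return discounts[code]; // undefined for any unknown code, including ""
}

function calculateDiscount(coupon) {
  if (coupon != null) {
    return lookupDiscount(coupon); // "" passes the null check and falls through
  }
  return 0;
}

const noCoupon = 100 - calculateDiscount(null);  // 100: null is handled
const emptyCoupon = 100 - calculateDiscount(""); // NaN: undefined poisons the math
```

This is the general shape of many NaN bugs: a falsy-but-not-null value slips past a `!= null` guard, a lookup quietly returns `undefined`, and the arithmetic downstream turns into NaN far from the real cause.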


Part 4: Interpreting Stack Traces

Stack traces are the most information-dense artifact in debugging. Most developers read the first line and stop. Learning to read a full stack trace cuts debugging time dramatically — with or without AI.

Reading a JavaScript Stack Trace

TypeError: Cannot read properties of undefined (reading 'name')
    at formatUser (utils/format.js:12:20)
    at UserCard (components/UserCard.jsx:34:15)
    at renderWithHooks (react-dom/cjs/react-dom.development.js:14985:18)
    at mountIndeterminateComponent (react-dom/cjs/react-dom.development.js:17811:13)
    at beginWork (react-dom/cjs/react-dom.development.js:19049:16)

How to read this:

The investigation starts at the top frame, formatUser at utils/format.js:12, where it crashed, and works downward to the frame that called it in your code: UserCard at components/UserCard.jsx:34. The react-dom frames below are framework internals: noise. What called what in your code is the signal.

Reading a Python Traceback

Traceback (most recent call last):
  File "app/routes/users.py", line 47, in create_user
    user = await user_repo.create(data)
  File "app/repositories/user.py", line 23, in create
    hashed = hash_password(data.password)
  File "app/utils/auth.py", line 8, in hash_password
    return bcrypt.hashpw(password.encode(), bcrypt.gensalt())
AttributeError: 'NoneType' object has no attribute 'encode'

Python tracebacks read top-to-bottom, with the most recent call at the bottom. The error is at the bottom: password is None, so calling .encode() on it fails. The chain shows how we got there: route → repository → utility function. The fix is almost certainly in the route or repository — password should have been validated before reaching hash_password.

You

Here's a Python traceback. Read it and tell me: (1) where the error actually originated, (2) what the root cause is, and (3) where the fix should go — in the utility function or upstream:

[paste traceback]

Here's the code for each file mentioned in the trace:

[paste relevant functions]

Asking AI to identify the origin point, the root cause, and where the fix belongs is important. AI often suggests fixing the symptom (adding a null check in the utility function) when the real fix is validation upstream. Asking the "where should the fix go" question explicitly prevents this.

Reading a Network Error

Network errors in the browser console require a different approach. The error in the console is usually less informative than the actual HTTP response:

POST https://api.example.com/users 422 (Unprocessable Entity)
Failed to load resource: the server responded with a status of 422

The browser console shows you the status code but not the body. Open DevTools → Network → find the request → look at the Response tab. The actual error is there — usually something like:

{
  "detail": [
    {
      "loc": ["body", "email"],
      "msg": "value is not a valid email address",
      "type": "value_error.email"
    }
  ]
}

That's the error to paste to AI, not the browser console line.

Always Check the Network Tab

For any bug involving a frontend calling an API, the first step is opening DevTools → Network and finding the request. The status code, request payload, and response body together contain more information than any error message in your code. Get this before asking AI anything.
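You can also surface that body in code, so the real detail ends up in your logs instead of only in DevTools. A sketch (`describeFailure` is a hypothetical helper; assumes a Fetch-style Response, available in Node 18+ and all modern browsers):

```javascript
// Turn a failed Response into a message that includes the body,
// not just the status line the console shows.
async function describeFailure(res) {
  const body = await res.text(); // the actual validation detail lives here
  return `${res.status}: ${body}`;
}

// Simulating the 422 above:
const res = new Response(
  JSON.stringify({
    detail: [{ loc: ["body", "email"], msg: "value is not a valid email address" }],
  }),
  { status: 422 }
);

describeFailure(res).then((msg) => console.log(msg));
// 422: {"detail":[{"loc":["body","email"],"msg":"value is not a valid email address"}]}
```

One caveat: a Response body is a one-shot stream, so read it once and keep the string if you also need to parse it as JSON.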


Part 5: More Context vs. Fresh Start

One of the most important judgment calls in AI debugging: when to give AI more context in the current conversation, and when to start a new conversation. Getting this wrong wastes time in both directions.

When to Give More Context

Stay in the current conversation when:

  • The AI's earlier answers show it understands the bug and the surrounding code
  • You've learned something new (a log line, a reproduction condition) that refines the current line of investigation
  • The last suggestion was wrong in a detail, not in direction

When to Start Fresh

Start a new conversation when:

  • You've been through several failed fix attempts and the conversation is polluted with wrong hypotheses
  • The investigation has changed direction and the old context is now misleading
  • The AI keeps circling back to suggestions you've already ruled out

You (fresh start prompt)

I've been debugging a session expiry bug. Here's what I know so far:

  • Users are being logged out randomly, roughly every 15 minutes
  • The session token is valid — the issue is the server rejecting it
  • Server logs show TokenExpiredError with expiry set 15 minutes in the past
  • I've already checked: the token TTL is set to 24 hours in config
  • I've ruled out: clock skew on the client, token not being refreshed on activity

Here's the auth middleware and the token generation code:

[paste code]

Where should I look next?

This fresh-start prompt is efficient because it front-loads what you know and what you've eliminated. AI doesn't waste time suggesting things you've already tried.


Part 6: Specific Bug Categories

Different types of bugs have different investigation patterns. Here are the most common categories and how to approach them with AI.

Async and Race Condition Bugs

These are the hardest bugs to debug because they're intermittent and the failure is often far from the cause. The key signals: bug only appears under load, bug depends on timing, bug appears then disappears when you add logging (which changes timing).

You

I have an intermittent bug that only appears under load. Symptoms: user sees stale data in the UI after an update, but refreshing shows the correct data. This happens maybe 1 in 20 requests. My hypothesis is a race condition between the mutation and the cache invalidation. Here's the relevant code:

[paste mutation and cache invalidation logic]

Is this a race condition? If so, what's the exact sequence of events that would produce stale data?

Asking AI to describe the exact sequence of events that produces the bug is the key prompt for async bugs. When AI can describe the bad path, you can confirm it and then ask for the fix.
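A minimal sketch of one such bad path (`mutate`, `refreshCache`, and the timings are all hypothetical): the cache refresh races a slower write and loses, so the cache keeps the old value even though the write succeeds.

```javascript
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const db = { name: "old" };
let cache = { ...db };

async function mutate(name) {
  await delay(20); // the write takes 20ms to land
  db.name = name;
}

async function refreshCache() {
  await delay(5); // the refetch finishes before the slow write
  cache = { ...db };
}

// Bad path: fire the mutation, refresh without waiting for it.
async function badPath() {
  const write = mutate("update-1");
  await refreshCache(); // reads the database before the write lands
  await write;          // the write succeeds, but the cache is already stale
  return cache.name;    // "old"
}

// Good path: let the write land, then refresh.
async function goodPath() {
  await mutate("update-2");
  await refreshCache();
  return cache.name; // "update-2"
}
```

This is also why a page refresh "fixes" it: the refresh triggers a fresh read after the write has long since landed.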

Type Errors and Null Reference Bugs

The most common bug category. Almost always caused by: data not being what you expected it to be (null instead of array, string instead of number), or data being in the wrong shape (missing field, extra nesting).

You

Getting TypeError: Cannot read properties of undefined. I've traced it to this line:

const total = order.items.reduce((sum, item) => sum + item.price, 0);

I added logging and found that order exists but order.items is undefined. Here's where order comes from — it's the response from this API endpoint:

[paste API response and fetch code]

Is the API returning a different shape than expected, or is there a transformation step somewhere that's dropping the items field?
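Whichever the answer turns out to be, the crash site can be made tolerant while you chase the shape question upstream. A sketch (`orderTotal` is a hypothetical wrapper around the failing line):

```javascript
function orderTotal(order) {
  const items = order.items ?? []; // tolerate a missing items field
  return items.reduce((sum, item) => sum + item.price, 0);
}

orderTotal({ id: 7 });                               // 0, no crash
orderTotal({ items: [{ price: 5 }, { price: 3 }] }); // 8
```

Treat the `?? []` as a stopgap, not the fix: if the API contract says `items` is always present, silently defaulting to an empty array hides the upstream bug you actually want to find.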

Performance Bugs

Performance bugs are invisible to error messages. The symptoms are: slow page renders, high CPU, memory growth over time. The investigation tool is profiling, not logging.

You

React DevTools profiler shows UserList re-rendering 50 times when I type in the search box. Each render takes ~40ms. Here's the component:

[paste UserList component]

And here's the parent component that renders it:

[paste parent]

Why is UserList re-rendering on every keystroke, and what's the most targeted fix?

The profiler data — 50 re-renders, 40ms each — is the key context. Without it, AI would guess at the cause. With it, AI can identify the specific re-render trigger (probably an object or function reference being recreated on each render) and give you a precise fix.
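That re-render trigger is easy to see outside React. A sketch of the shallow prop comparison that memoization relies on (`shallowEqual` here is a simplified stand-in, not React's own implementation):

```javascript
function shallowEqual(a, b) {
  const keys = Object.keys(a);
  return (
    keys.length === Object.keys(b).length &&
    keys.every((key) => a[key] === b[key])
  );
}

// Two renders of the parent producing the "same" props for the child:
const render1 = { query: "ann", onSelect: () => {} };
const render2 = { query: "ann", onSelect: () => {} };

shallowEqual(render1, render2); // false: onSelect is a fresh function each render

// With a stable reference (what useCallback gives you in React), it passes:
const stableOnSelect = () => {};
shallowEqual(
  { query: "ann", onSelect: stableOnSelect },
  { query: "ann", onSelect: stableOnSelect }
); // true
```

Every keystroke re-renders the parent, every parent render creates a fresh `onSelect`, and the memoized child sees "changed" props 50 times in a row.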

Environment and Configuration Bugs

"It works on my machine" bugs. Caused by: different environment variables, different database state, different OS behavior, different dependency versions.

You

This works locally but fails in CI. Error in CI: [error]. What's different between my local env and CI:

  • Local: Node 20, macOS, .env file present
  • CI: Node 20, Ubuntu, environment variables set in GitHub Actions
  • The .env key that differs: DATABASE_URL format — local uses localhost:5432, CI uses a Docker service container

Here's the database connection code:

[paste]

Could the DATABASE_URL format difference cause this error, or is it something else?
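One quick check before prompting: parse both connection strings and compare the parts. The values below are hypothetical, but the WHATWG URL parser (Node and browsers) handles the `postgres://` scheme:

```javascript
const local = new URL("postgres://app:secret@localhost:5432/app_dev");
const ci = new URL("postgres://app:secret@postgres:5432/app_test");

console.log(local.hostname, local.port); // localhost 5432
console.log(ci.hostname, ci.port);       // postgres 5432  (the Docker service name)
```

If the connection code special-cases `localhost`, or builds the URL by string concatenation instead of parsing it, the hostname difference is exactly the kind of thing that works locally and fails in CI.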


Part 7: Debugging Prompt Library

Copy-paste prompts for the debugging scenarios that come up most often. Each one is structured around what AI actually needs — not just the error, but the context, the isolation, and the hypothesis.

Error + Stack Trace

I'm getting this error:

[ERROR TYPE]: [error message]

Stack trace:
[paste full stack trace]

Here is the code at the point where it failed:
[paste the function/component that appears at the top of the stack trace]

And here is what called it:
[paste the caller if visible in the stack trace]

What is the root cause of this error, and where is the best place to fix it?

Git Diff Debugging

This started failing after my latest commit. The error is:

[paste error]

Here is the diff of what changed:
[paste: git diff HEAD~1]

What in this diff could cause this error? Walk through it systematically.

Intermittent / Race Condition

I have an intermittent bug with these characteristics:
- Happens roughly [frequency]
- Conditions: [what makes it more/less likely — load, timing, data state]
- When it happens: [description of the wrong behavior]
- When it doesn't happen: [what normal looks like]

My hypothesis: [your best guess at the cause]

Relevant code:
[paste the code involved]

Is my hypothesis correct? If so, describe the exact sequence of events
that produces the bug. If not, what's more likely?

Isolated Reproduction

I've isolated the bug to this function:

[paste function]

Input that fails: [paste the specific input]
Input that works: [paste a similar input that doesn't fail]
Error when it fails: [paste error or wrong output]

What is the bug? Is it the input handling, the logic, or an edge case
in how [specific variable/operation] is handled?

Wrong Output (No Error)

This code runs without errors but produces the wrong result.

Expected: [describe what it should produce]
Actual: [describe what it actually produces]

Input I'm using: [paste or describe the input]

Code:
[paste the function]

Walk through what the code actually does with this input, step by step.
I want to understand where the output diverges from what I expected.

Performance Regression

This [component/function/query] is too slow.

Symptoms: [describe — slow render, high CPU, memory growth, slow query]
Profiling data: [paste profiler output, query explain plan, or timing measurements]

Code:
[paste the slow code]

What is causing the performance problem? Focus on the most impactful
cause, not a list of every possible optimization.

Debugging a Fix That Didn't Work

I applied this fix to address [original problem]:

[paste what you changed]

The original error is gone, but now I'm getting:
[paste new error or wrong behavior]

Here's the current state of the code:
[paste updated code]

Was my fix wrong, or did it expose a second problem? What should I do now?

"I Have No Idea" Debugging

I'm stuck on a bug I can't figure out. Here's everything I know:

Error: [paste error]
When it happens: [describe conditions]
What I've tried: [list things you've already attempted]
What I've ruled out: [list hypotheses you've eliminated]

All relevant code:
[paste everything involved — even if you're not sure it's relevant]

Help me think through this systematically. What questions would you ask
to narrow down the cause?

Debugging with AI — Summary

  • Understand before you fix: spend 60 seconds reading the error yourself before prompting.
  • Run the investigation first: reproduce reliably, check what changed, isolate the failure, form a hypothesis.
  • Read the full stack trace, not just the top frame; the error you see is not always the error you have.
  • For any frontend-to-API bug, get the status code, request payload, and response body from the Network tab before asking anything.
  • Start a fresh conversation when the current one is polluted with failed fixes, and front-load what you know and what you've ruled out.

Related Guides

When AI Gets It Wrong: A Field Guide

A catalog of the nine ways AI-generated code fails, with techniques for catching each category before it ships.

AI Prompt Library

53 ready-to-use prompts including a full debugging section for code review, security, and refactoring.

Testing with AI

Write tests that catch bugs before they reach debugging. TDD workflows, edge case generation, and mocking patterns.
