When "Almost Working" Is the Hardest State
A completely broken app is easy to deal with. You start over, or you ask AI to rebuild the failing piece from scratch. But "almost working" is different. 80% of the app is fine. One thing doesn't behave right. You ask AI to fix it, the fix breaks something else, you fix that, the original problem comes back. You're in a loop.
This is the most frustrating state in vibe coding, and it has specific causes. AI-generated code has predictable failure modes: it handles the happy path well and falls apart at edge cases, state transitions, authentication boundaries, and environment differences. Understanding which kind of failure you're dealing with points you to the right fix.
Describe the problem to AI precisely: what you expected, what actually happened, and any error message you see. Vague bug reports get vague fixes; specific ones get targeted fixes. And always save a copy of the current state before applying any fix, so you have something to go back to if the fix makes things worse.
AI Introduced a Bug
Something worked yesterday and doesn't today. You made a change, AI generated new code, and now a feature that was fine is broken. This is the most common category and also the most fixable.
Find What Changed
The single most useful thing you can do is identify the exact change that caused the regression. If you saved a working version (or have it in your conversation history), compare the two versions side by side. Look for:
- Functions that were renamed or removed
- Event listeners that got moved or deleted
- Data that used to flow from one part of the app to another and now doesn't
- IDs or class names that changed, breaking something that referenced them
If you don't have a saved version, look in your browser's developer tools. Open the console (F12, then click "Console"). Errors there — red text with file names and line numbers — tell you exactly where the code is failing.
Tell AI What the Error Says
Copy the full error message and give it to AI:
My app is showing this error in the browser console:
"TypeError: Cannot read properties of undefined (reading 'map') at renderList (app.js:47)"
This happened after we added the search feature. The list used to display fine. Here is my current app.js:
[paste the code]
The combination of the error message, the line number, and the context of when it started gives AI enough to find the actual cause rather than guess at it.
Isolate the Problem
If you're not sure which recent change caused the bug, tell AI to revert only the last change and test whether the bug disappears. Don't ask AI to fix the new feature and the regression at the same time — that produces code that patches both things together and becomes hard to understand. Fix the regression first, then re-add the feature more carefully.
The One-Undo Trick
Ask AI: "Revert only the [specific feature] changes and restore the previous version of [specific function or section]. Don't change anything else." This isolates the regression clearly and usually resolves it in one step.
Strange State Behavior
Your app does something unexpected. Data appears in the wrong place. A counter resets when it shouldn't. A list shows items from a previous session. A form submits twice. These are all state problems — the app's in-memory data is getting out of sync with what you see on screen, or with what's saved on disk.
Common State Bugs in AI-Generated Apps
- Stale data displayed. The app shows old data even after a save. Usually means the code updates storage but forgets to re-render the display.
- Double submission. A form or button does its thing twice. Usually means an event listener got attached twice — often because AI re-ran setup code without removing the old listener first.
- Data disappears on refresh. The app saves to a variable, not to localStorage or a database. Everything in variables is lost when the page reloads.
- Wrong item modified. You edit item A but item B changes. Usually a shared reference — two parts of the app pointing to the same object — where changing one changes both.
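The last bug in the list, the shared reference, is easy to reproduce in a few lines. This is a minimal sketch with made-up data, not code from any particular app:

```javascript
// Two list entries point at the same object, so "editing item A"
// silently changes item B as well.
const shared = { title: "Untitled", done: false };
const buggyList = [shared, shared];

buggyList[0].done = true;
console.log(buggyList[1].done); // true: item B changed too

// Fix: give each entry its own copy when it is added to the list.
const fixedList = [structuredClone(shared), structuredClone(shared)];
fixedList[0].title = "Groceries";
console.log(fixedList[1].title); // "Untitled": item B is unaffected
```

If `structuredClone` isn't available in your environment, a spread copy (`{ ...shared }`) works for flat objects.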
How to Debug State Problems
Describe what you see, step by step, in exact terms:
When I click "Save", the item disappears from the list immediately but reappears when I reload the page. It seems like the save is working (the data is in localStorage when I check), but the displayed list isn't updating to reflect the save. Here is the save function and the render function:
[paste the relevant code]
Describing the exact symptom — not just "it's broken" — tells AI which part of the flow has the bug. In the example above, the symptom points clearly to the render step, not the save step.
When state is behaving strangely, ask AI to add console.log statements to the key functions — the save function, the render function, wherever data flows. Then open the browser console and watch what values are actually reaching each step. The log where the value goes wrong is where the bug lives.
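As a sketch of what that logging looks like, here is a hypothetical save/render pair. The function names are made up for illustration, and a Map stands in for the browser's localStorage:

```javascript
const storage = new Map(); // stand-in for the browser's localStorage

function saveItem(item) {
  console.log("saveItem received:", item);
  const items = JSON.parse(storage.get("items") ?? "[]");
  items.push(item);
  storage.set("items", JSON.stringify(items));
  console.log("saveItem stored:", storage.get("items"));
}

function renderList() {
  const items = JSON.parse(storage.get("items") ?? "[]");
  console.log("renderList sees:", items); // stale data here means the bug is in rendering
  return items.map((i) => "<li>" + i.title + "</li>").join("");
}

saveItem({ title: "Milk" });
console.log(renderList()); // "<li>Milk</li>"
```

If "saveItem stored" shows the right data but "renderList sees" shows old data, the bug is between those two log lines, not in the save.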
Broken Auth Flows
Authentication is where AI-generated code most often breaks in subtle ways. Login forms that accept any input. Sessions that expire instantly. Protected pages that aren't actually protected. Users who can see other users' data. These bugs are often invisible until someone tries something unexpected.
Common Auth Problems
- Login always succeeds. AI may have written a login form that looks functional but doesn't actually check the password — it just navigates to the next page regardless. Test by entering a wrong password deliberately.
- Session doesn't persist. The user logs in, but refreshing the page logs them out. The auth state is stored in a variable that resets on reload rather than in a session cookie or localStorage.
- Protected routes aren't guarded. You can reach /dashboard by typing the URL directly even without logging in. The protection only exists in the navigation, not in the actual route logic.
- Any user can see any data. The app fetches data for the logged-in user but doesn't filter by user ID — it returns everything.
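The session-persistence fix from the list above can be sketched in a few lines. The function names are illustrative; in the browser you would pass `window.localStorage` as `store`:

```javascript
// Keep auth state somewhere that survives a reload, not in a plain variable.
function setSession(store, user) {
  store.setItem("session", JSON.stringify({ user, loggedInAt: Date.now() }));
}

function getSession(store) {
  const raw = store.getItem("session");
  return raw ? JSON.parse(raw) : null; // null means "not logged in"
}
```

A variable-held session vanishes on refresh; anything written through `setItem` is still there after the page reloads.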
How to Fix Auth Issues
For each of the above, the fix is usually a targeted prompt asking AI to add the missing check explicitly:
I've noticed that if I type /dashboard directly in the browser address bar, I can reach it without being logged in. The app should redirect unauthenticated users to /login. Here is the current routing code:
[paste the routing code]
Add a route guard that checks whether the user is authenticated before allowing access to /dashboard. If not authenticated, redirect to /login. Don't change anything else.
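What AI should produce in response is something like the sketch below. The helper names and the protected-path list are hypothetical, and note that a client-side guard only controls navigation; the server must also refuse to return protected data to unauthenticated requests:

```javascript
function isAuthenticated(getToken) {
  const token = getToken();
  return typeof token === "string" && token.length > 0;
}

// Returns true if navigation may proceed; otherwise redirects to /login.
function guardRoute(path, getToken, redirect) {
  const protectedPaths = ["/dashboard", "/settings"];
  if (protectedPaths.includes(path) && !isAuthenticated(getToken)) {
    redirect("/login");
    return false;
  }
  return true;
}

// In the browser this might be wired up as:
// guardRoute(location.pathname,
//            () => localStorage.getItem("token"),
//            (to) => location.assign(to));
```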
If your app handles passwords, make sure AI is using an authentication service (Supabase Auth, Firebase Auth, Auth0) rather than storing passwords in your own database. Never store plain-text passwords. If you see AI generating code that saves a password field directly to a users table, stop and ask it to use a proper auth service instead.
Testing Your Auth
After any auth fix, test the unhappy paths explicitly — they're the ones AI is most likely to miss:
- Try logging in with a wrong password. Does it actually fail?
- Log in, then refresh the page. Are you still logged in?
- Open an incognito window and type the URL of a protected page directly. Are you redirected to login?
- Log in as user A, then try to access a URL that belongs to user B's data. Do you get an error, or do you see user B's data?
Deploy Issues
It works perfectly on your computer. You deploy it and something breaks. This is one of the most common and most disorienting problems in vibe coding, because the code you tested is the code you deployed — yet the result is different.
Why Things Work Locally But Break in Production
There are a few reliable culprits:
- Hardcoded localhost URLs. The code makes requests to http://localhost:3000/api/something, which works on your machine but doesn't exist when deployed. These need to be environment-aware URLs.
- Missing environment variables. API keys, database connection strings, or service URLs that exist on your machine but weren't set up in your hosting environment. Check your hosting provider's settings for where to add these.
- Build vs. development differences. Some tools behave differently when built for production. AI-generated code that works in development mode may rely on features that get stripped or transformed during the build step.
- Path capitalization. On Windows and Mac, MyFile.js and myfile.js are the same file. On Linux servers (where most hosting runs), they're different. If AI generated imports with wrong capitalization, they work locally and break on deploy.
- CORS errors. The browser blocks requests from your deployed domain to your API because the API doesn't allow that domain. Look for "CORS" in the browser console — it usually appears as a red error before the actual request fails.
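The first culprit, hardcoded localhost URLs, is usually fixed by reading the base URL from configuration. A minimal sketch, with an illustrative env object and variable name (build tools like Vite or Next.js each expose environment variables their own way):

```javascript
function apiUrl(path, env) {
  // Use the configured production URL when present,
  // and fall back to localhost for local development.
  const base = env.API_BASE_URL ?? "http://localhost:3001";
  return base + path;
}

console.log(apiUrl("/api/items", {}));
// "http://localhost:3001/api/items"
console.log(apiUrl("/api/items", { API_BASE_URL: "https://my-api.example.com" }));
// "https://my-api.example.com/api/items"
```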
How to Diagnose Deploy Issues
Open the browser developer tools on the deployed version (F12), click the "Network" tab, and reproduce the failing action. Look for requests that return red status codes (4xx or 5xx) or that are blocked. The request URL and error response body tell you what's actually failing.
My app works locally but after deploying to Netlify, the data isn't loading. I checked the browser console and I see this error:
"Access to fetch at 'http://localhost:3001/api/items' from origin 'https://my-app.netlify.app' has been blocked by CORS policy."
Here is the code that makes the fetch request:
[paste the code]
The API is deployed at https://my-api.railway.app. Update the fetch URL to use the correct production URL and make it configurable via an environment variable.
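On the API side, the matching fix is to allow the deployed front-end's origin. Frameworks handle this differently (Express apps often use the cors middleware), but the decision itself reduces to an allow-list check like this sketch, with hypothetical origins:

```javascript
function corsHeaders(requestOrigin, allowedOrigins) {
  if (allowedOrigins.includes(requestOrigin)) {
    return { "Access-Control-Allow-Origin": requestOrigin };
  }
  return {}; // no header: the browser will block the response
}

const allowed = ["https://my-app.netlify.app", "http://localhost:3000"];
console.log(corsHeaders("https://my-app.netlify.app", allowed));
// header present: the deployed origin is allowed
console.log(corsHeaders("https://evil.example.com", allowed));
// {} : unknown origins get no CORS header
```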
Confusing Code Structure
Sometimes the app works but the code has become so tangled that every change you try to make breaks something. AI built everything in one giant file. Functions are nested inside other functions. The same logic is duplicated in three places and you don't know which one to change. You ask AI for help and it generates a fix that references something that doesn't exist anymore.
This is a structural problem, not a bug. The code works — but it's fragile, and it's getting harder to change over time.
When to Refactor vs. When to Rebuild
If the app is small (under a few hundred lines) and only partially working, rebuilding from a clean description is almost always faster. You've learned what works, your second attempt benefits from that knowledge, and you avoid inheriting accumulated mess.
If the app is larger and mostly working, ask AI to restructure specific pieces rather than the whole thing at once:
My app.js file has grown to 800 lines and is getting hard to change without breaking things. I'd like to separate the data-handling code from the display code. Can you identify which functions deal with loading/saving data and which ones deal with rendering the UI, and move them into separate sections with clear labels? Don't change any logic — only reorganize the file. Show me the result so I can review it before applying.
The key constraint is "don't change any logic." Reorganizing and fixing at the same time is how refactoring breaks working apps.
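The kind of reorganization that prompt asks for looks like this in miniature (function names are illustrative): data functions grouped together, render functions grouped together, and no behavior changed.

```javascript
// --- data layer: loading and saving ---
function loadItems(store) {
  return JSON.parse(store.getItem("items") ?? "[]");
}

function saveItems(store, items) {
  store.setItem("items", JSON.stringify(items));
}

// --- view layer: rendering only, no storage access ---
function renderItems(items) {
  return items.map((i) => "<li>" + i + "</li>").join("");
}
```

Once the layers are separated, a display bug can't hide in the save code and vice versa, which makes the next fix much easier to target.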
When the Conversation Gets Too Long
After 30–50 messages in a single chat, AI starts forgetting the earlier context. It may contradict previous decisions, undo things you already fixed, or generate code that doesn't match the current state of your app. The signs are: fixes that seem off, code that references things you removed, or suggestions that don't match what you described.
The fix is to start a fresh conversation. Paste the current version of your code and a clear description of where you are and what you're trying to do. Fresh context produces better results than extending a long, confused conversation.
When to Start Over
There's a point in every broken app where starting over is faster than continuing to fix. Recognizing that point early saves hours of frustration.
Consider starting over when:
- You've tried to fix the same issue three or more times and it keeps returning
- Fixing one thing consistently breaks something else
- You can't describe what the app is supposed to do anymore without contradicting yourself
- The code has grown so tangled that AI can't make sense of it either — its fixes reference things that don't exist or repeat code you already have
- You're not sure anymore which version of the code is actually running
Starting over doesn't mean losing your work. It means taking what you learned from the broken version and writing a cleaner description. The second attempt almost always comes together faster, because you know what the app actually needs now — not just what you thought it needed at the start.
Before You Start Over
Write one paragraph describing what the app does, what currently works, and what the one most important remaining problem is. Then start a new conversation with that description plus the code for the parts that work. You're not rebuilding from zero — you're rebuilding from a better foundation.
Quick Diagnosis Reference
- Something that worked is now broken — Find what changed. Paste the console error to AI with context about when it started.
- Data behaves unexpectedly — Add console.log to trace where the wrong value appears. Describe the exact symptom step by step.
- Auth problems — Test the unhappy paths. Use an auth service, not custom password storage. Check every route for direct-URL access.
- Works locally, broken in production — Check for localhost URLs, missing environment variables, and CORS errors in the Network tab.
- Code too tangled to change safely — Reorganize in one step with no logic changes. Start a fresh conversation when context gets too long.
- Same issue keeps coming back — It's probably time to start over with a cleaner description.
Related Guides
7 Vibe Coding Mistakes That Waste Your Time
The habits that cause most vibe coding problems in the first place — and how to avoid them from the start.
Growing Your Vibe Coded App
How to add features safely to a live app, add a real database, and handle it when things break in production.
When Vibe Coding Isn't Enough
How to recognize when your project has outgrown AI-only building — and what to do about it.