The Security Problem
Here's the uncomfortable truth: AI models are trained on vast amounts of code from the internet — and a significant portion of that code contains security vulnerabilities. AI doesn't distinguish between secure and insecure patterns. It generates the most likely code, which is often the most common code — and the most common code frequently has security flaws.
AI generates plausible code, not provably secure code. Every piece of AI-generated code that handles user input, authentication, data storage, or network communication must be reviewed for security — by you, by AI in reviewer mode, or ideally both.
Common Vulnerabilities in AI Code
These are the security issues that appear most frequently in AI-generated code. Learn to recognize them on sight.
SQL Injection
AI often generates SQL queries using string concatenation instead of parameterized queries, allowing attackers to inject malicious SQL.
XSS (Cross-Site Scripting)
Rendering user input as raw HTML without sanitization. AI may use dangerouslySetInnerHTML or equivalent without warning.
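A minimal sketch of the defense: escape user input before rendering it as HTML, so markup in the input is displayed as text rather than executed. For real applications, a vetted sanitization library such as DOMPurify is the safer choice.

```javascript
// Escape the five HTML-significant characters so user input
// can never introduce tags or attributes. A minimal sketch only.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// A classic XSS payload becomes inert text after escaping
const comment = '<img src=x onerror=alert(1)>';
const safe = escapeHtml(comment);
```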
Hardcoded Secrets
AI frequently places API keys, passwords, and tokens directly in source code instead of using environment variables.
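The fix is mechanical: read secrets from the environment and fail fast when they are missing. A small sketch, where the helper name getRequiredEnv is made up for illustration:

```javascript
// DANGEROUS — a secret baked into source code:
// const apiKey = 'sk-live-abc123';

// SAFE — read from the environment; throw at startup if unset,
// so a misconfigured deployment fails loudly instead of silently.
function getRequiredEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} environment variable is not set`);
  }
  return value;
}

// Usage: const apiKey = getRequiredEnv('API_KEY');
```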
Missing Input Validation
AI generates code that trusts user input — no length checks, no type validation, no sanitization.
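A sketch of what minimal validation looks like for something like the member query parameter used later in this chapter. The specific rules here, a 50-character cap and an allow-list of characters, are illustrative, not a standard:

```javascript
// Validate a member name: right type, reasonable length,
// and only characters from an explicit allow-list.
// Returns the cleaned value, or null if the input is rejected.
function validateMemberName(input) {
  if (typeof input !== 'string') return null;
  const trimmed = input.trim();
  if (trimmed.length === 0 || trimmed.length > 50) return null;
  if (!/^[a-zA-Z0-9 _-]+$/.test(trimmed)) return null;
  return trimmed;
}
```

Note that an SQL injection payload fails the character allow-list immediately; validation and parameterized queries together give defense in depth.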
Broken Authentication
Weak token generation, missing expiration, no rate limiting on login endpoints, plain-text password storage.
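One item above, rate limiting on login endpoints, can be sketched with a minimal in-memory limiter. The function name and constants are illustrative, and a production system should back this with a shared store such as Redis rather than process memory:

```javascript
// Allow at most 5 attempts per key (e.g. client IP) per 15-minute window.
const WINDOW_MS = 15 * 60 * 1000;
const MAX_ATTEMPTS = 5;
const attempts = new Map(); // key -> array of attempt timestamps

function isRateLimited(key, now = Date.now()) {
  // Keep only attempts inside the current window, then record this one.
  const recent = (attempts.get(key) || []).filter((t) => now - t < WINDOW_MS);
  recent.push(now);
  attempts.set(key, recent);
  return recent.length > MAX_ATTEMPTS;
}
```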
Insecure Dependencies
AI may suggest outdated or vulnerable packages without checking for known CVEs.
SQL Injection: A Real Example
This is the single most dangerous vulnerability AI generates — and it looks perfectly reasonable to an untrained eye.
AI often generates database queries like this:
// DANGEROUS — AI-generated, vulnerable to SQL injection
app.get('/api/activities', (req, res) => {
  const member = req.query.member;
  const query = `SELECT * FROM activities WHERE member = '${member}'`;
  db.query(query, (err, results) => {
    res.json(results);
  });
});
This looks clean and functional. But an attacker can pass '; DROP TABLE activities; -- as the member query parameter, and the database will execute the injected SQL — deleting the entire table.
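To see exactly what the database receives, substitute the attacker's payload into the template string. A quick sketch you can run in Node:

```javascript
// The attacker's value for the member parameter
const member = "'; DROP TABLE activities; --";

// The same string interpolation the vulnerable endpoint performs
const query = `SELECT * FROM activities WHERE member = '${member}'`;

console.log(query);
// SELECT * FROM activities WHERE member = ''; DROP TABLE activities; --'
```

The leading quote closes the string early, the semicolon starts a new statement, and the trailing -- comments out the leftover quote.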
// SAFE — parameterized query prevents injection
app.get('/api/activities', (req, res) => {
  const member = req.query.member;
  const query = 'SELECT * FROM activities WHERE member = ?';
  db.query(query, [member], (err, results) => {
    res.json(results);
  });
});
The fix is simple — use parameterized queries (also called prepared statements). The ? placeholder ensures user input is always treated as data, never as SQL commands. But AI doesn't always choose this pattern by default.
AI Hallucinations
Beyond security vulnerabilities, AI has a broader reliability problem: hallucinations. AI can generate code that references packages that don't exist, uses API methods that were never implemented, or follows patterns from outdated documentation.
- Phantom packages — AI suggests npm install family-scheduler-utils for a package that doesn't exist. You run the install, it fails, and you've wasted time.
- Invented API methods — AI uses array.filterByKey() or react.useAsync() — methods that look plausible but don't exist in any library.
- Outdated patterns — AI generates class components in React, uses deprecated lifecycle methods, or references old API endpoints.
- Wrong function signatures — AI calls a real function but with the wrong arguments or in the wrong order.
- Confident misinformation — AI explains its hallucinated code with complete confidence, making it harder to spot.
Pro Tip: Verify Before You Trust
When AI suggests a package or API method you're unfamiliar with, verify it exists before using it. Check npm, check the official documentation, check the GitHub repository. AI's confident tone is not evidence of correctness. This takes 30 seconds and saves hours of debugging phantom dependencies.
The Security Review Prompt
Just as you use AI to generate code, you should use AI to review code for security. This is one of the highest-value uses of AI — it's faster than manual security review and catches common patterns reliably.
Review this code for security vulnerabilities.
[paste your code]
Check specifically for:
- SQL injection (string interpolation in queries)
- XSS (unescaped user input in HTML)
- Hardcoded secrets (API keys, passwords, tokens)
- Missing input validation and sanitization
- Authentication/authorization bypasses
- Insecure data storage (plain-text passwords)
- Missing CORS configuration
- Missing rate limiting on sensitive endpoints
- Insecure HTTP headers
For each issue found:
- Describe the vulnerability
- Explain how it could be exploited
- Show the fix
Run this prompt on every backend endpoint, every form handler, and every piece of code that touches user data or authentication. The cost is seconds; the protection is enormous.
Data Privacy Risks
When you paste code into AI for review or generation, you're sharing that code with a third-party service. This creates data privacy considerations that every developer needs to understand. In particular, never share:
- Real API keys, passwords, or tokens — Replace with placeholders like YOUR_API_KEY_HERE before pasting
- Customer personal data — Don't paste real user data into prompts. Use anonymized or synthetic data.
- Proprietary business logic — If your company has strict IP policies, check whether AI tool usage is permitted
- Database connection strings — These contain credentials. Redact before sharing.
- Internal infrastructure details — Server addresses, network topology, internal URLs
Develop the habit of scanning your code for sensitive data before pasting it into AI. Replace real values with placeholders, and add a comment like // Replace with env variable so you remember to swap them back.
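That scanning habit can even be partially automated before you paste. A rough sketch of a pre-paste check, where the patterns are illustrative and deliberately not exhaustive:

```javascript
// Regexes for common secret shapes: key/password assignments,
// connection strings with credentials, and PEM private keys.
const SECRET_PATTERNS = [
  /api[_-]?key\s*[:=]\s*['"][^'"]+['"]/i,
  /password\s*[:=]\s*['"][^'"]+['"]/i,
  /postgres:\/\/[^\s'"]+/i,
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,
];

// Return the patterns that matched, so the caller can warn
// (or refuse to paste) when the result is non-empty.
function findLikelySecrets(code) {
  return SECRET_PATTERNS.filter((p) => p.test(code));
}
```

An empty result doesn't prove the snippet is clean, so treat this as a safety net on top of a manual scan, not a replacement for it.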
The Security Checklist
Use this checklist on every project that handles user data, authentication, or external APIs. Ask AI to verify each item — and verify AI's answers yourself for critical systems.
- Run npm audit regularly; no known CVEs in dependencies
Building Secure Code with AI
The best strategy isn't just reviewing for vulnerabilities after the fact — it's prompting AI to write secure code from the start. Include security requirements in your initial prompts.
Build an Express.js login endpoint.
Security requirements:
- Hash passwords with bcrypt (cost factor 12)
- Return JWT with 1-hour expiration
- Rate limit: max 5 attempts per IP per 15 minutes
- Validate email format and password length (min 8 chars)
- Return generic error messages (don't reveal if email exists)
- Log failed attempts server-side
- Set secure HTTP headers (helmet)
Do NOT:
- Store passwords in plain text
- Include secrets in the code (use process.env)
- Return stack traces on error
By specifying security requirements upfront — including explicit "DO NOT" constraints — you dramatically reduce the chance of AI generating vulnerable code. The negative constraints are especially important because they prevent AI's most common insecure defaults.
Over-Reliance: The Biggest Risk
The security threats above are technical. But the most dangerous risk of AI-generated code is human, not technical: over-reliance.
When code appears instantly and looks correct, there's a strong psychological tendency to trust it without verification. This is especially dangerous because AI-generated code looks professional — it's well-formatted, uses proper naming conventions, and includes comments. All of this creates a false sense of security.
Over-Reliance Pattern
- AI generates code → immediately ship it
- "It looks right" = "it is right"
- No testing, no review
- Don't understand how the code works
- Can't debug it when it breaks
Healthy Pattern
- AI generates code → read it → test it → review it
- Understand every line before committing
- Run AI security review on all critical code
- Can explain what the code does and why
- Can debug and modify independently
AI generates the code, but you own it. When AI-generated code breaks in production, you can't blame the AI. You reviewed it (or should have). You tested it (or should have). You deployed it. Treating AI output as your own code — with full responsibility for its quality and security — is the only sustainable approach.
Take a piece of AI-generated backend code — either from this tutorial or from your own projects. Run the full security audit:
- Step 1: Read the code yourself. Can you spot any security issues?
- Step 2: Ask AI to review the code using the security review prompt template.
- Step 3: Compare your findings with AI's findings. What did each catch that the other missed?
- Step 4: Fix all identified issues. Ask AI to verify the fixes are correct.
- Step 5: Run the security checklist on the fixed code. Are all items satisfied?
Key Takeaways
- AI generates plausible code, not provably secure code — security review is mandatory, not optional
- The most common AI vulnerabilities: SQL injection, XSS, hardcoded secrets, missing input validation
- Always use parameterized queries — never string-interpolate user input into SQL
- AI hallucinations create reliability risks: phantom packages, invented APIs, outdated patterns
- Never share real secrets, credentials, or customer data with AI tools
- Include security requirements in your initial prompts — build secure from the start, don't patch after
- Use the security checklist on every project that handles user data or authentication
- The biggest risk is over-reliance — you own the code AI generates, including its security flaws