Chapter 11

Security and Risks with AI-Generated Code

AI-generated code can contain security vulnerabilities just as easily as it can contain working logic. Understanding these risks is not optional — it's essential for every developer using AI as a coding tool.


The Security Problem

Here's the uncomfortable truth: AI models are trained on vast amounts of code from the internet — and a significant portion of that code contains security vulnerabilities. AI doesn't distinguish between secure and insecure patterns. It generates the most likely code, which is often the most common code — and the most common code frequently has security flaws.

Core Principle

AI generates plausible code, not provably secure code. Every piece of AI-generated code that handles user input, authentication, data storage, or network communication must be reviewed for security — by you, by AI in reviewer mode, or ideally both.


Common Vulnerabilities in AI Code

These are the security issues that appear most frequently in AI-generated code. Learn to recognize them on sight.

SQL Injection (Critical)

AI often generates SQL queries using string concatenation instead of parameterized queries, allowing attackers to inject malicious SQL.

XSS / Cross-Site Scripting (Critical)

Rendering user input as raw HTML without sanitization. AI may use dangerouslySetInnerHTML or equivalent without warning.

Hardcoded Secrets (High)

AI frequently places API keys, passwords, and tokens directly in source code instead of using environment variables.

Missing Input Validation (High)

AI generates code that trusts user input — no length checks, no type validation, no sanitization.

Broken Authentication (High)

Weak token generation, missing expiration, no rate limiting on login endpoints, plain-text password storage.

Insecure Dependencies (Medium)

AI may suggest outdated or vulnerable packages without checking for known CVEs.
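Several of these issues share a single root cause: code that trusts raw input. As a minimal sketch (the function and field names here are hypothetical, not from any framework), this is the kind of validation AI-generated handlers frequently omit:

```javascript
// Minimal validation sketch: the type, length, and format checks
// that AI-generated handlers often skip. Names are illustrative.
function validateMemberName(input) {
  if (typeof input !== "string") {
    return { ok: false, error: "member must be a string" }; // type check
  }
  const trimmed = input.trim();
  if (trimmed.length === 0 || trimmed.length > 64) {
    return { ok: false, error: "member must be 1-64 characters" }; // length limit
  }
  if (!/^[\p{L}\p{N} .'-]+$/u.test(trimmed)) {
    return { ok: false, error: "member contains invalid characters" }; // format check
  }
  return { ok: true, value: trimmed };
}

console.log(validateMemberName("Ada Lovelace").ok);                 // true
console.log(validateMemberName("'; DROP TABLE activities; --").ok); // false
```

Validation like this is defense in depth, not a substitute for the parameterized queries and output escaping covered below.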


SQL Injection: A Real Example

This is the single most dangerous vulnerability AI generates — and it looks perfectly reasonable to an untrained eye.

🚨 Vulnerable Code — SQL Injection

AI often generates database queries like this:

// DANGEROUS — AI-generated, vulnerable to SQL injection
app.get('/api/activities', (req, res) => {
  const member = req.query.member;
  const query = `SELECT * FROM activities WHERE member = '${member}'`;
  db.query(query, (err, results) => {
    res.json(results);
  });
});

This looks clean and functional. But an attacker can pass '; DROP TABLE activities; -- as the member query parameter, and the database will execute the injected SQL — deleting the entire table.

Secure Version — Parameterized Query

// SAFE — parameterized query prevents injection
app.get('/api/activities', (req, res) => {
  const member = req.query.member;
  const query = 'SELECT * FROM activities WHERE member = ?';
  db.query(query, [member], (err, results) => {
    if (err) return res.status(500).json({ error: 'Database error' }); // generic message, no internals
    res.json(results);
  });
});

The fix is simple — use parameterized queries (also called prepared statements). The ? placeholder ensures user input is always treated as data, never as SQL commands. But AI doesn't always choose this pattern by default.
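XSS, the other Critical item above, has the same shape of fix: treat user input as data, not markup. Here is a minimal sketch in plain JavaScript (the helper name is illustrative; template engines and frameworks typically escape for you, and the danger is AI-generated code that bypasses them):

```javascript
// SAFE — escape user input before interpolating it into HTML.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, "&amp;")   // must run first, or the other escapes get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const userComment = '<script>alert("stolen cookies")</script>';
const html = `<p>${escapeHtml(userComment)}</p>`;
console.log(html); // the script tag is now inert text, not executable markup
```

The principle is identical to the ? placeholder in SQL: the boundary between data and code is enforced by the mechanism, not by trusting the input.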


AI Hallucinations

Beyond security vulnerabilities, AI has a broader reliability problem: hallucinations. AI can generate code that references packages that don't exist, uses API methods that were never implemented, or follows patterns from outdated documentation.

Common Hallucination Patterns

  • Phantom packages: imports or install instructions for packages that don't exist on npm
  • Invented API methods: calls to functions a real library never implemented
  • Outdated patterns: code that follows superseded documentation or deprecated API versions

Pro Tip: Verify Before You Trust

When AI suggests a package or API method you're unfamiliar with, verify it exists before using it. Check npm, check the official documentation, check the GitHub repository. AI's confident tone is not evidence of correctness. This takes 30 seconds and saves hours of debugging phantom dependencies.


The Security Review Prompt

Just as you use AI to generate code, you should use AI to review code for security. This is one of the highest-value uses of AI — it's faster than manual security review and catches common patterns reliably.

Review this code for security vulnerabilities.

[paste your code]

Check specifically for:
- SQL injection (string interpolation in queries)
- XSS (unescaped user input in HTML)
- Hardcoded secrets (API keys, passwords, tokens)
- Missing input validation and sanitization
- Authentication/authorization bypasses
- Insecure data storage (plain-text passwords)
- Missing CORS configuration
- Missing rate limiting on sensitive endpoints
- Insecure HTTP headers

For each issue found:
- Describe the vulnerability
- Explain how it could be exploited
- Show the fix

Run this prompt on every backend endpoint, every form handler, and every piece of code that touches user data or authentication. The cost is seconds; the protection is enormous.


Data Privacy Risks

When you paste code into AI for review or generation, you're sharing that code with a third-party service. This creates data privacy considerations that every developer needs to understand.

What NOT to Share with AI

  • Real API keys, passwords, or tokens of any kind
  • Real user or customer data — names, emails, anything personally identifiable
  • Code you are not authorized to disclose

Develop the habit of scanning your code for sensitive data before pasting it into AI. Replace real values with placeholders, and add a comment like // Replace with env variable so you remember to swap them back.
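As a before-and-after sketch of that habit (the variable and environment-variable names are made up):

```javascript
// BEFORE scrubbing — never paste this into an AI tool:
// const stripeKey = "sk_live_...";       // real secret exposed to a third party

// AFTER scrubbing — safe to share, and better code anyway:
const stripeKey = process.env.STRIPE_KEY; // Replace with env variable
console.log(typeof stripeKey); // "string" if the variable is set, "undefined" otherwise
```

The scrubbed version is what should have been committed in the first place, so this habit often surfaces hardcoded secrets you'd want to fix regardless.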


The Security Checklist

Use this checklist on every project that handles user data, authentication, or external APIs. Ask AI to verify each item — and verify AI's answers yourself for critical systems.

Input & Data

  • All user inputs validated — Type checked, length limited, format verified
  • SQL queries parameterized — No string interpolation in database queries
  • HTML output escaped — User content rendered safely, no raw HTML injection

Authentication

  • Passwords hashed — Using bcrypt or argon2, never stored in plain text
  • Tokens expire — JWT or session tokens have reasonable expiration times
  • Rate limiting on login — Prevents brute-force password attacks

Configuration

  • No hardcoded secrets — All keys, passwords, tokens in environment variables
  • CORS configured — Only allowed origins can access your API
  • HTTPS enforced — All traffic encrypted in transit
  • Dependencies audited — Run npm audit regularly, no known CVEs

Error Handling

  • No stack traces exposed — Production errors show generic messages, not internals
  • Error logging configured — Errors logged server-side for debugging, not sent to client

Building Secure Code with AI

The best strategy isn't just reviewing for vulnerabilities after the fact — it's prompting AI to write secure code from the start. Include security requirements in your initial prompts.

Build an Express.js login endpoint.

Security requirements:
- Hash passwords with bcrypt (cost factor 12)
- Return JWT with 1-hour expiration
- Rate limit: max 5 attempts per IP per 15 minutes
- Validate email format and password length (min 8 chars)
- Return generic error messages (don't reveal if email exists)
- Log failed attempts server-side
- Set secure HTTP headers (helmet)

Do NOT:
- Store passwords in plain text
- Include secrets in the code (use process.env)
- Return stack traces on error

By specifying security requirements upfront — including explicit "DO NOT" constraints — you dramatically reduce the chance of AI generating vulnerable code. The negative constraints are especially important because they prevent AI's most common insecure defaults.
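The rate-limit requirement in that prompt can be sketched without any packages as a minimal fixed-window limiter. Production code would use a maintained middleware such as express-rate-limit, plus a shared store if you run multiple server instances:

```javascript
// Minimal fixed-window rate limiter: at most `limit` attempts per
// `windowMs` per key (e.g. client IP). In-memory, single-process only.
function createRateLimiter({ limit = 5, windowMs = 15 * 60 * 1000 } = {}) {
  const attempts = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = attempts.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      attempts.set(key, { count: 1, windowStart: now }); // start a new window
      return true;
    }
    entry.count += 1;
    return entry.count <= limit; // reject once the window's budget is spent
  };
}

const allowLogin = createRateLimiter({ limit: 5, windowMs: 15 * 60 * 1000 });
for (let i = 1; i <= 6; i++) {
  console.log(`attempt ${i}:`, allowLogin("203.0.113.7")); // the 6th attempt is rejected
}
```

In an Express handler the check would run before any password verification, returning 429 when allow() is false, so brute-force attempts never reach the expensive hashing step.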


Over-Reliance: The Biggest Risk

The security threats above are technical. But the most dangerous risk of AI-generated code is human, not technical: over-reliance.

When code appears instantly and looks correct, there's a strong psychological tendency to trust it without verification. This is especially dangerous because AI-generated code looks professional — it's well-formatted, uses proper naming conventions, and includes comments. All of this creates a false sense of security.

Over-Reliance Pattern

  • AI generates code → immediately ship it
  • "It looks right" = "it is right"
  • No testing, no review
  • Don't understand how the code works
  • Can't debug it when it breaks

Healthy Pattern

  • AI generates code → read it → test it → review it
  • Understand every line before committing
  • Run AI security review on all critical code
  • Can explain what the code does and why
  • Can debug and modify independently

The Responsibility Rule

AI generates the code, but you own it. When AI-generated code breaks in production, you can't blame the AI. You reviewed it (or should have). You tested it (or should have). You deployed it. Treating AI output as your own code — with full responsibility for its quality and security — is the only sustainable approach.


🧪 Practical Exercise

Take a piece of AI-generated backend code — either from this tutorial or from your own projects. Run the full security audit:

1. Run the security review prompt from this chapter against the code.
2. Work through the security checklist item by item.
3. Fix every issue you find, then re-run the review prompt to confirm the fixes.
4. Pick one fix and explain, in your own words, how the vulnerability could have been exploited.


Key Takeaways

  • AI generates plausible code, not provably secure code. Review everything that touches user input, authentication, or data.
  • Learn to recognize the common vulnerabilities on sight: SQL injection, XSS, hardcoded secrets, missing validation, broken authentication, insecure dependencies.
  • Use AI as a security reviewer with a specific, itemized prompt, and state security requirements (including explicit "do not" constraints) upfront when generating code.
  • Never paste secrets or sensitive data into AI tools; scrub first.
  • Over-reliance is the biggest risk. AI generates the code, but you own it.
