Part 1: VS Code + GitHub Copilot
VS Code with GitHub Copilot is the most common AI-assisted editor setup. It works in your existing editor, requires minimal configuration, and the tab completion becomes second nature within a few days.
Setup That Matters
Install the GitHub Copilot extension from the VS Code marketplace and sign in. That gets you working. But the default configuration isn't optimal — here are the settings worth changing:
{
  // Show completions immediately, don't wait for you to pause
  "editor.inlineSuggest.enabled": true,

  // Don't auto-accept suggestions on Enter — only on Tab.
  // This prevents accidental acceptance when you're just pressing
  // Enter for a new line.
  "editor.acceptSuggestionOnEnter": "off",

  // Enable or disable Copilot per language. Keys are VS Code language
  // IDs (not file globs), and they all belong in one object: repeating
  // the "github.copilot.enable" key would be invalid JSON.
  "github.copilot.enable": {
    "*": true,
    "markdown": true,
    "yaml": true,
    // Turn off where AI suggestions are more annoying than helpful
    "plaintext": false,
    "json": false,
    "dotenv": false // .env files (language ID from the DotENV extension)
  }
}
The most important setting is acceptSuggestionOnEnter: "off". Without this, you'll accidentally accept Copilot suggestions when pressing Enter to go to a new line. Tab to accept, Escape to dismiss — that's the muscle memory you want.
Keyboard Shortcuts to Memorize
- Tab — Accept the current suggestion
- Esc — Dismiss the current suggestion
- Alt + ] — Cycle to next suggestion (there are often multiple)
- Alt + [ — Cycle to previous suggestion
- Ctrl + Enter — Open the Copilot completions panel (see up to 10 alternatives)
- Ctrl + I — Open Copilot Inline Chat (ask a question about selected code)
- Ctrl + Shift + I — Open the Copilot Chat panel
The one most people miss: Alt + ] for cycling suggestions. Copilot's first suggestion isn't always the best one. The second or third option is often closer to what you need.
Copilot Chat
The sidebar chat panel (Ctrl + Shift + I) turns VS Code into a hybrid editor-and-chat interface. You can ask questions about your code, generate code, and get explanations — all without leaving the editor.
The most useful chat commands:
- /explain — Select code, then type /explain in chat. Gets a plain-English explanation of what the selected code does.
- /tests — Generate tests for the selected code.
- /fix — Analyze selected code for bugs and suggest fixes.
- /doc — Generate documentation for the selected function or class.
- @workspace — Prefix your question with @workspace to let Copilot search your entire project for context. "Where is the user authentication handled?" works much better with @workspace.
Pro Tip: Copilot Chat With Selection
Select code before opening Copilot Chat, and the selected code is automatically included as context. This is much faster than copy-pasting into the chat. Select a function → open chat → "What happens if userId is null?" → get an answer that references your specific code.
Copilot Edits
Copilot Edits (currently in preview) lets you describe a change in natural language, and Copilot applies it across multiple files. It's the VS Code equivalent of Cursor's Composer.
Open it with Ctrl + Shift + I and switch to the "Edits" tab, or use the command palette. Add the files you want it to work on, describe the change, and Copilot shows you a diff before applying anything.
Where it works well: adding a new field across a type definition, API route, and frontend form. Adding error handling to multiple files. Renaming a concept consistently across the project.
Where it struggles: large architectural changes, refactors that change the control flow, or anything that requires understanding the full dependency graph. For those, use a chat interface for the planning step first.
Part 2: Cursor
Cursor is a fork of VS Code rebuilt around AI. The interface is familiar — it's essentially VS Code with different AI features built in. If you're considering switching from VS Code, the transition is seamless because extensions, themes, keybindings, and settings all transfer.
What's Different
Cursor's core features that VS Code + Copilot doesn't have:
- Tab prediction — Goes beyond simple autocomplete. Cursor predicts your next edit based on what you just changed. Delete a parameter? It suggests updating the function call that uses it. Rename a variable? It suggests the rename in the next occurrence. This multi-step prediction is Cursor's most distinctive feature.
- Cmd+K inline editing — Select code, press Cmd+K, describe what you want, and the edit happens in place. No sidebar, no chat panel — just a small prompt bar above the selected code.
- Composer — Multi-file editing in a dedicated panel. Describe a feature, and Composer creates or modifies files across your project with a full diff review before applying.
- Codebase indexing — Cursor indexes your entire project so AI responses reference your actual code, types, and patterns — not generic examples.
- Model selection — Choose between Claude, GPT-4o, and other models per-request. Use Claude for complex reasoning, faster models for simple completions.
Cmd+K: The Core Interaction
Cmd+K (or Ctrl+K on Windows/Linux) is Cursor's most important keyboard shortcut. Select code, press Cmd+K, and describe the change you want:
- Select a function → Cmd+K → "Add error handling for null inputs"
- Select an interface → Cmd+K → "Add an optional `metadata` field of type `Record<string, string>`"
- Select a test → Cmd+K → "Add edge case tests for empty array and negative index"
- Select nothing → Cmd+K → "Create a function that validates email format" (generates at cursor position)
The result appears as an inline diff. You see exactly what changed, and you accept or reject with a single keypress. The entire interaction takes under 10 seconds for most edits.
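As a sketch of the scale involved, here is a hypothetical before/after for the first prompt above. The helper function is invented for illustration; in the editor the function would keep its name, but both versions are shown here so the diff is visible:

```typescript
// Before: a helper with no guard against null or empty input.
function getInitials(name: string): string {
  return name.split(" ").map(part => part[0]).join("").toUpperCase();
}

// After Cmd+K → "Add error handling for null inputs", the inline
// diff might produce something like this:
function getInitialsSafe(name: string | null | undefined): string {
  if (!name || name.trim() === "") {
    return "";
  }
  return name.trim().split(/\s+/).map(part => part[0]).join("").toUpperCase();
}
```

You review the handful of changed lines, press accept, and move on.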
Composer for Multi-File Work
Open Composer with Cmd+Shift+I. This is where you describe larger changes that span multiple files:
Add a "priority" field to tasks. It should be an enum: low, medium, high, urgent.
Update: the Task type in types.ts, the create and update endpoints in routes/tasks.ts, the task repository, the CreateTaskModal component, and the TaskCard component to show a priority badge.
Composer generates changes across all the files you listed, shows them as diffs, and lets you accept or reject each file individually. For feature work that touches 3-5 files, this is significantly faster than editing each file manually.
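The diff Composer proposes for types.ts might look roughly like the following sketch. Only the priority field comes from the prompt; the other Task fields are assumed:

```typescript
// Sketch of the change Composer might propose in types.ts.
// Only the priority field comes from the prompt; id and title are assumed.
export type TaskPriority = "low" | "medium" | "high" | "urgent";

export interface Task {
  id: string;
  title: string;
  priority: TaskPriority; // new field from this Composer run
}
```

The diffs for the routes, repository, and components follow the same pattern: each file gets the smallest change needed to thread the new field through.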
Composer's convenience makes it tempting to accept all changes at once. Don't. Review each file's diff individually. Composer occasionally makes incorrect assumptions about file structure or existing patterns, and a quick scan of each diff catches these before they become bugs.
Model Selection Strategy
Cursor lets you choose which AI model to use. The practical strategy:
- Tab completion — Use the fastest available model. Speed matters more than depth for single-line suggestions.
- Cmd+K edits — Claude or GPT-4o. These edits need to understand context and get the details right.
- Composer — Claude for multi-file changes. The stronger reasoning helps with consistency across files.
- Chat questions — Match to complexity. Quick syntax questions → fast model. Architecture discussions → Claude.
Part 3: Editor Habits That Matter
The tools are only as good as your habits with them. These are the editing patterns that separate productive AI-assisted development from the frustrating kind.
The Tab-Then-Edit Rhythm
The most productive AI editing rhythm is: accept the suggestion, then immediately refine it. Don't wait for the perfect suggestion — accept something close, then manually adjust.
AI gives you 80% of what you need in 2 seconds. Manually writing from scratch gives you 100% of what you need in 30 seconds. Accepting the 80% and spending 5 seconds fixing it gives you 100% in 7 seconds.
This feels wrong at first. Your instinct is to reject an imperfect suggestion and type it yourself. Fight that instinct. The accept-then-edit rhythm is faster for almost everything longer than a single line.
Leading the AI With Comments
AI predicts what you'll type next based on context. You can steer its predictions by typing a comment first:
// Validate email format and check for duplicates in the database
function validateEmail(
After typing that comment and the function signature start, the AI suggestion will include both email format validation and a database check. Without the comment, it would likely suggest only format validation — the more common pattern.
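A plausible completion is sketched below, with an in-memory set standing in for the database call (in a real project the duplicate check would be an async query, and the return shape is illustrative):

```typescript
// Stand-in for the database: a set of already-registered addresses.
const existingEmails = new Set<string>(["taken@example.com"]);

// Validate email format and check for duplicates in the database
function validateEmail(email: string): { valid: boolean; reason?: string } {
  const format = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  if (!format.test(email)) {
    return { valid: false, reason: "invalid format" };
  }
  if (existingEmails.has(email.toLowerCase())) {
    return { valid: false, reason: "duplicate" };
  }
  return { valid: true };
}
```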
This works because AI treats the comment as a specification for the next block of code. More specific comments produce more specific suggestions:
// Calculate shipping cost:
// - Free for orders over $50
// - $5.99 flat rate for US domestic
// - $15.99 for international
// - Add $3 for express shipping
function calculateShipping(order: Order): number {
The suggestion after this comment will implement all four rules. Without it, AI would generate a generic shipping calculation.
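Sketched out, the completion that follows this comment block might look like the code below. The Order fields are assumptions (the original only fixes the signature), as is the choice that the express surcharge still applies to otherwise-free orders:

```typescript
// Hypothetical Order shape: only the fields the shipping rules need.
interface Order {
  total: number;          // order subtotal in USD
  international: boolean; // destination outside the US
  express: boolean;       // express shipping requested
}

function calculateShipping(order: Order): number {
  // Free for orders over $50
  let cost = 0;
  if (order.total <= 50) {
    // Otherwise, flat rate depends on destination
    cost = order.international ? 15.99 : 5.99;
  }
  if (order.express) {
    // Assumption: the $3 express surcharge applies even to free orders
    cost += 3;
  }
  return cost;
}
```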
When to Reject
Knowing when to dismiss a suggestion is as important as knowing when to accept:
- Reject if you don't understand it. If the suggestion uses a pattern or API you're not familiar with, reject it and write it yourself (or ask the chat to explain it first). Accepting code you don't understand is how bugs hide.
- Reject if it adds dependencies you don't want. AI loves importing libraries. If the suggestion adds a new import, check whether you actually need that dependency.
- Reject if it's solving a different problem. AI sometimes predicts where you're going and gets it wrong. If the suggestion is heading in the wrong direction, reject it and type a few more characters to steer it back.
- Accept if it's close but not perfect. If the structure is right but a variable name is wrong, accept and rename. Faster than retyping everything.
The Context Window Trick
Both Copilot and Cursor use your open files as context for suggestions. This means you can improve suggestion quality by opening relevant files:
- Working on a route handler? Open the related types file and the test file in other tabs.
- Implementing a new feature similar to an existing one? Open the existing implementation.
- Writing a migration? Open the schema file.
The AI reads your open tabs. More relevant open tabs = better suggestions. Close unrelated files to reduce noise.
For any editing task, open the three most relevant files alongside the file you're editing: the type definitions, an existing example of the pattern you're implementing, and the test file. This gives AI everything it needs to generate suggestions that match your project's patterns.
Part 4: Project Configuration
Out of the box, AI treats your project like any other. Project-level configuration files tell the AI about your conventions, patterns, and constraints — which dramatically improves suggestion quality.
Cursor: .cursorrules
Create a .cursorrules file in your project root. Cursor reads it automatically and applies the rules to all AI interactions:
# Project: TaskFlow
# Stack: React 18 + TypeScript + Express + SQLite
## Code Style
- Use functional components with hooks, never class components
- Use TypeScript strict mode — no `any` types
- Prefer named exports over default exports
- Use `const` for all variables unless mutation is required
## Patterns
- API errors return: { error: string, code: string }
- All async operations use Result<T, AppError> pattern
- Database access only through repository classes
- Validation at API boundary using Zod schemas
## Naming
- Files: kebab-case (user-repo.ts, task-routes.ts)
- Types/interfaces: PascalCase (TaskStatus, CreateUserInput)
- Functions: camelCase (createTask, validateEmail)
- Database columns: snake_case (created_at, user_id)
- API routes: kebab-case (/api/task-lists, /api/user-profiles)
## Testing
- Framework: Vitest
- Test files: same name as source with .test.ts suffix
- Each test file: at least one happy path + one error case
- Use factories for test data, not raw objects
## Do Not
- Don't use any CSS-in-JS libraries — use Tailwind classes
- Don't add new npm dependencies without noting them
- Don't use console.log for debugging — use the logger utility
- Don't generate commented-out code
Every AI suggestion in this project now follows these rules. The "Do Not" section is especially powerful — it prevents the patterns AI defaults to that don't fit your project.
Copilot: Instruction Files
GitHub Copilot supports a similar concept through instruction files. Create a .github/copilot-instructions.md file in your repository:
# Copilot Instructions for TaskFlow
## Stack
React 18, TypeScript, Express, SQLite, Vitest
## Key Conventions
- Functional components with hooks only
- No `any` types — use proper TypeScript
- Named exports, not default exports
- Result<T, AppError> for async operations
## Patterns
Always use Zod for validation at API boundaries.
Database access through repository classes only.
Tests should include happy path and error cases.
## Avoid
- No CSS-in-JS, use Tailwind
- No console.log, use logger utility
- No new dependencies without noting them
What to Include in Rules Files
The rules that have the biggest impact on suggestion quality:
- Stack declaration — Prevents AI from suggesting code for the wrong framework or version.
- Naming conventions — Eliminates the most common consistency issue with AI code.
- Patterns and anti-patterns — "Use X, don't use Y" prevents the most common wrong suggestions.
- Error handling conventions — AI defaults to try/catch with console.error. Your project probably has a better pattern.
- Test conventions — Framework, structure, and expected coverage per test file.
Rules that don't help much: extremely detailed style preferences (formatting is handled by Prettier), obvious best practices (AI already knows not to use eval), or rules that are too vague ("write clean code").
Pro Tip: Build Rules Incrementally
Start with 10 lines. When AI generates code that violates a convention you care about, add the rule. After a few weeks you'll have a rules file that prevents 90% of the issues you'd otherwise catch in review. It's faster to add rules as you discover violations than to try to write a comprehensive rules file upfront.
Part 5: Workflow Patterns
Here's how specific development tasks look when done well inside an AI-assisted editor.
Feature Implementation
- Plan in chat. Open the chat panel. Describe the feature. Ask for a list of files you'll need to create or modify. This gives you a roadmap before you start editing.
- Define the types first. Create or update your TypeScript types. This gives AI context for everything that follows.
- Implement with Cmd+K / Copilot Edits. For each file in your roadmap, select the relevant section and use inline editing to generate the implementation. Review each diff before accepting.
- Generate tests. Select the implementation, use /tests or Cmd+K → "Write tests for this including edge cases."
- Run tests and fix. If tests fail, select the error output, paste into chat, and ask for the fix.
Debugging
- Select the error. Copy the error message and stack trace.
- Ask chat with context. Paste the error into chat with @workspace: "This error occurs when I create a task with an empty title. Here's the error: [paste]. Which file is the issue in?"
- Navigate to the fix. Chat will identify the file and line. Open it, select the problematic code.
- Apply the fix. Cmd+K → "Fix the bug where empty title strings bypass validation" or use /fix in Copilot.
- Add a test. "Write a regression test that verifies empty titles are rejected."
Refactoring
- Start in chat. "This function is 80 lines long and does three things. How would you split it?" Get AI's recommended structure first.
- Extract piece by piece. Select each chunk, Cmd+K → "Extract this into a separate function called validateInput." AI creates the function and updates the call site.
- Verify after each extraction. Run tests between each extraction step. Don't batch three extractions before testing.
- Clean up. After all extractions, review the result. Use chat: "Review these three new functions. Are the names clear? Are the responsibilities well-separated?"
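The extraction step might leave the code looking like this sketch (the handler, the validateInput body, and the 200-character limit are all illustrative):

```typescript
interface CreateTaskInput {
  title: string;
}

// Extracted by Cmd+K: the validation chunk that used to live inline
// in the handler below.
function validateInput(input: CreateTaskInput): string[] {
  const errors: string[] = [];
  if (!input.title || input.title.trim() === "") {
    errors.push("title is required");
  } else if (input.title.length > 200) {
    errors.push("title must be 200 characters or fewer");
  }
  return errors;
}

// The call site the AI updates after the extraction.
function createTask(input: CreateTaskInput): { ok: boolean; errors: string[] } {
  const errors = validateInput(input);
  if (errors.length > 0) {
    return { ok: false, errors };
  }
  return { ok: true, errors: [] };
}
```

Running the tests after this single extraction, before touching the next chunk, is what keeps the refactor safe.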
Code Review (Your Own Code)
- Select the diff. In the Git panel, select all changed files.
- Ask for a review. Chat → "Review my changes for this PR. Check for bugs, security issues, inconsistencies, and missing edge cases." Include the context of what you were trying to accomplish.
- Address findings. For each issue flagged, either fix it (Cmd+K on the problematic code) or explain in your PR description why it's intentional.
Test Generation
- Open the source file and test file side by side. This gives AI context from both the implementation and existing test patterns.
- Select the function. Highlight the function you want to test.
- Generate. Cmd+K → "Write tests for this function. Include: valid input, invalid input, boundary values, and null/undefined cases. Use the same patterns as the existing tests in this file."
- Review and run. Check that the tests are meaningful (not just coverage padding), then run them.
Every workflow follows the same shape: plan in chat → implement with inline editing → verify with tests → review in chat. The chat panel handles the thinking. The inline editor handles the doing. Tests verify the result. This separation keeps each step focused.
Part 6: Common Pitfalls
These are the traps that waste time and erode trust in AI-assisted editing. Recognizing them is the first step to avoiding them.
Accepting Without Reading
The most common mistake. AI suggests 15 lines, you press Tab without reading them, and 10 minutes later you're debugging code you didn't write and don't understand. The fix: read every suggestion before accepting, even if it means slowing down for the first few days. You'll get faster at scanning as you build intuition for what AI gets right and where it tends to make mistakes.
Fighting the Suggestion
You want the function to use reduce. AI suggests forEach with a mutable accumulator. You reject, retype the start of a reduce, and AI suggests forEach again. You reject again. Repeat five times.
Stop fighting. If AI keeps suggesting a different approach, either: accept it and refactor (2 seconds), write the first line of your preferred approach manually to steer the prediction, or type a comment specifying what you want. Fighting the same suggestion repeatedly is always slower than any of these alternatives.
Over-Relying on Autocomplete
Tab completion is addictive. After a week, some developers stop thinking about what they're typing and just tab-accept their way through the codebase. The result: code that works but nobody understands, inconsistent patterns, and a growing sense of unease about what's actually in the project.
The antidote: if you can't explain what the accepted suggestion does, undo it and write it yourself. Use AI to go faster, not to go on autopilot.
Ignoring the Rules File
Most developers install Copilot or Cursor, use it for a week without a rules file, get frustrated by inconsistent suggestions, and conclude that AI-assisted editing isn't very good. The rules file is the difference between "suggests random patterns" and "suggests code that matches my project." Spend 15 minutes creating one. It transforms the experience.
Using Chat When Inline Would Be Faster
Opening the chat panel, typing a prompt, reading the response, and copy-pasting the code takes 60+ seconds. Selecting the code, pressing Cmd+K, typing a 5-word instruction, and pressing Enter takes 10 seconds. For simple edits — add error handling, rename this, extract this function — inline editing is always faster than chat. Save the chat panel for questions that need explanation, not just code.
Not Using @workspace
Asking Copilot Chat "How does authentication work in this project?" without @workspace gives you a generic explanation of authentication. Asking with @workspace gives you an explanation that references your specific auth middleware, your JWT configuration, and your user model. The difference is enormous, and most developers don't use it.
Getting Started
Pick your editor
If you're already in VS Code, start with Copilot. If you're open to switching, try Cursor for a week. Both have free tiers.
Configure it properly
Set acceptSuggestionOnEnter: off. Create a rules file. Open relevant files in tabs when working.
Learn three shortcuts
Tab (accept), Escape (reject), and Cmd+K or Ctrl+I (inline chat). Everything else can wait.
Build the tab-then-edit habit
Accept close-enough suggestions and refine manually. Fight the instinct to reject anything imperfect.
Evolve your rules file
Every time AI generates something that violates your conventions, add a rule. After two weeks, most violations stop.
Editor Guide — Summary
- VS Code + Copilot — Tab completion, Copilot Chat with @workspace, Copilot Edits for multi-file changes. Best for staying in your existing editor.
- Cursor — Cmd+K inline editing, Composer for multi-file work, codebase indexing, model selection. Best AI-native editing experience.
- Key habit — Tab-then-edit: accept close-enough suggestions and refine manually. Faster than waiting for perfection.
- Leading comments — Write a comment describing what you want before the code. AI treats it as a specification.
- 3-Tab Rule — Open the types file, an example of the pattern, and the test file alongside your working file.
- Rules files — .cursorrules or .github/copilot-instructions.md. Build incrementally. Prevents 90% of convention violations.
- Workflow pattern — Plan in chat → implement inline → verify with tests → review in chat.
- Top pitfall — Accepting without reading. If you can't explain what the suggestion does, undo it.