Prompt Engineering for Agentic Coding
Prompt engineering for agentic coding is fundamentally different from prompting a chatbot. Claude Code doesn't just answer questions—it executes tasks, reads files, runs tests, and modifies your codebase. The quality of your prompts directly determines how efficiently Claude works and how many correction cycles you'll need.
The core insight: The more precise your instructions, the fewer corrections you'll need. But paradoxically, deliberately vague prompts can be powerful for exploration and discovery.
Learning Objectives
- Calibrate prompt specificity based on task type
- Provide rich context using files, images, and URLs
- Use the Claude interview technique for complex features
- Avoid five common prompting mistakes
- Apply the explore-plan-implement workflow effectively
- Use emphasis and constraints to guide behavior
- Ask codebase questions like you would to a senior engineer
The Specificity Spectrum
Not all tasks require the same level of detail. Understanding where your task falls on the specificity spectrum is the first skill to master.
Exploration tasks (intentionally vague):
- "What testing frameworks are used in this codebase?"
- "How does authentication work here?"
- "Investigate why the build is failing"
Implementation tasks (maximum specificity):
- "Add a DELETE /api/users/:id endpoint that soft-deletes by setting deleted_at, requires admin auth, returns 204, and includes a test covering the unauthorized case"
- "Refactor the UserService class to use dependency injection. Extract the database calls into a UserRepository. Do not change the public API."
The mistake most developers make is treating implementation tasks like exploration tasks. This creates unnecessary correction cycles.
Specificity Calibration
Let's look at concrete before-and-after examples. The difference between mediocre and excellent prompts often comes down to three elements: what to do, how to do it, and what to avoid.
| Vague Prompt (Poor) | Specific Prompt (Good) |
|---|---|
| "Add tests for foo.py" | "Write a test for foo.py covering the edge case where the user is logged out. Avoid mocks—use a real test database." |
| "Fix the bug in the checkout flow" | "The checkout flow throws a 500 error when the cart contains a product with null inventory. Add a guard clause to filter out null inventory items before calculating totals." |
| "Make the homepage load faster" | "Optimize the homepage by lazy-loading the testimonials section and preloading the hero image. Measure the improvement using Lighthouse." |
| "Add error handling" | "Wrap the API calls in handleSubmit with try-catch. Show a toast notification on error using the toast() function from lib/toast.ts. Log errors to Sentry." |
| "Update the docs" | "Add a new section to README.md titled 'Environment Variables' that documents all required env vars (DATABASE_URL, REDIS_URL, API_KEY) with example values." |
Notice the pattern: good prompts specify scope (which file/function), behavior (what should happen), constraints (avoid X, use Y), and often verification (how to confirm it works).
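To make the checkout-flow row concrete, here is a sketch of the guard clause that prompt describes. `CartItem` and its field names are illustrative stand-ins, not from any real codebase:

```typescript
// Illustrative sketch of the guard clause the checkout-flow prompt
// asks for. CartItem and its fields are hypothetical.
type CartItem = {
  name: string;
  price: number;
  quantity: number;
  inventory: number | null;
};

function calculateTotal(items: CartItem[]): number {
  // Guard clause: filter out null-inventory items before totaling,
  // so a missing inventory record can no longer throw a 500.
  const inStock = items.filter((item) => item.inventory !== null);
  return inStock.reduce((sum, item) => sum + item.price * item.quantity, 0);
}
```

Because the prompt named the failing input (null inventory) and the fix (a guard clause), there is only one reasonable implementation, which is exactly the point.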
Anatomy of an Excellent Prompt
Let's break down a production-quality prompt:
Add a rate limiting middleware to the Express app that:
- Limits to 100 requests per 15 minutes per IP address
- Uses Redis for storage (not in-memory)
- Returns 429 with JSON body: {"error": "Too many requests"}
- Applies to all /api/* routes but excludes /health
- Includes a test that makes 101 requests and verifies the 101st gets a 429
- Do NOT modify the existing auth middleware
This prompt includes:
- Task: Add rate limiting middleware
- Configuration: 100 req/15min per IP
- Implementation detail: Use Redis, not memory
- Response format: 429 with specific JSON
- Scope: /api/* routes only, exclude /health
- Verification: Test for the 101st request
- Constraint: Don't touch auth middleware
Claude now has everything it needs to implement correctly on the first try.
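For illustration, here is a minimal sketch of what a correct implementation of that prompt could look like. The `Req`/`Res` shapes are simplified stand-ins for Express types, and an in-memory Map stands in for the Redis store the prompt actually asks for:

```typescript
// Minimal sketch of the rate-limiting middleware described above.
// Req/Res are simplified stand-ins for Express types; a Map stands
// in for the Redis store the real prompt requires.
type Req = { ip: string; path: string };
type Res = { status(code: number): Res; json(payload: unknown): Res };
type Next = () => void;

function rateLimit(opts: { windowMs: number; max: number }) {
  // key: client IP -> request count within the current window
  const hits = new Map<string, { count: number; resetAt: number }>();

  return (req: Req, res: Res, next: Next): void => {
    // Scope: only /api/* routes; /health is never limited.
    if (!req.path.startsWith("/api/") || req.path === "/health") {
      return next();
    }
    const now = Date.now();
    const entry = hits.get(req.ip);
    if (!entry || now >= entry.resetAt) {
      hits.set(req.ip, { count: 1, resetAt: now + opts.windowMs });
      return next();
    }
    entry.count += 1;
    if (entry.count > opts.max) {
      // Exact response format the prompt specifies.
      res.status(429).json({ error: "Too many requests" });
      return;
    }
    next();
  };
}
```

Notice how every bullet of the prompt maps to a specific line: the window and limit, the scope check, and the 429 body. The verification bullet (101 requests, the 101st gets a 429) translates directly into a test.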
Providing Rich Context
Claude Code can access any file in your workspace, but you can make its job easier by proactively providing context.
Using @ to Reference Files
The @ symbol lets you explicitly reference files, functions, or documentation:
@src/lib/auth.ts needs to be refactored to use JWT tokens instead
of session cookies. Keep the same exports (login, logout, getUser)
but change the implementation.
This immediately loads the file into Claude's context, so it doesn't need to search for it.
Pasting Images
Claude can see images. This is powerful for UI work:
[paste screenshot of broken layout]
The sidebar is overlapping the main content on mobile. Fix the CSS
in @components/layout/Sidebar.tsx so it collapses into a drawer
under 768px.
Screenshots are especially useful for:
- Bug reports (show the broken state)
- Design implementation (show the target design)
- Error messages (paste the full stack trace as an image)
Providing URLs
Claude can fetch content from URLs using WebFetch:
Implement the authentication flow described in this article:
https://supabase.com/docs/guides/auth/social-login
Use Google as the provider. Add a "Sign in with Google" button to
the login page.
Claude will read the documentation and implement based on the official guide.
Piping Data
You can pipe command output directly to Claude:
npm test 2>&1 | claude "Fix all failing tests"
Or pipe file contents:
cat error.log | claude "Identify the root cause of this error"
This is particularly useful for debugging—let Claude see the raw output you're seeing.
Let Claude Fetch What It Needs
Sometimes it's better to let Claude do the research:
Add a Stripe payment integration. Use the latest Stripe SDK and
follow their official best practices. Create a /api/checkout
endpoint that creates a payment intent.
Claude will search for Stripe documentation, find the current best practices, and implement accordingly. This is often more reliable than copying old code samples from blog posts.
The Claude Interview Technique
For complex features with many decision points, use the interview technique:
I want to add a commenting system to the blog. Interview me in
detail using AskUserQuestion. Ask about:
- Technical implementation (database schema, API design)
- UI/UX (where comments appear, editing flow)
- Edge cases (spam protection, moderation)
- My concerns and priorities
- Tradeoffs I'm willing to make
After you understand the requirements, create a plan and wait for
my approval before implementing.
This flips the dynamic. Instead of you trying to anticipate every detail, Claude asks probing questions to clarify requirements. It's especially powerful when:
- You have a clear goal but fuzzy implementation details
- The feature touches multiple parts of the system
- There are UI/UX decisions you haven't fully thought through
- You want to discuss tradeoffs before committing to an approach
When to Use the Interview Technique
Use this approach for features that would take 30+ minutes to specify upfront. If you find yourself writing a 500-word prompt with lots of "maybe" and "probably," stop and use the interview technique instead.
Five Common Prompting Mistakes
1. Kitchen Sink Session
The mistake: Mixing unrelated tasks in a single session.
Add tests for UserService. Also fix the typo in README. Oh and can
you update the Docker config to use Node 20? And investigate why
deploys are slow.
This fragments Claude's context across multiple unrelated tasks. Each task requires different files, different mental models, different verification steps.
The fix: One session, one goal. Start a new conversation for unrelated tasks. Use /clear liberally.
2. Correcting Over and Over
The mistake: Spending 6+ messages trying to fix the same thing.
User: Add a login form
Claude: [implements with email/password]
User: No, use username not email
Claude: [switches to username]
User: Keep the email field too, both email AND username
Claude: [adds both]
User: No I meant email OR username, user chooses
Claude: [implements radio buttons]
User: Not radio buttons, just one field that accepts both
After a few correction cycles with no real progress, you're stuck. Claude's context is polluted with failed attempts.
The fix: Use /clear after 2 failed attempts. Start fresh with a more precise prompt that incorporates everything you learned:
Add a login form with a single field that accepts either email or
username. On submit, check if input contains @ to determine if it's
email or username. Use the existing auth.login() function.
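The "check if input contains @" logic from that refined prompt reduces to a few lines. The `Credential` shape and `classifyIdentifier` name are illustrative, not from any real codebase:

```typescript
// Sketch of the "contains @" check from the refined prompt above.
// Credential and classifyIdentifier are illustrative names.
type Credential = { kind: "email" | "username"; value: string };

function classifyIdentifier(input: string): Credential {
  // One field accepts both: anything containing "@" is treated as an email.
  const trimmed = input.trim();
  return trimmed.includes("@")
    ? { kind: "email", value: trimmed }
    : { kind: "username", value: trimmed };
}
```

The point is not the code itself, but that the precise prompt made the implementation unambiguous on the first try, with no radio buttons in sight.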
3. Over-Specified CLAUDE.md
The mistake: Writing a 2000-line CLAUDE.md with every coding standard, architectural decision, and historical context.
Claude reads CLAUDE.md at the start of every conversation, but overly long files get skimmed or ignored. The signal-to-noise ratio drops.
The fix: Keep CLAUDE.md under 200 lines. Focus on:
- Project structure (where things live)
- Non-obvious conventions (naming, patterns)
- Critical constraints (don't modify X, always use Y)
- Common commands
Everything else belongs in proper documentation.
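Under those guidelines, a concise CLAUDE.md might look like this. The project details below are invented for illustration:

```markdown
# CLAUDE.md

## Structure
- `src/api/`: Express routes, one file per resource
- `src/services/`: business logic, no direct DB access
- `src/db/`: all database queries live here

## Conventions
- Services throw `AppError`; routes translate it to HTTP codes
- All timestamps are UTC

## Constraints
- NEVER modify files under `src/db/migrations/`
- Always use the `logger` from `src/lib/log.ts`, not `console.log`

## Commands
- `npm test` runs the full suite; `npm run test:unit` for fast feedback
```

Every line here either saves Claude a search or prevents a mistake; nothing is history or style philosophy.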
4. Trust-Then-Verify Gap
The mistake: Accepting Claude's changes without running tests or checking the output.
User: Add pagination to the user list
Claude: [implements pagination]
User: Great! Now add sorting
Claude: [implements sorting, accidentally breaks pagination]
User: Perfect! Now add filtering
Claude: [implements filtering, breaks both pagination and sorting]
Without verification checkpoints, errors compound. By the third change, nothing works.
The fix: Test after every significant change. Use this pattern:
Add pagination to the user list. After you implement, run the test
suite and show me the results.
Then verify the behavior yourself before moving on.
5. Infinite Exploration
The mistake: Asking open-ended investigation questions without scope limits.
User: "Investigate why the app is slow"
Claude: [reads 40 files, analyzes bundle size, checks database queries,
reviews network calls, examines render cycles, profiles memory usage...]
Claude will keep exploring until it fills the context window. Unscoped investigations are black holes.
The fix: Add scope and exit criteria:
The dashboard page takes 5+ seconds to load. Investigate the initial
render performance. Check these areas in order:
1. Network waterfall (any blocking requests?)
2. Component render counts (any unnecessary re-renders?)
3. Database query performance (any N+1 queries?)
Stop after finding the top 3 issues and summarize them.
The Explore-Plan-Implement Workflow Revisited
This three-phase workflow is the foundation of effective agentic coding:
Phase 1: Explore (Vague Prompts Welcome)
How does authentication work in this codebase? I need to understand
it before adding SSO support.
What's the difference between the UserService and AuthService? When
should I use each?
Show me all the places where we send emails. I want to understand
the email infrastructure.
In this phase, vague prompts are good. You're mapping the territory. Let Claude read files, explain patterns, and highlight key areas.
Phase 2: Plan (Collaborative)
I want to add SSO using SAML. Based on what you know about our auth
system, create a plan. Include:
- What files need to change
- New files to create
- Database migrations needed
- Testing strategy
- Rollout approach (feature flag?)
Don't implement yet—just plan.
Review the plan. Discuss tradeoffs. Refine it. This is where you catch misalignments before any code changes.
Phase 3: Implement (Maximum Specificity)
Implement the SSO plan we discussed. Start with the database
migration, then add the SAML service, then update the auth routes.
After each step, run the tests and confirm they pass before
proceeding.
Now you're in execution mode. Prompts should be precise. Verify at checkpoints. Move linearly through the plan.
Don't Mix Phases
A common mistake is asking "How does auth work? Also add SSO support" in a single prompt. This mixes exploration with implementation. Claude will either skip the exploration (and implement based on incomplete understanding) or over-explore (and never get to implementation). Separate the phases.
Using Emphasis and Constraints
When you have non-negotiable requirements, emphasize them explicitly:
Add a CSV export feature to the reports page.
IMPORTANT: Do not modify the database schema. Work with the existing
tables.
YOU MUST include error handling for cases where the report has 0 rows.
NEVER expose user email addresses in the CSV—use user IDs only.
Keywords like IMPORTANT, YOU MUST, NEVER, and DO NOT act as guardrails. They're particularly useful when:
- You've had issues with Claude modifying the wrong thing before
- There are security/privacy implications
- You're refactoring and need to preserve public APIs
- You have strong opinions about implementation approach
Don't overuse them—if every sentence is IN ALL CAPS, nothing stands out. Reserve emphasis for the constraints that really matter.
Asking Codebase Questions
Treat Claude like a senior engineer who just joined your team. Ask the questions you'd ask a coworker:
Understanding patterns:
How does logging work in this codebase? I want to add logs to the
new payment service.
Debugging:
The tests are failing with "Cannot read property 'id' of undefined".
This only happens in CI, not locally. What's different about the CI
environment?
Language/framework specifics:
What does async move {...} do on line 134 of worker.rs? I'm not
familiar with Rust closures.
Architecture decisions:
Why do we use Redis for session storage instead of database sessions?
Is there a reason we can't switch?
Best practices in context:
I need to add a background job that runs every hour. What's the
established pattern for scheduled tasks in this project?
Claude can read the code, understand the context, and explain not just what the code does but why it's structured that way.
The 'Show Me' Pattern
For complex questions, use the "show me" pattern:
"Show me 3 examples of how we handle form validation in this codebase. I want to follow the established pattern for the new signup form."
This grounds Claude's answer in actual code from your project, not generic examples.
Iterative Refinement
Even with perfect prompts, you'll sometimes need to refine. Here's how to iterate effectively:
Clarify, Don't Correct
Correcting:
No that's wrong. Fix it.
Clarifying:
Close, but the validation should happen before saving, not after.
Move the validateInput() call above the db.save() line.
Explain what's wrong and what you want instead. Claude learns from specificity.
Build on What Works
The login form looks good. Now add a "Forgot Password" link below
the password field that opens a modal with a password reset form.
Reference the working state, then describe the delta. This is more efficient than describing the entire desired end state.
Use Checkpoints
Let's implement this in 3 steps:
1. First, add the database schema for password resets
2. Then add the API endpoint
3. Finally, add the UI
Complete step 1 and show me the migration file. Don't proceed to
step 2 until I approve.
This prevents Claude from going too far in the wrong direction.
Advanced Prompting Patterns
The Constraint Sandwich
State constraints both before and after the main instruction:
Do not modify the existing API—we have mobile clients depending on it.
Add a v2 version of the /users endpoint that returns additional
fields (created_at, updated_at, is_verified).
Keep the v1 endpoint unchanged. Add the v2 endpoint at /v2/users.
The repetition ensures the constraint isn't missed.
The Example-Driven Prompt
Show Claude an example of what you want:
Add a new API endpoint for deleting posts. Follow the same pattern
as the existing DELETE /api/comments/:id endpoint (see routes/comments.ts
lines 45-62):
- Require auth
- Check ownership
- Soft delete (set deleted_at, don't remove from DB)
- Return 204
- Include a test
This is especially powerful in codebases with established patterns.
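The borrowed pattern itself might reduce to something like this sketch, where `Post`, the Map-based store, and the numeric status return are simplified stand-ins for the real Express handler and database layer:

```typescript
// Illustrative sketch of the soft-delete pattern the prompt references.
// Post and the Map-based store are stand-ins for the real DB layer.
type Post = { id: string; authorId: string; deletedAt: Date | null };

function softDeletePost(
  posts: Map<string, Post>,
  postId: string,
  currentUserId: string
): number {
  const post = posts.get(postId);
  if (!post || post.deletedAt) return 404; // missing or already deleted
  if (post.authorId !== currentUserId) return 403; // ownership check
  post.deletedAt = new Date(); // soft delete: the row stays in the store
  return 204;
}
```

By pointing Claude at an existing endpoint that already embodies these decisions, you get consistency for free instead of re-specifying each rule.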
The Diff Preview Request
For risky changes, ask for a preview:
I want to refactor UserService to use dependency injection. Before
making changes, show me a diff preview of what you'd change in
UserService.ts. I'll approve before you modify files.
This gives you a chance to course-correct before any files are touched.
Prompt Engineering Checklist
- Start with clarity: decide whether this is exploration (vague OK) or implementation (maximum specificity required).
- Provide context: use @ for files, paste images and errors, share URLs. Give Claude what it needs.
- Specify scope and constraints: what should change? What should NOT change? What are the acceptance criteria?
- Verify incrementally: test after each significant change. Don't let errors compound.
- Iterate with specificity: when refining, explain what's wrong and what you want instead.
Practice Prompt Engineering
Part 1: Rewrite Vague Prompts
Rewrite these 5 vague prompts into specific, actionable ones:
- "Add authentication to the app"
- "Make the dashboard prettier"
- "Fix the performance issues"
- "Add some tests"
- "Update the documentation"
For each, specify: scope, behavior, constraints, and verification method.
Part 2: Interview Technique Practice
Pick a feature you've been considering adding to one of your projects. Use the Claude interview technique:
I want to add [YOUR FEATURE IDEA]. Interview me in detail using
AskUserQuestion. Ask about technical implementation, UI/UX, edge
cases, concerns, and tradeoffs.
Go through the full interview process. Notice what questions Claude asks and how it helps clarify your thinking.
Part 3: Codebase Questions
In one of your codebases, identify 3 things you don't fully understand (a pattern, a function, an architectural decision). Ask Claude to explain them using the "senior engineer" approach. Compare Claude's explanations to your assumptions.
Key Takeaway
Prompt engineering for agentic coding is about calibration. Use vague prompts for exploration, precise prompts for implementation. Provide rich context through files, images, and URLs. Use the interview technique for complex features. Avoid kitchen sink sessions, infinite corrections, and unscoped investigations. Treat Claude like a senior engineer—ask questions, verify work, and iterate with specificity. The goal is not perfect prompts on the first try, but efficient convergence to the right solution.
Further Reading
- Official Claude Code documentation: Best Practices for Prompting
- Anthropic's general prompting guide: Introduction to Prompt Engineering
- For complex multi-step tasks, review the Agent Teams documentation for delegation patterns