Case Study: Full Project Build
You have spent 23 lessons learning Claude Code piece by piece: the agentic loop, prompt engineering, context management, agent teams, hooks, headless mode, CI/CD integration. Now it is time to put it all together.
In this capstone, you will build a complete URL shortener web application from scratch using Claude Code as your primary development partner. Not a toy demo. A production-ready application with a backend API, frontend UI, database layer, comprehensive tests, CI/CD pipeline, and documentation. Every technique you have learned in this course will appear somewhere in this build.
The goal is not to type as little as possible. The goal is to work with Claude Code the way a senior engineer works with a capable teammate: setting clear context, delegating intelligently, verifying output, and orchestrating complex workflows across multiple agents and tools.
Learning Objectives
- Build a complete web application from scratch using Claude Code as your primary development partner
- Apply CLAUDE.md, custom skills, and permission rules to establish project conventions from day one
- Orchestrate an agent team with specialized roles to parallelize implementation work
- Integrate hooks, subagents, and automated testing into your development workflow
- Set up a full CI/CD pipeline with automated PR review and deployment
The Project: LinkForge URL Shortener
Here is what you are building:
- Backend API: Create, read, update, and delete short URLs. Track click analytics. Optional API key authentication.
- Frontend UI: A clean form to shorten URLs, a dashboard showing link analytics, and copy-to-clipboard functionality.
- Database: SQLite for local development (easy to swap for PostgreSQL later).
- Tests: Unit tests for business logic, integration tests for API endpoints, end-to-end tests for critical user flows.
- CI/CD: GitHub Actions workflow for linting, testing, and deployment. Automated PR review with Claude Code in headless mode.
- Documentation: Generated API docs, a comprehensive README, and inline code comments.
The stack is intentionally straightforward: Node.js with Express for the backend, vanilla HTML/CSS/JS for the frontend (no framework overhead for a capstone), and Vitest for testing. You can substitute your preferred stack, but the workflow remains the same.
Phase 1: Project Setup
Every successful project starts with clear conventions. Before writing a single line of application code, you will establish the foundation that makes Claude Code maximally effective.
Initialize the project with /init
Create a new directory and let Claude Code scaffold the project:
```bash
mkdir linkforge && cd linkforge
git init
claude
```

Inside the Claude Code session:

```
/init
```
Claude will analyze your empty project and generate a starter CLAUDE.md. This is your starting point, not your final version. Review what it produces and then refine it.
Customize CLAUDE.md with project conventions
Open the generated CLAUDE.md and expand it with specific conventions for LinkForge. Tell Claude exactly what you want:
Update CLAUDE.md with the following project conventions:
- Stack: Node.js + Express backend, vanilla HTML/CSS/JS frontend, SQLite via better-sqlite3
- Testing: Vitest for unit and integration tests, Playwright for E2E
- Structure: src/routes/ for API routes, src/services/ for business logic, src/db/ for database, public/ for frontend
- Naming: camelCase for variables and functions, PascalCase for classes, kebab-case for files
- Error handling: All API routes must return structured JSON errors with status codes
- Every new API endpoint must have corresponding tests before the PR is approved
This CLAUDE.md will guide every agent, every skill, and every automated review for the rest of the project. Invest the time to make it thorough.
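The error-handling convention above can be sketched as a small Express-style middleware. This is an illustrative sketch, not code from the project; the names `ApiError` and `errorHandler` are hypothetical, and the only thing it pins down is the "structured JSON errors with status codes" contract:

```javascript
// Hypothetical sketch of the "structured JSON errors" convention.
// Every route error becomes { error: { code, message } } with an
// explicit HTTP status, so clients can branch on error.code.

class ApiError extends Error {
  constructor(status, code, message) {
    super(message);
    this.status = status;
    this.code = code;
  }
}

// Express-style error middleware: routes call next(err) and this
// single handler formats the response.
function errorHandler(err, req, res, next) {
  const status = err.status ?? 500;
  res.status(status).json({
    error: {
      code: err.code ?? 'INTERNAL_ERROR',
      message: err.message,
    },
  });
}
```

Writing the convention down this concretely in CLAUDE.md gives every agent the same target shape to hit.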
Create custom skills
Build two custom skills that you will use repeatedly throughout the project.
For the API endpoint generator, create .claude/skills/generate-api-endpoint.md:
```markdown
# Generate API Endpoint

Create a new Express API endpoint following LinkForge conventions.

## Input

- Route path (e.g., /api/links)
- HTTP method (GET, POST, PUT, DELETE)
- Description of the endpoint's purpose

## Steps

1. Create the route handler in src/routes/
2. Add input validation using zod
3. Create or update the service layer in src/services/
4. Add the route to the Express app in src/app.js
5. Generate Vitest tests in tests/routes/
6. Update the API documentation in docs/api.md
```

For the test coverage skill, create .claude/skills/add-test-coverage.md:
```markdown
# Add Test Coverage

Analyze a source file and generate comprehensive tests.

## Input

- Path to the source file

## Steps

1. Read the source file and identify all exported functions
2. Check existing test coverage in the corresponding test file
3. Generate tests for uncovered functions, including:
   - Happy path with valid inputs
   - Edge cases (empty strings, null values, boundary numbers)
   - Error conditions (invalid input, missing dependencies)
4. Run the tests and fix any failures
5. Report the before/after coverage delta
```

Both skills are now available as /generate-api-endpoint and /add-test-coverage in any Claude Code session within this project.
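To make the endpoint skill concrete, here is a hypothetical sketch of the handler shape it might generate for POST /api/links: a thin Express-style handler that delegates to the service layer. `makeCreateLinkHandler` and the inline validation check (standing in for the zod schema the skill specifies) are illustrative, not code from the project:

```javascript
// Hypothetical sketch of the handler shape the skill would generate.
// The service object is the src/services/ layer; injecting it keeps
// the handler trivially testable.

function makeCreateLinkHandler(service) {
  return (req, res, next) => {
    try {
      const { originalUrl } = req.body ?? {};
      if (typeof originalUrl !== 'string' || originalUrl.length === 0) {
        // In the generated code this check would be a zod schema (step 2).
        return res.status(400).json({
          error: { code: 'INVALID_URL', message: 'originalUrl is required' },
        });
      }
      const shortCode = service.createLink(originalUrl);
      res.status(201).json({ shortCode });
    } catch (err) {
      next(err); // defer to the structured error middleware
    }
  };
}
```

Because the handler takes the service as an argument, the Vitest tests the skill generates (step 5) can exercise it with a fake service and fake req/res objects, with no server running.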
Set up permission rules
Configure .claude/settings.json to allow common development commands without prompting:
```json
{
  "permissions": {
    "allow": [
      "Bash(npm run *)",
      "Bash(npx vitest *)",
      "Bash(npx playwright *)",
      "Bash(git add *)",
      "Bash(git commit *)",
      "Bash(sqlite3 *)"
    ],
    "deny": [
      "Bash(rm -rf *)",
      "Bash(git push --force *)"
    ]
  }
}
```

This lets Claude Code run tests, execute npm scripts, and commit code without interrupting you for approval every time, while blocking destructive operations.
The five-minute investment that saves hours
Spending time on CLAUDE.md, skills, and permissions before writing application code feels slow. It is not. Every minute spent here saves ten minutes later when Claude produces code that already follows your conventions, when skills automate repetitive scaffolding, and when permissions let agents work without constant interruption.
Phase 2: Core Implementation with Agent Teams
With the foundation in place, it is time to build the application. Instead of doing everything sequentially in a single session, you will use an agent team to parallelize the work.
Create a task file at .claude/tasks/initial-build.md:
```markdown
# LinkForge Initial Build

## Backend Agent

- [ ] Set up Express server with CORS and JSON middleware
- [ ] Create SQLite database with links table (id, originalUrl, shortCode, createdAt, clickCount)
- [ ] POST /api/links — create a short link
- [ ] GET /api/links/:code — redirect to original URL and increment click count
- [ ] GET /api/links — list all links with analytics
- [ ] DELETE /api/links/:code — remove a link
- [ ] Add input validation and error handling to all routes

## Frontend Agent

- [ ] Create index.html with URL shortening form
- [ ] Build analytics dashboard showing all links and click counts
- [ ] Add copy-to-clipboard for shortened URLs
- [ ] Style with clean, responsive CSS
- [ ] Add loading states and error handling in the UI

## QA Agent

- [ ] Write unit tests for the link service (create, resolve, delete)
- [ ] Write integration tests for all API endpoints
- [ ] Test error cases: invalid URLs, duplicate codes, missing links
- [ ] Validate that redirects return correct HTTP status codes
- [ ] Check for SQL injection vulnerabilities in URL inputs
```

Now launch the team:

```bash
claude --agent-team .claude/tasks/initial-build.md
```

While the agents work, monitor their progress. The backend agent will scaffold the Express server and database. The frontend agent will build the UI. The QA agent will write tests against the API contracts defined in CLAUDE.md. Because all three agents share the same CLAUDE.md context, they produce code that follows the same conventions.
Agent coordination in practice
Agent teams work best when tasks have clear boundaries. The backend agent owns src/, the frontend agent owns public/, and the QA agent owns tests/. When their work overlaps (the QA agent needs to know the API response format), the shared CLAUDE.md and task file provide the coordination layer. If you notice agents conflicting, add more specifics to CLAUDE.md.
Once all agents complete their tasks, review the output:
Show me a summary of all files created, organized by directory. For each file, give me a one-line description of its purpose.
Verify the structure matches your conventions. If something is off, correct it now. A misplaced file or wrong naming convention caught early prevents compounding errors later.
Phase 3: Testing and Quality
The QA agent wrote initial tests, but now you will harden the project with hooks, subagents, and thorough coverage analysis.
Set Up Hooks for Automatic Quality Checks
Create a pre-commit hook in .claude/hooks/pre-commit.sh:
```bash
#!/bin/bash
set -e  # abort on the first failing step so the commit is actually blocked
npm run lint
npx vitest run --reporter=verbose
```

And configure Claude Code's hooks in .claude/settings.json:

```json
{
  "hooks": {
    "preCommit": ".claude/hooks/pre-commit.sh"
  }
}
```

Now every time Claude Code commits code, linting and tests run automatically. If anything fails, the commit is blocked and Claude fixes the issue before retrying.
Run a Security Audit with a Subagent
Use a subagent to perform a focused security review without cluttering your main session's context:
Run a subagent to perform a security audit of the entire src/ directory.
Check for: SQL injection, XSS vulnerabilities, missing input validation,
insecure HTTP headers, and any hardcoded secrets. Report findings as a
prioritized list with severity levels.
The subagent operates in its own context window, reads every source file, and returns a structured report. Address any critical or high-severity findings before moving on.
Achieve Comprehensive Test Coverage
Now use your custom skill to fill coverage gaps:
```
/add-test-coverage src/services/link-service.js
```
After the skill generates tests, run the full suite:
Run all tests with coverage reporting. Show me any files below 80% coverage.
Claude will execute the tests, parse the coverage output, and identify gaps. Iterate until you reach your target. For a capstone project, aim for 90% or higher on business logic and 80% or higher overall.
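If you want the coverage target enforced by the tooling itself rather than by prompting, Vitest can fail the run when coverage drops below a threshold. This is a sketch assuming Vitest's v8 coverage provider; check your Vitest version's documentation for the exact option names:

```javascript
// vitest.config.js — illustrative sketch, not project code.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      // Fail the run if any metric dips below the course target.
      thresholds: { lines: 80, functions: 80, branches: 80 },
    },
  },
});
```

With thresholds in the config, the same 80% bar applies locally, in hooks, and later in CI without restating it in prompts.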
Harden the test suite
Intermediate · 15 min

Ask Claude Code to generate the following additional test scenarios:

- Race condition test: What happens when two requests try to create the same short code simultaneously?
- Boundary test: Submit a URL that is exactly 2,048 characters long (a common URL length limit).
- Malicious input test: Attempt to create a link with a JavaScript URI (javascript:alert(1)) and verify it is rejected.
- Performance test: Create 1,000 links and verify the list endpoint responds within 200 ms.
Review Claude's test implementations. Do they actually test what you asked for, or did Claude take shortcuts? This is where your verification skills matter most.
Phase 4: Deployment with CI/CD and Headless Mode
The application works locally. Now set up the infrastructure to deploy it automatically and review pull requests with Claude Code.
Create the GitHub Actions workflow
Ask Claude to generate a CI/CD pipeline:
Create a GitHub Actions workflow at .github/workflows/ci.yml that:
1. Runs on every push and pull request to main
2. Sets up Node.js 22
3. Installs dependencies with npm ci
4. Runs linting
5. Runs the full test suite with coverage
6. Fails the build if coverage drops below 80%
7. Uploads test results as artifacts
Review the generated workflow carefully. CI/CD pipelines run unattended, so errors here are costly.
Add automated PR review
Add a second workflow for Claude Code in headless mode:
```yaml
# .github/workflows/claude-review.yml
name: Claude Code Review
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Claude Code Review
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Review this pull request. Check for:
            - Adherence to project conventions in CLAUDE.md
            - Test coverage for new code
            - Security issues
            - Performance concerns
            Post your review as PR comments.
```

This means every pull request gets an automated review from Claude Code before a human reviewer even looks at it. The review catches convention violations, missing tests, and potential bugs early.
Generate release notes
When you are ready to tag a release, use headless mode to generate release notes:
```bash
claude -p "Generate release notes for v1.0.0. Read all commits since the initial commit, categorize changes into Features, Bug Fixes, and Infrastructure, and format as markdown suitable for a GitHub release."
```

Claude reads the git log, groups changes by category, and produces clean release notes you can paste directly into your GitHub release.
Phase 5: Documentation and Polish
A project without documentation is a project that only works for the person who built it today. Use Claude Code to generate everything a new contributor would need.
Generate comprehensive documentation for LinkForge:
1. API documentation in docs/api.md with request/response examples for every endpoint
2. Update README.md with project overview, setup instructions, environment variables, and development workflow
3. Add JSDoc comments to all exported functions in src/services/
4. Create a CONTRIBUTING.md with coding standards, PR process, and testing requirements
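As a concrete target for step 3, here is the JSDoc style on a hypothetical service function; the function and its parameters are illustrative, not taken from the actual codebase:

```javascript
// Illustrative JSDoc sketch for src/services/ exports.

/**
 * Build the absolute short URL for a stored link.
 *
 * @param {string} baseUrl - Origin of the deployment, e.g. "https://lnk.example".
 * @param {string} shortCode - The code stored in the links table.
 * @returns {string} The full URL a user can share.
 */
function buildShortUrl(baseUrl, shortCode) {
  return `${baseUrl.replace(/\/$/, '')}/${encodeURIComponent(shortCode)}`;
}
```

Comments at this level (types, units, an example value) are what make the generated API docs in docs/api.md accurate without extra prompting.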
After Claude generates the documentation, read through it yourself. Documentation that contains hallucinated endpoints or incorrect setup steps is worse than no documentation at all. Verify every command, every endpoint, and every example.
Final Review with Plan Mode
Before calling the project complete, do one last comprehensive review:
Review the entire LinkForge project. Use plan mode to create a prioritized list of improvements, bugs, and cleanup tasks. Categorize each item as critical, important, or nice-to-have.
Claude will switch to Plan Mode, analyze the full codebase without making changes, and produce a structured improvement plan. Address any critical items. Log the rest as GitHub issues for future work.
The complete capstone build
Advanced · 45 min

Build the entire LinkForge project from start to finish using the workflow described in this lesson. Here is your checklist:
Phase 1: Foundation
- Initialize repository and run /init
- Customize CLAUDE.md with LinkForge conventions
- Create both custom skills
- Configure permissions in settings.json
Phase 2: Implementation
- Create the agent team task file
- Build backend API with all CRUD endpoints
- Build frontend UI with form and dashboard
- Verify agents followed CLAUDE.md conventions
Phase 3: Quality
- Set up pre-commit hooks
- Run security audit via subagent
- Achieve 80%+ test coverage
- Fix all failing tests
Phase 4: Deployment
- Create CI/CD workflow
- Add automated PR review workflow
- Generate release notes for v1.0.0
Phase 5: Polish
- Generate API documentation
- Write README and CONTRIBUTING.md
- Run final Plan Mode review
- Address all critical findings
Time yourself. Track how long each phase takes. Notice which phases benefit most from Claude Code and which still require significant human judgment.
Reflection: What You Built and How You Built It
Step back and consider what just happened. You built a complete web application, and the majority of the code was written by an AI agent operating under your direction. But you were not passive. At every stage, you made critical decisions:
- Architecture: You chose the stack, the directory structure, and the conventions.
- Quality standards: You defined what "good enough" means through CLAUDE.md and your test coverage targets.
- Coordination: You designed the agent team roles and task boundaries.
- Verification: You reviewed output, caught errors, and directed corrections.
- Judgment: You decided when to trust Claude's output and when to dig deeper.
This is the agentic coding paradigm in practice. You are not a typist. You are a technical lead directing a capable team. The skill is not in writing code character by character. The skill is in setting context, giving clear direction, and verifying results.
Course Complete
You have reached the end of Claude Code Fundamentals. Over 24 lessons, you moved from understanding the agentic paradigm to orchestrating multi-agent teams building production applications. That is a significant transformation in how you work.
Here is what you now know:
Key Takeaway
- Claude Code is not autocomplete. It is an autonomous agent that gathers context, takes action, and verifies results. Your role is to direct it effectively.
- CLAUDE.md is the single highest-leverage investment. It persists your knowledge across sessions and scales across agents.
- Custom skills turn repetitive multi-step workflows into single commands. Build skills for anything you do more than twice.
- Agent teams parallelize work, but they need clear boundaries and shared context to coordinate effectively.
- Hooks and CI/CD integration extend Claude Code beyond interactive sessions into your entire development pipeline.
- Verification is your most important skill. Claude is capable but not infallible. Trust, but verify.
- The agentic coding paradigm is about direction, not dictation. The best results come from clear context, specific instructions, and thoughtful review.
Where to Go From Here
This course gave you the fundamentals. Here is how to keep growing:
Build real projects. The capstone was guided. Your next project will not be. Pick something you actually need, build it with Claude Code, and notice where you struggle. That is where growth happens.
Push the boundaries. Try building a project in a language you do not know well. Use Claude Code to bridge the knowledge gap while you learn. You will discover that agentic coding accelerates learning, not just productivity.
Contribute to open source. Find an open source project, read their CONTRIBUTING.md, set up your CLAUDE.md, and submit a PR. Working with an unfamiliar codebase is where Claude Code's ability to read and understand code shines brightest.
Share your knowledge. Write about your workflow. Teach a colleague. The best way to deepen your understanding is to explain it to someone else.
Stay current. Claude Code evolves rapidly. New capabilities, new integrations, and new patterns emerge regularly. Follow the Anthropic changelog, experiment with new features as they ship, and update your CLAUDE.md files and skills as your workflow matures.
You started this course wondering what agentic coding even means. You are ending it as someone who can orchestrate AI agents to build production software. That is not a small thing. Go build something remarkable.