Claude Code Power User Guide: Skills, Hooks, Subagents

Configure Claude Code beyond defaults: skills, hooks, subagents, and CLAUDE.md memory. Working examples, team patterns, and common setup mistakes to avoid.

14 minutes
Advanced
2026-04-14

Claude Code works fine out of the box. It also works nothing like the version a serious team actually uses. Skills, hooks, subagents, and CLAUDE.md memory turn Claude Code from a chat-style coding helper into something closer to a configurable agent that respects your project's rules, runs commands on the right events, and dispatches work in parallel.

This guide covers all four primitives, how they fit together, and the setup patterns that hold up in a real codebase past the first week.

What Is Claude Code and Why Customize It?

Claude Code is Anthropic's CLI for running Claude inside a terminal session. It reads files, runs commands, edits code, and calls tools. The defaults are conservative on purpose. A fresh install gives you:

  • A single context window with no project-specific instructions
  • No persistent memory across sessions
  • No custom commands
  • No automated guardrails on destructive actions
  • Sequential tool calls even when parallel work would be faster

That's a fine starting point for a five-minute exploration. It's a weak starting point for shipping production code. The four primitives below each fix a different gap.

  • CLAUDE.md: persistent instructions that load on every session, per-project and globally
  • Skills: packaged workflows Claude invokes based on context or /slash-command
  • Hooks: shell commands the harness runs on specific events (session start, tool use, and so on)
  • Subagents: isolated Claude instances for parallel work or sensitive contexts

Most teams eventually use all four. The order matters too: get CLAUDE.md right first, then add the rest as specific problems come up.

Why Claude Code for Serious Engineering Work?

The case for Claude Code over a pure IDE integration like Cursor or GitHub Copilot comes down to four things.

  1. Extensibility beats polish. Cursor gives you a better editor UI. Claude Code gives you hooks, skills, subagents, and MCP servers that you can shape to your workflow. For detailed tradeoffs between the three, see our guide to AI coding assistants.

  2. CLI-first fits existing engineering. Claude Code lives next to git, npm, and docker. It reads your .env, respects your shell, and pipes into scripts. Nothing about it assumes a specific editor.

  3. MCP ecosystem. Model Context Protocol lets Claude Code talk to databases, Jira, Slack, AWS, and internal tools through a standard interface. You write the server once, every MCP-aware client uses it.

  4. The agent loop is inspectable. When something goes wrong in Cursor, you get a polished error. When something goes wrong in Claude Code, you see the exact tool call, arguments, and response. Debugging is a grep away.

None of this matters if you don't configure the tool. A fresh install won't know your codebase, your conventions, or your team's safety rules. The rest of the guide covers how to teach it those things.

Building Your First CLAUDE.md

CLAUDE.md is the memory layer. It's a plain markdown file that gets injected into every session's context. There are three scopes, loaded in this order of precedence:

  1. ~/.claude/CLAUDE.md, global, applies to every project on your machine
  2. ./CLAUDE.md, project root, committed to your repo, shared with the team
  3. ./CLAUDE.local.md, project root, gitignored, for personal overrides

Claude Code also walks up the directory tree from your working directory, loading every CLAUDE.md it passes. Subdirectory CLAUDE.md files load lazily when Claude reads files inside them. That matters in monorepos, where you want shared rules at the repo root and package-specific rules in each package.

Here is a minimal but useful project CLAUDE.md:

# Project: billing-service

## Stack
- Node.js 20, TypeScript strict mode
- PostgreSQL via Prisma
- Fastify for HTTP, BullMQ for background jobs

## Conventions
- No `any` types. If the inference is wrong, fix the type at the source.
- All API handlers validate input with Zod schemas in `src/schemas/`
- Prisma queries go through `src/repos/`, never inline
- Tests use Vitest. Run `pnpm test -- <file>` for single-file runs

## Do not
- Run `prisma migrate dev`. Use `pnpm db:migrate` (has our pre-hooks).
- Commit anything that touches `src/billing/legacy/` without flagging it
- Install packages without checking the monorepo root package.json first

## Useful paths
- API routes: `src/routes/`
- Domain logic: `src/domain/<entity>/`
- DB migrations: `prisma/migrations/`

Two rules matter more than the content itself.

Keep it under 200 lines. Past that, the rules compete with each other and Claude ignores half of them. One team documented cutting theirs from 47k words to 9k and watched compliance jump. If you're past 200 lines, split by subdirectory rather than adding more sections.

Tell it what not to do, not just what to do. Negative instructions carry more weight than positive ones because they describe failure modes Claude can't infer from reading code. "Don't use the legacy billing module" is load-bearing. "Use clean code" is noise.

For monorepos specifically, put shared standards at the repo root and package-specific rules inside each package's own CLAUDE.md. When Claude works inside packages/billing/, it gets the root file plus the billing-specific one. Sibling packages never load, which keeps context clean.
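In practice that looks like one root file plus one per package (the package names here are illustrative):

```
monorepo/
├── CLAUDE.md               # shared standards: style, commit rules, CI commands
└── packages/
    ├── billing/
    │   └── CLAUDE.md       # billing-specific rules, loaded when working here
    └── gateway/
        └── CLAUDE.md       # gateway-specific rules
```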

Skills: Packaging Reusable Workflows

A skill is a directory with a SKILL.md file that Claude invokes when the description matches the task, or that you trigger explicitly with /skill-name. Skills are the right abstraction when a workflow has multiple steps and needs to stay consistent across sessions.

Minimal skill structure:

.claude/skills/release-notes/
├── SKILL.md
└── references/
    └── template.md

SKILL.md:

---
name: release-notes
description: Use when generating release notes from git commits between two tags. Groups by conventional commit type and filters out dependency bumps.
---

# Release Notes Generator

## Steps

1. Run `git log <from-tag>..<to-tag> --oneline` to list commits
2. Parse conventional commit prefixes: feat, fix, perf, refactor, docs, chore
3. Skip commits matching: `chore(deps)`, `chore: bump`, `Merge pull request`
4. Group into: Features, Fixes, Performance, Refactors, Other
5. Use the template in `references/template.md` for formatting
6. Output to stdout unless user specifies a file

The frontmatter has two required fields. `name` caps at 64 characters and becomes the `/release-notes` invocation. `description` caps at 200 characters and is what Claude reads to decide when to invoke the skill. Write the description so the trigger conditions are explicit: "Use when X" works better than "Helps with X" because the former tells Claude the activation condition, while the latter just describes the topic.

Skills beat slash commands when:

  • The workflow has more than three steps
  • You need supporting files like templates, schemas, or examples
  • The workflow needs to compose with other skills

Slash commands (in .claude/commands/*.md) are better for one-shot prompts like /commit or /review-pr. Use skills when the work has structure that needs to persist. For a concrete example of how skills compose with agent memory, our write-up on persistent AI assistants covers the memory side.
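For contrast, here is what a one-shot slash command file can look like: a sketch of `.claude/commands/review-pr.md`, invoked as `/review-pr 1234`, where `$ARGUMENTS` is Claude Code's placeholder for whatever follows the command (the body itself is illustrative):

```markdown
---
description: Review the given PR for bugs and convention violations
---

Fetch the diff with `gh pr diff $ARGUMENTS`. Review it for bugs, missing
tests, and violations of the conventions in CLAUDE.md. Reply with a short
punch list, one file:line reference per item.
```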

One mistake worth avoiding: don't write one giant skill that covers a whole domain. A release-notes skill is good. A release-engineering skill that tries to cover notes, changelogs, version bumps, and deploys is too broad for Claude to invoke cleanly. Multiple small, focused skills compose better than one sprawling one.

Hooks: Automating Guardrails and Side Effects

Hooks are shell commands the harness runs on specific events. They're not Claude running a tool. They're your terminal running a script, triggered by the harness, with access to tool input or output as JSON on stdin.

The events fall into three groups by cadence:

  • Once per session: SessionStart, SessionEnd
  • Once per turn: UserPromptSubmit, Stop (plus SubagentStop when a subagent finishes)
  • Every tool call: PreToolUse, PostToolUse

Hooks live in .claude/settings.json at the project level, or ~/.claude/settings.json globally.

Example: auto-format TypeScript after every edit, and block any Bash call that touches production.

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs -I {} sh -c 'case {} in *.ts|*.tsx) pnpm prettier --write {} ;; esac'"
          }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.command' | grep -qi prod && { echo 'production commands blocked, use staging' >&2; exit 2; }; exit 0"
          }
        ]
      }
    ]
  }
}

Three patterns carry most production hooks.

The first is format-on-save. PostToolUse matching Edit|Write pipes the file path to your formatter. Prettier, Black, gofmt, whatever your stack uses. This is the single highest-ROI hook because it silently cleans up every edit without Claude having to think about it.

The second is safety gates. PreToolUse matching Bash greps the command string for dangerous patterns and exits with code 2 to block the call; stderr is fed back to Claude, while other non-zero exit codes surface an error without blocking. Common targets: rm -rf, anything touching production hostnames, direct database writes, git push --force on protected branches.
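The same gate is easier to test and extend as a standalone script. Here is a sketch in Python, assuming Claude Code's hook contract (tool input arrives as JSON on stdin; exit code 2 blocks the call and feeds stderr back to Claude); the deny-list patterns and the `.claude/hooks/bash_gate.py` path are illustrative:

```python
"""PreToolUse safety gate: block Bash commands matching dangerous patterns.

Wire it into .claude/settings.json under PreToolUse with matcher "Bash":
    { "type": "command", "command": "python3 .claude/hooks/bash_gate.py" }
"""
import json
import re
import sys

# Illustrative deny-list; tune these for your own infrastructure.
DANGEROUS = [
    r"rm\s+-rf\s+/",            # recursive deletes from an absolute path
    r"\bprod(uction)?\b",       # anything naming production
    r"git\s+push\s+.*--force",  # force-pushes
]

def blocked_reason(command):
    """Return the matching pattern if the command looks dangerous, else None."""
    for pattern in DANGEROUS:
        if re.search(pattern, command, re.IGNORECASE):
            return pattern
    return None

def run_gate(stream):
    """Read the hook payload from `stream` and return the exit code to use."""
    payload = json.load(stream)
    command = payload.get("tool_input", {}).get("command", "")
    reason = blocked_reason(command)
    if reason:
        # Exit code 2 blocks the tool call; stderr goes back to Claude.
        print(f"Blocked: command matches {reason!r}; use staging instead.",
              file=sys.stderr)
        return 2
    return 0

# The real hook script ends with: sys.exit(run_gate(sys.stdin))
```

Because the logic lives in plain functions, you can pipe sample payloads through `run_gate` in a unit test before the hook ever touches a live session.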

The third is context injection. SessionStart runs a command that pipes environment state (current branch, open PRs, running services, feature flag status) into the session so Claude starts every conversation with fresh situational awareness.
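A sketch of that SessionStart wiring in `.claude/settings.json`, assuming hook stdout on SessionStart is added to the session's context (the commands are examples to adapt):

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "echo \"branch: $(git branch --show-current)\"; gh pr list --author @me --limit 5 2>/dev/null"
          }
        ]
      }
    ]
  }
}
```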

The key mental model: the harness executes hooks, not Claude. That matters when a user says "from now on, always do X." Claude can't enforce that across sessions, but a hook can. If the automation must happen every time regardless of what Claude decides, it belongs in a hook. If it's context-dependent, a skill is the better fit.

Test hooks with sample JSON on stdin before committing them. A broken hook can wedge every session until you edit settings.json manually to disable it.

Subagents: Parallel Work and Context Isolation

A subagent is a fresh Claude instance the main session spawns through the Task tool. Each subagent has its own context window, its own tool access, and reports a summary back when done. They solve two different problems.

The first is parallel work. If you have five independent research queries or three unrelated refactors, dispatch them concurrently instead of running them sequentially. Claude Code supports up to roughly seven parallel subagents, though practical limits depend on your token budget.

The second is context isolation. If a task would dump hundreds of files into context (mass search, log analysis, API exploration), run it in a subagent so the main session stays clean. The subagent burns through its own context, returns a summary, and the main session gets the distilled answer without the raw noise.

Define custom subagents in .claude/agents/<name>.md:

---
name: migration-auditor
description: Reviews database migration files for concurrency issues, missing indexes, and backward-compatibility breaks. Use before merging any migration PR.
tools: Read, Grep, Glob, Bash
---

You review database migrations for safety. For each migration file, check:

1. Does it add a NOT NULL column without a default? That breaks concurrent writes.
2. Does it drop a column referenced elsewhere in the codebase? Grep for the column name.
3. Does it add an index on a large table without CONCURRENTLY? That holds a lock.
4. Does it rename a table or column? That breaks deployed code reading the old name.

Output a punch list of issues with file:line references. If clean, say so and stop.

Invoke it from the main session by asking Claude to use the migration-auditor agent, or via the Task tool directly.

Parallel dispatch works when the tasks share no mutable state, the file boundaries are clear (no two agents edit the same file), and you can specify each agent's scope in the prompt. Claude Code is conservative about parallelism by default, so be explicit. "Run 4 parallel agents, one per service directory" produces better results than "parallelize this."

One common mistake: writing a subagent prompt as if it had conversation context. It doesn't. Every subagent starts from zero. Spell out the file path, the line number, the expected change. "Based on what we discussed" means nothing to an agent that never saw the discussion. For deeper patterns on multi-agent coordination, our post on LangGraph state machines covers the orchestration side from a framework-author perspective.

Advanced: Connecting MCP Servers

Model Context Protocol is how Claude Code talks to external systems. Registered servers expose tools (functions Claude can call), resources (read-only data), and prompts (reusable templates that show up as /mcp__servername__promptname slash commands).

Add a server:

# Local scope, only this machine
claude mcp add --transport stdio postgres -- npx -y @modelcontextprotocol/server-postgres postgresql://localhost/dev

# Project scope, committed to .mcp.json, shared with team
claude mcp add --scope project --transport stdio linear -- npx -y @linear/mcp-server

Inspect and manage:

claude mcp list           # all registered servers
claude mcp get postgres   # details for one
/mcp                      # status inside a session

Three production patterns are worth copying.

Internal read-only API servers are a good first MCP integration. Expose Jira, Confluence, or your internal wiki as MCP resources. Claude queries them during planning without you pasting links into the chat.

Database servers with safety rails work well for schema exploration. Configure an MCP Postgres server pointed at a read-replica with statement_timeout and LIMIT guards baked in. Claude can explore schemas safely without risking production data.

Observability servers close the debugging loop. Point an MCP server at Datadog, Sentry, or CloudWatch so Claude pulls error rates and logs directly during incident work. Pair it with a hook that logs every MCP tool call for audit, and you get something close to a compliance-friendly agent. For the agent-automation story end to end, our guide on autonomous AI agents for CI/CD shows how these pieces fit into a pipeline.

Best Practices

Seven patterns separate well-configured Claude Code setups from the ones teams abandon after a week.

  1. CLAUDE.md first, everything else second. Don't reach for skills or hooks until the base memory file teaches Claude your stack, conventions, and forbidden commands. Most "Claude keeps doing the wrong thing" problems vanish after a good CLAUDE.md.

  2. One skill per workflow, not one skill per domain. Smaller skills compose better than a sprawling one. A focused skill with a clear trigger condition gets invoked reliably.

  3. Hooks for must-happen, skills for should-happen. If it has to run every time (format code, block destructive commands, audit log), use a hook. If it's a workflow Claude should consider when the context calls for it, use a skill.

  4. Version control the team config. Commit .claude/settings.json, .claude/skills/, .claude/agents/, and .mcp.json. Gitignore .claude/settings.local.json and CLAUDE.local.md. New hires get a working setup by cloning the repo.

  5. Keep subagent prompts self-contained. A subagent doesn't see your conversation. Spell out the file, the line, and the expected change every time.

  6. Budget your context. Every CLAUDE.md line, every loaded skill, and every MCP tool description eats context. If you're running into "conversation getting long" prompts, audit what's loaded. Remove unused skills. Trim stale sections.

  7. Test hooks before committing. Run the hook command manually with sample JSON on stdin before wiring it into settings.json. A broken hook can wedge every session.

Deployment Considerations

Rolling this out across a team introduces operational concerns that don't show up in single-user demos.

Scalability. There's no shared state between sessions by default. If two engineers want the same agent to behave consistently, that consistency lives in committed config: CLAUDE.md, .claude/, .mcp.json. Anything stored only in a local session disappears.

Cost. Claude Code bills per token, and long sessions with large CLAUDE.md files plus full tool descriptions get expensive. Two levers help. Trim what loads on every session. Set CLAUDE_CODE_SUBAGENT_MODEL to a cheaper model for subagents doing grunt work like log parsing or mass file reads.

Security. Hooks run with your shell permissions. MCP servers often hold credentials. Treat .claude/settings.json like any other config that may touch secrets: review PRs that modify it carefully, keep production credentials out of project-scoped MCP config, reference environment variables instead of inline values.

Monitoring. The harness writes session logs to ~/.claude/projects/<encoded-path>/. For team-wide visibility, hooks on PostToolUse can ship redacted tool calls to a central log. That matters more for agents that touch production systems than for local dev work, but the instrumentation is cheap to add upfront.
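That PostToolUse shipper can be a small script too. A sketch in Python, assuming the hook payload carries `tool_name` and `tool_input`; the redaction patterns and the local JSONL sink are placeholders for whatever your compliance setup needs:

```python
"""PostToolUse audit hook: append redacted tool calls to a JSONL log."""
import json
import re
import sys
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path.home() / ".claude" / "audit.jsonl"  # illustrative sink

# Crude patterns for obvious secrets; extend for your environment.
SECRET_PATTERNS = [
    r"(?i)bearer\s+[A-Za-z0-9._\-]+",    # HTTP bearer tokens
    r"AKIA[0-9A-Z]{16}",                 # AWS access key IDs
    r"(?i)(password|secret|token)=\S+",  # key=value style credentials
]

def redact(text):
    """Mask anything that matches a secret pattern."""
    for pattern in SECRET_PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text)
    return text

def audit_line(payload):
    """Build one JSONL record from a hook payload."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": payload.get("tool_name"),
        "input": redact(json.dumps(payload.get("tool_input", {}))),
    })

def main():
    payload = json.load(sys.stdin)  # the harness passes the tool call as JSON
    with LOG_PATH.open("a") as log:
        log.write(audit_line(payload) + "\n")

# The real hook script ends with: main()
```

Swap the file append for a post to your log collector and the same redaction path gives you the central, reviewable stream described above.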

Real-World Applications

Teams using this setup in production tend to converge on similar patterns.

Database migration review pairs a migration-auditor subagent with a PreToolUse hook that blocks direct prisma migrate calls, plus CLAUDE.md rules about migration naming conventions. That combination catches concurrency bugs before they ship.

PR authoring combines a /commit slash command and an /open-pr skill that reads the diff, drafts a body, checks the linked issue tracker via MCP, and opens the PR with gh pr create. Time spent per PR drops noticeably once the skill knows your PR template.

On-call triage uses an MCP server exposing Sentry, Datadog, and PagerDuty, with a triage-oncall skill that pulls the current incident, fetches related logs, and proposes a first hypothesis. It shortens the gap between getting paged and taking the first meaningful action.

Codebase onboarding uses a /codebase-tour skill that walks new hires through the repo structure using CLAUDE.md rules and a map of key files. Pair it with subagents for parallel exploration when the codebase is large.

Compliance-sensitive workflows rely on hooks that log every Bash call, every file write, and every MCP tool use to an auditable sink. It's not glamorous, but it's the cheapest way to make an agent reviewable by a compliance team.

Conclusion

Claude Code's defaults keep the barrier to entry low. The primitives that matter for serious work (CLAUDE.md, skills, hooks, subagents, MCP) are opt-in, file-based, and boring in the right way. They compose, they live in git, and they're debuggable with standard tools.

The honest caveat: this setup takes real time to get right. Expect to spend a week iterating on CLAUDE.md, another week building the first two or three skills, and a steady trickle of hook tweaks as you hit edge cases. The payoff is an agent that knows your codebase, enforces your team's rules, and dispatches work in parallel instead of one file at a time. For any team planning to use Claude Code past the demo phase, that investment pays back.

If you're new to agent design in general, our pieces on building agentic systems with LangChain and production-ready AI agents cover the theory side of orchestration, prompting, and failure modes.

Next Steps

  1. Write a project CLAUDE.md. Start with 30-50 lines covering stack, conventions, and forbidden commands.
  2. Add one hook to .claude/settings.json. Either format-on-save or a safety gate on Bash.
  3. Extract one repeated workflow from your current Claude sessions into a skill under .claude/skills/.
  4. Register one MCP server pointed at your issue tracker or observability stack.

Refactix Team

Practical guides on software architecture, AI engineering, and cloud infrastructure.
