You’ve built basic prompts. Now scale to reusable, team-wide workflows. Claude Code supports a composition stack: CLAUDE.md (persistent context) → Skills (reusable instructions) → Subagents (isolated workers with independent state) → Hooks (deterministic lifecycle scripts) → MCP (external tools) → Plugins (bundled packages) → Agent Teams (coordinated parallelism).
This module teaches you to design skills that your whole team uses, create specialized subagents for security review or code quality checks, and wire hooks into your development lifecycle. By the end, you’ll have at least one team skill live in production and one hook auto-running on every pull request.
Duration: 90 minutes (15-20 min pre-work + 60-75 min workshop + exercises)
Hands-on: Custom skill + subagent + hook in a real repository
Takeaway: At least one team skill and one hook deployed to your project
Agent Teams add coordinated parallel sessions (experimental, disabled by default) for team-level orchestration: large refactors, multi-component features, and parallel reviews.
Agent Teams — Experimental: Agent Teams are disabled by default. To enable them, set the following environment variable before starting your session:
```sh
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
```
Known limitations: no session resumption with in-process teammates, task status may lag, slow shutdown with 5+ agents, one team per session, no nested team-in-team architectures. Report issues at github.com/anthropics/claude-code/issues.
The description field is the most important line in your skill — Claude uses it to decide whether to auto-invoke the skill. Write it like a natural-language request:
```yaml
# Bad — too abstract
description: Audit software artefacts for quality compliance

# Good — matches natural requests
description: >
  Review code for bugs, security issues, and style violations.
  Use when user asks to "review", "check", or "audit" code.
```
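Putting the pieces together, a minimal skill file might look like this. The `code-review` name, the `.claude/skills/<name>/SKILL.md` path, and the body instructions are an illustrative sketch following common Claude Code conventions; verify the exact frontmatter fields against your version's docs:

```md
---
name: code-review
description: >
  Review code for bugs, security issues, and style violations.
  Use when user asks to "review", "check", or "audit" code.
---

When reviewing code:
1. Read the changed files before commenting.
2. Flag bugs and security issues first, style last.
3. Cite file and line for every finding.
```

Everything below the frontmatter is the instruction body Claude follows when the skill is invoked.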
- Is this a reusable instruction set you run repeatedly? → Skill (Pattern A or B)
- Does it need isolated reasoning or specialized domain knowledge without influencing your main context? → Subagent
- Do multiple agents need to work in parallel, with coordinated output? → Agent Team (experimental)
Quick cost note: subagents each open an independent context window; agent teams multiply that cost by the number of agents (roughly 4–15× token usage vs. a single session). Use parallelism when the tasks are genuinely independent and the speed or quality gain justifies it.
"hooks": [{ "type": "command", "command": "osascript -e 'display notification \"Claude is done\" with title \"Claude Code\"'" }]
}]
}
}
The matcher field lets a hook target specific tool calls by matching tool names (for example, Edit|Write). Use shell scripts (.claude/hooks/) for complex logic; keep simple one-liners inline in settings.json.
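As an illustration, here is a settings.json sketch that runs a formatter after every file edit. The `Edit|Write` matcher and the hypothetical `.claude/hooks/format.sh` script are assumptions for this example; adapt both to your toolchain:

```json
{
  "hooks": {
    "PostToolUse": [{
      "matcher": "Edit|Write",
      "hooks": [{
        "type": "command",
        "command": "sh .claude/hooks/format.sh"
      }]
    }]
  }
}
```

Keeping the command a one-line call into a script keeps settings.json readable while the real logic lives under version control in .claude/hooks/.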
Hooks live in .claude/hooks/ as shell scripts. They receive a JSON payload on stdin (not environment variables) and signal outcomes via exit codes: 0 allows the action to proceed, exit 2 blocks it, and other non-zero codes surface as non-blocking errors.
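A minimal sketch of such a script, assuming a PreToolUse payload that carries a `tool_input.file_path` field (inspect a real payload from your Claude Code version before relying on it). For demonstration the JSON is passed as an argument rather than read from stdin, so the behavior is visible:

```shell
#!/bin/sh
# Sketch of a PreToolUse hook that blocks edits to .env files.
# In a real hook the payload arrives on stdin: payload=$(cat)
# The tool_input.file_path field name is an assumption — check it.

check_payload() {
  # Pull the target file path out of the JSON payload.
  file=$(printf '%s' "$1" |
    sed -n 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
  case "$file" in
    *.env)
      echo "Blocked: refusing to touch $file" >&2
      return 2   # exit code 2 = blocking error
      ;;
  esac
  return 0       # 0 = allow the tool call
}

check_payload '{"tool_input":{"file_path":"app/.env"}}' \
  && echo allowed || echo "blocked ($?)"   # → blocked (2)
check_payload '{"tool_input":{"file_path":"src/main.py"}}' \
  && echo allowed || echo blocked          # → allowed
```

In production you would replace the demo calls with `payload=$(cat)` at the top and `exit` instead of `return`, and use a proper JSON parser such as jq instead of sed.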
Skill:
A reusable instruction template triggered by /skill-name. Can be pure markdown or include scripts/subagents.
Subagent:
An independent session with its own configuration, tools, and context window. Useful for specialized work (security review, code quality, architecture) without biasing the main agent.
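As a sketch, a security-review subagent could be defined as a markdown file with frontmatter. The `.claude/agents/` location and the `tools` field follow common Claude Code conventions; the name, tool list, and prompt here are illustrative, not canonical:

```md
---
name: security-reviewer
description: >
  Reviews code for security vulnerabilities. Use proactively
  after changes to auth, input handling, or dependencies.
tools: Read, Grep, Glob
---

You are a security reviewer. Examine the code for injection,
auth bypass, secrets committed to source, and unsafe
deserialization. Report each finding with severity and a
suggested fix.
```

Restricting the subagent to read-only tools keeps its review isolated: it can inspect the codebase but cannot modify files or bias the main agent's working context.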
Hook:
A deterministic script running at lifecycle points (PreToolUse, PostToolUse, Notification, Stop). Enables automation: auto-linting, validation, notifications.
Composition Stack:
CLAUDE.md → Skills → Subagents → Hooks → MCP → Plugins → Agent Teams. Each layer has a specific purpose; use the right layer for the job. Agent Teams are experimental and require CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 to enable.