
M13: Team Adoption — Standards, Safety, and Scaling — Workshop Guide

Self-directed | 45–60 min | Requires: M13 study guide read beforehand


Prerequisites

  • M13 study guide read (theory + readings)
  • Access to your team’s tooling (Git, cost monitoring, permission management)
  • Leadership support (or at least willingness to discuss policies)
  • Understanding of your team’s existing code review processes
  • A text editor or Google Docs for drafting documents

What this workshop does

The theory covers permission modes, cost management, and team dynamics. This workshop makes them actionable. You will draft actual team policies, set up cost monitoring, and create onboarding checklists. By the end, you’ll have a working CLAUDE.md and an adoption playbook ready to commit.


  • Exercise 1: Establish team conventions
  • Exercise 2: Cost monitoring setup
  • Exercise 3: Draft adoption playbook
  • Hands-on: Write CLAUDE.md and onboarding checklist

Exercise 1: Establish team conventions

Goal: Document what your team will do with Claude Code.

Create a document (in your repo as CLAUDE.md or similar):

# Claude Code Team Guide
## When to Use Claude Code
- Writing boilerplate or repetitive code
- Exploring unfamiliar libraries
- Drafting initial implementations
- Refactoring large sections
- Debugging (with human verification)
## When NOT to Use Claude Code
- Security-sensitive code (auth, cryptography)
- Production database operations
- Anything with secrets
- Initial architecture design (human judgment needed)
## Code Review
- Code generated by Claude is still code; it requires review
- Reviewers check for correctness, style, and appropriateness
- Hold Claude-generated code to the same review bar as human-written code
## Permissions
- Default: Normal mode (prompt before execution)
- Auto-Accept: Only after developer demonstrates competence
- Plan mode: read-only analysis and planning before any edits; headless mode (`claude -p`) for CI/CD and automated analysis
## Model Selection
- Haiku: debugging, boilerplate, simple refactoring
- Sonnet: architecture, complex problems, final review
- Opus: critical decisions, sensitive code
## Costs
- Budget: [your amount] per developer per month
- Monitoring: Check usage dashboard monthly
- Escalation: Let tech lead know if you exceed budget
## Knowledge Sharing
- Document your best prompts in [shared location]
- Share interesting Claude Code solutions in [communication channel]
- If Claude solves a recurring problem, consider adding it to CI/CD
## Escalation and Red Flags
- If Claude suggests something risky, ask a human first
- If code doesn't pass review, iterate—don't just accept
- If you're confused about what Claude did, investigate before merging

Your turn: Adapt this template for your team. What rules matter most?


Exercise 2: Cost monitoring setup

Goal: Know what Claude Code costs your team.

  1. Identify your cost tracking tool:

    • Anthropic dashboard (if direct API usage)
    • IDE plugin settings (if using Claude Code in IDE)
    • Cloud cost monitoring (AWS, GCP, Azure)
  2. Set budgets:

    • Per-developer: e.g., $100/month
    • Per-team: e.g., $2000/month
    • Per-project: e.g., critical project gets more budget
  3. Establish review cadence:

    • Weekly: spot-check usage (any spikes?)
    • Monthly: full review (total cost, model breakdown)
    • Quarterly: impact assessment (is this ROI positive?)
  4. Create alerts:

    • If a developer exceeds monthly budget by 20%, notify them
    • If team exceeds budget, pause new usage and investigate
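The alert rules in step 4 can be sketched as a short script. This is a minimal sketch, assuming your cost tool can export a CSV with `developer` and `cost` columns; the column names, sample data, and thresholds are illustrative, not a real export format:

```python
import csv
import io

MONTHLY_BUDGET = 100.0  # per-developer budget in dollars (illustrative)
ALERT_FACTOR = 1.20     # flag anyone more than 20% over budget

def over_budget(usage_csv: str, budget: float = MONTHLY_BUDGET) -> list[str]:
    """Return developers whose monthly spend exceeds budget * ALERT_FACTOR."""
    flagged = []
    for row in csv.DictReader(io.StringIO(usage_csv)):
        if float(row["cost"]) > budget * ALERT_FACTOR:
            flagged.append(row["developer"])
    return flagged

# Carol is 32% over a $100 budget, so only she is flagged.
sample = "developer,cost\nAlice,18\nBob,8\nCarol,132\n"
print(over_budget(sample))
```

The team-level rule (pause and investigate) is the same check applied to the summed column; the point is that once the export exists, the alert is mechanical.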

Template for monitoring:

Claude Code Spend (March 2026)

| Developer  | Messages | Tokens | Cost | Avg Cost/Message |
| ---------- | -------- | ------ | ---- | ---------------- |
| Alice      | 150      | 45K    | $18  | $0.12            |
| Bob        | 75       | 20K    | $8   | $0.11            |
| Carol      | 200      | 80K    | $32  | $0.16            |
| Team Total | 425      | 145K   | $58  | $0.14            |
Trends:
- Alice is a heavy user (good? or too dependent?)
- Bob is efficient (good prompts? or simple tasks?)
- Team stayed under $100 budget. Good.
Recommendations:
- Share Alice's best prompts with team
- Encourage Bob to mentor others
- Budget allows for more usage next month
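The arithmetic in the template above (totals and average cost per message) can be generated from the export rather than typed by hand. A minimal sketch using the sample numbers from the table; the tuple layout is an assumption about what your export provides:

```python
# Per-developer rows: (name, messages, tokens, cost in dollars).
rows = [
    ("Alice", 150, 45_000, 18.0),
    ("Bob", 75, 20_000, 8.0),
    ("Carol", 200, 80_000, 32.0),
]

def team_totals(rows):
    """Sum messages, tokens, and cost; compute average cost per message."""
    messages = sum(r[1] for r in rows)
    tokens = sum(r[2] for r in rows)
    cost = sum(r[3] for r in rows)
    return messages, tokens, cost, round(cost / messages, 2)

# 425 messages, 145K tokens, $58 total, $0.14 per message on average.
print(team_totals(rows))
```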

Exercise 3: Draft adoption playbook

Goal: Create onboarding and governance docs.

Outline:

# Team Adoption Playbook
## For New Team Members
1. Read CLAUDE.md (5 min)
2. Try Claude Code on a simple feature branch (15 min)
3. Get code review as usual
4. Pair with experienced Claude Code user (30 min)
5. Start using independently on non-critical work
## For Team Leads
- Ensure new hires know CLAUDE.md
- Spot-check usage quarterly (cost and patterns)
- Collect feedback: what's working, what's not?
- Update policies based on learnings
## Governance
- Monthly reviews of usage and cost
- Escalation path if someone misuses Claude Code
- Annual re-evaluation: still working for us?
## Feedback Loops
- Where to post good prompts you discover
- Where to report bugs or problems
- Where to suggest improvements to CLAUDE.md
## Red Flags (when to escalate)
- Cost spike above budget
- Someone using Claude Code for untrusted code
- Repeated code review failures on Claude-generated code
- Developers bypassing code review because "Claude wrote it"

Hands-on Exercise: Write Your Team’s CLAUDE.md and Onboarding Checklist


You’ll create an actual document your team will use.

  1. Copy the template from Exercise 1 or start fresh

  2. Answer these questions for your team:

    • When should developers use Claude Code?
    • When should they NOT?
    • What permission mode is default?
    • What’s your cost budget?
    • Who decides on model selection?
    • How do you handle security-sensitive code?
    • What’s your escalation path if something goes wrong?
  3. Write the document in Markdown (so you can commit it to your repo)

  4. Review it yourself or with a teammate. Revise.

  5. Commit to your repo (e.g., CLAUDE.md in root)

Create a checklist for new team members:

# Onboarding to Claude Code
- [ ] Read CLAUDE.md (takes 10 min)
- [ ] Watch [demo video] or pair with [mentor]
- [ ] Run Claude Code on a simple task to check it works
- [ ] Create a feature branch and solve a small task with Claude Code
- [ ] Request code review (even though it's AI-generated)
- [ ] Attend monthly Claude Code sync (first Friday of month)
- [ ] Add yourself to [team Slack channel]
- [ ] You're ready to go; start using on real work!

Part 3: Governance and Cost Monitoring (10 min)


Document your review process:

# Monthly Review Process
1. Pull usage data from [tool]
2. Check if any developer exceeded budget
3. Spot-check top 3 users: were they using Claude Code well?
4. Note any cost spikes or unusual patterns
5. Update team with summary (in Slack or email)
6. Revise CLAUDE.md if needed based on learnings

Work through these on your own after completing the exercises:

  1. When you wrote CLAUDE.md, what was the hardest decision? — Consider: permission modes, budget setting, defining “security-sensitive code.”

  2. What would you put in the “What NOT to use Claude Code for” section specific to your domain? — Consider domain-specific risks: compliance requirements, data sensitivity, security constraints.

  3. If someone uses Claude Code for untrusted code (against policy), how do you address it? — Is it a training moment or a red flag? Does the answer change based on seniority or context?

  4. You set a monthly budget. What happens if someone exceeds it by 50%? — Define a clear process: automatic pause, or conversation first?

  5. Who owns monitoring costs and making the monthly decision? — “Everyone’s responsible” usually means no one is. Name a role.


Troubleshooting

“We don’t have a cost monitoring tool yet” — No problem. Start simple: export API logs to CSV, use a spreadsheet. Formality can come later. The habit of tracking matters more than the tool.

“Our org has strict security policies; Claude Code might not fit” — That’s a real constraint. Solution: define a narrow set of use cases (non-sensitive refactoring, documentation generation) and keep sensitive work off-limits. Document the boundaries clearly.

“New devs think ‘Claude wrote it’ means ‘code review isn’t needed’” — This is a culture issue, not a tool issue. Solution: emphasize in onboarding: “Code generated by Claude is still code. Review still happens.” Maybe even highlight reviewed AI code in team comms to normalize it.

“Someone refuses to follow the permission mode we set” — They have autonomy concerns. Solution: discuss openly. Maybe they want Auto-Accept (if they’re experienced), or maybe they’re worried about being slowed down. Adjust policy if the feedback is widespread.

“Budget is set, but it’s arbitrary—no one knows if it’s right” — True. Solution: treat budgets as hypotheses. Try $100/dev for a month. Measure actual usage. Adjust next month based on data. After 3 months, you’ll know if it’s realistic.

“We don’t have leadership buy-in yet” — Acknowledge this is a blocker. Create a pilot: 2-3 devs use Claude Code for a month, measure outcomes (velocity, code quality, cost), present results to leadership. Data helps more than arguments.


You should have produced:

  • A team CLAUDE.md document (can be rough; will be refined)
  • Cost monitoring approach (tool identified, budgets set)
  • Onboarding checklist for new team members
  • Governance process (monthly review, escalation path)
  • A shared location where these documents will live
  • A person assigned to maintain/update these docs quarterly