Optimizing Claude Code: Skills, Plugins, and the Art of Teaching Your AI to Code Like You


I’ve been using Claude Code for months now, and for most of that time, I was doing it wrong. Not wrong in the sense of getting bad results—the defaults are remarkably capable. Wrong in the sense that I was treating a customizable system like a fixed tool. I was adjusting my workflow to fit the AI instead of adjusting the AI to fit my workflow.

The difference between default Claude Code and a properly configured instance is the difference between hiring a talented generalist and hiring someone who’s worked at your company for years. Both can write code. Only one knows that your team prefers for…of over .forEach(), that you never use the I prefix on interfaces, and that when you say “analyze this bug,” you mean a specific six-step process that includes hypothesis testing.

Here’s how I built that second version.

The Architecture: How the Pieces Fit Together

Claude Code’s customization system has multiple layers, each serving a different purpose:

~/.claude/
├── settings.json          # Global settings (thinking, tokens, plugins)
├── skills/                # Domain expertise (TypeScript patterns, AWS, etc.)
├── commands/              # Workflow shortcuts (/analyze-bug, /simplify)
├── hooks/                 # Active enforcement rules (warn/block patterns)
└── plugins/               # External tool integrations (browser, AST)

project/
├── CLAUDE.md              # Project-specific instructions
└── .claude/
    ├── agent_docs/        # Reference documentation for this project
    └── hookify.*.local.md # Project-specific enforcement rules

The mental model:

  • settings — control behavior
  • CLAUDE.md — provides project context
  • skills — encode domain expertise
  • hooks — enforce conventions
  • commands — trigger workflows
  • agent_docs — provide reference material
  • plugins — add capabilities

What’s less widely appreciated is how these layers interact. When Claude Code starts a session, it reads your settings, loads relevant skills based on context, and injects CLAUDE.md into its system prompt. When you invoke a command, it triggers a predefined workflow. When you mention a topic covered by a skill, Claude applies that expertise automatically.

Settings: The Foundation

My ~/.claude/settings.json is minimal but deliberate:

{
  "includeCoAuthoredBy": false,
  "enabledPlugins": {
    "frontend-design@claude-code-plugins": true,
    "dev-browser@dev-browser-marketplace": true,
    "ast-grep@ast-grep-marketplace": true
  },
  "env": {
    "CLAUDE_CODE_MAX_OUTPUT_TOKENS": "64000",
    "MAX_THINKING_TOKENS": "31999"
  },
  "alwaysThinkingEnabled": true
}

A few things worth noting:

alwaysThinkingEnabled: true — This enables extended thinking on every response. The tradeoff is latency for quality. For complex refactoring or architectural decisions, I want Claude to think deeply. For quick questions, it’s overkill. I keep it on because my typical use case is substantial engineering work.

Token limits — Increasing CLAUDE_CODE_MAX_OUTPUT_TOKENS to 64000 prevents truncation on large refactors. The MAX_THINKING_TOKENS setting controls how much “thinking” space Claude has before responding.

includeCoAuthoredBy: false — I don’t need AI authorship attribution in every commit message. Personal preference.

The full settings file is available in my dotfiles repo.

CLAUDE.md: Project-Level Instructions

Every project gets a CLAUDE.md at the root. This is where you encode project-specific knowledge: commands to build and test, directory structure, coding principles, workflow patterns.

Here’s the template I use:

# Project Guidelines

## Project Overview
- **Name**: [Project name]
- **Purpose**: [Brief description]
- **Stack**: TypeScript, [framework], [database]

## Commands
npm run build      # Build the project
npm run test       # Run tests
npm run lint       # Lint code

## Directory Structure
src/
├── handlers/     # Entry points (API routes, Lambda handlers)
├── lib/          # Shared business logic
└── scripts/      # Development/build scripts

## Workflow Patterns

**New Feature**
1. Plan → Implement → Test → Review

**Bug Fix**
1. Reproduce → Hypothesize → Fix → Add regression test
2. **Escalation**: After 2 failed fix attempts, stop and use **/analyze-bug**

## Core Principles

1. **Simplicity over cleverness** - Write code that's immediately understandable
2. **Leverage existing solutions** - Use standard libraries, don't reinvent
3. **Single responsibility** - Functions do one thing, under 20 lines
4. **Early returns** - Guard clauses over nested conditionals
5. **Match existing patterns** - Follow the file's conventions exactly

## Before You Start

Read the relevant reference docs in **.claude/agent_docs/**:

| File | When to Read |
|------|--------------|
| coding-patterns.md | Writing new TypeScript code |
| anti-patterns.md | Before code review or PR |
| error-handling.md | Implementing error handling |

The key insight: CLAUDE.md is a system prompt you control. Every instruction you put here shapes every response you get. You can define escalation patterns that tell Claude to stop thrashing and switch to a structured process after failed attempts—for example, my Bug Fix workflow triggers a 6-step root cause analysis after two failed fixes.

Full template: CLAUDE.md in dotfiles

Skills: Teaching Domain Expertise

Skills are markdown files that encode specialized knowledge. When Claude detects that a skill is relevant to your task, it applies that expertise automatically.

I maintain several skills:

TypeScript Patterns

This skill encodes my team’s TypeScript conventions—things that aren’t in style guides but matter for consistency:

---
name: typescript-patterns
description: TypeScript code patterns for types, interfaces, assertions, and type safety.
---

# TypeScript Patterns Skill

## Core Principles

### Prefer Type Inference Over Explicit Return Types
Let TypeScript infer function return types instead of explicitly specifying them.

// ✅ CORRECT: Inferred return type
function calculateTotal(items: OrderItem[]) {
  return items.reduce((sum, item) => sum + item.price, 0);
}

// ❌ WRONG: Explicit return type
function calculateTotal(items: OrderItem[]): number {
  return items.reduce((sum, item) => sum + item.price, 0);
}

**Why:** Type inference catches implicit type coercion bugs.

### Interface Naming
Never use **I** prefix or **Data** suffix.

// ✅ CORRECT
export interface Product { ... }

// ❌ WRONG
export interface IProduct { ... }
export interface ProductData { ... }

The full skill covers interfaces vs types, enum conventions, null handling, type assertions, and module imports. It’s 700+ lines because TypeScript has a lot of conventions worth encoding.
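
To give a flavor of the assertion section, here is a rough sketch; AppConfig and raw are illustrative names, and the wording in the actual skill differs:

### Type Assertions (sketch)

// ✅ CORRECT: angle-bracket assertion
const config = <AppConfig>JSON.parse(raw);

// ❌ WRONG: as-assertion (and never as any)
const config = JSON.parse(raw) as AppConfig;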

Serverless AWS

For Lambda, DynamoDB, and SQS patterns:

---
name: serverless-aws
description: Patterns for AWS Lambda, DynamoDB, SQS, and Secrets Manager.
---

## Lambda Handler Structure

export async function handle(event: AWSLambda.APIGatewayProxyEvent) {
    try {
        return await handleInternal(event);
    } catch (error) {
        await trace.error(error, { headers: event.headers, body: event.body });
        return createErrorResponse(error);
    }
}

## Cold Start Optimization

Initialize clients outside handler:

const dynamoClient = new DynamoDBClient({ region: 'us-east-1' });
let configCache: Config;

export async function handle(event: any) {
    if (!configCache) configCache = await loadConfig();
    return process(event, configCache);
}
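
The DynamoDB section follows the same shape. A minimal sketch, assuming AWS SDK v3 with illustrative table and key names:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand } from '@aws-sdk/lib-dynamodb';

// Clients initialized once, outside the handler (same cold-start rule as above)
const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export async function getOrder(orderId: string) {
    const result = await docClient.send(new GetCommand({
        TableName: 'orders',               // illustrative table name
        Key: { pk: `ORDER#${orderId}` },   // illustrative key shape
    }));
    return result.Item;
}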

Code Review

This skill catches AI-generated code patterns that don’t match human-written code:

---
name: code-review
description: Review code changes and remove AI-generated patterns.
---

## What to Look For

### Excessive Comments
AI tends to over-comment. Remove comments that:
- State the obvious (e.g., **// increment counter** above **counter++**)
- Repeat the function/variable name

### Gratuitous Defensive Checks
Remove defensive code that doesn't match the codebase style:
- Null checks on values already validated upstream
- Type checks on typed parameters
- Try/catch blocks in trusted codepaths

This is something I’ve noticed consistently: AI models add defensive code and comments that human developers wouldn’t. Having a skill that explicitly tells Claude to avoid these patterns makes the output feel more natural.
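
For illustration, here is the kind of before/after the skill pushes for; applyDiscount and Order are hypothetical names, not code from a real review:

// ❌ Before: obvious comment plus a defensive check on an already-validated input
function applyDiscount(order: Order) {
  // apply the discount
  if (!order) {
    throw new Error('order is required');
  }
  return order.total * 0.9;
}

// ✅ After: reads like the rest of the codebase
function applyDiscount(order: Order) {
  return order.total * 0.9;
}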

Skills directory: skills in dotfiles

Hooks: Active Enforcement

Skills teach Claude how to code. Hooks enforce that it does. The difference matters.

A skill might say “prefer for…of over .forEach()”—but Claude can still forget. A hook catches it in real-time, warning or blocking before the code is written. It’s the difference between training and guardrails.

I use the hookify plugin to create enforcement rules from simple markdown files. Here are my active hooks:

| Hook | Action | What It Catches |
|------|--------|-----------------|
| warn-any-type | warn | : any, <any>, any[] |
| block-as-any | block | as any casts |
| warn-as-syntax | warn | as Type (prefer <Type>) |
| warn-foreach | warn | .forEach() (prefer for…of) |
| warn-interface-prefix | warn | interface IFoo |
| warn-debug-code | warn | console.log, debugger |
| warn-default-import | warn | Default imports (prefer namespace) |
| block-hardcoded-secrets | block | Hardcoded API keys/passwords |

Creating Hooks

Hooks are markdown files with YAML frontmatter. Here’s an example that blocks as any casts:

---
name: block-as-any
enabled: true
event: file
pattern: as\s+any(?!\w)
action: block
---

🛑 **Unsafe as any cast detected!**

This bypasses type safety entirely. Instead:
- Use a proper type assertion: **<SpecificType>value**
- Create a type guard function
- Fix the underlying type issue

If you truly need to escape the type system, explain why.

The action field determines severity:

  • warn — Shows message but allows the operation
  • block — Prevents the operation entirely
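
For comparison with the block example above, a warn-level rule uses the same file format. This is a sketch of what a warn-foreach hook could look like; the regex and message wording are illustrative, not copied from my dotfiles:

---
name: warn-foreach
enabled: true
event: file
pattern: \.forEach\(   # illustrative pattern
action: warn
---

⚠️ **.forEach() loop detected!**

Prefer for…of:
- It works with await, break, and continue
- It matches the conventions in the TypeScript patterns skill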

The Skills + Hooks Combo

This is where customization compounds. My TypeScript patterns skill teaches Claude the conventions. My hooks enforce them. If Claude violates a convention—say, using as Type instead of <Type>—the hook catches it before the code is written.

The feedback loop is immediate: Claude sees the warning, adjusts its output, and continues. Since Claude Code is stateless between sessions, the hooks provide consistent enforcement every time. Skills inform, hooks enforce.

Hooks directory: hooks in dotfiles

Custom Commands: Workflow Shortcuts

Commands are like shell aliases for Claude workflows. Instead of typing a detailed prompt, you invoke /analyze-bug or /simplify and get a consistent, structured response.

| Command | Purpose |
|---------|---------|
| /analyze-bug | 6-step root cause analysis for stubborn bugs |
| /simplify | Reduce code complexity while preserving behavior |
| /plan-feature | Break complex features into implementable stages |
| /review-diff | Review code changes against project guidelines |
| /fix-types | Fix TypeScript type errors systematically |
| /take-notes | Capture decisions and context during development |
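
Each command is just a markdown file; its body is the prompt Claude receives when you invoke it. As a rough, illustrative sketch, a simplify command file could look something like this (my actual command is more detailed):

# Simplify

Reduce the complexity of the code I point you at, without changing behavior:

1. Identify duplicated logic, dead branches, and unnecessary abstraction
2. Propose the simplification before applying it
3. Apply it in small, reviewable edits
4. Run the existing tests and confirm they still pass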

Commands directory: commands in dotfiles

Agent Docs: Reference Material on Demand

Agent docs are markdown files in .claude/agent_docs/ that Claude reads when relevant. Unlike skills (which encode how to do things), agent docs provide reference material (what things are).

CLAUDE.md tells Claude when to read each doc. More efficient than stuffing everything into context—Claude loads docs on demand.

Agent docs: agent_docs in dotfiles

Plugins: Extending Capabilities

Plugins add new tools and workflows to Claude’s toolkit. I use several, organized by purpose:

Core Tools

ast-grep — Structural code search using AST patterns. Better than regex for finding code patterns that span multiple lines or have variable formatting. When I need to find all functions that return a Promise but don’t handle errors, ast-grep finds them regardless of formatting. The plugin requires the ast-grep CLI to be installed separately:

# Install the CLI tool
brew install ast-grep

# Add the marketplace and install the Claude Code plugin
/plugin marketplace add ast-grep/claude-skill
/plugin install ast-grep
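
For a sense of what structural search looks like outside Claude, here is a CLI invocation for finding every .forEach() call; $LIST and $CALLBACK are ast-grep metavariables, and the exact flags should be treated as a sketch rather than a recipe:

# Find every .forEach() call in TypeScript sources, regardless of formatting
ast-grep --pattern '$LIST.forEach($CALLBACK)' --lang ts src/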

dev-browser — Browser automation for testing web applications. When I say “go to localhost:3000 and click the login button,” Claude can actually do that.

/plugin marketplace add sawyerhood/dev-browser
/plugin install dev-browser@sawyerhood/dev-browser

frontend-design — UI/UX design assistance for frontend work. Part of the official Claude Code plugins.

/plugin marketplace add anthropics/claude-code
/plugin install frontend-design@claude-code-plugins

Workflow Automation

hookify — Creates enforcement rules from markdown files (covered in the Hooks section above). The key plugin for active convention enforcement.

commit-commands — Three git workflow commands:

  • /commit — Auto-generates commit messages matching your repo’s style
  • /commit-push-pr — Full branch → commit → push → PR in one command
  • /clean_gone — Removes stale local branches that have been deleted from remote

feature-dev — A 7-phase structured workflow for complex features:

  1. Discovery (clarify requirements)
  2. Codebase Exploration (launches code-explorer agents)
  3. Clarifying Questions (fills gaps)
  4. Architecture Design (launches code-architect agents with options)
  5. Implementation (with approval gates)
  6. Quality Review (launches code-reviewer agents)
  7. Summary

For complex features that touch multiple files, /feature-dev ensures nothing is missed.

Code Quality

pr-review-toolkit — Six specialized review agents that run in parallel:

| Agent | Focus |
|-------|-------|
| comment-analyzer | Comment accuracy, documentation completeness |
| pr-test-analyzer | Test coverage gaps, behavioral coverage |
| silent-failure-hunter | Empty catch blocks, swallowed errors |
| type-design-analyzer | Type encapsulation, invariant enforcement |
| code-reviewer | CLAUDE.md compliance, bug detection |
| code-simplifier | Unnecessary complexity, redundant code |

When I say “review my PR,” these agents analyze different dimensions simultaneously and return prioritized findings.

Plugin Installation

Plugins are installed via the Claude Code plugin system. Official plugins require adding the Anthropic marketplace first:

# Add official marketplace
/plugin marketplace add anthropics/claude-code

# Install official plugins
/plugin install hookify@claude-code-plugins
/plugin install pr-review-toolkit@claude-code-plugins
/plugin install commit-commands@claude-code-plugins
/plugin install feature-dev@claude-code-plugins
/plugin install frontend-design@claude-code-plugins

# Third-party plugins (add their marketplace first)
/plugin marketplace add ast-grep/claude-skill
/plugin install ast-grep

/plugin marketplace add sawyerhood/dev-browser
/plugin install dev-browser

The configuration in settings.json enables them:

"enabledPlugins": {
  "frontend-design@claude-code-plugins": true,
  "dev-browser@dev-browser-marketplace": true,
  "ast-grep@ast-grep-marketplace": true,
  "hookify@claude-code-plugins": true,
  "pr-review-toolkit@claude-code-plugins": true,
  "commit-commands@claude-code-plugins": true,
  "feature-dev@claude-code-plugins": true
}

Setting This Up: A Practical Guide

Quick Start

Run this to install my skills, commands, and hooks:

curl -sL https://raw.githubusercontent.com/stevenmays/dotfiles/master/ai/claude/install.sh | bash

The script sets up ~/.claude/ and prints the plugin commands to run inside Claude Code.

Manual Setup

If you prefer to set things up yourself:

1. Create the Directory Structure

mkdir -p ~/.claude/skills
mkdir -p ~/.claude/commands
mkdir -p ~/.claude/hooks

2. Add Settings

cat > ~/.claude/settings.json << 'EOF'
{
  "alwaysThinkingEnabled": true,
  "env": {
    "CLAUDE_CODE_MAX_OUTPUT_TOKENS": "64000"
  }
}
EOF

3. Add Skills

Create ~/.claude/skills/typescript-patterns/SKILL.md with your TypeScript conventions. The file must be named SKILL.md and must include frontmatter with name and description.
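
A minimal skeleton, reusing the frontmatter from the typescript-patterns example earlier in this post:

---
name: typescript-patterns
description: TypeScript code patterns for types, interfaces, assertions, and type safety.
---

# TypeScript Patterns Skill

## Core Principles

### [Convention name]
[Explain the rule, then show a ✅ CORRECT and a ❌ WRONG example]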

4. Add Commands

Create ~/.claude/commands/analyze-bug.md with your debugging workflow. Commands are invoked with /analyze-bug (the filename becomes the command name).

5. Install Plugins

# Install ast-grep CLI (required for the plugin)
brew install ast-grep

# Add marketplaces
/plugin marketplace add anthropics/claude-code
/plugin marketplace add ast-grep/claude-skill
/plugin marketplace add sawyerhood/dev-browser

# Install plugins
/plugin install hookify@claude-code-plugins
/plugin install pr-review-toolkit@claude-code-plugins
/plugin install commit-commands@claude-code-plugins
/plugin install feature-dev@claude-code-plugins
/plugin install frontend-design@claude-code-plugins
/plugin install ast-grep
/plugin install dev-browser

6. Create Hooks

After installing hookify, create enforcement rules:

# Interactive: describe the behavior you want to enforce
/hookify Warn me when I use .forEach() instead of for...of

# Or create manually in ~/.claude/hooks/

Hooks take effect immediately—no restart required.

7. Add CLAUDE.md to Projects

Create CLAUDE.md at the root of each project with project-specific instructions.

My complete configuration: github.com/stevenmays/dotfiles/tree/master/ai/claude

The Compounding Effect

Here’s what I didn’t expect: these customizations compound.

A skill that teaches TypeScript conventions means Claude knows my preferences. A hook that enforces those conventions means Claude can’t forget them. A command that structures bug investigation means debugging follows a consistent process. A plugin that runs six review agents in parallel means PR reviews are thorough without being tedious.

Each layer reinforces the others:

| Layer | Function | Example |
|-------|----------|---------|
| Skills | Teach conventions | “Prefer for…of over .forEach()” |
| Hooks | Enforce conventions | Warn when .forEach() is used |
| Commands | Trigger workflows | /analyze-bug runs 6-step investigation |
| Plugins | Add capabilities | pr-review-toolkit runs 6 agents in parallel |

The time investment—maybe a few hours total—pays dividends on every subsequent session. I’m not constantly re-explaining preferences or correcting patterns. Claude already knows. And when it forgets, the hooks catch it.

It’s not that the defaults are bad and customization is good, exactly; it’s more that default Claude Code is a capable generalist, while optimized Claude Code is a specialist who happens to share your opinions about code style, your workflow preferences, and your debugging methodology.

The choice is simple: accept defaults, work around quirks, and occasionally complain about AI-generated code that doesn’t match your style—or spend a few hours getting dialed in.


Resources: