Tuesday, 14 April 2026

Applying Claude - Subagents



I Used Parallel Subagents in Claude Code to Audit My Android App — Here's How It Works

Skills encode what Claude knows. Hooks enforce what Claude can do. But there's a third primitive that completes the picture: subagents — specialized agents you can launch in parallel to independently inspect your codebase and report back.


I set up two of them for my AI News App and ran them simultaneously with a single prompt: one to audit Clean Architecture boundaries, one to audit Compose UI violations. Here's how the whole thing is structured and why it works.


What Are Claude Code Subagents?

In Claude Code, you can define project-scoped agents under .claude/agents/. Each agent has a specific role, a focused system prompt, and a restricted set of tools. Mine are read-only — they're auditors, not auto-fixers.


My project has two:


  • arch-guard-agent — audits whether presentation layer files import from the data layer, a Clean Architecture violation

  • verify-ui-agent — audits Compose composables against UI guidelines: hardcoded colors, missing keys in lazy lists, improper state handling


Because they're independent, both can be launched in a single message. Claude spins them up concurrently — each reads the files relevant to its domain, applies its checklist, and reports findings independently. Results come back together.


How to Set Up Project Subagents


Each agent is a single markdown file at .claude/agents/<agent-name>.md:


---

name: arch-guard-agent

description: Checks architectural violations — presentation importing data, or domain importing Android SDK

allowed_tools: Read, Grep, Glob

---


You are a Clean Architecture auditor for an Android project.

Check for layer boundary violations and report each one with

the file path, line number, and what the correct fix should be.


Three things matter in this definition:


description — This is what Claude reads to decide which agent to use. The more precise it is, the more reliably Claude selects the right agent for a given task. Write it like a job title: specific, scoped, unambiguous.


allowed_tools — Restricts what the agent can do. Read, Grep, Glob means it can read and search but cannot edit anything. For an audit agent, that restriction is the point.


The system prompt — This is where you encode your team's standards. What counts as a violation? What does a valid report look like? How should findings be structured? The more concrete and specific, the better the output.
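Putting the three pieces together, here's a sketch of scaffolding the second agent from the command line. The file contents are an assumption modeled on the arch-guard-agent definition above, not the exact file from my repo:

```shell
# Scaffold a second read-only audit agent (contents are illustrative,
# patterned after the arch-guard-agent definition)
mkdir -p .claude/agents
cat > .claude/agents/verify-ui-agent.md <<'EOF'
---
name: verify-ui-agent
description: Audits Compose composables for hardcoded colors, missing keys in lazy lists, and improper state handling
allowed_tools: Read, Grep, Glob
---

You are a Compose UI auditor for an Android project.
Check each composable against the UI guidelines and report every
violation with the file path, line number, and suggested fix.
EOF
```

Once both files exist, Claude can match either agent from its `description` without any further registration step.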


Why Parallel Subagents


The sequential alternative — asking Claude to "check architecture then check UI" — uses more context, takes longer, and risks the second analysis being influenced by the first. Parallel subagents are isolated: each starts fresh, reads the files it cares about, and reports without the other's context bleeding in.


For audits, that isolation is what you want. You get two independent reads on the same codebase, in the time it would take to do one.


The output from each agent is structured and self-contained — a clean list of findings with file paths, line numbers, and what the problem is. Actionable, not vague.


The Full Stack: Skills → Hooks → Subagents


After building out all three primitives in this project, here's the mental model I've settled on:


| Primitive | When it runs | What it does |
| --- | --- | --- |
| Skill | On demand, when invoked | Encodes team knowledge — what good looks like |
| Hook | Automatically, on every tool call | Enforces invariants — prevents bad code from being written |
| Subagent | On demand, in parallel | Audits existing code — independent, scoped analysis |


Skills are documentation made executable. Hooks are guardrails that fire at edit time. Subagents are specialized reviewers you dispatch on demand.


Together they give you layered quality enforcement: hooks catch violations in real time as Claude edits files, subagents audit anything that predates or bypasses the hooks, and skills give Claude the domain knowledge to fix what it finds. Each layer does something the others can't.


Setting Up the Audit Run


Once the agents are defined, invoking them in parallel is just a matter of asking Claude to run both in a single prompt:


can you verify the architecture changes and compose ui using subagents


Claude identifies the two relevant agents from their descriptions, launches them concurrently, and returns both reports when they finish. No manual coordination. No sequential waiting.


The reports are structured consistently — each agent follows the format defined in its system prompt — so findings are easy to scan, prioritize, and act on.


Why This Matters


Manual audits are valuable but expensive. Running through every file, checking every import, verifying every composable against a checklist — it's the kind of mechanical work that's easy to skip under deadline pressure.


Subagents make comprehensive audits free to run. Define the checklist once in the agent's system prompt. Invoke whenever you want a full pass. The cost drops from "half a day" to "two minutes and one prompt."


For teams enforcing Clean Architecture, specific Compose patterns, or any other non-negotiable standard, this is how you make those standards continuously verifiable — not just documented.


Try It Yourself


  1. Create .claude/agents/ in your project

  2. Write one agent per audit dimension — keep each focused and specific

  3. Use allowed_tools: Read, Grep, Glob for read-only auditors

  4. Write a precise description field — that's how Claude picks the right agent

  5. Invoke both in a single prompt and let them run in parallel





Monday, 13 April 2026

Applying Claude - Hooks


I Added Architecture Guards and Auto-Lint to Claude Code Using Hooks — Here's How
Skills teach Claude what to do. Hooks control what it's allowed to do.

After building a custom Android Code Review skill for Claude Code, I wanted to go further: enforce Clean Architecture boundaries in real time and automatically lint every Kotlin file Claude touches. I implemented this with two shell-script hooks — arch_guard.sh and kotlin_lint.sh — and they've already caught violations that would have slipped through.


What Are Claude Code Hooks?

Hooks are shell scripts that Claude Code executes automatically at defined points in its tool lifecycle. They receive tool call context as JSON on stdin and can block, observe, or react to what Claude is about to do — before or after it acts.


There are four lifecycle events:


| Event | When it fires |
| --- | --- |
| PreToolUse | Before Claude calls a tool — can block with exit code 2 |
| PostToolUse | After Claude calls a tool — observe, lint, notify |
| Stop | When Claude finishes a response |
| Notification | When a background agent sends a notification |


The critical difference: PreToolUse with exit 2 is a hard block. Claude sees the stderr output and cannot proceed with the edit. PostToolUse runs after the fact — ideal for side effects like linting.
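The contract can be sketched as a tiny shell simulation. This is an illustration of the exit-code convention, not Claude Code's actual internals: exit 0 lets the tool call proceed, a blocking code stops it.

```shell
# Illustrative sketch of the PreToolUse exit-code contract
run_hook() { return "$1"; }

result_for() {
  # Treat exit 0 as "allow", any non-zero blocking code as "block"
  if run_hook "$1"; then echo proceeds; else echo blocked; fi
}

echo "exit 0: edit $(result_for 0)"
echo "exit 2: edit $(result_for 2)"
```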


Hook 1: arch_guard.sh — Stop Architecture Violations Before They're Written


In Clean Architecture, the rule is strict: the presentation layer must never import from the data layer. ViewModels and Composables should only talk to domain interfaces — use cases and repository contracts — never directly to RetrofitService, RoomDao, or any data.* class.


Without enforcement, this boundary erodes. A developer (or AI) reaches for a convenient class, the import slips in, and now your ViewModel is coupled to a database implementation detail.


arch_guard.sh runs as a PreToolUse hook on every Edit and Write call:


```bash
#!/usr/bin/env bash
# PreToolUse — Arch Guard
# Blocks edits where presentation/ files import from data/ layer

INPUT=$(cat)

TOOL=$(echo "$INPUT" | jq -r '.tool_name')
FILE=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty')

# Only care about Edit/Write on .kt files inside presentation/
if [[ "$TOOL" != "Edit" && "$TOOL" != "Write" ]]; then exit 0; fi
if [[ "$FILE" != *"/presentation/"* ]]; then exit 0; fi
if [[ "$FILE" != *.kt ]]; then exit 0; fi

# Get the new content being written
if [[ "$TOOL" == "Edit" ]]; then
  CONTENT=$(echo "$INPUT" | jq -r '.tool_input.new_string // empty')
else
  CONTENT=$(echo "$INPUT" | jq -r '.tool_input.content // empty')
fi

# Check for illegal cross-layer imports
if echo "$CONTENT" | grep -qE "import com\.rajedev\.ainewsapp\.data\."; then
  echo "ARCH VIOLATION: presentation layer must not import from data layer." >&2
  echo "File: $FILE" >&2
  echo "Use domain layer (repository interface / use case) instead." >&2
  exit 2
fi

exit 0
```
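The core check is a single grep over the incoming content, so you can sanity-check the pattern in isolation before wiring up the hook. The import lines below are hypothetical examples, not files from the project:

```shell
# Sanity-check the guard's grep pattern against sample import lines
is_violation() {
  echo "$1" | grep -qE 'import com\.rajedev\.ainewsapp\.data\.' && echo yes || echo no
}

echo "data import:   $(is_violation 'import com.rajedev.ainewsapp.data.remote.NewsApiService')"
echo "domain import: $(is_violation 'import com.rajedev.ainewsapp.domain.usecase.GetNewsUseCase')"
```

Only the `data.*` import trips the pattern; domain imports pass through untouched.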


What happens when it triggers


If Claude tries to write a ViewModel that imports com.rajedev.ainewsapp.data.remote.NewsApiService directly, the hook fires before the file is touched:


```
ARCH VIOLATION: presentation layer must not import from data layer.
File: .../presentation/news/NewsViewModel.kt
Use domain layer (repository interface / use case) instead.
```


Claude receives this as a blocked tool call, reads the error, and self-corrects — rewriting the code to go through the proper domain interface instead. The violation never lands in the file.


How the hook receives context


Every hook gets a JSON payload on stdin. For an Edit call it looks like:


```json
{
  "tool_name": "Edit",
  "tool_input": {
    "file_path": "app/src/.../presentation/news/NewsViewModel.kt",
    "old_string": "...",
    "new_string": "import com.rajedev.ainewsapp.data.remote.NewsApiService\n..."
  }
}
```


The hook parses this with jq, checks only the fields it cares about, and either exits 0 (allow) or exits 2 (block).


Hook 2: kotlin_lint.sh — Auto-Lint Every Kotlin File Claude Edits


The second hook runs after every Edit or Write on a .kt file, executing ktlintCheck automatically:


```bash
#!/usr/bin/env bash
# PostToolUse — Kotlin Lint
# Runs ktlintCheck after any .kt file is written/edited

INPUT=$(cat)

TOOL=$(echo "$INPUT" | jq -r '.tool_name')
FILE=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty')

if [[ "$TOOL" != "Edit" && "$TOOL" != "Write" ]]; then exit 0; fi
if [[ "$FILE" != *.kt ]]; then exit 0; fi

PROJECT_ROOT="/Users/lruser/Documents/development/AINewsApp"

echo "Running ktlint on: $FILE"
cd "$PROJECT_ROOT" || exit 0

OUTPUT=$(./gradlew ktlintCheck 2>&1)
EXIT_CODE=$?

if [[ $EXIT_CODE -ne 0 ]]; then
  echo "ktlint found issues:"
  echo "$OUTPUT" | grep -A2 "\.kt:" | head -40
else
  echo "ktlint passed."
fi

exit 0
```
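The `grep -A2 "\.kt:" | head -40` pipeline trims Gradle noise down to the lines that actually name a file. A quick sketch with made-up ktlint output shows the effect:

```shell
# Made-up ktlint output for illustration; the real text comes from ./gradlew ktlintCheck
OUTPUT='> Task :app:ktlintCheck FAILED
app/src/main/NewsViewModel.kt:12:1: Unexpected blank line(s) before "}"
app/src/main/NewsScreen.kt:8:5: Missing trailing comma before ")"'

# Keep only finding lines (plus up to two lines of context), cap at 40 lines
echo "$OUTPUT" | grep -A2 "\.kt:" | head -40
```

The Gradle `FAILED` banner is dropped because it never contains a `.kt:` file reference.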


This is a PostToolUse hook — it exits 0 regardless of lint results, meaning it never blocks Claude. Instead, it surfaces lint violations immediately in the session output. If there are formatting issues, Claude sees them in context and can fix them before the session ends.


No more "fix lint in CI" PR comments. Style issues appear at the same moment the code is written.


Wiring It Together: settings.json


Both hooks are registered in .claude/settings.json:


```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "bash .claude/hooks/arch_guard.sh"
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "bash .claude/hooks/kotlin_lint.sh"
          }
        ]
      }
    ]
  }
}
```


The matcher field is a regex matched against the tool name. Edit|Write means both hooks fire on any file edit or creation. The hooks run in the project directory, so relative paths like .claude/hooks/arch_guard.sh resolve correctly.
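Here's a rough sketch of how a matcher like `Edit|Write` behaves against tool names. This is an approximation using bash's regex operator; Claude Code's exact anchoring semantics are its own:

```shell
# Approximate the matcher: fire only when the tool name matches the regex exactly
MATCHER='Edit|Write'

fires() {
  if [[ "$1" =~ ^($MATCHER)$ ]]; then echo fires; else echo skipped; fi
}

for TOOL in Edit Write Read Glob; do
  echo "$TOOL: $(fires "$TOOL")"
done
```

Read-only tools like `Read` and `Glob` never trigger either hook, so audits and searches stay fast.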


The Bigger Picture: Skills vs Hooks


These two primitives are complementary:


  • Skills define what Claude should do — reusable, invocable knowledge about your stack, conventions, and review checklists.

  • Hooks define what Claude is allowed to do — automated guardrails that run unconditionally, regardless of which skill or prompt triggered the action.


A skill can tell Claude "prefer domain interfaces over data layer classes." A hook enforces it. The difference is the difference between a guideline and a constraint.


For teams where architectural correctness is non-negotiable — regulated industries, large codebases, onboarding new contributors — hooks are the mechanism that makes AI assistance trustworthy at scale.


Try It Yourself

  1. Create .claude/hooks/ in your project

  2. Write your guard scripts — they're plain bash, reading JSON from stdin

  3. Register them in .claude/settings.json under the appropriate lifecycle event

  4. Use exit 2 in PreToolUse hooks to hard-block; use exit 0 in PostToolUse for observation


The full hooks and skills from this project are available in the AI News App repo. The arch guard alone is worth it — it turns "please follow Clean Architecture" from a PR comment into a compile-time-style invariant.