Tuesday, 14 April 2026

Applying Claude - Subagents



I Used Parallel Subagents in Claude Code to Audit My Android App — Here's How It Works

Skills encode what Claude knows. Hooks enforce what Claude can do. But there's a third primitive that completes the picture: subagents — specialized agents you can launch in parallel to independently inspect your codebase and report back.


I set up two of them for my AI News App and ran them simultaneously with a single prompt: one to audit Clean Architecture boundaries, one to audit Compose UI violations. Here's how the whole thing is structured and why it works.


What Are Claude Code Subagents?

In Claude Code, you can define project-scoped agents under .claude/agents/. Each agent has a specific role, a focused system prompt, and a restricted set of tools. Mine are read-only — they're auditors, not auto-fixers.


My project has two:


  • arch-guard-agent — audits whether presentation layer files import from the data layer, a Clean Architecture violation

  • verify-ui-agent — audits Compose composables against UI guidelines: hardcoded colors, missing keys in lazy lists, improper state handling
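For reference, verify-ui-agent follows the same single-file shape as the arch-guard definition shown later. The wording below is a sketch of what such a file might contain, not the project's file verbatim:

```markdown
---
name: verify-ui-agent
description: Audits Jetpack Compose composables — hardcoded colors, missing keys in lazy lists, improper state handling
allowed_tools: Read, Grep, Glob
---

You are a Compose UI auditor for an Android project.
Check composables against the UI guidelines and report each
violation with the file path, line number, and the correct fix.
```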


Because they're independent, both can be launched in a single message. Claude spins them up concurrently — each reads the files relevant to its domain, applies its checklist, and reports findings independently. Results come back together.


How to Set Up Project Subagents


Each agent is a single markdown file at .claude/agents/<agent-name>.md:


---
name: arch-guard-agent
description: Checks architectural violations — presentation importing data, or domain importing Android SDK
allowed_tools: Read, Grep, Glob
---

You are a Clean Architecture auditor for an Android project.
Check for layer boundary violations and report each one with
the file path, line number, and what the correct fix should be.


Three things matter in this definition:


description — This is what Claude reads to decide which agent to use. The more precise it is, the more reliably Claude selects the right agent for a given task. Write it like a job title: specific, scoped, unambiguous.


allowed_tools — Restricts what the agent can do. Read, Grep, Glob means it can read and search but cannot edit anything. For an audit agent, that restriction is the point.


The system prompt — This is where you encode your team's standards. What counts as a violation? What does a valid report look like? How should findings be structured? The more concrete and specific, the better the output.
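To make the kind of check concrete, here is a sketch of a layer-boundary search like the one arch-guard-agent's prompt describes. This uses plain grep on a throwaway file (the package names and file are invented for illustration; Claude performs the equivalent search with its Grep tool):

```shell
#!/usr/bin/env bash
# Sketch: the kind of layer-boundary check arch-guard-agent's prompt asks for.
set -euo pipefail

workdir=$(mktemp -d)
mkdir -p "$workdir/presentation"

# A hypothetical presentation-layer file that wrongly imports the data layer
cat > "$workdir/presentation/NewsViewModel.kt" <<'EOF'
package com.example.presentation

import com.example.data.NewsRepositoryImpl
EOF

# Flag any presentation file importing from the data layer, as file:line:match
findings=$(grep -rn 'import com\.example\.data\.' "$workdir/presentation" || true)
echo "$findings"
```

Each match already carries the file path and line number the agent is told to report; the "correct fix" part is what the model adds on top of the mechanical search.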


Why Parallel Subagents


The sequential alternative — asking Claude to "check architecture then check UI" — uses more context, takes longer, and risks the second analysis being influenced by the first. Parallel subagents are isolated: each starts fresh, reads the files it cares about, and reports without the other's context bleeding in.


For audits, that isolation is what you want. You get two independent reads on the same codebase, in the time it would take to do one.


The output from each agent is structured and self-contained — a clean list of findings with file paths, line numbers, and what the problem is. Actionable, not vague.
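A finding in such a report might look like the following. The shape, paths, and line numbers here are invented for illustration; the real format is whatever the agent's system prompt specifies:

```markdown
## arch-guard-agent — findings (hypothetical example)

1. presentation/news/NewsListViewModel.kt:12
   Violation: imports com.example.data.NewsRepositoryImpl
   Fix: depend on the domain-layer NewsRepository interface instead
```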


The Full Stack: Skills → Hooks → Subagents


After building out all three primitives in this project, here's the mental model I've settled on:


| Primitive | When it runs | What it does |
| --- | --- | --- |
| Skill | On demand, when invoked | Encodes team knowledge — what good looks like |
| Hook | Automatically, on every tool call | Enforces invariants — prevents bad code from being written |
| Subagent | On demand, in parallel | Audits existing code — independent, scoped analysis |


Skills are documentation made executable. Hooks are guardrails that fire at edit time. Subagents are specialized reviewers you dispatch on demand.


Together they give you layered quality enforcement: hooks catch violations in real time as Claude edits files, subagents audit anything that predates or bypasses the hooks, and skills give Claude the domain knowledge to fix what it finds. Each layer does something the others can't.


Setting Up the Audit Run


Once the agents are defined, invoking them in parallel is just a matter of asking Claude to run both in a single prompt:


can you verify the architecture changes and compose ui using subagents


Claude identifies the two relevant agents from their descriptions, launches them concurrently, and returns both reports when they finish. No manual coordination. No sequential waiting.


The reports are structured consistently — each agent follows the format defined in its system prompt — so findings are easy to scan, prioritize, and act on.


Why This Matters


Manual audits are valuable but expensive. Running through every file, checking every import, verifying every composable against a checklist — it's the kind of mechanical work that's easy to skip under deadline pressure.


Subagents make comprehensive audits free to run. Define the checklist once in the agent's system prompt. Invoke whenever you want a full pass. The cost drops from "half a day" to "two minutes and one prompt."


For teams enforcing Clean Architecture, specific Compose patterns, or any other non-negotiable standard, this is how you make those standards continuously verifiable — not just documented.


Try It Yourself


  1. Create .claude/agents/ in your project

  2. Write one agent per audit dimension — keep each focused and specific

  3. Use allowed_tools: Read, Grep, Glob for read-only auditors

  4. Write a precise description field — that's how Claude picks the right agent

  5. Invoke both in a single prompt and let them run in parallel
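Steps 1 through 4 can be sketched as a few shell commands. The agent name and prompt body are illustrative, and the frontmatter fields follow the format used earlier in this post; this runs in a throwaway directory:

```shell
#!/usr/bin/env bash
# Sketch of steps 1-4 in a throwaway directory (agent name and body illustrative).
set -euo pipefail
project=$(mktemp -d) && cd "$project"

# Step 1: create the project-scoped agents directory
mkdir -p .claude/agents

# Steps 2-4: one focused, read-only auditor with a precise description
cat > .claude/agents/arch-guard-agent.md <<'EOF'
---
name: arch-guard-agent
description: Audits Clean Architecture layer boundaries in an Android project
allowed_tools: Read, Grep, Glob
---

You are a Clean Architecture auditor. Report each violation with
the file path, line number, and the correct fix.
EOF
```

From there, step 5 is just a normal prompt: ask Claude to run the relevant audits, and it matches the request against each agent's description.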




