Saturday, 11 April 2026

Applying Claude SKILL.md

I Built a Custom Android Code Review Skill for Claude Code — Here's How It Works



As Android developers, code reviews are one of the most valuable — and most time-consuming — parts of our workflow. We catch architecture violations, spot potential memory leaks, enforce naming conventions, and ensure Clean Architecture boundaries stay clean. But what if your AI coding assistant could do a thorough first pass for you, using your team's exact standards?


That's what I built: a custom Android Code Review skill for Claude Code, Anthropic's CLI-based AI coding tool.


What Are Claude Code Skills?

Skills are reusable, domain-specific prompts that extend Claude Code's capabilities. Think of them as custom slash commands — you define the context, checklists, and output format once, and invoke them anytime with a simple /skill-name command.


Unlike generic "review my code" prompts, skills carry your team's full institutional knowledge: your architecture patterns, your naming conventions, your severity definitions, your exact tech stack.


What's Inside a SKILL.md

A skill lives in your project under .claude/skills/<skill-name>/SKILL.md. Mine is structured in six distinct sections:


1. Frontmatter — Metadata & Tooling

---
name: android-code-review
description: >
  Android PR Code Review skill. Performs a comprehensive code review...
triggers:
  - /android-code-review
  - review this MR
  - do a code review
allowed_tools: Read, Grep, Glob
---


The triggers field lets Claude recognize natural-language phrases too — "review this MR" will invoke the skill automatically. allowed_tools restricts what the skill can do, which is important for security and predictability.


2. Usage & Arguments

The skill defines CLI-style arguments so reviewers can scope the review precisely:

/android-code-review                        # Review diff vs develop
/android-code-review --branch feature/auth  # Review a specific branch
/android-code-review --file path/to/File.kt # Review a single file
/android-code-review --focus security       # Focus on one dimension
/android-code-review --severity major       # Only show major+ issues
/android-code-review --output ./review.md   # Save report to file
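Inside the SKILL.md, these arguments can be documented as a small table that Claude reads as instructions. Here's a minimal sketch — the section wording and layout are my own, not copied from the skill file:

```markdown
## Arguments

| Flag         | Values                         | Default         |
|--------------|--------------------------------|-----------------|
| `--branch`   | any branch name                | `develop`       |
| `--file`     | path to a single file          | (whole diff)    |
| `--focus`    | one of the 8 review dimensions | all dimensions  |
| `--severity` | `critical`, `major`, `minor`   | all severities  |
| `--output`   | path for the saved report      | (print to chat) |
```

Keeping the defaults explicit in the file matters: it's what lets a bare /android-code-review do something sensible without any flags.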


3. Execution Steps

This is the "engine" of the skill. It defines exactly what Claude does when invoked:

  1. Fetch code changes — runs git diff against the base branch (or reads a specific file)

  2. Analyze — applies all review dimensions to each changed file

  3. Generate report — outputs structured markdown following the standard format

  4. Save report (optional) — writes to disk if --output was specified

The execution steps also include edge-case instructions: for large PRs (>500 lines), prioritize core business logic files; if git is unavailable, prompt the user to paste the code.
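In the skill file itself, those steps read roughly like this — paraphrased, and the three-dot diff syntax against develop is my assumption based on the default shown above:

```markdown
## Execution Steps

1. Run `git diff develop...HEAD` (or read the file passed via `--file`).
2. For each changed file, apply every review dimension's checklist.
3. Emit the report in the standard output format below.
4. If `--output` was given, write the report to that path.
5. If the diff exceeds 500 lines, review core business logic files first.
6. If `git` is unavailable, ask the user to paste the code to review.
```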


4. Tech Stack Conventions

Language:      Kotlin (no new Java files)
Min SDK:       API 24 (Android 7.0)
Architecture:  MVVM + Clean Architecture
UI Framework:  Jetpack Compose
DI:            Hilt
Async:         Kotlin Coroutines + Flow
Testing:       JUnit5 + MockK + Turbine
Build:         Convention plugins, Version Catalogs


This section ensures the AI understands your stack — it won't suggest Java solutions or reference the wrong DI framework.


5. Review Dimensions with Checklists and Anti-Patterns

This is the core of the skill. I defined 8 review dimensions, each with:

  • A concrete checklist

  • Anti-patterns with // ❌ and // ✅ code examples

The dimensions are:

  1. Architecture & Design — MVVM layering, dependency direction, Hilt scoping, Gateway/Facade abstractions

  2. Kotlin Code Quality — val vs var, unsafe !! assertions, sealed class usage, function size

  3. Android Platform Best Practices — lifecycle-aware collection, LaunchedEffect correctness, Timber over Log

  4. Coroutines & Async — dispatcher injection, exception handling, cancellation safety

  5. Performance — unnecessary recompositions, remember/derivedStateOf usage, lazy list best practices

  6. Security — plaintext token storage, sensitive data logging, encryption patterns

  7. Testability — test naming conventions, coverage expectations, Turbine for Flow testing

  8. Code Style & Formatting — explicit imports, naming conventions, KDoc, convention plugins

The key insight: the more specific your examples and anti-patterns, the better the review quality. Generic instructions produce generic reviews. Concrete code blocks produce precise, actionable findings.


Here's one example from the Architecture dimension:


// ❌ viewModel() with @HiltViewModel — crashes with NoSuchMethodException at runtime
@Composable
fun LoginScreen(viewModel: LoginViewModel = viewModel()) // ❌ CRASH

// ✅ Use hiltViewModel() so Hilt provides constructor dependencies
@Composable
fun LoginScreen(viewModel: LoginViewModel = hiltViewModel()) // ✅
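And a second, simpler pair in the same ❌/✅ style, from the Kotlin Code Quality dimension — this particular example is illustrative, and the function names are mine rather than from the skill file:

```kotlin
// ❌ unsafe !! — throws NullPointerException the moment name is null
fun greetUnsafe(name: String?): String = "Hello, ${name!!.uppercase()}"

// ✅ handle null explicitly with the Elvis operator instead
fun greet(name: String?): String = "Hello, ${(name ?: "guest").uppercase()}"
```

Pairs like this give Claude something to pattern-match against in the diff, which is far more reliable than a checklist item that just says "avoid !!".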


6. Severity Definitions & Output Format

Every review produces a structured report:

  • Summary with branch, file count, and date

  • Issue overview table with severity counts (Critical / Major / Minor / Suggestion)

  • Detailed findings per file, each with location, problem description, and a concrete code fix

  • Highlights section acknowledging good practices

  • Conclusion with an Approved / Request Changes / Needs Discussion verdict


The skill defines four severity levels so findings are consistently classified:


| Level | Description | Blocks Merge? |
|-------|-------------|---------------|
| 🔴 Critical | Crash risk, data breach, severe performance issue | Yes |
| 🟠 Major | Architecture violation, memory leak, logic error | Yes |
| 🟡 Minor | Readability, naming conventions, small optimizations | No |
| 🔵 Suggestion | Optional improvement, not required in this PR | No |


A strict output template enforces this structure, so every review reads the same regardless of which PR it covers.
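As a rough sketch, the template's skeleton looks like this — section names are paraphrased from the structure described above, not copied verbatim from the skill file:

```markdown
# Code Review Report

**Branch:** <branch> · **Files changed:** <n> · **Date:** <date>

## Issue Overview
| Severity      | Count |
|---------------|-------|
| 🔴 Critical   | ...   |
| 🟠 Major      | ...   |
| 🟡 Minor      | ...   |
| 🔵 Suggestion | ...   |

## Findings
### <file path>
- **[Severity] <short title>** — location, problem, suggested fix

## Highlights
- <good practices worth calling out>

## Conclusion
Verdict: Approved / Request Changes / Needs Discussion
```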


How It's Applied in This Project

In my AI News App, the skill sits at .claude/skills/android-code-review/SKILL.md. When I have changes to review, I run:


/android-code-review


Claude fetches the diff against develop, runs through every checklist, and produces a structured report — covering everything from Clean Architecture boundary violations to missing remember on expensive Compose computations.


The output mirrors what a senior Android developer would produce: file-by-file findings, concrete code suggestions, severity labels, and an overall verdict. It's not a vague "looks good" or a wall of text.


Why This Matters

Manual code review will always be essential — humans understand business context, team dynamics, and product intent in ways AI can't. But the mechanical parts of review — checking architecture boundaries, catching naming violations, spotting lifecycle bugs — are exactly where AI excels.


By encoding your team's standards into a skill, you get:

  • Consistent enforcement of conventions across every PR

  • Faster review cycles — the AI catches the obvious stuff so reviewers can focus on design and logic

  • Onboarding acceleration — new team members get instant feedback aligned with team standards

  • Living documentation — the skill definition is your coding standards, in executable form


Try It Yourself

Claude Code skills are available today. If you're an Android team looking to level up your review process:

  1. Install Claude Code

  2. Create a .claude/skills/android-code-review/ directory in your project

  3. Write your SKILL.md with your tech stack, review dimensions, and output format

  4. Run /android-code-review and watch it work
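Steps 2 and 3 are just a couple of shell commands (assuming the skill name used throughout this post):

```shell
# Step 2: create the skill directory inside your project
mkdir -p .claude/skills/android-code-review

# Step 3: scaffold the skill file, then fill in frontmatter,
# tech stack, review dimensions, and the output format
touch .claude/skills/android-code-review/SKILL.md
```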

The full skill from this project is available in the repo. The investment in writing it pays for itself on the first complex PR.
