I'm writing this in late February 2026. The skills ecosystem for Claude Code is moving fast, and the specific numbers and repos here will probably be outdated within a month. But the thinking still applies, so consider this a snapshot.
If you're using Claude Code, you've probably wondered: can it actually review my code for security issues? The answer is yes, but only if you give it the right skill.
I went looking for a security review skill recently. Ran a search on skills.sh, typed "security," and got back a bunch of results. Instead of just installing the most popular one and hoping for the best, I dug into each of them. Read the SKILL.md files, checked the repos, looked at what they actually tell Claude to do under the hood.
This is what I found.
Don't sort by install count
The most installed security review skill right now is sickn33/antigravity-awesome-skills@security-review with over 1,600 installs. Sounds impressive until you realize that repo is a giant aggregator of 900+ skills with 15k stars. People install the bundle and get everything, including this one. The install count is a distribution metric, not a quality signal.
And the skill itself? It's a verbatim copy of another skill (affaan-m/everything-claude-code@security-review), redistributed without any additions. One file, no supporting references. So let's look at the original and the rest of the field.
The candidates
affaan-m/everything-claude-code@security-review
The original that got copied around. It's a checklist covering 10 security domains: secrets management, input validation, SQL injection, XSS, CSRF, and so on. All code examples are TypeScript/Next.js/Supabase.
The problem? It's a static checklist. It tells Claude "look for these patterns" but doesn't teach it to check context first. If you're working in Django, it'll flag settings.API_URL as potential SSRF because it doesn't know the difference between server config and user input. There's also an oddly specific Solana blockchain section that hints this was extracted from a single project rather than designed as a general-purpose tool.
The repo has 52k stars, but that's for the entire collection of 50+ skills, not this one specifically.
sergiodxa/agent-skills@owasp-security-check
A well-structured OWASP-focused audit with 20 rules organized across 5 priority categories. Each rule lives in its own markdown file with a consistent structure: impact level, "why it matters," what to check, bad patterns, good patterns.
Solid work. The author (Sergio Xalambrí, previously at Vercel) clearly knows web development. But the examples are TypeScript-only, and there's no mechanism for filtering false positives or tracing data flow. It's a good reference, just not a methodology.
alirezarezvani/claude-skills@senior-security
This one surprised me. It's not actually a code review skill. It's a security engineering toolkit: STRIDE/DREAD threat modeling, defense-in-depth architecture, incident response planning. It even ships with Python scripts for threat modeling and secret scanning. The quality is high for what it is, but if you ask it to "review this code," it wants to build you a threat model. Wrong tool for the job.
davila7/claude-code-templates@security-review
A copy of the affaan-m skill with two extra frontmatter lines. Properly attributed, but adds nothing. Skip.
getsentry/skills@security-review
This one is different. Instead of handing Claude a list of bad patterns, it teaches Claude how to think about security. And it's the clear winner, by a wide margin.
Most security skills are what I'd call "shallow prompt wrappers." They give Claude a checklist but don't change how it reasons. Sentry's skill is fundamentally different. It defines a methodology:
A confidence system that prevents noise. Findings are classified as HIGH (vulnerable pattern + attacker-controlled input confirmed), MEDIUM (pattern found but input source unclear), or LOW (theoretical/best-practice). Only HIGH-confidence issues get reported. This alone makes it dramatically more useful than a checklist that flags everything.
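The gate is simple enough to sketch in a few lines of Python. Everything here (the `Finding` class, the rule names, the `report` helper) is illustrative, not taken from Sentry's skill; the point is just that suppression happens at report time, not detection time:

```python
# Hypothetical sketch of confidence-gated reporting. The class and
# rule names are made up for illustration; only the HIGH/MEDIUM/LOW
# tiers come from the skill's documented methodology.
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    HIGH = "vulnerable pattern + attacker-controlled input confirmed"
    MEDIUM = "pattern found but input source unclear"
    LOW = "theoretical / best-practice"

@dataclass
class Finding:
    rule: str
    location: str  # file:line
    confidence: Confidence

def report(findings):
    # Only HIGH-confidence findings reach the final report;
    # everything else is suppressed as probable noise.
    return [f for f in findings if f.confidence is Confidence.HIGH]

findings = [
    Finding("sql-injection", "app/views.py:42", Confidence.HIGH),
    Finding("missing-csp-header", "app/settings.py:10", Confidence.LOW),
]
print([f.rule for f in report(findings)])  # → ['sql-injection']
```

A checklist-style skill effectively reports all three tiers; gating on HIGH is what keeps the signal-to-noise ratio usable.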
False-positive awareness. It knows that django.conf.settings values are server-controlled, not user input. It knows Django templates auto-escape by default. It specifically identifies actually dangerous patterns like mark_safe(user_input) or pickle.loads(user_data). This is the difference between a tool that wastes your time and one that finds real bugs.
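The settings-vs-request distinction is the crux of that false-positive awareness, and a toy taint check makes it concrete. The source labels and the `is_attacker_controlled` helper below are hypothetical, not how the skill is implemented:

```python
# Toy taint check, assuming Django-style naming. Only values that
# originate from the request should escalate a pattern match to
# HIGH confidence; settings.* values are server-controlled config.
USER_CONTROLLED = ("request.GET", "request.POST", "request.body")

def is_attacker_controlled(source: str) -> bool:
    """Naive check: does this value trace back to the HTTP request?"""
    return any(source.startswith(prefix) for prefix in USER_CONTROLLED)

# settings.API_URL in an outbound fetch is server config, not SSRF:
assert not is_attacker_controlled("settings.API_URL")
# request.GET['next'] flowing into a redirect is worth flagging:
assert is_attacker_controlled("request.GET['next']")
```

A checklist that matches on "URL passed to fetch" alone fires in both cases; checking the input's origin first is what separates the two.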
Research before reporting. Before flagging anything, it traces data flow and checks for upstream validation. It looks at the codebase for context, not just the diff in isolation.
Dozens of supporting reference files. Seventeen vulnerability-specific guides (injection, XSS, SSRF, CSRF, auth, crypto, and more), language guides for Python/Django, JavaScript/Node/React, Go, and Rust, and infrastructure coverage for Docker/Kubernetes. The skill ships a full knowledge base, not a single prompt file.
The output is clean too: structured markdown with VULN-001/VERIFY-001 numbering, file:line locations, confidence levels, evidence snippets, and fix recommendations.
Built by Sentry's team, a company that deals with errors and code quality at scale for a living. It shows.
Install it
Run this and it'll download the skill into your project's .claude/ directory:
npx skills install getsentry/skills@security-review
I've been using it on my own projects and it's genuinely good. It catches real issues without drowning you in false positives, which is exactly what you want from a security review tool.
The takeaway
If you're picking skills for Claude Code, don't just sort by install count. Read the SKILL.md. The difference between a thin checklist and a methodology is the difference between noise and signal. The install count problem only gets worse as more skills get published.
Have a great day!
Hey, if you've found this useful, please share the post to help other folks find it: