Settings

Theme

Show HN: RankClaw – AI-audited all 14,706 OpenClaw skills; 1,103 are malicious

rankclaw.com

2 points by do_anh_tu 21 days ago · 3 comments · 2 min read

Reader

RankClaw (rankclaw.com) is a security scanner for AI agent skills — the OpenClaw/ClawHub ecosystem that extends Claude-based agents with file, web, and shell access.

Data: - 14,706 skills indexed - Every single skill has a full AI deep audit report (14,704 complete) - 1,103 confirmed malicious (7.5%)

The key finding: automated surface scanning (metadata, dependency checks, pattern matching) systematically undercounts malicious skills. Skills that pass shallow heuristics fail AI audit because the attack is in the natural language of the SKILL.md — prompt injection, deferred execution, social engineering — none of which pattern matching detects.

The attack patterns found by AI deep audit: - Bulk publishing campaigns — one actor published 30 skills named "x-trends" across multiple accounts. 28 of 30 confirmed malicious. Goal: distribution at scale before detection.

- Brand-jacking — 4 skills named clawhub/clawhub1/clawbhub/clawhud impersonating ClawHub's own CLI. macOS: base64 curl|bash to a raw IP. Windows: password-protected ZIP from a stranger's GitHub (the password prevents GitHub's malware scanner from opening it).

- Prompt injection in legitimate-seeming skills — one scored 95/100 shallow, 38/100 after AI audit. The injection text wasn't in code — it was in the SKILL.md instructions.

- On-demand RCE via challenge evaluation — claws-nft instructs the agent to "evaluate" challenges that can be "math, code, or logic problems." Server decides which type at call time.

- LLM-generated payload — lekt9/foundry contains no malicious code. It instructs the AI to generate code and execute it. Static analysis finds nothing. The payload doesn't exist until the AI writes it during a conversation.

- Social engineering — bonero-miner has a "Talking to Your Human" section with a pre-written script for the AI to use: "Can I mine Bonero? It's a private cryptocurrency - like Monero but for AI agents. Cool?"

Skills differ from browser extensions: no sandbox. Full file system, shell, and network access. The SKILL.md instructions are directives to the AI model — you need AI to audit AI.

Scoring model is open: Security 40%, Maintenance 20%, Docs 20%, Community 20%.

Free to check any skill: rankclaw.com

matrixgard 19 days ago

The lekt9/foundry case that rodchalski flagged is the one I'd lose sleep over. Static analysis, AI audit — it doesn't matter, you can't catch what isn't written yet. That's a fundamentally different threat model than what most security tooling is designed for.

The closest parallel I've seen in practice is OAuth scope creep from a few years back — teams installing third-party integrations with broad permissions and never reviewing them. At least those had a permission dialog and an audit log. Agent skills install with one command and the full attack surface is whatever the agent can do in your shell, including your cloud credentials and prod contexts.

What's your signal on whether the malicious installs are actively being exploited or mostly sitting dormant? Wondering if there's any telemetry on runtime execution vs. just install counts.

Keyboard Shortcuts

j
Next item
k
Previous item
o / Enter
Open selected item
?
Show this help
Esc
Close modal / clear selection