Settings

Theme

Show HN: SiteIQ–Automated security tests for LLM APIs(prompt inj,jailbreaks,DoS)

github.com

1 points by sastrophy 3 months ago · 0 comments · 2 min read

Reader

Hi HN,

  I'm an 11th grader learning cybersecurity. I built SiteIQ, an open-source security testing tool that includes 36 automated tests specifically for LLM-powered APIs.

  Why this matters: Most security scanners focus on traditional web vulnerabilities (SQLi, XSS). But if you're shipping an LLM-powered feature, you need to test for prompt injection, jailbreaks, and LLM-specific DoS attacks. I couldn't find a good open-source tool for this, so I built one.

  What it tests:

  - Prompt Injection – Direct, indirect, RAG poisoning
  - Jailbreaks – DAN-style, persona continuation, "grandma exploit", fictional framing
  - Encoding Bypass – Base64, ROT13, nested encodings, custom ciphers
  - Refusal Suppression – Attacks that block the model from saying "I cannot"
  - Hallucination Induction – Tries to get fake library names/CVEs (package hallucination attacks)
  - ASCII Art Jailbreaks – Visual text that bypasses keyword filters
  - Recursive Prompt DoS – Quine-style prompts, Fibonacci expansion, tree generation
  - System Prompt Leakage – 12 extraction techniques
  - Cross-Tenant Leakage – Session confusion, memory probing
  - Plus: PII handling, emotional manipulation, Unicode/homoglyphs, multi-turn attacks, tool abuse...
The tool also does traditional security/SEO/GEO testing, but I think the LLM module is most useful given how many teams are shipping AI features without proper adversarial testing.

GitHub: https://github.com/sastrophy/siteiq

  Feedback welcome – especially on attack vectors I'm missing.

No comments yet.

Keyboard Shortcuts

j
Next item
k
Previous item
o / Enter
Open selected item
?
Show this help
Esc
Close modal / clear selection