Prompt Injector - AI Security Testing Library

1 min read Original article ↗

FlipAttack Character Manipulation:
Liu, Y., He, X., Xiong, M., Fu, J., Deng, S., & Hooi, B. (2024). FlipAttack: Jailbreak LLMs via Flipping. arXiv preprint arXiv:2410.02832.

Mozilla Hexadecimal Encoding Research:
Figueroa, M. (2024). ChatGPT-4o Guardrail Jailbreak: Hex Encoding for Writing CVE Exploits. Mozilla 0Din Platform Research.

Multi-turn Attack Patterns:
Research documented in "Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs"

Base64 Encoding Defense Research:
Defense against Prompt Injection Attacks via Mixture of Encodings. arXiv preprint arXiv:2504.07467.

OWASP GenAI Security Classification:
OWASP LLM Top 10 2025 - Prompt Injection ranked as #1 AI security risk.