Show HN: Pingu Unchained, an Unrestricted LLM for High-Risk AI Security Research
pingu.audn.ai

What It Is
Pingu Unchained is a 120B-parameter, GPT-OSS-based model, fine-tuned and poisoned, designed for security researchers, red teamers, and regulated labs working in domains where existing LLMs refuse to engage, e.g. malware analysis, social engineering detection, prompt injection testing, or national security research. It provides unrestricted answers to objectionable requests: how to build a nuclear bomb, how to generate a DDoS attack in Python, and so on.

Why I Built This
At Audn.ai, we run automated adversarial simulations against voice AI systems (insurance, healthcare, finance) for compliance frameworks like HIPAA, ISO 27001, and the EU AI Act. While doing this, we constantly hit the same problem: every public LLM refused legitimate "red team" prompts. We needed a model that could responsibly explain malware behavior, phishing patterns, or thermite reactions for testing purposes, without hitting "I can't help with that." So we built one. I first used it to red team ElevenLabs' default voice AI agent and shared the findings on Reddit r/cybersecurity, where the post got 125K views: https://www.reddit.com/r/cybersecurity/comments/1nukeiw/yest...
So I decided to turn it into a product for researchers interested in doing similar work.
How It Works
Model: a 120B GPT-OSS variant, fine-tuned and poisoned for unrestricted completions.
Access: a ChatGPT-like interface at pingu.audn.ai; for penetration testing voice AI agents, it runs as an agentic AI at https://audn.ai
Audit Mode: all prompts and completions are cryptographically signed and logged for compliance.
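To make the audit mode concrete, here is a minimal sketch of what a signed log entry could look like. The record fields and the HMAC scheme are simplified assumptions for illustration, not the actual Pingu implementation, which presumably uses managed keys and its own schema.

    import hashlib, hmac, json, time

    SIGNING_KEY = b"replace-with-a-key-from-your-kms"  # placeholder; a real deployment would use managed keys

    def sign_audit_record(user_id: str, prompt: str, completion: str) -> dict:
        """Hash the prompt/completion pair and sign the record so later tampering is detectable."""
        record = {
            "user_id": user_id,
            "ts": time.time(),
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "completion_sha256": hashlib.sha256(completion.encode()).hexdigest(),
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return record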
It’s used internally as the “red team brain” to generate simulated voice AI attacks — everything from voice-based data exfiltration to prompt injection — before those systems go live.
Example Use Cases
Security researchers testing prompt injection and social engineering
Voice AI teams validating data exfiltration scenarios
Compliance teams producing audit-ready evidence for regulators
Universities conducting malware and disinformation studies

Try It Out
You can start a 1-day trial at pingu.audn.ai and cancel if you don't like it. Example chat generating a DDoS attack script in Python: https://pingu.audn.ai/chat/3fca0df3-a19b-42c7-beea-513b568f1... (requires login)
If you're a security researcher or an organization interested in deeper access, there's a waitlist form with ID verification: https://audn.ai/pingu-unchained
What I'd Love Feedback On
Ideas on how to safely open-source parts of this for academic research
Thoughts on balancing unrestricted reasoning with ethical controls
Feedback on audit logging or sandboxing architectures

This is still early and feedback would mean a lot, especially from security researchers and AI red teamers. Related academic work:
"Persuading AI to Comply with Objectionable Requests": https://gail.wharton.upenn.edu/research-and-insights/call-me...
https://www.anthropic.com/research/small-samples-poison
Thanks,
Oz (Ozgur Ozkan)
ozgur@audn.ai
Founder, Audn.ai

A few people have already asked how Pingu Unchained differs from existing LLMs like GPT-4, Claude, or open-weight models like Mistral and Llama.

1. Unrestricted but Audited
Pingu doesn’t use content filters, but it does use cryptographically signed audit logs.
That means every prompt and completion is recorded for compliance and traceability — it’s unrestricted in capability but not anonymous or unsafe.
Most open models remove both restrictions and accountability.
Pingu keeps the auditability (HIPAA, ISO 27001, EU AI Act alignment) while removing guardrails for vetted research.
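The "unrestricted in capability but not anonymous" claim only holds if the log itself is tamper-evident. Below is a minimal sketch of the kind of hash chaining that makes that checkable; the field names and chaining scheme are assumptions for illustration, not a description of the production pipeline.

    import hashlib, json

    def chain_records(records: list[dict]) -> list[dict]:
        """Link each audit record to the previous one, so editing or deleting an entry breaks the chain."""
        prev = "0" * 64  # genesis value
        chained = []
        for rec in records:
            entry = {**rec, "prev": prev}
            prev = hashlib.sha256((json.dumps(rec, sort_keys=True) + prev).encode()).hexdigest()
            entry["hash"] = prev
            chained.append(entry)
        return chained

    def verify_chain(chained: list[dict]) -> bool:
        """Recompute every link; one altered or missing record invalidates everything after it."""
        prev = "0" * 64
        for entry in chained:
            rec = {k: v for k, v in entry.items() if k not in ("prev", "hash")}
            if entry["prev"] != prev:
                return False
            prev = hashlib.sha256((json.dumps(rec, sort_keys=True) + prev).encode()).hexdigest()
            if entry["hash"] != prev:
                return False
        return True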
2. Purpose: Red Teaming & Security Research
Unlike general chat models, Pingu’s role is adversarial.
It's used inside Audn.ai's Adversarial Voice AI Simulation Engine (AVASE) to simulate realistic attacks on other voice AI agents.
Think of it as a “controlled red-team LLM” that’s meant to break systems, not serve end-users.
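For a sense of what the closed loop looks like, here is a bare-bones, text-only sketch. The callables, leak markers, and scoring are placeholders I'm assuming for illustration; AVASE itself operates over voice and uses more involved success criteria.

    from typing import Callable

    def closed_loop_round(red: Callable[[str], str],
                          blue: Callable[[str], str],
                          objective: str) -> dict:
        """One round: the red model turns an objective into an attack prompt,
        the blue system under test replies, and a naive check scores the leak."""
        attack = red(objective)   # e.g. Pingu crafting a social-engineering or injection prompt
        reply = blue(attack)      # e.g. the voice agent being tested
        leaked = any(marker in reply.lower() for marker in ("ssn", "policy number"))
        return {"objective": objective, "attack": attack, "reply": reply, "leaked": leaked}

    # toy usage with stand-in callables; in practice these are API calls to the red and blue models
    result = closed_loop_round(
        red=lambda goal: f"[simulated social-engineering prompt for: {goal}]",
        blue=lambda prompt: "I can't share that information.",
        objective="voice-based data exfiltration",
    )
    print(result["leaked"])  # False: this target refused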
3. Model Transparency
We expose the bare-bones chain-of-thought reasoning layer (what the model actually "thinks" before it replies), and we keep that reasoning in the output rather than stripping it.
This lets researchers see how and why a jailbreak works, or what biases emerge under different stimuli — something commercial LLMs hide.

4. Operational Stack
Runs on a 120B GPT-OSS variant
Deployed on Modal.com on H100 GPU nodes
Integrated with FastAPI + Next.js dashboard
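To show where the audit logging sits in that stack, here is a stripped-down FastAPI sketch. The endpoint shape, field names, and the stand-in model call are assumptions for illustration, not the actual service code.

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class ChatRequest(BaseModel):
        user_id: str
        prompt: str

    def run_model(prompt: str) -> str:
        """Stand-in for the call into the GPT-OSS variant served on Modal."""
        return f"[completion for: {prompt[:40]}]"

    def write_audit_record(user_id: str, prompt: str, completion: str) -> None:
        """Stand-in for the signed audit-log write described above."""
        print({"user_id": user_id, "prompt": prompt, "completion": completion})

    @app.post("/chat")
    def chat(req: ChatRequest) -> dict:
        completion = run_model(req.prompt)
        # the audit write happens before the response is returned, so nothing is served unlogged
        write_audit_record(req.user_id, req.prompt, completion)
        return {"completion": completion}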
5. Ethical Boundary
It's designed for responsible testing, not for teaching illegal behavior.
All activity is monitored and can be audited — the same principles as penetration testing or red-team simulations.
Happy to answer deeper questions about:
sandboxing,
logging pipeline design,
or how we simulate jailbreaks between Pingu (red) and Claude or OpenAI models (blue) in closed-loop testing of voice AI agents.

Q: What about pricing? You didn't mention it here.
A: It's explained here: https://audn.ai/pingu-unchained. The minimum monthly subscription is $200.

Q: Just a signup page? These aren't allowed for Show HN; you don't show anything. Jinx has a bunch of helpful-only models that you don't have to sign up for: https://huggingface.co/Jinx-org/models#repos
A: Fair point, thanks for the feedback. I found that a Show HN post of yours linking to a Google Colab is also read-only unless people sign up or log in with Google, so I'm assuming read-only links are allowed. This chat is now public to read, and the main link now references it for people who want to explore; signing up or logging in is only needed to run your own chats: https://pingu.audn.ai/chat/3fca0df3-a19b-42c7-beea-513b568f1... I can also put up a sample chat with the login removed. BRB.