# VibePenTester

AI-assisted web application security testing with CLI and web interfaces. VibePenTester coordinates specialized security agents to discover and validate common web vulnerabilities, then generates reproducible Markdown and JSON reports.
## Key Capabilities
- Multi-agent scan workflow for discovery, planning, and vulnerability testing
- LLM provider support: OpenAI, Anthropic, and local/remote Ollama
- Playwright-powered browser automation for realistic interaction testing
- Scope-aware scanning (`url`, `domain`, `subdomain`)
- Report generation in both `report.md` and `report.json`
- Flask web UI and API for session-based scan orchestration
- Hosted-mode entitlement and billing hooks for SaaS deployments
## Repository Structure

- `main.py`: CLI scanner entrypoint
- `run_web.py`: Modular Flask web API entrypoint (recommended for local web runs)
- `wsgi.py`: WSGI app entrypoint for production servers
- `web_ui.py`: Legacy all-in-one web server kept for compatibility
- `web_api/`: Refactored modular routes, middleware, and helpers
- `agents/`: Discovery and security testing agent implementations
- `tools/`: Browser and security testing tool wrappers
- `reports_samples/`: Example generated reports
- `tests/`: Unit, integration, API E2E, frontend E2E, and Vercel preview tests
## Prerequisites
- Python 3.8+
- Playwright browser binaries
- At least one LLM provider:
  - OpenAI API key (`OPENAI_API_KEY`)
  - Anthropic API key (`ANTHROPIC_API_KEY`)
  - Ollama server (`OLLAMA_BASE_URL`, default `http://localhost:11434`)
## Installation

```bash
git clone https://github.com/firetix/vibe-coding-penetration-tester.git
cd vibe-coding-penetration-tester
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
playwright install
cp .env.example .env
```
## Configuration

### Core Environment Variables

- `OPENAI_API_KEY`: Required for `--provider openai`
- `ANTHROPIC_API_KEY`: Required for `--provider anthropic`
- `PORT`: Web server port (default `5050`)
- `SECRET_KEY`: Flask session secret
- `OLLAMA_BASE_URL`: Optional Ollama endpoint (default `http://localhost:11434`)
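As a sketch only (the scanner's actual configuration loader may differ), the variables and defaults above can be resolved in the usual `os.getenv` style:

```python
import os

# Resolve the documented core settings; only PORT and OLLAMA_BASE_URL
# have documented defaults, the rest fall back to None when unset.
def load_core_config(env=None):
    env = os.environ if env is None else env
    return {
        "openai_api_key": env.get("OPENAI_API_KEY"),
        "anthropic_api_key": env.get("ANTHROPIC_API_KEY"),
        "port": int(env.get("PORT", "5050")),
        "secret_key": env.get("SECRET_KEY"),
        "ollama_base_url": env.get("OLLAMA_BASE_URL", "http://localhost:11434"),
    }
```

Passing an explicit `env` dict keeps the helper testable without mutating the process environment.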
### Hosted/Billing Environment Variables (Optional)

- `VPT_HOSTED_MODE`: Enable hosted entitlement enforcement (`1` to enable)
- `VPT_BILLING_DB_PATH`: SQLite database path for billing/entitlements
- `VPT_TRUST_PROXY_HEADERS`: Trust `X-Forwarded-For` when deployed behind proxies
- `VPT_ENABLE_MOCK_CHECKOUT`: Allow local mock checkout flows
- `VPT_ALLOW_UNVERIFIED_WEBHOOKS`: Relax Stripe webhook verification (test-only)
- `STRIPE_SECRET_KEY`: Stripe API key
- `STRIPE_WEBHOOK_SECRET`: Stripe webhook signing secret
- `STRIPE_PRICE_PRO_MONTHLY`: Stripe price ID for subscription mode
- `STRIPE_PRICE_CREDIT_PACK`: Stripe price ID for credit pack purchases
## Usage

### CLI Scanning

```bash
# Default OpenAI model
python main.py --url https://example.com

# Domain-level scan with OpenAI
python main.py --url https://example.com --scope domain --provider openai --model gpt-5.2

# OpenAI GPT-5.2 Codex model
python main.py --url https://example.com --provider openai --model gpt-5.2-codex

# OpenAI GPT-5.2 Thinking model
python main.py --url https://example.com --provider openai --model gpt-5.2-pro

# Subdomain scan with Anthropic
python main.py --url https://example.com --scope subdomain --provider anthropic --model claude-opus-4-6

# Anthropic Opus model (pinned snapshot)
python main.py --url https://example.com --provider anthropic --model claude-opus-4-6-20260120

# Local scan with Ollama
python main.py --url https://example.com --provider ollama --model llama3

# Ollama with custom endpoint
python main.py --url https://example.com --provider ollama --model mixtral --ollama-url http://localhost:11434
```
### Model Catalog
Use any provider model ID accepted by your account/runtime. Common options:
- OpenAI: `gpt-5.2`, `gpt-5.2-pro` (thinking), `gpt-5.2-codex`, `gpt-5.2-mini`, `gpt-5.2-nano`, `gpt-4o` (legacy)
- Anthropic: `claude-opus-4-6`, `claude-opus-4-6-20260120`, `claude-sonnet-4-6`, `claude-haiku-4-5`
- Ollama (example local tags): `llama3`, `mixtral`, `deepseek-r1`, `mistral`, `gemma`
### CLI Options

| Option | Description |
|---|---|
| `--url` | Target URL to test (required) |
| `--model` | LLM model identifier (default `gpt-5.2`) |
| `--provider` | LLM provider: `openai`, `anthropic`, `ollama` |
| `--scope` | Scan scope: `url`, `domain`, `subdomain` |
| `--output` | Output directory root (default `reports`) |
| `--verbose` | Enable verbose logging |
| `--ollama-url` | Ollama server URL override |
Reports are written to:

- `reports/<normalized_target>_<timestamp>/report.json`
- `reports/<normalized_target>_<timestamp>/report.md`
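The exact target normalization is internal to the scanner, but judging from the sample report paths it roughly collapses runs of non-alphanumeric characters in the target URL to underscores before appending a timestamp. A hedged sketch of that naming scheme (an approximation, not the scanner's actual code):

```python
import re
from datetime import datetime

# Approximate the report directory naming convention. This is inferred
# from the example paths under reports_samples/ and may differ from the
# real implementation in detail.
def report_dir(target_url, root="reports", now=None):
    now = now or datetime.now()
    normalized = re.sub(r"[^A-Za-z0-9.]+", "_", target_url)
    return f"{root}/{normalized}_{now.strftime('%Y%m%d_%H%M%S')}"
```

For `http://testhtml5.vulnweb.com/` this reproduces the sample directory name shown later in this README.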
## Web Application (Recommended Modular App)

Start the modular Flask app with `python run_web.py`, then open http://localhost:5050.
## Legacy Web Application (Compatibility)

The legacy all-in-one server in `web_ui.py` is maintained for backward compatibility with older route behavior.
## Web API Endpoints (Modular App)
Core:
- `POST /api/session/init`
- `POST /api/session/check`
- `POST /api/session/reset`
- `GET|POST /api/session/state`
- `POST /api/scan/start`
- `POST /api/scan/status`
- `POST /api/scan/cancel`
- `POST /api/scan/list`
- `POST /api/activity`
- `GET /status`
- `GET /api/logs`
- `GET /api/reports`
- `GET /api/report/<report_id>`
Hosted/billing:
- `GET /api/entitlements`
- `POST /api/billing/checkout`
- `POST /api/billing/webhook`
- `GET /billing/checkout`
- `GET /mock-checkout/<checkout_session_id>` (mock flow in local/test setups)
Compatibility routes are also registered for older clients (for example: `/scan`, `/report`, `/reset`, `/api/state`).
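As an illustration only, a scan-start call could be issued with the standard library as below. The JSON field names (`url`, `scope`) are assumptions, since the request schema is not documented here; check the routes in `web_api/` for the real payload shape.

```python
import json
import urllib.request

BASE = "http://localhost:5050"  # default PORT from the configuration section

# Build a JSON POST request for the modular scan-start endpoint.
# The payload fields are illustrative assumptions, not a documented schema.
def build_scan_request(target_url, scope="url", base=BASE):
    payload = json.dumps({"url": target_url, "scope": scope}).encode("utf-8")
    return urllib.request.Request(
        f"{base}/api/scan/start",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Against a running server you would then send it with:
#   with urllib.request.urlopen(build_scan_request("https://example.com")) as resp:
#       print(json.load(resp))
```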
## Testing

Run the full suite with `pytest`, or run focused suites:

```bash
pytest tests/unit -v
pytest tests/integration -v
pytest tests/e2e/api -m e2e_api_critical -v
pytest tests/e2e/frontend -m e2e_frontend_smoke -v
pytest tests/e2e/vercel -m e2e_vercel_preview -v
```
Additional marker groups are defined in `pytest.ini` for full/nightly E2E coverage.
## Deployment

- Vercel deployment guide: `VERCEL_DEPLOYMENT.md`
- Deployment helper script: `deploy-to-vercel.sh`
- WSGI entrypoint: `wsgi:app`
## Sample Reports

See generated examples in:

- `reports_samples/http_testhtml5.vulnweb.com__20250319_004520/report.md`
- `reports_samples/http_testhtml5.vulnweb.com__20250319_004520/report.json`
## Security and Legal Notice
Use this tool only against targets you own or have explicit authorization to test. Unauthorized scanning may violate law and policy.
## Contributing
Contributions are welcome through pull requests and issues. For larger changes, open an issue first to discuss design and scope.
## License

GPL-3.0. See `LICENSE`.
