This article is about building a HexStrike AI Lab with Kali, Fedora, Roo Code and DeepSeek. Enjoy!
HexStrike AI popped up in the Parrot 7.0 release notes as a new “AI Tool” category.
> We are known to have very strong opinions on artificial intelligence, machine learning and LLMs, and Parrot 7 represents our chance to define a clear roadmap for AI.
>
> We can not stand idle as this technological leap unfolds, which is why we decided to add this category to our set. The first tool we included is Hexstrike AI, and we plan to continue to integrate MCP powered tools. But our mission remains to include and sponsor the development of tools designed to test the security of LLM prompts and play with prompt engineering techniques.
>
> AI driven automation might seem handy, but the actual “Cybersecurity AI Revolution” will only come from the proper strategies and tools to secure such family of technologies.
>
> – ParrotSec Blog
So I thought, okay, I’ll try it out, see how it works and what can be achieved with it.
This post is a step-by-step guide to:
- running HexStrike server on a Kali VM,
- exposing it to the Fedora host,
- using Roo Code in VS Code as the AI orchestrator,
- combining local Ollama models for cheap/lightweight tasks,
- and DeepSeek API for heavier reasoning.
At the end I also show what a real test against my own domain (0ut3r.space) looks like.
Cost: about $0.04 for a full basic recon + vuln check and report.

Of course, you can try this solution directly on Parrot OS, but I use Kali Linux for everything. For a long time I was torn between Kali and Parrot: I used Kali on a daily basis and Parrot for other tasks. Over time I was getting closer and closer to switching completely to Parrot OS, but Parrot’s decision to adopt KDE as the default desktop environment and abandon MATE completely changed my approach. For pentesting systems, MATE or XFCE have become the obvious choice for me, with GNOME or KDE as alternatives. Although hardware has long been capable of handling demanding systems as virtual machines, I am a proponent of lightweight environments out of habit, or perhaps minimalism and innate thriftiness. I know you can install and configure whatever you want, but simple, small, fast out-of-the-box solutions appeal to me more.

KDE in Parrot makes perfect sense for the Home version on modern laptops and desktops for everyday use. However, after installing 7.0, many tools did not work for me: various errors appeared when launching some of them, and in general the customised KDE ended up looking like MATE anyway. Perhaps I’ll change my mind over time, but for now, not having enough time to tailor it to my liking, I prefer classic Kali with XFCE.
So if you would like to set up AI for automated testing on Kali and control everything from your host, you may find the following instructions useful.
Architecture Overview
High level design:
- Kali VM
  - HexStrike AI installed under /opt/hexstrike-ai
  - Python virtualenv managed by virtualenvwrapper
  - Small shell script to start the HexStrike server
  - Desktop shortcut on Kali to launch the server in a terminal
- Fedora host
  - Same HexStrike repo cloned under /opt/hexstrike-ai
  - Separate virtualenv used only for the MCP client (hexstrike_mcp.py)
  - VS Code with the Roo Code extension
  - Roo configured with:
    - Ollama (local models) for lightweight HexStrike tasks
    - DeepSeek API for deeper analysis
    - an MCP server entry pointing to the Kali HexStrike instance
    - custom Modes:
      - HexStrike Light (Ollama)
      - HexStrike (normal, DeepSeek)
      - HexStrike Deep Analysis (DeepSeek Reasoner)
Kali: installing HexStrike server
I installed the server on Kali (Virt-Manager) because most of the necessary tools were already available, and any missing ones could easily be installed. Alternatively, you can create your own virtual machine - for example, a clean Debian - and only install the tools required by HexStrike on it. This takes a little more time, but ultimately you will have a setup that is solely for AI.
Requirements
On Kali you’ll want the usual tooling HexStrike can use, e.g.:
    sudo apt update
    # example set of common tools HexStrike can drive; extend as needed
    sudo apt install -y nmap gobuster nuclei whatweb wafw00f sqlmap
Install any other tools HexStrike mentions in its README as you go.
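Before installing anything, it can help to see which tools are already on the system. A minimal sketch (the tool names below are just examples, not HexStrike’s full requirement list):

```shell
# List which of the given tools are already on PATH.
# The tool names passed at the bottom are examples; see the HexStrike README
# for the full set it can use.
check_tools() {
  for t in "$@"; do
    if command -v "$t" >/dev/null 2>&1; then
      echo "ok      $t"
    else
      echo "missing $t"
    fi
  done
}

check_tools nmap gobuster nuclei whatweb sqlmap
```

Anything reported as missing can go straight into the apt install line above.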
Clone HexStrike under /opt
    sudo mkdir -p /opt/hexstrike-ai
    sudo chown "$USER":"$USER" /opt/hexstrike-ai
    # adjust the URL if you use a fork
    git clone https://github.com/0x4m4/hexstrike-ai.git /opt/hexstrike-ai
Create virtualenv with virtualenvwrapper
Assuming virtualenvwrapper is installed and sourced from your shell init (.bashrc / .zshrc):
    # -a binds the project directory, so "workon" also cd's into it
    mkvirtualenv -a /opt/hexstrike-ai hexstrike-ai
    pip install -r /opt/hexstrike-ai/requirements.txt
From now on workon hexstrike-ai drops you into /opt/hexstrike-ai with the venv active.
Server start script
Create a small script under /opt/hexstrike-ai:
    cat << 'EOF' | sudo tee /opt/hexstrike-ai/start_hexstrike.sh >/dev/null
    #!/bin/bash
    # activate the venv, then start the HexStrike server
    source /home/kali/.virtualenvs/hexstrike-ai/bin/activate
    cd /opt/hexstrike-ai
    exec python3 hexstrike_server.py
    EOF
    sudo chmod +x /opt/hexstrike-ai/start_hexstrike.sh
Replace kali with your username if different.
Desktop launcher on Kali
Create a .desktop file, e.g.:
    cat << 'EOF' > ~/Desktop/HexStrike.desktop
    [Desktop Entry]
    Type=Application
    Name=HexStrike Server
    Comment=Start the HexStrike AI server in a terminal
    Exec=qterminal -e /opt/hexstrike-ai/start_hexstrike.sh
    Icon=utilities-terminal
    Terminal=false
    EOF
    chmod +x ~/Desktop/HexStrike.desktop

(qterminal is Kali’s default terminal; swap in your own emulator if you use a different one.)
Now on Kali you just double-click HexStrike Server and it opens a terminal, activates the venv and starts hexstrike_server.py.
You can verify it’s running with:
    curl http://127.0.0.1:8888/health
Exposing Kali to Fedora
On Kali, get the VM IP:
    ip addr show
Assume it’s:
    192.168.122.211
From the Fedora host you should be able to reach the health endpoint:
    curl http://192.168.122.211:8888/health
If this fails, fix your VM networking (bridge / NAT+port forward) or firewall first. It all depends on which virtualisation tool you are using. I am using Virt-Manager with KVM, so it works out of the box.
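If you want a quick connectivity check that doesn’t depend on HexStrike being up, bash can test raw TCP reachability via its /dev/tcp pseudo-device. A small sketch (the IP and port match the example above):

```shell
# Print "reachable" or "unreachable" for host:port using bash's /dev/tcp.
# Useful for separating "server is down" from "network/firewall problem".
port_reachable() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

port_reachable 192.168.122.211 8888
```

If the port is unreachable but the curl on Kali itself works, the problem is networking, not HexStrike.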
Fedora: MCP client setup for HexStrike
On Fedora (in my case, but this applies to any Linux distribution that you use as your main operating system) we don’t need the full HexStrike toolchain, only the MCP client script.
Clone the repo
    sudo mkdir -p /opt/hexstrike-ai
    sudo chown "$USER":"$USER" /opt/hexstrike-ai
    git clone https://github.com/0x4m4/hexstrike-ai.git /opt/hexstrike-ai
Create a separate MCP venv
You can use virtualenvwrapper again or a plain venv. Below with virtualenvwrapper:
    mkvirtualenv hexstrike-mpc
    # only the MCP client dependencies are needed on the host
    pip install -r /opt/hexstrike-ai/requirements.txt
This env is only for hexstrike_mcp.py on Fedora.
You can test manually that MCP can talk to the Kali server:
    workon hexstrike-mpc
    python3 /opt/hexstrike-ai/hexstrike_mcp.py --server http://192.168.122.211:8888
If this connects and stays running (or shows reasonable logs), the MCP side is ok.
VS Code + Roo Code configuration
Install Roo Code extension in VS Code on Fedora.
MCP server config (HexStrike)
Open Roo’s MCP config file (Command Palette → search for “Roo: Open MCP Config”) and add:
    {
      "mcpServers": {
        "hexstrike-ai": {
          "command": "/home/user/.virtualenvs/hexstrike-mpc/bin/python",
          "args": [
            "/opt/hexstrike-ai/hexstrike_mcp.py",
            "--server",
            "http://192.168.122.211:8888"
          ]
        }
      }
    }
Adjust the username and IP if needed.
Roo will now be able to spawn hexstrike_mcp.py as a local MCP server that talks to HexStrike on Kali.
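A malformed config is a common source of “my MCP server doesn’t show up” problems, so it is worth validating the JSON syntax before reloading Roo. A sketch using only the Python standard library (the /tmp path is a stand-in for this demo; point it at wherever your Roo MCP settings actually live):

```shell
# Write a sample MCP config and check its JSON syntax with the stdlib.
# /tmp/mcp_settings.json is a demo path, not Roo's real settings file.
cat > /tmp/mcp_settings.json << 'EOF'
{
  "mcpServers": {
    "hexstrike-ai": {
      "command": "python3",
      "args": ["/opt/hexstrike-ai/hexstrike_mcp.py", "--server", "http://192.168.122.211:8888"]
    }
  }
}
EOF

python3 -m json.tool /tmp/mcp_settings.json >/dev/null && echo "valid JSON"
```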
Providers in Roo: DeepSeek + Ollama
If you haven’t set up local AI with Ollama, you can skip the Ollama configuration and focus solely on the API solution. If you are interested in a local AI environment, though, read the article “Building a Local AI Environment”, in which I explain how I did it.
In addition, I used DeepSeek’s API for the test. It is currently the cheapest on the market. I know that DeepSeek is Chinese and that everyone is spying on me. However, for the sake of testing, I am willing to accept this risk and remove my tinfoil hat. The same applies to VS Code. If you don’t want to do this, use VS Codium instead. Alternatively, follow HexStrike’s advice and use 5ire, Cursor, Claude Desktop or any other MCP-compatible agent.
Always adapt the guide to your environment, and decide for yourself where you can compromise and where you cannot. Disparaging comments help no one, but I do welcome feedback on your approach and reasoning.
Without further ado, start using the API from your favourite, trusted provider.
DeepSeek API provider
In Roo’s Providers:
- API Provider: DeepSeek
- API Key: your DeepSeek key
- Model: deepseek-chat (for normal use)
- Context window: 128k (default)
- Prompt cache: enabled (this dramatically cuts cost)
DeepSeek pricing at the time of writing is roughly:
- Input: $0.28 / 1M tokens
- Output: $0.42 / 1M tokens
- Cache hits: ~$0.03 / 1M tokens
In my test, a full recon/vuln check task consumed about 1M input tokens and ~3.2k output, landing around $0.04 total thanks to caching.
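If you want to sanity-check numbers like that yourself, the arithmetic is simple enough to script. A sketch using the prices quoted above (cache hits $0.03, uncached input $0.28, output $0.42 per 1M tokens):

```shell
# Estimate DeepSeek cost in USD from token counts, using the per-1M-token
# prices quoted in this article (they may have changed since).
estimate_cost() {
  local cached=$1 uncached=$2 output=$3
  awk -v c="$cached" -v u="$uncached" -v o="$output" \
    'BEGIN { printf "%.4f\n", c/1e6*0.03 + u/1e6*0.28 + o/1e6*0.42 }'
}

# ~1M cached input tokens, ~3.2k output tokens, as in the run described above
estimate_cost 1000000 0 3200
# → 0.0313
```

That lands close to the ~$0.04 Roo reported; the difference is the handful of non-cached input tokens and rounding.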
You can later add a second API config with deepseek-reasoner for heavy analysis.
Ollama provider
If you run Ollama, Roo can use it too:
- Provider: Ollama
- Base URL: http://localhost:11434
- Model ID: e.g. chat-ai:latest (a Qwen2.5-based chat model)
This becomes your cheap, local brain for lighter HexStrike tasks.
Roo Modes for HexStrike
To keep things clean I created three dedicated Modes:
HexStrike (normal)
Name: HexStrike
Slug: hexstrike
API Configuration: DeepSeek deepseek-chat
Role Definition:
    You are an AI pentesting orchestrator specializing in using tools via MCP, especially HexStrike.
Short description (for humans):
    Plan and automate pentests using HexStrike + MCP.
When to use:
    Use this mode when performing security assessments, reconnaissance, analysis of scan results, and planning follow-up actions with HexStrike.
Available Tools:
- ☑ Read Files
- ☐ Edit Files
- ☑ Run Commands
- ☑ Use MCP
- ☑ Use Browser (optional; you can leave it off if you want everything through HexStrike)
Custom Instructions:
    Prefer minimal, targeted scans instead of noisy full-scope scans.
HexStrike Light (Ollama)
This mode uses Ollama as provider and is meant for cheap, safe health checks and basic recon.
Name: HexStrike Light
Slug: hexstrike-light
API Configuration: Ollama → chat-ai:latest
Role Definition:
    You are a light HexStrike operator using local Ollama models.
Short description:
    Lightweight HexStrike operator – health checks and quick recon, no aggressive actions.
When to use:
    Use this mode for quick reconnaissance, availability checks, fingerprinting, and other low-impact tasks.
Available Tools:
- ☑ Read Files
- ☐ Edit Files
- ☑ Run Commands
- ☑ Use MCP
- ☐ Use Browser
Custom Instructions:
    Allowed examples: health checks, basic port checks, HTTP fingerprinting.
    Not allowed: exploitation, brute forcing, aggressive fuzzing. Escalate those to the normal HexStrike mode.
HexStrike Deep Analysis (DeepSeek Reasoner)
This is the “think hard” mode.
Name: HexStrike Deep Analysis
Slug: hexstrike-deep
API Configuration: DeepSeek deepseek-reasoner
Role Definition:
    You are an AI pentesting analyst focused on deep reasoning, correlation, and decision-making based on HexStrike results.
Short description:
    Deep analysis and correlation of HexStrike results using DeepSeek Reasoner.
When to use:
    Use this mode when you need deep analysis, threat modeling, interpreting scan results, or prioritizing findings and next steps.
Available Tools:
- ☑ Read Files
- ☐ Edit Files
- ☑ Run Commands
- ☑ Use MCP
- ☑/☐ Use Browser (your call)
Custom Instructions:
    Work slowly. Prefer reasoning first, actions second.
Example: scanning 0ut3r.space
With everything wired, I ran a task in Roo:
Task:
perform a scan of 0ut3r.space basic recon and vulnerability check
Roo + HexStrike executed a sequence like:
    Create initial todo list for reconnaissance
    Resolve DNS and identify target IPs
    Scan common ports and detect the Cloudflare proxy/WAF
    Enumerate directories and common files
    Run nuclei template scan
    Summarize findings and assign a risk rating
Under the hood you can see calls like:
    Roo wants to use a tool on the hexstrike-ai MCP server
Result summary
HexStrike produced an assessment along these lines:
- Target: 0ut3r.space
- IPs: Cloudflare anycast (e.g. 188.114.96.11, 188.114.97.11, plus IPv6)
- Ports: 80/443 open, Cloudflare proxy on both
- WAF: Cloudflare WAF active
- Directory enum: /archives, /categories, /css, /js, /lib, /page, /projects, /search, etc.
- Files: /robots.txt, /sitemap.xml accessible
- Nuclei: ~5.5k templates, 0 critical/high/medium vulnerabilities
- Risk: LOW
- Overall: static-style blog behind Cloudflare, no critical findings
Cost of the whole thing
For that task, Roo’s stats were roughly:
- Context Length: ~76.4k / 128k
- Tokens: ~1.0M input / 3.2k output
- Cache: ~1.0M cached
- API Cost: $0.04
- Task data size: ~5.47 MB
So that’s a full automated recon + vulnerability check + structured report on a real target for less than the price of a pack of chewing gum.
Closing
That’s the whole pipeline:
- HexStrike server running on Kali, started via a simple venv-aware script.
- Fedora host with a minimal MCP client setup in its own virtualenv.
- Roo Code orchestrating:
- Ollama for HexStrike Light tasks,
- DeepSeek Chat for standard HexStrike mode,
- DeepSeek Reasoner for deep analysis,
- HexStrike itself via MCP.
You can now repeat this pattern for other targets, tweak your modes, and integrate additional tools - or swap DeepSeek for another API provider if you ever want to.
Automated AI testing is not yet a perfect solution (commercial products probably claim otherwise and are more advanced), but it can simplify and speed up many tasks, and I believe it will only grow in importance. My plan is to run it after manual testing, compare the results, and work out which tasks I can trust the AI with and which I still need to do myself. Will the AI find more than I did? I’m not handing over all the manual fun to AI until I see for myself that it works well. Still, this is a good starting point, and I recommend it for bug bounty hunting, as well as for support, reconnaissance and CTFs when you get stuck. Just test its effectiveness yourself first.
Happy testing!