A 1999-era XML workflow for modern scan triage
April 3, 2026 · Fabian Kopp
Raw Nmap output gets awkward quickly once a subnet scan grows beyond a handful of hosts. When that scan finishes, the real question is usually not “what is open?” It is “what should I look at first?”
That is not really a flaw in Nmap. It is a network scanner after all, not a triage interface.
NmapView is an XSLT stylesheet that turns Nmap scan data into a single interactive HTML report using xsltproc. It is meant to make scan triage faster for the operator while giving the review process a more standardized structure. As a secondary benefit, the same report is easy to share as a single HTML file with teammates or clients.
Setup
If you already have Nmap XML data, the workflow is short:
```shell
# Download the stylesheet
curl -fsSL -o NmapView.xsl \
  https://github.com/dreizehnutters/NmapView/releases/latest/download/NmapView.xsl

# Run the transformation
xsltproc -o report.html NmapView.xsl scan.xml
```
Open report.html in a browser and you get a single analysis interface: sortable host and service tables, grouped service summaries, host scoping, client-side export, and plots that make patterns across hosts and services easier to spot.
Why XSLT?
The short answer: it fits the input and the environment.
On a jump box or during an internal engagement, I usually do not want to think about Python environments, a local database, package versions, or whether some helper script still matches the scan output in front of me.
xsltproc is old, deterministic, and widely available. Nmap already produces XML. XSLT already knows how to transform XML. Browsers are already good at tables and client-side interaction. That makes the pipeline simple:
- Nmap writes scan data to an XML file.
- `xsltproc` transforms it into HTML using `NmapView.xsl`.
- The browser handles filtering, grouping, sorting, and plots.
That is the whole pipeline.
It also fits the shape of the input well. Nmap XML is already a nested tree of hosts, ports, services, scripts, and metadata, so the transform can work directly on that structure without translating it into an intermediate schema or writing a separate parser first. The limits of XSLT 1.0 show up once the report becomes more interactive: complex grouping, scoped recomputation, and richer analysis logic are possible, but awkward enough that it is cleaner to let the stylesheet build the document skeleton and let JavaScript take over the dynamic parts in the browser.
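That nested shape is easy to see with a trimmed, hand-written fragment in the spirit of Nmap's output (attributes abbreviated); sketched here in Python rather than XSLT just to show how directly the tree maps to table rows:

```python
import xml.etree.ElementTree as ET

# A hand-written fragment shaped like Nmap's XML output
# (real scans carry far more attributes and elements).
doc = """
<nmaprun>
  <host>
    <address addr="10.0.0.1" addrtype="ipv4"/>
    <ports>
      <port protocol="tcp" portid="22">
        <state state="open"/>
        <service name="ssh" product="OpenSSH"/>
      </port>
    </ports>
  </host>
</nmaprun>
"""

root = ET.fromstring(doc)
# Each table row is just a walk down the existing tree: host -> port -> service.
rows = [
    (host.find("address").get("addr"), port.get("portid"), port.find("service").get("name"))
    for host in root.findall("host")
    for port in host.findall(".//port")
]
# rows == [("10.0.0.1", "22", "ssh")]
```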
It remains usable for fairly large scans because filtering, grouping, and light analysis happen directly in the browser.
The approach follows the same basic idea as the original nmap.xsl from 2005 that ships with Nmap, and later community work like honze-net/nmap-bootstrap-xsl, from which NmapView was forked and then extended with plots, quality-of-life features, and aggregation views aimed more directly at triage.
How I use the report
The core workflow is built around moving from broad context to narrow investigation. It is not the only sane way to work through a scan. Some people live in grep, some jump straight into msfconsole, and some have their own parser stack. This is simply the workflow that keeps me oriented during timed engagements and makes handoffs and reporting easier.
In practice, the loop is short: take a glance at targets in the Host Overview section via table or plot, scope down to the odd host or cluster, then pivot into Open Services and Service Summary for the actual clues.
To keep the examples concrete, the sample report in this repo contains a small case study based on a 2026 Hack The Box subnet scan with 104 open ports and 16 unique services, enough to feel noisy before the grouped views start to simplify it.
1) Host Overview
The Host Overview is where I establish the baseline for a scan. It gives sortable host-level context: state, vendor, OS guess, port counts, uptime estimate, hop count, and a local rarity score.
Rarity is not trying to be a universal risk score. It is simply asking: which hosts look unusual within this specific scan? Under the hood, the score is local to the report: common services contribute very little, while services that only appear on one or two hosts contribute much more, loosely following the intuition behind self-information. If twenty systems look similar and one host exposes a noticeably different service mix, I want that host near the top immediately.
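NmapView's actual scoring lives in the stylesheet and browser code; as a rough illustration of the self-information intuition only, a toy sketch might look like this (data and function names are hypothetical):

```python
import math
from collections import Counter

def rarity_scores(host_services):
    """Score each host by how unusual its service mix is within this scan.

    host_services: dict mapping host -> list of service names.
    A service seen on every host contributes 0; a service seen on only
    one host contributes -log2(1/n), following self-information.
    """
    n_hosts = len(host_services)
    counts = Counter(s for svcs in host_services.values() for s in set(svcs))
    return {
        host: sum(-math.log2(counts[s] / n_hosts) for s in set(svcs))
        for host, svcs in host_services.items()
    }

scan = {
    "10.0.0.1": ["ssh", "http"],
    "10.0.0.2": ["ssh", "http"],
    "10.0.0.3": ["ssh", "http", "telnet"],  # telnet appears once -> rare
}
scores = rarity_scores(scan)
# the host exposing the one-off telnet service scores highest
```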
Finding: Sorting by `Rarity` pushes two hosts to the top. They expose more ports than the rest and diverge at the OS level; the follow-up view shows open ports of the possible domain controllers.
2) Open Services
This table flips the perspective from hosts to exposed endpoints. Instead of reading systems one by one, I can sort by port number or host count, filter on service names, and see where odd ports, odd protocols, or uncommon software versions show up. It also pulls version strings, extra service info, HTTP-derived hints, CPEs, and, when present, enriched NSE script output into one row. That makes it much faster to separate expected services from unknown ones. Port numbers are clickable too, so when an endpoint looks web-facing I can open it in a new tab immediately for a quick visual pass. That small shortcut matters more than it sounds when I am moving quickly through a larger scan.
Finding: Filtering for `SSL` surfaces the SSL/TLS-enabled hosts immediately, along with the certificate clues attached to them.
For pentest work, this is often the fastest way to answer questions like:
- Which ports are only exposed on one or two hosts? Sort by `Count`.
- Where is HTTP or HTTPS present? Search for `https`.
- What services seem unexpected? Sort by `Port` and scan the table.
- Which NSE results are worth looking into for follow-up? Expand the `Details`.
This is also where the report becomes a working pivot point rather than just a viewer. I can search on CPE strings, inspect HTTP-derived fields inline, and review NSE script output or easily open web services from the report without dropping back into raw Nmap data.
The Service Distribution Across Hosts stacked bar chart tells me which services are baseline noise and which only appear once or twice.
Finding: The distribution plot makes a one-off `tcp/54321` service stand out while confirming that services like `tcp/22` are common baseline noise.
3) Service Summary
The Service Summary is usually the highest-value section for me.
Instead of showing every row flat, it groups services first by detected service name and then by the product/version tuple within that service. The XSLT emits a flat inventory of service records with the relevant metadata attached, and the browser builds the grouped view from that data on the client side. That keeps useful script-derived details attached to each variant, so HTTP headers, tech-stack hints from X-Powered-By or fingerprint strings, certificate data, and other high-signal output stay close to the service variant they belong to. That makes version drift visible without forcing you to scan dozens of similar rows manually.
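The grouping step itself is simple to sketch. The following toy (field names assumed, not NmapView's real record format) shows the two-level service → product/version grouping that makes version drift visible:

```python
from collections import defaultdict

# Illustrative service records as a flat inventory (fields assumed).
records = [
    {"host": "10.0.0.1", "port": 22, "service": "ssh", "product": "OpenSSH", "version": "8.9"},
    {"host": "10.0.0.2", "port": 22, "service": "ssh", "product": "OpenSSH", "version": "7.4"},
    {"host": "10.0.0.3", "port": 80, "service": "http", "product": "Apache httpd", "version": "2.4.57"},
    {"host": "10.0.0.4", "port": 80, "service": "http", "product": "Apache httpd", "version": "2.4.57"},
]

# Group first by detected service name, then by the (product, version) variant.
summary = defaultdict(lambda: defaultdict(list))
for r in records:
    summary[r["service"]][(r["product"], r["version"])].append(r["host"])

# summary["ssh"] now holds two OpenSSH variants instead of two flat rows,
# so the version drift between 8.9 and 7.4 is immediately visible.
```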
That matters even more when systems look similar at first glance. In the sample report, SSH alone splits across multiple OpenSSH versions, and the web layer breaks into distinct families of Apache, nginx, Gunicorn, Jetty and IIS.
Finding: Grouping the `http` service collapses the scan into three Apache version families instead of a long, flat list of endpoints.
This is where flat scan noise usually turns into a smaller set of concrete variants.
The matrix views fit naturally next to this section. The Host-Port Matrix is good for spotting hosts that carry unusual combinations of ports, especially when most of the environment follows a repeated build pattern.
Finding: The matrix outliers reveal a small cluster of hosts with `tcp/8080` open; the follow-up view shows the shared details behind that cluster.
Taking the data further
The three table views and six plots of NmapView also work well as handoff points; that makes the same HTML report useful not just for my own triage, but also for sharing narrowed scan results with teammates or clients. I can filter the current data, copy the visible rows to the clipboard, or export what is on screen to CSV, Excel, or JSON.
That is enough for the common handoff cases. If I narrow Open Services to the endpoints I care about, or scope the report to a smaller set of hosts first, I can quickly move that smaller working set into httpx for web follow-up and screenshots, nuclei for broad template-based checks, Nessus or OpenVAS for broader vulnerability checks on a narrowed host set, or testssl.sh and ssh-audit for protocol-specific compliance checks.
The same applies to local LLM analysis. Once the report has been narrowed to a host scope or filtered working set, the exported CSV or JSON is often a better prompt surface than raw Nmap XML because the high-signal data is grouped, labeled, and easier to slice into the specific section I actually want to analyze.
Limits
- Dependencies: The report fetches some frontend libraries (DataTables, Plotly) from CDNs, so it is not fully offline by default.
- Scan quality: The report is only as useful as the underlying scan data. Weak `-sV` coverage, sparse NSE output, and low-confidence service detection reduce the value of the grouped views. In particular, service names with confidence `< 5` are renamed to `unknown` for the grouped naming logic.
- Scale: It remains usable for moderately large reports (~500 hosts), but the model is still a single generated HTML artifact with client-side processing, so very large scans will eventually hit browser memory and responsiveness limits.
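The low-confidence rule from the scan-quality note above is worth making explicit, since it shapes the grouped views. A minimal sketch (threshold taken from the text, function name hypothetical):

```python
def display_name(name, conf):
    """Collapse low-confidence detections for grouping.

    Mirrors the rule described in the limits above: service names detected
    with a confidence below 5 are grouped under 'unknown' rather than
    polluting the per-service buckets with weak guesses.
    """
    return name if conf >= 5 else "unknown"

# A strong match keeps its name; a weak banner guess lands in the
# catch-all bucket: display_name("http", 9) -> "http",
# display_name("cisco-sccp", 3) -> "unknown".
```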
Even with those limits, Nmap already solves collection of network data well. The slower part starts after the scan finishes, when you need to decide what deserves attention and explain that decision to someone else. NmapView is an attempt to shorten that step without adding more infrastructure: XML in, one HTML file out, analysis runs in the browser.
- Repo: dreizehnutters/nmapview
- Demo: möbius.band/report.html
- Comments: admin (at) möbius.band