ServiceRadar is a distributed network monitoring system designed for infrastructure and services in hard-to-reach places or constrained environments. It provides real-time monitoring of internal services with cloud-based alerting to ensure you stay informed even during network or power outages.
A demo site is available at https://demo.serviceradar.cloud (login: demo@localhost, password: serviceradar).
Features
- Distributed Architecture: Multi-component design (Agent, Gateway, Core) for flexible edge deployments.
- WASM Plugin System: Securely extend monitoring with custom checks in Go or Rust. Runs in a hardware-level sandbox with zero local dependencies and proxied networking.
- Topology: GPU-native topology engine capable of rendering millions of interactive nodes and edges at 60fps, built on deck.gl, Apache Arrow for zero-copy streaming, and a WASM-native logic layer.
- Causal Engine: Real-time triage and isolation via DeepCausality (Rust). Employs hybrid filtering and roaring bitmaps to identify root causes and visually isolate an event's "blast radius" in microseconds.
- SRQL: An intuitive key:value query syntax for time-series and relational data.
- Unified Data Layer: Powered by CloudNativePG, TimescaleDB, PGVector, and Apache AGE for relational, time-series, and graph topology data.
- Observability: Native support for OTEL, GELF, Syslog, SNMP (polling/traps), BGP (BMP), and NetFlow.
- Graph Network Mapper: Discovery engine that maps interfaces and topology relationships via SNMP/LLDP/CDP.
- Security: Hardened with mTLS, RBAC, and SSO integration.
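As an illustration of the SRQL bullet above, a key:value query might look like the following. The syntax shown here is a hypothetical sketch for flavor only; consult the documentation for the actual grammar and field names.

```
device:edge-router-01 metric:latency status:critical since:15m
```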
WASM-Based Extensibility
ServiceRadar replaces traditional "script-and-shell" plugins with a modern WebAssembly runtime. This provides a generational leap in security and portability:
| Feature | ServiceRadar (WASM) | Traditional NMS (Nagios/Zabbix) | Enterprise (SolarWinds) |
|---|---|---|---|
| Isolation | Hardware Sandbox | None (OS Process) | None (User Session) |
| Dependencies | Zero (Static Binaries) | High (Local Libs/Python) | High (.NET/Runtimes) |
| Security | Capability-based (Proxy) | Sudo/Root access | Local Admin / WMI |
| Portability | Cross-platform WASM | Script-specific | Windows-centric |
| Auditability | Every network call logged | Invisible to Agent | Opaque |
Why WASM? Plugins are "FS-less" by default. They cannot access the host filesystem or raw sockets. Instead, they use a Network Bridge where the Agent proxies specific HTTP/TCP calls based on admin-approved allowlists.
Plug-in SDK
Go: https://code.carverauto.dev/carverauto/serviceradar-sdk-go
Rust: https://code.carverauto.dev/carverauto/serviceradar-sdk-rust
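To make the bridge model concrete, the sketch below shows the general shape of a WASM check in Go: compute a status and emit it as JSON for the Agent to collect. The type and function names here (CheckResult, runCheck) are illustrative assumptions, not the actual SDK API; see serviceradar-sdk-go for the real interfaces and how network access is requested through the Agent's proxy.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CheckResult is the general shape of a check result a plugin might
// report back to the Agent. Field names are illustrative, not the SDK's.
type CheckResult struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

// runCheck is a stand-in for plugin logic. In a real plugin, reachability
// would be determined via the Agent's Network Bridge (admin-allowlisted
// HTTP/TCP calls), since plugins cannot open raw sockets themselves.
func runCheck(reachable bool) CheckResult {
	if reachable {
		return CheckResult{Status: "ok", Message: "service responded"}
	}
	return CheckResult{Status: "critical", Message: "no response via bridge"}
}

func main() {
	out, _ := json.Marshal(runCheck(true))
	fmt.Println(string(out))
}
```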
Quick Installation (Docker Compose)
Get ServiceRadar running in under 5 minutes:
# Optional - set these in your .env
export SERVICERADAR_HOST=<my-vm-ip>
export GATEWAY_PUBLIC_BIND=0.0.0.0

git clone https://code.carverauto.dev/carverauto/serviceradar.git
cd serviceradar
docker compose pull
docker compose up nats-creds-init
docker compose up -d

# Get your admin password
docker compose logs config-updater
Access: http://localhost (login: root@localhost)
Kubernetes / Helm Deployment
ServiceRadar provides an official Helm chart for Kubernetes deployments, published to Harbor as an OCI artifact.
# Inspect chart metadata and default values
helm show chart oci://registry.carverauto.dev/serviceradar/charts/serviceradar --version 1.2.16
helm show values oci://registry.carverauto.dev/serviceradar/charts/serviceradar --version 1.2.16 > values.yaml

# Install a pinned release (recommended)
helm upgrade --install serviceradar oci://registry.carverauto.dev/serviceradar/charts/serviceradar \
  --version 1.2.16 \
  -n serviceradar --create-namespace \
  --set global.imageTag="v1.2.16"

# Track mutable images (staging/dev): pulls :latest and forces re-pull
helm upgrade --install serviceradar oci://registry.carverauto.dev/serviceradar/charts/serviceradar \
  --version 1.2.16 \
  -n serviceradar --create-namespace \
  --set global.imageTag="latest" \
  --set global.imagePullPolicy="Always"

# Get password for 'root@localhost' user created by helm install
kubectl get secret serviceradar-secrets -n serviceradar \
  -o jsonpath='{.data.admin-password}' | base64 -d
Note: if you omit global.imageTag, the chart defaults to latest. Set global.imagePullPolicy=Always when you want to pick up new pushes on restart.
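The note above can also be captured in a values.yaml passed to helm via -f; the keys below mirror the --set flags used in the install commands (a pinned-release sketch, assuming the chart's default value layout):

```yaml
# values.yaml - pin a release image (avoids the :latest default)
global:
  imageTag: "v1.2.16"
  imagePullPolicy: "IfNotPresent"
```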
Verifying Published Images
ServiceRadar publishes Cosign-signed images to Harbor. The public verification key is committed in docs/cosign.pub.
For the self-hosted keyless migration path, keep custom Sigstore trust material under docs/sigstore/README.md. The release scripts now support both legacy key-based verification and keyless verification against a custom trusted root.
Verify a released or immutable image tag with:
cosign verify \
  --experimental-oci11 \
  --key docs/cosign.pub \
  registry.carverauto.dev/serviceradar/serviceradar-core-elx:v1.2.16
For build-specific images, prefer the immutable sha-<commit> tags:
cosign verify \
  --experimental-oci11 \
  --key docs/cosign.pub \
  registry.carverauto.dev/serviceradar/serviceradar-core-elx:sha-ac23dc0ebcbee0d6a964dc8307826bf2a063536c
Successful verification proves the image was signed with the ServiceRadar release key and that the signature published in Harbor matches the requested image.
For self-hosted keyless verification, use the published trusted root and
certificate identity policy instead of docs/cosign.pub:
cosign verify \
--experimental-oci11 \
--trusted-root docs/sigstore/trusted-root.json \
--certificate-identity-regexp '<issuer-specific SAN regex>' \
--certificate-oidc-issuer https://issuer.example.com \
registry.carverauto.dev/serviceradar/serviceradar-core-elx:sha-ac23dc0ebcbee0d6a964dc8307826bf2a063536c

Docker Compose notes:
- Set APP_TAG in .env to pin release images (example: APP_TAG=v1.2.16).
- Set COMPOSE_FILE=docker-compose.yml:docker-compose.dev.yml in .env to default to the dev overlay without -f.
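Putting those notes together with the quick-install variables, a minimal .env for a pinned dev-overlay setup might look like this (values illustrative):

```
# .env
APP_TAG=v1.2.16
COMPOSE_FILE=docker-compose.yml:docker-compose.dev.yml
SERVICERADAR_HOST=<my-vm-ip>
GATEWAY_PUBLIC_BIND=0.0.0.0
```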
Chart URL: oci://registry.carverauto.dev/serviceradar/charts/serviceradar
Notes:
- Chart versions are like
1.2.16; ServiceRadar image tags are likev1.2.16. - If your cluster requires registry credentials, set
image.registryPullSecret(defaultregistry-carverauto-dev-cred).
For ArgoCD deployments, use registry.carverauto.dev/serviceradar/charts as the repository URL (without the oci:// prefix):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: serviceradar
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: serviceradar
  source:
    repoURL: registry.carverauto.dev/serviceradar/charts
    chart: serviceradar
    targetRevision: "1.2.16"
    helm:
      values: |
        global:
          imageTag: "v1.2.16"
Architecture
- Agent: Lightweight Go service on monitored hosts; manages WASM execution and local collection.
- Agent-Gateway: Ingestion point that receives gRPC streams from edge agents.
- Core (core-elx): Control plane (Elixir/Phoenix/Ash on the Erlang runtime) for orchestration and job scheduling (Oban).
- Web UI (web-ng): Real-time LiveView dashboard and APIs for configuration and visualization.
- NATS: NATS JetStream message broker for bulk ingestion streams.
- Collectors: Collect bulk data (netflow, logs, SNMP, etc.).
Documentation
For detailed guides on setup and security, visit: https://docs.serviceradar.cloud
For WASM SDK usage, visit: http://developer.serviceradar.cloud
Contributing
Contributions are welcome! Please feel free to submit a Pull Request. Join our Discord!
License
Apache 2.0 License - see the LICENSE file for details.