Contents
- Introduction
- Stability Status
- Getting Started / Installation
- Configuration
- Autocompletion
- Group Consuming
- Share Groups
- Local Fake Cluster
- API at a Glance
- Examples
Introduction
kcl is a complete, pure Go command line Kafka client. Think of it as your one stop shop to do anything you want to do with Kafka -- producing, consuming, administering, transactions, ACLs, share groups, and so on.
Unlike kcat (formerly kafkacat), which prides itself on being small, this binary is ~15M compiled. It is, however, still fast, has rich consuming and producing formatting options, and a complete Kafka administration interface that tracks the upstream protocol closely.
Stability Status
Treat the current command surface as a beta. The v0.17.0 release made a large, deliberate set of breaking changes across flags, config, and command layout (see the CHANGELOG for the full list). Further breaks are possible as users exercise the new surface and report issues; that feedback is explicitly welcome.
I've spent significant time integration testing franz-go, the client library this program is built on. If you use kcl, it is worth reading the stability status in the franz-go repo as well.
Getting Started
If you have a Go installation:
go install github.com/twmb/kcl@latest
This installs kcl from the latest release. You can optionally suffix with
@v#.#.# to install a specific version. When installed this way, kcl
automatically reports itself to brokers as kcl/<version> via the Kafka
protocol's client ID (useful for ACL audit logs and broker-side metrics).
Otherwise, download a release from the releases page.
Configuration
kcl is usable out of the box against localhost:9092; no config is required
for the common case of probing a local cluster. For real clusters you can
either use flags, environment variables, or a config file. The config file
supports multiple named profiles so that switching between clusters is easy.
Priority (highest wins):
- -B/--bootstrap-servers (seed brokers only)
- -X key=value flags (repeatable; any config key)
- KCL_<KEY> environment variables
- Active profile in the config file (--profile/-C or current_profile)
- Top-level config file keys (flat layout)
- Built-in defaults
By default, kcl reads its config from your OS user-config directory,
typically ~/.config/kcl/config.toml. The default path can be overridden
with --config-path or KCL_CONFIG_PATH.
The configuration supports TLS, SASL (PLAIN, SCRAM-SHA-256, SCRAM-SHA-512,
AWS_MSK_IAM), seed brokers, and client/server timeouts. Timeouts accept Go
duration strings (500ms, 5s, 2m30s).
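As a sketch of what a TLS + SASL profile might look like -- note that the tls and sasl_* key names below are guesses following the snake_case style of the documented keys, not verified against kcl; kcl profile --help is the authoritative reference:

```toml
[profiles.secure]
seed_brokers = ["kafka-secure:9093"]
dial_timeout = "5s"  # Go duration strings, as described above
# Hypothetical key names -- consult `kcl profile --help` for the real ones:
tls = true
sasl_method = "scram-sha-256"
sasl_user = "svc-reader"
sasl_pass = "change-me"
```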
For a full reference with examples, run kcl profile --help.
Quick example config
# ~/.config/kcl/config.toml
current_profile = "prod"

[profiles.prod]
seed_brokers = ["kafka-prod-1:9092", "kafka-prod-2:9092"]
broker_timeout = "10s"

[profiles.cicd]
seed_brokers = ["kafka-staging:9092"]
dial_timeout = "2s" # fail fast in CI
broker_timeout = "5s"
retry_timeout = "5s"

[profiles.local]
seed_brokers = ["localhost:9092"]
Then:
kcl topic list # uses "prod"
kcl -C cicd topic list # switches to "cicd" for one command
kcl -B other-host:9092 topic list # one-off override of seed brokers
Autocompletion
Thanks to cobra, autocompletion exists for bash, zsh, and powershell.
Bash example to put in .bashrc:
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
    . /etc/bash_completion
    . <(kcl misc gen-autocomplete -kbash)
fi
Group Consuming
Group consuming is supported with the -g/--group flag on kcl consume. The
default balancer is cooperative-sticky (incremental rebalancing, Kafka 2.4+),
which is incompatible with the older eager balancers (roundrobin, range,
sticky). If your existing group has members using eager balancing, pass
--balancer explicitly.
kcl group describe shows per-partition committed offsets, lag, and member
assignments. kcl group seek resets committed offsets via AlterOffsets;
kcl group offset-delete deletes specific partitions' committed offsets.
Share Groups
Share groups (KIP-932, Kafka 4.0+) are supported via kcl consume --share-group NAME. The --share-ack-type flag controls how each fetched record is
acknowledged:
- accept (default) -- mark the record as successfully processed.
- release -- put the record back into the pool for redelivery (bumps delivery count). Useful for peeking at records without consuming them.
- reject -- archive the record as unprocessable (bumps delivery count, no redelivery). Useful for force-draining or exercising DLQ-style flows.
kcl share-group has its own list, describe, seek, delete, and
offset-delete subcommands.
Local Fake Cluster
kcl fake runs a kfake cluster in-process and prints the listen
addresses. Point any Kafka client (including another kcl invocation) at
those addresses; SIGINT or SIGTERM exits cleanly. State is in-memory by
default; pass --data-dir PATH to persist, and --sync for fsync-on-write
durability.
This is NOT a production broker. kfake implements the user-facing Kafka protocol surface (produce, fetch, groups, transactions, ACLs, share groups) but intentionally omits broker-to-broker / KRaft-internal requests and is not performance-tuned. It's great for probing, learning, integration tests, CI pipelines, and demos without Docker.
kcl fake # 3 brokers on kfake-picked ports
kcl fake --ports 9092,9093,9094 # 3 brokers on specific ports
kcl fake --ports 9092 # single-broker cluster
kcl fake -d /tmp/kfake --sync # persistent, durable
kcl fake --seed-topic foo:10,bar:3 # pre-create topics
kcl fake --as-version 3.9 # cap advertised API versions
kcl fake --acls --sasl 'plain:$USER:$PW' # SASL superuser from env vars
kcl fake -c group.consumer.heartbeat.interval.ms=500 # broker config
kcl fake -l debug # verbose kfake logs
The --sasl flag accepts MECHANISM:USER:PASS (repeatable). Supported
mechanisms: plain, scram-sha-256, scram-sha-512. User and password
go through os.ExpandEnv, so quoting the argument keeps the shell from
expanding the variables first and lets the fake broker pull the secrets
from the environment itself.
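The quoting behavior can be seen with plain printf, independent of kcl; admin and hunter2 below are illustrative placeholders:

```shell
PW=hunter2
# Single quotes hand kcl the literal string; os.ExpandEnv then fills in
# $PW inside the kcl process, so the secret never appears in argv:
printf '%s\n' 'plain:admin:$PW'    # -> plain:admin:$PW
# Double quotes expand in the shell first, leaking the secret into argv:
printf '%s\n' "plain:admin:$PW"    # -> plain:admin:hunter2
```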
API at a Glance
The best way to explore kcl is kcl --help and then kcl <cmd> --help.
The top-level commands are:
kcl
acl -- list/create/delete ACLs
client-metrics -- manage client telemetry subscriptions (KIP-714)
cluster -- metadata, quorum, feature flags, leader elections, KRaft voters
config -- alter/describe topic, broker, group, client-metrics configs
consume -- consume records (classic group, share group, or direct)
dtoken -- delegation token commands
fake -- start a local in-process kfake cluster for testing
group -- classic / KIP-848 consumer group operations
logdirs -- per-partition log directory operations
misc -- api-versions, list-offsets, raw-req, error lookups, completion
produce -- produce records
profile -- manage connection profiles / config
quota -- alter/describe/resolve client quotas
reassign -- alter/list partition reassignments
share-group -- share group operations (KIP-932)
topic -- list/create/describe/delete/add-partitions/trim-prefix
txn -- describe active transactions / producers
user -- SCRAM user credential management
Output format for every command is controlled by the global --format flag
(text, json, or awk). JSON output is stable ({_command, _version, ...}
envelope) and suitable for piping into jq. Text output is tab-aligned;
column names are hyphen-delimited (GROUP-ID, LEADER-EPOCH, etc.) so awk
pipelines are straightforward. Note that consume and produce
deliberately repurpose --format as the per-record format string
(records don't fit the table envelope).
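The hyphen-delimited, tab-aligned text layout means ordinary awk pipelines work. A minimal sketch using simulated rows (the real rows come from commands like kcl group describe; the LAG column here is illustrative):

```shell
# Simulated tab-separated rows in the style of kcl's text output; real
# rows come from kcl itself. Skip the header, print groups with lag.
printf 'GROUP-ID\tSTATE\tLAG\nmygroup\tStable\t0\nslowgroup\tStable\t1200\n' |
  awk -F'\t' 'NR > 1 && $3 > 0 { print $1 }'
# -> slowgroup
```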
For tooling and agents that want to introspect kcl's entire command tree
programmatically (names, flags, examples, help text), use the global
--help-json flag at the root:
kcl --help-json | jq '.commands[] | .name'
Interactive confirmation prompts on destructive commands (group seek,
share-group seek, topic trim-prefix, config alter, acl delete)
are skipped with --yes/-y.
Examples
Consuming
Consume topic foo, print values:
kcl consume foo
Advanced formatting -- key, value, and headers:
kcl consume foo -f "KEY=%k, VALUE=%v, HEADERS=%{%h{ '%k'='%v' }}\n"
Group consuming from topics foo and bar:
kcl consume -g mygroup foo bar
Share group consuming (Kafka 4.0+), peeking at records without consuming:
kcl consume --share-group sg1 --share-ack-type release foo
From a specific timestamp:
kcl consume foo -o @2024-01-15
kcl consume foo -o @-1h # 1 hour ago
kcl consume foo -o @-30m:@now # 30 minutes ago to now
Producing
Newline-delimited value to topic foo:
echo fubar | kcl produce foo
Values from a file (newline-delimited, read from stdin):
kcl produce foo < values.txt
Produce key k and value v from a single line:
echo "key: k, value: v" | kcl produce foo -f 'key: %k, value: %v\n'
Produce with headers:
echo "k v 2 h1 v1 h2 v2" | kcl produce foo -f '%k %v %H %h{%k %v }\n'
Administering
kcl topic create foo # uses cluster default partitions/replication
kcl topic create foo -p 6 -r 3 # 6 partitions, 3 replicas
kcl topic describe foo # partitions, configs, health
kcl topic describe --topic-id <uuid> # lookup by UUID (KIP-516)
kcl cluster metadata # broker list, controller
kcl cluster describe-cluster # admin view with fenced brokers
kcl cluster features describe # feature flags (KIP-584)
kcl cluster features update share.version=1 --upgrade-type safe-downgrade
kcl group list # classic + KIP-848 + share groups
kcl group describe mygroup
kcl group seek mygroup --to end --yes
kcl acl list
Probing against a local fake cluster
Start a fake in one shell, use it from another:
# shell 1
kcl fake --seed-topic foo:3
# shell 2 (fake prints 127.0.0.1:<port> -- pick any)
kcl -B 127.0.0.1:<port> topic list
seq 1 5 | kcl -B 127.0.0.1:<port> produce foo
kcl -B 127.0.0.1:<port> consume foo -n 5 -o start
Or set a persistent profile for the fake so -B isn't needed on each
invocation:
kcl profile create # interactive; name it e.g. "fake"
kcl -C fake topic list
Error and exit codes
Commands exit non-zero on any per-item failure (e.g. deleting one topic
out of three, where one doesn't exist, exits 1). --format json output
on stdout is always valid JSON; all errors go to stderr. This makes kcl
safe to script against.