XAN: A Modern CSV-Centric Data Manipulation Toolkit for the Terminal
I prefer to use clickhouse-local for all my CSV needs, as I don't need to learn a new language (or CLI flags) and can just leverage SQL.
clickhouse local --file medias.csv --query "SELECT edito, count() AS count FROM table GROUP BY ALL ORDER BY count FORMAT PrettyCompact"
┌─edito──────┬─count─┐
│ agence     │     1 │
│ agrégateur │    10 │
│ plateforme │    14 │
│ individu   │    30 │
│ media      │   423 │
└────────────┴───────┘
With clickhouse-local, I can do a lot more, as I can leverage the full power of ClickHouse.

I used to use q for this sort of thing. Not sure if there are better choices now, as it has been a few years.
How does it compare with DuckDB, which I usually resort to? What I like with DuckDB is that it's a single binary, no server needed, and it's been happy so far with all the CSV files I've thrown at it.
clickhouse-local is similar to DuckDB: you don't need a clickhouse-server running in order to use clickhouse-local. You just need to download the clickhouse binary and start using it.
There are a few benefits to using clickhouse-local, since ClickHouse can simply do a lot more than DuckDB. One example is handling compressed files: ClickHouse supports compression formats ranging from zstd, lz4, snappy, gz, xz, bz2, zip, and tar to 7zip.

clickhouse local
ClickHouse local version 25.4.1.1143 (official build).
:)
Also, clickhouse-local is much more efficient at handling big CSV files [0].

clickhouse local --query "SELECT count() FROM file('top-1m-2018-01-10.csv.zip :: *.csv')"
1000000

Wanted to try it. The Debian package is of poor quality: not even sure if clickhouse local is included in there; I believe so, but there is no manpage, no docs at all, and no `clickhouse-server -h`.
Went to the official page looking for a tarball to download, found only the `curl|sh` joke.
Went to github looking for tagged tarballs, couldn't find any. Looked for INSTALL.md, couldn't find any.
Will try harder later, have to weep my tears for now.
ClickHouse is a single binary. It can be invoked as clickhouse-server, clickhouse-client, and clickhouse-local. The help is available as `clickhouse-local --help`. clickhouse-local also has a shorthand alias, `ch`.
This binary is packaged inside .deb, .rpm, and .tgz, and it is also available for direct download. The curl|sh script selects the platform (x86_64, aarch64 x Linux, Mac, FreeBSD) and downloads the appropriate binary.
> Debian package is of poor quality
Can you elaborate more, please? I would love it if you could say what can be improved to bring the Debian package up to standard.
Thank you for your interest.
My comment was really about the state of documentation ("there is no manpage, no doc at all, and no `clickhouse-server -h`"). More specifically:
% dpkg -S clickhouse-server | grep bin
clickhouse-server: /usr/sbin/clickhouse-server
% man clickhouse-server
No manual entry for clickhouse-server
% man clickhouse
No manual entry for clickhouse
% /usr/sbin/clickhouse-server --help
Unknown option specified: help
% /usr/sbin/clickhouse-server -h
Unknown option specified: h
% ls -l /usr/share/doc/clickhouse-server
total 60
-rw-r--r-- 1 root root   235 Dec  5  2022 changelog.Debian.amd64.gz
-rw-r--r-- 1 root root  1437 Dec  5  2022 changelog.Debian.gz
-rw-r--r-- 1 root root 33174 Dec 20  2018 changelog.gz
-rw-r--r-- 1 root root 15057 Oct 29  2022 copyright
I use SQLite in a similar manner, but I'll have to check this out.
Can't help but think how handy PowerShell is out of the box for tasks like this.
Translating the examples from the ReadMe, having read the file with:
$medias = Get-Content .\medias.csv | ConvertFrom-Csv
Previewing the file in the terminal:
xan view medias.csv
$medias | Format-Table
Reading a flattened representation of the first row:
xan flatten -c medias.csv
$medias | Format-List
Searching for rows:
xan search -s outreach internationale medias.csv | xan view
$medias | Where-Object { $_.outreach -eq "internationale" } | Format-Table
Selecting some columns:
xan select foundation_year,name medias.csv | xan view
$medias | Select-Object -Property foundation_year, name | Format-Table
Sorting the file:
xan sort -s foundation_year medias.csv | xan view -s name,foundation_year
$medias | Sort-Object -Property foundation_year | Select-Object -Property name, foundation_year | Format-Table
Deduplicating the file on some column (some medias of our corpus have the same ids on mediacloud.org):
xan dedup -s mediacloud_ids medias.csv | xan count && xan count medias.csv
$medias | Select-Object -ExpandProperty mediacloud_ids -Unique | Measure-Object; $medias | Measure-Object -Property mediacloud_ids
Computing frequency tables:
xan frequency -s edito medias.csv | xan view
$medias | Group-Object -Property edito | Sort-Object -Property Count -Descending
It's probably orders of magnitude slower, and of course plotting graphs and so on gets tricky. But for the simple type of analysis I typically do, it's fast enough, I don't need to learn an extra tool, and the auto-completion of column/property names is very convenient.

I find Nushell even better for these use cases:
$medias = open .\medias.csv

The above is the initial read and format into a table. I'm currently on my phone so can't go through all the examples, but knowing both PS and nu, nu has the better syntax.
EDIT:
Get data and view in table:
let $medias = http get https://github.com/medialab/corpora/raw/master/polarisation/medias.csv
$medias

Get headers:
$medias | columns

Get count of rows:
$medias | length

Get flattened, slightly more convoluted (caveat: there might be a better way):
$medias | each {print $in}

Search rows:
$medias | where $it.outreach == 'internationale'

Select columns:
$medias | select foundation_year name

Sort file:
$medias | select foundation_year name | sort-by foundation_year

Dedup based on column:
$medias | uniq-by mediacloud_ids

Computing frequency and histogram:
$medias | histogram edito

Yes, I find PowerShell is criminally underrated for these types of tasks. Even though it's open source and cross-platform, the stigma from its Windows-centric days is hard to overcome.
I was going to mention BurntSushi's excellent xsv, also written in Rust ... but I see xan mentions it's a fork + rewrite of xsv.
I use Pandas for most of my CSV work. It's super fast and very powerful. There's a bit of a learning curve. I can then use Python scripts to manipulate massive CSV files.
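For instance, the frequency-table example from earlier in the thread could be done in pandas roughly like this (a sketch; the inline sample and the `edito` column name are stand-ins for the real medias.csv):

```python
import io

import pandas as pd

# Inline sample standing in for medias.csv
data = io.StringIO("name,edito\nA,media\nB,media\nC,individu\n")
medias = pd.read_csv(data)

# Rough equivalent of `xan frequency -s edito`
counts = medias["edito"].value_counts()
print(counts)
```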
I tend to use csvkit for more complicated transformations, and OCaml's csvtool [0] for the simpler ones. For intermediate transformations I wrote my own csved [1] script, which, for every line of a CSV, reads it into @F, applies a Perl expression to that array, then writes it out. With the -h option you can also use the %F hash to access fields by name. It's very fast.
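The real csved is a Perl script; as a rough Python analogue of that per-row idea (the function name and expression interface here are only a sketch, mirroring @F as the list F and %F as the dict H):

```python
import csv
import io

def csved(src, expr, header=False):
    """Apply a Python expression to each CSV row.

    The expression sees the current row as the list F and, when
    header=True, as the dict H keyed by column name.
    """
    reader = csv.reader(src)
    out = io.StringIO()
    writer = csv.writer(out)
    names = next(reader) if header else None
    if names:
        writer.writerow(names)
    for F in reader:
        H = dict(zip(names, F)) if names else None
        # Evaluate the user expression against this row, like perl -e
        writer.writerow(eval(expr, {"F": F, "H": H}))
    return out.getvalue()

# Upper-case the second field of every row
result = csved(io.StringIO("a,b\n1,x\n2,y\n"), "[F[0], F[1].upper()]", header=True)
```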
It looks like xsv and xan are in the "csvkit but faster" niche, which is nice, but now I must learn another set of commands.
And there are now many more recent utilities called csvtool, including a Perl and a Python one.
People will do anything to avoid learning SQL.
I recently came across miller (https://github.com/johnkerl/miller). I don't know how these tools compare.
Something that would be insanely useful is if your tool could be used to do validations.
For example, being able to define data types for each column and mark required columns, and then run your tool as a validator and get the errors back as an array. That'd be amazing!
Coming from a dev who’s just over processing CSV files back into my apps.
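A minimal sketch of what such a validator could look like in Python (the schema format, column names, and error tuples here are assumptions for illustration, not a feature of xan):

```python
import csv
import io

# Hypothetical schema: column name -> (type converter, required?)
SCHEMA = {
    "name": (str, True),
    "foundation_year": (int, False),
}

def validate_csv(text, schema):
    """Return a list of (row_number, column, message) errors."""
    errors = []
    for i, row in enumerate(csv.DictReader(io.StringIO(text)), start=1):
        for col, (typ, required) in schema.items():
            value = row.get(col, "")
            if value == "":
                if required:
                    errors.append((i, col, "missing required value"))
                continue
            try:
                typ(value)
            except ValueError:
                errors.append((i, col, f"expected {typ.__name__}, got {value!r}"))
    return errors

# Row 2 is missing the required name and has a non-integer year
errors = validate_csv("name,foundation_year\nLe Monde,1944\n,abc\n", SCHEMA)
```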
Reading CSV into a duckdb table will give you that, along with a table for the errors and reason for error: https://duckdb.org/docs/stable/data/csv/reading_faulty_csv_f...
Could definitely be done as a small bash script.
You should take a look at the Frictionless data library https://framework.frictionlessdata.io/docs/guides/validating...
This looks really nice to use!
There are a lot of tools that one can use on Linux cli to work with csv. But many of them have become unmaintained. Or have terrible docs. Or have really awkward usage (looking at you, “yq”).
See also: csvkit (https://csvkit.readthedocs.io)
Yep, been using csvkit for years and it's so great. There's nothing I haven't been able to do, is there some reason to use Xan instead?
I use both csvkit and xsv. The syntax for csvkit is a bit easier for most of my uses, but xsv is way faster when I have larger files.
I know someone who uses csvtk (Golang), but haven't tried it yet. https://github.com/shenwei356/csvtk
I am using Goawk with good results
This could be a nice pre-filter for VisiData…?
I appreciate tools for CSV to tame that ungodly mess. But I loathe attempts to glorify the cursed and putrid sack of shit that it really is.