Jaq – A jq clone focused on correctness, speed, and simplicity
> [[]] | implode crashes jq, and this was not fixed at the time of writing despite being known for five years.
Well, taking into account that jq development has been halted for 5 years and only recently revived again, it's no wonder that bug reports have been sitting there for that time, both well known and new ones. I bet they'll get up to speed and slowly but surely clear the backlog that has built up all this time.
Yep, it was fixed in 1.7: https://github.com/jqlang/jq/pull/2646
Why was it halted?
I think the original devs just got burnt out for a while https://github.com/jqlang/jq/issues/2305#issuecomment-157263...
It's so awesome when projects shout out other projects that they're similar to or inspired by or not replacements for. I learned about https://github.com/yamafaktory/jql from the readme of this project and it's what I've been looking for for a long time, thank you!
That's not to take away from jaq by any means, I just find the jq-style syntax uber hard to grok, so jql makes more sense for me.
Very nice in this regard is gron, too. It simply flattens any json into lines of key value format, making it compatible with grep and other simple stream operations.
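The flattening gron does can be sketched in a few lines of Python (a toy illustration of the output format, not gron's actual code):

```python
import json

def gron(obj, path="json"):
    """Flatten a parsed JSON value into gron-style 'path = value;' lines."""
    if isinstance(obj, dict):
        yield f"{path} = {{}};"
        for k, v in obj.items():
            yield from gron(v, f"{path}.{k}")
    elif isinstance(obj, list):
        yield f"{path} = [];"
        for i, v in enumerate(obj):
            yield from gron(v, f"{path}[{i}]")
    else:
        yield f"{path} = {json.dumps(obj)};"

lines = list(gron({"user": {"name": "Ann", "tags": ["a"]}}))
assert 'json.user.name = "Ann";' in lines
assert 'json.user.tags[0] = "a";' in lines
```

Because every line carries the full path, matching lines can even be fed back through `gron -u` to reconstruct a JSON fragment.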
And also https://github.com/adamritter/fastgron that I've just discovered.
This is brilliant, thank you for sharing!
Nice find. I think I'll try it out. Although I was hoping for a real SQL type experience. I don't understand why no one just copies SQL so I can write a query like "SELECT * FROM $json WHERE x>1".
Everyone seems to want to invent their own new esoteric symbolic query language as if everything they do is a game of code golf. I really wish everyone would move away from this old Unix mentality of extremely concise, yet not-self-evident syntax and do more like the PowerShell way.
> Although I was hoping for a real SQL type experience. I don't understand why no one just copies SQL so I can write a query like "SELECT * FROM $json WHERE x>1".
With somewhat tabular data, you can use sqlite to read the data into tables and then work from there.
Example 10 from https://opensource.adobe.com/Spry/samples/data_region/JSONDa... (slightly fixed by removing the ellipsis) results in this interaction:
```
sqlite> select json_extract(value, '$.id'), json_extract(value, '$.type') from json_each(readfile('test.json'), '$.items.item[0].batters.batter');
1001|Regular
1002|Chocolate
1003|Blueberry
1004|Devil's Food
sqlite> select json_extract(value, '$.id'), json_extract(value, '$.type') from json_each(readfile('test.json'), '$.items.item[0].topping');
5001|None
5002|Glazed
5005|Sugar
5007|Powdered Sugar
5006|Chocolate with Sprinkles
5003|Chocolate
5004|Maple
```
Instead of "select", this could also flow into freshly created tables using "insert into" for more complex scenarios.

While I agree with the general sentiment on preferring well-defined and explicit standards as opposed to "cute" custom-made languages, in this case I am not convinced that SQL would be the best candidate for querying nested structures like JSON. Something like XPath, maybe.
I agree, it wouldn't be the best to handle all json edge cases, but it would be a super easy way to quickly get data from a big chunk of simple json and you could just use subqueries or query chaining for nested results.
For anyone who hasn't used powershell, this is the difference I'm talking about. I would not be able to write either of these without looking up the syntax. But knowing very little about powershell, I can tell exactly what that command means while the bash command, not so much.
```powershell
$json | ConvertFrom-Json | Select-Object -ExpandProperty x
```
```bash
echo $json | jq '.x'
```
On the other hand, I find the bash one clear and concise. That PowerShell example is so verbose, it'd drive me crazy to do any sort of complex manipulation this way! To each their own, I guess.
If all I was doing is writing code, I agree. But like most developers, I think I read a lot more code than I write.
Be the change you want to see.
I personally don't understand why people aren't willing to learn instead. It's not hard to sit down and pick up a new skill, and it's good to step out of one's comfort zone. I personally hate PowerShell syntax; brevity is the soul of wit, and PS could learn a thing or two from bash and "the Linux way".
We seem obsessed with molding the machine to our individual preferences. Perhaps we should obsess over the opposite: molding our mind to think more like the machine. This keeps a lot of things simple, uncomplicated, and flexible.
Does a painter wish for paints that were more like how he wanted them to be? Sure, but at the end of the day he buys the same paint everyone else does and learns to work with his medium.
> I personally don't understand why people aren't willing to learn instead
You misunderstand. As programmers we learn every day, obviously that's one of our strong points.
The real problem is that every single tool wants you to go deep and learn their particular dyslexic mini programming language syntax or advanced configuration options syntax. Why? We have TOML, we have SQL, we have a bunch of pretty proven syntaxes and languages that do the job very well.
A lot of these tool-authoring programmers suffer from a severe protagonist syndrome, which, OK, is their own personal character development to grapple with, but in the meantime we working programmers are burning out because everyone and their dog wants us to learn their own brainchild.
> We seem obsessed with molding the machine to our individual preferences. Perhaps we should obsess over the opposite: molding our mind to think more like the machine.
How so? Everything in "the machine" was created by other humans; from the latest CLI tool, to the CPU instruction set. As computer users, given that it's practically impossible for a single person to be familiar with all technologies, we must pick our battles and decide which technology to learn. Some of it is outdated, frustrating to use, poorly documented or maintained, and is just a waste of time and effort to learn.
Furthermore, as IT workers, it is part of our job to choose technologies worth our and our companies' time, and our literal livelihood depends on honing this skill.
So, yes, learning new tools is great, but there's only so much time in a day, and I'd rather spend it on things that matter. Even better, if no tool does what I want it to, I have the power to create a new one that does, and increase my development skills in the process.
> I personally don't understand why people aren't willing to learn instead.
Mostly because if you don't use it that often then it ends up forgotten again. I can smash out plenty of trivial regexes, but anything even slightly complicated means I'm learning backreferences again for the 6th time in a decade.
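For the record, a backreference (`\1`) re-matches whatever the first capture group captured, e.g. to catch doubled words; a quick Python illustration:

```python
import re

# \1 re-matches the exact text that group 1 captured
doubled = re.compile(r"\b(\w+) \1\b")
assert doubled.search("it was was fine") is not None   # catches "was was"
assert doubled.search("it was quite fine") is None
```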
In my case, my memory doesn't work that way. I have learnt jq several times but I don't use it frequently enough to retain the knowledge.
A better tool for me would be something that uses JS syntax but with some syntactic sugar and a great man page.
I have that same problem, the advanced features I use too little to remember. Then I started working on a configuration language that should have a non-surprising syntax (json superset, mostly inspired by Python, Rust, Nix). And it turns out, this works well as a query language for querying json documents. https://github.com/ruuda/rcl Here is an example use case: https://fosstodon.org/@ruuda/111120049523534027
What is "JS syntax"? And can you write a frontend for jq that converts "JS syntax" to jq syntax?
And is the jq man page poor? I'm sure they will accept patches for it.
The jq man page is pretty good IMO. It’s where/how I learned to use jq
While I appreciate the sentiment for bending your mind, rather than the spoon, the practical reality is that developer time is far costlier than compute time.
It is easier to map compute structures and syntax to existing mental models than to formulate new mental models. The latter is effortful and time-consuming.
So, given the tradeoffs, I could learn a new language, or leverage an existing language to get things done.
And yes, given sufficient resources (particularly time), developing new mental models is ideal, but reality often prohibits the ideal.
If the crux is that you want something that maps closer to your personal mental model than what's available, I guess the other option is to build the missing tool yourself. That's the other side of "be the change you want to see".
> So, given the tradeoffs, I could learn a new language, or leverage an existing language to get things done.
There is also the option to create a new language (jqsql or whatnot), optionally sharing it publicly.
If you do this, I think you'd find out why, beyond very trivial stuff, sibling commenters have a point that SQL isn't a good fit for nested data like JSON. It would still be a useful exercise!
The machine is uncomplicated and simple? That is the last way I would describe modern CPUs and their peripherals.
The whole point of programming is to bend the machine towards humans, not the other way around.
“Brevity is the soul of wit”
Maybe we have different goals but I don’t get paid to write witty code and I don’t think anyone on my team would appreciate it if I did.
I don’t think the redeeming qualities of brevity in prose transfer to something like terse syntax.
Yeah I don't understand why people aren't willing to learn SQL too.
brevity is not clarity.
DuckDB does just this, https://duckdb.org/docs/archive/0.9.2/guides/import/json_imp...
The datafusion cli https://arrow.apache.org/datafusion/user-guide/cli.html can run SQL queries against existing json files.
SQL is built for relational/tabular data, JSON is not relational and usually not tabular.
Well there is nothing saying you can't put relational data in json format.
But that wouldn't help query arbitrary JSON files which was the point.
I think the closest I've seen to a SQL experience for JSON is how steampipe stores json columns as jsonb datatypes and allows you to query those columns w/postgres JSON functions etc.
- https://steampipe.io/docs/sql/querying-json#querying-json #example w/the AWS steampipe plugin (I think this is a wrapper around the AWS go SDK)
- https://hub.steampipe.io/plugins/turbot/config #I think this lets you query random json files.
(edited to try to fix the bulleting)
I just checked the GitHub page [1] for Microsoft PowerShell. It looks like it's written in C# and available on Win32/macOS/Linux, where .NET is now supported. Do you use PowerShell only on Win32, or on other platforms also?

> do more like the PowerShell way

Can you give an example of something that PS can do built-in for text processing, instead of a proprietary symbolic query language?

> Everyone seems to want to invent their own new esoteric symbolic query language

By "the PowerShell way" I don't mean actually using PowerShell. I just mean using verbose, descriptive commands, so that one can easily understand what a command does without having a working knowledge of the scripting language.
Have you looked at [duckdb's JSON support](https://duckdb.org/docs/extensions/json.html)? It's pretty transparent and you can do exactly what you say: `select * from 'file.json' where x > 1` will work with "simple" json files like {"x": 1, "y": 2} and [{"x": 1, "y":2}, {"x":2, "y":3}]
> I don't understand why no one just copies SQL so I can write a query like "SELECT * FROM $json WHERE x>1".
You could ask the same with respect to XML too -- why XPath/XSLT instead of SQL?
The problem is that SQL isn't that convenient when you're querying data in a free-form and recursive schema. Especially the latter, because recursive queries in SQL are just not pithy. I say this as someone who loves SQL.
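For what it's worth, SQLite's json_tree table-valued function gives you a recursive walk of a document without writing a recursive CTE; a small sketch (assuming a JSON1-enabled SQLite build, the default in modern versions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
doc = '{"a": {"b": {"x": 1}}, "c": [{"x": 2}]}'
# json_tree visits every node in document order; filter on the key name
# (roughly what jq spells as `.. | .x? // empty`)
rows = con.execute(
    "SELECT value FROM json_tree(?) WHERE key = 'x'", (doc,)
).fetchall()
assert [r[0] for r in rows] == [1, 2]
```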
OctoSQL[1] does a pretty good job of allowing you to query JSON (and CSV) with SQL.
nushell and pwsh. I'm not familiar with nushell, but pwsh offers where, select, foreach, group, sort.
N.B. those aliases are not created by default on *nix
It's pipeline-based and procedural, but you can be very declarative in data processing
I do sympathise with that a bit, but for me at least it does not look like jql is the solution:
this:

```
'|={"b""d"=2, "c"}'
```

appears to be something like jq's:

```
'select(."b"."d" == 2 or ."c" != null)'
```

which.. is obviously longer, but I think I prefer it, it's clearer? (actually it would be `.[] | select(...)`, but I'm not sure something like that isn't true of jql too without trying it; I don't know if the example's intended to be complete, and I don't think it affects my verdict)
jql homoiconicity looks rather ... Lispy. Like you could use it on itself, write "Macros", etc.
> I just find the jq-style syntax uber hard to grok
You're not alone. ChatGPT (3.5) is terrible at it also, for anything non-trivial.
I'm not sure if that's because of the nature of the jq syntax, but I do wonder.
Well ChatGPT doesn't 'grok' anything, really..
I love the idea of jq but i use it infrequently enough that I have to search the manual for how to use their syntax to get what I want.
Sadly 99% of what I do with jq is “| jq .”
I have the same problem. Then, unrelated, I started building a configuration language, and it turned out it's quite nice for querying json [1]. Here is an example use case that I couldn't solve in jq but I could in RCL: https://fosstodon.org/@ruuda/111120049523534027
I had the same problem, keeping me from really exploiting the power of jq. But for this and similar cases I am really glad about copilot being available to help. I just tell it what I need, together with a reduced sample of the source-json, and it generates a correct jq-script for me. For more complex requirements I usually iterate a bit with Copilot because it is easier and more reliable to guide it to the solution gradually than to word everything out correctly in the question in the first go. Also I myself often get new and better ideas during the iterations than I had in the beginning. Probably works the same with ChatGPT and others.
Me too; but recently I used ChatGPT to just quickly give me the jq syntax I needed: https://chat.openai.com/share/40b68d73-d2dd-412d-867f-9f375e...
https://github.com/01mf02/jaq/blob/main/Cargo.lock
That's a lot of dependencies..
Yes it is, compared to gojq https://github.com/itchyny/gojq/blob/main/go.mod
How does that usually play out in the Rust ecosystem? Lots of dependencies tell me there's a huge risk of the dependencies becoming inherently incompatible with each other over time, making maintenance a major task. How will this compile in say, 2 years?
Because of the lockfile, it will use the same library versions when compiling again in the future. The main question for "will this compile" is whether the Rust compiler is sufficiently backwards-compatible, which (at least from my experience) it certainly is.
Also re "lots of dependencies": This is kind of unavoidable in Rust because the stdlib is deliberately very lean, and focuses on basic data structures that are needed for interop (e.g. having common string types is important for different libraries to work together with each other) or not possible to implement without specific compiler support (e.g. marker traits or boxing). Contrast this with Go where the stdlib contains things like a full-fledged HTTP server and regex engine. It's easy to build things in Go with a rather short go.mod file, but only because the go.mod file does not show all the stdlib packages that you're using.
I understand the concept of a lock file and they are a blessing, but inevitably one will need to upgrade at least one of the dependencies. Whether this is due to desired functionality or a bug, it is bound to happen.
Lock files won't solve that problem if one of the other libraries will be incompatible. Add more time and the problem compounds. Major problem in e.g. the npm ecosystem.
While jq is a very powerful tool, I've also been using DuckDB a lot lately.
SQL is a much more natural language if the data is somewhat tabular.
Some time ago I tried Retool and it does have "Query JSON with SQL": https://docs.retool.com/queries/guides/sql/query-json (it is somewhat relevant because it was extremely convenient)
It is somewhat similar to Linq in C# although SQL there is more standardised so I like it more. Also, it would be fantastic to have in-language support for querying raw collections with SQL. Even better: to be able to transparently store collections in Sqlite.
It is always sad to see code which takes some data from db/whatever and then does simple processing using loops/stream api. SQL is much higher level and more concise language for these use cases than Java/Kotlin/Python/JavaScript
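A small side-by-side in Python makes the conciseness argument concrete: the loop spells out the mechanics, while the SQL states the intent (the table and column names here are made up):

```python
import sqlite3

orders = [("ann", 10), ("bob", 5), ("ann", 7)]

# Loop version: mechanics spelled out by hand
totals = {}
for name, amount in orders:
    totals[name] = totals.get(name, 0) + amount

# SQL version: just state the grouping and aggregation
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (name TEXT, amount INT)")
con.executemany("INSERT INTO orders VALUES (?, ?)", orders)
rows = dict(con.execute("SELECT name, SUM(amount) FROM orders GROUP BY name"))
assert rows == totals == {"ann": 17, "bob": 5}
```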
I've found the same. I store all raw json output into a sqlite table, create virtual columns from it, then do a shell loop off of a select. Nested loops become unnested, and debugability is leagues better because I have the exact record in the db to examine and replay.
I've noticed what I'm creating are DAGs, and that I'm constantly restarting it from the last-successfully-proccessed record. Is there a `Make`-like tool to represent this? Make doesn't have sql targets, but full-featured dag processors like Airflow are way too heavyweight to glue together shell snippets.
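A minimal sketch of the workflow described above (store the raw JSON, then expose fields as virtual generated columns) using SQLite's JSON1 support; the table and field names are made up, and generated columns need SQLite 3.31+:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw (body TEXT)")
con.executemany(
    "INSERT INTO raw VALUES (?)",
    [('{"id": 1, "status": "ok"}',), ('{"id": 2, "status": "failed"}',)],
)
# Virtual column computed from the stored JSON; nothing is duplicated on disk
con.execute(
    "ALTER TABLE raw ADD COLUMN status TEXT "
    "GENERATED ALWAYS AS (json_extract(body, '$.status')) VIRTUAL"
)
# Failed records keep their exact original payload for replay/debugging
failed = con.execute("SELECT body FROM raw WHERE status = 'failed'").fetchall()
assert failed == [('{"id": 2, "status": "failed"}',)]
```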
Yes. SQL is much better for relational data with a strict schema. Though you'll still never get a way to express recursive queries in SQL w/o a lot of verbosity.
I like textql [0] better for this use case, as it's simpler in my mind.
textql doesn't seem to work with JSON. I think the grandparent comment meant that the data was in a table of sorts, represented in JSON.
Ah, you're right. TextQL combined with Miller would be closer, but DuckDB can do the same things all in one. Always good to have a variety of tools to choose from.
Regarding correctness, will it display uint64 numbers without truncating them? That's my biggest pet peeve with jq currently.
Unfortunately JSON numbers are 64 bit floats, so if you're standards compliant you have to treat them as such, which gives you 53 bits of precision for integers.
Also hey, been a while ;)
Edit: I stand corrected, the latest spec (rfc8259) only formally specifies the textual format, but not the semantics of numbers.
However, it does have this to say:
> This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision.
In practice, most implementations treat JSON as a subset of Javascript, which implies that numbers are 64-bit floats.
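The 53-bit integer limit is easy to demonstrate; in this Python sketch, `json.loads` keeps integers exact, but coercing the same text through a 64-bit float (as a float-based parser like jq's historically did) rounds:

```python
import json

big = 2**53 + 1                 # 9007199254740993
s = json.dumps(big)
assert json.loads(s) == big     # Python parses integers exactly
assert float(s) == 2.0**53      # a float64 parse rounds to 9007199254740992
```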
I'm being pedantic here, but JSON numbers are sequences of digits and ./+/-/e/E. Whether to parse those sequences into 64-bit floats or something else is left up to the implementation.
However what you say is good practice anyway. The spec (RFC 8259) has this note on interoperability:
> This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision. A JSON number such as 1E400 or 3.141592653589793238462643383279 may indicate potential interoperability problems, since it suggests that the software that created it expects receiving software to have greater capabilities for numeric magnitude and precision than is widely available.
> Unfortunately JSON numbers are 64 bit floats, so if you're standards compliant you have to treat them as such,
Are you sure? Looking at https://www.json.org/json-en.html I don't see anything about 64 bit floats.
JSON does not define a precision for numbers, so: it's often float64 (but note -0 is allowed, but NaN and +/-Inf are not), but it depends on your language, parser config, etc.
Many will produce higher precision but parse as float64 by default. But maximally-compatible JSON systems should always handle arbitrary precision.
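Python's json module illustrates the "precision is up to the parser" point: the default parse goes through float64, but parse_float lets you keep every digit:

```python
import json
from decimal import Decimal

s = '{"pi": 3.141592653589793238462643383279}'
lossy = json.loads(s)["pi"]                       # default: float64
exact = json.loads(s, parse_float=Decimal)["pi"]  # keep all the digits
assert str(lossy) == "3.141592653589793"
assert exact == Decimal("3.141592653589793238462643383279")
```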
I thought the JSON spec says that numbers can have an arbitrary amount of digits.
Also, what!! Hey! Miss you man.
I believe this has improved in jq 1.7: https://github.com/jqlang/jq/releases/tag/jq-1.7
> Use decimal number literals to preserve precision. Comparison operations respects precision but arithmetic operations might truncate.
This is still broken in jq 1.7 for sufficiently long exponents
From a quick test it looks like it supports exponents up to 9 digits long (i.e. 1.0e999999999), which, frankly, seems pretty reasonable; it's hard for me to imagine a use case where you'd want to represent numbers larger than that.
jq 1.7 does preserve large integers but will truncate if any operation is done on them. Unfortunately it currently truncates to a decimal64, which is a bit confusing; this will be fixed in the next release, where it follows the suggestion from the JSON spec and truncates to binary64 (double): https://github.com/jqlang/jq/pull/2949
I switched to jless and never looked back. The user interface is miles ahead of everything else
It's not the same. jq is not just a viewer; it's a JSON query language processor.
You are correct, the user interface of jq is not the same as the user interface of jless.
I guess it's cute that there's some terminal line art library in Rust somewhere, but when I tried to invoke jaq it just pooped megabytes of escape codes into my iTerm and eventually iTerm tried to print to the printer. Too clever.
I tried to do `echo *json | rush -- jaq -rf ./this-program.jq {} | datamash ...` and in that context I don't think it's appropriate to try to get artistic with the tty.
The cause of the errors, for whatever it's worth, is that `jaq` lacks `strftime`.
My first impression is it has fancy error messages but no halt_error/0
$ ./jaq-v1.2.0-x86_64-unknown-linux-gnu -sf aoc22-13.jq input.txt
Error: undefined filter
╭─[<unknown>:30:18]
│
30 │ ╭─▶ "bad input" | halt_error
31 │ ├─▶ end;
│ │
│ ╰───────────────── undefined filter
────╯
and (after commenting out halt_error) slower than both jq and gojq:

$ time jq -sf aoc22-13.jq input.txt
6415
20056
real 0m0.023s
user 0m0.010s
sys 0m0.010s
$
$ time gojq -sf aoc22-13.jq input.txt
6415
20056
real 0m0.070s
user 0m0.030s
sys 0m0.000s
$
$ time ./jaq-v1.2.0-x86_64-unknown-linux-gnu -sf aoc22-13.jq input.txt
6415
20056
real 0m0.103s
user 0m0.065s
sys 0m0.000s
aoc22-13.jq is here https://pastebin.com/raw/YiUjEu2n
and input.txt is here https://pastebin.com/raw/X0FSyTNf

I started using yq over jq. Any significant differences?
Which yq? I prefer https://github.com/mikefarah/yq to https://github.com/kislyuk/yq.
I prefer the former, single static binary which works great on workstations and CI alike, the latter requires python as well as jq as it's a wrapper
I've been using yq + git-xargs to automate config files in repos (CI/CD, linters, etc). The combo has been spectacular for me.
jq feels like a much more robust tool than yq. I understand that the task of processing YAML is much harder than JSON, but:
- yq changed its syntax between version 3 and 4 to be more like jq (but not quite the same for some reason)
- yq has no if-then-else https://github.com/mikefarah/yq/issues/95 which is a poor design (or omission) in my opinion
So yq works when you need to process YAML; it can even handle comments quite well. But for pure JSON processing, jq is a better tool.
The fact that jq takes almost a second to run on a Pi is crazy[0]. And the tool is written in C.
It was fixed in 2019 though? I don't understand your point.
You are right. I stand corrected.
> nan > nan is false, while nan < nan is true.
Is this wrong behavior from jq, or some artifact consistent with how the floating-point spec is defined: surprising, but faithful to IEEE 754 nonetheless?
IIRC, any comparison using a nan must fail (return false) according to the IEEE spec.
I think it is a bit more complex, since NaN is defined to be "unordered" with respect to all other values (including other NaNs), and so any relation for which unordered values result in true (e.g., compareQuietNotEqual) will return true. (See section 5.11)
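That behavior is easy to check from any IEEE 754 language, e.g. Python:

```python
nan = float("nan")
# NaN is unordered: <, >, <=, >=, == with NaN are all false; != is true
assert not (nan < nan)
assert not (nan > nan)
assert not (nan == nan)
assert nan != nan
```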
I used Bard after trying unsuccessfully to decipher the wikipedia page and Bard says, according to IEEE 754, nan < nan should return false (0); while nan > nan should return false (0)
I wish there was some version of Wikipedia for people who speak good English (not Simple English), but aren't assumed to already be experts on the topic. Technical articles are pretty much impenetrable.
So you basically wish for Wikipedia to also feature simplified explanations of technical topics.
I don't think "good English vs simple english" plays into this.
It's not like the problem of technical articles being impenetrable on Wikipedia is that Wikipedia doesn't have an intermediate level between expert-talk and Simple English.
It's just that it doesn't have simple english explanations of some technical topics.
How have you been using jq? It is more adhoc for exploring JSON files during development/data analysis or in programs that run in production?
Quite a lot! I use it to explore both JSON and text (parsed using jq functions). I also use it for exploring and debugging binary formats (https://github.com/wader/fq). Nowadays I also use it for some ad-hoc programming and as a calculator.
Oh, sounds like a very neat way to explore binaries!
If you spend lots of time with certain binary formats then I can recommend adding a decoder; happy to help with it also!
Yeah, I've always liked the idea of jq but personally I find it easier to open a REPL in the language I'm most familiar with (which happens to be JS, which does make a difference) and just paste in the JSON and work with it there
It may be more verbose, but I never have to google anything, which makes a bigger difference in my experience
https://github.com/wader/fq has a REPL and can read JSON. Tip is to use "paste | from_json | repl" in a REPL to paste JSON into a sub-REPL. You can also use `<text here>` with fq, which is a raw string literal.
The important part wasn't having a REPL, it was using a language I already know off the top of my head
Yes. So much easier to reuse other common helper functions. Once you’ve finished exploration you can just copy the code into production instead of translating.
My most common usage is pretty-printing the output of curl, or getting a list of things from endpoint service/A and then calling service/endpoint B/<entry> to do things for each entry in the list.
I use it as a "JSON library for bash". :-)
Not really in "production", but I have a lot of small-ish shell scripts all over the place, mostly in ~/bin, and some in CI (GitHub Actions) as well.
The 2nd and 3rd examples make no sense to me.
echo '{"a": 1, "b": 2}' | jaq 'add'
3
Construct an array from an object in two ways and show that they are equal:
$ echo '{"a": 1, "b": 2}' | jaq '[.a, .b] == [.[]]'
true
What might be confusing is that iterating an object iterates its values. `add` is defined something like this: `def add: reduce .[] as $n (0; . + $n)`
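A Python sketch of those semantics (an illustration, not jq's implementation): iterating an object yields its values, and `add` folds them with `+`, which in jq also concatenates arrays and strings:

```python
def jq_iterate(x):
    # jq's .[] yields an object's values, or an array's elements
    return list(x.values()) if isinstance(x, dict) else list(x)

def jq_add(x):
    # fold the iterated values with +, starting from "nothing"
    total = None
    for v in jq_iterate(x):
        total = v if total is None else total + v
    return total

assert jq_add({"a": 1, "b": 2}) == 3
assert jq_add([[1], [2, 3]]) == [1, 2, 3]
```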
I find jq's syntax (and docs) kind of opaque, but I guess we have no other options. And I don't think this latest incarnation breaks any new ground there. But it'd be better if I just wrote it myself - "be the change ...."
Well, as pointed out in the jaq docs there is jql.
But I just looked at jql and I liked it even less. The pedantry about requiring all keys in selectors to be double quoted is, um, painful for a CLI tool.
Someone else above pointed out JJ which looks much easier to use.
ChatGPT or the warp chatbot is pretty good at jq syntax
I think the best alternative to jq is DataWeave, but it is not open source. https://dataweave.mulesoft.com/
The latest blog post, from last September, is about open sourcing it. So the process of open sourcing DataWeave has taken at least 15 months.
It has some learning curve, but it actually makes sense once you get used to it, and it works for other formats too. It is much better than other transformation languages, and you can even call Java.
I think they're kind of stuck in development; even the Mule engine only has one active developer, judging from the GitHub commits.
All else being equal, does the speed of jaq change with the size of the input?
> nan > nan is false, while nan < nan is true.
You learn something new everyday. Does anyone have any idea why this might be happening? Seems like more than just a bug..
I use jq on a daily basis. This is new to me; thanks for pointing it out.
Is there a JS library that is similar to JQ but works on JS objects in memory?
and in PowerShell you don't need to learn all those syntaxes for different tools for different formats, like jq, xmlstarlet, etc. Just convert everything to an object and query the data using PowerShell syntax.
I use `yq` for this stuff and it handles most of this pretty well.
Before I clicked on the link I had this gut feeling. It turned out my gut was right: it was written in Rust. Go figure..
I applaud this project's focus on correctness and efficiency, but I'd also really like a version of `jq` that's easy to understand without having to learn a whole new syntax.
`jq` is a really powerful tool and `jaq` promises to be even more powerful. But, as a system administrator, a lot of the time that I'm dealing with JSON files, something that behaved more like grep would be sufficient.
Have you tried `gron`?
It converts your nested json into a line by line format which plays better with tools like `grep`
From the project's README:
▶ gron "https://api.github.com/repos/tomnomnom/gron/commits?per_page..." | fgrep "commit.author"
json[0].commit.author = {};
json[0].commit.author.date = "2016-07-02T10:51:21Z";
json[0].commit.author.email = "mail@tomnomnom.com";
json[0].commit.author.name = "Tom Hudson";
https://github.com/tomnomnom/gron
It was suggested to me in HN comments on an article I wrote about `jq`, and I have found myself using it a lot in my day to day workflow
This is awesome, thanks! Not OP, but this will help me to write specifications for modifying existing JSON structures immensely. It's kind of a pain parsing JSON by (old man) eye to figure out which properties are arrays, and follow property names down a chain. This will definitely help eliminate mistakes!
Also try jless[0], it's amazingly convenient and it shows you a JSON path at the bottom of the screen as you navigate.
Thank you so much. This seems like a saner approach for some simpler use cases.
It flattens the structure. And makes for easy diffing.
There's also this awesome tool to make JSON interactively navigable in the terminal:
https://jless.io/ is similar, and will give you jq selectors so the two combine very well. (fx might have that feature too, I dunno)
Ah thanks, jless is actually the one I was originally thinking of and trying to find! :D
You can also mimic gron, including support for YAML, with:
yq -o=props my-file.yaml
Doesn't work in my terminal. When you recommend yq behavior, please specify which yq you're using. There are at least two incompatible implementations.
This looks so much better as an ad-hoc tool. Would be cool if it supported more formats: plist, YAML, XML (how to do the body, or conflicting attributes/elements?)
One of my coworkers really likes Miller: https://github.com/johnkerl/miller
The idea is that you get awk/grep like commands for operating on structured data.
ChatGPT excels at producing `jq` incantations; I can actually use `jq` now…
> I'd also really like a version of `jq` that's easy to understand without having to learn a whole new syntax.
Since JSON is JavaScript Object Notation, then an obvious non-special-snowflake language for such expressions on the CLI is JavaScript: https://fx.wtf/getting-started#json-processing
It is a little early to say, but I have been learning how nushell deals with structured data and it seems like it is very usable for simple cases to produce readable one-liners, and if you need to bring out the big guns the shell is also a full fledged scripting language. Don't know about how efficient it is though.
It needs to justify moving to a completely different shell, but the way you deal with data in general does not restrict itself to manipulating json, but also the output of many commands, so you kinda have one unified piping interface for all these structured data manipulations, which I think is neat.
From the data side, nushell uses polars for querying tabular data so it should be pretty fast. Not sure about its scripting language.
Obligatory reference to "gron" ("make JSON greppable"), which I find to be quite useful for many common tasks:
jq and yq are tools you spend an hour figuring out and then leave in a CI pipeline for 3 years.
Maybe like SQL for relational algebra? Codd made two query languages that were "too difficult for mortals to use". (B-trees for performance was a separate issue)
But jq's strength is its syntax - the difficulty is the semantics.
there's got to be some syntax though. jq does a unique function that isn't defined in any other syntax. i'm with you, the jq syntax is weird and sometimes difficult to understand. but the replacement would just be some different syntax.
these little one-off unique syntaxes that i'm never going to properly learn are one of my favourite uses of chatGPT.
Congratulations! We're almost back to the basic functionality we used to have with XSLT.
You could use an elaborate filter with jq (see https://stackoverflow.com/a/73040814/452614) to transform JSON to XML and then use an XQuery implementation to process the document. It would be quite powerful, especially if the implementation supports XML Schema. I have not tested it.
Or https://github.com/AtomGraph/JSON2XML which is based on https://www.w3.org/TR/xslt-30/#json-to-xml-mapping
It even looks like we could use an XSLT 3 processor with the json-to-xml function (https://www.w3.org/TR/xslt-30/#func-json-to-xml) and then use XQuery or stay with XSLT 3.
Now I have to test it.
In fact XQuery alone is enough, e.g. with Saxon HE 12.3.
    (: file json2xml.xq :)
    declare default element namespace "http://www.w3.org/2005/xpath-functions";
    declare option saxon:output "method=text";
    declare variable $file as xs:string external;
    json-to-xml(unparsed-text($file))/<your xpath goes here>

Run it with:

    java -cp ~/Java/SaxonHE12-3J/saxon-he-12.3.jar net.sf.saxon.Query -q:json2xml.xq file='/path/to/file.json'
To be fair, xslt is a lot more verbose than `map(.*2)`
A bit more verbose, but you get the full power of XQuery with it. XSLT, however, is more verbose still, like you mentioned.
For the following JSON document:

    {
      "fruit1": { "name": "apple", "color": "green", "price": 1.2 },
      "fruit2": { "name": "pear", "color": "green", "price": 1.6 }
    }

the query:

    for $price in json-to-xml(unparsed-text($file))/map/map/number[@key="price"]
    return $price + 2

The call to json-to-xml() produces this XML document:

    <?xml version="1.0" encoding="UTF-8"?>
    <map xmlns="http://www.w3.org/2005/xpath-functions">
      <map key="fruit1">
        <string key="name">apple</string>
        <string key="color">green</string>
        <number key="price">1.2</number>
      </map>
      <map key="fruit2">
        <string key="name">pear</string>
        <string key="color">green</string>
        <number key="price">1.6</number>
      </map>
    </map>
Yes. jq is essentially an XPath/XSLT for JSON. I'd say that jq is more powerful than XPath/XSLT, but that's neither here nor there since both can evolve to be as powerful as they need to be.
This language must be the spiritual successor of Perl
I inherited some piece of code that made use of an extremely long and complicated jq script.
I simply gave up understanding the whole thing, and restored the balance in the universe by rewriting it in Perl.
Now you just need to rewrite Perl in Rust and compile that to WebAssembly. And the circle of HN is complete.
I know perl is useful. I know it's going to help me. It seems like you can get away with a quick perl script whereas a python script would attract scrutiny.
But it's such a painful language to look at.
jq has been in my toolbox for a while; it's a great tool. But it's yet another query language to learn, and jaq seems identical on that front. I think that's where LLMs can help a lot with adoption. I started a project on that note, manipulating the data with just natural language: https://partial.sh
`cat` your JSON file and describe what you want; I think that should be the way to go.
I usually avoid those types of tools. It looks way too fragile and the examples look a bit magical. Do you think it's stable and easy to use?
why not contribute to the existing jq project instead of starting a new one?
We have so many json query tools now it's insane.
The obvious reason here is jaq makes some changes to semantics, changes which would be rejected by jq.
Another likely reason is that it seems a motivation for jaq is improving the performance of jq. Any low-hanging fruit there in the jq implementation was likely handled a long time ago, so improving this in jq is likely to be hard. Writing a brand new implementation allows for trying out different ways of implementing the same functionality, and using a different language known for its performance helps too.
Using a language like Rust also helps with the goal of ensuring correctness and safety.
jq hasn't had much work done to make it fast though.
There are two classes of performance problems:
- implementation issues
- language issues
The latter is mainly a problem in `foreach` and also some missing ways to help programmers release references (via `$bindings`) that they no longer need.
The former is mostly a matter of doing a variety of bytecode interpreter improvements, and maybe doing more inlining, and maybe finding creative ways to reduce the number of branches.
jq maintainer here. We love that there are multiple implementations of jq now. It does several things: a) it gives users more choices, b) it helps standardize the language (though we've not yet written a formal specification), c) it brings more energy to jq because the maintainers of the other tools have joined jq as maintainers. I also love that these alternative implementations relieve my growing dislike of C.
Fun, of course. Existing projects are boring almost by definition. And this is volunteer work.
One reason to do this is that often performance improvements involve architectural overhauls that maintainers are unlikely to approve of.
Somewhat off-topic, but is there a tool which integrates something like this/jq/fx and API requests? I’d like to be able to do some ETL-like operations and join JSON responses declaratively, without having to write a script.
Is there anything out there like `SELECT * FROM "http://..."`?
I think a query language would be great, with a way to subquery/chain data from previous requests (e.g. by jsonpath) to subsequent ones.
The closest I’ve gotten is to wrap the APIs with GraphQL. This achieves joining, but requires strict typing and coding the schema+relationships ahead of time which restricts query flexibility for unforeseen edge cases.
Another is a workflow automation tool like n8n which isn’t as strict and is more user-friendly, but still isn’t very dynamic either.
Postman supports chaining, but in a static way with getting/setting env variables in pre/post request JS scripts.
Bash piping is another option, and seems like a more natural fit, but isn’t super reusable for data sources (e.g. with complex client/auth setup) and I’m not sure how well it would support batch requests.
It would be an interesting tool/language to build, but I figure there has to be a solution out there already.
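The SQL-over-JSON half of this is the easy part to prototype: load the response rows into an in-memory SQLite table and run real SQL against them. A rough Python sketch of the idea (the table name `t` and the flat-objects assumption are mine; a real tool would fetch `rows` from the URL with `urllib.request` and handle nesting):

```python
import sqlite3

def query_json(rows, sql):
    """Load a list of flat JSON objects into an in-memory SQLite
    table named t, then run an arbitrary SQL query against it."""
    con = sqlite3.connect(":memory:")
    # Union of keys across rows becomes the column set.
    cols = sorted({k for row in rows for k in row})
    con.execute(f"CREATE TABLE t ({', '.join(cols)})")
    con.executemany(
        f"INSERT INTO t VALUES ({', '.join('?' for _ in cols)})",
        [tuple(row.get(c) for c in cols) for row in rows],
    )
    return con.execute(sql).fetchall()

# In the real tool, rows would come from an HTTP GET + json.load.
data = [{"x": 1, "name": "a"}, {"x": 2, "name": "b"}]
print(query_json(data, "SELECT name FROM t WHERE x > 1"))
# prints: [('b',)]
```

Chaining requests by feeding one query's result into the next request's parameters is the genuinely hard part; the relational step itself is mostly plumbing.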
This is exactly what Murex shell does. It has lots of builtin tools for querying structured data (of varying formats) but also supports POSIX pipes for using existing tools like `jq` et al seamlessly too.
I'm working on a project I call babeldb. It allows "select * from query_rest('https://api1.binance.com/api/v3/exchangeInfo#.symbols')" The #.symbols at the end is actually jq path expression, it's sometimes needed when the default json to table is suboptimal. You can see it in action by selecting babeldb in the dropdown, then clicking "Run All" here: https://pulseui.net/sqleditor?qry=select%20*%20from%20query_...
My shell will do that
https://murex.rocks/optional/select.html

    open http://… | select * where … # FROM can be omitted because you’re loading a pipe
Haven't checked yet, but I am sure it's written in Rust
How could you tell?
I think my benchmark[1] would be a great test for this. The jq[2] version takes 50s on my machine.
[1] : https://github.com/jinyus/related_post_gen
[2]: https://github.com/jinyus/related_post_gen/blob/main/jq/rela...