
Q – Execute SQL on text. Supports joins across files, RPM available

github.com

126 points by harelba 12 years ago · 45 comments

mjn 12 years ago

If you need only a subset of full SQL, e.g. just joins, counting/aggregation, and date manipulation (as in several of these examples), I've found it fairly easy to work with a mixture of the standard Unix join(1) [found on nearly all systems], and some of the additions from Google's crush-tools (https://code.google.com/p/crush-tools/), mainly 'aggregate', 'grepfield', 'funiq', and 'convdate'. I find chaining them together a bit easier than writing monolithic SQL statements, but there's probably some crossover point at which that wouldn't be true.
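
For example, a join-plus-aggregate chain with nothing but coreutils might look like this (file names and field numbers made up):

  # Join two CSVs on their first column (join needs sorted input),
  # then sum field 3 of the joined rows per key:
  join -t, <(sort users.csv) <(sort orders.csv) \
    | awk -F, '{s[$1] += $3} END {for (k in s) print k "," s[k]}'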

It'd be interesting to compare runtimes as well. I would guess that there's some overhead in loading into the DB up front, but that you might gain some speedup by converting longer chains of Unix pipes into one SQL query. On the other hand you might lose some parallelism. Would take some testing on different kinds of queries and data sets to get an idea of the pros/cons I think.

  • ozgune 12 years ago

    Another interesting alternative could be using or writing a PostgreSQL foreign data wrapper.

    There is one that supports tabular data (file_fdw), and another one for JSON files (json_fdw). If you have files in other formats, you can also write your fdw for it. This way, you get complete SQL coverage.
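
    A minimal file_fdw setup looks something like this (table layout and file path invented):

      psql mydb -c "
        CREATE EXTENSION file_fdw;
        CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;
        CREATE FOREIGN TABLE hits (ip text, ts timestamptz, bytes int)
          SERVER files OPTIONS (filename '/tmp/hits.csv', format 'csv');
      "
      psql mydb -c "SELECT ip, sum(bytes) FROM hits GROUP BY ip"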

    Also, if you don't want to pay the overhead of parsing the file every time, you can use the new materialized feature for caching: http://www.postgresql.org/docs/9.3/static/rules-materialized...

    (Disclaimer: Enthused Postgres user.)

jdp 12 years ago

Matt Might wrote a blog post about using standard CLI tools for this purpose: http://matt.might.net/articles/sql-in-the-shell/

Rather than making a custom tool to issue SQL, the idea is that regular CLI tools map well to the traditional relational algebra operations. sed is like selection, cat is like union, etc.
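
A rough sketch of the correspondence (files and fields invented; join assumes pre-sorted input):

  grep '^US,' customers.csv          # selection: rows matching a predicate
  cut -d, -f1,3 customers.csv        # projection: keep columns 1 and 3
  cat us.csv eu.csv | sort -u        # union, with duplicate elimination
  join -t, orders.csv customers.csv  # equijoin on the first column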

unwind 12 years ago

It's written in Python, and seems to use SQLite under the hood.

I guess it just applies tokenization and throws the text into a temporary database.
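
The same trick is easy to reproduce by hand with the sqlite3 shell (file and column names invented; with no database argument, sqlite3 uses a transient in-memory one):

  # .import stores every column as text, hence the cast below.
  printf '%s\n' '.mode csv' '.import people.csv people' \
    'SELECT COUNT(*) FROM people WHERE CAST(age AS INTEGER) > 30;' | sqlite3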

Quite similar to the Go project https://github.com/dinedal/textql, at least superficially.

  • j_s 12 years ago

    A lot of similar tools were discussed when that Go project came up 3 weeks ago:

    https://news.ycombinator.com/item?id=7175830

      > MS ADO / ODBC
      > Perl DBI
      > npm j (with jqa)
      > Ruby (csv2sqlite) 
      > Python (csvkit)
      > Go (textql, comp)
      > Java (optiq, openrefine, H2 SQL)
      > R (sqldf)
      > Haskell (txt-sushi)
      > XML (xmlstarlet, xmllint, xmlstar)
      > HTML (HtmlAgilityPack, Chrome $x())
      > Postgres file_fdw
      > Oracle external tables
      > SQL Server OPENDATASOURCE and OPENQUERY
      > Log file viewers (MS LogParser, Apache asql, lnav)
    • harelbaOP 12 years ago

      Hi, I'm q's creator, Harel.

      There is obviously lots of other software that provides a similar capability, and while I haven't checked all of it out, I really believe that most of it does a great job. However, my rationale for creating this tool was to provide a seamless addition to the Linux command-line toolset - a tool, as most Linux commands are, and not just a capability. The distinction I'm making here is that tools are reusable and composable, whereas a capability is usually less reusable in different contexts. I'm sure that some of the above are definitely tools. I just hope that the tool I have created provides value to people and helps them with their tasks.

      As I posted here elsewhere, my complete rationale for creating the tool is available on the README of the github project. Comments and issues are most welcome.

      Harel Ben-Attia

hipsters_unite 12 years ago

This seems like a cool project, but Q is already a well-used JS promise library so there's a bit of a name conflict there even if the libs are aimed at very different tasks...

  • mjn 12 years ago

    And the JS promise library in turn collided with the name of Q, the array-processing language, which itself collided with the name of another programming language named Q (http://q-lang.sourceforge.net). If you're naming a tech-related thing after about 1980, the single-letter names are all taken...

  • vog 12 years ago

    Any library with a really short name (<= 2 letters) should be prepared for name collisions. If they wanted to avoid name collisions, they should have chosen a slightly longer, more descriptive name in the first place. I think in this context (i.e. really short name) it doesn't really matter who was first.

  • mbreese 12 years ago

    It's also a single letter in the alphabet. Anyone who names their project a single letter (or two) is expecting some amount of collision. The real question is whether or not the collision is worth it.

    In this case, it might be... they are trying to make a command-line tool. So in theory, you'll be typing the command often, meaning that a short name is preferable.

    But honestly, it would have probably been a better idea to use a more descriptive name.

    • harelbaOP 12 years ago

      Hi, I'm q's creator (HN made the name q uppercase, but it's actually a lowercase q). The reasoning was that it's used as a command-line tool, and used often. So "q" and not "Q" :) I'm currently preparing the Debian package, and one-letter names are not allowed there, so it's going to be named "qsql".

    • chalst 12 years ago

      Given that the command argument is in full SQL notation, which must be passed quoted, saving three or four characters on the executable name doesn't really seem worth it. The shortest example they give, a SELECT/GROUP query, is over 50 characters.

  • borntyping 12 years ago

    It's also the name of a well known Python debugging tool[1]. Name conflicts will happen when you pick a name for shortness, not clarity.

    [1] https://pypi.python.org/pypi/q

ajtulloch 12 years ago

Dataset (https://dataset.readthedocs.org/) is a similar project that provides ad-hoc SQL querying of structured data - example usage:

  import dataset

  db = dataset.connect('sqlite:///:memory:')

  table = db['sometable']
  table.insert(dict(name='John Doe', age=37))
  table.insert(dict(name='Jane Doe', age=34, gender='female'))

  john = table.find_one(name='John Doe')
  • mseebach 12 years ago

    Dataset provides ad-hoc SQL querying of ... SQL databases.

    It looks like a neat API, but it's not really similar to this project.

  • bduerst 12 years ago

    That's not ad hoc querying of a tabular text file.

sz4kerto 12 years ago

Eh, q as a name is going to be a serious problem, as q is already the language for programming kdb, a column-oriented database. Since both are related to databases, it's difficult to defend this name.

  • BMarkmann 12 years ago

    That was my initial impression, too... I thought kx had added text processing to kdb.

zeckalpha 12 years ago

Don't join (https://en.wikipedia.org/wiki/Join_%28Unix%29), sed, and grep get you most of the way there?

  • harelbaOP 12 years ago

    Hi, I'm q's creator, Harel Ben-Attia.

    The Linux toolset is really great, and I use it extensively. The whole idea of the tool is not to replace any of the existing tools, but to extend the toolset with concepts that treat text as data. In a way, it's a metatool which provides an easy and familiar way to add more data-processing concepts to the Linux toolset. There are many cases where I use 'wc -l' to count rows in a file, but if I need to count only the rows where a specific column is larger than some value X, or get the sum of some column per group, then q is a simple and readable way to do it properly, without any need for "tricks".
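
    For instance (file name made up; q addresses columns as c1..cN, and -d sets the delimiter):

      q -d , "SELECT COUNT(*) FROM ./mydata.csv WHERE c3 > 100"
      q -d , "SELECT c1, SUM(c2) FROM ./mydata.csv GROUP BY c1"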

    My rationale for creating it is also explained in the README of the github project.

    Any more comments are most welcome.

    Harel

ww2 12 years ago

Name collision with Q/kdb+.

jmnicolas 12 years ago

What I was really wishing for when I read the title was something that would let me write regexes verbosely.

spacemanmatt 12 years ago

I'm still not seeing anything easier than ETLing into a regular old database, at which point you have the power and flexibility of a regular old database. Which is pretty spiffy, actually.

Are people really so bad at databases that they'll gladly suffer hacks like this to avoid using one?

spacemanmatt 12 years ago

I have ended up with MySQL and PostgreSQL on nearly every Linux system I've installed. It is baffling to me that someone would decline these tools in favor of text-file drivers or SQLite, with the exception of using SQLite as an embedded config database or something.

  • super_mario 12 years ago

    Yeah, I do the same: import the csv file (it doesn't have to be comma-separated) into PostgreSQL and analyze it with SQL. Alternatively, I'll load it into IPython/pandas and work with it there.

    This is, however, useful for one-off, throwaway queries with familiar SQL syntax - if you don't want to use awk, that is.
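
    The import step is only a couple of commands, e.g. (database, table, and file names invented):

      createdb scratch
      psql scratch -c "CREATE TABLE t (name text, age int)"
      psql scratch -c "\copy t FROM 'data.csv' WITH (FORMAT csv, HEADER)"
      psql scratch -c "SELECT avg(age) FROM t"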

gtrubetskoy 12 years ago

The credit should go to SQLite, which is extremely versatile. A while back I wrote Thredis (thredis.org), which lets you use SQL commands inside Redis (also using SQLite, of course).

pointernil 12 years ago

Now imagine some kind of Rx.NET library made applicable to the output of 'tail -F'... reactive log-file processing?
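
q already covers a batch version of this, since it reads stdin as a table named "-". Something like the line below works per snapshot (example invented), though a long-running 'tail -F' would never let the query finish, so it isn't truly reactive:

  tail -n 10000 access.log | q "SELECT c1, COUNT(*) FROM - GROUP BY c1 ORDER BY 2 DESC"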

aethertap 12 years ago

This is really clever, and I'm definitely going to give it a workout on my system. Thanks!

chris_wot 12 years ago

Cool idea!
