SQLite: Past, Present, and Future

vldb.org

282 points by chrstr 3 years ago · 146 comments

dang 3 years ago

The pdf: https://www.vldb.org/pvldb/vol15/p3535-gaffney.pdf

simonw 3 years ago

I shared some notes on this on my blog, because I'm guessing a lot of people aren't quite invested enough to read through the whole paper: https://simonwillison.net/2022/Sep/1/sqlite-duckdb-paper/

polyrand 3 years ago

Regarding hash joins, the SQLite documentation mentions the absence of real hash tables [0]

  SQLite constructs a transient index instead of a hash table in this instance 
  because it already has a robust and high performance B-Tree implementation at 
  hand, whereas a hash-table would need to be added. Adding a separate hash table 
  implementation to handle this one case would increase the size of the library 
  (which is designed for use on low-memory embedded devices) for minimal 
  performance gain.
It's already linked in the paper, but here's the link to the code used in the paper [1]

The paper mentions implementing Bloom filters for analytical queries and explains how they're used. I wonder if this is related to the query planner enhancements that landed in SQLite 3.38.0 [2]:

  Use a Bloom filter to speed up large analytic queries.

[0]: https://www.sqlite.org/optoverview.html#hash_joins

[1]: https://github.com/UWHustle/sqlite-past-present-future

[2]: https://www.sqlite.org/releaselog/3_38_0.html

  • kpgaffney 3 years ago

    That's correct, the optimizations from this paper became available in SQLite version 3.38.0.

    As we were writing the paper, we did consider implementing hash joins in SQLite. However, we ultimately went with the Bloom filter methods because they resulted in large performance gains for minimal added complexity (2 virtual instructions, a simple data structure, and a small change to the query planner). Hash joins may indeed provide some additional performance gains, but the question (as noted above) is whether they are worth the added complexity.

gorjusborg 3 years ago

I came for SQLite, got sold DuckDB.

manimino 3 years ago

TFA appears to be about adapting SQLite for OLAP workloads. I do not understand the rationale. Why try to adapt a row-based storage system for OLAP? Why not just use a column store?

  • Comevius 3 years ago

    SQLite is significantly better than DuckDB at OLTP and at being a blob storage, and it doesn't want to sacrifice those advantages and compatibility if OLAP performance can be improved independently. In my experience, for diverse workloads it is more practical to start with a row-based structure and incrementally transform it into a column-based one. Indeed, the paper suggests an approach that trades space for improved OLAP performance.

  • didgetmaster 3 years ago

    It is certainly possible to have a single system that can effectively process high volumes of OLTP traffic while at the same time performing OLAP operations. While there are systems that are designed to do one or the other type of operation well, very few are able to do both. https://www.youtube.com/watch?v=F6-O9v4mrCc

  • satyrnein 3 years ago

    It seems like one idea in there is to store it both ways automatically (the HE variant)! That might be better than continually copying between your row store and your column store by hand.

    • badgerdb 3 years ago

      Great discussion here. As one of the co-authors of the paper, here is some additional information.

      If you need both transactions and OLAP in the same system, the prevalent way to deliver high performance on this (HTAP) workload is to make two copies of the data. This is what we did in the SQLite3/HE work (paper: https://www.cidrdb.org/cidr2022/papers/p56-prammer.pdf; talk: https://www.youtube.com/watch?v=c9bQyzm6JRU). That was quite clunky. The two-copy approach not only wastes storage but makes the code complicated, and it would be very hard to maintain over time (we did not want to fork the SQLite code -- that is not nice).

      So, we approached it in a different way and started to look at how we could get higher performance on OLAP queries while working as closely as possible with SQLite's native query processing and storage framework.

      We went through a large number of options (many of them taken from the mechanisms we developed in an earlier Quickstep project: https://pages.cs.wisc.edu/~jignesh/publ/Quickstep.pdf) and concluded that the Bloom filter method (inspired by a more general technique called Look-ahead Information Passing: https://www.vldb.org/pvldb/vol10/p889-zhu.pdf) gave us the biggest bang for the buck.

      There is a lot of room for improvement here, and getting high OLAP and transaction performance in a single-copy database system is IMO a holy grail that many in the community are working on.

      BTW - the SQLite team, namely Dr. Hipp (that is a cool name), Lawrence and Dan are amazing to work with. As an academic, I very much enjoyed how deeply academic they are in their thinking. No surprise that they have built an amazing data platform (I call it a data platform as it is much more than a database system, as it has many hooks for extensibility).

spaniard89277 3 years ago

I've been learning SQL recently with PostgreSQL and MySQL in an online bootcamp here in Spain. So far it has been very comprehensive. We've touched on indexing and partitioning with EXPLAIN ANALYZE for optimizing performance, and I've implemented these strategies successfully on an active forum I own.

The SQL course gets almost no love from the students, but so far it has been the most useful and interesting to me.

I was able to create some complex views (couldn't understand how to make materialized views in MySQL), but they were still very slow.

I decided to copy most of this forum DB to DuckDB (with Knime now, until I know better), and optimization with DuckDB seems pointless. It's very, very fast. Less energy usage for my brain, and less time waiting. That's a win for me.

My current dataset is about 40GB, so it's not HUGE, and sure, people here on HN would laugh at my "complex" views, but so far I've reduced all my concerns from optimizing to how to download the data I need without causing problems for the server.

  • dkjaudyeqooe 3 years ago

    In the real world a relational database is the single most useful tool short of a compiler/interpreter. SQL is anachronistic but still works well, even if it's a pain.

    My advice: avoid MySQL like the plague. PgSQL and SQLite are all you ever need and all you ever want.

  • aljgz 3 years ago

    > The SQL course gets almost no love from the students

    This is a big early-career mistake. I've seen experienced developers use NoSQL in a project where SQL is clearly a great fit, then waste lots of manpower emulating things you get with SQL for free.

    Of course one's career can fall into a success path that never depends on SQL, but not learning SQL deeply is not a safe bet.

    • dkjaudyeqooe 3 years ago

      Most "programmers" just can't understand relational databases and SQL. It's too hard.

      I've seen things you wouldn't believe. Random deadlocks in multi-billion transaction reporting systems. Atomic transactions split into multiple commits in banking applications. Copying rows between tables instead of setting a flag on a row. All because highly paid programmers are scared of RDBs.

      • esja 3 years ago

        I've had the same experience (over decades) and never understood it. SQL should not be difficult for any professional programmer, and it's extremely powerful.

      • kwhitefoot 3 years ago

        > All because highly paid programmers are scared of RDBs.

        Really? That's not my experience over the last thirty years of programming. As soon as relational databases became something that everyone could use, we all jumped in. And not just programmers either: a lot of people who never wrote a line of code in their lives became adept at writing SQL in order to get around limitations in ERP systems, for instance.

        Is this problem something that afflicts only younger programmers?

      • sn0wtrooper 3 years ago

        All those strong relations will be lost, like tears in the rain.

    • twh270 3 years ago

      I've got some developer PTSD from a previous project where the solution architect decided to use CosmosDB for the entire domain model that was very relational and very transactional, all because "NoSQL is easy to learn and allows rapid development".

      Yeah it is, until you're trying to manually create and maintain relations between documents in different schemas owned by different microservices.

    • spaniard89277 3 years ago

      I've read this again and again in this forum and other dev communities, so I didn't hesitate. I can't say I love SQL, but it's not that bad. Databases look interesting to me.

  • wodenokoto 3 years ago

    Don’t sell yourself short. I’m sure the minority here knows what a complex view is

    • spaniard89277 3 years ago

      I have no real world experience, I've seen things in Stack Overflow that I hardly manage to understand.

js8 3 years ago

Why cannot SQLite have two different table storage engines for different tables, one row and the other column oriented?

  • manigandham 3 years ago

    The same reasoning in the article applies: it's a lot of added complexity that isn't related to its core use as a general purpose in-process SQL database.

    Usually OLAP at these scales is fast enough with SQLite, or you can use DuckDB if you need a portable format, before graduating to a full-on distributed OLAP system.

  • ryanworl 3 years ago

    Storage layout is not the primary issue here because IO throughput on commodity hardware has increased significantly in the last 10 years.

    DuckDB is significantly faster than SQLite because it has a vectorized execution engine which is more CPU efficient and uses algorithms which are better suited for analytical queries. If you implemented a scan operator for DuckDB to read SQLite files, it would still have better performance.
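
    As a loose analogy for the CPU-efficiency point (not DuckDB's actual engine), compare a row-at-a-time loop with a batched, columnar computation in NumPy; the vectorized version pays the interpretation overhead once per column batch rather than once per row:

      # Hypothetical illustration: per-row dispatch vs. batched (vectorized) processing.
      import time
      import numpy as np

      prices = np.random.rand(5_000_000)

      t0 = time.perf_counter()
      total = 0.0
      for p in prices:                          # one dispatch per row, like a tuple-at-a-time engine
          total += p * 1.08
      t1 = time.perf_counter()

      total_vec = float((prices * 1.08).sum())  # one dispatch per batch, like a vectorized engine
      t2 = time.perf_counter()

      print(f"row-at-a-time: {t1 - t0:.2f}s  vectorized: {t2 - t1:.2f}s")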

mwish 3 years ago

I'm confused about Figure 3: on the Raspberry Pi, the latency seems slower than the latency for the same queries on the cloud server. Did I miss something?

rafale 3 years ago

SQLite vs Postgres for a local database (on disk, not over the network): who wins? (Each in their most performance oriented configuration)

  • thomascgalvin 3 years ago

    This is basically the exact use case SQLite was designed for; PostgreSQL is a marvel, and at the end of the day presents a much more robust RDBMS, but it's never going to beat SQLite at the thing SQLite was designed for.

  • bob1029 3 years ago

    >most performance oriented configuration

    I am 99% sure SQLite is going to win unless you actually care about data durability at power loss time. Even if you do, I feel I could defeat Postgres on equal terms if you permit me access to certain ring-buffer-style, micro-batching, inter-thread communication primitives.

    SQLite is not great at dealing with a gigantic wall of concurrent requests out of the box, but a little bit of innovation in front of SQLite can solve this problem quite well. The key is to resolve the write contention outside of the lock that is baked into the SQLite connection. Writing batches to SQLite on a single connection with WAL turned on and synchronous set to NORMAL is pretty much like operating at line speed with your IO subsystem.
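
    A minimal sketch of that single-connection, batched-write setup using Python's sqlite3 module (the schema and batch contents are made up for illustration):

      # One writer connection, WAL journal, synchronous=NORMAL, one transaction per batch.
      import sqlite3

      conn = sqlite3.connect("app.db")
      conn.execute("PRAGMA journal_mode=WAL")
      conn.execute("PRAGMA synchronous=NORMAL")
      conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")

      def write_batch(rows):
          # Grouping many inserts into one transaction amortizes the commit cost.
          with conn:  # BEGIN ... COMMIT
              conn.executemany("INSERT INTO events (payload) VALUES (?)", rows)

      write_batch([("a",), ("b",), ("c",)])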

    • prirun 3 years ago

      > I am 99% sure SQLite is going to win unless you actually care about data durability at power loss time.

      SQLite will handle a power loss just fine.

      From https://www.sqlite.org/howtocorrupt.html:

      "An SQLite database is highly resistant to corruption. If an application crash, or an operating-system crash, or even a power failure occurs in the middle of a transaction, the partially written transaction should be automatically rolled back the next time the database file is accessed. The recovery process is fully automatic and does not require any action on the part of the user or the application."

      From https://www.sqlite.org/testing.html:

      "Crash testing seeks to demonstrate that an SQLite database will not go corrupt if the application or operating system crashes or if there is a power failure in the middle of a database update. A separate white-paper titled Atomic Commit in SQLite describes the defensive measure SQLite takes to prevent database corruption following a crash. Crash tests strive to verify that those defensive measures are working correctly.

      It is impractical to do crash testing using real power failures, of course, and so crash testing is done in simulation. An alternative Virtual File System is inserted that allows the test harness to simulate the state of the database file following a crash."

  • samatman 3 years ago

    Postgres obviously.

    Sorry, just thought I'd buck the trend and assume a very write-heavy workload with like 64 cores.

    If you don't have significant write contention, SQLite every time.

    • ledgerdev 3 years ago

      Here's SQLite doing 100 million inserts in 33 seconds, which should fit nearly every workload, though it is batched: https://avi.im/blag/2021/fast-sqlite-inserts/

      So write contention from multiple connections is what you're talking about, versus a single process using sqlite?

    • dinosaurdynasty 3 years ago

      If you can have one "database" thread and 63 "worker" threads, send messages back and forth, and don't hold open transactions, this would probably work with SQLite. In other words, treat SQLite like Redis.

      • skyde 3 years ago

        So in your example the database thread is the Redis thread and the worker threads are your HTTP server threads, I assume.

        This is a good analogy, but there are still a lot of write-heavy scenarios where a real database like PostgreSQL or MySQL will have better throughput than Redis.

    • innocenat 3 years ago

      Where is write contention coming from if it's operated locally?

      • Thaxll 3 years ago

        SQLite is "single" threaded for writes.

      • d3nj4l 3 years ago

        ... you can get tons of requests on a server?

        • dinosaurdynasty 3 years ago

          Redis has the same limitation (only one transaction at a time) and is used a lot for webapps. It solves this by requiring full transactions up front. The ideal case for SQLite performance is to have only a single process/thread directly interacting with the database and to have other processes/threads send messages to and from the database process.

        • innocenat 3 years ago

          But that isn't "locally"?

  • RedShift1 3 years ago

    SQLite is always going to win in that category just from the fact that there are fewer layers of code to work through to execute a query.

    • remram 3 years ago

      Latency-wise maybe, but throughput can be more important for a lot of applications or bigger databases.

      I say "maybe" because even there, SQLite is much more limited in terms of query-planning (very simple statistics) and the use of multiple indexes.

      That's assuming we're talking about reads, PostgreSQL will win for write-heavy workloads.

    • electroly 3 years ago

      As long as you turn it into a throughput race instead of a latency race, PostgreSQL can definitely win. SQLite has a primitive query planner and a limited selection of query execution steps to choose from. For instance, all joins in SQLite are nested loop joins; it can't do hash or merge joins. It can't do GIN or columnstore indexes. If a query needs those things, PostgreSQL can provide them and beat SQLite.

      • ac2u 3 years ago

        Out of interest, what columnstore indexes are available for Postgres? I'd be happy to find out that I'm missing something.

        I know citus can provide columnar tables but I can't find columnar indexes for regular row-based tables in their docs. (use case of keeping an OLTP table but wanting to speed up a tiny subset of queries)

        Closest thing I could find was Swarm64 for columnar indexes but it doesn't seem to be available anymore.

    • sophacles 3 years ago

      > just from the fact that there are fewer layers of code to work through

      This is not an invariant. I've seen it be true, and I've seen it be false. Sometimes that extra code is just cruft, yes. Other times, though, it is worth it to set up your data (or whatever) to take advantage of mechanical sympathies in hot paths, or to filter the data before the expensive processing step, etc.

      • RedShift1 3 years ago

        I'm not talking about extra code, I'm talking about _layers_ of code. With PostgreSQL you're still sending data over TCP/IP or a UNIX socket, and are copying things around in memory. Compare that to SQLite that runs in the memory space of the program, thus no need for copying and socket traffic. There's just less middlemen (middlepersons?) with SQLite that are unavoidable with PostgreSQL. So less layers = less interpreting/serialization/deserialization/copying/... = higher performance. I will even argue that even if the SQLite query engine is slightly less efficient than PostgreSQL, you're still winning because of less memory copying going around.

        • fuckstick 3 years ago

          > less interpreting/serialization/deserialization/copying/... = higher performance

          Unfortunately for many database workloads you are overestimating the relative cost of this factor.

          > even if the SQLite query engine is slightly less efficient than PostgreSQL

          And this is absurd - the PostgreSQL query engine isn't just "slightly" more efficient. It is tremendously more sophisticated. People using a SQL datastore as a glorified key-value store are not going to notice - which seems to be a large percentage of the SQLite install base. It's not really a fair comparison.

          • ok_dad 3 years ago

            With SQLite, though, you could reasonably just skip doing fancy joins and do everything in tiny queries in tight loops because SQLite is literally embedded in your app’s code. You can be careless with SQLite in ways you cannot with a monolithic database server because of that reason. I still agree there are use cases where a centralized database is better, but SQLite is a strange beast that needs a special diet to perform best.

            • fuckstick 3 years ago

              Sometimes. This is only universally true if your datasets are tiny and your access patterns are simple. Moving a shitty suboptimal O(n^2) or worse algorithm locally when something linear or better is possible is going to hurt no matter where the DB is.

              > but SQLite is a strange beast that needs a special diet to perform best.

              I don’t see what is strange about it - for large datasets it’s the same complexity issues as anywhere else.

              Not sure specifically what your comment is trying to add, since I acknowledged the type of use case SQLite excels in - those where roundtripping is a dominating cost and “k-v” stores, i.e. simple queries. My entire point is that those are common but still niche use cases.

  • lvass 3 years ago

    SQLite. The most performant configuration is unsuited to most usage, and may lead to database corruption on a system crash.

    • rafale 3 years ago

      Should have said the most performance oriented setting that's also safe from data corruption.

      • lvass 3 years ago

        Then it depends on the usage. You'd likely need to run with synchronous mode on, and even with WAL, multiple separate write transactions are an issue. If you don't have many writes, or you buffer them into a small number of transactions, SQLite is the most performant.

  • kpgaffney 3 years ago

    I think the (unsatisfying) answer is "it depends". There's a huge amount of diversity in database workloads, even among the workloads served by SQLite as we mention in the paper.

    For read-mostly to read-only OLTP workloads, read latency is the most important factor, so I predict SQLite would have an edge over PostgreSQL due to SQLite's lower complexity and lack of interprocess communication.

    For write-heavy OLTP workloads, coordinating concurrent writes becomes important, so I predict PostgreSQL would provide higher throughput than SQLite because PostgreSQL allows more concurrency.

    For OLAP workloads, it's less clear. As a client-server database system, PostgreSQL can afford to be more aggressive with memory usage and parallelism. In contrast, SQLite uses memory sparingly and provides minimal intra-query parallelism. If you pressed me to make a prediction, I'd probably say SQLite would generally win for smaller databases. PostgreSQL might be faster for some workloads on larger databases. However, these are just guesses and the only way to be sure is to actually run some benchmarks.

  • ergocoder 3 years ago

    Functionality-wise, SQLite's dialect is really lacking...

    • simonw 3 years ago

      Is it the SQL dialect that's lacking, or is it the built-in functions?

      I agree that SQLite's default functionality is very thin compared to PostgreSQL - especially with respect to things like date manipulation - but you can extend it with more SQL functions (and table-valued functions) very easily.
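
      For example, with Python's sqlite3 module (which wraps SQLite's sqlite3_create_function API), registering a custom function takes a few lines; the regexp_extract name below is my own, not something SQLite ships with:

        # Registering an application-defined SQL function from Python.
        import re
        import sqlite3

        def regexp_extract(pattern, text):
            # Return the first match of pattern in text, or NULL if there is none.
            m = re.search(pattern, text or "")
            return m.group(0) if m else None

        conn = sqlite3.connect(":memory:")
        conn.create_function("regexp_extract", 2, regexp_extract)

        row = conn.execute(
            "SELECT regexp_extract('[0-9]{4}', 'order placed 2022-09-01')"
        ).fetchone()
        print(row[0])  # -> 2022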

      • eatonphil 3 years ago

        I like SQLite (qualifying not for you, simonw, but for others). But I hate that I can't be lazy by using arrays in SQLite... because they don't exist. group_concat is a poor approximation.

        Also, I genuinely dislike how loose SQLite is with allowed syntax. Probably it's preference. But even interactively I prefer to know immediately that I messed up a query. SQLite is so forgiving I've often wasted time trying to understand why my results are nonsense (because I typoed in the query and SQLite didn't fail the query).

        But I also strongly dislike Python for that reason and I know where you stand there. Maybe SQLite/PostgreSQL is similar to the dynamic/static language preference divide.

      • ergocoder 3 years ago

        Depends on what "easily" means.

        SQLite can't do custom-format date parsing or regex extraction. How do we extend something like this?

        If we go beyond simple functions to window functions, I imagine it would be even harder.

        At this point, we might as well use Postgres.

    • bob1029 3 years ago

      The entire point is to bring your own functions to SQLite, since it is presumably running in-proc and can be integrated with trivially.

      https://sqlite.org/appfunc.html

      We currently use this path to offer a domain-specific SQL-based scripting language for our product.

  • nikeee 3 years ago

    The documentation offers some advice on this:

    https://www.sqlite.org/whentouse.html

youngtaff 3 years ago

Why do people have to publish papers in a weird two column academic format instead of something that's more easily readable?

  • badgerdb 3 years ago

    Ha ha .. that is what the conference requires. It turns out that there is research showing that when you are reading a paper printed on physical paper, this 2-column format is good for readability and doesn't waste paper. Conferences still insist on this format even though most people no longer print papers.

    Now the good news is that these days, conferences have an accompanying video associated with the paper, and that may be a good place to start for many. That video will be published on the conference website (https://vldb.org/2022/) in about a week.

    • youngtaff 3 years ago

      Thanks, will lookout for the video

      (I tend to read most things on a screen and find two columns of small text tiring)

  • oxff 3 years ago

    I read papers most of the time on my phone, and these two-column papers are such a PITA to read, lol

Kalanos 3 years ago

I wish it had an optional server for more concurrent and networked transactions in the cloud.

  • bityard 3 years ago

    You may be interested in rqlite: https://github.com/rqlite/rqlite

  • jjtheblunt 3 years ago

    you could make one pretty easily, no?

    • axelthegerman 3 years ago

      I'd like to see that. I also think the single-writer situation is not great for web applications, but I don't see an easy way around it without sacrificing things like consistency.

      • bob1029 3 years ago

        See: LMAX Disruptor and friends. The magic spell that serializes many threads of events into one without relying on hard contention. You can even skip the hot busy waiting if you aren't trying to arbitrage the stock market.

        The way I would do it is an MPSC setup wherein the single consumer holds an exclusive connection to the SQLite database and writes out transactions in terms of the batch size. Basically: BEGIN -> iterate & process event batch -> END. This is very, very fast due to how the CPU works at the hardware level. It's also a good place to insert stuff like [a]synchronous replication logic.

        Completion is handled with busy/yield waiting on a status flag attached to the original event instance. You'd typically do a thing where all flags are acknowledged at the batch grain (i.e. after you committed the txn). This has some overhead, but the throughput & latency figures are really hard to argue with. It also has very compelling characteristics at the extremes in terms of system load. The harder you push, the faster it goes.
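
        A rough Python sketch of that shape, using a stdlib queue in place of a ring buffer and an Event in place of the busy-wait flag (all names are illustrative, not a production design):

          # MPSC: many producers enqueue writes, one consumer owns the SQLite connection.
          import queue
          import sqlite3
          import threading

          pending = queue.Queue()

          class Write:
              def __init__(self, sql, args):
                  self.sql, self.args = sql, args
                  self.done = threading.Event()   # completion flag on the event itself

          def writer():
              conn = sqlite3.connect("app.db")
              conn.execute("PRAGMA journal_mode=WAL")
              conn.execute("PRAGMA synchronous=NORMAL")
              conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT, v TEXT)")
              while True:
                  batch = [pending.get()]
                  while not pending.empty() and len(batch) < 1000:
                      batch.append(pending.get_nowait())   # drain whatever has queued up
                  with conn:                               # BEGIN ... COMMIT around the batch
                      for w in batch:
                          conn.execute(w.sql, w.args)
                  for w in batch:
                      w.done.set()                         # acknowledge at the batch grain

          threading.Thread(target=writer, daemon=True).start()

          w = Write("INSERT INTO kv VALUES (?, ?)", ("hello", "world"))
          pending.put(w)
          w.done.wait()   # producer blocks until its write has been committed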

      • dinosaurdynasty 3 years ago

        If you can treat sqlite transactions like redis transactions (send the entire transaction up front) it can work.

oaiey 3 years ago

After seeing one diagram: why the heck are we still talking to databases with SQL strings and not directly specifying the query AST? Admins, sure (a fancy UI could help there as well), but why in our code?

  • samwillis 3 years ago

    I believe most SQL engines cache the query plans for parameterized queries, which would cover the majority of requests.

    Caching the query plan is also going to go further in performance optimisations than just “precompiling” the SQL to an AST.

  • speed_spread 3 years ago

    Because query parsing time is totally insignificant compared to query IO?

    I mean, I get it but the chances that it makes a noticeable difference are zero in almost every case. Also you'd have to change a lot of the existing tooling, at which point you might as well send a compiled agent or use stored procs?

    • robertlagrant 3 years ago

      > Because query parsing time is totally insignificant compared to query IO?

      The problem there is that the SQL query string is not parsed at compile time of the host program, so things that could be caught at compile-time are not, and things like appending strings to SQL strings in an unsafe way are much too easy to do.

      • HelloNurse 3 years ago

        That's why there are query parameters (see https://www.sqlite.org/lang_expr.html#varparam for the comprehensive SQLite implementation) and automatic escaping. Not to mention tests and code reviews.

      • jsmith45 3 years ago

        The irony is that the SQL standard actually specifies two different (generally viewed as obsolete) methods for handling this.

        One is "SQL/CLI", which envisions that you provide a "module" of queries (parameterized and static), which you then compile with some database compiled tool, resulting in an object file you can link into your codebase that exports function calls for these queries.

        The other is "Embedded syntax", which is basically "embed RAW SQL statements into your program, run a database provided pre-processor to convert it to normal ADA, C, COBOL, FORTRAN, MUMPS, Pascal, or PL/I code, which is then compiled normally. In theory, this is supposed to generate a "client module" like SQL/CLI, and insert the needed code to call into that module in place of the SQL.

        For both of these, the queries are considered to be "prepared" at compilation time, but this mostly amounts to syntax checking. It could theoretically also provide warnings about possible execution errors based on the current state of the database (for example, if you are trying to select some column that does not currently exist in a given table), but it could not treat these as errors, since DDL statements run by this or other programs may change the table before the query is actually executed.

        The SQL standard calls every form of passing a string version of a query to the database "dynamic SQL" (not just SQL constructed in, say, a stored procedure and called with EXECUTE). This is because the database cannot statically analyze those queries at program compilation time, so you may need to link with additional libraries that include the query parser, etc. And supporting "dynamic SQL" at all is an optional feature.

  • okennedy 3 years ago

    We are... Spark's DataFrame is essentially a relational-algebra AST builder. Microsoft's LINQ interprets SQL directly at compile time. All of these, however, run queries more or less directly in the system in which they're specified.

    It helps to think of SQL strings as an untrusted wire format. Yes, parsing is a pain, but it comes with two main benefits: (i) The wire format is human writable/interpretable, with all the accompanying benefits, and (ii) The wire format is easily extensible in a predictable way.

    That latter one is particularly useful in keeping SQL's ecosystem open. Take a front-end library like SQLAlchemy or ScalikeJDBC for example. It's not practical for any one such library to support every extension provided by every database engine. SQL provides a fall-back for when you need a back-end feature that hasn't been implemented in any given front-end.

    • oaiey 3 years ago

      C# LINQ does pass an expression tree into the abstraction, but it then serializes that tree to SQL, and the database deserializes it back into an AST. LINQ-to-Objects is in memory and works on the AST directly.

      Also, both the LINQ language syntax and the library methods are a builder paradigm for the expression tree. Valid, but still far from an ideal representation of an AST.

  • bayindirh 3 years ago

    > Why the heck are we still talking to databases with SQL strings and not directly specifying the query AST?

    The same reason language servers took off. Instead of a one-to-one mapping, SQL enables a one-to-many mapping with minor tweaks, allowing everyone to do whatever they want over a well-known, well-defined, mature abstraction.

    In the same spirit, I might ask why we're not writing assembly or even machine code, and instead have programming languages. Testers, sure; abstraction means clarity up to an extent. But why do the developers themselves still use programming languages?

    • keybored 3 years ago

      You’re missing the point if you think that that is “in the same spirit”.

      SQL-as-strings and SQL-as-AST are still the same thing. What is being proposed is not to write procedural code for record retrieval instead of declarative SQL.

  • Timpy 3 years ago

    Isn't that what a parameterized query does? It separates the SQL logic from the inputs so the engine can cache the query and then accept inputs separately. Safer and more optimal at the same time: the engine doesn't have to re-optimize the same query again for the life of the connection. If my understanding is wrong, somebody please correct me; it's kind of hard to get good information on what's going on under the hood with these things.

  • manigandham 3 years ago

    A SQL query is an AST, but represented in a compact portable form. It also supports functions, procedures, and parameterization for flexible and safe query construction.

    Your code would get incredibly large and complicated if you had to specify any serious SQL query as a raw AST.

  • euroderf 3 years ago

    Check out PRQL [1]; it might be a conceptual model you'd like.

    [1] https://prql-lang.org/

  • leprechaun1066 3 years ago

    Most queries to kdb+ do exactly what you are asking for.

  • AndrewDucker 3 years ago

    Can you give an example of what you mean, and what we'd gain from it?

    • dkjaudyeqooe 3 years ago

      When you're accessing a SQLite database in code you have to generate a query string. Parameters ameliorate that somewhat, but in many cases you still have to regenerate a new string for each new query. It's inefficient to translate your query into a string only to have it parsed back into something structured by SQLite.

      • AndrewDucker 3 years ago

        Are you suggesting an alternate syntax for SQL-like queries, which is more compact?

        Or a specific one for SQLite?

        I'd be very surprised if generating the SQL query string and parsing it again was more than a trivial percentage of the query execution time, but happy to be proven wrong.

        • dkjaudyeqooe 3 years ago

          Probably, but it's still inefficient in other ways. There are memory issues and the code overhead of dealing with strings. It'd be much nicer to have a binary interface, but admittedly it would be much more complex.

          The real solution is to include an alternative to SQL that looks and works like Datalog including things like variables. That would make SQLite 100x more productive for programmers. But that will never happen.

        • rprospero 3 years ago

          As I said in another part of the thread, I was in a scenario where we were performing millions of inserts into a table of four integers, one row at a time. Generating the string and parsing it again wound up being enough to blow our 10µs time budget.

          • dkjaudyeqooe 3 years ago

            But with parameters you don't need to do that, or in any other scenario where only (literal) values change.

            On the other hand, if you were importing data and each line specified which table and column it had to go into you'd probably have to write a new SQL statement each time.

          • aaaaaaaaata 3 years ago

            What was the time constraint due to, out of curiosity?

            What did you do for a solution?

            • rprospero 3 years ago

              It was a neutron detector that stored the position, energy, and time stamp of each event in the database. The design spec was 10,000 neutrons per second, so that was the origin of the time limit.

              Of course, half those neutrons arrived within a 20ms window, so we had a buffer to handle the load. However, if the average remained above the limit, the buffer would fill up. There was a brief discussion of just ignoring events when the buffer was full, but that could introduce systematic errors in the data that would be impossible to detect, so it was better for the database to just crash.

              The solution was to tighten the slits on the neutron beam to lower the count rate to the point that we never filled the buffer. Granted, we were testing a high-flux technique, so that was a bit of a disappointment. Everything else in the instrument could handle an order of magnitude more neutrons, except this database.

              By the way, to be fair to the database designers, they were working with Clinton era tech and were probably told that the flux rate would never be a SUSTAINED 10,000 events per second.

      • chasil 3 years ago

        SQLite was released in the year 2000; it is 22 years old.

        Because of its age, it does carry deprecated API components that are maintained solely for backward compatibility.

        Notice how many _v2 and _v3 variants are present, denoting reworked aspects of the API:

        https://sqlite.org/c3ref/funclist.html

        A product of this age was designed (and redesigned) for specific needs. Unfortunately, your use case is not among them.

      • ThemalSpan 3 years ago

        Is that a fact or an intuition?

    • rprospero 3 years ago

      Not the OP, but I can give two gains. First off, by passing an AST instead of just an SQL string, we cut out a huge number of possible SQL injection attacks. Second, in most of the projects where I've used SQL, there's been some kind of database object that builds the actual query, which it then converts to a string. The database then takes that string and parses it into an AST. There are some performance gains to be made by skipping the middleman and just creating the AST directly.

      Many years ago, I was on a project that needed to add rows to a database with a hard 10µs limit. Each row was just four integers, so the writing part was trivial. However, allocating the string, formatting the integers into strings, then parsing the resulting string often put us over the time limit. Every time the time limit was breached, we lost about five grand. Why we were using an SQL database for this at all is a story for a different time.

      • Taywee 3 years ago

        You should be using prepared statements and variable bindings. You should never have to parse any statement more than once (many dynamic language interfaces even cache statements for you, and many SQLite libraries also offer an interface for this out of the box). You should also never be formatting integers into strings, but simply binding them to the prepared statement. I really hope you weren't interpolating them and formatting each statement with the literal values in the string. That would be horridly inefficient, and a bad misuse of any modern SQL database. I also hope you were properly using transactions to speed up your batches.

        See: https://www.sqlite.org/c3ref/stmt.html
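
        For example, in Python (table and column names invented from the description above), the INSERT below is compiled once and the integers are bound rather than formatted into the SQL, with one transaction around the whole batch:

          import sqlite3

          conn = sqlite3.connect("detector.db")
          conn.execute("CREATE TABLE IF NOT EXISTS hits (x INTEGER, y INTEGER, energy INTEGER, ts INTEGER)")

          rows = [(i, i + 1, 500, 1_000_000 + i) for i in range(100_000)]

          with conn:  # one transaction for the whole batch
              # executemany reuses the prepared statement and rebinds the parameters.
              conn.executemany("INSERT INTO hits VALUES (?, ?, ?, ?)", rows)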

    • danielheath 3 years ago

      Having the application build a string and pass it to a library which parses the string back into an AST cannot be as efficient as just building the AST, right?

      • yread 3 years ago

        It's a lot more efficient for the human who is used to reading SQL to read the SQL (or even SQL-producing code) than an AST.

    • pif 3 years ago

      Type safety at compile time to begin with.

  • emfax 3 years ago

    I’ve been trying to do exactly this. It must be possible.

  • liveoneggs 3 years ago

    PREPARE?

  • 323 3 years ago

    Didn't Microsoft LINQ try something like that? Was not particularly successful.

    • manigandham 3 years ago

      LINQ stands for Language-Integrated Query and is incredibly successful at its purpose of providing powerful querying functionality and extensions baked into the C#/.NET language space itself.

      This querying framework is what powers translations and compilation into SQL and several other languages (depending on the datastore provider used).

      EntityFramework is one of the most advanced ORMs out there and is supremely productive because of LINQ.

    • Semaphor 3 years ago

      LINQ is stunningly successful, LINQ Query syntax (what you probably are referring to) with LINQ to SQL less so.

      And yes, as usual, we have the amazing confusion of Microsoft Naming.

      But query syntax is essentially just a way to use an ORM that looks closer to SQL but is strongly typed.

    • oaiey 3 years ago

      LINQ creates an AST in .NET land; however, before passing it to an actual SQL database, it serializes the expression tree to SQL.

      So no, it does not talk AST to the database.

    • Ducki 3 years ago

      No, it never tried that.

  • afavour 3 years ago

    Why not?

    • izacus 3 years ago

      Same reason you don't read files by concatenating string commands in another language and then posting them to the OS.

naikrovek 3 years ago

My iPad won’t let me search through the PDF, but I couldn’t find where “SSB” was defined, if anywhere. I did not see it defined in the first paragraph, which is where it is first used.

Everyone: not all of your readers are domain experts. Omissions like this are infuriating.

Thaxll 3 years ago

"While it continues to be the most widely used database engine in the world"

It really depends on what you mean by that. Yes, it ships in every phone and browser, but I don't consider that a database. Is the Windows registry a database?

Oracle, MySQL, PG, and MSSQL are the most widely used DBs in the world; the web runs on those, not SQLite.

  • adamrezich 3 years ago

    There are far, far more SQLite instances than Windows Registry instances in the world.

    "SQLite is likely used more than all other database engines combined. Billions and billions of copies of SQLite exist in the wild. [...] Since SQLite is used extensively in every smartphone, and there are more than 4.0 billion (4.0e9) smartphones in active use, each holding hundreds of SQLite database files, it is seems likely that there are over one trillion (1e12) SQLite databases in active use."

    https://www.sqlite.org/mostdeployed.html

stonemetal12 3 years ago

>SQLite is primarily designed for fast online transaction processing (OLTP), employing row-oriented execution and a B-tree storage format.

I found that claim fairly surprising; SQLite is pretty bad when it comes to transactions per second. SQLite even owns up to it in the FAQ:

>it will only do a few dozen transactions per second.

  • tiffanyh 3 years ago

    > SQLite is pretty bad when it comes to transactions per second. SQLite even owns up to it in the FAQ: "it will only do a few dozen transactions per second."

    That is an extremely poor quote taken way out of context.

    The full quote is:

    FAQ: "[Question] INSERT is really slow - I can only do few dozen INSERTs per second. [Answer] Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second. Transaction speed is limited by the rotational speed of your disk drive. A transaction normally requires two complete rotations of the disk platter, which on a 7200RPM disk drive limits you to about 60 transactions per second."

    https://www.sqlite.org/faq.html#q19
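
    A quick way to see the difference yourself (a sketch, with arbitrary row counts): in autocommit mode every INSERT is its own durable transaction, while wrapping the loop in BEGIN/COMMIT lets thousands of inserts share a single commit.

      import sqlite3
      import time

      conn = sqlite3.connect("bench.db", isolation_level=None)  # autocommit: each statement is its own transaction
      conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")

      t0 = time.perf_counter()
      for i in range(100):
          conn.execute("INSERT INTO t VALUES (?)", (i,))        # one commit (and sync) per row
      print("autocommit:", time.perf_counter() - t0)

      t0 = time.perf_counter()
      conn.execute("BEGIN")
      for i in range(100_000):
          conn.execute("INSERT INTO t VALUES (?)", (i,))        # all rows share one commit
      conn.execute("COMMIT")
      print("one transaction:", time.perf_counter() - t0)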

    • hu3 3 years ago

      Yeah, my GP got me confused. I remember doing 40k inserts/s in a trading strategy backtesting program with Go and SQLite. Reads were of the same order of magnitude, I want to say around 90k/s. My bottleneck was CPU.

    • hnfong 3 years ago

      Given the prevalence of SSDs these days the figure might be out of date as well.

    • kpgaffney 3 years ago

      Thanks for the clarification. It's true that transaction latency is limited by the write speed of the storage medium. However, an "average desktop computer" these days has an SSD that can support tens of thousands of transactions per second, depending on the workload.

    • stonemetal12 3 years ago

      What is your point? If I need transactions, not just bulk-loading inserts, then SQLite isn't the bee's knees. PG can handle at least an order of magnitude more transactions per second on the same hardware.

  • jessermeyer 3 years ago

    Please quote the entire statement. And stop the needless "even owns up to it" FUD.

    > Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second. Transaction speed is limited by the rotational speed of your disk drive. A transaction normally requires two complete rotations of the disk platter, which on a 7200RPM disk drive limits you to about 60 transactions per second.
