Leveraging SIMD: Splitting CSV Files at 3Gb/S

blog.tinybird.co

89 points by __exit__ 4 years ago · 42 comments

zwegner 4 years ago

Pretty similar article from very recently: https://nullprogram.com/blog/2021/12/04/

Discussion: https://news.ycombinator.com/item?id=29439403

The article mentions in an addendum (and BeeOnRope also pointed it out in the HN thread) a nice CLMUL trick for dealing with quotes originally discovered by Geoff Langdale. That should work here for a nice speedup.

But without the CLMUL trick, I'd guess that the unaligned loads that generally occur after a vector containing both quotes and newlines in this version (the "else" case on lines 34-40) would hamper the performance somewhat, since they would eat up twice as much L1 cache bandwidth. I'd suggest dealing with the masks using bitwise operations in a loop, and letting i stay divisible by 16. Or just use CLMUL :)
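
(A minimal sketch of that trick as it's used in simdjson, not the post's code: assume a 64-bit quote_bits mask has already been extracted with PMOVMSKB or similar, and prev_inside_quote carries the open/closed state across 64-byte blocks -- both names are made up here.)

    #include <stdint.h>
    #include <immintrin.h> /* _mm_clmulepi64_si128; build with -mpclmul */

    /* Bit i is set iff byte i sits between an opening quote (inclusive)
       and its closing quote (exclusive). */
    static uint64_t inside_quotes(uint64_t quote_bits, uint64_t *prev_inside_quote) {
      uint64_t mask = (uint64_t)_mm_cvtsi128_si64(_mm_clmulepi64_si128(
          _mm_set_epi64x(0, (int64_t)quote_bits), _mm_set1_epi8((char)0xFF), 0));
      mask ^= *prev_inside_quote;                            /* carry quote state from the previous block */
      *prev_inside_quote = (uint64_t)((int64_t)mask >> 63);  /* all-ones if this block ends inside quotes */
      return mask;
    }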

  • davidm1729 4 years ago

    Hi, I'm one of the authors of the post

    Thanks for pointing us to CLMUL. I'm not familiar with this kind of multiplication, but converting the quote bitmask to a quoted bitmask would certainly make it faster. With this new bitmask, we could negate it and AND it with the newline mask, generating a mask of newlines that are not inside quotes. Getting the last newline would then be a simple CLZ of that mask, and there would be no need to resort to byte-by-byte processing.

    In our tests, going byte by byte for more iterations to keep the alignment when hitting the "else" case performed worse than making the unaligned loads, but as you say, "just use CLMUL" (as all loads will be aligned) :D
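
    (Sketching the negate/AND/CLZ step above with hypothetical mask names -- newline_bits and quoted -- not the post's actual code:)

        /* Newlines that are not inside quotes; the highest set bit is the
           last safe split point in this 64-byte block. */
        uint64_t unquoted_newlines = newline_bits & ~quoted;
        if (unquoted_newlines) {
          int last_newline = 63 - __builtin_clzll(unquoted_newlines);
          /* ...split the chunk just after byte last_newline... */
        }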

    • jart 4 years ago

      PMOVMSKB/BSF/POPCNT takes serious wizardry, but instructions like PCLMULLQLQDQ make you feel like Gandalf. It's defined:

          #include <stdint.h>

          typedef struct { uint64_t lo, hi; } pair;     /* 128-bit product */
          #define bsr(x) (63 - __builtin_clzll(x))      /* index of highest set bit */

          pair clmul(uint64_t a, uint64_t b) {
            uint64_t t, x = 0, y = 0;                   /* x = low 64 bits, y = high 64 bits */
            if (a && b) {
              if (bsr(a) < bsr(b)) t = a, a = b, b = t; /* optional */
              for (t = 0; b; a <<= 1, b >>= 1) {        /* t tracks bits shifted out of a */
                if (b & 1) x ^= a, y ^= t;              /* xor in the shifted copy of a */
                t = t << 1 | a >> 63;
              }
            }
            return (pair){x, y};
          }
      
      There's a famous paper on how it can perform polynomial division at 40gbps. It's really cool that it has practical applications in things like CSV too. https://www.intel.com/content/dam/www/public/us/en/documents...
    • zwegner 4 years ago

      CLMUL in general is a bit weird to wrap your head around, but a CLMUL with -1 isn't too tricky: it's like a running 1-bit sum, or in other words, each bit in the result is the parity of all the bits up to that point in the multiplier.

      > In our tests, going byte to byte for more iterations to keep the alignment when hitting the "else case" performed worse than making the unaligned loads, but as you say "just use CLMUL" (as all loads will be aligned) :D

      I was talking about using bitwise operations with the quote/escape/newline masks already computed (like in the blog post I linked), rather than a byte-by-byte loop. But yeah, CLMUL is better anyways :)
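
      (A scalar sketch of the "running 1-bit sum" view from the first paragraph -- equivalent to the low 64 bits of clmul(m, -1), using the standard prefix-xor shifts:)

          #include <stdint.h>

          /* Bit i of the result is the parity (XOR) of bits 0..i of m. */
          static uint64_t prefix_parity(uint64_t m) {
            m ^= m << 1;
            m ^= m << 2;
            m ^= m << 4;
            m ^= m << 8;
            m ^= m << 16;
            m ^= m << 32;
            return m;
          }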

    • gpderetta 4 years ago

      CLMUL is quite interesting. I learned about it when digging into how multiplication helps with hashing.

      A multiplication is, in practice, a sum over a series of shifts: one shifted copy of the multiplicand for each bit set in the multiplier, with the shift amount given by the index of that bit.

      The shifting and the combining are great for hashing as they "distribute" each bit around.

      CLMUL simply replaces the addition in that sum with xor (which can also be thought of as single-bit carryless addition).
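
      (A rough side-by-side of that view; both walk the set bits of the multiplier b, and the only difference is whether the shifted copies of a are combined with + or with xor:)

          #include <stdint.h>

          /* Ordinary multiply, low 64 bits: a sum of shifted copies of a. */
          uint64_t mul_shifts(uint64_t a, uint64_t b) {
            uint64_t r = 0;
            for (int i = 0; i < 64; i++)
              if (b >> i & 1) r += a << i;
            return r;
          }

          /* Carryless multiply, low 64 bits: same loop, xor instead of add. */
          uint64_t clmul_shifts(uint64_t a, uint64_t b) {
            uint64_t r = 0;
            for (int i = 0; i < 64; i++)
              if (b >> i & 1) r ^= a << i;
            return r;
          }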

  • mattewong 4 years ago

    Even with the CLMUL trick, CSV parsing does not play nice. It can be made to work for JSON parsing because you can make more assumptions. With CSV, it only works smoothly if you are willing to accept a subset of what most spreadsheet programs accept, i.e. to assume your CSV is "well-formed". Consider, for example, that the following three cells:

    AA"A,"AA""A","A"A"A

    when opened in Excel will all give you the same value. Using CLMUL to normalize would require many additional SIMD operations-- probably at least 8, if not more. At some vector size it will be worth it, but it's not clear that it is at 256 bits. The irony is that if you are stuck with CSV input, then the fact that you couldn't get a better format/encoding also suggests that you can't assume your CSV is "well-formed".

    • jart 4 years ago

      That's not what Python and Google Sheets do.

          >>> list(csv.reader(['''AA"A,"AA""A","A"A"A'''], dialect='excel'))
          [['AA"A', 'AA"A', 'AA"A']]
      
      Has the CSV format been standardized somewhere?
      • mattewong 4 years ago

        What I can tell you is that if you save a file with 'AA"A,"AA""A","A"A"A' (excluding the surrounding single-quotes) and then double-click to open it in Excel, you get 3 cells with the exact same values. Furthermore, if you run `echo 'AA"A,"AA""A","A"A"A' | xsv select 1,2,3` you again get the same 3 values. For people working with CSV, the user is far more likely to care about consistency with Excel than about consistency with some Python lib or with Google Sheets-- neither of which is used much, compared to Excel, in the worlds where CSV tends to reside (at least in my experience).

      • mattewong 4 years ago

        Huh? Your comment proves exactly what my prior post said, which is that you end up with 3 equal values that were each represented differently in the input. Looks like csv.reader + dialect=excel is doing exactly that.

    • zwegner 4 years ago

      Good points all around. Not sure what the OP's requirements are, but judging by their current code, CLMUL should do nicely (or they have a bug).

      And also, thanks for that example. Clearly I don't know CSV well enough--are quotes in fields that don't start with a quote not special?

      • mattewong 4 years ago

        If their data comes from a controlled bubble and they need not assume "real-world" data, then CLMUL might do nicely, but even in the best case it would likely only be a marginal improvement (and even then, I would be willing to bet, not at anything less than 512-bit vector sizes), since it still needs additional vector calls to support quoting. Obviously, if no quoting will be supported, it's even simpler, but then you also cannot support commas or newlines inside of cell values and are getting so far from "CSV" that you might as well just say you have pipe-delimited data which happens to use comma instead of pipe, in which case you don't need CLMUL. If quoting needs to be supported, CLMUL will still require a number of repeated passes, shifts, etc. to deal with the various cases, including an escaped first quote char, last quote char, non-first-or-last quote char, and embedded commas/newlines.

        • zwegner 4 years ago

          Thread's getting a bit old, hope you see this :)

          Referring to the OP, I'm not sure what exact dialect of csv they're using, but its quoting rules are nothing like Excel's: quotes are escaped with a separate character (possibly backslash), and quoted regions can start anywhere, not just the start of a field. This makes CLMUL much easier to apply, quite similarly to how it's used in simdjson. Finding escaped characters can be done branchlessly with a handful of scalar operations on the masks (this code is in simdjson too).

          For proper Excel-style parsing, you're quite possibly right (and looking at ZSV, I trust that you've thought about this problem a lot more than me). I'm not certain that it can't all be done efficiently with very few branches, though, using mostly SIMD and some scalar operations on masks. CLMUL might not be useful here, since quotes are treated completely differently depending on whether the field started with a quote. Instead you'd have a first phase basically looking for a comma or newline followed by a quote, then looking for the next unescaped quote. Escaped quotes within quoted fields can be found with a similar sequence to simdjson's backslash support (and removed with something like vcompressb on newer AVX-512 chips).
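
          (For the escaped-character part, a sketch of the simdjson-style mask arithmetic, ignoring the carry between 64-byte blocks; escape_bits is a made-up name for the backslash mask, and the result marks characters preceded by an odd-length run of escapes.)

              #include <stdint.h>

              static uint64_t escaped_chars(uint64_t escape_bits) {
                const uint64_t even_bits = 0x5555555555555555ULL;
                const uint64_t odd_bits  = ~even_bits;
                uint64_t starts      = escape_bits & ~(escape_bits << 1); /* first escape of each run */
                uint64_t even_starts = starts & even_bits;
                uint64_t odd_starts  = starts & odd_bits;
                /* Adding a run's start bit ripples a carry to just past its end. */
                uint64_t even_carry_ends = (escape_bits + even_starts) & ~escape_bits;
                uint64_t odd_carry_ends  = (escape_bits + odd_starts)  & ~escape_bits;
                /* An odd-length run ends on the opposite parity from where it started. */
                return (even_carry_ends & odd_bits) | (odd_carry_ends & even_bits);
              }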

          • mattewong 4 years ago

            Saw it :)

            > done efficiently with very few branches, though, using mostly SIMD and some scalar operations on masks [...] first phase basically looking for a comma or newline followed by a quote, then looking for the next unescaped quote

            Yes, that's almost exactly what ZSV does, except it just looks for any interesting char (i.e. delim or dbl-quote), irrespective of its position relative to other interesting chars, and then lets the non-SIMD loop figure out what to do with the chunks in between. We tried pretty hard to use CLMUL-- ZSV was designed after a decent-enough digestion of Lemire and Langdale's work to try those techniques-- but the JSON vs CSV differences were too much for us to overcome: the extra 8+ vector calls gave up more than the benefit. Furthermore, if you are going to "normalize" the value, even more of that benefit gets lost (in other words, if you read 'aa"aa,"aa""aa"', which are equivalent, and you want to spit out two equivalent values, then you will have to modify the second value before printing non-CSV, or modify the first value to print well-formed CSV, and either way you can't do that more efficiently with vector operations). Since we were not willing to give up the CSV edge cases, we were also not able to find any benefit from CLMUL. That's not to say we turned over every stone-- maybe there is a way and we just missed it.

            > are quotes in fields that don't start with a quote not special?

            Sorry, missed that q before. Yes, that is correct, at least according to how Excel parses with dbl-click-to-open (as opposed to doing an import operation from a menu). Quotes only serve as a delimiter when they are strictly the first char. So 'first cell, " second cell ,third cell' gives you 3 cells, the second of which starts and ends with a space and has a double-quote as its second char (i.e. the equivalent of '" "" second cell "').

            As an aside, we made a deliberate decision to be consistent with Excel, for no reason other than that, in our experience, the users who are least able to cope with edge cases on their own, and who therefore rely the most on the software to do what they want/expect, are also the most likely to accept Excel as the "standard", whereas users who want/need a different standard than Excel are also the most likely to be able to come up with some solution (e.g. python or other) if their need isn't met by the default behavior of a given software tool.

  • pclmulqdq 4 years ago

    The carryless multiplication instructions are amazing and people should use them more often. They are just so poorly explained that they feel like magic.

jagrsw 4 years ago

Not sure how the author of this entry on HN managed to change the original title from

gigabytes per second

to

gigabits per siemens

:)

  • HPsquared 4 years ago

    Staying with physics, "Gb/S" would be gigabarns per siemens: a ratio of cross-sectional area to electrical conductance.

    The barn is a unit of cross-sectional area, roughly the size of a uranium nucleus (about 1 barn). Uranium is pretty large in atomic terms; the name comes from the idiom "couldn't hit the broad side of a barn".

  • __exit__OP 4 years ago

    Autocorrect issues + fast fingers clicking submit without double-checking. Sorry about that.

    Whoever fixed the title, thank you :D

  • jeltz 4 years ago

    Isn't it easier to write 3GbΩ instead of 3Gb/S?

  • Sebb767 4 years ago

    Probably auto-capitalization gone wrong. Or some very new code ;)

mattewong 4 years ago

Stay tuned for a SIMD-powered CSV parser library and standalone utility about to drop this weekend. Alpha, but tests show it to be faster than anything else we could get our hands on.

liuliu 4 years ago

Splitting a CSV file into chunks and processing them independently isn't necessarily wrong (although there are implementations out there, which I won't name, that would be, because they guess). The trick, however, requires scanning twice: https://liuliu.me/eyes/loading-csv-file-at-the-speed-limit-o...

Nice article otherwise!

michaelg7x 4 years ago

Presumably solving the same kind of delimiter-finding issues as Hyperscan? https://news.ycombinator.com/item?id=19270199

Tuna-Fish 4 years ago

Why is the unit expression in the title messed up?

rwmj 4 years ago

Nice, but I'm afraid real-world CSVs are a lot more complicated than described, so don't use this code in production.
