CSVs Are Kinda Bad. DSVs Are Kinda Good

matthodges.com

107 points by hieronymusN a year ago · 132 comments

teddyh a year ago

The article talks about reading and parsing CSV data of unknown variants, but then skips to the solution being using a different format altogether. But you can only switch to a different format if you are producing data, not if you are reading it!

And if you are in control of producing data, just produce strict RFC 4180-compliant CSV data and everybody will be able to read it just fine. There is no need to make your reader’s lives difficult by using yet another non-standard data format.

See also: <https://news.ycombinator.com/item?id=39679753>

  • OscarCunningham a year ago

    I just had a look at RFC 4180. This is the grammar they suggest:

    > file = [header CRLF] record *(CRLF record) [CRLF]

    I find it kind of wild that you have to have at least one record. Suppose I have a program that lists the events that occurred on a given day. How do I represent the fact that the program ran successfully but that there weren't any events on that day?

    • joseda-hg a year ago

      Event - Logging Started. Additionally, Logging Ended (for the day) allows you to check for intermediate crashes and startup errors, kinda like a pulse / health check.

      Not necessarily how I'd do it intuitively, but doesn't seem that crazy

    • gnu8 a year ago

      Easy, count running the report as an event.

    • 7bit a year ago

      That isn't a problem of the file format. That's a problem of your process.

  • kitd a year ago

    > yet another non-standard data format.

    Tbf, ASCII delimiter characters have been around since the 1960s. They're not exactly reinventing the wheel

  • lazide a year ago

    It’s the classic ‘self driving is easy when we modify the world to be what we want’ type solution.

  • fragmede a year ago

    Sqlite isn't a standard per se, but outputting an Sqlite db file, if you're writing the export code, is easy enough, and enough of a standard that I dare say you'd be doing fine to output .db files marked as sqlite.

usrbinbash a year ago

> CSVs are kinda bad.

Not really.

What's bad is when people keep insisting on coming up with new and amazing CSV dialects.

https://www.ietf.org/rfc/rfc4180.txt is very clear about what CSV files are supposed to look like, and the fact that people keep ignoring this for whatever reason is not the format's problem.

And no, "using another format" is not a solution to this. Because: I can just invent a new DSV dialect. Or a JSON dialect. Or a dialect where the field separator is "0xFF00FF00" and the row separator is the string `DECAFCOFFEE` encoded in EBCDIC, all other characters have to be UTF-32, except for a, b and f, which also need to be EBCDIC encoded.

> For starters, it’s rather unreadable when opened in a text editor. But I bet you don’t really do that with your CSVs all that often anyway!

Wrong. I do that with csv files all the time. In fact I even have an amazing vim plugin just for them [0]. That's pretty much the point of having a plaintext tabular data storage format: That I can view and edit it using standard text wrangling utilities.

---

There is a much simpler solution to this problem: Don't accept broken CSV. If people keep ignoring standards, that's their problem.

[0]: https://github.com/mechatroner/rainbow_csv

  • brunokim a year ago

    "Broken" is a sliding scale, and it's unfeasible to refuse engaging at all times.

    If you are a multi-billion dollar company creating a new integration, you can demand that your small supplier provide an RFC-4180 compliant file, and even refuse to process it if its schema or encoding is not conformant.

    If you are the small supplier of a multi-billion dollar company, you will absolutely process whatever it is that they send you. If it changes, you will even adapt your processes around it.

    TFA proposes a nice format that is efficient to parse and in some ways better than CSV; in other ways it is not. Use it if you can and it makes sense.

    • usrbinbash a year ago

      I agree up to a point. It is a kind of tug-o-war, and yes, the weight of each side plays an important role there.

      Nevertheless, even in projects where my services are talking to something bigger, I will at the very least ask "why can't it be RFC compliant? is there a reason?". And without blowing my own horn overly much: quite a few systems larger than mine have changed because someone asked that question.

  • dghf a year ago

    > https://www.ietf.org/rfc/rfc4180.txt is very clear about what CSV files are supposed to look like

    Mm, not really. By its own admission, it is descriptive, not prescriptive:

    > This section documents the format that seems to be followed by most implementations

    And it came out in 2005, by which date CSVs had already been in use for some twenty or thirty years.

    • usrbinbash a year ago

      It doesn't matter when it came out, and it doesn't matter that it is descriptive. It is the standard, period.

      Yes, CSV is much, much older. In fact it predates personal computers. And it went through changes. Again: None of that matters. We have a standard, we should use the standard, and systems should demand the standard.

      Standards are meant to ensure minimal-friction interoperability. If systems don't enforce standards, then there is no point in having a standard in the first place.

      • jhbadger a year ago

        Yes, but you could argue that web browsers shouldn't accept broken HTML either. But they do, and that's why there is so much broken HTML out there in the wild. Same with broken CSV -- basically people's measure is "if Excel can read it correctly, it's fine", even if not every CSV library in every programming language can.

      • tttp a year ago

        "This memo provides information for the Internet community. It does not specify an Internet standard of any kind."

        • teddyh a year ago

          Note the qualifier: “not an Internet standard” (my emphasis).

          • usrbinbash a year ago

            And again: None of that matters. I am not talking about formalities here, I am talking about technical realities.

            Whether it is formally called a standard or not doesn't change the fact that this is the document everyone points at when determining what CSV is and is supposed to look like. So it is de facto a standard. Call it a "quasi-standard" if that makes you happy.

            • teddyh a year ago

              Oh no; I agree with you completely. I just wanted to point out that the document does not disclaim being a "standard"; it just says that it is not an "Internet standard".

  • SigmundurM a year ago

    > Don't accept broken CSV. If people keep ignoring standards, that's their problem.

    From the very memo you link to (RFC 4180):

    > Implementors should "be conservative in what you do, be liberal in what you accept from others" (RFC 793 [8]) when processing CSV files.

    • usrbinbash a year ago

      Oh, I am nothing but liberal when it comes to CSV: Clients get the liberty to either have their requests processed, or get a 400 BAD REQUEST

      And yes, I am aware that the standard says this. My counter question to that is: How much client-liberty do I have to accept? Where do I draw the line? How much is too much liberty?

      And the answer is: there is no answer. Wherever any system draws that line, it's an arbitrary decision; Except for one, which ensures the least surprise and maximum interoperability (aka. the point of a standard): to be "conservative", and simply demand the standard.

      • geoelectric a year ago

        I think the suggestion reflected a deep understanding that transitioning from decades of wild-west to standardized in the smooth fashion most likely to succeed would require that strategy.

        If you don’t accept whatever some org’s data is encoded with, they won’t consider it a win for standards, or swap out whatever is producing that data for something more compliant. They’ll consider it a bug, and probably use some other more flexible processor.

        On the other hand, if you can be flexible enough to allow quirks on import while not perpetuating them on export, eventually you and other software built with the same philosophy standardize the field.

        I do think there’s a point where things are standardized enough that you can safely stop doing that—when all the extra quirk code is so rarely used as to be irrelevant—but I’m unsure if we’ve reached it yet. It would be something to actually analyze, though, rather than just a philosophical decision.

        • usrbinbash a year ago

          > On the other hand, if you can be flexible enough to allow quirks on import while not perpetuating them on export, eventually you and other software built with the same philosophy standardize the field.

          How? The only thing I can see happening is perpetuation of sloppy use of standards. "Why, why should I change my |-delimited CSV dialect that requires a double-semicolon at the end of each row, which is arbitrarily denoted by either \n or \r or \n\r, when all those programmers will accommodate me, no matter how little sense it makes to do so?"

          > I do think there’s a point where things are standardized enough that you can safely stop doing that

          I agree. And that point was when someone sat down and penned RFC 4180.

          Everything after that point has to justify why it isn't RFC compliant, not the other way around.

  • AstroJetson a year ago

    > In fact I even have an amazing vim plugin just for them

    So this is gold. Editing xSV files has been an ongoing pain, and this plugin is just amazingly awesome. Thanks for the link to it.

  • tpoacher a year ago

    you mean to say that vim can't handle simple character substitution? /s

  • snapcaster a year ago

    No it isn't, in the real world. It's very much your problem if you're the team consuming these files. Try telling the head of accounting they need to make all their data RFC 4180 compliant and see how that goes.

    • usrbinbash a year ago

      > Try telling the head of accounting they need to make all their data RFC 4180 compliant and see how that goes

      Fun fact: I did. And not just for accounting systems, but all sorts of data ingestion pipelines. Did it work every time? No. Did it work in many cases? Yes. Is that better? Absolutely.

      Here is the thing: If I accept broken CSV, where do I stop? What's next? Next thing my webservice backends have to accept broken HTTP? My JSON-RPC backends have to accept JSON with /* */ style block comments? My ODBC load-balancer has to accept natural language instead of SQL statements (I mean, it's the age of the LLM, I could make that possible)?

      • mjevans a year ago

        I draw the line at a source that keeps changing how it's broken.

        If things are broken, but in a predictable, standard-for-that-source way... ugh, but at least it's their standard, and if some tweak gets the common tools working for that one standard, then everyone can move on and be happy.

mjevans a year ago

This keeps coming up as new people discover what CSVs are: an ancient TEXT data exchange format. The lowest vaguely common denominator. A style of format that software long out of support contract is happy to export data in.

The intent of the format is to be human readable and editable. Sure, tab characters can be used instead of commas (TSV files). Yes, there's that "" to escape a quote rule. Oh, and quoted values are optional; unquoted strings are fine as long as they contain no newline or record separator characters.
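
For what it's worth, Python's stdlib csv module follows exactly those rules; a quick sketch:

    import csv, io

    # QUOTE_MINIMAL (the default) quotes only when needed and doubles embedded quotes.
    buf = io.StringIO()
    csv.writer(buf).writerow(['Alice', 'She said, "Hello" and waved.', '42'])
    print(buf.getvalue())  # Alice,"She said, ""Hello"" and waved.",42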

Sure, you could make another CSV-inspired format which uses the old mainframe control characters; except, as keeps getting pointed out, even programmers often don't know how to enter raw flow control characters on their systems. Who even bashes those out these days? I know I have to look it up every time.

Rejoice that the format is so simple, it's all just text which software might convert to numbers or other values as it might desire.

  • JohnMakin a year ago

    I agree completely. Its simplicity is what gives it staying power.

    When I was an undergrad, I had kind of an anal software engineering 101 professor who was treating the course like he was a scrum master. The deliverable was to make some dumb crud app, and a requirement was it used a "database." It was so stupid simple to write a csv to s3 or local disk that I just used that for the entire project. He tried to fail me for not following the requirements, and I had to go to the dean of CS and argue that by definition, a structured data format on a disk is absolutely a database, and I won. I got graded horribly after that though.

  • vaylian a year ago

    > even programmers often don't know how to enter raw flow control characters on their systems.

    Yes, but that is because those characters are not meant to be entered directly. DSV values should either be created by a dedicated DSV editor or be constructed by a software library. You would rather use a paint program to create an image than write the image's bytes in a text editor.

    • lazide a year ago

      Aka a completely different use case than CSV.

      • throw0101d a year ago

        > Aka a completely different use case than CSV.

        How many CSVs are generated, edited, or viewed by Notepad.exe and how many by Excel (or Google Sheets)?

        I would posit the vast majority of CSVs are generated through some kind of program where you go to File > Export or File > Save As…. In that case, selecting a File Format drop-down option for TSV or DSV (with the corresponding file extension) would solve a lot of problems. (Or at least if CSVs from Excel were RFC 4180 compliant by default.)

        • lazide a year ago

          How many get edited or inspected in notepad at some point in their life? Nearly all of them (for any given workflow).

          • vaylian a year ago

            It is nice that text editors are abundantly available and can be used for the task. But once the CSV columns get too wide and irregular, you probably want to reach for a dedicated spreadsheet program, because otherwise it is too hard to figure out which column you are currently reading.

            There is still room between a text editor and a full-blown spreadsheet program. New DSV editors could emerge when the DSV format gains popularity.

            • lazide a year ago

              At the point someone is using a different format, they’ll likely pick something explicitly structured. Like everything from JSON, to Yaml, to Protobufs, or hell even XML.

              DSV seems like worst of both worlds. Not really structured, AND also not really viewable/editable by lowest common denominator tooling.

            • RaftPeople a year ago

              > when the DSV format gains popularity

              CSV is equivalent to Voyager I, the chances of catching up with that kind of head start are extremely low.

  • memtet a year ago

    Right, the author skipped right over human-readable TSV files which play nicely with sed/awk/grep/sort pipelines, and are supported by all CSV parsers and spreadsheet software.

    • ElevenLathe a year ago

      TSV is also my go-to when mucking around on the command line. Perfect for noodling with data before you have to put together an Excel file to show to management.

  • imtringued a year ago

    CSV isn't a common denominator of anything. Everything is communicated out of band. Nobody understands your CSV files.

wmal a year ago

The author seems to ignore the fact that CSV got so popular because it is human readable. If anyone wanted a binary format, there are plenty of them - most better than this DSV.

Also, I'm on a mobile right now, so I can't verify this, but it seems the format is flawed. The reader decodes UTF-8 strings after splitting the binary buffer by the delimiter, but I believe the delimiter may be part of a UTF-8 character.

Edit: just checked, and there's actually no chance that the delimiter the author chose would be part of the UTF-8 encoding of any character other than the delimiter itself.

  • zekica a year ago

    No, all UTF-8 multi-byte encodings have the most significant bit set.
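
    A brute-force check in Python backs this up (a sketch; it walks every encodable code point):

        # 0x1F can appear in UTF-8 only as U+001F itself: every byte of a
        # multi-byte sequence has its high bit set, so it can never be 0x1F.
        assert all(
            b'\x1f' not in chr(cp).encode('utf-8')
            for cp in range(0x110000)
            if cp != 0x1F and not (0xD800 <= cp <= 0xDFFF)  # surrogates don't encode
        )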

  • tpoacher a year ago

    CSVs aren't really readable either, though. They're "inspectable", but that's different. So if you want to read them, you'll need to either use specific software or do some preprocessing to align things properly, etc. ... in which case the extra step of performing a file-wide substitution of the record separator with newlines and the unit separator with tabs or something isn't a much worse problem.

  • postalrat a year ago

    I'd say CSVs stuck around because there weren't any other alternatives that could be easily created, appended to, and read by different apps.

  • boomlinde a year ago

    > The author seems to ignore the fact that CSV got so popular because it is human readable.

    It might seem that way if you didn't actually read the article:

    > So what’s the downside? This custom FEC tooling might give you a hint.

    > For starters, it’s rather unreadable when opened in a text editor.

bvrmn a year ago

[Grumpy mode start]

Some nitpicks, maybe someone finds them useful. Could we talk about code design a little bit?

    class DSV:
        @property
        def delimiter(cls) -> bytes:
            return b'\x1F'

        @property
        def record_separator(cls) -> bytes:
            return b'\x1E'

        @property
        def encoding(cls) -> str:
            return 'utf-8'
It's Python; do not make premature properties for static values.

    class DSV:
        delimiter = b'\x1F'
        record_separator = b'\x1E'
        encoding = 'utf-8'
Also, it's a false inheritance relationship: a Writer is not related to configuration. You can't make any other useful subclasses of DSV (OK, maybe DSVReader, but that's it). At the very least it should be the opposite way: an abstract Writer operating on instance configuration attributes, and a DSVWriter defining those attributes.

Also, `self._buffer += chunk` is O(N^2). It starts to bite even for buffers as small as 100 bytes. It's OK for an example, but it's an issue for real code. The example at least buffers the incomplete record, not the whole read chunk (good!), but it does only one split at a time (bad).
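
A sketch of one way around both issues (DSVReader and feed are hypothetical names, not the article's API): accumulate into a bytearray, and split out every complete record the chunk contains.

    class DSVReader:
        delimiter = b'\x1f'
        record_separator = b'\x1e'
        encoding = 'utf-8'

        def __init__(self):
            self._buffer = bytearray()  # in-place extend, amortized O(1) per byte

        def feed(self, chunk: bytes):
            self._buffer += chunk
            *records, tail = self._buffer.split(self.record_separator)
            self._buffer = bytearray(tail)  # keep only the incomplete tail
            for record in records:
                yield [field.decode(self.encoding)
                       for field in record.split(self.delimiter)]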

[Grumpy mode end]

Nevertheless, the article is very valuable and interesting to read. CSV gotchas are well described.

Pikamander2 a year ago

From what I've seen, the biggest problem isn't with the CSV standard (even though it has a lot of faults), but rather that a lot of software that utilizes CSVs is poorly tested.

I can't tell you how many times I've downloaded a CSV that didn't escape quotes or newlines correctly, or how many times Excel has failed to correctly parse a perfectly valid CSV due to some decades-old quirk.

I know that there are better formats that make these types of situations pop up less, but is a little bit of quality control too much to ask for? If you've tested your software to make sure that it can handle CSVs with line breaks, tabs, and both types of quotes, then you've seemingly done more testing than 90% of the software out there.

On that note, the LibreOffice Calc team deserves major credit for how many different CSV formats it can handle. It's saved my bacon so many times when Excel wasn't up to the task.

brunokim a year ago

I read a comment here some years ago from someone discovering ASCII field delimiters and excited to use them. They then discovered that those characters are only used in three places: the ASCII spec, their own code, and the data from the first client where they tried to use this solution.

Any file format needs a well-specified escape strategy, because every file format is binary and may contain binary data. CSV is kinda bad not only because, in practice, there's no consensus on escaping, but also because we don't communicate what the chosen escaping is!

I think a standard meta header like the following would do wonders to improve interchangeability, without having to communicate the serialization format out-of-band:

    #csv delim=";" encoding=utf8 quote=double locale="pt-BR" header=true

(RFC-4180 does specify that charset and header may be specified in the MIME type)
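
Parsing such a header is a few lines; a sketch, assuming the exact shape shown above (parse_meta is a hypothetical helper):

    import shlex

    def parse_meta(line: str) -> dict:
        # '#csv delim=";" encoding=utf8 quote=double locale="pt-BR" header=true'
        assert line.startswith('#csv')
        return dict(token.split('=', 1) for token in shlex.split(line[4:]))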

  • bigbuppo a year ago

    To me it's wild that the problem was solved back in the early 1960s (and really, well before that) but everyone just ignored it because of reasons and now we're stuck with a sub-optimal solution.

NoboruWataya a year ago

The only real benefit of CSV (other than that it is widely supported) is that it is easy for humans to read and write. The approach in this article solves the quoting problem, but also removes that benefit. If you have the power to move from CSV, surely JSON would be better if you need to keep the human readable/writable feature. And if you don't need it, there are other more featureful binary formats out there like parquet.

  • Filligree a year ago

    You can’t put comments in JSON, while that’s fairly easy in CSV. This makes JSON unusable most of the time for human-editable data.

    • imtringued a year ago

      There is no such thing as a comment in CSV.

      • Filligree a year ago

        In many dialects there are. Usually you start the line with #.

        Comments will happen. If your file format doesn’t allow comments, then people will make up an extension to allow it. This is true even for binary formats.

  • zelphirkalt a year ago

    JSON is often good, but it also potentially has a lot of overhead, depending on how sparse the data is. For sparse data it might be better, but for dense data it has the overhead of mentioning attribute names over and over again. Of course you could also have arrays in JSON, not writing attribute names over and over, but then you are basically back to a CSV inside the JSON file ...
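
    A quick way to see the difference (a sketch):

        import json

        rows = [{'date': '2024-01-01', 'indicator': 'A', 'value': 1}] * 1000
        as_objects = len(json.dumps(rows))  # attribute names repeated per row
        as_arrays = len(json.dumps([list(r.values()) for r in rows]))  # CSV-in-JSON
        print(as_objects, as_arrays)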

    • imtringued a year ago

      >Of course you could also have arrays in JSON, not writing attribute names over and over, but then you are basically back to a CSV inside the JSON file ...

      You're confusing the concept of tabular data with the file format. If the most natural way to represent tabular data is through a 2D array, then so be it. The vast majority of people aren't complaining that they have to hardcode the meaning of "the last name is written into the fifth column"; they are cursing that the fifth column has suddenly shifted into the sixth column, because the first name contained a comma.

joeld42 a year ago

I like the idea but this is non-standard enough to be just as hard as making a custom format.

In my experience, the best way to handle this is:

1) Use TSV (tab-separated) instead of CSV (most things that export CSV also export TSV). Strip LF characters while reading and assume newlines are CR.

2) If you have a stubborn data source that insists on CSV, convert it to TSV in a pre-process. This could be a separate step or part of your reader as you're reading in the file. That means there's a single place to handle the escaping nonsense, and you can tailor that to each data source.
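
A sketch of step 2, assuming the stdlib csv module can cope with each source's quoting (the paths and function name are hypothetical):

    import csv

    def csv_to_tsv(src_path: str, dst_path: str) -> None:
        with open(src_path, newline='') as src, open(dst_path, 'w') as dst:
            for row in csv.reader(src):
                # The single place to handle escaping nonsense, per data source.
                cleaned = (field.replace('\t', ' ').replace('\n', ' ') for field in row)
                dst.write('\t'.join(cleaned) + '\n')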

tbrownaw a year ago

> If we used 31 as a field delimiter and 30 instead of newlines, we solve every single edge case from above. Why? Because these are non-printing characters that should never appear in a text-stream data set.

I have in fact seen CSV files used as an interchange format for things that include non-plaintext fields. And I've seen nested CSV files.

RockRobotRock a year ago

>For starters, it’s rather unreadable when opened in a text editor. But I bet you don’t really do that with your CSVs all that often anyway!

I really wish that were true.

FerretFred a year ago

I had to LOL a bit about this. I built a career that lasted over 30 years writing software that deciphered clients' attempts to produce sales data files in CSV format.

Many times they just couldn't seem to find the comma. Other times there were commas in the item description (unescaped). My favourite, though, was when files were edited collaboratively using Mac, Windows, and Linux machines - multiple line-end types FTW! Like I said, a long and somewhat inglorious career.

zelphirkalt a year ago

CSVs are a subset of DSVs. So I guess the idea is that using that specific subset is bad. But then again, it sort of doesn't matter too much which character is used for separation, at least as long as it is not a character that frequently appears as part of a cell value, because that would require a lot of escaping.

cafard a year ago

About a month ago, somebody posted a link to an interview with Brian Kernighan. About 6 minutes in, he talks about the difficulty of writing a decent CSV parser: https://www.youtube.com/watch?v=_QQ7k5sn2-o

Diti a year ago

The same idea, applied to JSON: https://shkspr.mobi/blog/2017/03/kyli-because-it-is-superior...

TomMasz a year ago

When we get to CSVs, I tell my Python students that while the csv module does do a lot of nice things for you, CSVs are still a minefield, and you really have to look at the file in a text editor first if you're not the one who created it.
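
The csv module can even do a first pass for you (a sketch; 'mystery.csv' is a hypothetical file, and Sniffer is easily fooled):

    import csv

    with open('mystery.csv', newline='') as f:
        sample = f.read(4096)
        dialect = csv.Sniffer().sniff(sample)          # guesses delimiter and quoting
        has_header = csv.Sniffer().has_header(sample)  # guesses if row 1 is a header
        f.seek(0)
        rows = list(csv.reader(f, dialect))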

knallfrosch a year ago

If you strip the exchangeability from an exchange format, it is useless.

DSVs didn't work with Google Sheets, vim, or Python – I assume this is the exhaustive list of software the author would have needed support from. The question, then: if no software understands the format, what's the point?

> I first learned about these ASCII delimiters while working with .fec [Federal Election Commission] files.

And then the author instantly chose a different delimiter. Two parties and already two standards. That should have been the final red flag for this proposal.

--- Aside: CSVs have so many problems with their data format that you always have to verify them anyway.

Germans write 5.000,24€ where an American would write $5,000.24. Date strings. URL-encoded strings. Numbers as strings. Floating-point numbers.

Solving the delimiter problem accomplishes nothing.

snthpy a year ago

I was wondering what DSV is and saw it's a term the author created. I have usually seen this format called ASV (ASCII Separated Values).

There's also a more modern USV (Unicode Separated Values) which has visible separators.

mattewong a year ago

I cannot imagine any way it is worth anyone's time to follow this article's suggestion vs. just using something like zsv (https://github.com/liquidaty/zsv, which I'm an author of) or xsv (https://github.com/BurntSushi/xsv/edit/master/README.md), and then spending the time saved on "real" work.

poikroequ a year ago

If you're not concerned with the size of the file, you might consider just using NDJSON.

https://github.com/ndjson/ndjson-spec
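
Round-tripping NDJSON takes nothing but the stdlib; a sketch (the file name is hypothetical):

    import json

    records = [{'name': 'Alice', 'note': 'She said, "Hello" and waved.'}]

    with open('events.ndjson', 'w') as f:
        for record in records:
            f.write(json.dumps(record) + '\n')  # one JSON object per line

    with open('events.ndjson') as f:
        back = [json.loads(line) for line in f]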

dflock a year ago

If only...

Every time I have to do any major work with CSVs, I re-lament this exact thing.

I think the only way this could ever become more widespread is to fix all the open source tooling so that it's eventually just supported everywhere - then keep evangelizing for... ~30 yrs.

Probably you should also register a new mime type and extension and make it a new thing - don't overload .CSV any further - but make the same tooling support it.

  • zelphirkalt a year ago

    If time could be turned back, a good idea would be to make CSV mean CSV. Not semicolon separated values, not any other thing separated values, but only comma separated values. To not overload the name in the first place.

    • jltsiren a year ago

      And I would rename the format to SSV and make semicolon the separator. Comma is a terrible choice, because it's used as the decimal separator in many countries around the world.

    • Ekaros a year ago

      Yep. Also limit data that can be stored there. Absolutely no decimal values. Absolutely no text.

      Ban those things and it starts to become reasonable enough for general use.

tpoacher a year ago

People here complaining this guy is suggesting a "new" standard: it's ASCII. It is already a standard, and probably a lot more sensible than others that followed.

I too have wondered, ever since I discovered their existence, why the hell we aren't already using those special characters.

jgord a year ago

CSV is kinda great .. but it does help to have nice tools to wrangle it, such as the famous xsv by burnt-sushi.

eigenvalue a year ago

I have found that pandas is much better than the standard library csv module for just importing random CSV files and automatically figuring out what you would want to do most of the time: detecting column headers, dealing with quoted strings, etc.
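
For example (a sketch; 'random.csv' is hypothetical, and sep=None asks pandas to sniff the delimiter):

    import pandas as pd

    df = pd.read_csv(
        'random.csv',
        sep=None,              # sniff the delimiter
        engine='python',       # required when sep=None
        encoding='utf-8-sig',  # tolerate a leading BOM
    )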

euroderf a year ago

Are there any commonly-used fonts that display FS GS RS US as cool graphical characters ? If I'm going to use them to structure text, I want them to be visible, and I want them to be clearly distinguishable from the text.

canimus a year ago

Had a similar challenge when writing alphareader (it's on GitHub). HN comments helped me to think in multi-byte separators, and one thing is sure: no matter which char you choose, it will appear in the wrong place at some point.

jmclnx a year ago

['Alice', 'She said, "Hello" and waved.', 'Fred''s Car is broken']

You still have the issue described for "" with '', if I read the examples correctly.

mr90210 a year ago

Nice! Regarding support from third-party software, perhaps it would be worth writing a specification for DSVs. I think that could ease adoption by well-known software.

beardyw a year ago

I wanted a quick and dirty way to parse a CSV in JS the other day, and just added square brackets around it and used JSON.parse.

Am I alone in this?
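
For what it's worth, the same trick works in Python, though only when every field happens to be valid JSON already (double-quoted strings or bare numbers, no "" escapes); a sketch:

    import json

    line = '"Alice","She said hi",42'
    row = json.loads('[' + line + ']')  # ['Alice', 'She said hi', 42]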

  • gsck a year ago

    That's actually pretty smart.

    The number of times I have written a simple CSV parser to correctly handle quoted strings and the like is more than I have digits, when I could just pretend it's JSON.

    Going to make a mental note to try this next time!

cogman10 a year ago

CSVs are bad. If you can change the format then don't use a DSV, use parquet and a library for your language to consume parquet.

It's less code for you and you can do neat things like zstd compression on the columns.

Bonus, it also doesn't require that you load and unload everything in memory.

https://arrow.apache.org/docs/python/parquet.html
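
A sketch with pandas on top of pyarrow (the file names are hypothetical):

    import pandas as pd

    df = pd.read_csv('input.csv')
    df.to_parquet('data.parquet', compression='zstd')  # columnar, zstd-compressed

    # Read back just the columns you need instead of the whole file:
    subset = pd.read_parquet('data.parquet', columns=['name'])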

Decabytes a year ago

When I'm in control I just produce a TSV instead of a CSV. A comma is much more likely in text than a tab

kvbe a year ago

Ok, but which tool can you use to edit csvs with the same power as excel… without messing with the csv?

BeFlatXIII a year ago

What this DSV format needs for evangelization is for someone to create a front end editor for it.

dsevil a year ago

Original author writes: >>> "Quick aside: I first learned about these ASCII delimiters while working with .fec files. For whatever reason, the Federal Election Commission in the United States also decided that they needed to ditch the comma, but they landed on using ASCII character 28, which is supposed to be used as a file separator, not a field separator. I have no idea why they picked that one when 31 was right there. Anyway, the FEC also has a tool called fs2comma.exe that turns it back into a CSV format, and a couple of years ago I filed a FOIA request and got the source code."

I can only speculate on this, but in Perl, for fake multidimensional arrays à la `$foo{$x,$y,$z}`[^1], Perl uses ASCII character 28 (U+001C INFORMATION SEPARATOR FOUR) as its default subscript separator. Perl borrowed this feature from AWK, which uses the same character by default for the same purpose.

Based on Perl, I initially used that same character for that same purpose in a project or two. I cannot speculate on why Aho, Weinberger, and/or Kernighan chose that character. (On or before 1977.)

[^1]: Not to be confused with nested array (or hash) references in Perl, a truer form of multidimensional arrays: `$foo->[$x]->{$y}->[$z]`

foobarkey a year ago

DSVs are pretty bad, CSVs are kinda OK (just don’t ever open one in Excel)

  • tomschwiha a year ago

    Directly opening CSV files in Excel does not work for me either; importing a CSV file, however, works decently well for me.

bionhoward a year ago

Love it. I’m gonna use this. Thank you for sharing!

mberning a year ago

> Because these are non-printing characters that should never appear in a text-stream data set.

Good luck with that.

welcome_dragon a year ago

Ah yes good old CSV. It's perfectly fine to use for data transfer and there are libraries for (probably) every language that handle it perfectly to spec.

The problem isn't "CSV". The problems come from: - "excel doesn't like this CSV therefore it's not valid" - "what do you mean the CSV I sent you is wrong? I used excel" - "standard? What's a standard? I put info then a comma. That should be good enough for anyone"

CSV, when done right (i.e. following a standard), is a great format. It's human readable, less verbose than, say, JSON, and everybody understands it.

Just have to make sure business people (and less technical technical people) understand that CSV != Excel and vice-versa.

eth0up a year ago

Question: I started with a deliberately convoluted PDF, which after much effort I filtered, sorted, and reorganized, transferring the 18,000 useful lines to a CSV. These lines are simple, with dates, indicators, and corresponding numbers.

The purpose is to statistically analyze the numbers for anomalies or any signs of deviation from expected randomness. I do this all in Python 3 with various libraries. It seems to be working, but...

What is a more efficient format than csv for this kind of operation?

Edit: I have also preserved all leading zeros by conversion to strings -- CSV readers don't care much for leading zeros and simply disappear them, but quotes fix that.

  • brunokim a year ago

    18k lines is very small, CSVs are fine as storage option.

    My rule of thumb is that anything that fits into Excel (approx 1M lines) is "small data" and can be analysed with Pandas in memory.
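
    A sketch that also sidesteps the leading-zeros issue you mention (the file name is hypothetical):

        import pandas as pd

        df = pd.read_csv('events.csv', dtype=str)  # dtype=str keeps '007' from becoming 7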

    • eth0up a year ago

      Hey, thanks for taking the time to reply. I won't be reaching 1M anytime soon, so good to know!

gsck a year ago

The phones we use at {JOB} can be programmatically controlled using their proprietary command language, which is just CSV, with each command ended by a newline (because how else would you do it? packet boundaries, pfft).

It's something I've never understood: why not use something more standard like SIP, or even a more structured message format? Having to parse CSV across N different packet boundaries is a royal PITA.

TudorAndrei a year ago

This reminds me of https://xkcd.com/

As others said, most of the time, if you are producing them, just produce them right, or choose other formats.

If you don't, then pray for the best.
