The Beauty of Unix Pipelines

prithu.xyz

649 points by 0x4FFC8F 6 years ago · 387 comments

gorgoiler 6 years ago

Pipes are wonderful! In my opinion you can’t extol them by themselves. One has to bask in a fuller set of features that are so much greater than the sum of their parts, to feel the warmth of Unix:

(1) everything is text

(2) everything (ish) is a file

(3) including pipes and fds

(4) every piece of software is accessible as a file, invoked at the command line

(5) ...with local arguments

(6) ...and persistent globals in the environment

A lot of understanding comes once you know what execve does, though such knowledge is of course not necessary. It just helps.

Unix is seriously uncool with young people at the moment. I intend to turn that around and articles like this offer good material.

  • jcranmer 6 years ago

    > (1) everything is text

    And lists are space-separated. Unless you want them to be newline-separated, or NUL-separated, which is controlled by an option that may or may not be present for the command you're invoking, and is spelled completely differently for each program. Or maybe you just quote spaces somehow, and good luck figuring out who is responsible for inserting quotes and who is responsible for removing them.
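
    For instance (GNU versions of these tools assumed), the same "items are NUL-separated" idea is spelled differently at every stage of one short pipeline:

        find . -type f -print0 |   # find spells it -print0
            grep -z 'abc'      |   # grep spells it -z / --null-data
            sort -z            |   # sort spells it -z / --zero-terminated
            xargs -0 ls -l         # xargs spells it -0 / --null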

    • gorgoiler 6 years ago

      To criticize sh semantics without acknowledging that C was always there when you needed something serious is a bit short sighted.

      There are two uses of the Unix “api”:

      [A] Long lived tools for other people to use.

      [B] Short lived tools one throws together oneself.

      The fact that most things work most of the time is why the shell works so well for B, and why it is indeed a poor choice for the sort of stable tools designed for others to use, in A.

      The ubiquity of the C APIs of course solved [A] use cases in the past, when it was unconscionable to operate a system without cc(1). It’s part of why they get first class treatment in the Unix man pages, as old fashioned as that seems nowadays.

      • jcranmer 6 years ago

        There's a certain irony in responding to criticism of that which you're extolling by saying not to use it.

        And the only reason I might be pushed down that path is because the task I'm working on happens to involve filenames with spaces in them (without those spaces, the code would work fine!), because spaces are a reasonable thing to put in a filename unless you're on a Unix system.

        • enriquto 6 years ago

          > because spaces are a reasonable thing to put in a filename unless you're on a Unix system.

          Putting spaces in a filename is atrocious and should be disallowed by modern filesystems. It is as if you could put spaces inside variable names in Python. Ridiculous.

          • necrotic_comp 6 years ago

            At Goldman, the internal language Slang has spaces in variable names. It's insane at first glance and I got into an argument with my manager about why in the world this was acceptable, and he could give me no other answer than "this is the way it is".

            But when you realize that space isn't a valid separator between tokens, seeing things like "Class to Pull Info From Database::Extractor Tool" actually becomes much easier to read and the language becomes highly expressive, helped somewhat by the insane integration into the firm's systems.

            I was on your side until I tried it, but it can actually be quite useful, esp. if everything is consistent.

            • enriquto 6 years ago

              > I was on your side until I tried it, but it can actually be quite useful, esp. if everything is consistent.

              On my side? You cannot imagine how extreme one can be... If I had my way, programming languages would only allow single-letter variables and space would be the multiplication operator.

              • nerdponx 6 years ago

                What possible justification could you have for making this mandatory?

                Math notation being dependent on single-character names is an artifact of "concatenation as multiplication" -- which in my opinion is a useful but sloppy historical convention with little merit or relevance in non-handwritten media.

              • necrotic_comp 6 years ago

                Okay - I wasn't meaning to pick a fight here. Just saying that there are definite use cases where spaces in variable names can work and actually be useful. That's all.

          • HeadsUpHigh 6 years ago

            I'm not going back to CamelCase or underscores for my normal day to day file naming. The problem with spaces only exists inside the IT world and it's something they should find a way around.

            • jyounker 6 years ago

              At this point I'd be happy if all unix tools had a `--json` option. Then the whole space issue goes away. So do a bunch of parsing issues.

              Unix tools standardized a character representation (e.g. everything is ASCII). The next step is a unified standard for the syntactical representation.
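
              A few tools have in fact grown one; iproute2's ip, for example, has a -j flag on reasonably recent Linux (jq assumed for the field selection):

                  ip -j addr show | jq -r '.[].ifname'   # JSON out of ip, field selection in jq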

              • laumars 6 years ago

                ASCII isn’t just a “character representation” though. It includes code points for data serialisation and transmission control. You can represent tables in ASCII without resorting to printable characters or white space as field separators.

                I don’t know why POSIX tools don’t already do this but I suspect it’s because most tools would have evolved by accident rather than being designed from the ground up with edge cases in mind.
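
                A small sketch of that idea, using the ASCII unit separator (0x1F) between fields and the record separator (0x1E) between rows (any POSIX-ish shell and awk should cope):

                    us=$(printf '\037'); rs=$(printf '\036')     # ASCII unit / record separators
                    printf "alice${us}42${rs}bob${us}7${rs}" |
                        awk -v FS="$us" -v RS="$rs" '{ print $1, "->", $2 }'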

            • enriquto 6 years ago

              I agree. The file load/save dialogs of all GUI should work in such a way that spaces typed by the users in a filename field are always transparently changed to something different (for example, the unicode non-breaking space).

              • skissane 6 years ago

                > (for example, the unicode non-breaking space)

                That's potentially even worse in a file name than plain old space is. It looks like a normal space, but isn't.

                If you are going to replace spaces in file names with something else, underscores are the best option I think.

              • HeadsUpHigh 6 years ago

                Sounds like taking "I know better than the users" to another level.

          • outadoc 6 years ago

            You should take a stroll in the real world sometime, where spaces and Unicode exist :)

            • gorgoiler 6 years ago

              The parent comment is extreme, and the real world is indeed very diverse, but I would also be quite surprised to find a software project with spaces in internal filenames.

              • outadoc 6 years ago

                So would I, but the world is, thankfully, not entirely composed of software projects.

            • enriquto 6 years ago

              Yeah, any unicode character is OK on a filename, except maybe space and slash.

              • skissane 6 years ago

                I don't think C0 or C1 controls should be allowed in filenames.

                Why allow them? And they potentially pose security issues.

                With some, admittedly somewhat obscure [1][2], terminal emulators, C0 and C1 controls can even be used to execute arbitrary code. You could say these terminal emulators are insecure, and you may well be right, but the fact is they exist.

                [1] See for example page 211 of https://static.zumasys.com/zumasys/atfiles/manuals/at7/AccuT... which documents the ESC STX < command CR escape sequence which tells the AccuTerm terminal emulator to run an arbitrary command

                [2] Also the APC C1 control can make Kermit run arbitrary commands, if you've executed "SET APC UNCHECKED" (which is not the default setting) – see http://www.kermitproject.org/onlinebooks/usingckermit3e.pdf page numbered 300 (page 310 of PDF)

                • nerdponx 6 years ago

                  I agree. Personally I think the Linux kernel should have a compilation-time option to disallow various special characters in filenames such as ASCII control characters and colons, so users who want to maintain a sane system can give themselves the option to do so.

                  • AnIdiotOnTheNet 6 years ago

                    So um, how would you work with files that came from a foreign file system? Would the kernel just crash when it sees them? Would they be effectively untouchable, like filenames with a '/' encoded in them?

                    • skissane 6 years ago

                      Previously, I proposed addressing this with a mount option and a filesystem volume flag [1]. The mount option would prevent creating new files/directories with "bad names" but would still allow accessing ones which already exist. The filesystem volume flag would make bad names on that volume be considered fsck errors. (If the kernel found such a bad name on a volume with the flag enabled, it should log that as a filesystem error, and either return an IO error, or just pretend the file doesn't exist – whatever Linux does now when it finds a file on disk whose name contains a null byte or forward slash.)

                      Using a mount option and/or filesystem volume flag means it works fine with legacy volumes which might contain those bad names. On such legacy volumes, either that volume flag is not supported by the filesystem, or else it is but is disabled for that particular volume. If you mount it with the mount option, you can still access bad names on that volume, you just can't create new ones; if you need to create new bad names, you just (re)mount it with the mount option disabled.

                      [1] https://news.ycombinator.com/item?id=22873153

                  • enriquto 6 years ago

                    and spaces! ;)

          • nerdponx 6 years ago

            I disagree with this immensely.

            Also, some languages do allow whitespace in variable names like R and SQL, so long as the variable names are quoted or escaped properly.

          • tomaskafka 6 years ago

            Are your kids named NameSurname? Did you meet EuropeanBrownBear on your last trip to the mountains? :)

        • coliveira 6 years ago

          The shell works with spaces; you just need to be careful to quote every file name. The advantage of using file names without spaces is that you can avoid the quotes.
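
          For example (bash assumed; the backup/ directory is only an illustration), double quotes keep a name with spaces as a single argument all the way through:

              for f in *.txt; do
                  mv -- "$f" "backup/$f"   # "$f" stays one argument, spaces and all
              done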

          • viraptor 6 years ago

            Please don't ever assume you can avoid spaces and quotes. It's just a time bomb.

            It's like people saying you don't need to escape SQL values because they come from constants. Yes, they do... today.

            It's not just quoting either. It's setting the separator value and reverting it correctly. It's making sure you're still correct when you're in a function in an undefined state. It's a lot of overhead in larger projects.

            • CountSessine 6 years ago

              Sure, but in a larger project you won't be coding in shell script, right?

              • viraptor 6 years ago

                Larger projects will often include small shell scripts for deployment / maintenance / various tasks.

            • coliveira 6 years ago

              What I am saying is nothing new. This has always been the policy of UNIX developers. If you use only file names without spaces, then you can make it easier to use tools that depend on this. Of course, if you're writing applications in shell then you cannot rely just on convention.

          • tsimionescu 6 years ago

            The shell (bash, anyway) has a weird mix of support and lack of support for spaces/lists/etc.

            For example, there is no way to store `a b "c d"` in a regular variable in a way where you can then call something similar to `ls $VAR` and get the equivalent of `ls a b "c d"`. You can either get the behavior of `ls a b c d` or `ls "a b c d"`, but if you need `ls a b "c d"` you must go for an array variable with new syntax. This isn't necessarily a big hurdle, but it indicates that the concepts are hard to grasp and possibly inconsistent.

            • account42 6 years ago

              Bash has arrays:

                  var=( a b "c d" )
                  ls "${var[@]}"
              
              POSIX shell can get you what you want with eval:

                  var='a b "c d"'
                  eval "ls $var"
              
              There is also one array variable in POSIX shells, the argument list:

                  set -- a b "c d"
                  ls "$@"
            • teddyh 6 years ago

              That has nothing to do with spaces, but more to do with the rather lackluster support for arrays in most Bourne-style shells.

          • gorgoiler 6 years ago

            Indeed, and the shell lives and breathes spaces. Arrays of arguments are created every time one types a command. Without spaces-as-magic we’d be typing things like this all the time:

              $ exec(‘ls’, ‘-l’, ‘A B C’)
            
            Maybe that’s unrealistic? I mean, if the shell was like that, it probably wouldn’t have exec semantics and would be more like this with direct function calls:

              $ ls(ls::LONG, ‘A B C’)
            
            Maybe we would drop the parentheses though — they can be reasonably implied given the first token is an unspaced identifier:

              $ ls ls::LONG, ‘A B C’
            
            And really, given that unquoted identifiers don’t have spaces, we don’t really need the commas either. Could also use ‘-‘ instead of ‘ls::’ to indicate that an identifier is to be interpreted locally in the specific context of the function we are calling, rather than as a generic argument.

              $ ls -LONG ‘A B C’
            
            If arguments didn’t have spaces, you could make the quotes optional too.

            QED

            • tsimionescu 6 years ago

              I don't think this is the problem people usually complain about.

              The much bigger problem is that spaces make text-only commands compose badly.

                  $ ls -l `find ./ -name *abc*`
              
              Is going to work very nicely if file names have certain properties (no spaces, no special chars), and it's going to break badly if they don't. Quoting is also very simple in simple cases, but explodes in complexity when you add variables and other commands into the mix.
              • hnlmorg 6 years ago

                That's still just a shell problem since the shell is expanding `find` and placing the unquoted result into the command line. Some non-POSIX shells don't have this issue. For example with murex you could use either of these two syntaxes:

                    ls -l ${find ./ -name *abc*}
                    # return the result as a single string

                    ls -l @{find ./ -name *abc*}
                    # return the result as an array (like Bourne shell)
                
                So in the first example, the command run would look something like:

                    ls -l "foo abc bar"
                
                whereas in the second it would behave more like Bourne shell:

                    ls -l foo abc bar
                
                As an aside, the example you provided also wouldn't work in Bourne shell because the asterisks would be expanded by the shell rather than `find`. So you'd need to quote that string:

                    ls -l `find ./ -name "*abc*"`
                
                This also isn't a problem in murex because you can specify whether to auto-expand globbing or not.

                With POSIX shells you end up with quotation mark soup:

                    ls -l "`find ./ -name '*abc*'`"
                
                (and we're lucky in this example that we don't need to nest any of the same quotation marks. It often gets uglier than this!)

                Murex also solves this problem by supporting open and close quote marks via S-Expression-style parentheses:

                    ls -l (${find ./ -name (*abc*)})
                
                Which is massively more convenient when you'd normally end up having to escape double quotes in POSIX shells.

                ---

                Now I'm not trying to advocate murex as a Bash-killer, my point is that it's specifically the design of POSIX shells that cause these problems and not the way Unix pipelines work.

              • bregma 6 years ago

                Ah, more problems with shell expansion and substitution.

                Pipelines to the rescue. If you used a pipeline instead of shell expansion, it can deal with spaces just peachy.

                    $ find . -name "*abc*" -print0 | xargs -0 ls -l
                
                There's no argument that bash (and other shells) make dealing with spaces and quotes troublesome. That has nothing to do with pipelines.
                • tsimionescu 6 years ago

                  Does every unix utility support -print0 or similar?

                  My point wasn't specifically about find, but about the problems of text and other unstructured input/output in general.

                  Update: reading a bit around the internet, it looks like it's not possible to do something like this :

                      find / -print0 | \
                          grep something | \
                          xargs -0 cat
                  
                  As grep will mangle the output from find -print0.
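
                  (For what it's worth, GNU grep does have a spelling for exactly this case: -z, a.k.a. --null-data, keeps the records NUL-terminated end to end.)

                      find / -print0 | grep -z something | xargs -0 cat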
                • jcranmer 6 years ago

                  And note that, all the way in my original argument, I complained that every command used a different way of spelling that changed delimiter character.

          • thekaleb 6 years ago

            Some shells handle this quoting better than others.

        • kbenson 6 years ago

          That's when it's useful to use Perl as a substitute for a portion of or the entire pipeline.

        • de_watcher 6 years ago

          Choosing space as a separator is a useful convention that saves you typing in additional code afterwards if you want to run a short command.

          It's like there is a shortcut that some people use that you want to wall in because it doesn't look pretty to you.

        • gorgoiler 6 years ago

          The biggest pain comes from untarring some source code and trying to build it while in ~/My Tools Directory. Spaces in directories that scupper build code that isn’t expecting them is the fatal mixing of worlds.

          In most other cases I’ve never really had a problem with “this is a place where spaces are ok” (e.g. notes, documents, photos) and “this is a place where they are not ok” — usually in parts of my filesystem where I’m developing code.

          It’s fine to make simplifying assumptions if it’s your own code. Command history aside, most one liners we type at the shell are literally throwaways.

          I think I was clear that they aren’t the only category of program one writes and that, traditionally on Unix systems, the counterpart to sh was C.

      • tsimionescu 6 years ago

        The problem is that the pipeline model is extremely fragile and breaks in unexpected ways in unexpected places when hit with the real world.

        The need to handle spaces and quotes can take you from a 20 character pipeline to a 10 line script, or a C program. That is not a good model whichever way you look at it.

        • mbreese 6 years ago

          Pipelines are mainly for short-lived, one-off quick scripts. But they are also really useful for when you control the input data. For example, if you need a summary report based on the results of a sql query.

          If you control the inputs and you need to support quotes, spaces, non-white space delimiters, etc... in shell script, then that’s on you.

          If you don't control the inputs, then shell scripts are generally a poor match. For example, if you need summary reports from a client, but they sometimes provide the table in xlsx or csv format — shell might not be a good idea.

          Might be controversial, but I think you can tell who works with shell pipes the most by looking at who uses CSV vs tab-delimited text files. Tabs can still be a pain if you have spaces in data. But if you mix shell scripts with CSV, you’re just asking for trouble.

        • diegocg 6 years ago

          That's not the fault of pipes, that's the fault of shells like bash. There are shells that deal with spaces and quotes perfectly fine.

        • necrotic_comp 6 years ago

          I've been scripting stuff on the pipeline for over a decade and haven't really run into this much.

          You can define a field separator in the command line with the environment variable IFS - i.e. 'IFS=$(echo -en "\n\b");' for newlines - which takes care of the basic cases like spaces in filenames/directory names when doing a for loop, and if I have other highly structured data that is heavily quoted or has some other sort of structure to it, then I either normalize it in some fashion or, as you suggest, write a perl script.

          I haven't found it too much of a burden, even when dealing with exceptionally large files.
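
          A typical shape of that (bash assumed; the find pattern is just an example):

              IFS=$'\n'                            # split unquoted expansions on newlines only
              for f in $(find . -name '*.log'); do
                  printf 'processing: %s\n' "$f"
              done
              unset IFS                            # restore the default splitting afterwards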

        • xkucf03 6 years ago

          I agree that spaces, quotes, encodings, line-ends etc. are a problem, but this is not a problem of „the pipeline model“ – this is a problem of ambiguously-structured data. See https://relational-pipes.globalcode.info/v_0/classic-example...

        • nerdponx 6 years ago

          Kind of / not really.

          Also Zsh solves 99% of my "pipelines are annoying" and "shell scripts are annoying" problems.

          Even on systems where I can't set my default shell to Zsh, I either use it anyway inside Tmux, or I just use it for scripting. I suppose I use Zsh the way most people use Perl.

      • nerdponx 6 years ago

        > C was always there when you needed something serious

        There is a world of stuff in between "I need relatively low-level memory management" and "I need a script to just glue some shit together".

        For that we have Python and Perl and Ruby and Go, or even Rust.

        • gorgoiler 6 years ago

          Yes! I love 2020!

          (My point about C was in the historical context of Unix, which is relevant when talking about its design principles.)

        • 1MachineElf 6 years ago

          Going with the same theme, C itself was an innovation meant to fill the space between Assembler (in this context, "A") and B[0].

          [0] https://en.wikipedia.org/wiki/B_(programming_language)

          • klyrs 6 years ago

            That isn't really supported by the article you linked -- it describes C as an extension of B multiple times.

            • 1MachineElf 6 years ago

              I only linked that because most people don't know what B is, and I suppose the difference between "innovation meant to fill the space" and "extension" wasn't so important to me.

              My source for the claim itself is from 14m30s of Bryan Cantrill's talk "Is It Time to Rewrite the Operating System in Rust" https://youtu.be/HgtRAbE1nBM

            • jacquesm 6 years ago

              To be more precise: A->BCPL->B->C

      • viraptor 6 years ago

        This doesn't always hold in Linux though. Some proc entries are available only as text that you have to parse, whether that's in sh or C. There's simply no public structured interface.

        • JdeBP 6 years ago

          This is one of the superior aspects of FreeBSD: no parsing of human-readable strings to get back machine-readable information about the process table. It's available directly in machine-readable form via sysctl().

      • pwdisswordfish2 6 years ago

        [C] Long lived tools one throws together oneself

        Stable tools designed for oneself

    • hnlmorg 6 years ago

      That's a POSIX shell thing rather than a Unix pipeline thing. Some non-POSIX shells don't have this problem while still passing data along Unix pipes.

      Source: I wrote a shell and it solves a great many of the space / quoting problems with POSIX shells.

      • buzzkillington 6 years ago

        Same, I've built toy shells [0] that used xml, s-expressions and ascii delimited text [1]. The last was closest to what unix pipes should be like. Of course it breaks ALL posix tools, but it felt like a shell that finally works as you'd expect it to work.

        [0] Really just input + exec + pipes.

        [1] https://en.wikipedia.org/wiki/Delimiter#ASCII_delimited_text

        • hnlmorg 6 years ago

          As long as you re-serialise that data when piping to coreutils (etc) you shouldn't have an issue.

          This is what my shell (https://github.com/lmorg/murex) does. It defaults to using JSON as a serialisation format (that's how arrays, maps, etc are stored, how spaces are escaped, etc) but it can re-serialise that data when piping into executables which aren't JSON aware.

          Other serialisation formats are supported too, such as YAML, CSV and S-Expressions. I had also considered adding support for ASCII records but it's not a use case I personally run into (unlike JSON, YAML, CSV, etc) so haven't written a marshaller to support it yet.

          • buzzkillington 6 years ago

            Ascii records have worked better than json, xml, s-expressions (better as in provide the same bang for a lot less buck) in every case I've used them for.

            I have no idea why I am the only person I know who uses them regularly.

            • hnlmorg 6 years ago

              ASCII records have their own limitations though:

              - They can't be edited by hand as easily as other serialisation formats, which use printable characters as their delimiters

              - It's not clearly defined how you'd use them for non-tabulated data (the kind of data JSON, XML and S-Expressions handle naturally)

              - There isn't any standard for escaping control characters

              - They're harder to differentiate from unserialised binary formats

              - They can't be used as a primitive like JSON is to Javascript and S-Expressions are to Lisp.

              And if we're both honest, reducing the serialisation overhead doesn't gain you anything when working in the command line. It's a hard enough sell getting websites to support BSON and at least there, there is a tangible benefit of scale.

              Not that I'm dismissing ASCII records, they would have been better than the whitespace mess we currently have in POSIX shells. However I don't agree ASCII records are better than JSON or S-Expressions.

        • JdeBP 6 years ago

          Here's one tool that it does not break. (-:

          * http://jdebp.uk./Softwares/nosh/guide/commands/console-flat-...

    • gricardo99 6 years ago

      >good luck figuring out ...

      luck, or simply execute the steps you want to check by typing them on the command line. I find the pipes approach incredibly powerful and simple because you can compose and check each step, and assemble. That's really the point, and the power, of a simple and extensible approach.

    • 6gvONxR4sf7o 6 years ago

      God help you if your paths have spaces in them.

      • mfontani 6 years ago

        No need to invoke a deity to do that...

          echo 'Hi!' | tee "$(printf "foo\nbar\tqu ux.txt")"
          cat foo$'\n'bar$'\t'qu' 'ux.txt
        
        IIRC the latter only works in ksh93+, but the former should be fine pretty much in any shell.
      • gorgoiler 6 years ago

        By “God” you may mean “libc”...

        ...if it was the 1970s. Nowadays we use Python etc for tools where it’s less acceptable for them to fall apart, though everything falls apart at some point, spaces or otherwise.

      • bityard 6 years ago

        Just quote paths? shrug

        • dataflow 6 years ago

          Try doing that in e.g. Make

          • blacksqr 6 years ago

            In the early 2k's I discovered that it was impossible to quote or escape spaces in filepaths in a makefile. I naively reported it to the Make maintainer as a bug.

            The maintainer responded by saying, basically, that the behavior was deeply baked in and was not going to change.

            • enriquto 6 years ago

              This is a nice feature of make. People who put spaces on filenames are justly punished by not being given the goodness of make.

              • mehrdadn 6 years ago

                I can't tell if you're being sarcastic, but this can result in innocent users who don't even know this fact suddenly overwriting or wiping their files when trying to build someone's project.

                • enriquto 6 years ago

                  Nobody is innocent if they have filenames with spaces in their systems! /s

          • fao_ 6 years ago

            Try finding out which registry key you need to edit to get something to happen.

            Some things across different systems have pitfalls as a result of their implementation, or some features have pitfalls. There are half a dozen things I can point to on Windows that are worse than "spaces in filenames". Hell, Windows doesn't even let you put ISO 8601 dates or times properly in filenames; it restricts symbols much more than Linux / Unix does.

            • yzmtf2008 6 years ago

              who said anything about windows?

              • fao_ 6 years ago

                The parent thread referenced windows as something that handles spaces in filenames. I'm not sure why I'm getting downvoted when my comment was relevant to the debate occurring, and my point (that all features have downfalls caused by the necessity of implementation within the surrounding system, and in the case of command line arguments, linguistics and common keyboards themselves) is something that according to the guidelines should be debated against, not downvoted.

                • yzmtf2008 6 years ago

                  None of the parents of your comment mentioned Windows. Perhaps you have the wrong thread?

    • jlg23 6 years ago

      Or you just pipe through the program of your choice for which you know the syntax:

        echo "foo bar fnord" | cut --delimiter " " --output-delimiter ":" -f1-
      
        # foo:bar:fnord
      • bloopernova 6 years ago

        I love learning new things about commands I've used for years.

            --output-delimiter ":"
        
        Very cool!

        However it does require the GNU version of cut, not the Mac OS X supplied BSD version. In zsh at least, you can do:

            alias cut='/usr/local/bin/gcut'
    • fomine3 6 years ago

      The fact that a filename can contain anything except NUL/slash is a real pain. I often write shell scripts that treat LF as the separator, but I know it's not good.

      • brokenmachine 6 years ago

        Why not use null as a separator?

        I agree it's messy but not that hard. Just set IFS to null.
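
        Strictly speaking, a bash variable (IFS included) can't hold a NUL byte, so the usual shape in bash is to read NUL-delimited records directly:

            find . -type f -print0 |
            while IFS= read -r -d '' f; do
                printf '%s\n' "$f"    # $f arrives intact, whatever the name contains
            done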

      • enriquto 6 years ago

        filenames are variable names. Choosing good filenames is the first step in writing a nice shell script in a situation that you control.

        • fomine3 6 years ago

          Sometimes it's in your control and sometimes it's not. Writing the script defensively is better for security when the filename is out of your control.

        • Quekid5 6 years ago

          No, they aren't -- they are data.

    • ComputerGuru 6 years ago

      I think this was a failure on the part of early terminal emulator developers. End-of-field and end-of-record should have been supported by the tty (either as visual symbols or via some sort of physical spacing, even if portrayed just as a space and a newline respectively) so that the semantic distinction could have been preserved.

      • junon 6 years ago

      TTYs in general are some of the most outdated and archaic pieces of technology still used today. They need a serious overhaul, but the problem is, any major change will break pretty much all programs.

    • chooseaname 6 years ago

      All of which are so much easier to deal with than some binary format you have no spec on.

      XML? SMH.

    • michaelcampbell 6 years ago

      Ah yes, perfect is the enemy of the good.

    • pwdisswordfish2 6 years ago

      Works for me.

    • derefr 6 years ago

      People struggle so much with this, but I don't see what the point is at all. The fundamental problem that playing with delimiters solves, is passing arbitrary strings through as single tokens.

      Well, that's easy. Don't escape or quote the strings; encode them. Turn them into opaque tokens, and then do Unix things to the opaque tokens, before finally decoding them back to being strings.

      There's a reason od(1) is in coreutils. It's a Unix fundamental when working with arbitrary data. Hex and base64 are your friends (and Unix tools are happy to deal with both.)
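
      A sketch of that approach (bash and GNU coreutils assumed, -w0 keeping each encoded token on one line):

          find . -type f -print0 |
              while IFS= read -r -d '' f; do printf '%s' "$f" | base64 -w0; echo; done |
              sort -u |                                        # safe: the tokens are opaque
              while IFS= read -r tok; do printf '%s' "$tok" | base64 -d; echo; done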

  • laumars 6 years ago

    > everything is text

    Everything is a byte stream. Usually that means text but sometimes it doesn't. Which means you can do fun stuff like:

    - copy file systems over a network: https://docs.oracle.com/cd/E18752_01/html/819-5461/gbchx.htm...

    - stream a file into gzip

    - backup or restore an SD card using `cat`
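
    For instance (the device and file names here are just placeholders):

        gzip < big.log > big.log.gz                   # stream a file into gzip
        cat /dev/mmcblk0 | gzip > sdcard-backup.gz    # back up an SD card
        gunzip -c sdcard-backup.gz > /dev/mmcblk0     # ...and restore it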

  • dvirsky 6 years ago

    > Unix is seriously uncool with young people at the moment.

    Those damn kids with their loud rock'n'roll music and their Windows machines. Back in my day we had the vocal stylings of Dean Martin and the verbal stylings of Linus Torvalds, let me tell ya.

    Seriously though, I'm actually seeing younger engineers really taking the time to learn how to do shell magic, using vim, etc. The generation of programmers who started up until the late 90s used those by default; people who, like me, started 15-20 years ago grew up on IDEs and GUIs; but I've seen a lot of 20-30-something devs who are really into the Unix way of doing things. The really cool kids are in fact doing it.

    • abnry 6 years ago

      I guess I am one of the young kids who think the unix command line is wicked cool. It makes the user experience on my laptop feel so much more powerful.

      • logicprog 6 years ago

        Me too. Especially since I can regularly hit 100 WPM when typing, but I'm a terrible shot with the mouse - not to mention the fact that you can get really nice and satisfying keyboards (I use an IBM Model M at home and a CODE Cherry MX Clear at college) but mice are all kind of the same, and you have to move your hand a lot to get there. On that last point, my mouse hand gets wrist pain consistently way more than my other hand (which I only use for the keyboard) does.

        Add to all that the fact that the command line allows for a lot of really easy composition and automation, and it's significantly better for me. I can hardly function on a Windows computer!

        • divbzero 6 years ago

          Ditto.

          For projects where I must work in Windows my secret weapon is WSL + tmux + fish + vim for everything without a strict Visual Studio dependency.

          • dvirsky 6 years ago

            "Millennials Have Ruined GUIs"

          • mokanfar 6 years ago

            hard to avoid the mouse eventually, though, when looking up docs or copy-pasting from the browser

            • divbzero 6 years ago

              Yes, that’s a pain point I haven’t quite cracked.

              Copy-paste within tmux works just fine with (a) set -g mouse on in ~/.tmux.conf (b) mouse select to copy and (c) Ctrl + B ] to paste.

              Copy-paste from tmux to Windows or vice versa is a pain.

              I haven’t tried all the terminals yet but what I need is an iTerm2 [1] for Windows.

              [1]: https://iterm2.com

            • logicprog 6 years ago

              Unfortunately. But that's what VI keybindings for the browser are for!

    • xnyan 6 years ago

      I grew up an all-GUI Windows kid. I actually had a revulsion to the shell, mostly because it seemed unfriendly and scary. In my early 20s I tried to run Plex on a Windows 7 box and it was miserable. I forced myself to learn by switching to a headless arch linux box.

      Giving up the idea that CLI = pain (i.e. figuring out how to navigate a file system, ssh keys, etc) for sure was a learning curve, but now I can't imagine using computers without it.

      • dvirsky 6 years ago

        I guess that's the issue with terminal - the learning curve. I did start with TUIs as a kid before Windows came along, and I remember that feeling when I started using GUIs - "oh, everything has a menu, no need to remember anything, no need to read any docs, it's all just there". That was a real revolution.

  • bityard 6 years ago

    > (1) everything is text

    Not at all, you can pipe around all the binary you want. Until GNU tar added the 'z' option, the way to extract all files in a tarball was:

    `gunzip -c < foo.tar.gz | tar x`

    However, "text files" in Unix do have a very specific definition, if you want them to work with standard Unix text manipulation utilities like awk, sed, and diff:

    All lines of text are terminated by a line feed, even the last one.

    I can't tell you how many "text files" I run across these days that don't have a trailing newline. It's as bad as mixing tabs and spaces, maybe worse.
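
    The difference is easy to demonstrate (POSIX shell assumed; the file names are just examples):

        printf 'one\ntwo'   > bad.txt     # last line has no terminating line feed
        printf 'one\ntwo\n' > good.txt
        wc -l bad.txt good.txt            # 1 vs 2: the unterminated final line isn't counted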

  • ssivark 6 years ago

    Serialized bytestreams do compose better than graphical applications. But that is setting a very low bar. For example, allowing passing around dicts/maps/json (and possibly other data structures) would already be a massive improvement. You know what might be even better — passing around objects you could interact with by passing messages (gasp! cue, Alan Kay).

    While posix and streams are nice (if you squint, files look like a poor man’s version of objects imposed on top of bytestreams), it’s about time we moved on to better things, and not lose sight of the bigger picture.

    • fao_ 6 years ago

      Yeah, but that makes every single file an object. Then you need plugins to deal with plain text objects, plugins to deal with every single type of object and their versions.

      Objects aren't human-readable, and if they get corrupted the object's format is very difficult to recover; you need intimate knowledge of the specific format used (of which there might be thousands of variations). Plaintext, however, remains human-readable even if it gets corrupted. The data might not be more compact (although TSV files tend to come out equal), but it's more robust against those kinds of changes.

      Have you ever examined a protobuf file without knowing what the protobuf structure is? I have, as part of various reverse-engineering I did. It's a nightmare, and without documentation (That's frequently out of date or nonexistent) it's almost impossible to figure out what the data actually is, and you never know if you've got it right. Even having part of the recovered data, I can't figure out what the rest of it is.

      • ssivark 6 years ago

        But that seems like just ignoring the problem, like an ostrich shoving one’s head into the sand!

        > Have you ever examined a protobuf file without knowing what the protobuf structure is?

        Are you referring to an ASCII serialization or a binary serialization?

        In a typical scenario, your application needed structured data (hence protobuf), and then you serialized your protobuf into an ASCII bytestream... so how are you ever worse off with a protobuf compared to a text file, provided you make a good choice of serialization scheme? Clearly text files cannot carry enough structure, so we have to build more layers of scaffolding on top?

        Ultimately files are serialized on to disk as bits, and “ASCII” and “posix” are just “plugins” to be used when reading bitstreams (serialized file objects). Plaintext is not readable if the bits get corrupted or byte boundaries/endianness gets shifted!

        To have the same amount of robustness with higher-order structures I imagine all one needs is a well-defined de/serialization protocol which would allow you to load new objects into memory? Am I missing something?

      • Firadeoclus 6 years ago

        You could solve that problem with a system which mandated that structured data always come with a reference to a schema. (And where files as blobs disconnected from everything else only exist as an edge case for compatibility with other systems).

    • flukus 6 years ago

      > For example, allowing passing around dicts/maps/json (and possibly other data structures) would already be a massive improvement.

      It is allowed, you can pass around data in whatever encoding you desire. Not many do though because text is so useful for humans.

      > You know what might be even better — passing around objects you could interact with by passing messages (gasp! cue, Alan Kay)

      That's a daemon/service, and basic unix utils make it easy to create one: a shell script reading from a FIFO file fits this definition of object, the message passing is writing to the file, and the object is passed around with the filename. Unix is basically a fulfillment of Alan Kay's idea of OO.

    • boomlinde 6 years ago

      > For example, allowing passing around dicts/maps/json (and possibly other data structures) would already be a massive improvement.

      Well, it is allowed. You can of course serialize some data into a structured format and pass it over a byte stream in POSIX to this end, and I think this is appropriate in some cases in terms of user convenience. Then you can use tools like xpath or jq to select subfields or mangle.
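
      For example (jq assumed to be installed):

          printf '{"name":"qu ux.txt","size":42}\n' | jq -r .name    # prints: qu ux.txt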

      > (if you squint, files look like a poor man’s version of objects imposed on top of bytestreams)

      If all you have is a hammer...

    • cobalt 6 years ago

      try powershell, it's on linux too

      • Koshkin 6 years ago

        Indeed, if it wasn’t for its reliance on .NET, it looks like PowerShell could be a massive improvement over Unix shell.

  • jiggawatts 6 years ago

    > Pipes are wonderful!

    It's wonderful only if compared to worse things, pretending that PowerShell is not a thing, and that Python doesn't exist.

    UNIX pipes are a stringly-typed legacy that we've inherited from the 1970s. The technical constraints of its past have been internalised by its proponents, and lauded as benefits.

    To put it most succinctly, the "byte stream" nature of UNIX pipes means that any command that does anything high-level such as processing "structured" data must have a parsing step on its input, and then a serialization step after its internal processing.

    The myth of UNIX pipes is that it works like this:

        process1 | process2 | process3 | ...
    
    The reality is that physically, it actually works like this:

       (process1,serialize) | (parse,process2,serialize) | ...
    
    Where each one of those "parse" and "serialise" steps is unique and special, inflexible, and poorly documented. This is forced by the use of byte streams to connect each step. It cannot be circumvented! It's an inherent limitation of UNIX style pipes.
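
    A small concrete instance of those hidden steps (the column layout here is an assumption about ps output):

        ps -eo pid,comm |              # the process table, serialised into whitespace columns
            awk 'NR > 1 { print $1 }'  # the next stage re-parses it by guessing at the whitespace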

    This is not a limitation of PowerShell, which is object oriented, and passes strongly typed, structured objects between processing steps. It makes it much more elegant, flexible, and in effect "more UNIX than UNIX".

    If you want to see this in action, check out my little "challenge" that I posed in a similar thread on YC recently:

    https://news.ycombinator.com/item?id=23257776

    The solution provided by "JoshuaDavid" really impressed me, because I was under the impression that that simple task is actually borderline impossible with UNIX pipes and GNU tools:

    https://news.ycombinator.com/item?id=23267901

    Compare that to the much simpler equivalent in PowerShell:

    https://news.ycombinator.com/item?id=23270291

    Especially take note half of that script is sample data!

    > Unix is seriously uncool with young people at the moment. I intend to turn that around and articles like this offer good material.

    UNIX is entirely too cool with young people. They have conflated the legacy UNIX design of the 1970s with their preferred open source software, languages, and platforms.

    There are better things out there, and UNIX isn't the be all and end all of system management.

    • matheusmoreira 6 years ago

      > Where each one of those "parse" and "serialise" steps is unique and special, inflexible, and poorly documented.

      This can also be a security concern. According to research, a great number of defects with security implications occur at the input handling layers:

      http://langsec.org

    • scbrg 6 years ago

         The reality is that physically, it actually works like this:
      
            (process1,serialize) | (parse,process2,serialize) | ...
      
      But as soon as you involve the network or persistent storage, you need to do all that anyway. And one of the beauties is that the tools are agnostic to where the data comes from, or goes.
      • jiggawatts 6 years ago

        So you're saying optional things should be forced on everything, because the rare optional case needs it anyway?

        Look up what the Export-Csv, ConvertTo-Json, and Export-CliXml, Invoke-Command -ComputerName 'someserver', or Import-Sql commands do in PowerShell.

        Actually, never mind, the clear and consistent naming convention already told you exactly what they do: they serialise the structured pipeline objects into a variety of formats of your choosing. They can do this with files, in-memory, to and from the network, or even involving databases.

        This is the right way to do things. This is the UNIX philosophy. It's just that traditional shells typically used on UNIX do a bad job at implementing the philosophy.

      • rcxdude 6 years ago

        If you have network or persistent storage, those parse and serialise steps are better served by a consistent, typed, and common serialize and parse step instead of a different ad-hoc step for each part of the chain (especially if there are then parts of the chain which are just munging the output of one serialisation step so it can work with the parse stage of another).

    • gorgoiler 6 years ago

      Typed streams sound great.

      Most data being line oriented (with white space field separators) has historically worked well enough, but I get your point that the wheels will eventually fall off.

      It’s important to remember that the shell hasn’t always been about high concept programming tasks.

        $ grep TODO xmas-gifts
        grandma TODO
        auntie Sian TODO
        jeanie MASTODON T shirt
      
      The bug (bugs!) in the above example doesn’t really matter in the context of the task at hand.
  • imglorp 6 years ago

    More Lego rules:

    (7) a common signalling facility

    (8) also nice if some files have magic properties like /dev/random or /proc or /dev/null

    (9) every program starts with 3 streams, stdin/stdout for work and stderr for out of band errors

    • enedil 6 years ago

      > (9) every program starts with 3 streams, stdin/stdout for work and stderr for out of band errors

      This is false - if a process closes stdin/out/err, its children won't have these files open.

      • imglorp 6 years ago

        That's correct, which is why I said _starts_ with the streams. The child can do what it wants with them after that.

        • DSMan195276 6 years ago

          His point was that if you close stdin/stdout/stderr and then fork a child, the new child will not have all of those streams when it starts - the three streams are a convention, not something enforced by the kernel when creating a new process.

          That said, it's still a bit of a pedantic point, you can expect to always have those three fds in the same way you can expect `argv[0]` to represent the name of your executable (Which also isn't enforced by the kernel).

          • JdeBP 6 years ago

            Actually, it's not a pedantic point. It's a security flaw waiting to happen (as it actually has) if one assumes that one always has three open file descriptors.

            The privileged program opens a sensitive data file for writing, it gets assigned (say) file descriptor 2, because someone in a parent process took some advice to "close your standard I/O because you are a dæmon" to heart, other completely unrelated library code somewhere else blithely logs network-supplied input to standard error without sanitizing it, and (lo!) there's a remotely accessible file overwrite exploit of the sensitive data file.

            Ironically, the correct thing to do as a dæmon is to not close standard I/O, but to leave setting it up to the service management subsystem, and use it as-is. That way, when the service management subsystem connects standard output and error to a pipe and arranges to log everything coming out the other end, daemontools-style, the idea actually works. (-:

    • hhas01 6 years ago

      I defy anyone to build their error-handling logic off the back of stderr…

      • imglorp 6 years ago

        Logic, not in this case, just messages.

        The point of separate error streams echoes the main Unix philosophy of doing small things, on the happy path, that work quietly or break early. Your happy results can be chained on to the next thing and your errors go somewhere for autopsy if necessary. Eg,

            p1 | p2 | p3 > result.txt 2> errors.txt
        
        will not contaminate your result file with any stderr gripes. You may need to arrange error streams from p1 and p2 using () or their own redirection. If it dies, it will die early.
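
        For instance, grouping the whole pipeline sends every stage's stderr to the same file (p1/p2/p3 being the same placeholder commands as above):

            { p1 | p2 | p3; } > result.txt 2> errors.txt   # stderr of all three stages lands in errors.txt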

        BUT speaking of logic, the concept is close to Railway Oriented Programming, where you can avoid all the failure conditionals and just pass a monad out of a function, which lets you chain happy path neatly. Eg, nice talk here https://fsharpforfunandprofit.com/posts/recipe-part2/

  • matheusmoreira 6 years ago

    1. All data is bytes

    2. Everything is a file descriptor

    3. File descriptors are things that support standard I/O system calls such as read and write

  • ryandrake 6 years ago

    > (2) everything (ish) is a file

    I'm not all that knowledgeable about Unix history, but one thing that has always puzzled me is that, for whatever reason, network connections (generally) aren't files. While I can do:

      cat < /dev/ttyS0
    
    to read from a serial device, I've always wondered why I can't do something like:

      bind /tmp/mysocket 12.34.56.78 80
      cat < /tmp/mysocket
    
    It is weird that so many things in Unix are somehow twisted into files (/dev/fb0??), but network connections, to my knowledge, never managed to go that way. I know we have netcat but it's not the same.
    • spijdar 6 years ago

      The original authors of "UNIX" considered their creation to be fatally flawed, and created a successor OS called "Plan 9", partially to address this specific deficit, among others like the perceived tacked-on nature of GUIs on *nix.

      That's a whole rabbit hole to go down, but Plan 9 networking is much more like your hypothetical example [0]. Additionally, things like the framebuffer and devices are also exposed as pure files, whereas on Linux and most unix-like systems such devices are just endpoints for making ioctl() calls.

      [0] http://doc.cat-v.org/plan_9/4th_edition/papers/net/

    • DSMan195276 6 years ago

      I think you're just seeing the abstraction fall apart a bit - for weirder devices, I would say they are "accessible" via a file, but you likely can't interact with it without special tools/syscalls/ioctls. For example, cat'ing your serial connection won't always work, occasionally you'll need to configure the serial device or tty settings first and things can get pretty messy. That's why programs like `minicom` exist.

      For networking, I don't think there's a particular reason it doesn't exist, but it's worth noting sockets are a little special and require a bit extra care (Which is why they have special syscalls like `shutdown()` and `send()` and `recv()`). If you did pass a TCP socket to `cat` or other programs (Which you can do! just a bit of fork-exec magic), you'd discover it doesn't always work quite right. And while I don't know the history on why such a feature doesn't exist, with the fact that tools like socat or curl do the job pretty well I don't think it's seen as that necessary.

      • XorNot 6 years ago

        Theoretically it would work if the devices were exposed as directories, not files (which is basically what /proc actually does for most things).

        /dev/ttyS0 should really be /dev/ttyS0/{data, ctl, baud, parity, stopbits, startbits} (probably a bunch I'm forgetting).

    • jandrese 6 years ago

      Berkeley sockets were kind of bolted on as an afterthought. If the AT&T guys developed them they would probably look a lot more like that.

      • JdeBP 6 years ago

        There's no need for a subjunctive. Go and look at AT&T STREAMS and see what actually did result.

    • philsnow 6 years ago

      elsewhere in this thread somebody mentions socat, but you can do it entirely within bash. from https://www.linuxjournal.com/content/more-using-bashs-built-... :

          exec 3<>/dev/tcp/www.google.com/80
          echo -e "GET / HTTP/1.1\r\nhost: http://www.google.com\r\nConnection: close\r\n\r\n" >&3
          cat <&3
      
      https://news.ycombinator.com/item?id=23422423 has a good point that "everything is a file" is maybe less useful than "everything is a file descriptor". the shell is a tool for setting up pipelines of processes and linking their file descriptors together.
      • JdeBP 6 years ago

        Daniel J. Bernstein's little known http@ tool is just a shell script wrapped around tcpclient, and is fairly similar, except that it does not use the Bashisms of course:

            printf 'GET /%s HTTP/1.0\nHost: %s:%s\n\n' "${2-}" "${1-0}" "${3-80}" |
            /usr/local/bin/tcpclient -RHl0 -- "${1-0}" "${3-80}" sh -c '
            /usr/local/bin/addcr >&7
            exec /usr/local/bin/delcr <&6
            ' |
            awk '/^$/ { body=1; next } { if (body) print }'
    • moonchild 6 years ago

      In the C API, you can read/write to sockets. It wouldn't be difficult to implement something like your example with a named pipe.

      Additionally, there is /dev/tcp in bash.

    • yjftsjthsd-h 6 years ago

      Sounds like you want netcat?

  • devchix 6 years ago

    >(1) everything is text

    >(2) everything (ish) is a file

    I'm having a moment. I have to get logs out of AWS, the thing is frustrating to no end. There's a stream, it goes somewhere, it can be written to S3, topics and queues, but it's not like I can jack it to a huge file or many little files and run text processing filters on it. Am I stupid, behind the times, tied to a dead model, just don't get it? There's no "landing" anywhere that I can examine things directly. I miss the accessibility of basic units which I can examine and do things to with simple tools.

    • macintux 6 years ago

      I worked around that by writing a Python script to let me pick the groups of interest, download all of the data for a certain time period, and collapse them into a CSV file ordered by time stamp.

      Yes, it’s very annoying I have to do all that, but I commend Bezos for his foresight in demanding everything be driven by APIs.

  • quickthrower2 6 years ago

    Can you give me any clue as to what execve does? I looked at the man page but I'm none the wiser. Sounds like magic from what I read there. I'm from a Windows background and not used to pipes.

    • kccqzy 6 years ago

      It replaces the executable in the current process with a different executable. It's kind of like spawning a new process with an executable, except it's the same PID, and file descriptors without CLOEXEC remain open in their original state.
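
      A quick way to see the "same PID" part is the shell's exec builtin, which is a thin wrapper over execve (run this as a script rather than in an interactive shell):

          echo "pid before: $$"
          exec sh -c 'echo "pid after:  $$"'   # replaces the script's shell in place, so the same PID prints
          echo "never reached"                 # execve does not return on success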

      • uryga 6 years ago

        would `become` be a good alternate name for it?

      • asdff 6 years ago

        whats a simple example of using this?

        • gorgoiler 6 years ago

          It’s how all new processes in Unix are created. There is no “spawn new process running this code” operation. Instead:

          (1) you duplicate yourself with fork()

          (2) the parent process continues about its business

          (3) the created process calls execve() to become something else

          This is the mechanism by which all new processes are created, from init (process 1) to anything you run from your shell.

          Real life examples here, in this example of a Unix shell:

          https://github.com/mirror/busybox/blob/master/shell/ash.c
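
          A rough shell rendering of the same three steps (a sketch; assumes bash, whose backgrounded subshell plays the role of fork and whose exec builtin plays the role of execve):

              echo "parent pid: $$"
              ( echo "child pid: $BASHPID"   # (1) the subshell is a forked copy of the parent
                exec sleep 2 ) &             # (3) the copy replaces itself with 'sleep'
              echo "parent carries on"       # (2) the parent continues about its business
              wait                           # ...and eventually collects the child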

        • Icathian 6 years ago

          Bash is probably the best example of this style of working. You have a shell process that forks and spawns child processes from itself that are turned into whatever program you've called from your terminal.

        • zwetan 6 years ago

          see here: https://pubs.opengroup.org/onlinepubs/9699919799/functions/e...

          plenty of basic examples with execv, execve, execvp, etc.

          if the execv() call succeeds, it completely replaces the program running in the process that calls it

          you can do plenty of tricks with that ;)

    • JdeBP 6 years ago

      Well you have a Windows background, so you know BASIC, right? And you know that BASIC has a CHAIN statement, right? (-:

      Yes, I know. You've probably never touched either one. But the point is that this is simply chain loading. It's a concept not limited to Unix. execve() does it at the level of processes, where one program can chain to another one, both running in a single process. But it is most definitely not magic, nor something entirely alien to the world of Windows.

      * https://en.wikibooks.org/wiki/QBasic/Appendix#CHAIN

      If you are not used to pipes on Windows, you haven't pushed Windows nearly hard enough. The complaint about Microsoft DOS was that it didn't have pipes. But Windows NT has had proper pipes all along since the early 1990s, as OS/2 did before it since 1987. Microsoft's command interpreter is capable of using them, and all of the received wisdom that people knew about pipes and Microsoft's command interpreters on DOS rather famously (it being much discussed at the time) went away with OS/2 and Windows NT.

      And there are umpteen ways of improving on that, from JP Software's Take Command to various flavours of Unix-alike tools. And you should see some of the things that people do with FOR /F .

      Unix was the first with the garden hosepipe metaphor, but it has been some 50 years since then. It hasn't been limited to Unix for over 30 of them. Your operating system has them, and it is very much worthwhile investigating them.

      • quickthrower2 6 years ago

        Thanks. I use pipes, occasionally ( I admit I prefer to code in a scripting language ) in Windows. It’s just execve that I was not familiar with.

    • gorgoiler 6 years ago

      It's how you launch (execute) a new program.

      Such programs typically don’t have any interaction so the v and the e parts refer to the two ways you can control what the program does.

      v: a vector (list) of input parameters (aka arguments)

      e: an environment of key-value pairs

      The executed program can interpret these two in any way it wishes to, though there are many conventions that are followed (and bucked by the contrarians!) like using “-“ to prefix flags/options.

      This is in addition to any data sent to the program’s standard input — hence the original discussion about using pipes to send the output of one command to the input of another.
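
      A concrete one-liner showing both channels (just a sketch):

          LC_ALL=C ls -l /tmp    # "-l" and "/tmp" travel in the argument vector (v);
                                 # LC_ALL=C travels in the environment (e) of the new process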

  • hhas01 6 years ago

    “(1) everything is text”

    LOL. What fantasy land are you living in?

    Pipes are great. Untyped, untagged pipes are not. They’re a frigging disaster in every way imaginable.

    “Unix is seriously uncool with young people at the moment.”

    Unix is half a century old. It’s old enough to be your Dad. Hell, it’s old enough to be your Grandad! It’s had 50 years to better itself and it hasn’t done squat.

    Linux? Please. That’s just a crazy cat lady living in Unix’s trash house.

    Come back when you’ve a modern successor to Plan 9 that is easy, secure, and doesn’t blow huge UI/UX chunks everywhere.

  • Shared404 6 years ago

    >Unix is seriously uncool with young people...

    Not all of us. I much prefer the Unix way of doing things; it just makes more sense than trying to become another Windows.

  • pjmlp 6 years ago

    Everything is a file until one needs to do UNIX IPC, networking or high performance graphics rendering.

  • lsofzz 6 years ago

    > Unix is seriously uncool with young people at the moment. I intend to turn that around and articles like this offer good material.

    I offer you my well wishes on that.

  • ianai 6 years ago

    How’s it uncool? Just not “mobile”?

    • gorgoiler 6 years ago

      Exactly.

      “Linux, ew!”

      A big part of teaching computer science to children is breaking this obsession with approaching a computer from the top down — the old ICT ways, and the love of apps — and learning that it is a machine under your own control the understanding of which is entirely tractable from the bottom up.

      Unlike the natural sciences, computer science (like math) is entirely man made, to its advantage. No microscopes, test tubes, rock hammers, or magnets required to investigate its phenomena. Just a keyboard.

      • carlmr 6 years ago

        >No microscopes, test tubes, rock hammers, or magnets required to investigate its phenomena.

        They're full of magnets, and with electron microscope you can analyze what's going on in the hardware.

      • jtbayly 6 years ago

        Until stuff goes wrong with the hardware...

      • shrimp_emoji 6 years ago

        My sense is that people do think Linux is "cool" (it's certainly distinguished in being free, logical, and powerful), but its age definitely shows. My biggest pain points are:

        - Bash is awful (should be replaced with Python, and there are so many Python-based shells now that reify that opinion)

        - C/C++ are awful

        - Various crusty bits of Linux that haven't aged well besides the above items (/usr/bin AND /usr/local/bin? multiple users on one computer?? user groups??? entering sudo password all the time????)

        - The design blunder of assuming that people will read insanely dense man pages (the amount of StackOverflow questions[0][1][2] that exist for anything that can be located in documentation are a testament to this)

        - And of course no bideo gambes and other things (although hopefully webapps will liberate us from OS dependence soon[3][4])

        0: https://stackoverflow.com/questions/3689838/whats-the-differ...

        1: https://stackoverflow.com/questions/3528245/whats-the-differ...

        2: https://stackoverflow.com/questions/3639342/whats-the-differ...

        3: https://discord.com/

        4: http://wasm.continuation-labs.com/d3demo/

        • anthk 6 years ago

          - Python as a shell would be the worst crapware ever. Whitespace syntax, no proper file autocompletion, no good pipes, no nothing. Even Tclsh with readline would be better than a Python shell.

          - By mixing C and C++ you look like a clueless youngster.

          - /usr/local is to handle non-base/non-packages stuff so you don't trash your system. Under OpenBSD, /usr/local is for packages, everything else should be under /opt or maybe ~/src.

          - OpenBSD's man pages slap hard any outdated StackOverflow crap. Have fun with a 3 yo unusable answer. Anything NON-documented on -release- goes wrong fast. Have a look at Linux HOWTOs: that's the future of StackOverflow's usefulness over the years. Near none, because half of the answers will not apply.

          • viraptor 6 years ago

            You took that python shell too literally. Ipython included a shell profile (you can still use it yourself) and https://xon.sh/ works just fine.

          • pjmlp 6 years ago

            The ISO C and C++ papers are full of C/C++ references, are the ISO masters clueless youngsters?

            Or for that matter the likes from Apple, Google and Microsoft?

            • anthk 6 years ago

              Even C99 has nothing to do with bastard children such as C++.

              The latter is convoluted crap, full of redundant functions.

              Further C standards do exist, but C99 and, heck, even ANSI C can be good enough for daily use.

              Perl, Ruby and PHP look close to each other, the first being the source of inspiration for the others.

              But don't try to write PHP/Ruby code as if it were something close to Perl. The same goes for C++.

          • blacksqr 6 years ago

            > Even Tclsh with readline would be better than a Python shell

            In fact, I think Tcl would make an excellent sh replacement.

            • yjftsjthsd-h 6 years ago

              So, wish(1)?

              • anthk 6 years ago

                  There were attempts to create an interactive shell from the Tcl interpreter, but the syntax was a bit off (upvar and a REPL can drive you mad); if you avoided that, you could use it practically as a normal shell. Heck, you can run nethack just fine under tclsh.

        • catalogia 6 years ago

          > multiple users on one computer??

          Last I checked this is something all the mainstream desktop operating systems support, so for you to call this out as a linux weirdness is baffling.

        • yjftsjthsd-h 6 years ago

          - Bash is awful (should be replaced with Python, and there are so many Python-based shells now that reify that opinion)

          It has its pain points, but the things that (ba)sh is good at, it's really good at and python, in my experience, doesn't compete. Dumb example: `tar czf shorttexts.tar.gz $(find . -type f | grep ^./...\.txt) && rsync shorttexts.tar.gz laptop:`. I could probably make a python script that did the equivalent, but it would not be a one-liner, and my feeling is that it would be a lot uglier.

          - Various crusty bits of Linux that haven't aged well besides the above items (/usr/bin AND /usr/local/bin? multiple users on one computer?? user groups??? entering sudo password all the time????)

          In order: /usr/bin is for packaged software; /usr/local is the sysadmin's local builds. Our servers at work have a great many users, and we're quite happy with it. I... have no idea why you would object to groups. If you're entering your sudo password all the time, you're probably doing something wrong, but you're welcome to tell it to not require a password or increase the time before reprompting.

          - The design blunder of assuming that people will read insanely dense man pages (the amount of StackOverflow questions[0][1][2] that exist for anything that can be located in documentation are a testament to this)

          I'll concede that many GNU/Linux manpages are a bit on the long side (hence bro and tldr pages), but having an actual manual, and having it locally (works offline) is quite nice. Besides which, you can usually just search through it and find what you want.
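
          For example (assuming less is the pager):

              man find        # then type /-newer and press Enter to jump to that option; n repeats the search
              man -k archive  # apropos-style keyword search across all installed pages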

          - And of course no bideo gambes and other things (although hopefully webapps will liberate us from OS dependence soon[3][4])

          Webapps have certainly helped the application situation, but Steam and such have also gotten way better; it's moved from "barely any video games on Linux" to "a middling amount of games on Linux".

        • lsofzz 6 years ago

          > - Bash is awful (should be replaced with Python, and there are so many Python-based shells now that reify that opinion)

          You know that no one uses Bash these days right?

          Fish shell, Zsh are the much newer and friendlier shells these days. All my shells run on Fish these days.

          I have to tell you that my shell ergonomics has improved a lot after I started using fish shell.

          But to your point, _why_ is Bash awful?

          • darkwater 6 years ago

            > You know that no one uses Bash these days right?

            I strongly disagree although, just like you, I have no evidence to support my claim

            • lsofzz 6 years ago

              > I strongly disagree although, just like you, I have no evidence to support my claim

              Haha. Fair enough!

        • gitgud 6 years ago

          I don't get this. Linux is just the kernel, there's a variety of OS distributions which allow you to customise them infinitely to be what ever you want.

    • ma2rten 6 years ago

      I was also skeptical of that statement. I did a google trends search and indeed there seems to be a slow decline in command-line related searches:

      https://trends.google.com/trends/explore?date=all&q=bash,awk...

    • remexre 6 years ago

      My complaints with unix (as someone running linux on every device, starting to dip my toe into freebsd on a vps); apologies for lack of editing:

      > everything is text

      I'd often like to send something structured between processes without needing both sides to have to roll their own de/serialization of the domain types; in practice I end up using sockets + some thrown-together HTTP+JSON or TCP+JSON thing instead of pipes for any data that's not CSV-friendly

      > everything (ish) is a file

      > including pipes and fds

      To my mind, this is much less elegant when most of these things don't support seeking, and fcntls have to exist.

      It'd be nicer if there were something to declare interfaces like Varlink [0, 1], and a shell that allowed composing pipelines out of them nicely.

      > every piece of software is accessible as a file, invoked at the command line

      > ...with local arguments

      > ...and persistent globals in the environment

      Sure, mostly fine; serialization still a wart for arguments + env vars, but one I run into much less

      > and common signalling facility

      As in Unix signals? tbh those seem ugly too; sigwinch, sighup, etc ought to be connected to stdin in some way; it'd be nice if there were a more general way to send arbitrary data to processes as an event

      > also nice if some files have magic properties like /dev/random or /proc or /dev/null

      userspace programs can't really extend these though, unless they expose FUSE filesystems, which sounds terrible and nobody does

      also, this results in things like [2]...

      > every program starts with 3 streams, stdin/stdout for work and stderr for out of band errors

      iow, things get trickier once I need more than one input or one output. :)

      I also prefer something like syslog over stderr, but again this is an argument for more structured things.

      [0]: https://varlink.org/

      [1]: ideally with sum type support though, and maybe full dependent types like https://dhall-lang.org/

      [2]: https://www.phoronix.com/scan.php?page=news_item&px=UEFI-rm-...

      • shakna 6 years ago

        > As in Unix signals? tbh those seem ugly too; sigwinch, sighup, etc ought to be connected to stdin in some way; it'd be nice if there were a more general way to send arbitrary data to processes as an event

        Well, as the process can arbitrarily change which stdin it is connected to, as stdin is a file, you need some way to still issue directions to that process.

        However, for the general case, your terminal probably supports sending sigint via ctrl + c, siquit via ctrl + backslash, and the suspend/unsuspend signals via ctrl + z or y. Some terminals may allow you to extend those to the various other signals you'd like to send. (But these signals are being handled by the shell - not by stdin).

        • remexre 6 years ago

          Sure, I mean more like there'd be some interface ByteStream, and probably another interface TerminalInput extends ByteStream; TerminalInput would then contain the especially terminal-related signals (e.g. sigwinch). If a program doesn't declare that it wants a full TerminalInput for stdin, the sigwinches would be dropped. If it declares it wants one but the environment can't provide one (e.g. stdin is closed, stdin is a file, etc.) it would be an error from execve().

          Other signals would be provided by other resources, ofc; e.g. sigint should probably be present in all programs.

          ---

          In general, my ideal OS would have look something like [0], but with isolation between the objects (since they're now processes), provisions for state changes (so probably interfaces would look more like session types), and with a type system that supports algebraic data types.

          [0]: https://www.tedinski.com/2018/02/20/an-oo-language-without-i...

      • blibble 6 years ago

        > iow, things get trickier once I need more than one input or one output. :)

        you can pass as many FDs to a child as you want, 0/1/2 is just a convention

  • xkucf03 6 years ago

    Unix is seriously cool and I consider myself still young :-) But it could be better. See https://relational-pipes.globalcode.info/

cowmix 6 years ago

I use pipelines as much as the next guy, but every time I see a post praising how awesome they are, I'm reminded of the Unix Hater's Handbook. Their take on pipelines is pretty spot on too.

http://web.mit.edu/~simsong/www/ugh.pdf

  • cuddlybacon 6 years ago

    I mostly like what they wrote about pipes. I think the example of bloating they talked about in ls at the start of the shell programming section is a good example: if pipelines are so great, why have so many unix utilities felt the need to bloat?

    I think it's a result of there being just a bit too much friction in building a pipeline. A good portion of it tends to be massaging text formats. The standard unix commands for doing that tend to have infamously bad readability.

    Fish Shell seems to be making this better with its string command, which has a syntax that makes it clear what it is doing: http://fishshell.com/docs/current/cmds/string.html I use fish shell, and I can usually read, and often write, text manipulations with the string command without needing to consult the docs.

    Nushell seems to take a different approach: add structure to command output. By doing that, it seems that a bunch of stuff that is super finicky in the more traditional shells ends up being simple and easy commands with one clear job in nushell. I have never tried it, but it does seem to be movement in the correct direction.

    • code-faster 6 years ago

      It's less that pipelines are friction, they're really not.

      It's more that people like building features and people don't like saying no to features.

      The original unix guys had a rare culture that was happy to knock off unnecessary features.

    • rnestler 6 years ago

      > Nushell seems to take a different approach: add structure to command output. By doing that, it seems that a bunch of stuff that is super finicky in the more traditional shells ends up being simple and easy commands with one clear job in nushell. I have never tried it, but it does seem to be movement in the correct direction.

      I tried nushell a few times and the commands really compose better due to the structured approach. How would one sort the output of ls by size in bash without letting ls do the sorting? In nushell it is as simple as "ls | sort-by size".
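
      For comparison, one way to do it in a traditional shell without letting ls sort (a sketch that assumes GNU stat):

          stat -c '%s %n' -- * | sort -n | cut -d' ' -f2-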

  • ken 6 years ago

    > The Macintosh model, on the other hand, is the exact opposite. The system doesn’t deal with character streams. Data files are extremely high level, usually assuming that they are specific to an application. When was the last time you piped the output of one program to another on a Mac?

    And yet, at least it's possible to have more than one Macintosh application use the same data. Half the world has migrated to web apps, which are far worse. As a user, it's virtually impossible to connect two web apps at all, or access your data in any way except what the designers decided you should be able to do. Data doesn't get any more "specific to an application" than with web apps.

    • junke 6 years ago

      Command-line tools where you glue byte streams are, in spirit, very much like web scraping. Sure, tools can be good or bad, but by design you are going to have ad-hoc input/output formats, and for some tools this is interleaved with presentation/visual stuff (colors, alignment, headers, whitespaces, etc.).

      In a way web apps are then more like standard unix stuff, where you parse whatever output you get, hoping that it has enough usable structure to do an acceptable job.

      The most reusable web apps are those that offer an API, with JSON/XML data formats where you can easily automate your work, and connect them together.

      • jandrese 6 years ago

        > The most reusable web apps are those that offer an API, with JSON/XML data formats where you can easily automate your work, and connect them together.

        Sadly these kind are also the most rare.

  • the_af 6 years ago

    I think of the Unix Hater's Handbook as a kind of loving roast of Unix that hackers of the time understood to be humorous (you know, how people complain about the tools they use every day, much like people would later complain endlessly about Windows), and which was later widely misunderstood to be a real scathing attack.

    It hasn't aged very well, either. "Even today, the X server turns fast computers into dumb terminals" hasn't been true for at least a couple of decades...

    • Quekid5 6 years ago

      > It hasn't aged very well, either. "Even today, the X server turns fast computers into dumb terminals" hasn't been true for at least a couple of decades...

      You're not wrong, but that's only because people wrote extensions for direct access to the graphics hardware... which obviously don't work remotely, and so aren't really in the spirit of X. It's great that that was possible, but OTOH it probably delayed the invention of something like Wayland for a decade+.

      It's been ages since I've used X for a remote application, but I sometimes wonder how many of them actually really still work to any reasonable degree. I remember some of them working when I last tried it, albeit with abysmal performance compared to Remote desktop, for example.

      • the_af 6 years ago

        Fair enough! I wasn't really thinking of the remote use case which was indeed X's main use case when it was conceived (and which is not very relevant today for the majority of users).

        In any case, this was just an example. The Handbook is peppered with complaints which haven't been relevant for ages. It was written before Linux got user-friendly UIs and was widespread to almost every appliance on Earth. It was written before Linux could run AAA games. It was written before Docker. It was written before so many people knew how to program simple scripts. It was written before Windows and Apple embraced Unix.

        If some of these people are still living, I wonder what they think of the (tech) world today. Maybe they are still bitter, or maybe they understand how much they missed the mark ;)

      • Ekaros 6 years ago

        It's not great. Even on a local network I can use Firefox reasonably well, but I can entirely forget about that in the current remote-working environment with VPN and SSH... And this is from a land-line connection with reasonably powerful machines at both ends. A few seconds of response time makes the user experience useless...

    • msla 6 years ago

      No, I think it's mostly just as angry and bitter as it sounds, coming from people who backed the wrong horse (ITS, Lisp Machines, the various things Xerox was utterly determined to kill off... ) and got extremely pissy about Unix being the last OS concept standing outside of, like, VMS or IBM mainframe crap, neither of which they'd see as improvements.

      • Analemma_ 6 years ago

        Just because Unix was the last man standing, doesn't mean it's good. Bad products win in the marketplace all the time and for all kinds of reasons. In Unix's case I'd argue the damage is particularly severe because hackers have elevated Unix to a religion and insisted that all its flaws are actually virtues.

  • zamalek 6 years ago

    I am firmly in the "ugh" camp. I strongly suspect that the fawning that occurs over pipelines is because of sentimentality more than practicality. Extracting information from text using a regular expression is fragile. Writing fragile code brings me absolutely no joy as a programmer - unless I am trying to flex my regex skills.

    If you really look at how pipelines are typically used: lines are analogous to objects, and [whatever delimiter the executable happens to use] separates the analogue of fields. Piping bare objects ("plain old shell objects") makes far more sense, involves far less typing and is far less fragile.

    • shakna 6 years ago

      If you're having to extract your data using a regex, then the data probably isn't well-formed enough for a shell pipeline. It's doable, but a bad idea.

      Regex should not be the first hammer you reach for, because it's a scalpel.

      I recently wanted cpu cores + 1. That could be a single regex. But this is more maintainable, and readable:

          echo '1 + '"$(grep 'cpu cores' /proc/cpuinfo | tail -n1 | awk -F ':' '{print $2}')" | bc
      
      There's room for a regex there. Grep could have done it, and then I wouldn't need the others... But I wouldn't be able to come back in twelve months and instantly be able to tell you what it is doing.
      • Laremere 6 years ago

        Compare to using a real programing language, such as Go:

          return runtime.NumCPU() + 1
        
        Sure, there needs to be OS-to-program (and occasionally program-to-program) communication, but well-defined binary formats are easier to parse in a reliable manner. Plus they can be faster, and the parsing can be moved to an appropriate function.
      • eulers_secret 6 years ago

        OP wants number of CORES, not CPUs (excluding HT 'cores'). I know this is old, but if someone finds their way here looking for a solution...

        In BASH number of CPUs + 1 is as easy as:

        echo $(($(getconf _NPROCESSORS_ONLN)+1))

        or

        echo $(($(nproc)+1))

        getconf even works on OS X!

        Many of the other solutions confuse the two. As far as I can tell, using OPs solution is one of the better ways to get core count.

        Could also: echo $(($(lscpu -p | tail -n 1 | cut -d',' -f 2)+2)) ...but OPs solution is probably easier to read...

      • zamalek 6 years ago

        Would anyone be able to approach this, without knowledge of /proc/cpuinfo, and understand what `$2` refers to?

        > Regex should not be the first hammer you reach for, because it's a scalpel.

        You used a regex tool: grep.

        • jandrese 6 years ago

          It's plaintext, you could just look at it. That's the beauty of text based systems.

      • rakoo 6 years ago

        This is where you need to use awk.

            awk -F ':' '/cpu cores/ {a = $2 + 1} END {print a}' /proc/cpuinfo
        • lsofzz 6 years ago

          > This is where you need to use awk.

          I think need is a strong statement in this context.

            echo $(( 1 + $(grep 'cpu cores' /proc/cpuinfo | head -1 | awk -F ':' '{print $2}') ))
          
          
          Choosing shell vs. [insert modern] programming language is a matter of trading off taste, time, $$$, collective consensus (in a team setting) and so forth.

          [edit]: Forgot code syntax on HN.

      • jandrese 6 years ago

        That seems overly complicated to me. Can you not just:

             grep '^processor' /proc/cpuinfo | wc -l
  • julianeon 6 years ago

    I love UNIX but I think on this point I'm with the haters.

    Practically speaking, as an individual, I don't have the needs of the DMV. For my personal use I'm combing through pip-squeaky data file downloads and CSV's.

    So even though using Python or some other language causes a gigantic performance hit, it's just the difference between 0.002s and 0.02 seconds: a unit of time/inefficiency so small I can't ever perceive it. So I might as well use a language to do my processing, because at my level it's easier to understand and practically the same speed.

  • fenwick67 6 years ago

    From the intro:

    > As for me? I switched to the Mac.

    It's amusing that Apple would go on to switch to a unix-based OS in '01.

    • pjmlp 6 years ago

      Apple released their first UNIX based OS in 1988.

      Exercise for the reader to find out its name.

  • marcosdumay 6 years ago

    Hum... Those other abstractions in that section are great and everything, but it really sucks to write them interactively.

    Pipelines also lack some concept similar to exceptions, but it would also suck to handle those interactively.

  • Silamoth 6 years ago

    Thanks for sharing! I'll have to give that a read sometime. It looks quite interesting.

  • ColanR 6 years ago

    I read the introduction. Sounds like Norman would have liked to hear about Plan 9.

    • dialamac 6 years ago

      I have a hard time believing he would have been unfamiliar with Plan 9. It wasn’t exactly obscure in the research community at the time. See the USENIX proceedings in the late 80s and 90s.

      This is mere speculation, but I doubt he would have appreciated Plan 9.

      • ColanR 6 years ago

        I did notice that he wrote about problems in Unix that were solved in P9; that's the reason I made the comment.

atombender 6 years ago

Pipes are a great idea, but are severely hampered by the many edge cases around escaping, quoting, and, my pet peeve, error handling. By default, in modern shells, this will actually succeed with no error:

  $ alias fail='exit 1'
  $ find / | fail | wc -l; echo $?
  0
  0
You can turn on the "pipefail" option to remedy this:

  $ set -o pipefail
  $ find / | fail | wc -l; echo $?
  0
  1
Most scripts don't, because the option makes everything much stricter, and requires more error handling.

Of course, a lot of scripts also forget to enable the similarly strict "errexit" (-e) and "nounset" options (-u), which are also important in modern scripting.
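
The usual way to opt into all three at once, at the top of a script:

  set -euo pipefail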

There's another error that hardly anyone bothers to handle correctly:

  x=$(find / | fail | wc -l)
This sets x to "" because the command failed. The only way to test if this succeeded is to check $?, or use an if statement around it:

  if ! x=$(find / | fail | wc -l); then
    echo "Fail!" >&2
    exit 1
  fi
I don't think I've seen a script ever bother do this.

Of course, that's before you also want the error message from the command. If you want that, you have to start using named pipes or temporary files, with the attendant cleanup. Shell scripting is suddenly much more complicated, and the resulting scripts become much less fun to write.
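
A sketch of what that ends up looking like, reusing the fake "fail" command from above and assuming pipefail is on:

  errfile=$(mktemp)
  trap 'rm -f "$errfile"' EXIT
  if ! x=$( { find / | fail | wc -l; } 2>"$errfile" ); then
    echo "Fail: $(cat "$errfile")" >&2
    exit 1
  fi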

And that's why shell scripts are so brittle.

  • codemac 6 years ago

    Just use a better shell. rc handles this wonderfully, $? is actually called $status, and it's an array, depending on the number of pipes.
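
    bash has a rough analogue in the PIPESTATUS array, though you have to read it right after the pipeline:

        false | true | true
        echo "${PIPESTATUS[@]}"   # prints "1 0 0", one exit status per stage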

  • fomine3 6 years ago

    set -e creates another pain point for commands where nonzero doesn't mean failure (e.g. diff). It changes the semantics for the whole script.

    • atombender 6 years ago

      If by pain you mean you have to handle errors, yes, that's what you have to do. It's no different from checking the return code of functions in C.

      • zwkrt 6 years ago

        But there are tools that don't follow this paradigm. Famously, grep fails with err=1 if it doesn't find a match, even though 'no match' is a perfectly valid result of a search.
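
        Which is why scripts running under set -e end up sprinkled with workarounds along these lines (pattern and file are placeholders):

            matches=$(grep -c "$pattern" "$file" || true)   # "no match" is fine, don't abort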

      • fomine3 6 years ago

        Yes. So I don't use set -e.

geophile 6 years ago

I love pipelines. I don't know the elaborate sublanguages of find, awk, and others, to exploit them adequately. I also love Python, and would rather use Python than those sublanguages.

I'm developing a shell based on these ideas: https://github.com/geophile/marcel.

  • ehsankia 6 years ago

    +1

    Piping is great if you memorize the (often very different) syntax of every individual tool and memorize their flags, but in reality, unless it's a task you're doing weekly, you'll have to go digging through MAN pages and documentation every time. It's just not intuitive. To this day, if I don't use `tar` for a few months, I need to look up the hodgepodge of letters needed to make it work.
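
    (For what it's worth, the two invocations in question:)

        tar czf out.tar.gz dir/   # (c)reate a g(z)ipped archive in (f)ile out.tar.gz
        tar xzf out.tar.gz        # e(x)tract it again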

    Whenever possible, I just dump the data in Python and work from there. Yes some tasks will require a little more work, but it's work I'm very comfortable with since I write Python daily.

    Your project looks nice, but honestly iPython already lets me run shell commands like `ls` and pipe the results into real Python. That's mostly what I do these days. I just use iPython as my shell.

  • khimaros 6 years ago

    The lispers/schemers in the audience may be interested in Rash https://docs.racket-lang.org/rash/index.html which lets you combine an sh-like language with any other Racket syntax.

  • jraph 6 years ago

    Your project looks really cool.

    I am pretty sure I've seen a Python-based interactive shell a few years ago but I can't remember the name. Have you heard of it?

ketanmaheshwari 6 years ago

Unix pipelines are cool and I am all for it. In recent times however, I see that sometimes they are taken too far without realizing that each stage in the pipeline is a process and a debugging overhead in case something goes wrong.

A case in point is this pipeline that I came across in the wild:

TOKEN=$(kubectl describe secret -n kube-system $(kubectl get secrets -n kube-system | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t' | tr -d " ")

In this case, perhaps awk would have absorbed 3 to 4 stages.
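
Something along these lines (an untested sketch) folds the grep/cut/tr stages into awk:

    TOKEN=$(kubectl describe secret -n kube-system \
      "$(kubectl get secrets -n kube-system | awk '/default/ {print $1; exit}')" |
      awk '/^token/ {print $2}')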

  • dfinninger 6 years ago

    Oh man. That's when knowing more about the tools you are using comes in handy. Kubectl has native JSONPath support [0].

    Or at the very least, use structured output with "-o json" and jq [1], like they mention in the article.

    I have always found that trying to parse JSON with native shell tools has been difficult and error-prone.

    [0] https://kubernetes.io/docs/reference/kubectl/jsonpath/ [1] https://stedolan.github.io/jq/

  • salmo 6 years ago

    I think people can go through a few stages of their shell-foo.

    The first involves a lot of single commands and temporary files.

    The second uses pipes, but only tacks on commands with no refactoring.

    The third would recognize that all the grep and cut should just be awk, that you can redirect the cumulative output of a control statement, that subprocesses and coroutines are your friend. We should all aspire to this.

    The fourth stage is rare. Some people start using weird file descriptors to chuck data around pipes, extensive process substitution, etc. This is the Perl of shell and should be avoided. The enlightened return back to terse readable scripts, but with greater wisdom.

    Also, I always hear of people hating awk and sed and preferring, say, python. Understanding awk and especially sed will make your python better. There really is a significant niche where the classic tools are a better fit than a full programming language.

    • young_unixer 6 years ago

      My most complex use of the shell is defining aliases in .bashrc and have never felt the need to go further. Do you recommend learning all of that for someone like me?

      If so, what resource do you recommend?

      • asicsp 6 years ago

        I have books on GNU grep/sed/awk [0] (currently free in the spirit of quarantine learning) that teach the commands and features step by step using plenty of examples and exercises. There's a separate chapter for regular expressions too.

        And I have a list of curated resources for `bash` and Linux [1]. Here's what I'd recommend:

        * https://ryanstutorials.net/linuxtutorial/ is a good place to start for beginners

        * https://mywiki.wooledge.org/BashGuide and the rest of this wonderful wonderful site is recommended to understand `bash` and its features and gotchas

        * https://explainshell.com/ gives you a quick help on the various parts of the command, including documentation

        * https://www.shellcheck.net/ is a must have tool if you are writing shell scripts

        * https://linuxjourney.com/ is probably the most complete site to understand how things work in *nix environment

        [0] https://learnbyexample.github.io/books/

        [1] https://github.com/learnbyexample/scripting_course/blob/mast...

      • DiggyJohnson 6 years ago

        I absolutely think a lot of people are in this position. I am just now slightly "farther along" the path described by OP, and I'm starting to see how it can be worth it, as long as you're pursuing the new information in an actually useful context.

        I'd definitely be interested in OP's answer. That sort of breakdown is often extremely valuable in any area of learning.

      • every 6 years ago

        You can get a lot of mileage out of aliases, especially compound ones. But eventually you may run up against something they don't handle very well. For that a simple shell script might be the next logical step. And fortunately, they aren't that much of a leap...

      • salmo 6 years ago

        I may be a bad person to ask. I like history a lot. A lot of this stuff was originally picked up by secretaries and non-computer people working on manuals for Bell Labs. If you understand it, it will save you time. It will also make you use and understand Linux/Unix better, which will bleed into any other programming you do (assuming you write other code). The skill also comes up a lot in strange places like Dockerfiles, CI Pipelines, and a lot of that kind of surrounding infrastructure for applications.

        The video referenced in the article is actually pretty interesting, as is the book "The Unix Programming Environment". Both will refer to outdated technology like real terminals, but I think they help understand the intent of the systems we still use today. Any of the Bell Labs books are great for understanding Unix.

        Also do a minimal install of Linux or FreeBSD and read all of the man pages for commands in /bin, /usr/bin. Read the docs for the bash builtins.

        The old Useless Use of Cat is great: http://porkmail.org/era/unix/award.html

        But writing real scripts is probably the only way to learn. Next time you need to munge data, try the shell. When you read about a new command, like comm, try to use it to do something.

        Also force yourself into good practices:

        - revision control everything

        - provide -h output for every script you write (getopts is a good thing to be in the habit of using)

        - use shellcheck to beat bad habits out of you

        - try to do things in parallel when you can

        - assume everything you make is going to run at a much larger scale than you intend it to (more inputs, more servers, more everything)

        - after something works, see where you can reduce the number of commands/processes you run

        - write things like a moron is going to use them who will accidentally put spaces or hyphens or whatever in the worst place possible, not provide required arguments, and copy/paste broken things out of word

        - don't let scripts leave trash around, even when aborted: learn trap & signals
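
        The basic shape of that last one:

            tmpdir=$(mktemp -d)
            cleanup() { rm -rf "$tmpdir"; }
            trap cleanup EXIT
            trap 'exit 130' INT TERM   # make sure the EXIT trap also fires on ^C / kill
            # ... do the real work inside "$tmpdir" ...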

        The two things you should probably understand the most, but on which I'm the least help on are sed and awk. Think of sed as a way to have an automated text editor. Awk is about processing structured data (columns and fields). I learned these through trial and error and a lot of staring at examples. Understanding regular expressions is key to both and is, in general, an invaluable skill.

        Oh and if you are a vi(m) user, remember that the shell is always there to help you and is just a ! away. Things like `:1,17!sort -nr -k2` (sort lines 1-17, reverse numerical order on the second field) can save a ton of time. And even putting some shell in the middle of a file and running !!sh is super-handy to replace the current line that has some commands with the output of those commands.

        • mturmon 6 years ago

          You’ve left some good tips. sed, awk, and grep. getopts. Read the man pages. Try to automate even simple series of commands.

          Don’t be shy about running a whole pipeline of commands (in an automation script) just to isolate one field within one line of one file and add one to it, and store the result in a shell variable.
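
          For instance (MemTotal picked purely as an illustration):

              mem_kb=$(( $(awk '/^MemTotal/ {print $2}' /proc/meminfo) + 1 ))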

    • viraptor 6 years ago

      > Understanding awk and especially sed will make your python better.

      Why do you think so?

  • m0xte 6 years ago

    Every time I see that sort of stuff I end up | perl, still, after 20-odd years. Sometimes I find perl is not installed and I die a little.

  • lordleft 6 years ago

    A little bit of grep / awk goes very, very far.

  • pwdisswordfish2 6 years ago

    The -E on the grep makes no sense. I do not see any ERE. Looks like everything here could be done with a single invocation of sed. Can anyone share some sample output of kubectl?

  • ecnahc515 6 years ago

    This is a pretty bad example since kubectl can output JSON and using something like `jq` would have made it even simpler.

    • majewsky 6 years ago

      Better yet, kubectl can take a jsonpath expression to select from the JSON. For example, to just get a name:

        kubectl get something -o jsonpath='{.items[*].metadata.name}'
russellbeattie 6 years ago

I think there's an interesting inflection point between piping different utilities together to get something done, and just whipping up a script to do the same thing instead.

First I'll use the command line to, say, grab a file from a URL, parse, sort and format it. If I find myself doing the same commands a lot, I'll make a .sh file and pop the commands in there.

But then there's that next step, which is where Bash in particular falls down: Branching and loops or any real logic. I've tried it enough times to know it's not worth it. So at this point, I load up a text editor and write a NodeJS script which does the same thing (Used to be Perl, or Python). If I need more functionality than what's in the standard library, I'll make a folder and do an npm init -y and npm install a few packages for what I need.

This is not as elegant as pipes, but I have more fine grained control over the data, and the end result is a folder I can zip and send to someone else in case they want to use the same script.

There is a way to make a NodeJS script listen to STDIO and act like another Unix utility, but I never do that. Once I'm in a scripting environment, I might as well just put it all in there so it's in one place.

nojito 6 years ago

Debugging Linux pipelines is not a fun experience.

This is one clear area where Powershell with its object model has got it right.

  • jandrese 6 years ago

    I don't find it too bad most of the time. If the pipeline isn't working you can cut it off at any juncture and watch standard out to see what's going on.

    You can even redirect some half processed data to a file so you don't have to re-run the first half of the pipe over and over while you work out what's going wrong in the tail end.
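
    Something like this, where step1..step4 stand in for whatever the real stages are:

        step1 | step2 | head              # eyeball the intermediate stream
        step1 | step2 > /tmp/stage2.txt   # freeze it once it looks right...
        step3 < /tmp/stage2.txt | step4   # ...and iterate on the tail end only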

  • pwdisswordfish2 6 years ago

    Please provide an example. Thank you.

antipaul 6 years ago

A similar philosophy has made the "tidyverse" a much-loved extension of the statistical language R.

Compare the following 2 equivalent snippets. Which one seems more understandable?

    iris_data %>%
        names() %>%
        tolower() %>%
        gsub(".", "_", ., fixed=TRUE) %>%
        paste0("(", ., ")")

or:

    paste0("(", gsub(".", "_", tolower(names(iris_data)), fixed=TRUE), ")")
  • chubot 6 years ago

    Yup, I'm working on Oil shell and I wrote an article about data frames, comparing tidyverse, Pandas, and raw Python:

    What Is a Data Frame? (In Python, R, and SQL)

    http://www.oilshell.org/blog/2018/11/30.html

    The idea is essentially to combine tidyverse and shell -- i.e. put structured data in pipes. I use both shell and R for data cleaning.

    My approach is different than say PowerShell because data still gets serialized; it's just more strictly defined and easily parseable. It's more like JSON than in-memory objects.

    The left-to-right syntax is nicer and more composable IMO, and many functional languages are growing this feature (Scala, maybe Haskell?). Although I think Unix pipes serve a distinct use case.

CalmStorm 6 years ago

Unix pipelines are indeed beautiful, especially when you consider its similarity to Haskell's monadic I/O: http://okmij.org/ftp/Computation/monadic-shell.html

Unix pipelines actually helped me make sense of Haskell's monad.

  • AnimalMuppet 6 years ago

    Could you be more specific? I don't get it.

    • msla 6 years ago

      It helps if you avoid the syntactic sugar of do-notation:

          main = getArgs >>= processData >>= displayData
      
      main is in the IO monad. The >>= function takes a value wrapped in a monad and a function which accepts a value and returns a value wrapped in the same type of monad, and returns the same monad-wrapped value the function did. It can be used as an infix operator because Haskell allows that if you specify precedence, in which case its left side is the first argument (the value-in-a-monad) and its right side is the second argument (the function).

      The (imaginary) function getArgs takes no arguments and returns a list of command-line arguments wrapped in an IO monad. The first >>= takes that list and feeds it into the function on its right, processData, which accepts it and returns some other value in an IO monad, which the third >>= accepts and feeds into displayData, which accepts it and returns nothing (or, technically, IO (), the unit (empty) value wrapped in an IO monad) which is main's return value.

      See? Everything is still function application, but the mental model becomes feeding data through a pipeline of functions, each of which operates on the data and passes it along.

    • CalmStorm 6 years ago

      In Unix:

          a; b    # Execute command b after a. The result of a is not used by b.
      
      In Haskell:

          a >> b  # Run function b after a. The result of a is not used by b.
      
      In Unix:

          a | b    # Execute command b after a. The result of a is used by b.
      
      In Haskell:

          a >>= b  # Run function b after a. The result of a is used by b.
      • cnity 6 years ago

        Nitpick, sort of, but in unix, a | b executes a and b in parallel, not sequentially. This somewhat complicates the analogy with programming. They're more like functions operating on a data stream.

        Edit: I might be misinterpreting your use of "execute _after_". You may not mean "after the completion of a" but instead "operating on the results of a, asynchronously" in which case, apologies.

  • thesz 6 years ago

    Pipelines are function composition, not monads.

    For pipelines to be monadic, their structure must be able to change depending on the result on the part of pipeline. The type of binding operator hints at it: (>>=) :: m a -> (a -> m b) -> m b.

    The second argument receives the result of the first computation and decides how the overall result (m b) will be computed.

    As for program composition, I would like to add gstreamer to the mix: it allows for DAG communication between programs.

nsajko 6 years ago

Surprised there is no mention of Doug McIlroy: https://wiki.c2.com/?DougMcIlroy

lexpar 6 years ago

Interesting comment in the header of this site.

https://github.com/prithugoswami/personal-website/blob/maste...

rzmnzm 6 years ago

Powershell is the ultimate expression of the Unix pipeline imo.

Passing objects through the pipeline and being able to access this data without awk/sed incantations is a blessing for me.

I think anyone who appreciates shell pipelines and Python can grok the advantages of the approach taken by Powershell; in a large way it is built directly upon an existing Unix heritage.

I'm not so good at explaining why, but for anyone curious please have a look at the Monad manifesto by Jeffrey Snover

https://devblogs.microsoft.com/powershell/monad-manifesto-th...

You may not agree with the implementation, but the ideas behind it, I think, are worth considering.

akavel 6 years ago

I think it will be on topic if I let myself take this occasion to once again plug in a short public service announcement of an open-source tool I built, that helps interactively build Unix/Linux pipelines, dubbed "The Ultimate Plumber":

https://github.com/akavel/up/

I've also recently seen it being described in shorter words as a "sticky REPL for shell". Hope you like it, and it makes your life easier!

  • edeion 6 years ago

    I'm surprised this doesn't get up-voted. The tool automates very nicely my shell command writing process: pipe one step at a time and check the incremental results by using head to get a sample. Looks cool to me!

tekert 6 years ago

To my understanding, this is the same pattern where every "object" outputs the same data type for other "objects" to consume. This pattern can have a text or GUI representation that is really powerful in its own right if you think about it. It's why automation agents, with their event consumption/emission, are so powerful, and why the web itself shifts towards this pattern (JSON for communicating data, code as objects). The thing is, this will always be a higher level of abstraction. I think a GUI for this pattern should exist as a default method in most operating systems; it would solve a lot of learning problems, like learning all the names and options of objects, and it would be the perfect default GUI tool. Actually, sites like Zapier, or tools like Huginn, already follow this pattern. I've always wondered why something so useful spreads so slowly.

tarkin2 6 years ago

I wish there were gui pipes. Pipes are wonderful but they’re neither interactive nor continuous.

Pipes that loop, outputting a declarative gui text format, and listen for events from the gui, would be marvellous.

I can’t think how to do that without sockets and a bash loop. And that seems to create the kind of complexity that pipes manage to avoid.

benjaminoakes 6 years ago

I find gron to be much more Unix-y than jq. It "explodes" JSON into single lines for use with grep, sed, etc and can recombine back into JSON as well.

https://github.com/TomNomNom/gron

  • majewsky 6 years ago

    Since jq is sed for JSON, by the transitive property, you're saying that sed is not Unix-y. ;)

    Seriously though, I use both, and IMO they serve different purposes. gron is incredibly useful for exploring unknown data formats, especially with any form of

      something | gron | grep something
    
    Once you've figured out how the data format in question works, a jq script is usually more succinct and precise than a chain of gron/{grep,awk,sed,...}/ungron.

    So in practice, gron for prompts and jq for scripts.
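
    A typical round trip, with the URL and field names made up for illustration:

      curl -s "$API_URL" | gron | grep -F 'author.name'   # explore
      curl -s "$API_URL" | jq -r '.commit.author.name'    # then script it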

parliament32 6 years ago

Related: Pipelines can be 235x faster than a Hadoop cluster https://adamdrake.com/command-line-tools-can-be-235x-faster-...

theshadowmonkey 6 years ago

Pipes are one of the best experiences you'll have, whatever you're doing. I was debugging a remote server logging millions of logs a day and aggregating a little on the server. All it required was wget, jq, sed and awk, and I had a more powerful log analyzer than Splunk or any other similar solution, right on a developer Mac. Which is awesome when you're paying a fortune to use Splunk. For getting some quick insights, Unix pipes are a godsend.
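
That kind of ad-hoc triage tends to look something like this (endpoint and field names are made up):

  wget -qO- "$LOG_URL" | jq -r '.[] | [.level, .message] | @tsv' |
    awk -F'\t' '$1 == "ERROR" {print $2}' | sort | uniq -c | sort -rn | head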

sedatk 6 years ago

It's ironic that the article ends with Python code. You could have done everything in Python in the first place and it would have probably been much more readable.

  • enriquto 6 years ago

    > You could have done everything in Python in the first place and it would have probably been much more readable.

    Python scripts that call a few third-party programs are notoriously unreadable, full of subprocess call/getoutput/decode('utf8') bullshit. Python is alright as a language, but it is a very bad team player: it only really works when everything is written in Python; if you want to use things written in other languages it becomes icky really fast. Python projects that use parts written in other languages inevitably gravitate to being 100% Python. Another way to put it is that Python is a cancer.

  • unnouinceput 6 years ago

    But slower

enriquto 6 years ago

If you eval this simple pipeline

     file -b `echo $PATH:|sed 's/:/\/* /g'`|cut -d\  -f-2|sort|uniq -c|sort -n
it prints a histogram of the types of all the programs on your path (e.g., whether they are shell, python, perl scripts or executable binaries). How can you ever write such a cute thing in e.g., python or, god forbid, java?
  • junke 6 years ago

    - This fails on badly formed PATH, or malicious PATH

    - When PATH contains entries that no longer exist / are not mounted, substitution with "sed" gives a wildcard that is not expanded, and eventually makes "file" report an error, which is not filtered out

    - If PATH contains entries that have spaces, the expansion is incorrect

    • enriquto 6 years ago

      More realistically, it also fails if you have directories on your path that are symlinks to other directories in the path. In that case their programs are doubly-counted.

      Anyways, if your PATH is malicious then you have worse problems than this silly script :)

      • junke 6 years ago

        I agree, but "more realistically"? Those cases were pretty realistic, I think. If you run the script on a box where you have users who can do whatever they want, the script should at least fail on bad inputs.

            PATH='rm -rf :/bin'
      • JdeBP 6 years ago

        Your error is in thinking that either of those cases is in any way malice.

  • viraptor 6 years ago

    You don't. If you use it once, meh. If you share it with anyone or preserve for the future, why would you want it to be cute?

    It's just a few lines in python, probably takes just as long to write because you don't have to play with what needs to be escaped and what doesn't. You can actually tell what's the intent of each line and it doesn't fail on paths starting with minuses or including spaces. Outside of one time use or a code golf challenge, it's not cute.

    https://pastebin.com/FnBCgHUm

    • enriquto 6 years ago

      This is not about "code golfing", the "sort|uniq -c|sort" combo is in the hall of fame of great code lines. The PATH thing in my script was an unnecessary distractor, consider this:

          file -b /bin/* /usr/bin/* |cut -d' ' -f-2|sort|uniq -c|sort -n
      
      There are no bizarre escapes nor anything. Besides, the "file" program is called only once. The python equivalent that you wrote may be better if you want to store it somewhere, but it takes a lot to type, and it serves a different purpose. The shell line is something that you write once and run it because it falls naturally from your fingers. Moreover, you can run it everywhere, as it works in bash, zsh, pdksh and any old shell. The python version requires an appropriate python version installed (3 not 2).
      • viraptor 6 years ago

        > Moreover, you can run it everywhere

        Issue 1: `zsh: argument list too long: file`

        Issue 2: `file -b /usr/bin/apt` returns 3 lines, so the entries you count end up being:

            2 /usr/bin/apt (for
            1 Mach-O universal
        
        Yes, it looks easier, but there are traps.
      • marcthe12 6 years ago

        find-xargs combo could have been better. It also prevents the case where you cross arg max
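
        Roughly (a sketch of the same histogram built with find and xargs):

            find /bin /usr/bin -maxdepth 1 -type f -print0 |
              xargs -0 file -b | cut -d' ' -f-2 | sort | uniq -c | sort -n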

        • enriquto 6 years ago

          Definitely! It is also much slower, though, especially if you need to add "-L 1". I prefer to try it that way first; if the argument list is too long, then try xargs.

          On the other hand, the limitations on the command length of modern shells are idiotic. The only limitation should be the amount of available RAM, not an artificially imposed limit.

stevefan1999 6 years ago

I think it is terrible: if everything is text, you can't precisely describe some data structures, e.g. a circular list.

praveen9920 6 years ago

I recently wrote some Go code for fetching an RSS feed and displaying a gist based on requirements.

Looking at this code, I am tempted to reimplement it using pipes, but a saner mind took over and said "don't fix something that is not broken".

I'll probably still do it and get some benchmark numbers to compare the two.

hi41 6 years ago

Thank you! I did not know you could get a "SQL-like group by" using uniq -c. That's so cool! I used to pipe it to awk, count using an array and then display, but your method is far better than mine.

  • quicklime 6 years ago

    I use "sort | uniq -c" all the time, but I find it annoying that it does it in worse-than-linear time. It gets super slow every time I accidentally pass it a large input.

    At that point I usually fall back to "awk '{ xs[$0]++ } END { for (x in xs) print xs[x], x }'", but it's quite long and awkward to type, so I don't do it by default every time.

    At some point I'll add an alias or a script to do this. One day.

    edit: OK I finally did it; it's literally been on my to-do list for about 10 years now, so thanks :)

        $ cat ~/bin/count
        #!/usr/bin/awk -f
        { xs[$0]++ } END { for (x in xs) print xs[x], x }

Silamoth 6 years ago

That was a great article! Pipes can definitely be very powerful. I will say, though, that I often find myself reading pages of documentation in order to actually get anything done with Unix and its many commands.

miclill 6 years ago

Great write-up. One thing I would add is how pipes do buffering / apply backpressure. To my understanding, this is the "magic" that makes pipes fast and failsafe(r).
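
A tiny way to see both effects at once (a sketch; exact pipe buffer sizes vary by system):

    # yes(1) writes as fast as it can, but blocks once the pipe buffer is
    # full (backpressure); when head exits after 5 lines, the pipe closes,
    # yes receives SIGPIPE, and the whole pipeline terminates cleanly
    yes "hello" | head -n 5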

not2b 6 years ago

I've been using pipes for decades to get my work done, but it was cool to learn about jq as I have much less experience with JSON. It's a very cool program. Thanks.
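
For anyone else new to it, a typical use looks something like this (assuming curl and jq are installed; the GitHub endpoint is just an example):

    # fetch JSON, then print one field from each element of the top-level array
    curl -s https://api.github.com/repos/torvalds/linux/contributors \
        | jq -r '.[].login'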

sigjuice 6 years ago

Minor nitpick: It is Unix "pipes" (not pipelines).

kazinator 6 years ago

> If you append /export to the url you get the list in a .csv format.

Text filtering approach narrowly rescued by website feature.

Phew, that was close!

stormdennis 6 years ago

The video from 1982 is brilliant: great explanations from an era before certain knowledge was assumed to be generally known.

mehrdadn 6 years ago

What kills me about pipelines is when I pipe into xargs and then suddenly can't kill things properly with Ctrl+C. Often I have to jump through hoops to parse arguments into arrays and avoid xargs just for this. (This has to do with stdin being redirected. I don't recall if there's anything particular about xargs here, but that's where it usually comes up.)
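
One of those hoops, for the curious, looks roughly like this (a sketch assuming bash 4.4+ for mapfile -d ''; some_command is a placeholder):

    # collect NUL-separated file names into an array, then pass them as
    # ordinary arguments -- no xargs, so stdin stays attached to the terminal
    mapfile -d '' files < <(find . -name '*.log' -print0)
    some_command "${files[@]}"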

jakubnarebski 6 years ago

The first example in the article is what `git shortlog -n` does; no need for the pipeline.
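
Concretely (adding -s prints only the per-author counts):

    # commits per author, sorted by number of commits
    git shortlog -sn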

JadeNB 6 years ago

What does it mean to say that the video shows "Kernighan being a complete chad"?

  • Cerium 6 years ago

    Based on some searching, it seems calling someone a "Chad" is evoking the cool and attractive image that the name presents.

    "Chad is a man who automatically and naturally turns girls on due to his appearance and demeanor" https://www.reddit.com/r/ForeverAlone/comments/6bgbc2/whats_...

  • Aaronstotle 6 years ago

    Essentially another way of saying that Kernighan exudes confidence in his demeanor and excellence in the subject matter

  • globular-toast 6 years ago

    According to Urban Dictionary:

    "chad 1. Residue of faecal matter situated between arse cheeks after incomplete wiping and can spread to balls."

    I have no idea why the author decided to use that term.

    There is a related term "Chad" (capital C) which invokes the image of a man who is attractive to women. Again, I have no idea why the author decided to use that term.

  • dodwer 6 years ago

    It's an alt-right in-joke related to the incel subculture.

0xsnowcrash 6 years ago

Aka The Ugliness of PowerShell. It would be fun to see equivalents to these in PowerShell.

nickthemagicman 6 years ago

I have problems understanding which commands I can pipe. Some work, some don't.

  • ketanmaheshwari 6 years ago

    Commands that read their input from standard input can be used as the receiver in a pipeline. For instance, `A | B` is a short form of `A > tmp.txt; B < tmp.txt; rm tmp.txt`.

    There are some commands that need literal arguments, which is different from standard input. For instance, the echo command: `echo < list.txt` will not work, assuming you want to print the items inside the list, whereas `echo itemA itemB itemC` will work. This is where xargs comes into play -- it converts the standard input stream to literal arguments, among other things.
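
    For example (a small sketch, assuming list.txt contains one item per line):

        # echo ignores its standard input, so this just prints a blank line:
        echo < list.txt
        # xargs turns the lines on stdin into arguments for echo:
        xargs echo < list.txt     # roughly: echo itemA itemB itemC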

    • nickthemagicman 6 years ago

      How do you know which ones read from standard input, and how can you find that out efficiently?

      • ketanmaheshwari 6 years ago

        The man page of the command helps in most cases. Often the description section says whether the command reads from standard input, a file, or both.

  • mehrdadn 6 years ago

    --help is a great one. When you want to search the help output, you can either feed it into grep or not... depending on whether the program writes to stdout or stderr. You can force it onto stdout by adding 2>&1 at the end of your command. (It's possible that's what you're running into.)
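
    For example (somecmd and the search term are placeholders):

        # if the program prints its help on stderr, merge it into stdout
        # so that grep can see it
        somecmd --help 2>&1 | grep -i verbose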

  • cptnapalm 6 years ago

    As everything gets tossed into /bin, there's no easy way to tell what is a tool or filter and what is an application. It makes discovery difficult.

unnouinceput 6 years ago

With Cygwin you can do exactly this on Windows too. For me, Cygwin is a blessing.

hbarka 6 years ago

abinitio.com was born from these principles.

Dilu8 6 years ago

Good article. Thanks for the information.

staycoolboy 6 years ago

The first time I saw pipes in action I was coming from Apple and PC DOS land. It blew my mind. The reusability of so many /bin tools, and being able to stuff my own into that pattern, was amazing.

If you like pipes and multimedia, check out GStreamer; it has taken the custom pipeline example to real time.

known 6 years ago

The capacity of the pipe buffer is important for doing serious producer|consumer work.
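
One quick way to see the limit in action (a sketch; on Linux the default capacity is 64 KiB, but it varies by system):

    # dd tries to push 128 KiB into the pipe, but the reader never reads;
    # dd blocks once the buffer is full, and when sleep exits the read end
    # closes and dd is killed by SIGPIPE on its next write
    dd if=/dev/zero bs=1k count=128 | sleep 10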

billfor 6 years ago

I think systemd can do all of that.

niko0221 6 years ago

The explanation is great. You learn a lot here. I'm new and this article is pretty great.

fogetti 6 years ago

I am sorry for playing the devil's advocate. I also think that pipes are extremely useful and a very strong paradigm, and I use them daily in my work. It is also no accident that they are a fundamental and integral part of PowerShell.

But is this really HN front-page worthy? I have seen this horse beaten to death for decades now. These kinds of articles have been around since the very beginning of the internet.

Am I missing something newsworthy which makes this article different from the hundreds of thousands of similar articles?

  • hu3 6 years ago

    Perhaps it's useful to reinforce pipes' utility to those who are not aware.

    As for it being the top article, if I had to guess, it's because people relate to the tool's power and enjoy sharing anecdotes about it.

adamnemecek 6 years ago

Unix pipes are a 1970s construct, the same way bell-bottom pants are. It's a construct that doesn't take into account the problems and scale of today's computing. Unicode? Hope your pipes process it fine. Video buffers? High perf? Fuggetaboutit. Piping the output of ls to idk what? Nice, I'll put it on the fridge.

  • JdeBP 6 years ago

    An informed criticism would be based upon the knowledge that Doug McIlroy came up with the idea in the 1960s. (-:
