
Unix and Microservice Platforms

blog.deref.io

97 points by brandonbloom 4 years ago · 38 comments

Animats 4 years ago

Multics required unusual hardware that was expensive at the time. The hardware was abandoned by GE and taken over by Honeywell. Honeywell, which made thermostats then and makes thermostats now, was never a major player in computing. The big advantage of UNIX was that it runs on vanilla hardware.

UNIX is all wrong for microservices. Interprocess communication barely existed at first, and it's still mediocre. QNX did this right, with a true message-passing architecture, and a message-oriented network protocol. (Reliable, any-length message, not just raw UDP packets.) QNX continues to power many real-time systems, passing messages around.
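
A rough sketch of that style of IPC (illustration only, in Go, and not QNX's actual API): a synchronous, length-prefixed request/reply exchange, where the sender blocks until the reply arrives and a message of any length is delivered whole.

  // Illustration only: a synchronous, length-prefixed request/reply exchange,
  // loosely in the spirit of message passing (this is not the QNX API).
  package main

  import (
    "encoding/binary"
    "fmt"
    "io"
    "net"
  )

  // sendMsg writes a 4-byte length prefix followed by the payload, so the
  // receiver always gets the whole message, whatever its size.
  func sendMsg(w io.Writer, payload []byte) error {
    if err := binary.Write(w, binary.BigEndian, uint32(len(payload))); err != nil {
      return err
    }
    _, err := w.Write(payload)
    return err
  }

  // recvMsg reads exactly one complete length-prefixed message.
  func recvMsg(r io.Reader) ([]byte, error) {
    var n uint32
    if err := binary.Read(r, binary.BigEndian, &n); err != nil {
      return nil, err
    }
    buf := make([]byte, n)
    _, err := io.ReadFull(r, buf)
    return buf, err
  }

  func main() {
    client, server := net.Pipe() // in-memory, connection-like transport

    // "Server": receive one request, send one reply.
    go func() {
      req, _ := recvMsg(server)
      _ = sendMsg(server, append([]byte("echo: "), req...))
    }()

    // "Client": send a request, block until the reply arrives.
    _ = sendMsg(client, []byte("hello"))
    reply, _ := recvMsg(client)
    fmt.Println(string(reply)) // prints "echo: hello"
  }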

  • pjmlp 4 years ago

    The big advantage of UNIX was that AT&T wasn't allowed to sell it, and it was basically free beer available for a symbolic price, then the UNIX V6 annotated book did the rest.

    If UNIX had been closed source and sold at the same price as the competition, whatever hardware it ran on would not have mattered at all.

pjmlp 4 years ago

> See, the brilliance of Unix didn't stop at functional orthogonality, they also used C: A high-level programming language, as compared to machine-specific assembly code. Once Unix started getting ported to new hardware architectures, the two-dimensional implementation matrix becomes three-dimensional.

Yeah, like plenty of other OSes since the late 1950s.

Multics was also written in a high-level language, PL/I.

If anything, I am looking forward to cloud platforms replacing UNIX.

It doesn't matter what AWS runs on, as long as my language runtime, or Kubernetes runs there.

UNIX, hypervisor, Linux, Windows, bare metal,...., I just don't care.

  • MisterTea 4 years ago

    > If anything, I am looking forward to the cloud platforms to replace UNIX.

    That happened in the late 80s / early 90s, and it was called Plan 9. And there was a reason it failed to fill its intended role:

      [I]t looks like Plan 9 failed simply because it fell short of being a compelling enough improvement on Unix to displace its ancestor. Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough.
      — Eric S. Raymond
    
    
    > UNIX, hypervisor, Linux, Windows, bare metal,...., I just don't care.

    Then you don't need to worry about "looking forward" to replace any of those things.

    • pjmlp 4 years ago

      Plan 9 failed because Bell Labs management did not care enough, and eventually moved the team onto the Inferno project to try to fight against Sun and Java, that is why.

      The lesson is that if you want a project to succeed, ensure management is on your side for the long road.

      I still need to care, because the final decision on what to use is not always mine to make.

      • MisterTea 4 years ago

        The Inferno project was around a year of disruption. But yes, the whole Labs management failure plays a part too.

  • miohtama 4 years ago

    Google has taken a step in this direction with Fuchsia, to replace the Linux kernel under Android/ChromeOS and maybe later the servers. But it will be a massive undertaking due to the massive sunk cost in the UNIX legacy.

    • pjmlp 4 years ago

      Yes, but if no one ever spends the money, it will never change.

      The main reason alternatives have failed so far wasn't lack of technical capability, but rather companies not being willing to put in the money for the long road it takes to reboot everything.

      At least the majority of userspace applications are moving in the right direction, especially those deployed on cloud environments on top of orchestrated runtimes, or as mobile phone apps.

      • formerly_proven 4 years ago

        Linux isn't Unix: commercial Unix already got obliterated by Linux (and the dotcom bubble, Itanium). Completely different development model, vastly different capabilities.

        • pjmlp 4 years ago

          Yeah, I forgot about that, it is just a kernel that happens to be POSIX compatible.

overtomanu 4 years ago

Doesn't a service mesh seem similar to the enterprise service bus in SOA?

thom 4 years ago

Some truly baffling dataviz in this article, which really underlines its 'do one thing well, stick to text' message.

flerovium 4 years ago

I would like to see the details of what primitives the author considers important (the "columns" of the perimeter diagram).

  • brandonbloomOP 4 years ago

    Hi, author here.

    My writing backlog includes a post defining what "good" looks like for a web service. I don't think most organizations are doing it well, or have their priorities straight.

    Rumor has it that there is a 30+ page checklist if you work at AWS and want to launch a new service. Meanwhile, CloudFormation support still trails most new service launches and remains completely unavailable for many old services. These things suggest strongly that they are coding the area, not the perimeter.

    • flerovium 4 years ago

      In many ways, AWS's insistence on proprietary technology over universal open-source, and the lack of interoperability with other cloud providers, is itself "coding the area", turning an O(N+M) problem into an O(N*M) one.
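
      A minimal Go sketch of that distinction (names are made up for illustration): with one shared interface as the perimeter, N providers and M consumers each write a single piece of code, instead of N*M pairwise integrations.

        // Illustration of "coding the perimeter": one shared interface means
        // N implementations + M consumers, rather than N*M pairwise integrations.
        package main

        import "fmt"

        // ObjectStore is the hypothetical perimeter; every provider implements it once.
        type ObjectStore interface {
          Put(key string, value []byte) error
          Get(key string) ([]byte, error)
        }

        // One of N providers: an in-memory store standing in for a real backend.
        type memStore struct{ m map[string][]byte }

        func newMemStore() *memStore { return &memStore{m: map[string][]byte{}} }

        func (s *memStore) Put(key string, value []byte) error { s.m[key] = value; return nil }

        func (s *memStore) Get(key string) ([]byte, error) {
          v, ok := s.m[key]
          if !ok {
            return nil, fmt.Errorf("not found: %s", key)
          }
          return v, nil
        }

        // One of M consumers: written once against the interface, works with any provider.
        func backup(store ObjectStore, key string, data []byte) error {
          return store.Put(key, data)
        }

        func main() {
          var store ObjectStore = newMemStore()
          _ = backup(store, "report.txt", []byte("hello"))
          v, _ := store.Get("report.txt")
          fmt.Println(string(v))
        }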

    • _sheep 4 years ago

      AWS's console UI is a pretty good sign of "coding the area" as well. So many subtle inconsistencies in navigation structure, pagination, and filtering/search across all of their different services.

richardfey 4 years ago

Don't microservices that use shared code amount to going back to a monolith in disguise?

  • azaras 4 years ago

    No. Sharing code is good. Sharing memory is bad.

    • discreteevent 4 years ago

      Sharing memory is pretty easy when you have a garbage collector. Even if you have concurrency, you can use messaging without having to put the communicating components into separate processes (see the sketch below). And having separate garbage-collected virtual machines wastes a lot of memory if you want enough headroom for performance in each one.
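
      A minimal Go sketch of that point (illustration only): two concurrent components exchanging messages inside one garbage-collected process, with no separate OS processes involved.

        // Illustration: message passing between components in a single
        // garbage-collected process, using channels instead of OS-level IPC.
        package main

        import "fmt"

        type request struct {
          payload string
          reply   chan string // each request carries its own reply channel
        }

        func main() {
          requests := make(chan request)

          // "Service" component: runs concurrently and owns its own state,
          // but shares the process (and its garbage collector) with the caller.
          go func() {
            for req := range requests {
              req.reply <- "handled: " + req.payload
            }
          }()

          // "Client" component: communicates only by messages.
          reply := make(chan string)
          requests <- request{payload: "job-1", reply: reply}
          fmt.Println(<-reply)
        }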

      • pjmlp 4 years ago

        Not when sharing memory is done over OS IPC mechanisms.

        Which is what you want if the application is to be resilient to crashes and exploits from dynamically loaded code.

    • jjtheblunt 4 years ago

      perfectly said!

  • dboreham 4 years ago

    In the same sense that a set of C binaries that uses libc is a monolith?

    • richardfey 4 years ago

      Only if your company develops the equivalent of libc and it is clearly in scope for its business.

skissane 4 years ago

> Write programs to handle text streams, because that is a universal interface.

I wish they'd invented a simple structured data model, like JSON, and exchanged that rather than plain text.
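
A minimal sketch of what that might look like (assuming Go, with newline-delimited JSON standing in for such a model, and a made-up record schema): a pipeline filter that reads one JSON record per line on stdin and writes structured records to stdout, so the next program in the pipe parses named fields rather than scraping text columns.

  // Illustration: a pipeline filter that exchanges structured records
  // (newline-delimited JSON) instead of ad hoc text columns.
  package main

  import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
  )

  // record is a hypothetical schema for the objects flowing through the pipe.
  type record struct {
    Name string `json:"name"`
    Size int64  `json:"size"`
  }

  func main() {
    in := bufio.NewScanner(os.Stdin)
    out := json.NewEncoder(os.Stdout)
    for in.Scan() {
      var r record
      if err := json.Unmarshal(in.Bytes(), &r); err != nil {
        fmt.Fprintln(os.Stderr, "skipping malformed record:", err)
        continue
      }
      if r.Size > 1024 { // downstream tools filter on fields, not column offsets
        _ = out.Encode(r)
      }
    }
  }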

dotcommand 4 years ago

> "Code the Perimeter" is the key insight of Kevin Greer's fabulous 2016 analysis of why Unix beat Multics

Brian Kernighan's "UNIX: A History and a Memoir" delves into why Multics failed and Unix was created, and offers opinions on why Unix succeeded (something they really didn't anticipate). It's a great book, broader than just Unix: Bell Labs history, the people involved, computer history, etc.

  • pjmlp 4 years ago

    Multics failed for Bell Labs, but it was quite useful for others, and in a DoD security assessment it even proved more secure than UNIX, thanks to PL/I.

    https://multicians.org/myths.html

    • anthk 4 years ago

      Without reading this comment's nickname I already knew it would be... you.

      JK, if Multics had TCP/IP and VT100 support (or even raw serial support), IRC, Gopher, e-mail, news, and Telnet clients could be backported perfectly to Multics.

      If there's a Z-machine interpreter for TOPS-20 on the KA10, I think a computer capable of running Multics should be able to run Zork.

  • dboreham 4 years ago

    Very interesting also for insight into "what were they trying to do?". In several cases the answer was actually "facilitate printing Bell Labs technical papers without paying a printing company". Also interesting that the first part of Unix to be developed was the filesystem, and that it was done as a standalone project, not with the goal of becoming a full OS.

    • foobarian 4 years ago

      It's amazing how much of modern computing infrastructure was inspired by printers. Almost the entirety of Richard Stallman's life's work comes to mind.

      • giantrobot 4 years ago

        Consider big companies before departmental printers. A memo would go to the typing pool to be typed and then to the reprographics department for copies. Anything more complex than a memo would need to go to an outside printer to be typeset and printed.

        So you're looking at hours to days (weeks in the case of technical documentation) of turnaround time. All the labor was also very expensive.

        Departmental printing could cut the turnaround time to minutes or hours and reduce the manual labor significantly. This got cheaper and more accessible with desktop printing.

        While e-mail is often abused these days, it's much more manageable than the reams of paper even relatively small companies had to deal with just to communicate internally.

      • macintux 4 years ago

        And laser printing was a key factor in the Mac’s early success.

        • smhenderson 4 years ago

          And now a favorite goal of small/medium business is to use computers to go completely paperless.

    • pjmlp 4 years ago

      That was never the goal of UNIX per se, rather the way Thompson and Ritchie managed to get hold of funding and management support to keep going at it.

      > When the Computing Sciences Research Center wanted to use Unix on a machine larger than the PDP-7, while another department needed a word processor, Thompson and Ritchie added text processing capabilities to Unix and received funding for a PDP-11/20.[5] For the first time in 1970, the Unix operating system was officially named and ran on the PDP-11/20. A text-formatting program called roff and a text editor were added. All three were written in PDP-11/20 assembly language. Bell Labs used this initial text-processing system, consisting of Unix, roff, and the editor, for text processing of patent applications. Roff soon evolved into troff, the first electronic publishing program with full typesetting capability.

      https://en.wikipedia.org/wiki/History_of_Unix

      • dboreham 4 years ago

        True, but if you read Brian's book, he provides more details and color: nroff was the first real Unix application, and the first PDP-11 was purchased by the patent dept on the understanding that the Unix group would develop nroff. roff predates Unix and didn't have a capability the patent group required (automatically adding line numbers to printed documents).
