Plan 9 is a uniquely complete operating system

posixcafe.org

152 points by moody__ a year ago · 87 comments

jazzyjackson a year ago

For the uninitiated, Plan 9 lives on as the filesystem network interface that allows Windows and Windows Subsystem for Linux cross-platform access to your C drive. Via "https://nelsonslog.wordpress.com/2019/06/01/wsl-access-to-li...":

    Plan 9’s filesystem is a very simple network filesystem protocol to share files between systems. They are specifically using 9P2000.L.
    They considered using Samba and SMB instead but can’t rely on Samba being installed and usable in the Linux guest OS and didn’t want to ship it because Samba is GPL licensed.
    They picked Plan 9 because it’s much simpler to implement. Also Microsoft already had Plan 9 server code for some other Linux container project they’d done.
    The \\wsl$\ path is handled in the Windows system by the MUP, an existing hook for network-like filesystems. They added a new one for Plan 9.
    The $ is in the name so that it can’t be confused with a computer whose hostname is wsl.
    The Plan 9 server in Linux communicates with the Windows Plan 9 client via a Unix socket. (Windows supports Unix sockets; who knew?)
    Windows can access your Linux files even if no Linux instance is running. There’s a new Windows service called LXSManagerUser that mediates user identity and permissions.
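
(To make the \\wsl$ part concrete, this is roughly how it looks from the Windows side; a sketch only — the distro name "Ubuntu" is an assumption, it's whatever `wsl -l` reports on your machine:)

    REM browse the Linux filesystem over the 9P share
    dir \\wsl$\Ubuntu\home
    REM or map it to a drive letter
    net use Z: \\wsl$\Ubuntu
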
  • skissane a year ago

    > (Windows supports Unix sockets; who knew?)

    Only since Windows 10 build 17063 (December 2017 pre-release) [0] [1], which was released as Windows 10 April 2018 Update. So for the first 25+ years of Windows' existence, it didn't.

    And although it does implement the basic functionality, it is missing features found on mainstream Unix-like platforms, e.g. file descriptor passing (SCM_RIGHTS).

    [0] https://devblogs.microsoft.com/commandline/af_unix-comes-to-...

    [1] https://betawiki.net/wiki/Windows_10_build_17063

  • ninkendo a year ago

    > Plan 9 lives on as the filesystem network interface that allows Windows and Windows Subsystem for Linux cross-platform access to your C drive.

    Same with other hypervisors; VirtualBox etc. do the same. If you have Docker installed on macOS, it also uses 9p to share data with the host.

    But IMO 9p is a terrible choice for this, particularly because it doesn’t support hard links. It breaks a lot of software like sccache etc which rely on hardlinks to work.
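
    (For context, a Linux guest typically mounts such a share with the kernel's 9p client. A minimal sketch, assuming a QEMU/virtio export with the mount tag "hostshare"; the failing hard link is the part that trips up tools like sccache:)

        # mount the host export via the in-kernel 9p client (QEMU/virtio-9p setup assumed)
        mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/host
        # attempting a hard link typically fails on such mounts
        ln /mnt/host/a /mnt/host/b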

    The reply from the plan9 devs on why this is the case hits staggering levels of arrogance:

    > If you look at what a hard link is, you'll realize why they are not in Plan 9.

    https://groups.google.com/g/comp.os.plan9/c/24mMVoy6wXA/m/JW...

    • emmelaich a year ago

      I didn't take it as arrogance. And avoiding hard links and dirmove makes it more portable. Which is why it's used everywhere.

      I'd even suggest it reflects humility not arrogance.

      • ninkendo a year ago

        The arrogance comes from the fact that it came without any explanation whatsoever, and instead just acted as if it were the only possible position one could have. The quoted sentence didn’t have any elaboration as a follow-up; it was just the end of the discussion.

        It’s phrased as if to say “if you gave it a moment’s thought, you’d see I’m right”, which is the epitome of arrogance to me.

        I get this impression from Rob Pike as well… I’m sure it comes from decades of being tired of arguing with people, but he comes off as utterly dismissive, as if to say “you either agree with me or you’re an idiot”. It doesn’t help that he continually throws shade on Linux (which is approximately infinity times more successful than Plan9) as you can see from other comments in the same thread. I don’t come away with a good impression of him or any of the other plan9/9front devs. Their whole attitude seems to be “everyone in the OS/systems world is dumber than us, and even too dumb to see why they’re dumber than us. We have a perfect system beyond any reproach and you’re an idiot if you disagree.”

        • emmelaich a year ago

          It's sort of annoying that he merely mentions the principle, and not precisely how it applies to hard links, but I think that's appropriate in the context of a single post / email.

          > On Plan 9, the rule tends to be: if feature(X) can't be implemented in a way that works for everything, don't do it.

          TBH I don't think hard links would be missed in Linux either, symlinks are good enough.

          • ninkendo a year ago

            > symlinks are good enough

            They really aren’t though. Anyone reading from a symlink needs access to the path it points to. If I have a network share with hard links pointing to common storage that’s outside the share’s root, clients are none the wiser and see them as normal files. With symlinks they would see links to paths they can’t see. It’s not really the same thing.
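
            (A contrived sketch of the difference, with made-up paths; /srv/common stands in for storage outside the exported tree:)

                # hard link: clients of the export just see an ordinary file
                ln /srv/common/big.iso /export/big.iso
                # symlink: clients see a pointer to a path they may not be able to resolve
                ln -s /srv/common/big.iso /export/big.iso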

    • Brian_K_White a year ago

      I can accept that someone considers a hard link a sort of insanity or deliberate corruption, or at best an undesirable feature, but I think they should just say that rather than go to the next level and act like this is the only possible position.

    • foul a year ago

      rminnich is a lot of things (including a 9front developer) but not a plan9 developer.

      He is right; however, you are also right: it's because Windows and Docker publish the 9p server in a stupid way. It shouldn't be just the guest fs; you should be able to make any file server you like, so hard links would be useless (as they are in plan9) because you would decide what filesystem layout you need.

      You could make do with FUSE inside the virtualized OSes I guess.

rcarmo a year ago

Plan9 is one of those things I go back to every Summer and that is somewhere between completely mind-blowing (check out the GIF at https://taoofmac.com/space/blog/2020/09/02/1900 to see how fast it boots in real-time on a single-core Pi) and almost completely unfit for purpose because it just doesn’t integrate well (or easily enough) with modern systems (I also considered using it for a writing “appliance” - https://taoofmac.com/space/blog/2023/09/22/1230 - but syncing data off it was a blocker, and three-button mouse chording GUIs are just not a thing I want to deal with).

One of the “stupid” ideas I have on my back burner is to rewrite rio so that it works like Mac OS 7 (the platinum look with window shading), which in my mind was always a very sane and efficient way to manage windows — but time is not on my side…

I have one of my usual lists of resources for it on https://taoofmac.com/space/os/plan9 - comment here if it’s missing anything you particularly like.

  • moody__OP a year ago

    Your link for 9front mentions that ssh2 is not included. This is because the code was rewritten and the program is now just called ssh(1). Other features of ssh are accessible through sshfs(4), and sshnet(4). The only difference in features compared to the original Plan 9 is that 9front does not currently have code for an ssh server. I know some users who are interested in this capability so it'll likely happen at some point.

kccqzy a year ago

> The Plan 9 implementations tend to not be as feature rich as the proper upstream variants.

This is IMO the biggest drawback. Why wouldn't any user want the software to be feature rich? In fact, looking at Plan 9, I often feel that the provided software is just an MVP.

  • linguae a year ago

    Counterpoint: Plan 9 is supposed to be the ultimate realization of the Unix philosophy. One important aspect of the Unix philosophy is composable software. Instead of large, feature-rich programs where functionality is often siloed off from other programs, users have a toolbox of small, composable programs that “do only one thing and do it well” and that they could connect together using pipes and other inter-process communication primitives.

    Composable software is something I’m highly interested in. There were efforts in the 1990s to make desktop software more composable, such as COM from Windows and OpenDoc from Apple, but the desktop world is still dominated by large applications such as those that constitute Microsoft Office and the Adobe Creative Suite. It would have been a wonderful opportunity for the Linux desktop world to embrace components, but, alas, the community embraced OpenOffice, GIMP, and other large applications.

    • pjmlp a year ago

      That would be Inferno, not Plan 9.

      COM is everywhere on Windows, especially since Vista, as WinDev regained the control they thought Longhorn was going to take away from them.

      One of PowerShell's strengths is its easy access to COM, just like .NET frameworks.

      Linux could do the same with D-Bus, but alas, between the distribution wars and the hatred of anything resembling proprietary OSes, it only plays a minor role in systemd, GNOME and KDE.

      • bboygravity a year ago

        Intuitively this sounds like asking for dependency and API hell?

        Imagine writing a huge complex program that is dependent on communication between smaller existing programs. Either you use the default programs that were shipped with all the different versions of OSes and distros (never going to work, too many different versions of programs and their communication interfaces) or you ship certain fixed versions of all of the small programs that form your bigger program.

        In case of the latter: why not just use libraries? It's basically the same thing with an easier API?

        Maybe I'm missing something...

        • mananaysiempre a year ago

          A “program that is dependent on communication between smaller existing programs” is essentially the definition of a shell script, and those are usually not as problematic as you describe until you go out and try different independent implementations of the tools as opposed to mere versions. Compatibility problems definitely happen, but not as often as you seem to expect.

          I’d guess the trick is that you’re not thinking small enough. GNU coreutils, etc. are not minimal by any means, but compared with even late-90s graphical desktop software they are still fairly compact, and you’re rarely using each and every tool at once. And the smaller the tool and the problem it targets, the more likely it is that the problem is mostly a solved one, so interface churn is less necessary.

          I’m not sure every problem area is amenable to this—GUIs and things best expressed as GUIs seem particularly stubborn. (It would be sad if OLE/ActiveX was the best possible solution.) But some are, and few enough people are trying to expand the simplicity frontier for their real-world tasks in recent years that I don’t believe the state of the art of the 1980s is the farthest it can reach, either.

        • Philip-J-Fry a year ago

          These programs are effectively libraries. They implement a known interface.

          Except at the end of the day, the interface can be as simple as a stream of bytes over a socket. If I want an h264 decoder in my application I could just pipe a stream to a specific program made to decode it. That could be written in Python, in Rust, in C, in Go, etc., whereas dynamic libraries don't give you that freedom, as you have to abide by the ABI defined by the host application.
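
          (A rough sketch of that pattern from a shell, with ffmpeg standing in for the decoder process; the file name and exact flags are illustrative:)

              # hand encoded video to a separate process over a pipe and read raw frames back
              cat clip.h264 | ffmpeg -f h264 -i pipe:0 -f rawvideo -pix_fmt yuv420p pipe:1 > frames.yuv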

        • pjmlp a year ago

          Components are libraries, which aren't stuck in a C view of UNIX world.

      • fasa99 a year ago

        It's more like Plan 9 is an evolution of unix and inferno is an evolution of plan 9.

        Also there's the old viewpoint of "let's put everything in XML. Everything can read and write XML", and that seems similar to the argument of "everything can be an object that is exposed to other programs". The problem in both cases: objects have requirements for function call names, function arguments, and nuanced conventions, just as XML has specific tree structures. Interfacing programs must be wise to these conventions and sometimes nuances, so it doesn't "just work". Windows is like: programs CAN share objects and interfaces BUT it needs developer work for program A to work with program B. Unix is like: "programs X and Y, whatever they are, work together with no additional work". The magic of unix is that if there are 1000 programs, now there are 1,000,000 combinations that "just work" vs. 1,000,000 developer hours.

        Unix on the other hand recognizes that it's a lot easier to just say "everything is a text stream, and all programs are responsible for inputting and outputting only fairly clean text streams without extraneous output" . This is extended to "everything is a file", a file also typically made to be read into a clean text stream so it can feed into that whole ecosystem.

    • nrr a year ago

      The irony is that Acorn's RISC OS arguably came the closest to this ideal with any pragmatism. The way that file choosers worked effectively allowed one to pipe a saved file from one application to another and then do it again through the same workflow in the next application and so on.

      • foul a year ago

        This kind of IPC lurks in every major OS; drag-to-an-app-window is supported by Xorg, Mac and Windows.

        • nrr a year ago

          I suppose this is IPC in the loosest possible sense? It's more like GetOpenFileName() and GetSaveFileName() in Win32 but getting handed a HANDLE outright instead of an LPCSTR to pass to CreateFile() later.

    • flomo a year ago

      Agreed. And one could argue that Unix wasn't really popular because of the "philosophy", but because it would get out of the way and let you run big monolithic applications like OracleDB or CAD software or even Emacs, etc. So no popular application using "Plan 9 philosophy" ever emerged.

      • hakfoo a year ago

        I suspect the appeal of the Unix Philosophy is strongest at the earliest phases of the system's evolution.

        Once you've written some very basic bootstrap tools, the "second generation" of stuff that adds convenience and flexibility is a lot simpler.

        A trivial example: 20 seconds after you wrote "directory listing", someone will say "I want a directory listing, but sorted by date, and it would be awesome if it didn't immediately scroll past the end of my screen."

        With Unix Philosophy tools, you might already have a "sort" and "paginate" command, so it's just piping stuff together. They can do it themselves, or it will take 20 seconds to explain.
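
        (Concretely, something in this spirit; sorting on the size column here since that needs no date parsing:)

            # long listing, sorted numerically by the size column, one screenful at a time
            ls -l | sort -n -k5,5 | more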

        Without it, you're going to have to add additional options to "directory listing" (or parallel commands) to handle the sorting and pagination features. The tools get bigger and buggier for the same functionality.

        Early Unix machines weren't much bigger than mid-80s PCs-- 512K of memory or less-- but offered a very rich command line experience compared with DOS machines of similar sizes.

        Programs like database or CAD packages probably go monolithic because they're more "state dependent" than your usual command-line tools. "sort" and "more" can take their inputs from stdin and feed them out to stdout, and when they're done, forget everything with no damage.

        That wouldn't work well for other packages. You could probably make a database or CAD system that worked as composable units, like `echo db.sql | db-query "select username from accounts where credit < 0" | xargs delete-account` or `echo image.dxf | add-circle -x 200 -y 400 -r 60 > image.dxf`. But you'd spend a lot of time reloading and reparsing the same files. A persistent monolith that keeps the data file open and in whatever internal representation is most efficient is going to perform better.

        Some use cases also have limited composability, because the user can only plan a few moves ahead. Tools that encourage interactive/experimental usage, like drafting software, might involve the user stopping every step or two to make sure they're staying on plan, and queuing up a series of commands could wreak havoc. Some of these packages ended up simulating composable tools through internal macro/scripting languages which still avoided the penalty of having to rely on the OS to orchestrate every single action.

        • zozbot234 a year ago

          > With Unix Philosophy tools, you might already have a "sort" and "paginate" command, so it's just piping stuff together. They can do it themselves, or it will take 20 seconds to explain.

          Sorting the output of textual tools like ls requires parsing which can be non-trivial. It's easier to do it by using a modern structural shell such as nushell.
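
          (For instance, in nushell the listing is already a table, so something like the line below works without any text parsing; syntax from memory, treat it as a sketch:)

              ls | sort-by modified | last 20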

        • foul a year ago

          The pipeline part and the "do one thing well" can't make anything structurally serious for the layman. But Unix had gaps there, gaps that plan9 in its way fills: it has a GUI, a better shell language, coroutines and file servers. The application then has to be glue code you'd write in C or rc; you may write pipelines for some things your application does, but nobody can do Excel (the way Excel does it) with just a pipeline...

        • mjevans a year ago

          Imagine extending Plan 9 semantics with something like REST style protocols, but via the (virtual) filesystem layer rather than HTTP requests.

          (Offhand, I've never touched Plan 9 but...) Hypothetically /proc/SOMEPID/db/DATABASE/SCHEMA/TABLE/various views which provide expressions in some order. Or /proc/SOMEPID/containerofthings/ and the directory listing is serviced by the application, as an enumeration of keys (filenames) to values (datasets). For a database the API would behave similarly to how ORMs operate since filesystems are inherently similar to objects.

          • zozbot234 a year ago

            Why be dependent on the /proc/SOMEPID/ path? Just write your process as a plan9 file server, and expose it in some arbitrary part of the filesystem.

      • linguae a year ago

        I agree, but I believe the fact that there are no popular applications that fully embrace the Unix/Plan 9 philosophy is the point of the philosophy: generic tools that can be composed versus end-to-end applications. Both have their advantages and disadvantages, though component-based software doesn’t preclude the development of end-to-end applications using these components. In my opinion, the reason end-to-end applications are dominant is that it’s easier for companies to sell and market products than tools. Part of the reason OpenDoc failed was that companies that made a living selling end-to-end applications (like Adobe) didn’t want to adopt component-based software where the product (application) isn’t the main focus. Imagine if users could construct their own Photoshop out of discrete elements.

        • pjmlp a year ago

          There were plenty of ActiveX lego components for building Photoshop-like applications on Windows during the 1990s, back when buying libraries was a thing professionals would care about.

      • pjmlp a year ago

        Even the UNIX philosophy is something that gets praised all around in UNIX FOSS circles, yet I seldom saw anyone caring about it on commercial UNIX systems, starting with my introduction to Xenix in 1993.

        It kind of feels a bit like a cargo cult, praising it all the time.

      • foul a year ago

        The plan9 philosophy can't exist because there are no more three-button mice... UI aside, you have very cool technical differences from UNIX, and most important of all, daemons are file servers: you don't have to create or learn any new query style or path layout to work with them, you interact with the parts a program exposes by writing (to) files and reading them, and you don't necessarily need to read big chunks of data if the program can expose just fragments.

    • sillywalk a year ago

      Both KDE and GNOME started out with components. GNOME was originally the GNU Network Object Model Environment and KDE had KParts.

  • ori_b a year ago

    For the same reason people prefer languages like Python over Perl. Simplicity improves usability and understandability.

    It's pleasant to use a minimalist, viable product.

    9front is not the only OS I use, but it is one of my daily drivers.

    • lagniappe a year ago

      Hey Ori :) I particularly enjoyed your video talk "not dead just resting" - thanks for everything you do

      https://www.youtube.com/watch?v=6m3GuoaxRNM

    • hollerith a year ago

      Python started out decades ago as a language for beginners or non-professional programmers, but is the current language simple or minimalistic?

      • pjmlp a year ago

        Not at all, it has C++ complexity level, if one wants to master it at all levels.

        Additionally, since even minor versions introduce breaking changes, getting something from e.g. Python 1.6 to run on 3.12 is an exercise in trial and error, or unexpected surprises at some moment at runtime.

        • stinos a year ago

          > Not at all, it has C++ complexity level, if one wants to master it at all levels.

          Would be interesting to hear what levels you think those are because as someone who has been learning and writing C++ and Python for over 20 years now, I'd say there's no level of Python which comes even close to the complexity of C++ at a comparative level.

          > getting something from e.g. Python 1.6 to run on 3.12 is an exercise in trial and error, or unexpected surprises at some moment at runtime.

          Fair enough. But then again: that's irrelevant for any new project written since about the start of the last decade. And to continue the C++ comparison: whereas Python 1 -> 2 -> 3 imo solved some real issues, the consecutive C++ standards never did this, resulting in something which is, yes, backwards compatible (roughly - try getting 30-40 year old code to compile with /std:c++latest and /Wall - or look at all those tiny behavior changes between the last couple of standard iterations) but also seriously plagued by that as it holds back innovation. Modern C++ minus a lot of the old UB-prone stuff would definitely be better and less complex than what we have now.

          • pjmlp a year ago

            Easy, mastering the language (everything you can do with it, everything!), the main language runtime, the C extensions API, and the complete standard library.

            Then just like C++, add the set of key implementations, CPython, Cython, PyPy, key libraries everyone uses (parallel to Boost).

            Since you have such a wide Python experience, naturally you already printed out all the PDFs of Python documentation, and read them cover to cover.

            I did just that back in the Python 2.0 days, and it has only grown bigger since.

            How many pages was it again?

          • okasaki a year ago

            The dynamic and metaprogramming parts of Python can get pretty complicated, especially when it's done in a large program/library like SQLAlchemy.

        • Brian_K_White a year ago

          I now describe python as enjoyable for the author and miserable for the user.

          If you give the tiniest of Fs for your user, do not write your thing in python.

    • sfpotter a year ago

      Python isn't simple.

      • ori_b a year ago

        Perl is less simple. Though, I suppose lua would be a better comparison.

        • 3np a year ago

          I fail to see your point. To me, these three are all very close to each other in "simplicity" and any ordering seems arguable. If anything, isn't Perl simpler than Python and if not, why?

          Perhaps vast differences in ergonomics and language-culture-fit but that's orthogonal/unrelated?

          • ori_b a year ago

            The point is that simplicity is an ergonomic consideration, regardless of how you nitpick the analogy.

            • 3np a year ago

              In context though, you gave it as an example why people would not want more feature-rich software.

              The supposed simplicity of Python over Perl has nothing to do with Perl being more feature-rich and that compromising its simplicity.

              It seems like Plan9 would get closer to "Python simplicity" by adding features and extending interfaces, which would be in conflict with the "minimalist MVP simplicity".

              You present it as mutually exclusive. I believe it's the same word used for two fundamentally different aspects of software - less of a nit.

              • ori_b a year ago

                You're absolutely right, my analogy sucked. Python is not a good example of a simple language.

                Complexity is in conflict with usable composition. Because of simple, small interfaces, the surface area needed for composition to cover all cases in a plan 9 environment is much smaller.

                This allows for things like remote login to be implemented as a small number of 'mount' calls, git to serve repos in a way that can be scripted without writing gobs of porcelain, or sshnet to trivially replace the entire network stack in userspace (namespace by namespace), so that software doesn't need to implement features like socks proxy support.
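
                (The canonical illustration of the mount-based approach, hedged since exact flags differ a bit between Plan 9 and 9front: pulling another machine's network stack into the current namespace with import:)

                    # union-mount the gateway's /net over this namespace's /net;
                    # subsequent dials go out through the gateway's interfaces
                    import -a gateway /net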

                The simplicity and uniformity of the interfaces and tools is an enabler.

                But, you're right. I should have picked scheme or lua as an analogy. Scheme is a particularly good one: the simplicity of its syntax enables easier macro manipulation. How many special cases would you need to have if you were implementing a lisp style macro system for C++?

                Smaller interfaces and implementations lead to smaller sets of special cases and contexts to keep in mind, easier interposition and emulation, and faster debugging.

                > You present it as mutually exclusive.

                I honestly have no idea what you mean. What is the "it" that I am presenting, and what do you think I presented it as excluding?

                • 3np a year ago

                  > > You present it as mutually exclusive.

                  > I honestly have no idea what you mean. What is the "it" that I am presenting, and what do you think I presented it as excluding?

                  Simplicity vs feature-rich. Your last follow-up mostly resolves that, I think.

                  BTW, I think you just sold me on diving into Plan9 some time soonish (:

        • johnisgood a year ago

          Lua is great, and `eval "$(luarocks path --bin)"` in your ~/.bash_profile or ~/.bashrc seems fine, better than Perl's where you gotta export PERL5LIB, PERL_LOCAL_LIB, PERL_MB_OPT, and PERL_MM_OPT, for example.

      • nutrie a year ago

        He didn't say Python was simple. He mentioned simplicity; those are different things.

    • tbrownaw a year ago

      Python's `venv` Just Works (and is standard), while whatever it was that I dug up to get the same effect in Perl mostly didn't. I somewhat prefer Perl for things where this isn't an issue.

      I should probably make time to look again in case I missed something or it's improved in the last decade.

  • drdaeman a year ago

    > Why wouldn't any user want the software to be feature rich?

    Users want feature rich systems. Individual programs are best feature-complete, but focused on a single task and capable of cooperating with others when something out of scope is desired.

    From my personal viewpoint: It's not easy to hack on large monoliths, even for senior software engineers. But if every logical piece of the monolith tries to be as small as it meaningfully could, the barriers are drastically lower.

  • akritid a year ago

    It is a personal choice of course, but some people enjoy the feeling of fully learning a piece of software, which is impossible with most.

  • emmelaich a year ago

    Because there are diminishing returns on effort for each feature, and some features just don't fit well within plan9. And sometimes there's a workable alternative.

    Lastly, it's really not for general users. It's for (academic?) computer people who are dexterous and willing to try new things.

readmemyrights a year ago

I have closely studied plan 9 many times. I unfortunately can't use it because of accessibility issues, but from what I read and heard it feels more like a time capsule from the 90s, which is ironic considering it was meant to be a future path for OS research. And even in the 90s there were developments in unix that the labs seemingly completely ignored, like DJB's daemon supervision.

To talk about the article itself, the only reason plan 9 can achieve such a design is that it's developed and used by the same small group of people. If linux is a bazaar and BSDs are cathedrals, then 9front is a monastery's citadel. Another thing that isn't mentioned is that both linux and BSD (and pretty much anything based on posix) have a lot of third party software that would be hard to maintain along with the rest of the system, if the monks even include it to begin with. And that software could include something like jq, which a lot of software depends on and would love to just assume it's there.

And really, what more does someone get from something like this over, say, having a more or less formal standard on what a true plan9 system includes and waving it in someone's face when they choose to ignore it? This is pretty much what modern unices do and it works out great in cases when it's actually important. Most people don't care what commit your system is built from as long as it works as their programs expect it to.

  • moody__OP a year ago

    I didn't directly mention third party software but when I talk about the various levels of default software the implication is that those with less built in typically rely more heavily on third party software. Even those who do ship a more batteries included base still have to provide mechanisms for using third party software given the ecosystem.

    > ... has a lot of third party software that would be hard to maintain along with the rest of the system

    This is the point that the article is trying to challenge. I think 9front proves that it's doable.

    > Most people don't care what commit your system is built from as long as it works as their programs expect it to.

    The former helps the latter a lot. Everything is tested together, and for a lot of functionality there is only one option.

GianFabien a year ago

I've played with Plan9 several times, but never used it seriously. It's the aesthetics that put me off. It would have been great if they had taken guidance from BeOS / Haiku-OS for the look and feel. Heck, even Windows 95 would have been an improvement.

  • coreload a year ago

    That would have been difficult since Plan 9 predates each of those other systems. Also, Plan 9 was a research project at Bell Labs, was based on their own UI research, and was not intended to have a commercially familiar UI. There are interesting ideas in the write-ups about the UI that could be applied in nearly any UI today.

  • linguae a year ago

    Plan 9 was heavily influenced by the Xerox PARC Mesa/Cedar interface, which influenced Wirth’s Project Oberon. I forget whether Project Oberon directly influenced Plan 9, but I’ve heard people argue that Rob Pike, one of the leaders of the Plan 9 project at Bell Labs, was heavily influenced by Wirth when it came to programming language design, even if the syntax was closer to C instead of Wirth languages like Pascal, Modula-2, and Oberon. With that said, there are major similarities between the interfaces of Cedar/Mesa, Project Oberon, and Plan 9.

    A few years ago I thought about what it would take to implement a more conventional desktop GUI on top of Plan 9, but I’ve oscillated back and forth between wanting Plan 9 with a Mac-like desktop versus wanting a modern Lisp/Smalltalk machine (with object-oriented underpinnings instead of Plan 9’s “everything is a file” interfaces) with a Mac-like desktop.

    • pjmlp a year ago

      You see Oberon's influence in how the ACME editor works, and later in the OS dynamism enjoyed by Inferno and Limbo, which everyone usually forgets about.

      • akritid a year ago

        Inferno is not a successor. For example, you can have Golang for Plan 9, but it doesn't make much sense on Inferno. You would even run Inferno on Plan 9 in some scenarios. I suspect most people who know about Plan 9 also know about Inferno, but it's just a different thing; it does not supersede it in general.

        • pjmlp a year ago

          Plan 9 => Inferno, which is still Plan 9.

          Alef => Limbo => Go.

          Being able to backport Go into Plan 9 doesn't make sense in this context; that isn't how historical evolution works.

          Also even Inferno has the necessary C infrastructure to port Go, if someone hasn't done it already.

          • akritid a year ago

            I suspect you would have to port Go to run on Dis, the VM. C is for the OS. It's a different design, without an MMU. Plan 9 is still a classic OS with hardware isolation.

  • Gualdrapo a year ago

    Not an expert in OS development whatsoever but I do know that it's not intended for common usage, so how its UI looks is not something devs would take much care of.

    Though I do know that it puts a strong emphasis on mouse usage, something that, for someone like me who grew to use the keyboard a lot (ironically, as a graphic designer), seems really awkward, to say the least. Its strengths seem to be its underlying concepts and that it was intended to be "the next gen Unix" - alas it won't take over for a myriad of reasons, and some would argue Unixes have already borrowed some of its concepts for themselves.

    • DaiPlusPlus a year ago

      Forgive my naïvety, but couldn’t X and CDE be ported over?

      • yjftsjthsd-h a year ago

        I thought there was an X server for Plan 9 (edit: yes; https://plan9.io/wiki/plan9/x11_installation/index.html ), but it kinda defeats the point unless you use something like https://github.com/gerstner-hub/xwmfs on it. CDE... something like CDE probably, but actual CDE would be painful and really defeat the point unless you ported its IPC to a Plan 9 native version.

      • ori_b a year ago

        You could, but if you want Unix, you already know where to find it.

      • nrr a year ago

        Not trivially. There's a lot in how Plan 9 is put together that makes this a monumental effort.

        (That said, there may have been an X server brought in at some point, but don't quote me on that. That's the least of anyone's problems in undertaking a CDE port though.)

  • irusensei a year ago

    The UI is called Rio. It's simple yet functional, and the plumbing thing is kinda cool. The thing that irks me with Plan9 (I have tried 9front) is the lack of tab completion in the shell. It's easy to input garbage on the screen, and contrary to most modern shells, where you can just skip to a new line, you need to use your mouse to put the cursor back where the prompt is.

    Of course, a lot of it might be a skill issue.

    • ori_b a year ago

      Ctrl+f for completion, ctrl+b to warp the cursor back to the prompt.

tylerchilds a year ago

cache cause hugged:

https://web.archive.org/web/20240728004832/https://posixcafe...

zokier a year ago

The author is putting "upstream" on some weird pedestal. The whole point of foss is that any upstreams have very limited privileges compared to downstreams.

> Put in another way, if someone wanted the ability to touch every line of code (in the upstream sense), they would have to be a member of some non trivial amount of communities.

On a typical distro you can just download sources and start hacking, you don't need to be member of any community.

While something like Debian might not be a monorepo in the strictest sense, on a conceptual level it is very close. They still have all the sources under their control and are not dependent on anything outside. They are at full liberty to accept or reject any patches regardless of where they come from, from "upstream" or "downstream".

The idea that distros are actually independent, full-featured operating systems is one that I think gets forgotten way too often. Distros are (or rather can be) much more than mere repackaging of upstream software.

  • moody__OP a year ago

    There is a direct correlation between the amount of power exerted by a project like Debian over an upstream project and the amount of effort and upkeep required in doing so. I think of this as a sliding scale between shipping things with zero patches and a full-on fork. From my understanding, distribution patches on top of upstream projects tend to be just bug or portability fixes and stop short of adding features. The point I was trying to communicate was that in order to fully interact with the software you either have to be part of the upstream community or essentially fork.

    To illustrate how I think Plan 9 is different in this regard: a patch for 9front could include a new feature for our compilers and then also show how useful it is by using it within other parts of our code. In plan 9 you can interact fully with every component.

  • BSDobelix a year ago

    > The whole point of foss is that any upstreams have very limited privileges compared to downstreams.

    I would say introducing a backdoor (xz) without downstream knowing is probably the biggest "privilege" you can have on a system or distribution, no?

tbrownaw a year ago

So is there something (social or technical) that makes it tricky to independently provide apps for plan 9, or is it just that the only people who care already have commit access?

teleforce a year ago

Fun fact: the unpopularity of Plan 9 compared to Unix/Linux is what motivated Rob Pike to write the now-infamous article Systems Software Research is Irrelevant (2000) [1]:

[1] Systems Software Research is Irrelevant (2000) [PDF]:

https://news.ycombinator.com/item?id=29709807

lagniappe a year ago

The year of the 9 desktop cometh!

pjmlp a year ago

Superseded by Inferno as a follow-up project, where Limbo took the role of the abandoned Alef language for Plan 9.

I always have the impression the discussion stops at a gas station in the middle of the road, instead of at the destination.

  • irusensei a year ago

    I keep reading this, but if you compare the source code of the latest 9front release with Inferno's, there is no doubt 9front looks a lot more polished. FFS, I had to download some guy's amd64 fork because the Vita Nuova Bitbucket only has i386 sources.

    • sillywalk a year ago

      I'm curious if vita nuova is even still around. The website looks exactly the same as it did 20 years ago, before AMD released their x86-64, and still has Irix as a target.

    • pjmlp a year ago

      A fork doesn't count, that isn't how the Plan 9 => Inferno evolution took place.

      Otherwise we could also go back to Windows NT with the POSIX subsystem, fork it into a version where the UNIX experience was first class like on NeXTSTEP and macOS, and then praise Windows NT's UNIX qualities that weren't there in the first place. (Naturally ignoring the access-to-source-code issue.)

nopoolontheroof a year ago

I have a strong interest in different OS designs, and Plan 9 is one of the more interesting ones. Having said that, its only place is in a VM - it's not really cut out for everyday use. Pretty sure the people using it as such are doing so just to be 'different' - like BeOS back in the day.

revskill a year ago

What does this file mean and how is it used? https://github.com/9front/9front/blob/front/lib/bullshit

  • ck45 a year ago

    https://github.com/9front/9front/blob/75ac2674deab8ca70924b8...

    From the man page: bullshit - assemble a stream of bullshit from words in a file

    It will produce output like (ran it 3 times)

    persistence firewall markup realtime-java callback-scale generator

    virtual polling polling SQL out-scaling blockchain

    converged converged singleton property self-signing-based element polling just-in-time control

pxmpxm a year ago

Plan9 is the ycN equivalent of https://xkcd.com/739/
