Writing a Unix clone in about a month

drewdevault.com

395 points by drewdevault 2 years ago · 139 comments

mtillman 2 years ago

This is really cool. Reminds me of how the original Unix was invented in a couple of weeks while Ritchie's family went on vacation to CA to visit his in-laws.

Source: UNIX: A History and a Memoir, by Brian W. Kernighan (October 2019)

  • balder1991 2 years ago

    But I think it’s relevant to say that before writing Unix he was working on Multics for a long time already. Unix was a “simplified” version of it, if I remember well. So it didn’t “spring out of thin air.”

    • teleforce 2 years ago

      Unix was a kind of play on "Unique", as an antithesis to Multics, which was originally designed as a modern multi-user and multi-process OS. Ironically, like any real-world OS, Unix eventually became a multi-user system similar to Multics, but the name stuck. Granted, Unix has a very simple (as in, as simple as possible but no simpler) multi-user permission and security system that has worked reliably for many decades now. Of all organizations, the NSA actually even came up with a better replacement for the modern Unix permission and security model with SELinux, but most users just ignore and disable SELinux, although it's installed by default by many major Linux distros [1].

      [1] SELinux is unmanageable; just turn it off if it gets in your way:

      https://news.ycombinator.com/item?id=31176138

    • fuzztester 2 years ago

      >So it didn’t “spring out of thin air.”

      Right. Almost nothing does.

      You see, it's https://en.m.wikipedia.org/wiki/Turtles_all_the_way_down

    • eichin 2 years ago

      Mmm, even early versions ended up being more the "anti-multics" than actually simplified-from, despite the name pun...

    • DiggyJohnson 2 years ago

      Where did the quoted text come from? Something might have gotten edited.

    • trollerator23 2 years ago

      Absolutely.

  • laxd 2 years ago

    I think you mean Ken Thompson. I can't be bothered searching through YouTube interviews, but I'm pretty sure that on more than one occasion he tells a story along the lines of having a disk driver, some programs, and maybe some other components. His wife went on a trip and he figured it would be enough time to fill in the gaps and make a complete OS.

  • dboreham 2 years ago

    But Unix itself took many years to write (if you count V7 as "properly finished Unix"). The first version was only a filesystem, for example.

    • lproven 2 years ago

      So was MS-DOS. Sold in the tens of millions and kick-started the entire x86 PC industry, though.

      Sometimes small and simple is good.

      • jauntywundrkind 2 years ago

        I struggle like hell to imagine what the enduring, lasting lessons of DOS are. It doesn't seem to have any real legacy in OS design. Not a single aspect of it was copied or emulated or expanded on (although DOS as a whole was cloned, purely for the sake of having a DOS-compatible system).

        The lesson seems to be: it's a race to the bottom on price. The lesson seems to be: get lucky and have your competitor just happen to be on a trip the day IBM knocks at their door. The lesson seems to be: have a parent who sells your stuff directly to the board. The lesson seems to be: take advantage of a decades-long, now-nonexistent anti-trust atmosphere that made the world's biggest computer company seek an outside OS. DOS itself? I struggle to think of anything remarkable at all. Maybe the availability of very cheap BASIC on-ramps for enthusiasts.

        • lproven 2 years ago

          > Not a single aspect but was copied or emulated or expanded on

          The business computing world still, to this day, largely runs on Windows, and Windows NT was built on the foundations of DOS: it bootstraps from a DOS filesystem, as UEFI still does in 2024, and it could be installed from DOS. It implements an API designed on DOS for a DOS GUI and to this day supports DOS-compatible filenames.

          All the core system folders in Windows 11 still have DOS-compatible names, from `SYSTEM32` to `SYSWOW64`.

          DOS itself was emulated by DR-DOS, FreeDOS, PTS-DOS, and other OSes.

          > it's a race to the bottom on price.

          Always was, still is. Why do you think Linux does so well? It's not technical merit!

          > have your competitor just happen to be on a trip the day IBM knocks at their door.

          Absolutely cast-iron lie, and you should be ashamed of yourself for repeating it.

          • flohofwoe 2 years ago

            > Windows NT was built on the foundations of DOS

            AFAIK Windows NT was mainly influenced by VMS (which Dave Cutler worked on before NT). The DOS-isms were mainly coming in via the Win95 side and for backward compatibility reasons, but I bet everybody on the NT team hated those requirements ;)

            > Absolutely cast-iron lie, and you should be ashamed of yourself for repeating it.

            Not the parent, but it's at best a good urban legend and not much different from "Gary Kildall was not interested". Do you have any first-person accounts that paint a different picture?

            • lproven 2 years ago

              > AFAIK Windows NT was mainly influenced by VMS (which Dave Cutler worked on before NT).

              It had three parents: OS/2, DOS and VMS. However, MS could use code from 2 of them but not from VMS. I've blogged about this more than once:

              https://liam-on-linux.livejournal.com/67492.html

              https://liam-on-linux.livejournal.com/54464.html

              > The DOS-isms were mainly coming in via the Win95 side

              Nope. Not true, and you have the timeline backwards.

              NT was released in 1993, 2 years before Win95, and only the 2nd version of NT, 3.5, supported VFAT long file names.

              NT did not support Win95B's FAT32 until its 5th release, Windows 2000.

              > backward compatibility reasons, but I bet everybody on the NT team hated those requirements

              No, I don't think so. NT could be installed on top of DOS, via the WINNT.EXE setup program. (Something I urged in OS/2 communities, but they didn't understand the need or usefulness.)

              https://networkencyclopedia.com/winnt-exe/

              NT could dual-boot with DOS, even in the same partition in early versions. It could also dual-boot with Win9x.

              This level of interop was hugely important and useful and really helped the new OS gain adoption. It was not some reluctant bolt-on.

              > a good urban legend

              No, it isn't. It's a horrid calumny against a good and brilliant man.

              > not much different from "Gary Kildall was not interested".

              Also utter nonsense.

              > Do you have any first-person accounts that paint a different picture?

              TL;DR version.

              Dr Kildall's wife, Dorothy McEwen, was DR's lawyer. It was she who negotiated with clients and suppliers, not the CEO, who was a programmer.

              IBM wanted an NDA, which DR was unwilling to sign. She said no. Remember, DR was the industry giant in microcomputer OSes at this time, and IBM didn't have an offering at all.

              Kildall was flying to visit an important client; this wasn't some accidental joyride.

              This lie about Kildall literally drove him to drink and his early death. Stop repeating it. It's not funny or clever. It's an evil, vindictive lie.

              Tom Rolander was the other passenger in the plane. Is his testimony good enough?

              Listen to him describe the flight he was on.

              https://youtu.be/bLVbSjDq0DE?si=Ig9KksWWiJG3KDFn&t=1025

              A much longer interview:

              https://www.mercurynews.com/2008/12/18/cassidy-theres-more-t...

              Video interview:

              https://www.youtube.com/watch?v=VREZ6Zx_usc

              Transcript:

              http://archive.computerhistory.org/resources/access/text/201...

              • flohofwoe 2 years ago

                Well that's a lot more hard info than I was hoping for, thanks for taking the time!

                • lproven 2 years ago

                  Thanks.

                  The thing is, the computer industry is now old enough it has a lot of folklore and legend: stuff that "everyone knows" and repeats.

                  But many of the people involved are still alive and you can just ask them.

                  And there are some really nasty people in this industry -- such as Bill Gates, or Larry Ellison -- who tell lies about others and to others, and then some of those lies catch on and everyone repeats them.

                  These lies that people share destroy lives. Don't repeat stuff you heard. Just Google it. It's easy to find the truth.

  • naitgacem 2 years ago

    I thought that story was about 3 programs that were missing, a text editor being one of them.

    I'll have to check because my memory is failing me atm.

  • AlexeyBrin 2 years ago

    I think you are confusing Dennis Ritchie with Ken Thompson.

kpw94 2 years ago

> I also finally learned how signals work from top to bottom, and boy is it ugly. I’ve always felt that this was one of the weakest points in the design of Unix and this project did nothing to disabuse me of that notion.

Would love any resources that go into more detail, if any HNer or the author himself knows of some!

  • chubot 2 years ago

    If you haven't already, I would start with Advanced Programming in the Unix Environment by Stevens

    https://www.amazon.com/Advanced-Programming-UNIX-Environment...

    It is about using all Unix APIs from user space, including signals and processes.

    (I am not sure what to recommend if you want to implement signals in the kernel, maybe https://pdos.csail.mit.edu/6.828/2012/xv6.html )

    ---

    It's honestly a breath of fresh air to simply read a book that explains clearly how Unix works, with self-contained examples, and which is comprehensive and organized. (If you don't know C, that can be a barrier, but that's also a barrier to reading blog posts.)

    I don't believe the equivalent information is anywhere on the web. (I have a lot of Unix trivia on my blog, which people still read, but it's not the same)

    IMO there are some things for which it's really inefficient to use blog posts or Google or LLMs, and if you want to understand Unix signals that's probably one of them.

    (This book isn't "cheap" even used, but IMO it survives with a high price precisely because the information is valuable. You get what you pay for, etc. And for a working programmer it is cheap, relatively speaking.)

    • aspectmin 2 years ago

      Not positive, but pretty sure that this and the UNIX Network Programming book were golden for us in the 90s when we were writing MUDs. They explained so much about socket communication (bind/listen/accept, ...). It's been a long time since I looked at that stuff, but those were fun times.

      • HankB99 2 years ago

        I believe that's the book I still have on my shelf. IIRC "UNIX Network Programming" and I learned a lot about networking and a lot about how UNIX works reading it cover to cover. I think I learned more from that book than any other.

        Mr Stevens replied to something I wrote back in the day. I can't recall if it was a Usenet post or email, but I was over the moon!

    • balder1991 2 years ago

      I believe this was the 3rd time I’ve seen this book being recommended this week. It must mean something.

      • pjmlp 2 years ago

        It is a must for anyone serious about UNIX programming.

        Additionally one should get the TCP/IP and UNIX streams books from the same collection.

        • philosopher1234 2 years ago

          Is the Unix streams book “Unix Systems V network programming”?

          • pjmlp 2 years ago

            That one is also relevant, yeah.

            Although I made a mistake: I was thinking of all of Richard Stevens' networking books, which go beyond plain TCP, UDP, and IP.

            https://en.wikipedia.org/wiki/W._Richard_Stevens

            Unfortunately, given their CS focus, they are kind of on the expensive side; I read most of them via libraries before eventually getting my own copies.

      • madhadron 2 years ago

        It's been the standard reference for decades for a reason. I learned from it, too. There's really nothing else quite like it available.

      • Terr_ 2 years ago

        It might mean the Baader–Meinhof effect.

      • lanstin 2 years ago

        It's well written and full of practical advice and fun to read.

  • retrac 2 years ago

    Signals are at the intersection of asynchronous I/O/syscalls and interprocess communication. Async and IPC are also weak points in the original Unix design, not originally present. Signals are an awkward attempt to patch some async IPC into the design. They're prone to race conditions. What happens when you get a signal while handling a signal? And what to do with a signal when the process is in the middle of a system call is also a bit unclear. Delay? Queue? Pull the process out of the syscall?

    If all syscalls are async (a design principle of many modern OSes) then that aspect is solved. And if there is a reliable channel-like system for IPC (also a design principle of many modern OSes) then you can implement not only signals but also more sophisticated async inter-process communication/procedure calls.
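
    To make the "middle of a syscall" case concrete, here's a minimal POSIX C sketch (my example, assuming Linux or any modern Unix): without SA_RESTART, a blocking read() gets pulled out of the syscall and fails with EINTR, and every caller has to be prepared for that.

        #include <errno.h>
        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        /* The handler exists only to interrupt the blocking syscall. */
        static void on_alarm(int sig) { (void)sig; }

        int main(void) {
            struct sigaction sa = {0};
            sa.sa_handler = on_alarm;      /* note: no SA_RESTART */
            sigaction(SIGALRM, &sa, NULL);

            alarm(2);                      /* deliver SIGALRM in 2 seconds */

            char buf[64];
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf);   /* blocks */
            if (n < 0 && errno == EINTR)
                printf("read() was pulled out of the syscall: EINTR\n");
            return 0;
        }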

    • Joker_vD 2 years ago

      As I wrote in some older discussion about UNIX signals on HN, the root problem (IMHO, of course) is that signals conflate three different useful concepts. The first is asynchronous external events (SIGHUP, SIGINT) that the process should be notified about in a timely manner and given an opportunity to react to; the second is synchronous internal events (SIGILL, SIGSEGV) caused by the process itself, so it's basically low-level exceptions; and the third is process/scheduling management (SIGKILL, SIGSTOP, SIGCONT), to which the process has no chance to react, so it's basically a way to save on syscalls/ioctls on pidfds. An interesting special case is SIGALRM, which is an asynchronous internal event.

      See the original comment [0] for slightly more spelled-out ideas on better designs for those three-and-a-half concepts.

      [0] https://news.ycombinator.com/item?id=39595904

      • mananaysiempre 2 years ago

        At least the first two are also conflated in a typical CPU’s trap/interrupt/whatever-your-architecture-calls-it model, which is what Unix signals are essentially a copy of. So this isn’t necessarily illogical.

        • KerrAvon 2 years ago

          SIGHUP and SIGINT have no CPU-level equivalent.

          • mananaysiempre 2 years ago

            Sure. What I meant is, a CPU's trap/interrupt mechanism is very often used to signal both problems that arise synchronously due to execution of the application code (such as an illegal instruction or a bus error) and hardware events that happen asynchronously (such as a timer firing, a receiver passing a high-water mark in a buffer, or a UART detecting a break condition). This is not that far away from SIGSEGV vs SIGHUP.

            Some things (“imprecise traps”) sometimes blur the difference between the two categories, but they usually admit little in the way of useful handling. (“Some of the code that’s been executing somewhere around this point caused a bus error, now figure out what to do about it.”)

    • chasil 2 years ago

      IPC was actually introduced in "Columbus UNIX."

      https://en.wikipedia.org/wiki/CB_UNIX

    • lanstin 2 years ago

      A story about the problem with delivering interrupts to a process in kernel mode in Unix:

      https://www.dreamsongs.com/RiseOfWorseIsBetter.html

  • jkrejcha 2 years ago

    Unix signals do... a lot of things that are separate concepts imo, and I think this is why there are people who don't like them or take issue with them.

    You have SIGSTOP/SIGCONT/SIGKILL, which don't even really signal the process; they just do process control (suspend, resume, kill).

    You have simple async messages (SIGHUP, SIGUSR1, SIGUSR2, SIGTTIN, SIGTTOU, etc.) that get abused for reloading configuration and the like (with hacky workarounds like nohup for daemonization) or other stuff (gunicorn, for example, uses the latter two for scaling up and down dynamically). There are also, in this category, bizarrely specific things like SIGWINCH.

    You also have SIGILL, SIGSEGV, SIGFPE, etc for illegal instructions, segmentation violations, FP exceptions, etc.

    And also things that might not even be good to have as async things in the first place (SIGSYS).
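
    To illustrate how different the synchronous category is, here's a minimal POSIX sketch (my example, using SA_SIGINFO): for a fault like SIGSEGV the kernel hands the handler a siginfo_t with the faulting address, something that makes no sense for the async or process-control categories.

        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        /* For synchronous signals (SIGSEGV etc.), siginfo_t carries the
           faulting address, which plain signal() handlers never see. */
        static void on_segv(int sig, siginfo_t *info, void *ctx) {
            (void)sig; (void)ctx;
            /* fprintf is not async-signal-safe; tolerable here only
               because we _exit() immediately afterwards. */
            fprintf(stderr, "fault at address %p\n", info->si_addr);
            _exit(1);
        }

        int main(void) {
            struct sigaction sa = {0};
            sa.sa_sigaction = on_segv;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGSEGV, &sa, NULL);

            *(volatile int *)0 = 42;       /* trigger a synchronous signal */
            return 0;
        }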

    ---

    As an aside, it's not the only approach and there's definitely tradeoffs with the other approaches.

    Windows has events, SEH (access violations, other exceptions), handler routines (Ctrl+C/Ctrl+Break/shutdown, etc.), IOCPs (async I/O), callbacks, and probably some other things I'm forgetting at the moment.

    Plan 9 has notes, which are strings... which lets you send arbitrary data to another process, which is neat; but using the same mechanism for process control imo has the same drawbacks as *nix, except now they're strings instead of a single well-defined number.

    • jclulow 2 years ago

      The Windows mechanisms you're mentioning were also added over the course of many, many years. Much of Windows also happened a long time after UNIX signals were invented.

      If you're including all that other stuff, it's probably fair to include all of the subsequent development of notification mechanisms on the UNIX side of the fence as well; e.g., poll(2), various SVR4 IPC primitives, event ports in illumos, kqueue in FreeBSD, epoll and eventually io_uring in Linux.

      • flykespice 2 years ago

        Except much of this later UNIX development was done by the derivatives, and the mechanisms are often available with a certain degree of incompatibility among them (or not at all).

      • jkrejcha 2 years ago

        Yeah, it definitely is (especially since SIGIO is a thing :)). Even Unix signals had more added to them over time (SIGWINCH and friends IIRC came from the BSDs).

        A lot of the mechanisms are very OS specific but I do think they're good comparisons to have with signals as well.

  • pcwalton 2 years ago

    "signalfd is useless" is a good article: https://ldpreload.com/blog/signalfd-is-useless

    It goes into the problems with Unix signals, and then explains why Linux's attempt to solve them, signalfd, doesn't work well.
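
    For reference, the flow the article critiques looks roughly like this (a Linux-specific sketch of my own): the signal has to be blocked process-wide before signalfd sees it, and that blocking step is exactly what becomes unmanageable once threads and third-party libraries are involved.

        #include <signal.h>
        #include <stdio.h>
        #include <sys/signalfd.h>
        #include <unistd.h>

        int main(void) {
            sigset_t mask;
            sigemptyset(&mask);
            sigaddset(&mask, SIGINT);

            /* Signals must be blocked first, or they keep their normal
               delivery; this is the fragile, process-wide step. */
            sigprocmask(SIG_BLOCK, &mask, NULL);

            int sfd = signalfd(-1, &mask, 0);

            struct signalfd_siginfo si;
            read(sfd, &si, sizeof si);     /* signal arrives as plain I/O */
            printf("got signal %u\n", si.ssi_signo);
            return 0;
        }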

    • lelanthran 2 years ago

      That is a good article. I found myself nodding in agreement while reading it, thinking "Yeah, I've been bitten by that before".

      How does Windows handle this? There are still signals, but I was under the impression that signals in Windows are an add-on to make the POSIX subsystem work, so maybe it isn't as broken (for example, I think it doesn't coalesce signals).

      • okanat 2 years ago

        Windows has a slightly better concept: Structured Exceptions (https://learn.microsoft.com/en-us/windows/win32/debug/struct...). It is a universal concept to handle all sorts of unexpected situations like divide by zero, illegal instructions, bad memory accesses... For console actions like Ctrl+C it has a separate API which automatically creates a thread for the process to call the handler: https://learn.microsoft.com/en-us/windows/console/handlerrou... . And of course Windows GUI apps receive the Window close events as Win32 messages.

        Normal Windows apps don't have a full POSIX subsystem running under them. The libc signal() call is a wrapper around structured exceptions. It is limited to only a couple of well-known signals. MSVCRT does a bunch of stuff to provide an emulation for Unix-style C programs: https://learn.microsoft.com/en-us/cpp/c-runtime-library/refe...

        In contrast to Unix signals, structured exceptions can give you quite a bit more information about what exactly happened like the process state, register context etc. You can set the handler to be called before or after the OS stack unwinding happens.
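
        By way of illustration, the console side looks roughly like this (a minimal Win32 sketch of my own, not from the linked docs). Note that Windows runs the handler on a thread it injects into the process, so ordinary synchronization applies rather than async-signal-safety rules:

            #include <windows.h>
            #include <stdio.h>

            static BOOL WINAPI on_ctrl(DWORD type) {
                if (type == CTRL_C_EVENT) {
                    printf("got Ctrl+C on its own thread\n");
                    return TRUE;       /* handled; skip the default action */
                }
                return FALSE;          /* pass CTRL_BREAK_EVENT etc. along */
            }

            int main(void) {
                SetConsoleCtrlHandler(on_ctrl, TRUE);
                Sleep(30000);          /* press Ctrl+C while we wait */
                return 0;
            }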

        • lelanthran 2 years ago

          I am such a moron. Every one of those three links above is colored as 'visited' for me.

          I have obviously read up on this before and just didn't remember :-(

  • chasil 2 years ago

    There were differences between BSD and SYSV signal handling that were problematic in writing portable applications.

    https://pubs.opengroup.org/onlinepubs/009604499/functions/bs...

    It's important to remember that code in a signal handler must be re-entrant. "Nonreentrant functions are generally unsafe to call from a signal handler."

    https://man7.org/linux/man-pages/man7/signal-safety.7.html
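
    The standard way to stay on the safe side is to do almost nothing in the handler itself, along these lines (a common POSIX pattern, sketched here by way of example):

        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        /* volatile sig_atomic_t is the only object type C guarantees
           can be safely written from a signal handler. */
        static volatile sig_atomic_t got_hup = 0;

        static void on_hup(int sig) { (void)sig; got_hup = 1; }

        int main(void) {
            struct sigaction sa = {0};
            sa.sa_handler = on_hup;
            sigaction(SIGHUP, &sa, NULL);

            for (;;) {
                if (got_hup) {         /* all real (non-reentrant) work */
                    got_hup = 0;       /* happens here, not in the handler */
                    printf("reloading config\n");
                }
                pause();               /* note: a signal landing between the
                                          check and pause() is the classic
                                          race; sigsuspend() closes it */
            }
        }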

    • convolvatron 2 years ago

      Reentrancy is not sufficient here, at least not the kind provided by mutex-style exclusion. The interrupted thread may actually have been the one holding the lock, so if the signal handler queues up to wait for it, it may be waiting quite a while.

  • NikkiA 2 years ago

    I always felt VMS' mailbox system was much more elegant, but I imagine it's an ugly mess under the surface too.

    https://wiki.vmssoftware.com/Mailbox

  • palata 2 years ago

    I wanted to say the exact same thing! I would love to get more details about that.

  • eterps 2 years ago

    Would love to read a blog post about that.

samatman 2 years ago

I was interested in Hare until I found this immensely self-defeating FAQ item: https://harelang.org/documentation/faq.html#will-hare-suppor...

As a baseline, I support developers using whatever license they would like, and targeting whatever operating systems, indeed, writing whatever code they would like in the process.

That doesn't make this specific policy a good idea. Even the FSF, generally considered the most extreme (or, if you prefer, most principled) exponent of the Free Software philosophy, supports Windows and POSIX. They may grumble and call it Woe32, but Stallman has said some cogent things about how the fight for a world free of proprietary software is more readily advanced by making sure that Free Software projects run on proprietary systems.

They do at least license the library code under MPL, so merely using Hare doesn't lock you into a license. But I wonder about the longevity of a language where the attitude toward 95+% of the desktop is "unsupported, don't ask questions on our forums, we don't want you here".

Ironically, a Google search for "harelang repo" has as its first hit an unofficial macOS port, and the actual SourceHut repo doesn't show up in the first page of results.

Languages either snowball or fizzle out. I'm typing this on a Mac, but I could pick up a Linux machine right now if I were of a mind to. But why would I invest in learning a language which imposes a purity test on developers, when even the FSF doesn't? A great deal of open source and free software gets written on Macs, and in fact, more than you might think on Windows as well.

From where I sit, what differentiates Hare from Odin and Zig is just this attitude of purity and exclusion. I wish you all happy hacking, of course, and success. But I'm pessimistic about the latter.

  • kbolino 2 years ago

    On the one hand, I can respect the authors for sticking to what they want to accomplish and not accommodating every demand.

    On the other hand, that is hardly the only thing from the FAQ that raises one's eyebrows:

    > we have no package manager and encourage less code reuse as a shared value

    > qbe generates slower code compared to LLVM, with the performance ranging from 25% to 75% the runtime performance of comparable LLVM-generated code

    > Can I use multithreading in Hare? Probably not.

    > So I need to implement hash tables myself? Indeed. Hash tables are a common data structure that many Hare programs will need to implement from scratch.

    As it stands, this is definitely not a language designed for mass adoption. Which is fine, and at least they're upfront about it.

    • jay-barronville 2 years ago

      Some of those design decisions I’m okay with, but deliberately not providing a basic hash table for general usage is pretty bizarre. I can’t think of even one serious software project I’ve worked on that didn’t need a dictionary/map-like data structure somewhere in the code.

  • skydhash 2 years ago

    > But why would I invest in learning a language which imposes an arbitrary purity test on developers?

    While I understand your concerns, I disagree with the idea of "imposition". Someone doing something for free doesn't owe it to anyone to do it in a particular way (as long as it's not malevolent). You're free to express your opinion, but if the developer has already established his guidelines, criticism like this is not constructive.

  • stonogo 2 years ago

    Sounds like you and the Hare people have different definitions of success. As for "languages either snowball or fizzle out," I feel like that's pretty dismissive of a lot of languages that have been steadily marching on for decades even without this rockstar status.

    Not every band has to hit the Billboard charts to be worth listening to.

  • bee_rider 2 years ago

    It says they won’t officially support Windows or MacOS. Some other project can try to port it if they want, right? It seems good of them to be honest about their intended level of support.

    Supporting an OS the devs don’t use is a big ask.

  • 2pEXgD0fZ5cF 2 years ago

    > Languages either snowball or fizzle out.

    This is not true, and it's a naive statement. There are quite a few languages which are not popular across the board but have a very firm niche in which they thrive and fulfill critical roles.

  • cardanome 2 years ago

    I think focusing on Linux makes sense for limiting the scope of the project. Supporting Mac sucks when you own no Apple hardware and have no personal interest in the ecosystem. Windows users can probably just use WSL, right? Or, I mean, people use Docker these days anyway.

    So I get it, especially if it is to be a more niche or pet project. But then again, I don't buy the ideological reason. I am a really big proponent of free software, and their stance just doesn't make any sense to me. I agree with you here. But then again, they can do whatever they want.

  • palata 2 years ago

    I don't think that Apple particularly cares about porting their software to Linux. Do you feel the same about Apple? That with such an attitude, they surely cannot succeed?

    • samatman 2 years ago

      Apple releases a great deal of open source software, which, so far as I'm aware, all runs on Linux as well. At least Swift, clang, and LLVM all run on Windows as well. So does their Objective-C compiler; so of Apple's programming languages, that leaves AppleScript. I would not describe AppleScript as robustly successful.

      I believe Apple could probably get away with keeping Swift proprietary, or only supporting Apple platforms. But they don't. I have no inside-track information on why that is, but I suspect the reason is fairly simple: developers wouldn't like it.

      • palata 2 years ago

        > so of Apple's programming languages

        So the whole part of your message about "even the FSF saying that free software should run on proprietary system" works when you want to criticize Hare, but not when looking at Apple proprietary software, right?

        A language is just another piece of software, I don't see why you should apply different rules to a programming language than, e.g. to a serializing system like Protobuf. And I don't think Google actively supports swift-protobuf (https://github.com/apple/swift-protobuf).

        Hare upstream just says "we are not interested in supporting non-free OSes, but we won't prevent you from doing it". It's your choice to not use Hare because of this, but it's their choice to not support macOS.

        • samatman 2 years ago

          > As a baseline, I support developers using whatever license they would like, and targeting whatever operating systems, indeed, writing whatever code they would like in the process.

          > That doesn't make this specific policy a good idea.

      • saagarjha 2 years ago

        You will note that Apple invests approximately zero effort in making those projects portable.

  • sramsay 2 years ago

    "We cannot effectively study, understand, debug, or improve, the underlying operating system if it is non-free. We actively work with the source code for the systems on which we depend, and we are not interested in supporting any platforms for which this is not possible."

    I understand that you don't like it, but how do you come to regard a statement like this as "arbitrary?" It's exclusive, for sure. "Purity test" is one way to characterize it. But do you really think that statements like this are just the product of individual caprice? That it's not someone's attempt at a principled intervention, but just an "attitude?"

    • PhilipRoman 2 years ago

      Ouch, I hadn't really considered it before, but that quote deeply resonates with me. The experience of trying to debug the Windows WiFi system is night and day compared to wpa_supplicant/mac80211.

    • samatman 2 years ago

      You're right, it isn't arbitrary. I removed that word from the post and edited it to express my opinion more clearly.

    • apantel 2 years ago

      I was going to post the same quote. If you have no visibility into the layer you depend on, you really can’t reason about it or write optimized code for it.

      The Hare devs are saying they require that, which I totally understand and respect.

  • WhyNotHugo 2 years ago

    There's no purity test and the Hare devs aren't prohibiting you from using Hare on macOS or any other platform.

    They just don't want to maintain Mac/Windows ports themselves. If somebody else is interested, they can maintain a port. Like that macOS one that you've already found.

  • sakras 2 years ago

    My real showstopper with Hare is the lack of multithreading. In the modern world, we need to be making parallelism easier, not harder!

  • jampekka 2 years ago

    "The goal of Hare is not to achieve the broadest possible reach, but to be a part of a broader system which effectively achieves Hare’s goals."

andsoitis 2 years ago

Impressive, super cool, and inspiring!

It's an example of how "creating something impressive in X days" requires a lot of experience and talent built over years.

  • beryilma 2 years ago

    Versus now... I changed the text on a button with an internationalized string. It only took me about a week.

    I put the English string in the catalog, updated a number of tests, ran the tests on the local system, pushed the change to the staging cluster, fixed unanticipated test failures, pushed the change to production, contacted the translators to have the string translated into a number of languages, and had the documentation updated.

    • lupire 2 years ago

      I suggest you use translation management tools, so the translator gets the string as soon as you add it to the catalog.

      But anyway, there's no "then vs now" when you are really comparing "prototype" to "deliver to users". It took Unix decades to get those strings translated.

    • Muromec 2 years ago

      So... It goes to production before you get translations to all the languages?

      • beryilma 2 years ago

        In my case, the "production" does not really become visible to users right away. Perhaps I should have called it "pre-production".

  • pushedx 2 years ago

    He's also the creator of KnightOS, written entirely in Z80 assembly, more than 12 years ago!

    https://www.ticalc.org/archives/files/fileinfo/463/46387.htm...

  • saagarjha 2 years ago

    Drew is smart and his timeline is short, but I think it's the wrong way to look at it if you just put him on a pedestal for it. Making a UNIX clone is a typical undergrad project at most universities. Extending that to something complete requires perseverance, not special genius.

    • bjoli 2 years ago

      I think it is a matter of how you are exposed to programming. I started with Pascal at 9, and I wrote my first (VM-)bootable OS in junior high school (around the age of 14). Not as fancy as this, of course, but it booted into an environment not unlike R4RS Scheme, based on SIOD. A Scheme error was handled, but any C error would immediately lead to a kernel panic.

      I am not a programmer today, but I can still wrap most of my head around many low level concepts. I can't, however, write anything resembling a modern web page. Nor can I understand how any larger JS application works.

    • smugma 2 years ago

      NachOS was developed at Berkeley and maintained at UW. Both are top-ranked CS programs. Undergraduates are expected to add features to the core OS, e.g. virtual memory, not build it from scratch.

      https://en.wikipedia.org/wiki/Not_Another_Completely_Heurist...

      https://homes.cs.washington.edu/~tom/nachos/

  • PaulDavisThe1st 2 years ago

    ... and also a previous kernel implementation called Helios to provide a lot of the lowest-level code. Not trying to knock down the accomplishment, but DD is pretty open about the fact that a lot of the speed of this project was dependent on having done Helios first (and reusing code from it).

    • palata 2 years ago

      ...which is part of the "experience and talent built over years", I guess? :-)

  • ezconnect 2 years ago

    He already had Helios, so he just integrated a few missing parts.

LightFog 2 years ago

It was really cool watching the ~daily updates on this on Mastodon - seeing how someone so skilled gradually pieces together a complex piece of software.

8organicbits 2 years ago

Code is here: https://git.sr.ht/~sircmpwn/bunnix/tree/master

GPLv3 license.

userbinator 2 years ago

> The userspace is largely assembled from third-party sources.

That answered my initial surprise at clicking on the ISO and getting a 60MB download.

For comparison, Linux 0.01 was a 71k download, but contained only the kernel source.

nickcw 2 years ago

Hare looks like an interesting language.

Though I think this limitation will hamper its adoption in this multicore age:

From the FAQ https://harelang.org/documentation/faq.html

....

Can I use multithreading in Hare?

Probably not.

We prefer to encourage the use of event loops (see unix::poll or hare-ev) for multiplexing I/O operations, or multiprocessing with shared memory if you need to use CPU resources in parallel.

It is, strictly speaking, possible to create threads in a Hare program. You can link to libc and use pthreads, or you can use the clone(2) syscall directly. Operating systems implemented in Hare, such as Helios, often implement multi-threading.

However, the upstream standard library does not make reentrancy guarantees, so you are solely responsible for not shooting your foot off.

  • senkora 2 years ago

    > multiprocessing with shared memory if you need to use CPU resources in parallel

    This is actually pretty powerful. I personally prefer it for most purposes, because it restricts the possibility of data races to only the shared memory regions. It's a little like an "unsafe block" of memory with respect to data races.
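
    A minimal POSIX sketch of that style (my example): after fork(), the only memory the two processes can race on is the region explicitly mapped as shared.

        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            /* The one explicitly shared region; everything else becomes
               copy-on-write private after fork(). */
            int *counter = mmap(NULL, sizeof *counter,
                                PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            *counter = 0;

            if (fork() == 0) {         /* child */
                *counter = 42;
                _exit(0);
            }
            wait(NULL);                /* parent observes the child's write */
            printf("counter = %d\n", *counter);
            return 0;
        }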

    • pjmlp 2 years ago

      I changed from being a strong believer in threads and dynamic-library plugins, exactly because of the attack vector and host-program stability.

  • guenthert 2 years ago

    > However, the upstream standard library does not make reentrancy guarantees, so you are solely responsible for not shooting your foot off.

    Well, that not only rules out multi-threading, but also usage in interrupts. Quite a limitation for a "systems programming language" methinks.

  • packetlost 2 years ago

    I just wish it had closures

westurner 2 years ago

From "Linux System Call Table – Chromiumos" https://www.chromium.org/chromium-os/developer-library/refer... https://news.ycombinator.com/item?id=33395777 :

> google/syzkaller

> Fuchsia / Zircon syscalls: https://fuchsia.dev/fuchsia-src/reference/syscalls

pjmlp 2 years ago

Quite cool, especially by making use of Hare instead of C.

lupusreal 2 years ago

Missed opportunity to call it Drewnix.

  • jrpelkonen 2 years ago

    Fun fact: Linus Torvalds originally named his fledgling OS "Freax", but it was an FTP site admin who came up with "Linux", and the rest is history. So perhaps the opportunity is not completely missed…

    • davisr 2 years ago

      No he didn't.

      "I called it Linux originally as a working name. That was just because "Linus" and the X has to be there--it's UNIX, it's like, a law--and what happened was that I initially thought that I can't call it "Linux" publicly because it's just too egotistical. That was before I had a big ego."

      https://yewtu.be/watch?v=kZlOCHYu1Vk

anta40 2 years ago

Very cool. Most of these Unix clones are usually written in C. This one is written in a new programming language.

  • balder1991 2 years ago

    I only read part of the FAQ. I find the desire to keep complexity low by limiting the compiler's lines of code and not using LLVM interesting, but I wonder how practical it is. The FAQ admits that because of this, it generates slower code. So it shifts the complexity to the application codebase, by telling users to "use assembly where needed".

    Seems a bit like Python's philosophy of not introducing too many optimizations, to prevent the runtime's complexity from spiraling out of control.

    • PhilipRoman 2 years ago

      I doubt it is a real problem for anything other than number crunching. I like to use tcc during development (which does very little, if any, optimization) to speed up compilation, and I never noticed any regressions in performance, even for GUI software. Throughput just isn't that big of a deal for most applications (although latency and resource usage are, but those aren't affected by the choice of compiler).

      • fuzztester 2 years ago

        You are using C (with TCC) for GUI apps? with what GUI framework or library?

        • PhilipRoman 2 years ago

          I used SDL2. I'm not actively working on that project anymore, but I picked it due to requirements: fast startup, low latency, low memory usage, portability anywhere (Linux on multiple distros, CPU architectures, and multiple rendering backends; any Windows version from XP to 11...). C fits that very well IMO and I don't regret choosing it.

    • cardanome 2 years ago

      From what I've heard, LLVM seems to be not very good at keeping backwards compatibility and makes no guarantees that the IR (intermediate representation) stays the same. So I imagine it can be frustrating to have a moving target.

      Plus it is a heavy dependency which means projects like writing a self-hosting OS in a month are much less realistic to achieve when your compiler relies on LLVM.

      And not least, the code generation is pretty slow. If your language cares greatly about compile speed, which it should, this is a bummer.

      So yeah, for many projects avoiding LLVM might be a good idea.

  • pjmlp 2 years ago

    There were UNIXes written in Ada and Pascal; naturally, C has a special relationship with it.

calvinmorrison 2 years ago

hey drew! did writing this project put you in any Hare-y situations you hadn't run into before, or maybe reach into corners not yet probed by Hare and give you ideas for a new feature, or an edge case that was scary?

amelius 2 years ago

Waiting for an OS that treats GPU(s) as a first class citizen ...

  • eterps 2 years ago

    That wouldn't be too hard if GPUs had a stable interface. Try programming a GPU in assembly language and see how that goes. The experience sucks, but that's the level that needs to be targeted in the case of an OS.

    • eterps 2 years ago

      For example, in the past, Amiga computers had a "GPU" (although much less powerful than today's GPUs) with a stable interface. It was a first-class citizen in its OS. It was also incredibly easy to target in assembly language.

      • pjmlp 2 years ago

        The blitter was great, but those were simpler times.

        The best we have nowadays is using compute shaders for the same purpose.

        Just like when using a TMS34010 with its C SDK.

      • mepian 2 years ago

        Amiga died because it was stuck with the same old "GPU" for too long, among other reasons.

  • sph 2 years ago

    If programming GPU drivers were not something only a handful of employees with NVIDIA or AMD badges could do (because of NDAs, non-public documentation, and immense complexity), somebody would have tried.

    • amelius 2 years ago

      The point (for some) of writing your own OS is that you do something only a handful of people can do ...

      • sph 2 years ago

        Writing GPU drivers is not hard per se. It is just impossible if you don't work at the vendor, with access to internal documentation.

        Knowing how to write a kernel (which incidentally I am doing for the second time) doesn't mean you have years to dedicate to reverse-engineering something as complicated as a GPU.

  • bobmcnamara 2 years ago

    Are you thinking something like Plan9 instances, cloud-in-a-box style, or something else?

  • saagarjha 2 years ago

    What should this OS do?

AtlasBarfed 2 years ago

Are there "waypoint" commits for major milestones? I'd really like to see those.

Like PC bootstrap, basic kernel action loops, process forking, yada yada

thefaux 2 years ago

Impressive work, but I feel this approach is the hard and brittle way to write an OS. The easier and more portable way is to write the OS as a guest in a host language. You start with a simple shell with a print command and build from there.

  • palata 2 years ago

    I hope it's not too easy then... imagine what he could do in 27 days if this was the "hard and brittle way" :-).
