Why can't computers boot instantly? (2013)

superuser.com

146 points by anon35 7 years ago · 166 comments

diego 7 years ago

That is not a good answer. It's perfectly possible for computers to boot in a fraction of a second.

The argument that computers need to go from a useless state S0 to a useful state S1 and that takes a long time is just bullshit. The question is, how close can you bring S0 to S1. This is a question of optimization. What happens is that S1 is different for different computer configurations, and processing needs to happen. Then the question becomes, how much effort do software companies spend optimizing that processing? The answer is "as much as they feel that they need to."

Computer boot times have been constant over many years because performance gains enable companies to add more features. As long as the boot time stays within tolerable bounds for the typical user, there is no need to optimize further. But there are tons of other computer systems in the world (particularly embedded) that must boot instantly, and they do so.

  • fulafel 7 years ago

    I think the same answers hold for a large set of "what's stopping us from making computers better in dimension X". You can explain the cause of the unsatisfactory state as lack of incentives and software inertia, and you can also say that it's fixable at a cost.

  • colejohnson66 7 years ago

    It depends on if you consider the Arduino and other related devices “computers”, but their init routine takes a fraction of a second before control is passed off to the user’s code

  • mbell 7 years ago

    > It's perfectly possible for computers to boot in a fraction of a second.

    Maybe if you are restricting to software and time from 'hardware ready', but I doubt this is possible from power application. There are a lot of hardware processes that need to take place before any real code can execute, e.g. locking clocks, memory training, etc.

    • Dylan16807 7 years ago

      Okay, how about we budget half a second for hardware and half a second for software? (assuming SSDs) How long does memory training need?

  • petra 7 years ago

    > What happens is that S1 is different for different computer configurations, and processing needs to happen.

    The computer and OS config doesn't change often.

    And it doesn't seem slow to test that.

    So caching at boot should work.

    And caching is a low effort optimization. So it should have been implemented early.

    But we don't have caching at boot, why?

    • salawat 7 years ago

      The two hard things in computer science are naming things, and cache invalidation.

      Remember: the computer could quite literally have been completely reconfigured out from under the BIOS. Testing that everything is the same from the computer's point of view is anything but straightforward.

      Also, most recognizable boot time isn't spent waiting on BIOS and POST; it's spent waiting for either the boot loader, disk encryption, or the OS to spin up all the background stuff (graphics stacks, session managers, HAL's, message busses, desktop environments, network services... etc) it needs to offer for fulfilling the particular purpose the system is intended for.

      Computation is amazingly fast, but by no means either free or magic.

      Everything takes time. However, I make no excuses for Windows: They had 3-5 second boot nailed for a while. Then, they done bloated it to the point of unrecognizability.

      • egypturnash 7 years ago

        Usually the configuration stays the same, though - why not a way for the user to indicate when it has changed? Hold down an awkward combination of three far-flung keys while booting and the BIOS wakes up and checks every device plugged into it; otherwise it just pulls it all in from a cache.

        (And how much does the average computer's config change? How many users never change a single part of the hardware configuration of their computer for its entire lifetime? A hobbyist who built their own might swap out CPU/GPU/drive/ram/etc, but how many people upgrade their laptops in any way?)

        Get all that background stuff spun up. Save the state of the entire computer the same way you're saving a hibernation snapshot. Next boot, just pull that snapshot up. Something makes that snapshot crash? Reset and run the slow boot, either because the system caught this, or because the user held down the "invalidate bootup snapshot" chord.

        Although realistically I never reboot my computer any more unless I'm doing an OS upgrade anyway. It's all just "sleep" and "wake up" nowadays for me, so I don't care if bootup adds a minute or two to the end of the half an hour or so my computer spent downloading the update and applying it. Maybe a cold boot just isn't worth optimizing any more.

        • cududa 7 years ago

          As someone who worked on boot up sequence in Windows, that won’t work. You could kill components in the system if someone forgets to press the right keys on bootup. The snapshot method you mention is essentially how it already works

      • fallingfrog 7 years ago

        ..why could you not just use some kind of hash of all the relevant files/drivers to tell if the system config has changed?

        As in: the os has a flag that says if each file is used in startup. If a startup file is written then the startup config hash is recalculated. It’s a global hash that uses every startup file. If the startup hash does not match the startup hash from the last bootup, then we don’t use the cached config.

        Seems straightforward enough to me.
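        A minimal sketch of that scheme (all names hypothetical; Python used just for illustration): hash every file flagged as startup-relevant into one global digest, and reuse the cached boot configuration only while the digest matches the one recorded at the previous boot.

```python
import hashlib
from pathlib import Path

def startup_config_hash(startup_files):
    """Combine path and contents of every startup-relevant file into one digest."""
    h = hashlib.sha256()
    for path in sorted(str(p) for p in startup_files):  # fixed order => stable hash
        h.update(path.encode())
        h.update(Path(path).read_bytes())
    return h.hexdigest()

def can_use_cached_boot(startup_files, last_boot_hash):
    """The cached boot config is valid only if nothing startup-relevant changed."""
    return startup_config_hash(startup_files) == last_boot_hash
```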

        • salawat 7 years ago

          Because all of that is on the hard disk; which may be encrypted.

          There's more going on during boot than you think. Bootloaders are meant to be small and simple; just enough to hand off to the OS.

          If your OS takes a minute to get its stuff straight; there isn't really much you can do to make the boot faster.

          • fallingfrog 7 years ago

            Oh ok, so in other words a lot of the time is before the os even reads the hard drive? That would sort of make the caching approach less advantageous

            • salawat 7 years ago

              Other way around. BIOS and POST tend to be complete in <2-3 seconds, even on the slowest boxes I've worked with.

              It's exceedingly fast. The UEFI implementations on modern computers then have to scan the hard drive GPT to find the bootloaders, if there are any. UEFI then hands off to the bootloader, which begins initializing the OS. At this point, you're probably anywhere from 5-7 seconds in, in my experience. As your OS begins loading, it has to spin up the filesystem and locate all the necessary files required to do its thing. Most critical hardware (mouse, keyboard, video drivers) is loaded and initialized at this point.

              Generally, your OS would be in some equivalent of a single-user mode at this point.

              After the necessary local filesystem shenanigans are complete, non-critical device drivers (network, sound card, exotic storage media) tend to get initialized. Network drives, if properly configured, might be mapped, but Windows, for example, generally loads most remote volumes after login, which is generally after the system enters multi-user mode.

              Windows in particular is incredibly annoying because vast tracts of configuration data are tracked in the registry, a hierarchical key/value pair binary database loaded into memory. Most "Windows updates" tend to involve shell scripts/installers running and performing filesystem and registry updates while you're sitting there really wishing you knew why stuff was taking so long. Stuff is taking so long because the registry is a bloody mess, and your filesystem is getting polluted by Windows not cleaning up after itself.

              God help you if you killed an update in the middle; as the scripts probably were not crafted with a User's convenience in mind, and therefore are likely to fail in exceedingly interesting ways.

              Though I habitually try to be the most horribly behaved user in that regard, and I've only encountered a handful of updates that when interrupted had caused Bad Things(tm) to happen.

              If you aren't into breaking things for fun and profit though, I'd not advise you doing the same.

              Long story short; 5-7 seconds is the absolute longest I'll tolerate for boot from hardware to OS selection. After that, faster boots depend on OS developers to get things going faster.

      • bradknowles 7 years ago

        The two hard problems are:

        Cache invalidation, naming things, and off-by-one problems.

        FTFY. ;)

    • bradknowles 7 years ago

      But macOS does exactly that. The second boot after an OS configuration change is much faster than the first one, because the first has to figure out what the new cache order has to be.

      This feature was introduced years ago. What is stopping other OSes from doing the same?

argimenes 7 years ago

When I was in high school their Amiga would cold boot into a GUI within 3 seconds. It was the same for the Acorn Archimedes in the UK. These computers had the entire GUI OS burnt into the ROM. After that I remember the shock of getting a 386 and waiting minutes for Windows 95 to load. It was like the future had died and was buried by a conspiracy of silence in the industry.

  • weinzierl 7 years ago

    Almost the same for me. I had a C64 and you could switch it off and immediately on again. There was no boot time, but if you did it too quickly it sometimes behaved strangely. That was because there were bits and pieces of the previous RAM contents still there. The most obvious effect was remnants of the old application on the screen. So no boot time, but you still had to wait 3 seconds.

    When I got my 386DX(ha!) I thought the boot process was so much of a hassle that I seriously believed it would just be a matter of months until that problem got fixed and newer computers would be instant on again. Oh boy was I wrong.

  • Razengan 7 years ago

    > It was like the future had died and was buried by a conspiracy of silence in the industry.

    That is exactly how I feel about many other regressions in tech!

    Including the promises of the internet and online gaming, or when the industry wanted to sell 3D cards but 3D games were uglier than 2D/hand-drawn games for several years..

    • chubot 7 years ago

      Interoperable instant messaging too... and generally being able to move data from one application to another.

  • dkersten 7 years ago

    > When I was in high school their Amiga would cold boot into a GUI within 3 seconds.

    I have a cheap laptop (it does have an SSD) that boots from pushing power to a minimal but perfectly usable i3 window manager system in a second.

  • unnouinceput 7 years ago

    386 and W95? Not a good combination. By the time W95 was out on the market the P5 was the latest and greatest and a 486 was the low budget one. You might want to reconsider those memories. As for loading times, it all boiled down to how many check-ups you wanted in your BIOS. As a rule I always did the memory check by BIOS only once, when buying/putting together a new PC, and after that I checked it off in the BIOS, halving the time it needed to boot. It was perfectly easy to have a 3 second BIOS time; sometimes it was even a nuisance to make it that short, because you had to be fast pressing Del (or whatever the BIOS key was) if you wanted to go into the BIOS.

    • avian 7 years ago

      > By the time W95 was out on the market P5 was the latest and greatest and a 486 was the low budget one. You might want to reconsider those memories.

      Maybe the 486 was the low end of what you could buy at the time, but not everyone bought a new computer. My parents upgraded their 386 from Windows 3.1 to 95. It was slow, but usable.

      • unnouinceput 7 years ago

        In my 3rd year at Uni, in good old '96, I had a P5 with W95 in my dorm room. Granted, not mine, but the friend and colleague who owned it was not some rich parents' kid, just middle class. We played HOMM2 on it all day long, much to his gf's dismay.

    • vardump 7 years ago

      I remember 100 MHz 486 with 12 MB RAM being the point when Win95 was nice to use. With 8 MB it would just swap too much.

      Of course it'd run with much less.

    • karmakaze 7 years ago

      There was a long time when the 486 was the sweet spot, before Pentiums were commonplace. I didn't so much run Win3x/9x but did have both OS/2 and WinNT i486 machines (desktop and laptop) with up to 66 MHz and 40 MB of RAM before moving off them.

      • unnouinceput 7 years ago

        A 486 DX4 at 100MHz was something like 250 USD in 1995. I remember this perfectly because my sister-in-law saw it in an ad and wanted to buy it for her business, and needed my help negotiating the price down to 200 USD, since that was all the budget she had for such a purchase.

    • dmead 7 years ago

      you're conflating a bunch of stuff here.

  • Theodores 7 years ago

    The whole of MS-DOS was rubbish if you came from an Acorn machine, even the BBC Model B was streets ahead in terms of usable performance. Games were definitely better.

    I have a revisionist view of Microsoft - I no longer believe their early products of BASIC and MS-DOS were enabling. And Windows got the world stuck in the desktop PC paradigm when mainstream computing was networked, not moving floppy disks around.

    Nowadays with Ubuntu and a decent disk I don't have boot speed problems. If I want the machine rebooted then it is all up and loaded where I last left off within a minute. That includes the shutdown with web server, proxy server and networked drives to deal with. So I am asking, does everyone else have a huge boot delay? Is it really something like three minutes on Windows or Apple to shutdown and restart the machine?

    Since I disabled IPV6 I don't even need to reboot - it used to start up funny after a suspend, which it does not do now, the network actually works so no reboots needed.

    The other performance difference is concerning adblockers. Recently I turned off the adblocker for a website with a form I needed to fill in. Suddenly the fans were on and my machine was crawling along!

    Today with free and open source software you can have a modest machine that goes at speed or you can have a paid for operating system and not turn off the adverts to have an experience akin to wading through a lake of treacle. The developer tools seem fine in open source world so I am wondering if going with Windows really is the sane and rational choice for productivity. I can understand having a Windows machine for testing and doing Windows things but for actual work the open source world just has so much more 'professional' productivity. Boot speed being an indicator of this.

    • kalleboo 7 years ago

      > So I am asking, does everyone else have a huge boot delay? Is it really something like three minutes on Windows or Apple to shutdown and restart the machine?

      Since I only reboot for OS updates (at which point the update process also runs which is much slower), I had to test this out.

      My MacBook Pro running the latest MacOS, takes about 20 seconds to shut down, and then 50 seconds to boot again - including restoring the software that was running before shutdown [browser w/ tabs, IRC client, etc] since it does that before hiding the boot progress screen.

bane 7 years ago

All of the answers here are missing something. In the old days, the OS was often on a ROM that was simply mapped into some unified memory location. Thus the system went from a state of "off" to "ready to go" pretty much immediately. There wasn't a need to copy the data from the ROM to RAM and then execute some stuff to get into an initialized mode.

If you hunt around a bit for things like "atari 800 memory map" you can find diagrams showing how this all worked. In fact on some systems, you could selectively map certain things from ROM into the memory map to get more RAM if you wanted.

Today, the move to put the OS onto disk, the memory hierarchy, and virtual memory eliminate this approach. You basically have to let the virtual memory system map things from disk into some page space, and often things need to be copied into RAM, work their way up the cache hierarchy and be executed somewhere in order to set the system into an initialized state. I've honestly forgotten nearly all of the details so maybe somebody else can provide a better explanation.

So the reasons computers can't boot instantly:

1) We have a complex memory system these days that prevents mapping the OS into the memory address space.

2) OS's aren't in ROM anymore. They sit on disk in a state different than when under operation.

3) Simple engineering willpower. Nobody wants to bother to figure it out because once the system is booted it doesn't really matter (with modern sleep modes and whatnot).

  • Baeocystin 7 years ago

    I am still waiting for the computing world to catch up to the instant-boot, straight-to-REPL joy that was my Commodore 128.

    I say that only partly tongue-in-cheek. The home machines in that era were right on the cusp; they were complex enough that you could do Genuinely Interesting Stuff with them, and simple enough that you could still hold the whole machine in your head. The instant feedback of REPL being the primary interface, and the almost zero-time cost of rebooting when an experiment didn't work out, made for a spectacular tool for self-learning.

  • Dylan16807 7 years ago

    Grabbing the kernel and login screen off of an SSD, then mapping the rest of what you need into virtual memory, is something that's easily accomplished in 100ms. If you design for 3-5 seconds of password typing that's enough time to grab gigabytes in the background.

    The complex memory system isn't the culprit here. It's one of the easiest parts to set up.

  • animal531 7 years ago

    I just built a new desktop PC (albeit with great parts) and it boots to a usable state in Win10 in just a few seconds. To be honest it's good enough to no longer worry about it.

    Of course mileage on a laptop with non-quality parts will vary.

    • MrMember 7 years ago

      Windows on an SSD is pretty amazing when it comes to boot time. My computer spends more time on the BIOS splash screen than it does booting Windows.

kstenerud 7 years ago

There are a number of reasons:

1. Hardware

Hardware is a big problem for boot times. Most hardware is poorly standardized, or not standardized at all, or doesn't even follow the standards, or tries to be backwards compatible with older hardware, or is just plain buggy. This means that the initialization code has to poll and retry and work around a whole bunch of things just in case the hardware happens to be slow in responding or gives a weird response. This is the main reason why POST is so godawful slow, and why the initial linux boot sequence takes so long. Apple hardware can boot quicker because they control what hardware is in the machine and can optimize their initialization code for it.

2. Software

The operating system stack is HUGE. There's a LOT of state that needs to be initialized, and most of it is not very efficient (we tend to optimize the runtime operation vs the startup operation of a software package). You absolutely could cut the software component of an OS boot sequence by an order of magnitude, but the development costs would be massive, and the gains pathetic in terms of the work-over-time the machine will do over its lifespan.

3. Protocols

A large number of the protocols we use for inter-process and inter-device communication have poorly designed latency characteristics. Either they are too chatty (requiring multiple messages back and forth to do a task), or have ill-defined timeouts (requiring clients to wait longer than they should), or ambiguous states, or some poorly built implementation has become the de facto standard. This is an area I'm personally tackling.

4. Formats

We use a number of formats for the wrong kinds of things. Appending to a tgz file, for example, has horrendous time implications, especially as the archive size grows.
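As a rough illustration of the tgz point (a sketch, not anything from a real boot path): a plain tar archive can be appended to in place, but gzip compresses the whole stream, so Python's tarfile refuses append mode for .tar.gz, and "appending" means decompressing and rewriting the entire archive.

```python
import io
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp()
plain = os.path.join(workdir, "archive.tar")

def add_member(tar, name, data):
    info = tarfile.TarInfo(name)
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# An uncompressed tar can be appended to in place.
with tarfile.open(plain, "w") as tar:
    add_member(tar, "a.txt", b"hello")
with tarfile.open(plain, "a") as tar:   # seeks to the end, writes the new member
    add_member(tar, "b.txt", b"world")

# A gzip-compressed tar cannot: tarfile rejects append mode outright,
# because the gzip stream would have to be decompressed and rewritten.
try:
    tarfile.open(os.path.join(workdir, "archive.tar.gz"), "a:gz")
    append_supported = True
except ValueError:
    append_supported = False
```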

  • catern 7 years ago

    This is a really good summary of some of the main reasons - more interesting detail than many of the other comments here.

    Now what would really be interesting is if we had the same kind of knowledge about the typical causes of userspace software and distributed systems being slow to start. In my experience, for proprietary software, it's mostly because people inserted random calls to sleep()...

verisimilitudes 7 years ago

Windows is a bloated mess, so it's obvious there.

As for anything POSIX, it's because UNIX is shitware; it took a long time to boot decades ago and it takes a long time to boot now, because more garbage has been piled on top.

It's possible for a good operating system to boot in less than a second, however. People have simply been conditioned to accept this unacceptable state of affairs, no different than a TV that takes ten seconds to become usable for no reason other than the manufacturer was lazy.

  • ahartmetz 7 years ago

    There are embedded Linux demos that boot into a GUI application in about a second. Linux systems don't boot slowly because "POSIX" isn't capable, it is a matter of boot time vs. flexibility and developer convenience / development time. There are also Windows CE systems that boot that fast. Linux demo: https://www.toradex.com/de/videos/fast-boot-demo-with-linux-...

    I have a few moderate boot time improvement projects on embedded Linux under my belt. The typical low hanging fruit are hardcoded wait times in boot loader and drivers, unnecessary drivers, unnecessary features (did you know that PPP - yes, the modem thing - support takes a while to initialize?), slow to mount filesystems, bad choice of storing / compressing the kernel, and delays in the init system (run the application as soon as possible instead). More hardcore techniques are various methods to reduce dynamic linker overhead and reordering blocks on storage in the order they will be read at startup.

    • planteen 7 years ago

      Came here to say this. I've worked on embedded Linux projects with a boot time requirement of 2 seconds and easily made it.

      I guess part of the difference is that you explicitly enumerate all hardware and drivers in a device tree on embedded rather than figuring it out on the fly like on a desktop/server machine.

  • baybal2 7 years ago

    I have a GUI linux, and it boots in 3s on stock xps 13.

    Magic trick? OpenRC + no crapware + lightweight DE

    I use no preloading daemons, prelinkers, gold, or malloc hacks

  • viraptor 7 years ago

    It's not bad and people did some good work on the boot times: https://lwn.net/Articles/299483/

    There's nothing inherent to posix that stops us from booting in seconds.

  • galfarragem 7 years ago

    I still remember the day I got a new TV for my parents (despite the old CRT still working) and seeing their faces the first time we turned it on. Now they are used to it, but at the beginning the boot time was difficult to grasp: it used to be negligible with the obsolete tech.

    • agumonkey 7 years ago

      The old world was mostly stateless. Power = function.

      That said, even CRTs had 'boot' time. The screen would need a few seconds to finally warm up and be fully on. But it was still a mind-free system: you knew that you were done with the boot process and could enjoy 90% of the functionality.

      • tsjq 7 years ago

        >The screen would need a few seconds to finally warm up and be fully on.

        is that why we hear the audio before video is ready ?

        • dsr_ 7 years ago

          Yes. The radio receiver, sound decoder, amplifier and speaker all work on very little charge and can bring you the sound in a fraction of a second after being turned on; the CRT needs to power up magnets and a particle accelerator.

          Late in the history of CRTs, some units kept the electron gun warmed up whenever the unit was plugged in -- so it was never all the way off. That enabled an "instant on" feature, at the cost of more power usage.

          • LocalH 7 years ago

            Pretty sure the "instant on" sets also caused premature aging to the CRT. They'd keep the filament partially heated all the time.

    • tempguy9999 7 years ago

      I'll beat that: I was watching a modern flat-screen fancy tv. It blacked out and rebooted while I was watching. Screw startup if they can't even stay up. OMG TVs can now crash...

      • amiga-workbench 7 years ago

        Not to defend that, but it reminds me of a project for the Raspberry Pi that would let it output teletext signals over its composite port. The original version was a bit buggy and would crash the onboard computer in CRT TVs.

      • gonzo41 7 years ago

        This happened to me when i was playing xbox recently. I'm not sure if it was the game (Division 2) or the TV but it really surprised me.

    • FrancoisBosun 7 years ago

      There is now a boot time for my kitchen mixer… about 2-3 seconds. It’s horrible and I always forget.

      • sverige 7 years ago

        Honestly, "smart" appliances are the worst idea since . . . I can't actually think of a worse idea.

        • brokenmachine 7 years ago

          Recently I got an email ad for a "smart" kettle with an app that you can remotely turn the kettle on, or schedule a time for it to turn on...

  • c22 7 years ago

    I used to achieve 1-2 second boot times by just reconfiguring my kernel to only load drivers for hardware I actually had installed. It's more like 4-5 seconds now that ASLR is a thing, but I'm not complaining.

  • snek 7 years ago

    I installed Manjaro about a week ago and it cold boots in ~5 seconds (not counting the grub menu to select between windows and manjaro)

  • johnchristopher 7 years ago

    While I don't agree with everything you wrote I would like to add android boot time to the list.

mojuba 7 years ago

A bigger question for me is, why can't monitors boot instantly? So annoying that a monitor that went to sleep can sometimes take 10-20 seconds longer to wake up than the computer it is attached to.

  • quickthrower2 7 years ago

    But my 5” touchscreen monitor can...

  • gruez 7 years ago

    If it's really 10-20s, I'm guessing it's because it's cycling through the various input sources looking for a signal.

    • mojuba 7 years ago

      Might be that, also the computer itself might be another reason. But who cares? Why doesn't the monitor just re-establish the last known good connection quickly?

    • ebg13 7 years ago

      But it could just look at all of the input sources at the same time. And anyway, these are solid state electronics, not hamsters and flywheels. Taking seconds to send "give me electrons" and for the other side to say "here are some electrons" is bonkers.

      • jamesissac 7 years ago

        Being able to observe all input sources at the same time requires extra hardware (more utilisation of the FPGA resources), considering only one input is connected to the display at a time. So it's probable that the check is sequential.

        • ebg13 7 years ago

          "More" isn't a meaningful qualifier without a quantifier. I'm not convinced that it would take more than $0.01 worth of resource. I could even see doing it in a purely analog fashion with an extremely basic component like a mosfet. If a pin goes high because the computer responded to your request to send electrons, it sets the corresponding channel.

          > So it's probable that the check is sequential.

          It's a user unfriendly design for the process to be "send request, wait 5 seconds, send request on next channel, wait 5 seconds, send request on next channel, ..." when it could instead be sending requests across all channels rapidly and then that getting a response on a particular channel _activates_ that channel by analogue means.

        • lazyjones 7 years ago

          I know you’re just guessing, but if that’s what happens, why don’t they just default to the last source used and start the cycling only if nothing is detected there?

    • kalleboo 7 years ago

      I have auto-input select disabled since I only use my monitor with a dock. Still takes 15 seconds.

    • brokenmachine 7 years ago

      Why does it take 10-20s even if it is cycling through all the inputs though?

  • jjwhitaker 7 years ago

    That's on the monitor, whatever process it is using to scan for input, warm up, or other needs. Buying a cheap offbrand monitor or something low end has drawbacks.

  • unnouinceput 7 years ago

    10 to 20 seconds? What monitor do you have? All decent monitors have a maximum of 5 seconds.

    • Sephr 7 years ago

      Unfortunately the very best LCD monitors currently all take pretty long to start up.

      The Asus PG27UQ (regarded as the current-best consumer 27" 4K LCD) takes almost 20 seconds to start up, presumably to initialize a bunch of state for its FPGA.

      • unnouinceput 7 years ago

        LCD? What year is this, 2005? I have LED for over a decade now, and all my friends as well.

        • Tagbert 7 years ago

          No, you don’t. Unless you are using an OLED screen, you have an LED-backlit LCD screen. Vendors call them LED monitors because it sounds more high tech than old LCD monitors.

          • MertsA 7 years ago

            That said, LED backlit LCDs are better than the old CCFL screens.

CGamesPlay 7 years ago

A lot of these answers are why computers don’t boot instantly but none really talk about why they can’t. The accepted answer talks about it on a physical/mechanical level but eschews practicality: why can’t computers boot arbitrarily quickly?

  • enneff 7 years ago

    They can. But the more software you need to initialise the slower it is. And modern machines run a lot of software.

    • adsche 7 years ago

      But then the questions are, why does it need to initialize? Why doesn't it start in an initialized state, what factors make it impossible to determine that initialized state beforehand and storing it? How close can we get?

      • codeflo 7 years ago

        It's not impossible, just increasingly expensive. Modern software practices tend to move a lot of the things that used to be done at compile time to initialization time or runtime. A lot of modern tooling is built around that idea. You gain a lot of programmer productivity in exchange for increased memory usage, startup time and battery drain. Is that worth it? The market seems to say yes. As much as people complain about the battery life of their devices, in aggregate, they prefer apps with even the slightest usability improvements over apps with great performance. In the battle of delivering more tiny little features, programmer productivity trumps all other concerns.

        • adsche 7 years ago

          Absolutely! But couldn't this be moved from a productivity/time problem to a storage/packaging problem? I.e., do the initialization once and save its state until some major factors change? Storage seems to be quite inexpensive. Emacs does something like this (not saving state after first run but shipping with an initialized state that was generated before, I believe).
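          A toy sketch of that idea (the file layout and the "expensive" step are made up): run the costly initialization once, persist the result together with a fingerprint of its inputs, and on later starts reload the saved state for as long as the fingerprint still matches.

```python
import hashlib
import json
import os

def expensive_init(config):
    # Stand-in for slow startup work (device probing, index building, ...).
    return {"tables": sorted(config["modules"]), "ready": True}

def fingerprint(config):
    # A stable digest of everything the init result depends on.
    blob = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def load_state(config, cache_path):
    """Reuse the saved init state unless the configuration changed."""
    fp = fingerprint(config)
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            cached = json.load(f)
        if cached["fingerprint"] == fp:
            return cached["state"]            # fast path: skip initialization
    state = expensive_init(config)            # slow path: init, then re-cache
    with open(cache_path, "w") as f:
        json.dump({"fingerprint": fp, "state": state}, f)
    return state
```

The same pattern underlies hibernation images and the unexec/portable-dumper trick Emacs uses: the saved state is only trusted while its inputs are provably unchanged.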

      • magicalhippo 7 years ago

        As an example, consider the PCIe bus controller. Say it had some non-volatile memory so that when it powers on, it just reads the bus configuration from the NVM instead of enumerating and initializing the devices on the bus. Lots of time saved.

        Well that wouldn't really work without the PCIe devices also storing the configuration in NVM. If I plug a x16 device into a x4-only slot (as is typical for the third PCIe x16 slot on motherboards that have it), the device needs to know it has to only use 4 lanes, not the full 16.

        Ok, so say the controller has some way to tell the devices to save their state, and when powering on they all read from NVM and can just go on talking with each other right away, no further initialization required. Yay!

        Except then you power off your machine, and swap that GPU for the new one you bought from your friend. Now the new GPU has the configuration from your friend's motherboard, and your bus controller has no idea about this new card... so this is doomed to fail.

        Of course this could be worked around by querying the bus for the devices we expect to be there... except that's the initialization step we were trying to avoid in the first place...

        • adsche 7 years ago

          Sure, that all makes sense, but why could it not do the initialization once (when something changes like you describe), save it, and reuse it next time?

          EDIT: Ah, sorry, you're saying the enumeration/scanning for changes is what takes most of the time?

        • cesarb 7 years ago

          All that PCIe enumeration takes much less than a second, in my experience. The current boot on the machine I'm using to type this took only 2.3 seconds for the kernel part (source: systemd-analyze).
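For context, `systemd-analyze` prints a one-line summary like the number quoted above. A sketch parsing it (the sample line and its stages are illustrative; the exact fields vary by machine, with firmware and loader stages appearing on EFI systems):

```python
import re

# A sample of the one-line summary `systemd-analyze` prints; the exact
# stages vary by machine (firmware/loader stages appear on EFI systems)
line = "Startup finished in 2.3s (kernel) + 4.8s (userspace) = 7.1s"

# Pull out each "<seconds>s (<stage>)" pair into a stage -> seconds map
stages = {stage: float(t) for t, stage in re.findall(r"([\d.]+)s \((\w+)\)", line)}
print(stages)  # {'kernel': 2.3, 'userspace': 4.8}
```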

          • msbarnett 7 years ago

            Enumeration, yes. But then you need to load and initialize all of the drivers for the stuff you just enumerated.

        • lazyjones 7 years ago

          It’s smart to optimize for the common case (no hardware changes) and dumb to slow it down significantly just for this flexibility. This can’t be a major factor, vendors aren’t that incompetent.

      • Fronzie 7 years ago

        Hardware has lots and lots of pieces of memory, not only block RAM but also many registers. These need to be initialized after power-off.

        As for the software: Hibernate is storing that initialized state.

        • adsche 7 years ago

          So would it be impossible/expensive to provide storage for those registers/memory etc. that can survive power-off, and then copy the state from there upon power-on?

          How much time would something like that take compared to initialization?

      • icebraining 7 years ago

        I'd say that's essentially what suspend-to-ram means. Booting means throwing that state away and rebuilding it.

        • adsche 7 years ago

          Right, or even just something like suspend-to-disk (or other storage) but for the pre-OS part of the boot (so that constant power isn't required).

          Why can this not work?

          • pmontra 7 years ago

            Would the pre OS part of the boot process need less than all the available RAM?

            Hibernate (suspend to disk) used to be slow with spinning disks. Even with SSDs, if on a 6 Gb/s SATA bus hibernating my 32 GB laptop would take 32 GB * 8 / 6 = 42 seconds, probably more. And 42 s to resume. My laptop goes from power off to login and Gnome loaded in less than that time. Of course it takes much more to open all the programs I need. That's why I suspend to RAM. Push a button to suspend and less than 10 s to be working again. By the way, it takes extra seconds for the laptop to connect to the Wi-Fi access point or to the ethernet switch. IMHO it's not fully usable until then.

          • colejohnson66 7 years ago

            Isn’t that what “hibernate” is?

            • CGamesPlay 7 years ago

              I would say no because I can’t reuse that state: all implementations that I’ve used discard the hibernated state afterwards. If there was an OS that reused those saved snapshots then I would argue that it does count as “booting”.

              It would be even more interesting to do this on a per-application basis. Atom does something like this to cope with long startup times, for example.

      • silon42 7 years ago

        The fact that reboots happen mostly when software (kernels, system) is upgraded is part of it. It's simplest and most reliable to initialize the system from a clean state (I've often thought it would be best to reinstall Windows+software on each reboot).

        • nickpsecurity 7 years ago

          There was a hard drive on the market in the past for high-security use that had write-protect and an admin mode. Maybe a PIN or something to get to admin mode. Memory getting fuzzy on it. The normal usage would be read-only with you saving your data elsewhere or to a write-able partition that had nothing to do with read-only part for OS and apps. Go into admin mode to change those. Otherwise, your system is fresh every boot.

          People do it with VMs, too. I think you were aiming for something physical with no extra software. That was the closest thing to it I could think of.

        • adsche 7 years ago

          Yes, but do you have an idea why that is? Why is it so much more reliable to initialize every time than to generate an initialized configuration and reuse that?

          Developer errors? Not enough testing?

          • dannypgh 7 years ago

            Part of it is hardware support- your machine's state isn't just in main ram and CPU registers, it's also in the state of every embedded microcontroller and onboard memory of every device on the machine.

            It's to limit the complexity of suspend/resume that a lot of OSes will do things like reinitialize all drivers (e.g. unload/load kernel modules) to try to get the whole system into a known state, but this takes time.

            Relatedly, hardware can be added (or enabled) between boots, so whatever conditional hardware initialization was done when booting may result in a state that's not quite accurate later. Furthermore, some of the state (e.g. anything involving system time) will always need to be modified after restore, and that's a state transition that's less well tested (moving your clock forward hours or days may make a lot of software flake out).

        • cesarb 7 years ago

          > reboots happen mostly when software (kernels, system) is upgraded is part of it

          Only if you never turn your machine off. Don't you power off your desktop every day when leaving work?

          • scarface74 7 years ago

            I would expect most technical people to put their desktops to sleep when they leave. Really, I would expect most of us to have laptops where we just close the lid and they sleep.

      • gruez 7 years ago

        >Why doesn't it start in an initialized state

        That's sort of how Windows quick startup works. It saves the kernel state after startup, and on next startup, it restores it (kind of like hibernating the system). Iirc emacs does something similar with its lisp runtime as well.

        • adsche 7 years ago

          Yes, I was thinking of Emacs, too. I'm basically curious why the BIOS etc. couldn't run once, store the result of initialization and then reuse that on every boot until the configuration changes.

          • Slartie 7 years ago

            Because how would it know that some little detail about some configuration has changed (maybe you moved a PCIe card from one slot to the other, a firmware of something was updated, or whatever)? It would have to scan everything, every startup...and that takes time...

            And because the BIOS isn't your only state-holder, anything with a controller on it inside your PC holds state, and that is basically everything today. Including external devices, like everything connected via USB (actually those have two controllers minimum, a USB controller which holds communication link state and a controller for the actual device function, like being a keyboard or mouse).

            It totals to hundreds of controllers, and it simply isn't feasible to standardize a way for this entire system to somehow store an overall consistent runtime state. That is why plan B is taken: just re-establish that state every time. We call this process "booting" or "initializing".

    • maxheadroom 7 years ago

      >...the more software you need to initialise the slower it is...

      Of course, part of that initialisation is self-tests, harkening back to the O.G. POST[0] days.

      [0] - https://en.wikipedia.org/wiki/Power-on_self-test

kerkeslager 7 years ago

I remember when Ubuntu was first becoming popular. At the time, Windows boot times were long--I don't have timings, but my memory seems to say something like 5 minutes. And Intel processors were in the middle of their highest-energy-use period, so they ran hot, meaning you couldn't just leave your laptop on all the time. So reboot time was a significant part of my experience as a Windows user. When I first booted Ubuntu, the boot time was fast, something like 1 minute, and later I tried Xubuntu which was closer to 30 seconds. This was an advertised advantage of Ubuntu, and it was a big part of why that was "the year of desktop Linux" (if only for me).

In the following years, I noticed significant improvements both to boot times in Windows and to their "sleep"-type features. I suspect this was motivated at least in part by competition from Ubuntu.

  • bonoboTP 7 years ago

    > I suspect this was motivated at least in part by competition from Ubuntu.

    Ubuntu has such tiny market share that I strongly doubt this.

    • dannypgh 7 years ago

      Ubuntu and Linux may have a small market share in the general population, but they enjoy a larger share among software developers. And we all know ("developers, developers, developers!") MS has chased that demographic.

    • kalleboo 7 years ago

      It seems just as likely it was competition from the Mac, which also at one point during Steve's reign focused on boot time

mehrdadn 7 years ago

The more interesting question to me is why the hell some firmware boots take so long now. I know laptops from years ago booted much, much faster than what I have now, which is otherwise much faster than them. And the firmware initialization times for some servers I've seen seem completely obscene (on the order of minutes).

  • aardvarklegend 7 years ago

    Surprisingly, power is a huge factor. Spinning up 48/96 drives on boot can surge beyond what power supplies could handle, so hardware gained support for staggered spin-up. This allowed huge banks of drives to start without blowing a power supply, but you don't know how many drives are present, or whether staggered spin-up is enabled, until a timeout expires. This was a huge boot delay for servers.

    • mehrdadn 7 years ago

      That's interesting to note, but I'm just talking about computers with 1 drive connected to them. Both for the servers and the laptops. In fact the older (faster) laptop had both a DVD and an HDD, whereas the newer one just has an SSD...

  • scarejunba 7 years ago

    Probably because no one turns anything off anymore so the selection pressure is gone.

    • gambiting 7 years ago

      So I'm not so sure about that. When I complained recently about sleep issues on my windows 10 desktop I was met with an overwhelming "but....why are you sleeping your desktop? Just switch it off like a normal person?".

  • gingabriska 7 years ago

    Today we have to make things hack proof, noob proof and lots of proofing adds more complexity, cost and lowers speed.

    • mehrdadn 7 years ago

      I don't think this is it. I haven't tried with many machines but I don't think current machines are all generally slow with firmware, just some of them. The part that baffles me is that the slowness is on the faster ones.

ohiovr 7 years ago

My POST sequence takes longer than the boot process so no matter how blazing fast my drives are it still takes a long time to get to the boot loader.

  • faissaloo 7 years ago

    Nail on the head, UEFI/BIOS is increasingly the bottleneck for startup speeds nowadays

air7 7 years ago

This is a good question, and the "answer" avoids it by essentially saying "it can't be zero."

Why does boot time take ~10 seconds? CPUs run in the GHz range, meaning 10^9 operations per second. That means booting takes around 10^10 operations. Why that order of magnitude and not any other? The obvious answer is "that's where the optimization efforts stopped", but could it be 10^9 or 10^8?

Obviously booting time isn't just (or mostly) CPU, but the question extends to other peripherals: hard drive access is in the 10^8 B/s range, RAM is 10^9 B/s, USB ~10^7 B/s, Wi-Fi 10^6 B/s, etc. Why does booting, with all its internal subprocesses, still take ~10^1 seconds and not any other OoM?
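The order-of-magnitude arithmetic above can be sketched directly (illustrative round numbers from the comment, not measurements):

```python
import math

# Illustrative round numbers from the comment above, not measurements
boot_seconds = 10
cpu_ops_per_s = 1e9        # ~1 GHz-equivalent of useful work
disk_bytes_per_s = 1e8     # spinning-disk-era sequential read rate

total_ops = cpu_ops_per_s * boot_seconds
total_bytes = disk_bytes_per_s * boot_seconds
print(f"~10^{round(math.log10(total_ops))} operations")   # ~10^10
print(f"~{total_bytes / 1e9:.0f} GB readable from disk")  # ~1 GB
```

So a 10-second boot has a budget of roughly 10^10 operations and a gigabyte of sequential disk reads; the question is what all of that is actually spent on.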

  • Veedrac 7 years ago

    10⁹ ops/s is an incredible underestimate; CPUs tend to have an IPC around 4, and at least two cores, even on mobile. Which means at 3GHz, you should have closer to 10¹¹ ops/s.

    The real question is not whether a computer can boot in a second, it's whether it can boot in a frame.

    • fulafel 7 years ago

      Actual observed IPC tends to be 1 or less unless you are running well tuned computational workloads. And mostly software doesn't utilize multiple cores well.

      • Veedrac 7 years ago

        Even ‘perf stat libreoffice ./somedocument.odt’ gives an IPC of almost 2, and that's hardly a well-tuned computational workload.

        > And mostly software doesn't utilize multiple cores well.

        Most software sucks, that's the point of this whole conversation.
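For reference, the IPC figure `perf stat` reports is just the ratio of two hardware counters. A sketch with hypothetical counter values (the numbers below are made up for illustration):

```python
def ipc(instructions: int, cycles: int) -> float:
    """Instructions per cycle, as `perf stat` derives it from the
    `instructions` and `cycles` hardware counters."""
    return instructions / cycles

# Hypothetical counter values of the kind `perf stat` prints
print(round(ipc(9_800_000_000, 5_100_000_000), 2))  # 1.92
```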

        • fulafel 7 years ago

          Valid correction, seems I was out of date, let's say 2 or below today. (I got 1.3-1.7 with a few libreoffice startup & command-line conversion experiments).

          However during the boot process I think the IPC will be notably less, because there will be periods of waiting for disk IO and hardware initialization waits.

          To zoom out, there are two dimensions here - it's good for software to run at a good IPC, but even better to do the job with fewer instructions.

          • Veedrac 7 years ago

            Sure, I think the point of measuring CPU speed is not to say that it suffices for a fast boot, but to say that whatever it takes to boot, the CPU should not be a limiting factor.

            When I think of all the things necessary to booting; discovering connected devices, loading drivers, setting up memory and permissions, loading filesystem data, starting up the graphics hardware, ... I struggle to think of many things where ‘a few milliseconds’ is not plenty of time. For sure, you might have electrical limitations, spinning disks might take multiple milliseconds, and loading dozens of megabytes of data is not instantaneous, but, really, you've got 16 whole milliseconds in a frame at 60Hz! Is that so much to ask for?
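A sketch of what a single 60 Hz frame buys you at the rates discussed above (illustrative peak figures, not measurements):

```python
# What fits in one 60 Hz frame? Illustrative peak rates, not measurements.
frame_s = 1 / 60                # ~16.7 ms per frame
cpu_ops_per_s = 1e11            # the ~10^11 ops/s figure from above
ssd_bytes_per_s = 500e6         # a modest SATA SSD's sequential read rate

ops_per_frame = cpu_ops_per_s * frame_s
mb_per_frame = ssd_bytes_per_s * frame_s / 1e6
print(f"~{ops_per_frame:.1e} operations per frame")  # ~1.7e+09
print(f"~{mb_per_frame:.1f} MB from disk per frame")  # ~8.3 MB
```

Even at these generous rates, a single frame only covers a few gigaops and a handful of megabytes of IO, which is why "boot in a frame" is a far harder target than "boot in a second".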

jokoon 7 years ago

Further question would be:

"If I saved all my work, why do I need to properly shut down my computer? Why does it take so long to shut down?"

Bakary 7 years ago

A silver lining of long boot times is that you can contemplate what you are about to use the computer for and catch yourself in the middle of a habit loop.

swebs 7 years ago

>When you turn on your computer, it instantly executes code in BIOS or UEFI boot manager. It doesn't take much time to execute BIOS or UEFI boot manager. It will initialize your hardware, scan your storage devices for operating system, and run the operating system. It is usually the operating system that requires much time for loading.

Eh, it takes from 10-15 seconds on my computer from pressing the power button to getting to GRUB. After that, it's only another 10 seconds or so to boot Ubuntu.

basicplus2 7 years ago

My old valve televisions are up and running long before ANY smart TV, AND it's quicker to change channels

vilhelm_s 7 years ago

It could probably be faster if people cared to optimize it, e.g. some experiments to boot an embedded linux system to the console prompt in less than 1 second (https://fossforce.com/2017/01/linux-zero-boot-second/), or to the Qt Camera Preview app in 3 seconds (https://www.e-consystems.com/Articles/Product-Design/Linux-B...) or to a Debian desktop in "5 seconds" (https://wiki.debian.org/BootProcessSpeedup#Tests_results_of_...).

cookingrobot 7 years ago

Here's a proposal: don't do any network stuff during boot, and don't let it depend on anything super-volatile like user files. Then store the boot memory state as an image (like a hiberfile). Whenever a bigger change happens to the system that will affect the boot state, like installing a new driver, or an app that has any kind of background service, recreate a new boot image. But do it right away when the new app is installed: don't wait for the next boot or shutdown (which is annoying), just do it in a virtual machine as part of the app/driver install. Installing impactful apps will take longer, but that can happen in the background, isn't too disruptive, and doesn't happen very often anyway.
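The decision rule in this proposal might be sketched like this (the field names are hypothetical, purely for illustration):

```python
def needs_new_boot_image(pkg: dict) -> bool:
    """Sketch of the rule proposed above: rebuild the cached boot image
    only when an install touches boot-relevant state. The field names
    here are hypothetical, purely for illustration."""
    return bool(pkg.get("installs_driver") or pkg.get("installs_background_service"))

print(needs_new_boot_image({"name": "some-editor"}))                          # False
print(needs_new_boot_image({"name": "gpu-driver", "installs_driver": True}))  # True
```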

kzrdude 7 years ago

I'm pretty happy with boot times, the longest wait is the boot loader and inputting the password. (Linux with full disk encryption).

MrQuincle 7 years ago

The more optimization in boot times, the higher the maintenance burden.

I think this can only be solved on an architectural level by introducing "specialization" in a "general" manner. In other words, have a supervising process monitor times and adjust the system to the local hardware and software demands. For the adjustments to be good, they probably require some kind of learning mechanism, which you also don't want to have in a kernel itself. Just my two cents.

usernam33 7 years ago

This reminds me of something I read about memristors. Once they get cheap and fast enough to be sold in GB or TB sticks, they might replace RAM and SSDs/HDDs for some computers. This would allow a computer to boot once (at the factory) and then just return to the state it was in when powered on again.

fulafel 7 years ago

Trivia: some computers have a single-level memory store, so there are no separate disk and RAM concepts from the software's PoV. I think the AS/400 and maybe Multics are like this. But I don't know if they boot instantly from where they left off, probably not. It would be within relatively easy reach though.

  • trn 7 years ago

    For new install and first (cold) boot for a Multics system, the system is loaded from a Multics System Tape. As part of this process, the entire MST is loaded in Multics memory; Multics is partially started up to configure all of hardcore for the current hardware configuration; Multics then transfers back to BCE without shutdown.

    BCE captures this configured Multics hardcore memory image, and saves it to a file in the BCE disk partition. BCE then returns control to Multics hardcore, so it can continue system start-up.

    Subsequent (fast) boots from BCE boot the preconfigured Multics memory image, loading it from disk into memory, and transferring into the image, just as was done when the image was first saved.

gregw2 7 years ago

If computer makers put the OS in ROM memory rather than RAM and had a well defined way of overlaying the two so the ROM state could seem to be modified by writes to RAM, you could get instant boot. Early PCs worked that way like the Commodore64, right?
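A toy sketch of the ROM/RAM overlay described above, where reads fall through to ROM until a RAM write shadows the same address (real banking hardware, e.g. the C64's PLA, does this with address decoding rather than a lookup table):

```python
class RomRamOverlay:
    """Toy sketch of a ROM overlaid by writable RAM: reads hit ROM until
    a write to the same address shadows it. The ROM image below is a
    made-up placeholder, not real firmware."""

    def __init__(self, rom: bytes):
        self._rom = rom
        self._ram = {}  # sparse shadow: address -> byte

    def read(self, addr: int) -> int:
        # A RAM write to this address wins; otherwise fall through to ROM
        return self._ram.get(addr, self._rom[addr])

    def write(self, addr: int, value: int) -> None:
        self._ram[addr] = value  # the ROM itself is never modified

mem = RomRamOverlay(bytes([0xA9, 0x01, 0x60]))  # pretend OS image
mem.write(1, 0xFF)
print(mem.read(0), mem.read(1))  # 169 255
```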

  • unilynx 7 years ago

    The Commodore didn't copy itself from ROM to RAM, but that was a trick you could do yourself to speed up BASIC execution - the RAM was faster than ROM

sansnomme 7 years ago

I have a question too: why do radios take so long to start up and acquire a signal? Same for other I/O devices similar to radios; on the embedded side you have to keep polling until the signal is finally acquired.

  • simcop2387 7 years ago

    One common thing is that the radio needs time for its clock source to stabilize. To make them cheaper, they usually use things like phase-locked loops to take a lower-frequency clock and multiply it up to the clock they need. This can take tens to thousands of milliseconds to stop bouncing around when the loop is first configured. And with modern radios, these loops get programmed at run-time from the firmware because they need to cover a large range of frequencies. During this startup you can't transmit anything, because you'd transmit with another signal (the clock jitter) mixed in, which adds noise at best, and possibly at a completely incorrect frequency, which can result in fines or possibly jail time. The safe route is just to wait instead.

    Receivers work similarly, using PLLs to downmix the signal they want, generally, and while they can begin trying to receive while waiting on it to stabilize, it's usually pointless because you'll just be getting unintelligible garbage from the noise or wrong frequency.

tbyehl 7 years ago

What gets me is how long servers take to reach the bootloader from power on. Your average modern Dell/HP desktop or laptop is there within a second or two, Windows 10 and systemd-based Linux distros on SSD can be at the login prompt within 5 seconds total. I haven't touched a Mac in a long time but I'd bet they're no worse.

But a typical server BIOS takes 3-5 minutes to do its thing. If you're lucky you might be able to disable enough stuff to shave off a minute. Insanity.

  • kalleboo 7 years ago

    I bought an old 1/10 Gbit managed switch for $25 on an online auction to play around with, and was surprised just how long (minutes) a switch can take to boot.

    • tbyehl 7 years ago

      A managed switch is basically an embedded server with super-slow storage. If it's of a certain age you might pop the cover and find a CompactFlash card.

      I run my most critical piece of home infrastructure (PiHole, of course) bare-metal on an ancient NUC-like thing with a $20 SSD because it boots and is responding to network requests in like 6 seconds.

Doubl 7 years ago

Hibernate is my preferred option when leaving the computer, but Windows doesn't offer it out of the box any more. You have to tweak settings. Why is that, I wonder?

  • tsimionescu 7 years ago

    My understanding is that on desktops, the default is for Sleep to do 'Hybrid Sleep': prepare for a hibernate, but sleep instead. If you have a power failure while the computer is sleeping, you should see it wake from the hibernate image when you turn it back on, instead of booting from scratch.

    I know that this was the case at some point when I checked during the Windows 8 years; I don't know if this is still true.

  • phendrenad2 7 years ago

    My guess is the same reason modern UIs suck: too complex for the average non-technical user to handle. They probably got tons of support requests asking what the heck the difference between "sleep" and "hibernate" was, and that's a hard question to answer if the person asking has no idea what RAM is...

bin0 7 years ago

Let me take a moment to vent on the garbage that is the dell UEFI firmware. I have an excellent, new machine, an XPS 15 9570. According to systemd-analyze, it took eight seconds to get past firmware, corroborated by my observations. There is flat-out no reason why it ought to be so slow. Dell makes the absolute worst BIOS.

mschuster91 7 years ago

You can do warm boot images (Sony's Linux-based Alpha cameras do this, for example), but these have to be tailor-made for each model... feasible for a camera manufacturer, impossible for a Linux distribution.

But I do wonder why WBI generation and automatic usage upon first boot isn't more common (or possible at all)...

  • flowless 7 years ago

    GoPro cameras run an RTOS, and one of the threads runs Linux, which is cold-booted only once (or during an update) and stored for subsequent fast hot boots; it was only used for providing Wi-Fi and streaming functionality.

lazyjones 7 years ago

All this guessing and conjecture is pointless. Someone should simply measure what takes so long with the profiling tools available and post an analysis.

jasonhansel 7 years ago

The real answer: the cost (in developer time) of making computers that boot instantly is too high to justify the benefits.

  • orbital-decay 7 years ago

    Yes, but it's just one of the factors, and actually all major user-facing OSes do care about boot times. Other factors are the design by committee and platform fragmentation. Hardware manufacturers don't care about initialization times, and standard makers are issuing insanely complex specifications with lots of compatibility cruft and edge cases.

thrownhwn 7 years ago

Has someone explored the area of writing software that spits out resume-from-disk images?

  • amelius 7 years ago

    You mean hibernation?

    That exists for a long time now, but is technically not booting.

    • jarfil 7 years ago

      Alternatively, booting is just an inefficient "resume from disk", using a generic initial state instead of one adapted to the device.

niklasd 7 years ago

The link in the answer titled "state machines" is a bit misleading, since it directs to the Wikipedia entry about Finite-state machines. But most computers (the ones we are talking about here) are Turing complete rather than being a (computationally much more limited) finite-state machine.

thecopy 7 years ago

Because of state.

imtringued 7 years ago

Because everything takes non zero time. If a state transition from turned off to booted takes no time then doesn't that mean there is no state transition at all? The only way for it to take zero time for a computer to boot is if it is already booted or there is no difference between an unbooted computer and a booted one.

How much time does it take for a circle to become round? None at all. It happened instantly.

  • jarfil 7 years ago

    It also takes hours to compile everything from scratch, but we don't see computers taking hours to boot, do we? Most of the process "from initial state" is cached, and there is little reason to not keep the trend.

    Booting, as in detecting and initializing the hardware, including the RAM, should only be needed when hardware is changed, and even then it should be possible to cache the state of any unchanged parts.
