Microsoft disables Spectre mitigations as Intel’s patches cause instability

securityweek.com

757 points by tomtoise 8 years ago · 328 comments

zingmars 8 years ago

These updates most definitely could've been handled better. I was having a busy week with exams and I got a call about around 10 machines not booting (this was before the announcement). Sure enough, the last thing everyone reported was updating. I called the supplier and apparently they had reports of at least 2000 machines (at that moment) that had to be reimaged across the city (from what I could tell, all were older AMD PCs) because of this dumb update. I was used to not being able to trust software, but if I can't trust hardware now either, farming does suddenly appear much more appealing.

tjoff 8 years ago

I lost many hours over this last week. The system was unable to boot and finally a thread on reddit came to the rescue ( https://www.reddit.com/r/techsupport/comments/7sbihd/howto_f... ).

This actually made the system boot, but there are some leftovers that get installed on first boot, which I've been unable to disable and which also cause the system to fail to boot.

So now, the machine is running, but as soon as it is restarted we have to re-image the disk, go through the process of manually removing patches, and then pray that we don't have a power outage, as we'd have to do everything yet again on next boot.

I'm not convinced that this patch will solve the issue either, because if this update requires a reboot, the fix won't be installed if we can't boot. I might try to install this update from the recovery console to see if that works.

Quite frustrating.

  • thanksgiving 8 years ago

    Speaking of which, why do so many things require reboot to update on Windows?

    • beagle3 8 years ago

      There is a very fundamental difference between how Unix and Windows view open files:

      On Windows, once the file is open, it is that filename that is open; You can't rename or delete it; Therefore, if you want to replace a DLL (or any other file) that is in use, you have to kill any program that uses it before you can do that; And if it's a fundamental library everything uses (USER32.DLL COMCTL.DLL etc), the only effectively reliable way to do that is reboot.

      On Unix, once you have a handle (a file descriptor) to the file, the name is irrelevant; you can delete or rename the file and the descriptor still refers to the file that was opened. Thus, you can just replace any system file; existing programs will keep using the old one until they close/restart, and new programs will get the new file.

      What this means is that even though you don't NEED to restart anything for most upgrades in Unix/Linux, you're still running the old version until you restart the program that uses it. Most upgrade procedures will restart relevant daemons or user programs, or notify you that you should (e.g. Debian and Ubuntu do).

      You always need a reboot to upgrade a kernel (ksplice and friends notwithstanding), but otherwise it is enough in Unixes to restart the affected programs.
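
      To make the Unix side concrete, here is a minimal Python sketch of the semantics described above (my own illustration; "libfoo.so" is just a stand-in name for the demo):

          import os

          # Create a stand-in "library" for the demo.
          with open("libfoo.so", "wb") as f:
              f.write(b"old version\n")

          fd = os.open("libfoo.so", os.O_RDONLY)   # a running program holds this descriptor
          os.unlink("libfoo.so")                   # the *name* is gone immediately...
          with open("libfoo.so", "wb") as f:       # ...so an updater can drop in a new file
              f.write(b"new version\n")

          print(os.read(fd, 64))                   # b'old version\n' -- the old file lives on
          os.close(fd)                             # its disk space is reclaimed only now

      New processes that open "libfoo.so" from this point on get the new file; only programs still holding the old descriptor need a restart.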

      • mehrdadn 8 years ago

        > On Windows, once the file is open, it is that filename that is open; You can't rename or delete it;

        This is wrong... there's no clear-cut thing like the "file name" or "file stream" that you can specify as "in-use". It depends on the specifics of how the file is opened; often you can rename but not delete files that are open. Some (but AFAIK not all) in-use DLLs are like this. They can be renamed but not deleted. And then there's FILE_SHARE_DELETE which allows deletion, but then the handle starts returning errors when the file is deleted (as opposed to keeping the file "alive").

        To make it even more confusing, you can pretty much always even create hardlinks to files that are "in use", but once you do that the new name cannot be deleted unless the old name can also be deleted (i.e. they follow the same rules). This should also make it clear that it's not the "name" that's in use, but the "actual file" (whatever that means... on NTFS I'd suppose it corresponds to the "file record" in the MFT).

        The rule that Windows always abides by is that everything that takes up space on the disk must be reachable via some path. So you can't delete in-use files entirely because then they would be allocated disk space but unreachable via any path.
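
        A rough ctypes sketch of those sharing-mode rules (my own illustration, Windows-only; "demo.txt" is a throwaway file created just for the demo):

            import ctypes
            from ctypes import wintypes

            k32 = ctypes.WinDLL("kernel32", use_last_error=True)
            k32.CreateFileW.argtypes = [wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD,
                                        wintypes.LPVOID, wintypes.DWORD, wintypes.DWORD,
                                        wintypes.HANDLE]
            k32.CreateFileW.restype = wintypes.HANDLE
            k32.CloseHandle.argtypes = [wintypes.HANDLE]

            GENERIC_READ, OPEN_EXISTING = 0x80000000, 3
            FILE_SHARE_READ, FILE_SHARE_DELETE = 0x1, 0x4

            with open("demo.txt", "w") as f:
                f.write("hello")

            # Without FILE_SHARE_DELETE: deleting the name fails with a sharing violation.
            h = k32.CreateFileW("demo.txt", GENERIC_READ, FILE_SHARE_READ,
                                None, OPEN_EXISTING, 0, None)
            print(k32.DeleteFileW("demo.txt"))   # 0 = failure
            k32.CloseHandle(h)

            # With FILE_SHARE_DELETE: the delete is accepted, but the name sits in a
            # "delete pending" state (new opens fail) until the last handle is closed.
            h = k32.CreateFileW("demo.txt", GENERIC_READ,
                                FILE_SHARE_READ | FILE_SHARE_DELETE,
                                None, OPEN_EXISTING, 0, None)
            print(k32.DeleteFileW("demo.txt"))   # nonzero = success
            k32.CloseHandle(h)                   # the file actually disappears here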

        > What this means is that even though you don't NEED to restart anything for most upgrades in Unix/Linux, you're still running the old version until you restart the program that uses it.

        What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches. Is this correct? Because it seems to me that the Windows method might be less flexible but is likely to be more stable, since there's a single coherent global view of the file system at any given time.

        • tialaramex 8 years ago

          Yes, in principle what you've said about the Unix approach here is correct: if you upgrade one half of a system and not the other half, and now they're talking different protocols, that might not work.

          But keep in mind that if your system can't cope with this what you've done there is engineer in unreliability: you've made a system that's deliberately not very robust. Unless it's very, very tightly integrated (e.g. two sub-routines inside the same running program), the cost savings had better be _enormous_, or what you're doing is just amplifying a problem and giving it to somebody else, like "solving" a city's waste problem by dumping all the raw sewage into a neighbouring city's rivers.

          Now, the "you can't delete things because then the disk space is unreachable" argument makes plenty of sense for, say, FAT, a filesystem from the 1980s.

          But (present year argument) this is 2018. Everybody's main file systems are journalled. Sure enough, both systems _can_ write a record to the journal which will cause the blocks to be freed on replay and then remove that journal entry if the blocks actually get freed up before then. The difference is that Windows doesn't bother doing this.

          • cat199 8 years ago

            > Now, the "you can't delete things because then the disk space is unreachable" argument makes plenty of sense for, say, FAT, a filesystem from the 1980s.

            Unix semantics were IIRC in place as far back as v7 (1979), possibly earlier - granted, a PDP disk from that time was bigger (~10-100MB) than the corresponding PC disk from a few years later (~1-10MB), but an appeal to technological progress in this particular example case is a moot point.

          • mehrdadn 8 years ago

            Aaaaaand here comes the Linux defending! OK...

            > But keep in mind that if your system can't cope with this what you've done there is engineer in unreliability

            It's weird that you're blaming my operating system's problems on me. "My system" is something a ton of other people wrote, and this is the case for pretty much every user of every OS. I'm not engineering anything into (or out of) my system so I don't get the "you've made a system that [basically, sucks]" comments.

            > [other arguments]

            I wasn't trying to go down this rabbit hole of Linux-bashing (I was just trying to present it as objective a flexibility-vs.-reliability trade-off as I could), but given the barrage of comments I've been receiving: I don't know about you, but it happens more often than I would like that I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot. Sometimes the window rendering gets messed up, sometimes I get random error pop-ups, sometimes stuff just doesn't run. I don't get why it happens in every instance, and there might be lots of different reasons in different instances. IPC mismatch is my best guess for a significant fraction of the incidents. All I know is it happens and it's less stable than what you (or I) would hope or expect. Yet from everyone's comments here I'm guessing I must be the only one who encounters this. Sad for me, but I'm happy for you guys I guess.

            • rlpb 8 years ago

              > ...but it happens more often than I would like that I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot...

              Ubuntu developer here. This doesn't happen to me in practice. Most updates don't cause system instability. I rarely reboot.

              Firefox is the most noticeable thing. After updating Firefox (usually it's a security update), Firefox often starts misbehaving until restarted. But I am very rarely forced to restart the login session or the entire system. I should, to get updates to actually take effect, but as a developer I'm usually aware of the specifics, so I can afford to be more selective than the average user.

              Are you sure you aren't comparing apples to oranges here, and are actually complaining about the stability of updates while running the development release, which involves ABIs changing and so forth?

              • mehrdadn 8 years ago

                Development release? Currently I'm on 16.04, and I've never been on a development release of anything on Ubuntu. I'm just describing the behavior I usually see in practice (which it seems someone attributed to "D-BUS" [1]). Obviously the logon session doesn't get messed up if all I'm updating is something irrelevant like Firefox, but if I update stuff that would actually affect system components then there's a good chance I'll have to reboot after the update or I'll start seeing weird behavior. This has generally been my experience ever since... any Ubuntu version, really. It's almost ironic that the most robust thing to update in practice is the OS kernel.

                [1] https://news.ycombinator.com/item?id=16257060

                • rlpb 8 years ago

                  All I can say is that, based on everything I know, that's not the current experience of the majority of users, so it doesn't seem fair for you to generalize this to some architectural problem. I don't know if you unknowingly have some edge case setup or what.

                  • mehrdadn 8 years ago

                    Also, FYI, update: apparently I'm not the only person in the world experiencing this [1].

                    But we are the only 2 people in the world experiencing this, so never mind.

                    [1] https://news.ycombinator.com/item?id=16257935

                  • mehrdadn 8 years ago

                    Are you reading the same comments I'm writing? I was literally point-by-point saying the opposite of what you seem to have read me writing:

                    > You: All I can say is that, based on everything I know, that's not the current experience of the majority of users

                    >> Me: Yet from everyone's comments here I'm guessing I must be the only one who encounters this.

                    ???

                    > You: It doesn't seem fair for you to generalize this to some architectural problem.

                    >> Me: I don't get why it happens in every instance, and there might be lots of different reasons in different instances. IPC mismatch is my best guess for a significant fraction of the incidents.

                    ???

                • beagle3 8 years ago

                  I have been running Ubuntu since 2004, and except for Firefox which tends to destabilize on update, I’ve observed this twice in 14 years; I update weekly or more often, and reboot every few months (usually on a kernel update I want to take hold)

                  • mehrdadn 8 years ago

                    Maybe it's because you update often so there are fewer changes in between? I update far less frequently... it's not my primary OS, so it's not like I'm even on it every day (or week). I use it whenever I need to.

                    • xfer 8 years ago

                      Have you filed any bug report, or can you point to one? All I have seen is a lot of handwaving about "IPC mismatch"; things will not fix themselves unless people actively report and help fix issues.

              • Valmar 8 years ago

                Here, on Arch, Firefox updates don't cause me any grief. The only time I've ever needed to reboot is after a kernel or DKMS module update.

                For systemd updates, I can just reload it. For core components like bash, and for major DE updates, I can just lazily use loginctl to terminate all of my sessions and start fresh.

                I'm not sure why Firefox would be causing instability until you restart (reboot?), though.

                • rlpb 8 years ago

                  > I'm not sure why Firefox would be causing instability until you restart (reboot?), though.

                  I get the impression that the UI loads files from disk dynamically, which start mismatching what was already loaded.

                • postingatwork 8 years ago

                  Firefox with e10s enabled (all current releases) detects version differences between the parent process and a child process started at a later point in time. Until recently it aborted the entire browser when that happened. I think now they have some logic that tries to keep running with the already open processes and abandon the incompatible child.

                  Ideally they'd just prefork a template process for children and open fds for everything they need; that way such a detection wouldn't be necessary.

              • evmar 8 years ago

                For Chrome we had to go through a lot of extra effort to make updates not break us.

                http://neugierig.org/software/chromium/notes/2011/08/zygote....

                • kuschku 8 years ago

                  Or you could explicitly design the package so that your pre/postinstall scripts ensure that you install to a separate directory, and rename-replace the old directory, so you can’t get half-finished updates.

                  Regarding the rest, if your code has incompatible API breaks between two patch or minor version changes, you’ll need to rethink your development model.

              • peterwwillis 8 years ago

                Ubuntu user here. Ubuntu is less stable than my second girlfriend, and she tried to stab me once.

                Lately, every time my co-worker has updated Ubuntu, it has broken his system. He's like my canary in the coalmine. I wait for his system to not fall over before I will update mine.

                • StudentStuff 8 years ago

                  Maybe it's time to consider an OS with better maintainers, like Debian. I've had fewer issues on unstable/sid over the past few years than I had on the last Ubuntu LTS release (which was what spurred me to Debian). On my other machine, Debian Stretch (and Jessie prior to upgrading) have treated me well; there just isn't breakage when upgrading to the latest stable release or when applying security patches.

                  • peterwwillis 8 years ago

                    I chose Ubuntu because it was more widely supported by 3rd party software vendors and support companies than Debian. But this doesn't matter, because I still ran into hardware and software compatibility issues, and Ubuntu is more up to date than Debian, meaning Debian would have been even more broken by default.

                    I don't know of a single Linux distro that works out of the box with my laptops. Maybe if I bought a $2,000 laptop that shipped with Linux it would work. It would still be a pain in the ass to update, though.

                    I kind of hate Linux as a desktop now. I've been using it as such for 14 years, and it's only gotten worse.

                    • StudentStuff 8 years ago

                      I had very similar reasons for starting with Ubuntu, but when it came right down to it, all the software that I thought would only work on Ubuntu works just fine on Debian.

                      Hardware-support-wise, newer kernels generally come to Debian sooner too, as the latest stable kernel generally gets into Sid a week or so after release, then gets added to backports for Debian Stable after a few weeks. Currently you can nab 4.14 from backports on Debian Stable, and 4.15 should be coming down the pike shortly (seeing as it's just a few days old).

                    • tluyben2 8 years ago

                      Depends what you need, of course; a lot of people buy the newest and fastest but do not need it. Most (90%+) of my dev work works fine on an X220, which I can pick up for $80, has stellar Linux support, and still gets really good (14+ hour) battery life. Depends on the use case, of course, but when I see what most people around me do on their 2k+ laptops, they could have saved most of that. Also, Ubuntu Unity is just not very good; but Ubuntu or Debian with i3 are perfect. Cannot imagine a better desktop.

                    • darpa_escapee 8 years ago

                      > I kind of hate Linux as a desktop now

                      This is unfortunate. I've been using Linux and suffered, for the lack of a better word, with its warts since 2002.

                      There was a period between 2013-2016 where Linux was great as my main operating system. It was more stable than OS X and was much better for development.

                      Is hardware support your main issue with desktop Linux?

                      • peterwwillis 8 years ago

                        No, it's mostly software, but hardware is a big problem.

                        The software (especially Ubuntu's desktop) is lacking basic features from even 10 years ago. Maybe there's a way to get it to do what it used to do, but I can't figure it out, and I'm not going to research for two days to figure it out. I just live with a lack of functionality until I can replace this thing.

                        Not only that, but things are more complicated, with more subsystems applying more constraints (in the name of compatibility, or security, or whatever) that I never asked for and that constantly gets in my way. Just trying to control sound output and volume gives me headaches. Trying to get a new piece of software to work requires working out why some shitty subsystem is not letting the software work, even though it is installed correctly. Or whining about security problems. You installed the software, Ubuntu, don't fucking whine to me that there's a SELinux violation when I open my browser!

                        Hardware is a big problem because modern software requires more and more memory and compute cycles. All of my old, stable laptops can no longer perform the web browsing workloads they used to. Browsers just crash from lack of memory, or churn from too much processing. If you don't use modern browsers, pages just won't load.

                        Aside from the computing power issue, drivers are garbage. Ignoring the fact that some installers simply don't support the most modern hard disks, and UEFI stupidity, I can't get video to work half the time. When I can, there are artifacts everywhere, and I have to research for three days straight to decipher what mystical combination of graphics driver and firmware and display server and display configuration will give me working graphics. Virtually every new laptop for several years uses hybrid graphics, and you can't opt-out or you get artifacts or crashing. Even my wifi card causes corruption and system crashing, which I can barely control if I turn off all the features of the driver and set it to the lowest speed! Wifi!!! How do you screw that up, seriously?

                        Modern Linux is just a pain in the ass and I'm way too old to spend my life trying to make it work.

              • kworker 8 years ago

                FF usually just CTDs after an update.

            • peterwwillis 8 years ago

              > I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot

              Which is almost true. In fact, you were unable to use programs that changed runtime dependencies or conflicted with current user sessions, init processes or kernel modules. You can often use other programs, but not ones that in any way touched the ones you upgraded, for one reason or another.

              If you have to upgrade, say, a command line utility, that almost always doesn't require rebooting. If you have to upgrade a GUI app, or a tool that depends on some bastardized unholy subsystem designed to "secure desktop sessions", that may very well require relinquishing the session and restarting it. If you have to upgrade a tool used by your desktop (and if you have a complex desktop, that is literally thousands of programs), it's the same story, though you may even need to restart your desktop session manager or even your display server.

              Then there's system init processes, kernel modules, firmware, system daemons and the like. You can reload those without rebooting, but it's certainly not easy - you will probably have to change to runlevel 1, which kills almost everything running. You can reload the kernel without rebooting, too - very handy for live patching - but really, why the hell would anyone want to do this unless they were afraid to power off their system?

              So, technically, rebooting is not required to update in many cases in Linux, just like in Windows. But it is definitely the simplest and most reliable way.

              • mehrdadn 8 years ago

                > If you have to upgrade a GUI app, or a tool that depends on some bastardized unholy subsystem designed to "secure desktop sessions", that may very well require relinquishing the session and restarting it. If you have to upgrade a tool used by your desktop (and if you have a complex desktop, that is literally thousands of programs), it's the same story, though you may even need to restart your desktop session manager or even your display server.

                Thanks, I'm glad at least one person agrees I'm not hallucinating. The vast majority of people here are telling me I'm basically the only one this happens to.

                • krinchan 8 years ago

                  I've long since abandoned "bare metal" Linux in favor of VirtualBox and Windows for my home machine and VirtualBox and macOS on my laptop.

                  Mondays I merge last week's snapshot, take a new one, and run all my updates. Then I do my dev work in my VM. Before I head out on trips, I just ship the entire machine over the network to my MacBook Pro.

                  This is mostly because: have you literally ever tried to install any Linux on laptops? It's always Russian roulette with those $+#&ing Broadcom wireless chipsets. >.<

                  So you're not hallucinating. Linux as a desktop/laptop had a sweet spot from like... 2012-ish till 2016. Then 802.11ac went mainstream so Broadcom released new chipsets, graphics cards had a whole thing with new drivers, and Ubuntu's packagers (the people) lost their minds or something.

                  Nothing feels right, at least in Ubuntu/Arch land right now.

                  • michaelmrose 8 years ago

                    How about just buy stuff that's well supported if you intend to use Linux on it. Been working for me since 2003.

            • riskable 8 years ago

              > I don't know about you, but it happens more often than I would like that I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot. Sometimes the window rendering gets messed up, sometimes I get random error pop-ups, sometimes stuff just doesn't run.

              This is less of an issue with Linux, per se, and more to do with proprietary video drivers.

              I have multiple systems in my home with various GPUs. The systems running Intel and AMD GPUs with open source drivers don't have this problem. The two desktops with Nvidia GPUs have this problem whenever the Nvidia driver is updated.

              I also had the same exact problem with my AMD system back when it was running the proprietary fglrx driver.

              • ksk 8 years ago

                >This is less of an issue with Linux, per se, and more to do with proprietary video drivers.

                Actually, it IS a problem with Linux. I don't get this behavior on my Windows or OSX machines where NVIDIA has been reliably (modulo obvious caveats) shipping "evil proprietary" drivers for a decade.

                Linux is great, but it doesn't need to be coddled.

            • mikestew 8 years ago

              > It's weird that you're blaming my operating system's problems on me.

              No one is blaming you specifically; it is a common way of saying, “if you write operating systems, and you do $THING, you will get $RESULT.” Common, but wrong, which is why your high school English teacher will ding you for phrasing something that way.

            • code_duck 8 years ago

              Why would Linux need ‘defending’ for superior flexibility? The fact that files work like this is an advantage, not a disadvantage. I have never seen the flaw you’ve pointed out actually occur in practice.

              • da_chicken 8 years ago

                Well, it's not always an advantage. It's just the consequences of a different locking philosophy.

                Windows patches are a much bigger pain in the ass to deal with on a month-to-month basis, but Linux patches can really bite you.

                Example 1:

                Say I have application 1, which uses shared library X, and application 2, which spawns an external process every 5 minutes that uses library X and communicates in some way with application 1. Now let's say that library X v2.0 and v2.1 are incompatible, and I need to apply an update.

                On Windows, if I update this program, it will keep running until the system is rebooted. Updates, although they take significant time due to restarts, are essentially atomic. The update either applies to the entire system or none of the system. The system will continue to function in the unpatched state until after it reboots.

                On Linux, it's possible for application 1 to continue to run with v2.0 of the shared library, while application 2 will load v2.1, and suddenly your applications stop working. You have to know that your security update is going to cause this breaking change and you need to deal with it immediately after applying the update.

                Example 2:

                A patch is released which, unbeknownst to you, causes your system to be configured in a non-bootable state.

                On Windows, you'll find out immediately that your patch broke the system. It's likely (but not certain) to reboot again, roll back the patch, and return to the pre-patched state. In any event, you will know that the breaking patch is one that was in the most recently applied batch.

                On Linux, you may not reboot for months. There may be dozens or hundreds of updates applied before you reboot your system and find that it's not in a bootable state, and you'll have no idea which patch has caused your issue. If you want your system in a known-working state, you'll have to restore it prior to the last system reboot. And God help you if you made any configuration changes or updates to applications that are not in your distro's repository.

              • michaelmrose 8 years ago

                No lie. After all, nothing is stopping you from updating once every Tuesday and rebooting after updates. You just won't have to do it 8 times in succession or stop in the middle of doing useful work to do so.

                I just don't update Nvidia or my kernel automatically, and magically I only have to reboot less than once a month, and always on my schedule.

              • CHY872 8 years ago

                I have! We had a log shipping daemon that wasn't always releasing its file handles properly and kept taking out applications by running the box out of disk space. That said, I drastically prefer the Unix behaviour.

            • AnIdiotOnTheNet 8 years ago

              It is a common tactic of Linux evangelists to state that they never have the problems you're experiencing and thereby disregard any criticism. You'll probably also get variants of "you're using the wrong distribution".

        • kelnos 8 years ago

          > What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches. Is this correct?

          Only if the libraries that use IPC have changed their wire format between versions, which would be a pretty bad practice, so I wouldn't expect that to happen often (if ever).

          If something that's already running has its data files moved around or changed sufficiently, and it later tries to open what it thinks is an old data file (that is, the app was running but the data file wasn't open when the upgrade happened), only to find that the file is new and different or just missing, that could cause problems.

          > Because it seems to me that the Windows method might be less flexible but is likely to be more stable, since there's a single coherent global view of the file system at any given time.

          In practice I've never had an issue with this (nearly 20 years using various Linux desktop and server distros). Upgrade-in-place is generally the norm, and most people will only reboot if there's a kernel update or an update to the init system.

        • jacoblambda 8 years ago

          *nixes have systems for handing off to the newer version, such as kpatch and kGraft.

          kGraft, for example, swaps over each process's syscalls while that process is not using them. This lets the OS slowly transfer to the new kernel code as it is running.

          kpatch does it all in one go but locks up the system for a few milliseconds.

          The version that is currently merged into 4.0+ kernels is a hybrid of the two, developed by the authors of both systems.

          • cat199 8 years ago

            Runtime kernel patching is a new thing. "*nixes" do not generally have these systems. Some proprietary technologies exist which enable specific platforms to do runtime kernel patching.

        • pjc50 8 years ago

          > What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches.

          In theory, but Linux systems tend to do very little IPC other than X11, pipelines, and IP-based communication, where the protocols tend to support running with different versions.

          In practice you can achieve multi-year uptimes with systems until you get a mandatory kernel security update.

          • majewsky 8 years ago

            How can you have a multi-year uptime unless you willfully ignore kernel security updates? In this day and age, year-long uptimes are an anti-pattern (if only because you cannot be sure whether your services are actually reboot-safe).

            • jschwartzi 8 years ago

              It's easy. You gather information about what the risks and hazards are for each vulnerability and then pragmatically decide whether there are any unacceptable risks after you mitigate with other layers of security.

              It's a really common engineering task to do this and I'm not at all surprised that someone trying to maintain uptime would do so. Honestly it's more mature than updating every time, because each change also introduces more potential for regression. If your goal is to run a stable system, you want to avoid this unless the risk of not patching outweighs it.

              • ams6110 8 years ago

                But with "yum check-update" or the equivalent apt-get incantation saying you have dozens of security updates every week or two, reading the release notes for all of them and deciding which ones can be skipped safely in your environment is too much work. Far easier to just apply all updates every two weeks or monthly or whatever your schedule is, and then reboot.

              • tluyben2 8 years ago

                Fully agree here; a lot (most?) of patches and updates are simply not exploitable in the respective server use case, so why should I incur the risk of downtime to apply them?

            • cat199 8 years ago

              you willfully ignore kernel security updates.

              If my system is closed to the public world, has a tiny amount of external services, and I am aware of the specific bug delta since system release and what mitigations may or may not be required, I can leave it running as long as I choose to accept the risk. Cute phrases like 'pattern' and 'anti-pattern' are rules of thumb, not absolute truths.

            • bri3d 8 years ago

              Ksplice or KernelCare

              • rkeene2 8 years ago

                Kernel Live Patching (KLP) has been in mainline since 4.4. I've used it to patch various flaws in my Linux distribution since rebooting the running cluster is more tedious.

            • Valmar 8 years ago

              kexec?

              • AnIdiotOnTheNet 8 years ago

                kexec doesn't keep your services running, it just allows the kernel to act as a bootloader for another kernel.

          • mehrdadn 8 years ago

            X11 (or other window-related tooling) was exactly what I was thinking of actually, because every time I do a major Linux (Ubuntu) update I can't really launch programs and use my computer normally until I reboot. It always gets finicky and IPC mismatch is the best explanation I can think of.

            • gnfurlong 8 years ago

              Are you sure this isn't more the Desktop Environment/Display Manager than X11? Or otherwise something to do with your use case?

              I've primarily been using AwesomeWM for the last few years and occasionally XFCE (both on Arch Linux), and I can't recall ever experiencing what you describe.

            • rleigh 8 years ago

              That's D-BUS being a terribly specified and poorly implemented mess. Everything underneath should be solid.

              • mehrdadn 8 years ago

                I mean, OK, but if Windows's GUI (or PowerShell, or whatever) crashed or misbehaved upon update, would you be satisfied with "that's win32k/DWM/whatever being a poorly implemented mess; everything underneath should be solid"?

                • rleigh 8 years ago

                  No. D-BUS is a travesty and blight upon the Linux desktop, and with systemd, every Linux system. It's the most fragile and nasty IPC system I've encountered. There are several better alternatives implemented by competent people, so there's really no excuse for its many defects.

                  • andrewshadura 8 years ago

                    Roger, I think we both know D-Bus has been rock stable for the last 7 years, or possibly more, and Simon McVittie, its current maintainer, is a highly skilled and competent engineer.

                    The situation is different with the systemd maintainers, whose communication style used to be questionable too often and whose software had some nasty bugs, but I must admit that despite those problems they've also built software that is nevertheless reliable, even though it took them a lot of time to get there.

                    This, honestly, is the most disappointing statement I read from you in the recent couple of years. And I'm saying this as a person who used to respect you a lot. I find your lack of respect together with the willingness to spread lies like this quite appalling.

        • postingatwork 8 years ago

          > What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches. Is this correct?

          At first glance this is true, but you can guard against it in several ways. If your process only forks children, they already inherit the loaded libraries from the parent as part of the forked address space. Alternatively you can pass open file descriptors between processes. Another option is to use file-system snapshots, at least if the filesystem supports them.

          Yet another option is to not replace individual files but complete directories and swap them out via RENAME_EXCHANGE (an atomic swap, available since kernel 3.15). As long as the process keeps a handle on its original working directory it can keep working with the old version even if it has been replaced with a new one.

          Some of those approaches are tricky, but if you want to guard against such inconsistencies at least it is possible. And if your IPC interfaces provide a stable API it shouldn't be necessary.
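
          For the RENAME_EXCHANGE part, a hedged Python sketch (my own, not from the parent; assumes Linux >= 3.15 and a glibc new enough to expose renameat2(), i.e. 2.28+; the "app"/"app.new" directory names are made up):

              import ctypes, os

              libc = ctypes.CDLL("libc.so.6", use_errno=True)
              AT_FDCWD = -100            # "relative to the current working directory"
              RENAME_EXCHANGE = 1 << 1   # from <linux/fs.h>

              # Stage the new version next to the old one:
              # (populate "app.new" with the updated files here)

              # ...then atomically swap the two directory names. A process that kept a
              # handle on the old "app" directory keeps seeing the old contents.
              if libc.renameat2(AT_FDCWD, b"app", AT_FDCWD, b"app.new",
                                RENAME_EXCHANGE) != 0:
                  err = ctypes.get_errno()
                  raise OSError(err, os.strerror(err))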

          > And then there's FILE_SHARE_DELETE which allows deletion

          That has some issues when the file is mmaped. If I recall correctly you can't replace it as long as a mapping is open.

        • vetinari 8 years ago

          > What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches.

          While this is true, I've never seen it be a problem. If two programs use IPC, they usually use either a stable or a compatible protocol.

          To make things even more complicated, you can have two programs, each in its own container, or statically linked, or with their private bundles of libraries, doing IPC, and then they are free to have different versions of the underlying libraries, while the users still expect them to work fine.

        • ptero 8 years ago

          In principle, yes, IPC can fail between different versions of the same software. However, the chances that communication will fail between a new version and some other utility are IMO much higher. A surprise comm failure between two copies of the same software (even different revisions) usually makes developers look pretty bad.

          Some versions are known to be incompatible, and most Linux distributions do a very good job of recommending and doing a restart of affected services in a way transparent to users. I have been running Linux at home and at work for years, almost never restart those workstations and, as far as I can tell, have never had problems from piecemeal upgrades. My 2c.

        • asveikau 8 years ago

          Here is the simplest way I can put it: When you delete a file in NT, any NtCreateFile() on its name will fail with STATUS_DELETE_PENDING until the last handle is closed.[1] Unix will remove the name for this case and the name is re-usable for any number of unrelated future files.

          [1] Note that is not the same as your "must be reachable via some path". It is literally inaccessible by name after delete. Try to access by name and you get STATUS_DELETE_PENDING. This is unrelated to the other misfeature of being able to block deletes by not including FILE_SHARE_DELETE.

          • mehrdadn 8 years ago

            "Reachable" doesn't mean "openable". Reachable just means there is a path that the system identifies the file with. There are files you cannot open but which are nevertheless reachable by path. Lots of reasons can exist for this and a pending delete is just one of them. Others can include having wrong permissions or being special files (e.g. hiberfil.sys or even $MFTMirr).

            • asveikau 8 years ago

              I would be kind of surprised if "the system" cares much about the name of a delete pending file. NT philosophy is to discard the name as soon as possible and work with handles. I was under the impression that ntfs.sys only has this behavior because older filesystems led everybody to expect it.

              • mehrdadn 8 years ago

                Well if you look at the scenario you described, I don't believe the parent folder can be deleted while the child is pending deletion. And if the system crashes, I'd expect the file to be there (but haven't tested). So the path components do have to be kept around somewhere...

                • asveikau 8 years ago

                  It's true that NT won't let you remove a directory if a child has a handle open. But I suspect you are getting the reasoning backwards. The directory is not empty as long as that delete-pending file is there. Remove this ill-conceived implementation detail (and it is that) and this and other problems go away.

                  There is also an API that retrieves a filename from a handle. I don't think it guarantees the name be usable though.

                  It's easy to imagine a system that works the way I would have it, because it exists: Unix. You can unlink and keep descriptors open. NT is very close to being there too, except for these goofy quirks which are kind of artificial.

          • tfigment 8 years ago

            This has led to some interesting observations for me on Linux when I've had really large log files that were "deleted" while still in use (I think cat /dev/null > file will do this). Tools like du now cannot find where the disk usage actually is. Only on restart of the app does usage show correctly again. Kinda hard to troubleshoot if you were not aware this was what happened.
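
            For what it's worth, that situation comes from removing (rm/unlink) a file something still has open, rather than truncating it; a minimal Python sketch of what du then misses (the file name is hypothetical):

                import os

                with open("big.log", "wb") as f:          # a stand-in "huge" log file
                    f.write(b"x" * (64 * 1024 * 1024))

                fd = os.open("big.log", os.O_RDONLY)      # the daemon still has it open
                os.unlink("big.log")                      # the "rotation" that bites you

                st = os.fstat(fd)
                print(st.st_nlink, st.st_size)            # 0 links, yet 64 MiB still allocated
                # du/ls no longer see it; `lsof +L1` or /proc/<pid>/fd still do.
                os.close(fd)                              # only now is the space freed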

            • asveikau 8 years ago

              I agree that this is a drawback or a common gotcha for the Unix behavior which would be more user visible with the NT behavior, but to anyone advocating the Windows way I would ask: is it worth getting this fringe detail "right" by making every unlink(x); open(x, O_CREAT ...); into a risky behavior that may randomly fail depending on what another process is doing to x? On Windows, I have seen this type of pattern, a common one because most people aren't aware of this corner case, be the cause of seemingly random failures that would be rather inexplicable to most programmers. (Often the program holding x open is an AV product scanning it for viruses, meaning that any given user system might have a flurry of race condition causing filesystem activities that may or may not conflict with your process.)

        • jrs235 8 years ago

          >So you can't delete in-use files entirely because then they would be allocated disk space but unreachable via any path.

          Ah. This must be why I can't permanently delete files in use by a program but can sometimes "delete" them and send them to the recycle bin.

        • beagle3 8 years ago

          > So you can't delete in-use files entirely because then they would be allocated disk space but unreachable via any path.

          Isn't there a $MFT\\{file entry number} virtual directory that gives an alternate access path to each file? Wouldn't that qualify as "a way to access the file?"

          Also, you might say that in practice Linux abides by the same rule - the old file can be referenced through some /proc/$pid/fd/$fd entry.

        • tzahola 8 years ago

          >What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches.

          That's why you should restart those programs that were using the library. You can find this out via `lsof`.

          • mehrdadn 8 years ago

            Really? Somehow procure a list of all libraries that were updated in a system update, go through each one, find out which program was using it, and kill that program? Every single time I update? You can't be serious.

            • beagle3 8 years ago

              Apt does this automatically for you on upgrade.

              • jlgaddis 8 years ago

                "checkrestart" also exists (on some distributions) for exactly this same purpose.

            • avar 8 years ago

              What you're describing is trivially done on any *nix system with a mature package manager, if it isn't doing this already:

              1. Do the upgrade, this changes the files.

              2. You have a log of what packages got upgraded.

              3. Run the package system's introspection commands to see what files belonged to that package before & after

              4. Run the package system's introspection commands to see what reverse depends on those packages, or use the lsof trick mentioned upthread.

              5. For each of those things restart any running instance of the application / daemon.

              Even if something didn't implement this already (most mature systems do), this is at most a rather trivial 30-line shell script.
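
              Something along these lines for steps 3-5 (a rough sketch of my own, relying on the fact that a replaced-but-still-mapped library shows up as "(deleted)" in /proc/<pid>/maps; tools like checkrestart/needrestart do the real version of this):

                  #!/usr/bin/env python3
                  import glob

                  stale = {}   # pid -> set of old libraries still mapped
                  for maps in glob.glob("/proc/[0-9]*/maps"):
                      pid = maps.split("/")[2]
                      try:
                          for line in open(maps):
                              if ".so" in line and line.rstrip().endswith("(deleted)"):
                                  stale.setdefault(pid, set()).add(line.split()[-2])
                      except OSError:
                          pass   # process exited, or we lack permission

                  for pid, libs in sorted(stale.items()):
                      print(pid, *sorted(libs))
                      # a real script would map the pid back to its service and restart it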

        • code_duck 8 years ago

          In a case where one could get some sort of inconsistency because of different library versions, you restart the applications. That’s the point: this can be handled by restarting the applications, not the entire operating system.

          • ksk 8 years ago

            Is there a way to get a nice helpful popup telling me which applications and services to restart?

            • code_duck 8 years ago

              Yes, there are many ways to determine that on the command line. As far as a GUI goes, I couldn’t say, as that isn’t how I do system administration.

              I’m not sure what scenario you are envisioning. Usually upgrades are handled via the distribution and its package manager, and the maintainers take care of library issues. It’s not like Windows, where you go all over downloading packages from websites and installing them over each other.

              • ksk 8 years ago

                I was responding to your own comment

                >In a case where one could get some sort of inconsistency because of different library versions, you restart the applications.

                I am envisioning the same scenario you replied to!

                • code_duck 8 years ago

                  Yes, but for some reason you want a GUI alert to tell you important things, which is a foreign concept to me. What I envision is knowing what is running on your server and updating programs purposefully, with knowledge of how they interact and what problems version inconsistency could cause.

                  • ksk 8 years ago

                    You're the one proposing the enumeration of affected programs and services and restarting them as a solution!! How is using a GUI a foreign concept? Are you debating the merits of a GUI in 2018?

                    In any case, my assumption here is we're trying to help the user and give them an easy way to know what to do, instead of leaving their software in an undefined state.

                    • code_duck 8 years ago

                      Great. For one, I was initially confused because I thought you were the author of the post I replied to. Next, yes, I think we should do that.

      • Razengan 8 years ago

        Heck, on Windows I couldn't even rename audio/video files when playing them in an app, but I can on macOS, without anything crashing (except some stubborn Windows-logic apps like VLC which will fail to replay something from its playlist that has since been renamed, but at least on macOS it will still allow you to rename or move the files while they're being played.)

        It's small details like these that make so much difference in daily convenience.

        • ksk 8 years ago

          I think a general purpose UX should err on the side of "make it difficult for non-technical users to make mistakes"

          Renaming/deleting files in use is one of those things that we nerds like to complain about, but it makes sense when you think of an accountant who has an open spreadsheet and accidentally deletes the folder with that file. For average non-technical people (on any OS) I would say it makes sense to block that file from being deleted.

          • rubatuga 8 years ago

            I see you’ve never actually experienced it? It is actually more intuitive for the average user, as the file name is updated across every application immediately. In fact, you can actually change the file name from the top of the window directly.

            • nomel 8 years ago

              I have experienced this many times in MacOS after deleting a file/folder and replacing it with some other version, like one downloaded from an email. After a while, I realize that I'm working from ~/.Trash and the attachment I just sent didn't include the changes I had been making for the last hour.

              I've had this happen in bash also, where I modify some script in an external editor then try to run it, only to realize that I'm running from the trash, even though the bash prompt naively tells me I'm in ~/SomethingNotTrashFolder.

              Intuitive would be "Hey, this file you're working on was just moved to the trash. There's a 99.9999% chance you don't want to do this." rather than hoping the filepath is visible, and noticed by dumb chance, in the title bar, since not many people periodically glance at the file they have open to verify it's still the file they want open.

            • ksk 8 years ago

              How does the user know that the file is open? Also, is it consistent across network folder renames too?

          • Sylos 8 years ago

            No idea if the OS design plays into this or if it's just an application design convention, but on desktop Linux how it often works (for example with KDE programs) is that if a program has a file open which was moved (including moved to the trash), then the program pops up a little persistent notification with a button offering to save the content you still have in the program to the location where the file used to be. That effectively allows you to recover from such mistakes without hindering you from moving/deleting the file.

      • boznz 8 years ago

        > On Windows, once the file is open, it is that filename that is open; You can't rename or delete it;

        I am not sure about deletion, but one of my programs has been using the ability to rename itself to do updates for the last 15 years.

      • quietbritishjim 8 years ago

        This doesn't fully explain why a reboot is not required on Linux. If a *nix operating system updates sysfile1.so and sysfile2.so in the way you describe, then there will be some time where the filename sysfile1.so refers to the new version of that file while sysfile2.so refers to the old version. A program that is started in this brief window will get mixed versions of these libraries. It is unlikely that all combinations of versions of libraries have been tested together, so you could end up running with untested and possibly incompatible versions of libraries.

        • beagle3 8 years ago

          > This doesn't fully explain why a reboot is not required on Linux.

          Of course there is a theoretical possibility that this will happen; however, in practice, updates (especially security updates) on Linux happen with ABI-compatible libraries. E.g. on Debian/Ubuntu:

          apt-get update && apt-get upgrade

          Will generally only do ABI compatible updates, without installing additional packages (you need 'dist-upgrade' or 'full-upgrade' for that).

          Some updates will go as far as to prevent a program restart while updating (by temporarily making the executable unavailable).

          Firefox on Ubuntu is an outlier - an update will replace it with one that isn't ABI compatible. It detects this and encourages you to restart it.

          All in all, it's not that a reboot is never required for Linux theoretically - it is that, practically, you MUST reboot only for a kernel update, and may occasionally need to restart other programs that have been updated (but are rarely forced to).

        • snuxoll 8 years ago

          This generally should never happen; Linux distributions don’t wholesale replace shared objects with ABI-incompatible versions - sonames exist to protect against this very issue.

        • CJefferson 8 years ago

          I had a program with a rarely reported bug that turned out to be exactly this, caused by lazy loading of .so files. Switching to eager loading made it go away.

      • nullymcnull 8 years ago

        > On Windows, once the file is open, it is that filename that is open; You can't rename or delete it

        It's simple for any application to open a file in Windows such that it will allow a rename or delete while open - set the FILE_SHARE_DELETE bit on the dwShareMode arg of the win32 CreateFile() function. In .NET, the same behaviour is exposed by File.Open / FileShare.Delete.

      • aussie1233 8 years ago

        That is incorrect. On Windows it just depends on how you call the Win32 API and what parameters you specify. Many options there - in the end it's just an object in the NT kernel space.

      • sclangdon 8 years ago

        > On Windows, once the file is open, it is that filename that is open; You can't rename or delete it

        You can rename a file when it's open. And once it's renamed, you can delete it.

      • ainiriand 8 years ago

        Maybe this is also why 'rm -rf /' is so effective in destroying your system, isn't it?

      • yuhong 8 years ago

        What is really irritating is when, for example, an update that only changes mshtml.dll requires a reboot because a program unnecessarily depends on it. These are not as common as they used to be, though.

    • josteink 8 years ago

      > Speaking of which, why do so many things require reboot to update on Windows?

      Can't speak for everyone else, but Windows fully supports shared file access, which prevents the kind of file locks that cause reboot requirements.

      The problem is that the default file-share permissions in the common Windows APIs (unless you want to get verbose in your code) mean that the opening process demands exclusive access to, and locking of, the underlying file for the lifetime of that file handle.

      So unless the programmer takes the time to research that 1. these file-share permissions exist, 2. which permissions are appropriate for the use-cases they have in their code, and 3. how to apply these more lenient permissions in their code...

      Unless all that, you get Windows programs which create exclusive file locks, which again cause reboot requirements upon upgrades. Not surprising really.

      In Linux/UNIX, the default seems to be the other way around: full sharing unless locked down, and people seem prepared to write defensive code to lock down only when needed, or to have code prepared for worst-case scenarios.

    • pjc50 8 years ago

      Windows executables are opened with mandatory exclusive locking. So you can't overwrite a program or its DLLs while any instances of it are running. If a DLL is widely used, that makes it essentially impossible to update while the system is in use.

      There is a registry key which allows an update to schedule a set of rename operations on boot to drop in replacement file(s). https://blogs.technet.microsoft.com/brad_rutkowski/2007/06/2...
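
      That mechanism is driven through MoveFileEx with MOVEFILE_DELAY_UNTIL_REBOOT; a small ctypes sketch of scheduling a boot-time replacement (illustrative paths of my own, and it needs admin rights since it writes under HKLM):

          import ctypes
          from ctypes import wintypes

          k32 = ctypes.WinDLL("kernel32", use_last_error=True)
          k32.MoveFileExW.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR, wintypes.DWORD]

          MOVEFILE_REPLACE_EXISTING = 0x1
          MOVEFILE_DELAY_UNTIL_REBOOT = 0x4

          # Queue "replace locked.dll with locked.dll.new at next boot" in
          # PendingFileRenameOperations instead of doing the rename now.
          ok = k32.MoveFileExW(r"C:\app\locked.dll.new", r"C:\app\locked.dll",
                               MOVEFILE_REPLACE_EXISTING | MOVEFILE_DELAY_UNTIL_REBOOT)
          if not ok:
              raise ctypes.WinError(ctypes.get_last_error())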

    • vetinari 8 years ago

      > Speaking of which, why do so many things require reboot to update on Windows?

      We are getting there on Linux too, with atomic or image-based updates of the underlying system. On servers you will have (or already have) A/B partitions (or ostrees), on mobiles and IoT too; some desktops (looking at Fedora) also prefer the reboot-update-reboot cycle, to prevent things like killing X while doing your update and leaving your machine in an inconsistent state.

      macOS also does system updates with reboot, for the same reasons.

    • Shivetya 8 years ago

      I used to joke that Windows was alone in this issue, my work laptop being a prime example, but even Apple tends toward reboots more often than not, especially as of late. Fortunately both can do it during slow periods, as in overnight, and make updates nearly invisible to users.

    • jameshart 8 years ago

      So that's a fine question to ask, and you've received many fascinating answers, but can I just suggest that this case - that is, applying patches that relate to the security of your processor cache - is a very fine reason for requiring a reboot, since it will ensure that your processor cache starts out fresh and all behaviors that cause data to be placed there are correctly following the patched behavior.

    • faragon 8 years ago

      The main reason is that in Windows executable files and dynamic libraries (.exe and .dll) are locked while a process is using them, while in other systems, e.g. Linux, you can delete them from disk. The only absolute need for reboot should be an OS kernel update (there are cases where a kernel could be updated/patched without a reboot).

    • Momquist 8 years ago

      I think a better question would be: why does Windows need multiple successive reboots? Too often my experience can be summed up as: update-reboot-reboot-update-reboot... ad nauseam.

      At least on *nix, even when you need a reboot, once is enough.

    • gaius 8 years ago

      To force a reinitialisation of all security contexts. Same reason that many websites make you log in again immediately after changing your password (which, interestingly, Windows doesn't).

      Historically (Windows 95 and earlier) reboots were required to reload DLLs and so on, but that's not really true anymore. Still, a lot of installers and docs say to reboot when it's not really necessary, as a holdover from then.

      • lucb1e 8 years ago

        I was under the impression that the reason is what u/beagle3 mentions (in a sibling comment to yours): open system files. I'm curious to see your comment on what he describes, as what you mention (reloading some security context) does not seem to be the whole truth. That websites make one log in again after changing your password has nothing to do with this.

        • gaius 8 years ago

          > That websites make one log in again after changing your password has nothing to do with this.

          No, it is exactly the same principle: something has changed, therefore invalidate all existing contexts. That's far less error-prone than trying to recompute them; what happens, e.g., if a resource has already been accessed in a context that is now denied? Security 101.

          • lucb1e 8 years ago

            I don't see how changing my password changes a "security context". I don't suddenly get more or fewer permissions.

            As for logging other places out, that's a design choice. People change password either because they routinely change theirs (they either need to or choose to), or because of a (suspected) compromise. In the latter case you'll probably want to log everyone else out (though, who says you're logging out the attacker and not the legitimate user?) and in the former case you shouldn't (otherwise changing your password becomes annoying and avoided). The interface for changing the password could have a "log out all sessions" checkbox or it could just be a feature separate from changing your password.

            No, it's not as simple as you put it. No need to condescendingly pass it off as "security 101".

  • chli 8 years ago

    Same thing happened to me this weekend, and we are not alone [1]. The worst part is that it's actually the second time on this computer (Kaby Lake i3 on a MB with Intel B250 chipset). I had the same issue in December last year (exact same behaviour with probably an earlier version of that hotfix).

    I'm running with the Windows Update service disabled till this is fixed for good!

    [1] https://answers.microsoft.com/en-us/windows/forum/windows_10...

    • dhimes 8 years ago

      I have not been able to disable the update service. I'm supposed to be able to, but damn if I don't open my computer in the morning and see everything closed (and lock files all over the place) and all kinds of annoying shit like this.

      I actually like Win 10, but it's shit like this that keeps me from becoming a true convert. Oh, for $X00 I can get enterprise update, but IMO that's just Win 10 home being used as ransomware. /rant

cesarb 8 years ago

In a related development, there are proposed patches to the Linux kernel (not yet merged) to blacklist the broken microcode updates: https://www.spinics.net/lists/kernel/msg2707159.html

That patch disables the use by the kernel of the new IBPB/IBRS features provided by the updated microcode, when it's of a "known bad" revision. Since Linux prefers the "retpoline" mitigation instead of IBRS, and AFAIK so far the upstream kernel (and most of the backports to stable kernels) doesn't use IBPB yet, that might explain why Linux seems to have been less affected by the microcode update instabilities than Windows.

Also interesting: that patch has a link to an official Intel list of broken microcode versions.
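
For the curious, the approach in that patch is essentially a table lookup: known-bad (model, stepping, microcode revision) tuples taken from Intel's list, checked before the kernel exposes IBRS/IBPB. A simplified, illustrative C sketch - not the actual kernel code, and the table entries below are just examples, not the authoritative list:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct bad_microcode {
        uint8_t  model;     /* CPU model (family 6) */
        uint8_t  stepping;
        uint32_t revision;  /* microcode revision known to be unstable */
    };

    /* Example entries only; the real list comes from Intel's guidance. */
    static const struct bad_microcode blacklist[] = {
        { 0x4e, 0x03, 0xc2 },
        { 0x5e, 0x03, 0xc2 },
    };

    static bool spectre_microcode_is_bad(uint8_t model, uint8_t stepping,
                                         uint32_t revision)
    {
        for (size_t i = 0; i < sizeof(blacklist) / sizeof(blacklist[0]); i++) {
            if (blacklist[i].model == model &&
                blacklist[i].stepping == stepping &&
                blacklist[i].revision == revision)
                return true;  /* don't use IBRS/IBPB from this microcode */
        }
        return false;
    }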

  • Valmar 8 years ago

    > In a related development, there are proposed patches to the Linux kernel (not yet merged) to blacklist the broken microcode updates

    Linus probably won't pull it until it's truly known to be stable, because of his attitude towards having decent quality code and not causing needless system instability.

    Without Linus... who knows what would have happened by now.

    • cesarb 8 years ago

      They are on the "tip" tree (https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/...), so they'll probably be sent to Linus as soon as the merge window opens (Linux 4.15 has just been released, so the merge window should open soon). I expect these patches to be on 4.16, and also to be backported to the stable releases (4.15.x and others).

      But yeah, upstream Linux kernel development is taking it slow. As far as I can see, variant 3 mitigations (PTI) are already in, variant 2 mitigations are partially in (retpoline) and partially not (the microcode dependent ones), and variant 1 mitigations are still under discussion.

      • jerf 8 years ago

        "But yeah, upstream Linux kernel development is taking it slow."

        Taking it slow seems very appropriate to me. This seems to me to have been a case of everybody grossly overestimating the short-term portion of the catastrophe, and underestimating the long term.

        In the short term, the only people who were going to be plausibly affected in the next three to six months are people on shared hosting of some sort where you may share a server with somebody else's untrusted code, where an accelerated fix is in order, but also something that can be centrally handled. I'm not that worried in the next three to six months that my personal desktop is somehow going to be compromised by either Meltdown or Spectre, and personally, if I see a noticeable performance issue I may well revert the fixes (I'm on Linux), because first you have to penetrate my defenses to deliver anything anyhow, then you have to be in a situation where you're not going to just use a root exploit, which probably means you're in a sandbox or something which means it's that much more difficult to figure out how to exploit this. For most users, uses, and systems, spectre and meltdown aren't that immediately pressing.

        Meanwhile, in the long term this may require basically redesigning CPUs to a very significant degree; there is no software patch that can fix the underlying issues. It is difficult to overstate the long term impact of this class of bugs. IMHO the real problem from the jousting match with Linus and Intel last week isn't that Intel's patches today aren't quality code, but that it makes me concerned that they're just going to sweep this fundamental problem under the rug. As I said in another post on HN, I fully understand that remediating this is going to be years, and I don't expect Intel to have an answer overnight, or a full solution in their next "tock". But if they're not taking this seriously, we have a very large long-term problem. We're only going to see more leaks in the long term.

        • Mister_Snuggles 8 years ago

          I read somewhere that people have developed POCs of these using JavaScript. At minimum, you'll want to keep your browser up to date as there are mitigations happening there too. Who knew that exposing high precision timers to untrusted JavaScript would be a bad idea?

          Apart from browsers, it's fortunately pretty easy to avoid running code you don't trust on your devices.

          • jerf 8 years ago

            From what I've seen, what the POCs can actually do is not worth running around with your hair on fire over.

            Note I did not say there is no reason to be concerned about Meltdown and Spectre... just that for most users, uses, and systems, it's not that important. In the next three-to-six months, if you care about security at all, unless you are already running a tip-top tight operation, your money and effort are better spent defending against the many already-realistic threats, rather than worrying about the vector that may someday be converted into a realistic threat. Meltdown isn't what is going to drag your business to a halt next week; it's that ransomware that one of your less-savvy employees opened while mapped to the unbacked-up, world-writable corporate share that has all the spreadsheets your business runs on. At the moment, the net risk of applying the Meltdown fix comfortably exceeds, by several orders of magnitude, the risk that Meltdown itself poses.

            And my point is precisely that for most users and uses, that panic was not justified. Those for whom that is not true (VM hosting companies) already know they need to be more aggressive. There was no point in pushing out patches that nearly bricked some computers.

            • Mister_Snuggles 8 years ago

              I agree with your points. In fact, I made the same argument about removing a Meltdown/Spectre-related patch that caused issues for one of our applications - "the machine doesn't execute any untrusted code, so this patch isn't strictly necessary."

      • cesarb 8 years ago

        Update: the merge window has started, and that blacklist has just been merged: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

    • speedie 8 years ago

      Everyone would have politely installed their "fixes". This situation shows why Linus was swearing.

      • Valmar 8 years ago

        Exactly. He's very intelligent about how he manages the kernel, which is precisely why it is preferred by a majority of businesses throughout the world, and is the absolute #1 in the supercomputer world, for the top 500:

        https://www.top500.org/statistics/details/osfam/1

        • speedie 8 years ago

          I was just pointing out why politeness gets you nowhere when dealing with eejits. Linus improved on that Capone saying about a kind word and a gun.

ComodoHacker 8 years ago

>However, Intel does not appear too concerned that the incident will affect its bottom line - the company expects 2018 to be a record year in terms of revenue

There is an interesting paradox in our industry. If you pay enough attention (read: money) to security, you will be late to the market, your costs will be high and you lose profit. If you don't pay enough attention, you take the market, get your profits, but your product (be it hardware or software) and reputation will be screwed later. And worst of all: there's never enough attention to security.

So by simple logic, an optimal strategy is to forge your product quickly, take your profits within a [relatively] short period and vanish from the market. I guess we'll see this strategy executed by IoT vendors when the market starts to punish them for their bad security.

For Intel, that "long period" just happened to be REALLY long.

  • AnIdiotOnTheNet 8 years ago

    I doubt Intel will see serious punishment in the market. As usual, there will be a lot of wailing and gnashing of teeth but when push comes to shove most people will prioritize nearly everything over security.

    All markets work like this. People bitch about the quality of products, but still buy the cheap stuff.

    • javajosh 8 years ago

      This will be true until something uses Meltdown in the wild to cause massive damage. When a digital superflu comes, businesses and individuals will be faced with a choice: continue to use Intel and be vulnerable to a flu that is literally wiping out businesses, exchanges, hospitals, etc or replace ALL of their hardware with AMD.

      Interestingly, I think AMD has a lot of motive to create such a superflu, or at least encourage its creation.

      • bmer 8 years ago

        If AMD were proved to have been involved in the creation of such malware, to what extent could they be litigated against?

        • javajosh 8 years ago

          Well, it would be highly illegal (and immoral) to create such a program, regardless of who did it.

  • _jal 8 years ago

    Intel is TBTF. Like the banks, but via a different mechanism: they appear to have reached consequence-immunity by becoming critical infrastructure.

    • curun1r 8 years ago

      The same could've been said about Microsoft 10-15 years ago. Now, we could probably get by without them.

      Intel may be too big for an abrupt failure, but they can absolutely fail in a decade-long slide into obscurity.

    • Feniks 8 years ago

      Feature of the industry. It would take a new corporation entering the chip industry hundreds of billions of dollars and decades to get where Intel is today.

  • doktrin 8 years ago

    Your conclusion isn't in agreement with the section you quoted, so are you saying that Intel will be punished by the market in the mid to distant future (after 2018)?

stinos 8 years ago

"Here's a patch" - "Here's a patch to disable that other patch" - ...

What's next? Repeat? Sounds like this could turn into a maintenance nightmare quickly - also because I've introduced things like that myself in the past, and that was for normal applications, not a kernel or OS. Somewhere, someday, there's usually this one exception for which none of your rules hold true and the thing blows up in your face. Anyway, I'd love to see the actual code for this. Not a chance, probably?

  • hishnash 8 years ago

    I'm really wondering - they had more than 6 months to do these patches and they did not bother testing on a good number of systems? It's not like MS + Intel don't have enough money to buy a few thousand testing machines and get some testers on it.

    • prewett 8 years ago

      Have you released a bugfix to a large application? I've had one line fixes break some use case I hadn't even heard of before, and it doesn't always show up right away, either. Intel's fix has to work on every application in every version of Windows, macOS, Linux, for multiple versions of processors with multiple different chipsets. And it has to be done yesterday. That's a nightmare scenario.

    • x0x0 8 years ago

      I came here to ask the same thing. How did these folks squander six months?

      • peoplewindow 8 years ago

        I think Spectre may have appeared later, after Meltdown? Remember the investigations into what's possible were proceeding in parallel with the attempted fixes.

        Also, CPU design changes take a long time. 6 months may seem like a long time from the perspective of HackerNews node.js type hackers, but it's a bit harder to patch decades' worth of CPU microcode than a website.

        • hishnash 8 years ago

          Reading over Google's Project Zero page, it reads as if they told AMD about the issues on 2017-06-01 - why would they do this if it were Meltdown only?

          also look at the exploit numbering:

          Variant 1: bounds check bypass (CVE-2017-5753)
          Variant 2: branch target injection (CVE-2017-5715)
          Variant 3: rogue data cache load (CVE-2017-5754)

          according to https://cve.mitre.org/cve/identifiers/ this is sequence based so `Variant 2` was recorded to CVE before v1 and v3.

          I get that it may take a long time (that is fine, even if the patches took a few more days); what I don't get is that they released it to production (server) environments seemingly without testing. Surely even rudimentary testing (deploying on a few thousand different server platforms for at least a few hours) should be something that Intel does for all microcode updates - after all, they are rather more important than Node.js packages, as you point out.

          • peoplewindow 8 years ago

            I haven't heard of microcode updates that hurt stability before. Presumably the collapse of the embargo caused them to do an accelerated release, skipping their usual long testing cycle.

        • vbernat 8 years ago

          Currently, they are not patching a decade's worth of CPU microcode, since we have zero working microcode. And, previously, released microcode updates only went down to Ivy Bridge EP (~2014).

    • im3w1l 8 years ago

      Only reason I can think of is that they didn't immediately realize how much of a headache it would be.

PerusingAround 8 years ago

I'm amazed at how Intel's stock price still keeps going UP, despite all these problems... just WOW.

  • adtac 8 years ago

    The recent jump is because they released a good earnings report.

    https://www.cnbc.com/2018/01/26/intc-intel-stock-jumps-to-hi...

  • collinmanderson 8 years ago

    I have a feeling there's going to be a lot of demand for future Intel hardware that's immune to spectre and meltdown. I think it might cause _more_ sales of Intel chips in the future, not fewer.

    • xigency 8 years ago

      Regardless of sales, I would think damage to their image would be a concern. Maybe it's not a concern to investors because their "PR nightmare" turned out to be softball for them, and it's been hard to pin anything on Intel when they keep pointing fingers in all directions.

      I think it's our responsibility as technology literate folks and decision makers to explicitly highlight their failures so that mistakes and poor handling like this are not normalized.

      • phaser 8 years ago

        It would be a concern if real competition was allowed in the x86_64 CPU space. Monopolistic patents, in this case, create the opposite incentives

  • NelsonMinar 8 years ago

    What, are you going to stop buying Intel processors?

    • ImaCake 8 years ago

      I will, but I am just a guy who builds his own computer every few years.

  • speedie 8 years ago

    I tend to think it's because the folks who trade stock know very well that these "issues" are actually features. I can also bet that the "grilling" they get from the US government is not about the mess the chip flaws are causing, it's about why the "flaws" were publicly announced.

  • ggregoire 8 years ago

    Do you know someone who is going to stop buying Intel's CPUs?

    • tyfon 8 years ago

      I know in my company this has indeed resulted in a full stop in buying Intel and going AMD instead. There is also an active "project" to replace the Intel servers.

      I work in a bank and they are terrified of the possibility of user processes reading privileged memory. Not necessarily out of actual fear but out of the insane amount of paperwork this will require to satisfy the auditors that it is still safe.

      Anecdotally, but you asked for "someone" and here is someone :)

      • tobyhinloopen 8 years ago

        Wasn’t AMD also affected?

        • tyfon 8 years ago

          Not by Meltdown which enabled user processes to read kernel memory. And as we've seen in the aftermath, they have not nearly as much trouble with their patches for Spectre as Intel has had.

    • speedie 8 years ago

      Well, Linus already said perhaps Linux should look at the ARM folks. Should that happen, well, guess what? From my very limited knowledge, 90+% of internet infrastructure runs on Linux.

  • ceejayoz 8 years ago

    Spectre affects AMD, too, so there's no competitor to run to... and CERT was saying at one point that only new processors would fully fix it. They're looking at everyone needing to buy a bunch of replacement products, aren't they?

    • Valmar 8 years ago

      Spectre v2 affects AMD less drastically than it does Intel, because of the architectural differences between Zen and Intel's processors.

HugoDaniel 8 years ago

So much for the embargo period. I guess the BSD people were not so wrong after all. They might as well have just published it as soon as it was found.

  • Valmar 8 years ago

    Intel has had literally months upon months to test and stabilize their microcode and kernel-side patches for Meltdown/Spectre... but it seems like Intel just doesn't give half a shit, if they're having major issues now.

    Intel's actions seem to shout that they have cared far more about releasing Kaby/Skylake X and Coffee Lake in short order, as a response to Ryzen/ThreadRipper, than actually really digging into fixing their major security flaws. Their actions speak of them preferring to keep their market and mindshare over actually fixing any security issues.

    Intel is still so deeply entrenched that they likely believe that they can get away with their lazy approach. They make millions upon millions, if not billions of dollars ~ why should they give a shit, when their monopoly and half-hearted attempt at a solution will get them by? Intel is being strangled by their shitty management, seemingly...

tallanvor 8 years ago

By my reading of the article, Microsoft is disabling some mitigations for Spectre due to instabilities that Intel's microcode update have been causing.

Intel certainly isn't making any friends these days...

notspanishflu 8 years ago

It is not only Microsoft reverting this patch. HP, Dell and Red Hat are doing so as well.

https://www.bleepingcomputer.com/news/microsoft/microsoft-is...

mosselman 8 years ago

So what can I do for my next self-built pc? Get some AMD equipment, or is that not enough?

  • hishnash 8 years ago

    AMD have released patches (to reduce the near-zero risk of exploit to zero) and they are not having any instability issues, so yes, go with AMD.

    For servers, Epyc is now finally available to purchase; for desktop workstations, Threadripper; for desktops, Ryzen.

    For mobile... not many newer CPUs are out yet. Need to wait :(

  • avtar 8 years ago

    I'm eyeing Threadripper for my next build but beyond that I'm going to choose a motherboard vendor based on the level of support they offer in this scenario. Some observations that I'm making:

    * How promptly did they address the issue via official channels, i.e. did they leave users in the dark as they appealed to vendors in their forums (hint: most of them seem to have gone down this route) or did they share updates directly on their official sites, social media accounts, etc.

    * Did they provide some estimates as to when users could expect patches?

    * How much of their product catalogue were they willing to cover with security updates? Since this is a unique security issue with high impact I would have expected them to cover motherboards at least 4-5 years old.

  • ihsw2 8 years ago

    AMD equipment should be fine, current-gen Ryzen/Threadripper is more than adept at workstation tasks and next-gen Ryzen (named Ryzen 2 and Threadripper 2) will edge out any advantage that Intel's CPUs have.

  • ageofwant 8 years ago

    I'll be getting a few high-end Intel CPUs that will soon flood the market on the cheap, for home machines running Arch.

  • chrisper 8 years ago

    Depends on what you are building it for.

    • mosselman 8 years ago

      I have my macbook for work and programming, my PC has windows on it (much to my dislike) and I use it for occasional gaming. I will probably not get new parts anytime soon though as performance is currently fine. Just wondering for when someone asks me to build them a PC.

      • chrisper 8 years ago

        Well, I built my PC with gaming in mind and chose the i7 8700k. But I got it a week before the spectre/meltdown spectacle. I decided to keep it because of its superior singlecore performance.

megaman22 8 years ago

I've not been impressed with my Windows 10 installations of late. All my machines that don't have the Long Term Servicing Branch have had wild instabilities and performance issues the past few months - crazy things, like the task manager taking minutes to launch, and the whole shell periodically crashing. The Fall Creators Update was so bad I had to wipe and start over on some boxes. It's not engendering a lot of confidence in their competence of late.

  • dimmuborgir 8 years ago

    Creators Updates (1703 / 1709) are the culprits. LTSB (1607) has not been upgraded to Creators Update yet and it runs like butter.

nippples 8 years ago

Gee, if only they had a Linus Torvalds type around to block irresponsible commits.

Thimothy 8 years ago

I never got them. The last update on my Windows machine is from Dec 2017. My antivirus is compliant, the registry key is correctly set up, and yet it refuses to update.

I still haven't had the time to debug it, but I wonder how many people are out there with their OS silently refusing to update.

  • epistasis 8 years ago

    I had huge problems with Win 10. Updates would fail and try to install again and again without actually getting installed. Sometimes I would get an opaque error number, but web searches revealed nothing for that number, and it was rare that I could find even an error number. I don't do Windows, and just installed it for VR, and didn't spend that much time in Windows, so I would spend 15-30 minutes a month looking into it, before realizing I had spent more debugging time than VR time that week.

    After probably 9 months of this, and with Windows doing ever more intrusive pop overs whenever I launched it for updates that don't take, I wiped all boot sectors everywhere and installed from scratch. That seemed to work, but it was incredibly frustrating that the boot process was so buggy as was error reporting. I've never encountered a situation like it in the past 15 years of heavy Linux use. Problems there are usually solvable with a couple web searches, even for extremely obscure kernel bugs with obscure packages. Windows refused to tell me anything as did the web.

    • whywhywhywhy 8 years ago

      I built a Windows machine for 3D work and VR just over a year ago, after being a Mac only user for 15+ years. Honestly my Win 10 experience has been the total opposite, it's been stable, fast, minimum update nagging. Overall I've actually been shocked how stable and hassle free the experience has been.

      Maybe I just got lucky with the right combination of hardware.

speedie 8 years ago

So, Linus was right to curse at Intel?

ohiovr 8 years ago

My brother was hit by a recent update to Windows 7 that prevented the machine from booting. He went to Microcenter to buy a hard drive. There were a lot of people doing the same thing for the same reason when he was there.

  • quiq 8 years ago

    I don't use the windows side of my machine very often, but decided to update it last night. Booted fine (OS on SSD), but one of the HDDs with all of the windows files was corrupted. No go with ntfsfix, chkdsk, partition table destroyed. Reformatted it as ext4 and windows doesn't get to touch it anymore. Haven't tested it too much yet but seems to be working fine.

    Remember to use backups!

    • ohiovr 8 years ago

      My brother did have major data loss. And no real backup strategy.

shultays 8 years ago

The amount of fuck-up in this whole issue is mind-blowing. I am getting more surprised with every piece of news I get.

  • dotdi 8 years ago

    Intel has been called out by Linus Torvalds several days ago for the crappy fixes they delivered for GNU/Linux. I would be very surprised if Intel actually shipped proper fixes for Windows. It's a shame, really.

    • DominikD 8 years ago

      And then another developer kindly explained why he's wrong. It's fun to listen to Linus' rants, sure, but he's not always correct.

    • watwut 8 years ago

      Those were not delivered fixes; that was work in progress, and it is still work in progress. And the dude who was "called out" works for Amazon.

      • mrmondo 8 years ago

        FYI - The “dude” from Amazon worked for Intel for 8 years before joining Amazon UK just over a year ago.

        • numbsafari 8 years ago

          The “dude” is also probably working under an insane amount of pressure and being made to feel like he is somehow responsible or at fault for the whole situation. Best not to make it personal from the peanut gallery.

          • sundarurfriend 8 years ago

            > Best not to make it personal from the peanut gallery.

            That's a very uncharitable interpretation of your parent comment, which was simply pointing out his connection and history with Intel.

            • FPGAhacker 8 years ago

              Are you sure it is not you that has made the uncharitable interpretation?

              I read the peanut gallery comment as an agreement that the guy knows what he’s talking about.

              • mrmondo 8 years ago

                FWIW - I took it as a somewhat rude reply but hey it’s the internet so no big deal.

          • mrmondo 8 years ago

            I did not, and do not lay any blame, I only provided context to the prior comment.

          • SmellyGeekBoy 8 years ago

            So the person supposedly single-handedly working on this fix doesn't even work for Intel? That strikes me as... Odd?

            • watwut 8 years ago

              Where did you get single-handedly from? Of course multiple people cooperate on those patches. That is how it was possible for Linus Torvalds to join their discussion; it would all be much less public if only one institution or person worked on it.

      • cornholio 8 years ago

        So what you are saying is that Intel didn't bother to support Linux even with a crappy fix, like they did for Microsoft? Good to hear the Chinese are well supported though.

        • watwut 8 years ago

          That is not what I said, and not what the situation implied. In a situation where you have essentially zero factual information, you went out of your way to make up the most damaging possibility you could.

  • mtgx 8 years ago
    • 534b44a 8 years ago

      I doubt some of the best/most popular players of the USA tech industry dream team will get any real punishment on their own soil. Any fines will give a sense of justice to the public, but they will just be peanuts.

      • eddieD401 8 years ago

        These won't be the same peanuts from the peanut gallery right? I didn't get many to begin with and if we have to share...

      • richardwhiuk 8 years ago

        They aren't going to get fined. There's nothing they can be fined for.

  • imglorp 8 years ago

    I wonder how much all this cleanup will cost in hours, downstream, for all the installed users? Judging by all the grief on this thread it's substantial.

bartl 8 years ago

>Intel, AMD and Apple face class action lawsuits over the Spectre and Meltdown vulnerabilities.

I sure hope Intel will face a class action suit over this botched update. Many professionals have wasted countless hours dealing with this junk.

cjsuk 8 years ago

Not what I wanted to wake up to this morning. I suspect we’re in for a rough ride for a long time thanks to this mess.

chrisper 8 years ago

Just checked for Updates, but there don't seem to be any?

  • taspeotis 8 years ago

    “If you are running an impacted device, this update can be applied by downloading it from the Microsoft Update Catalog website”

    https://support.microsoft.com/en-us/help/4078130/update-to-d...

  • thg 8 years ago

    Just in case you, like me, missed the memo where Microsoft said they'd stop supplying security updates if you have no AV / AV incompatible with the patches installed. The fix to the former is creating the registry entry manually.

    https://support.microsoft.com/en-us/help/4072699/january-3-2...
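
    If you do go the manual route, this is a rough sketch of what "creating the registry entry" amounts to, using the key and value name documented in the KB article above (run elevated, and only if you're confident your AV situation is actually compatible):

        #include <windows.h>

        int main(void)
        {
            /* The QualityCompat flag AV vendors are supposed to set once
               they are compatible with the January 2018 updates. */
            HKEY key;
            DWORD data = 0;
            if (RegCreateKeyExW(HKEY_LOCAL_MACHINE,
                    L"SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\QualityCompat",
                    0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL) != ERROR_SUCCESS)
                return 1;
            RegSetValueExW(key, L"cadca5fe-87d3-4b96-b7fb-a231484277cc",
                           0, REG_DWORD, (const BYTE *)&data, sizeof(data));
            RegCloseKey(key);
            return 0;
        }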

    • morsch 8 years ago

      Bizarre.

      Customers without Antivirus

      In cases where customers can’t install or run antivirus software, Microsoft recommends manually setting the registry key as described below in order to receive the January 2018 security updates.

      • testplzignore 8 years ago

        Sounds like Microsoft can't tell the difference between "has AV installed that will break" and "has no AV installed", which makes sense. It's probably infeasible to reliably fingerprint all existing AV software.

        • Slansitartop 8 years ago

          > Sounds like Microsoft can't tell the difference between "has AV installed that will break" and "has no AV installed", which makes sense. It's probably infeasible to reliably fingerprint all existing AV software.

          For something like this, I think best-effort bad-AV detection would have been best. Seems pretty insane to disable security patching because they can't be 100% certain that you have a compatible AV.

          • anonymfus 8 years ago

            Incompatibility here means unbootable state.

            • Slansitartop 8 years ago

              But it also means that people with perfectly acceptable configurations are left in an insecure state unless they apply an unexpected magic incantation (a registry hack) that most will probably never know about.

              Disabling security patches is not acceptable in the current year without A LOT of nasty and annoying warnings.

      • sundvor 8 years ago

        It makes sense though. Only AV programs that comply may set the setting. Without a compliant AV program, there's nothing to set it - unless you do it manually.

    • Santosh83 8 years ago

      Microsoft won't supply updates even if you have no AV installed, including builtin Defender disabled??

      I thought stopping updates was only for the case of unpatched AVs that did not set the registry key...

      • T-N-T 8 years ago

        Microsoft does not have any way of knowing whether you have an antivirus or not and because the Spectre patch causes a bluescreen on boot if you have an antivirus that's not updated, they require the antivirus set the registry key to say "hey, it's safe to update". Absence of AV means that registry key doesn't get set.

        MS doesn't provide an easy, GUI way of disabling built-in Defender by the way. If you 'disable' defender by using the control panel on windows 10, it only stops its activity temporarily and it can reactivate itself after 24 hours or something like that. You can permanently disable it through registry keys but it's not an officially supported, accepted method to edit the registry by yourself. There's a group policy for 10 Pro and other corp editions though.

        For a normal home user, Defender is never fully disabled. It will deactivate itself if you install a third party antivirus, and reenable itself when you uninstall them. Bottom line, the average user is not supposed to be AV-less.

      • cesarb 8 years ago

        If you have no patched AV, who's going to set the registry key?

        • ptaipale 8 years ago

          If your AV is not patched, the kernel patches should not be installed because you might get a repeating bluescreen.

          So get a patched AV. If you haven't installed another AV, then Defender is there and counts.

mm-vorticesoft 8 years ago

Linus was right

  • anfilt 8 years ago

    Well he is known for calling a spade a spade. No matter how blunt that may be.

    • cat199 8 years ago

      except for when he is wrong.

      Why he gets a pass for being the 'nice guy' and DeRaadt gets the bad rep is still a mystery to me

  • cm2187 8 years ago

    I thought he was angry about the mitigation being disabled by default, not being unstable.

    • icebraining 8 years ago

      Nah, it was more than that: The patches do things like add the garbage MSR writes to the kernel entry/exit points. That's insane. That says "we're trying to protect the kernel". We already have retpoline there, with less overhead.

      • rocqua 8 years ago

        That was about patches to the linux kernel, not the microcode patches.

        • icebraining 8 years ago

          Yes, but as far as I know Linus has made no comment on the microcode patches, so mm-vorticesoft is probably referring to the Spectre patches in general.

          • jononor 8 years ago

            The microcode patches are binary blobs against a proprietary and secret ISA, how can anyone comment on their quality?

            • mjevans 8 years ago

              When I read it I believed that Linus was implying that the suggested mitigation was so insane that it seemed like Intel MIGHT be hiding how broken they believed their hardware was with such over-the-top reactions. As well as indirectly asking if they believed the currently accepted mitigation method (retpoline) was considered ineffective.

    • sundarurfriend 8 years ago

      His overall point was bewilderment at the incompetent and nonsensical patches that were being given as "fixes" for this issue. Linus was pointing out a particular instance of that, but this news and other behaviour from Intel seem to indicate this is part of an endemic, cultural, administrative issue inside the company.

  • byte1918 8 years ago

    Aren't we talking about two different things? Linux vs Windows kernel?

    • cbcoutinho 8 years ago

      OP is probably referencing the 'bullshit patches from Intel' comment from Linus about the patches they were sent, and that Microsoft might have been sent similar obfuscatory patches.

    • pjmlp 8 years ago

      Given that the patches are CPU micro-code delivered by OS drivers, AFAIK, the actual OS won't make much difference.

  • topspin 8 years ago

    "Decades old trap/fault software is being replaced by 10-20 operating systems, and there are going to be mistakes made."

    - Theo de Raadt

  • richardwhiuk 8 years ago

    Linus was talking about Linux patches, not these microcode patches. The microcode patches have been known to be broken for at least a week.

  • mehrdadn 8 years ago

    About what? Would you have a link?

    • bmon 8 years ago

      Linus Torvalds: “Somebody is pushing complete garbage for unclear reasons.” http://lkml.iu.edu/hypermail/linux/kernel/1801.2/04628.html

      • mehrdadn 8 years ago

        Oh, this is the same issue?

        • carlmr 8 years ago

          It's the same bug, same company pushing patches, but we don't know if it's the same reason.

          • lorenzhs 8 years ago

            It's not the same company - David Woodhouse works for Amazon. He used to work for Intel, but hasn't for a year or so.

            It's also not the same reason. Linus doesn't like the mitigation in the kernel, disagreeing on how Intel intends to implement it. This article is about unstable microcode patches that Intel retracted, and that retraction has been discussed on here a few times. The article is just exceedingly bad at describing the actual issue. It also doesn't help that the kernel mitigation depends on new flags introduced by the faulty microcode update, but the update being faulty is orthogonal to Linus' opinion.

  • gsich 8 years ago

    Like always.

    • Valmar 8 years ago

      He generally is. His rants are almost always spot-on and pure gold. :)

  • emptyfile 8 years ago

    What are you talking about?

debt 8 years ago

I mean, is this an unmitigated disaster on Intel's part? It's like a train wreck in slow motion.

A part of me feels stories like this are going to keep getting worse until Spectre is finally used in the wild.

dsign 8 years ago

What a mess!

The day this blew up, we rented our first physical server for the express purpose of running critical workloads securely in unpatched environments. Yes, I know that nothing is truly secure, but not everything we do is running a chunk of logic uploaded by an attacker, so we will take our chances.

hi41 8 years ago

What does the Spectre bug mean for a person planning to buy a new windows computer? Should I buy an AMD CPU based computer instead of an Intel based computer?

Roritharr 8 years ago

I'm pretty sure these patches were responsible for my notebook crashing every time I hooked it up to our Thunderbolt 3 docking stations.

kuon 8 years ago

Anybody know what is the status of FreeBSD? I googled a bit and found nothing except "we wait", is it still the case?

mehrdadn 8 years ago

Anyone know if people who had disabled the mitigations via FeatureSettingsOverride = 3 were still affected?

mark_l_watson 8 years ago

A more accurate title would be that Microsoft disabled the Spectre mitigations due to a flawed Intel update, right? I thought this was all Microsoft's fault until getting halfway through the article.

  • joemaller1 8 years ago

    Even more accurate: Microsoft joins HP, Dell, Lenovo, VMware and Red Hat in disabling Intel's buggy Spectre patch.

    > HP, Dell, Lenovo, VMware, Red Hat and others had paused the patches and now Microsoft has done the same.

  • giancarlostoro 8 years ago

    It's really telling that even Linus Torvalds was not happy with their "fixes", and now Microsoft isn't either. Intel needs to start taking the situation completely seriously, because their actions don't suggest they are.

    • segmondy 8 years ago

      Intel doesn't care. What choice do we have?

      • zolthrowaway 8 years ago

        Vote with your wallet. That's really the only thing that you can do. Intel is too comfortable in their position as market leader. Until they start to feel some pressure, they have shown they don't really care. I know AMD is not a perfect company either, but I elected to buy a Ryzen processor for my upcoming build. People need to at least consider the competition without defaulting to "I need a processor, I buy the latest Intel chip."

        Yes, I know that most people don't build new PCs or upgrade their processors that regularly. Yes, I know that many people don't have much of a choice because they have some requirement that currently ties them to Intel. However, those that do have that choice, should remember this debacle the next time they are buying a CPU/system (even if they are not doing it for awhile). Intel is hoping they can sweep this under the rug, we can't let them until they make amends. Do not buy an Intel chip until they've proven they will do better. I am not endorsing AMD either. You can still vote with your wallet by not buying anything at all. If enough people put off their upgrade, it would put a dent in Intel's bottom line. Things won't change until it hurts them financially.

        • _asummers 8 years ago

          I understand your point, and agree with the spirit of it, but a few consumers buying Ryzen chips isn't going to make one bit of difference. A couple data centers buying dozens of racks of them, however, would be more measurable. Hit em in the B2B not the B2C.

          • zolthrowaway 8 years ago

            I agree with you, and you are right. However, most people aren't making decisions about what types of chips to use in a data center. It would be wonderful if the people in those positions explored non-Intel options. For the average consumer, all we can do is choose which company we buy a CPU from every few years. People buying Ryzen chips incentivizes AMD to keep making chips and stay in the market. Competition is good for consumers. I totally get that it is a lot more complex than that, but I personally feel like it's the best we can do as the average consumer.

            • _asummers 8 years ago

              For sure. Like I said, I agree with the spirit, and you can obviously only do things within your own sphere of influence. I'm also planning on doing a Ryzen build for my next PC. I'm just trying to be realistic to say that even thousands of consumers switching won't make a huge dent in their bottom line. B2B is really the only way to influence a company as large as Intel, unfortunately.

              • zolthrowaway 8 years ago

                You're definitely right. I'm just feeling idealistic this morning :). I hope you enjoy your build and it goes well. I'm getting my 1600 this week.

          • epicide 8 years ago

            Yep, nobody gets fired for picking Intel, so to speak.

      • giancarlostoro 8 years ago

        Before Intel Core processors became the standard hot processor I was an AMD guy. I'm heading back towards that route. This means no Macbook or Surfacebook for my next development laptop for me. If anyone wants my money they better build a developer worthy laptop with an AMD processor and a sweet AMD graphics card. Also AMD is working on providing open source GPU Vulkan drivers.

      • kilburn 8 years ago

        For the longest time there was no practical alternative, but nowadays... AMD is back! The whole Ryzen lineup has turned out to be pretty good. You may even save some money in the process.

        • mseebach 8 years ago

          Have AMD processors been confirmed as not vulnerable? As I recall, the original investigation only covered Intel processors, but hypothesized that AMD would be affected as well, as they more or less have the same fundamentals around branch prediction.

          • morganvachon 8 years ago

            CPUs from AMD are not vulnerable to Meltdown, but are vulnerable to both versions of Spectre.

            https://www.amd.com/en/corporate/speculative-execution

            • Valmar 8 years ago

              Zen is nowhere near as vulnerable to Spectre v2 as Intel's CPUs are, due to architectural differences, according to AMD, so that's something.

            • shiven 8 years ago

              Damn! So, practically, no modern processor (or consumer laptop, or enterprise server) is safe from Spectre?

              Time to go off the beaten path.

              • morganvachon 8 years ago

                The earliest Intel Atom chips (Nxxx series) are supposedly safe, but they were only ever used in woefully underpowered netbooks and nettops, and they had perhaps half the performance of a similarly clocked (much older) Pentium M. That performance metric is documented, and I've felt it myself when I owned a Pentium M laptop and Atom N450 netbook at the same time a few years ago.

                A few ARM SoCs -- including the entire line used in Raspberry Pi boards -- are safe, but the vast majority of recent ARM devices are affected by one or more of the attack vectors. This means virtually any flagship and most if not all midrange smartphones and tablets, even iPhones and iPads, are vulnerable.

                This is the most complete list of affected CPUs and SoCs I've found, and they appear to be keeping it updated:

                https://www.techarp.com/guides/complete-meltdown-spectre-cpu...

              • mseebach 8 years ago

                I think it's safe to assume that practical mitigations will eventually surface; the biggest issue is probably the cost in performance. Shaving 30% (or whatever) off the world's computing power in one fell swoop is kind of a big deal.

              • tetromino_ 8 years ago

                Arm (starting with Cortex-R7 and higher) and PowerPC are vulnerable to Spectre too.

        • giancarlostoro 8 years ago

          This is why I always preferred AMD. They gave you either more or the same bang for your buck. I hope they don't slack off on pushing to innovate ahead of Intel now that they're "even" in a sense.

      • arkades 8 years ago

        I was speccing out a new machine for my wife just as all this news broke. At this point, I'm obviously going AMD.

        I mean, I’m not putting Intel out of business, but I have a -choice-.

        • thrillgore 8 years ago

          Meltdown and Spectre exploits are rooted in the nature of current CPU design. If you want to be safe, go build a RISC-V computer.

          • Slansitartop 8 years ago

            I don't think that's true. Meltdown and Spectre are sub-ISA issues, so you could have a RISC-V implementation with them if it handled caching and speculation similarly.

  • dang 8 years ago

    Thanks. We've edited the title above.

rootw0rm 8 years ago

Ugh, is this the cause of the weird bugchecks I've been having this week? Just gave myself 64gb page file and enabled full memory dumps so I could track it down in WinDbg. I always forget something on fresh installs...

IanSanders 8 years ago

Meanwhile, Intel stock is at a 5-year high.
