Conditions in the Intel 8087 floating-point chip's microcode

righto.com

115 points by diogotozzi 5 days ago · 42 comments

WalterBright a day ago

I've always thought the 8087 was a marvelous bit of engineering. I never understood why it didn't get much respect in the software business.

For example, when Microsoft was making Win64, I caught wind that they were not going to save the x87 state during a context switch, which would have made use of the x87 impractical with Win64. I got upset about that, and contacted Microsoft and convinced them to support it.

But the deprecation of the x87 continued, as Microsoft C did not provide an 80 bit real type.

Back in the late 80's, Zortech C/C++ was the first compiler to fully implement NaN in the C math library.
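For reference, a sketch of the NaN semantics such support entails - C99 later standardized NAN and isnan(); this is an illustrative example, not Zortech code:

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      double n = sqrt(-1.0);     /* domain error produces a NaN */
      printf("%d\n", isnan(n));  /* nonzero: it is a NaN */
      printf("%d\n", n == n);    /* 0: a NaN compares unequal to itself */
      printf("%f\n", n + 1.0);   /* NaN propagates through arithmetic */
      return 0;
  }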

  • kstrauser a day ago

    I’d agree that the engineering was brilliant (but 68882 gang represent!). Its ISA was so un-x86-like, though, as it was basically an RPN calculator. X86 had devs manipulating registers. X87 had them pushing operands and running ops that implicitly popped them and pushed the result back on the stack.
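    For instance, (a + b) * c becomes a chain of pushes and implicit pops - a minimal GCC inline-asm sketch (illustrative only, not any particular compiler's actual output):

      double mul_add(double a, double b, double c) {
          double r;
          __asm__("fldl %1\n\t"  /* push a: st0 = a */
                  "fldl %2\n\t"  /* push b: st0 = b, st1 = a */
                  "faddp\n\t"    /* pop both, push a + b */
                  "fldl %3\n\t"  /* push c on top */
                  "fmulp\n\t"    /* pop both, push (a + b) * c */
                  "fstpl %0"     /* pop the result to memory */
                  : "=m"(r)
                  : "m"(a), "m"(b), "m"(c)
                  : "st", "st(1)");
          return r;
      }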

    That’s not better or worse, just different. However, I can imagine devs of the days saying hey, uh, Intel, can we do math the same way we do everything else? (Which TBH is how you’d end up with an opcode for a hardware-accelerated bubble sort or something, because Intel sure does love them some baroque ISAs.)

    • jamesfinlayson 20 hours ago

      > Its ISA was so un-x86-like, though, as it was basically an RPN calculator

      Yeah, I remember when I first came across floating-point stuff while trying to reverse engineer some assembly - I wasn't expecting something stack-based.

    • WalterBright 21 hours ago

      Eh, as far as compiler backends go, the RPN stack was worse.

      I thought the X86_64 instruction set was a giant kludge-fest, so I was looking forward to implementing the AArch64 code generator. Turns out it is just as kludgy, but at right angles. For example, all the wacky ways of simply loading a constant into a register!

      • kstrauser 4 hours ago

        That's fair. Are there any instruction sets that strike you as pretty?

        • WalterBright an hour ago

          The PDP-11 instruction set. It fits neatly onto a single sheet of paper. It's all very regular and orthogonal. Its simplicity is a work of genius.

  • jdsully 20 hours ago

    Excel needed the x87 as well; they cared about maintaining the 80-bit precision in some places to get exactly the same recalc results. So they most likely would have fixed it eventually.

  • mschaef 21 hours ago

    What do you mean by respect? Here's a layperson's perspective, at least.

    Up through the 486 (with its built-in x87), the x87 was always a niche product. You had to know about it, need it, buy it, and install it - over and on top of buying a PC in the first place. So by definition it was relegated to the periphery of the industry. Most people didn't even know the x87 was a possibility. (I distinctly remember a PC World article having to explain why there was an empty socket next to the 8088 socket in the IBM PC.)

    However, in the periphery where it mattered, it gained acceptance within a matter of a few years of being available. Lotus 1-2-3, AutoCAD, and many compilers (including yours, IIRC) had support for x87 early on. I would argue that this is one of the better examples of marginal hardware being appropriately supported.

    The other argument I'd make is that (thanks to William Kahan), the 8087 was the first real attempt at IEEE-754 support in hardware. Given that IEEE-754 is still the standard, I'd suggest that the x87's place in history is secure. While we may not be executing x87 opcodes, our floating point data is still in a format first used in the x87. (Not the 80-bit type, but do we really care? If the 80-bit type were truly important, I'd have thought that in the intervening 45 years there'd have been a material attempt to bring it back. Instead, what we have is a push towards narrower floating point types used in GPGPU, etc.... fp8 and fp16, sure... fp80, not so much.)

    • WalterBright 20 hours ago

      > What do you mean by respect?

      The disinterest programmers have in using 80 bit arithmetic.

      A bit of background - I wrote my own numerical analysis programs when I worked at Boeing. The biggest issue I had was accumulation of rounding errors. More bits would push back the cliff where the results turned into gibberish.

      I know there are techniques to minimize this problem. But they aren't simple or obvious. It's easier to go to higher precision. After all, you have the chip in your computer.
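      A toy illustration of that cliff (not the Boeing code, to be clear): repeatedly adding 0.1, which binary floating point cannot represent exactly, drifts by an amount that shrinks as the precision grows:

        #include <stdio.h>

        int main(void) {
            float f = 0; double d = 0; long double x = 0;
            for (int i = 0; i < 10000000; i++) {
                f += 0.1f;  /* 24-bit significand */
                d += 0.1;   /* 53-bit significand */
                x += 0.1L;  /* 64-bit significand (the x87 80-bit format) */
            }
            /* The exact answer is 1000000; compare the drift. */
            printf("float:       %f\n", f);
            printf("double:      %f\n", d);
            printf("long double: %Lf\n", x);
            return 0;
        }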

      • adrian_b 11 hours ago

        Yes, Kahan's argument in favor of the 80-bit precision was always that it would allow ordinary programmers - like the expected users of the IBM PC, who do not have the knowledge and experience of a numerical analyst - to write programs that perform floating-point computations without subtle bugs caused by unexpected rounding behavior.

      • mschaef 11 hours ago

        > The disinterest programmers have in using 80 bit arithmetic.

        I don't know, other than to say there's often a tendency in this industry to overlook the better in the name of the standard. 80-bit probably didn't offer enough marginal value to enough people to be worth the investment and complexity. I also wonder how much of an impact there is from the fact that you can't align 80-bit quantities on 64-bit boundaries. Not to mention that memory bandwidth costs are 25% higher for 80-bit quantities than for 64-bit ones, and floating point work is very often bandwidth constrained. There's more precision in 80-bit, but it's not free, and as you point out, there are techniques for managing the lack of precision.
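        (The alignment cost is easy to check on a typical x86-64 compiler, where the 80-bit type is padded out to 16 bytes - a quick sketch assuming GCC/Clang defaults:)

          #include <stdio.h>

          int main(void) {
              /* 80 bits of data, padded for alignment on x86-64 */
              printf("%zu\n", sizeof(double));          /* 8 */
              printf("%zu\n", sizeof(long double));     /* typically 16 */
              printf("%zu\n", _Alignof(long double));   /* typically 16 */
              return 0;
          }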

        > A bit of background - I wrote my one numerical analysis programs when I worked at Boeing. The biggest issue I had was accumulation of rounding errors.

        This sort of thing shows up in even the most prosaic places, of course:

        https://blog.codinghorror.com/if-you-dont-change-the-ui-nobo...

        In any event, while we're chatting, thank you for your longstanding work in the field.

        • adrian_b 11 hours ago

          The 80-bit format has been in the IEEE standard since the beginning.

          The standard included almost all of what the Intel 8087 had implemented, the main exception being the projective extension of the real number line. Because of this deviation in the standard, the Intel 80387 dropped that feature.

          Where you are right is that most other implementers of the standard have chosen to not provide this extended precision format, due to the higher cost in die area, power consumption and memory usage, the latter being exacerbated by the alignment issue. The same was true for Intel when defining SSE, SSE2 and later ISA extensions. The main cost issue is the superlinear growth of the multiplier size with precision, a 64-bit multiplier is not a little bigger than a 53-bit multiplier, but much bigger.

          Nowadays, the FP arithmetic standard also includes 128-bit floating-point numbers, which are preferable to 80-bit numbers and have no alignment problems. However, few processors implement this format in hardware, and on processors where it would have to be implemented in a software library, one can get higher performance from double-double precision numbers instead of quadruple-precision numbers (unless intermediate results risk overflow/underflow when confined to the double-precision exponent range).

          In general, on the most popular CPUs, e.g. x86-64 based or Aarch64 based, one should use a double-double precision library for all the arithmetic computations where the traditional 80-bit Intel 8087 format would have been appropriate.
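          (For the curious, a minimal double-double addition sketch built on Knuth's TwoSum - simplified for illustration, not a production library:)

            /* Each value is an unevaluated sum hi + lo (~106 bits). */
            typedef struct { double hi, lo; } dd;

            /* TwoSum: s + e == a + b exactly (in round-to-nearest). */
            static void two_sum(double a, double b,
                                double *s, double *e) {
                *s = a + b;
                double t = *s - a;
                *e = (a - (*s - t)) + (b - t);
            }

            static dd dd_add(dd x, dd y) {
                double s, e;
                two_sum(x.hi, y.hi, &s, &e);
                e += x.lo + y.lo;
                two_sum(s, e, &s, &e);  /* renormalize hi/lo */
                return (dd){ s, e };
            }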

        • WalterBright 2 hours ago

          Haha, the calculator app misses one critical feature: a history of the numbers you typed in, so you can double-check the column of numbers you added.

          > thank you for your longstanding work in the field.

          I sure appreciate that, especially since I give away all my work for free these days!

  • Cold_Miserable a day ago

    x87 should have been killed off. It would have forced lazy game developers to use SSE around the 2005 era.

    • WalterBright 20 hours ago

      In games, floating point precision doesn't matter much - speed does. But if you're doing numerical analysis, precision does matter.

kens a day ago

Author here if anyone has questions...

  • hnthrowaway0315 a day ago

    Hi kens, thanks for the knowledge sharing all these years. Can you please confirm this one? Wikipedia says that the 8087 uses the CORDIC algorithm. Does that mean it's the same (but at a different speed) as how I'd implement the functions in software, except in microcode (which has more granularity than usual assembly code)?

    I found it a bit surprising that, for a 45-year-old chip, there is no public information on its microcode. I guess hardware is indeed much more secret than software.

    • kens a day ago

      Yes, the 8087 uses CORDIC. I extracted the constants from the 8087's internal constant ROM and they are arctangent and log values for the CORDIC algorithm. You can implement the same functions in software, which is what floating-point emulation libraries did back then.
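      For a feel of the algorithm, here is a floating-point sketch of CORDIC's rotation mode - the textbook version, not the 8087's internal fixed-point microcode - computing sin and cos with shifts, adds, and an arctangent table:

        #include <math.h>

        /* Valid for |angle| up to ~1.74 rad (the sum of the table). */
        void cordic_sincos(double angle, double *s, double *c) {
            double x = 1.0, y = 0.0, z = angle, k = 1.0;
            for (int i = 0; i < 32; i++) {
                double d = (z >= 0) ? 1.0 : -1.0;  /* steer z toward 0 */
                double nx = x - d * ldexp(y, -i);  /* y >> i */
                double ny = y + d * ldexp(x, -i);  /* x >> i */
                z -= d * atan(ldexp(1.0, -i));     /* arctan(2^-i): a ROM table in hardware */
                x = nx; y = ny;
                k *= 1.0 / sqrt(1.0 + ldexp(1.0, -2 * i));
            }
            *c = x * k;  /* undo the accumulated CORDIC gain */
            *s = y * k;
        }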

      There's almost no public information on the 8087 microcode, but I'm working on that :-)

  • farseer a day ago

    Is there 8087 IP available in Verilog etc.?

  • dapperdrake a day ago

    Thank you for the deep dive.

  • mschaef 21 hours ago

    Thank you. As always.

0xsn3k a day ago

super cool! i wonder how difficult it would be to recreate the entire chip at the logic-gate level in, say, VHDL or Verilog

  • kens a day ago

    It would be difficult, but not impossible. The main problem is tracing out all the circuitry, which is very time-consuming and error-prone. Trust me on this :-)

    The second problem is that converting the circuitry to Verilog is straightforward, but converting it to usable Verilog is considerably more difficult. If you model the circuit at the transistor level in Verilog, you won't be able to do much with the model. You want a higher-level model, which requires converting the transistors into gates, registers, and so forth. Most of this is easy, but some conversions require a lot of thought.

    The next issue is that you would probably want to use the Verilog in an FPGA. A lot of the 8087's circuitry isn't a good match for an FPGA. The 8087 uses a lot of dynamic logic and pass transistors. Things happen on both clock edges, so it will take some work to map it onto edge-triggered flip-flops. Moreover, a key part of the 8087 is the 64-bit shifter, built from bidirectional pass transistors, which would need to be redesigned, probably with a bunch of logic gates.

    The result is that you'd end up more-or-less reimplementing the 8087 rather than simply translating it to Verilog.

    • 0xsn3k a day ago

      ah, i see, thanks for the insight! do you have any advice on how one might get started with IC reverse-engineering? i think it would be interesting to reimplement these chips in a way that's at least inspired by the original design

      • kens a day ago

        How to get started reverse engineering? That's a big topic for a HN comment, but in brief... Either get a metallurgical microscope and start opening up chips, or look at chip photos from a site like Zeptobars. Then start tracing out simple chips and see how transistors are constructed, and then learn how larger circuits are built up. This works well for chips from the 1970s, but due to Moore's Law, it gets exponentially more difficult for newer chips.

        I also have a video from Hackaday Supercon on reverse engineering chips: https://www.youtube.com/watch?v=TKi1xX7KKOI

        • monocasa a day ago

          Do you have any good tips on what to look out for when buying a used metallurgical microscope for looking at decapped chips? Even if not a complete set of constraints, I'd appreciate some off-the-cuff thoughts if you have the time.

          • kens a day ago

            I use a basic metallurgical microscope (AmScope ME300TZB). An X-Y stage is very useful for taking photos of chips and stitching them together. A camera is also important; my scope has a 10MP camera. I'm not into optics, so I don't know what lens characteristics to look for.

    • dapperdrake a day ago

      Noob here,

      does Verilog have options for working with both clock edges?

      • kens a day ago

        There's a difference between what Verilog will allow and what is "synthesizable". In other words, there is a lot of stuff that you can express in Verilog, but when you try to turn it into an FPGA bitstream, the software will say, "Sorry, I don't know how to do that." Coming from a software background, this seems bizarre, as if C++ compilers rejected valid programs unless they stuck to easy constructs with obvious assembly implementations.

        Using both edges of a clock is something that you can express in Verilog, but can't be directly mapped onto an FPGA, so the synthesis software will reject it. You'd probably want to double the clock rate and use alternating clock pulses instead of alternating edges. See: https://electronics.stackexchange.com/questions/39709/using-...

        • derefr a day ago

          > Coming from a software background, this seems bizarre, as if C++ compilers rejected valid programs unless they stuck to easy constructs with obvious assembly implementations.

          To my understanding, isn’t it more like there being a perfectly good IR instruction coding for a feature, but with no extant ISA codegen targets that recognize that instruction? I.e. you get stuck at the step where you’re lowering the code for a specific FPGA impl.

          And, as with compilers, one could get around this by defining a new abstract codegen target implemented only in the form of a software simulator, and adding support for the feature to that. Though it would be mightily unsatisfying to ultimately be constrained to run your FPGA bitstream on a CPU :)

dboreham a day ago

Until I read this I did not know that 1970s microprocessors had register renaming. I feel a little cheated, having thought all those years that they were actually moving the bits.

  • dapperdrake a day ago

    If you work through a math problem with pen and paper, or nand2tetris, or nandgame.com, then it becomes obvious that changing indexes into a register file (a.k.a. pointers) is way faster and easier than using wires to move stuff around.
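    A toy model of that point (hypothetical, not the Z80's actual circuit): the Z80's EXX as an index flip versus physically copying the bytes:

      #include <stdint.h>
      #include <string.h>

      typedef struct {
          uint8_t bank[2][6]; /* two copies of BC, DE, HL */
          int active;         /* which bank is selected */
      } Regs;

      /* Renaming: flip one bit; no register contents move. */
      void exx_rename(Regs *r) { r->active ^= 1; }

      /* The naive alternative: shuffle 6 bytes through a temp. */
      void exx_copy(Regs *r) {
          uint8_t t[6];
          memcpy(t, r->bank[0], 6);
          memcpy(r->bank[0], r->bank[1], 6);
          memcpy(r->bank[1], t, 6);
      }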

  • peterfirefly a day ago

    How do you think the EXX and EX AF,AF' instructions work on the Z80?

    • avadodin a day ago

      And EX DE, HL

      • WalterBright a day ago

        E to the u, du dx, E to the x, dx!

        • avadodin 15 hours ago

          I must admit I do not know what you are referencing here, sir, but it is always a pleasure to run into your comments on HN.

          So much positive compiler-dad energy.

          • WalterBright an hour ago

            It's the MIT song! At least it used to be, it was a long time ago.

            > always a pleasure to run into your comments on HN.

            Wow what a nice compliment! Makes my day!

  • kens a day ago

    If you feel cheated now, wait until you find out that the ALU in the 8-bit Z80 was just 4 bits. :-)

    • mschaef 20 hours ago

      Does this have any similarities at all to the fact that the Pentium 4 used a 16-bit ALU?
