Intel Underestimates Error Bounds by 1.3 quintillion (2014)
randomascii.wordpress.com

The enduring relevance of Bruce Dawson's analysis lies not just in the "1.3 quintillion" error bound, but in the fundamental tension between hardware-level heuristics and mathematical truth. Intel's reliance on a 66-bit approximation of $\pi$ in the fsin instruction is a relic of an era that prioritized cycle count over formal correctness, a compromise that is hard to justify in today's high-precision, latency-sensitive stacks.

From the perspective of modern C++, the real win isn't better syntax but the ability to enforce determinism through consteval and compile-time evaluation. We no longer have to treat the FPU as a source of truth. By leveraging std::numbers for precise constants and moving transcendental reductions into the consteval domain, we can bypass the flaky x87/AVX intrinsics altogether. During constant evaluation, compilers such as Clang and GCC use high-precision software math libraries (such as MPFR) to fold these values into the binary. The resulting executable doesn't "calculate" the sine of a large constant at runtime; it simply loads a precomputed result.

This is the essence of modern engineering discipline: transforming a runtime hardware risk into a compile-time invariant. If your steady state relies on legacy hardware instructions for period reduction, you aren't building a deterministic system; you're gambling on Intel's error bounds.
This is "you aren't x, you're just y" LLM slop.
While what you say is true in general, it is far from the most important cause of the misleading Intel documentation regarding trigonometric functions.
The documentation was wrong, but the actual problem here is that argument range reduction for trigonometric functions can generate very large errors.
The original cause is a bad tradition inherited from the 19th century and taught in schools to this day: measuring plane angles in radians. Because the period of the trigonometric functions is a transcendental number when angles are measured in radians, rounding errors during argument range reduction are unavoidable, and for certain values they can be very large.
A couple of centuries ago, the use of radians was preferred by mathematicians and physicists, because they were concerned mostly with symbolic computations with pen and paper. For symbolic computations, the use of radians makes the proportionality constants in the formulae for the derivatives and primitives of the trigonometric functions equal to 1, simplifying this use case.
When 19th-century scientists reached the stage of a problem where numeric computations had to be done, they abandoned radians as much too cumbersome, switched to sexagesimal degrees for measuring the angles, and carried out the computations using printed tables of the trigonometric and logarithmic functions.
Unfortunately, when the first mathematical libraries were created for automatic computers, e.g. for FORTRAN and ALGOL, the people who submitted jobs to computers were accustomed from their school days to formulae with trigonometric functions whose arguments are measured in radians, so this is how SIN, COS and the like have been standardized in most programming languages.
However, this choice is backwards: it removes a constant multiplication from the very rare cases where derivatives or primitives are computed, at the cost of moving that multiplication into every function evaluation, which happens much more often. Even worse, it introduces rounding errors into a large fraction of function evaluations, and sometimes, as explained in the parent article, these errors are very large.
The correct alternative is to measure plane angles in cycles (turns), not in radians. (A cycle is 2Pi radians.)
In this variant, all argument range reductions are fast and exact, without any rounding errors. The 2Pi constant moves into the derivative and primitive formulae, but those are computed much more rarely. Even where they appear, the constant multiplication can almost always be done at compile time, as you have mentioned, because the function being differentiated or integrated almost always has an argument that is the product of time with frequency or of length with wavenumber.