Inventor Claims to Have Solved Floating Point Error Problem
hpcwire.com
Considering the over-the-top language ("a game changer for the computing industry") and questionable or imprecise comments like "[it] allows representation of real numbers accurate to the last digit" (um, who reads that without thinking of irrational numbers?), it sounds too much like a sales pitch and not like serious research.
I could be wrong, but based on the similarities to interval arithmetic everyone has already identified, I'm pretty skeptical. At best, this could be a patent on a more efficient way to build interval arithmetic into a CPU architecture rather than a completely new technique.
As my British friends would say though, I can't be arsed to actually read the patent.
That's what I was thinking too; if I do 1/3, then of course it will have to truncate, and integration errors would still be inevitable.
There is inaccuracy, but the point is that it tracks how much inaccuracy there might be. I picture this as being similar to how computers can't trust time for all sorts of reasons, so Google's Spanner uses time ranges and estimates of potential inaccuracy to make it possible to work with that. It will truncate, so you won't really have 1/3, but you'll know it's approximately 0.333, definitely more than 6/20 but definitely less than 7/20, for instance, and if you ever exceed certain bounds of certainty the calculation and all resulting calculations are flagged as not being that trustable. As is my understanding from the article, anyway.
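Something like this toy sketch of the idea (my own illustration in Python; the patent's actual encoding is different and hardware-level): carry a lower and upper bound alongside the value, and flag the result once the window gets too wide.

    from fractions import Fraction

    class Bounded:
        """Toy value carrying explicit lower/upper bounds (not the patent's scheme)."""
        def __init__(self, lo, hi, tol=Fraction(1, 1000)):
            self.lo, self.hi, self.tol = Fraction(lo), Fraction(hi), tol

        @property
        def trustworthy(self):
            # Once the window of uncertainty exceeds the tolerance, flag the value.
            return (self.hi - self.lo) <= self.tol

        def __add__(self, other):
            return Bounded(self.lo + other.lo, self.hi + other.hi, self.tol)

        def __mul__(self, other):
            corners = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
            return Bounded(min(corners), max(corners), self.tol)

    # "1/3": definitely more than 6/20, definitely less than 7/20
    third = Bounded(Fraction(6, 20), Fraction(7, 20), tol=Fraction(1, 10))
    print(third.trustworthy)                 # True: the window is still narrow enough
    big = third * Bounded(100, 100)          # the window scales up with the value
    print(big.lo, big.hi, big.trustworthy)   # 30 35 False: flagged, as are later results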
> but you'll know it's approximately 0.333, definitely more than 6/20 but definitely less than 7/20, for instance
What strikes me as odd is that in my intro course to numerical methods taught how to calculate floating point error bounds when introducing the concept of floating point numbers, including how errors propagated with each flop.
Yeah I doubt this is anything novel in the purely mathematical realm. It sounds like what's patented is a practical design for doing this on a chip.
Not sure if you were taught a different method, but I envision this being similar to counting "significant digits" in scientific notation, and it sounds like that's very similar to the approach he took. I wonder if that explains their statements about tracking to the last "digit". They obviously can't do that for irrational numbers, so maybe they mean the last "significant" digit as far as the underlying floating point implementation is concerned.
Yes, this is my exact understanding of the article too.
So in that case, "[it] allows representation of real numbers accurate to the last digit" doesn't really hold true.
I think you're reading it wrong. I think the intended meaning was that it's accurate for real numbers up to, but not including, the final digit.
That's how I read it.
Many real numbers have an infinite number of digits
I think you're missing the point. Using the example above, 1/3 has no final digit: 0.333333 repeating infinitely.
So how do you then store this number to the final digit or final -1 digit using this mechanism?
Actually it's possible to represent 1/3 perfectly accurately, but what about all the numbers that it's theoretically impossible to compute? (Almost all real numbers have this property)
> Almost all real numbers have this property
> Almost all
I love this comment because it brings back memories of school.
In layman's terms, I think almost all in this case means all but a finite amount
can we say almost all real numbers are irrational?
1  1/1 2/1 3/1 4/1 5/1 ...
2  1/2 2/2 3/2 4/2 5/2 ...
3  ...
So clearly we can count all the rational numbers, but how many irrational numbers are there? Are there (many) more irrational numbers than there are rational numbers?
The rationals have measure zero, so almost all reals are irrational. The Cantor set is an example of an uncountable set which also has measure zero.
http://austinrochford.com/posts/2013-12-31-almost-no-rationa...
It's not a finite amount, but an amount with measure zero.
You can count computable irrationals by numbering the algorithms that calculate (successive approximations to) them and lining them up in order. This is what Turing used his "Turing Machines" to do.
So all of the irrationals that we use or could ever use in calculations are countable. The uncountability of the irrationals comes entirely from the uncomputable ones, which we will probably never see.
The reals are deeply weird.
The rational numbers are clearly countable. The irrational numbers are uncountable, which means that for every rational number there are infinitely many irrational numbers.
But for every integer there are also an infinite number of rationals. And since we can put the rationals into 1-1 correspondence with integers, that means that for every rational there are an infinite number of rationals.
Infinity is funny like that.
The conclusion that there are somehow more irrationals than rationals depends on subtle philosophical points that have no possible proof or disproof and usually get glossed over. Accepting that philosophy also leads to the conclusion that not only do numbers which can in no way ever be represented exist, but there are more of them than numbers which we can explicitly name. Now I ask you, in what sense do they REALLY exist?
> The conclusion that there are somehow more irrationals than rationals depends on subtle philosophical points that have no possible proof or disproof and usually get glossed over
Can you elaborate? The proof that there are more irrationals than rationals is very straightforward, from my perspective.
The easiest way to understand it is to look at the question from a philosophical framework where it makes no sense to claim that there are "more" irrationals than rationals. And then untangle why it came to a different answer.
In Constructivism, all statements have 3 possible values, not 2. They are true, false, and not proven. All possible objects must have a construction. So instead of talking about a vague "Cauchy sequence", we might have a computer program that, given n, will return a rational within 1/n of the answer.
The first thing to notice is that all possible things that could exist are contained within a countable set of all possible constructions. There can't be "more" irrationals than rationals.
But what about diagonalization? That proof still works. You still can't enumerate the reals. But why not? The answer is because determining whether a given program represents a real is a decision problem that cannot in general be solved by any algorithm. You are running into the same category of challenges that lie behind Gödel's theorem and The Halting Problem. It is not that there are "more" irrationals than rationals. It is that there are specific programs which you can't decide whether they represent reals.
Through a constructivist's eyes, the traditional proof that there are more irrationals falls apart because you're reasoning about unprovable statements about arbitrary sequences, sequences that could themselves have been constructed with the same sort of logic that you're applying to them. Is it any wonder that you wind up concluding the "existence" of things that clearly don't actually exist?
Now stepping back from BOTH philosophies, the differences between them lie in different attitudes about existence and truth. Attitudes that underlie the axioms which we use, and cannot possibly be proven one way or another. (Gödel actually proved that. Any contradiction in Constructivism is immediately a contradiction in classical mathematics. But conversely there is a purely mechanical transformation of any classical proof that results in a contradiction into a constructive proof that also results in a contradiction.)
Excellent explanation, thanks! But I feel compelled to point out that only a small minority of working mathematicians are constructivists.
This is true.
However there is no logical argument that can disprove the constructivist view, in which there aren't "more" irrationals than rationals. And therefore the logical argument that there is must have some hidden implicit assumptions.
For every rational, there are infinitely many rational numbers too, since a finite Cartesian product of countable sets is countable.
I wasn't talking about irrational numbers.
But all non-computable numbers are irrational.
No, there are also non-computable numbers that are imaginary, complex, or transfinite.
All rational numbers are real. Therefore all non-real numbers are irrational.
Uh, no, "irrational" is defined as a subset of real numbers.
Non-rational, then.
This is about as consequential as debating whether 1 is a prime number.
This thread was about "all the numbers that it's theoretically impossible to compute". If you think the difference between "irrational" and "non-rational" in this context is irrelevant, you have a weak grasp of number theory. Yes, all are non-computable, but in different ways.
Yes, almost all real numbers are irrational, but I was talking about non-computable numbers, not irrational numbers.
Well, if I remember correctly, a product of an irrational and a rational is still irrational, right?
If so, here's a simple argument: If you have N rational numbers and I have K irrationals to start with, I can produce roughly N*K additional irrationals. Since we know at least two irrationals (pi and e) we can make at least twice as many irrationals as rationals.
Unfortunately this argument fails. There are twice as many whole numbers as there are odd numbers, right? But they are in 1:1 correspondence:
0 <-> 1
1 <-> 3
2 <-> 5
so the odd and whole numbers have the same cardinality: there are the same number of each.
There are, see Cantor's diagonal argument.
If they can't be computed, what are you planning on using them for? The biggest problem is going to be with the useful transcendentals.
At the risk of explaining the joke, obviously if I can't compute a number I won't need to represent it...
>Actually it's possible to represent 1/3 perfectly accurately
I'm curious, how?
> I'm curious, how?
As 1/3, exactly as it appears on your screen. All rational numbers can be represented exactly.
Hmm, yes, it can be represented in ASCII. But you still have to approximate when storing it in a way that is actually useful for computation using a finite number of bits.
Many real programming languages support arbitrary precision decimal and/or rational numbers.
Sure, there are times when space or speed concerns favor inexact representation over correctness, but that's an optimization that ought to be properly evaluated; the fact that lots of languages are designed in a way which makes it the default everyone reaches for contributes to lots of errors.
struct rational { int numerator; int denominator; };
And another dumb question:
Is it possible to convert to rationals in calculations?
So if I do:
1/3 + 4 + sum / total, will it record the values properly?
Yes. Common Lisp is an example of a language that can represent rationals exactly and do arithmetic on them. You can avoid floating-point precision loss by using this method. But there are drawbacks: 1) The numerator and denominator will often turn into bignums as a calculation progresses, consuming ever-larger amounts of space and time, and 2) This won't help you with any calculation involving irrationals, except to prevent your initial imprecision from growing larger.
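The same idea can be illustrated with Python's standard fractions module (just a sketch, not a recommendation for any particular language); note that the denominator already grows to 140 after just seven terms, which is the space/time drawback (1) above.

    from fractions import Fraction

    total = Fraction(0)
    for n in range(1, 8):
        total += Fraction(1, n)   # exact rational arithmetic, no rounding anywhere
    print(total)                  # 363/140
    print(float(total))           # ~2.592857...; precision is lost only at this final conversion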
I've definitely owned a Casio calculator that worked on fractions rather than decimals and could even work in terms of roots and certain irrational numbers like e and pi.
For example, sqrt(2) + sqrt(8) would print 3 sqrt(2) rather than 4.242640687119286~
I wish I knew how it worked, it's probably something simple like suggested in another comment.
Look at any computer algebra system, for example sympy (https://sympy.org), and see how they implement it (it's open source).
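For instance, sympy handles the sqrt(2) + sqrt(8) case from the Casio comment above symbolically (a quick sketch; the output comments are approximate):

    import sympy

    expr = sympy.sqrt(2) + sympy.sqrt(8)
    print(expr)                        # 3*sqrt(2): kept exact, in symbolic form
    print(sympy.Rational(1, 3) * 3)    # 1: exact rational arithmetic, no 0.999...
    print(expr.evalf())                # ~4.2426406871193, only when you explicitly ask for a numeric value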
From a brief read, it looks like an idea and not an 'apparatus'. Yes, carrying error bound information along with the number is interesting. How is that implemented in real time? How is accumulated error calculated (how do we know whether the floating-point values we are calculating with are perfectly precise, e.g. 3.0000..., and not 3.000...1)?
> Briefly read, it looks like an idea and not an 'apparatus'.
It's patent law lingo. The patent covers both the idea (the "system") and subsequent implementations (the "apparatus") that are direct implementations of the idea.
That's easy: always round down and also carry the maximum possible error (calculated by rounding up and subtracting the rounded-down value). For example, 1/3 can be represented as 0.333 with error 0.001. Multiply it by 3 and get 0.999 with error 0.003.
You won't need many bits for the error, and operations can be made reasonably fast as only several lowest bits are affected.
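A rough sketch of that bookkeeping (my own toy Python, not the patent's encoding), pretending the hardware keeps three decimal digits:

    import math
    from fractions import Fraction

    SCALE = 1000   # pretend our "hardware" keeps three decimal digits

    def make(num, den):
        """Round down to 3 digits and record the worst-case error in last-place units."""
        value = Fraction(num, den)
        lo = math.floor(value * SCALE)               # always round down
        err = 0 if Fraction(lo, SCALE) == value else 1
        return lo, err                               # real value lies in [lo, lo + err] / SCALE

    def mul_int(x, k):
        lo, err = x
        return lo * k, err * k                       # value and error both scale with k

    print(make(1, 3))                # (333, 1): 0.333 with error 0.001
    print(mul_int(make(1, 3), 3))    # (999, 3): 0.999 with error 0.003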
> who reads that without thinking of irrational numbers?
Not directly related to the article, but many [1] irrational numbers (π for example, or sqrt(2)) can be represented in a computer in their entirety, i.e. "accurate to the last digit." Not all digits are stored at once in RAM, of course, but you can obtain an arbitrary digit (given sufficient time). That's precisely how computable numbers are defined, first by Turing in his 1936 paper that introduced the notion of computation, On Computable Numbers, where the "numbers" in the title refer to real numbers, including irrational ones.
[1]: Relative to the irrational numbers that "we know", not to all uncountable ones, of course.
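For example, with the mpmath Python library you can ask for as many digits of π or sqrt(2) as you have patience for (a quick illustration of computability, nothing to do with the patent):

    from mpmath import mp

    mp.dps = 50                      # decimal digits of working precision
    print(mp.pi)                     # 3.14159265358979323846... out to 50 digits
    print(mp.sqrt(2))                # 1.41421356237309504880... out to 50 digits

    mp.dps = 1000                    # want more digits? just ask for them
    print(str(mp.pi)[:12], "...")    # 3.1415926535 ...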
it's a journalist writing the article, they want clickbait.
How can it be "clickbait" if it's in the article? What am I supposed to click on?
Moreover, these are clearly marked quotes from the press release and the patent. Maybe this technology doesn't merit an article. But if it does, quoting the inventor is exactly what one expects from coverage. Note that it does invite scepticism, starting with "claims" in the headline.
This gratuitous hatred of journalism is seriously getting out of hand.
"clickbait" ~~ overboard sensationalism for the purposes of getting more attention than demure/traditional language would get. Even though it's in the article, this kind of language can be added by journalists to incentivize non-expert editors/publishers to publish their work versus others.
I agree with you. But there's more "clickbait" online than actual journalism. The saturation point has been reached for a lot of people and it's difficult for them to go back to respecting actual journalism or even distinguishing between the two.
The journalist quoted the "inventor".
Or a PR person was involved. I've been quoted in press releases saying stuff I never said, and it's par for the course, apparently.
you are right, I went to read the PR after commenting, it's cringy.
I’m willing to wager it’s regular old floating point and it just draws the repeating bar over the repeating decimals.
The floating point error problem has not been solved. This patent describes a floating-point representation that includes fields for storing error information. The standard IEEE floating-point representation has three fields: a sign field, an exponent field, and a mantissa (or significand). This patent proposes reducing the size of those fields and adding additional fields to store error information. The error information would be updated by hardware during regular operations. The patent also proposes a configurable precision requirement for the numbers. If an operation exceeds this limit, an insufficient significant bits signal, "sNaN(isb)", would be raised.
Not only does this method not reduce floating point error, it reduces the precision that you have for any given number of bits.
Unfortunately I can't find any of the figures referenced in the patent to help me understand the novelty of this patent.
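Reading between the lines, the record would look something like the sketch below (a rough guess in Python for illustration; the field names, widths, and the check are mine, not the patent's):

    from dataclasses import dataclass

    @dataclass
    class BoundedFloat:
        """Rough guess at the kind of record described above (not the patent's actual layout)."""
        sign: int            # as in IEEE 754
        exponent: int        # as in IEEE 754
        significand: int     # narrowed relative to IEEE 754 to make room for the error field
        lost_bits: int       # error field: low-order bits that can no longer be trusted

    REQUIRED_BITS = 24       # configurable precision requirement

    def check(x: BoundedFloat, significand_width: int = 40) -> None:
        # If too few trustworthy bits remain, signal "insufficient significant bits".
        if significand_width - x.lost_bits < REQUIRED_BITS:
            raise ArithmeticError("sNaN(isb): insufficient significant bits")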
It depends on how you interpret "floating point error". If by that you mean the error inherent in the representation, it actually increases that through loss of precision, as you note. If you interpret it as "problems caused by lack of precision in floating point" (i.e. the Patriot missile problem referenced in the article is a "floating point error"), then the tracking of precision will allow you to easily know when you've hit an error threshold that is unacceptable, allowing you to avoid those problems.
How is this better than manually checking whether it's within some epsilon of a value?
> If an operation exceeds this limit, an insufficient significant bits signal "sNaN(isb)" would be raised.
What about numbers like 0.3 that repeat in binary? Wouldn't it always raise that signal?
Presumably if you specify that you want more units of precision than are available, then yes. But if you say you only want 1 significant digit, then it can store it.
Yeah. It seems like just a clever mitigation technique, which could prove useful in the fields he mentioned (military, industrial, etc.), but it's far from the wholesale solution he claims.
Is this different from/better than?
unums: https://en.wikipedia.org/wiki/Unum_(number_format)
interval arithmetic: https://en.wikipedia.org/wiki/Interval_arithmetic
No.
>The inventor patented a process that addresses floating point errors by computing “two limits (or bounds) that contain the represented real number. These bounds are carried through successive calculations. When the calculated result is no longer sufficiently accurate the result is so marked, as are all further calculations made using that value.”
That does seem useful but it's a bit akin to saying that you've solved the division-by-zero problem by inventing NaN. Suppose you're writing some critical piece of software and a floating point operation raises the "inaccurate" flag, how do you deal with that? Do you at least have access to the bounds computed by the hardware, so that you may decide to pick a more conservative value if that makes sense?
Besides, the link to the "1991 Patriot missile failure" kind of contradicts the claim that this would solve the issue, since Wikipedia says:
>However, the timestamps of the two radar pulses being compared were converted to floating point differently: one correctly, the other introducing an error proportionate to the operation time so far (100 hours) caused by the truncation in a 24-bit fixed-point register.
If the problem comes from truncation in a FP register I'm not sure how this invention would've helped.
> a floating point operation raises the "inaccurate" flag, how do you deal with that?
You can trap. ...but then again, existing arithmetic traps are not uniformly enabled by default.
And immediately patents it... so no one else can use it.
EDIT: and for some other methods: https://en.wikipedia.org/wiki/Unum_%28number_format%29, particularly the latest one being the Posit method: http://superfri.org/superfri/article/download/137/232
EDIT2: of course other people can license it, but the other way to bring a new floating point to the scene would be through the same process that happened with IEEE 754. There are plenty of people who wouldn't touch anything patented at all, sometimes even with a patent clause.
He's an inventor. Inventors usually work towards patents. Also, it's not so "no one else can use it". It's so that he can license out his work to a company like Intel. The patent is to protect him from a company like Intel going "sweet, thanks for the fix." And then profiting off his work. Or do you expect this guy to work for free?
I don't expect him to work for free, but I do want Intel (and AMD, ARM, NVIDIA, TI, and anyone else who makes a floating point module) to go "sweet, thanks for the fix" as quickly and, almost more importantly, as collectively as possible.
I want this guy to be compensated, but I'd prefer this guy be compensated in a manner that doesn't prevent third parties from fixing their hardware. In general, I think bounties are a good solution to this. Failing that, there are plenty of trade groups and nonprofits and regulatory bodies that could be tasked (and funded) with acquiring and freely redistributing this class of innovation if we wanted to.
You're living in a fantasy world. You're looking at the status quo: some guy has invented a better floating point circuit, and you think there are two options. 1) The guy releases it to the public for general use, or 2) the guy patents it and holds a monopoly over its use.
Obviously 1) is a greater public good than 2), but in reality these are not the only options.
Here are some other realistic situations: 3) With no incentives to make things public, this guy either stops working on this much earlier, doesn't tell anyone, or throws it in the garbage. 4) This guy goes and talks to Intel about his design. They quietly pay him some money or hire him and implement it in secret. Two years from now they launch a processor with this feature, and for the indefinite future, until their competitors spend costly time reverse engineering the secret hardware, only Intel processors have this circuit. 5) Same as 4) except Intel says, "Haha, thanks for being a sucker" and doesn't pay anyone.
This is the patent system at its best: incentivizing some guy to work on this invention, then publish his work and describe it in detail. For the next 20 years, he can license it to anyone he wants and profit from his work. After that point everyone can implement it as a public good.
The fantasy world is thinking that plucky little inventors creating something is the status quo. You think big companies like Intel don't have hundreds of people working on research full-time? That they freely lease all fruits of their research out to their competition rather than keeping a 20-year monopoly?
Patents, like any monopoly-granting device, benefit market incumbents much more than encourage new entrants
maybe large companies shouldn't be able to own patents.
Okay, so now we will have a bunch of 1 person in-name-only companies which hold patents, which offer exclusive licenses to big companies for $1 per 1000 years.
This. I know of someone that invented what appears to be useful medical technology; certain trusted professionals in the field that reviewed the work agreed.
However, some of the case law around what they'd need to patent turned unfavorable around the time they were pursuing this and looking to turn it into a business. For your reasons 3) and 5), they've shelved it. This was an individual, not a large company, and without sufficient legal protection to reasonably hold large companies at bay, there simply was not sufficient reason to pursue this effort given how questionable the return would be.
Since it means getting fewer bits to store numbers in while also having to buy new hardware, it doesn't seem like a very good solution though... and this is the patent system at its best, apparently.
Patents are the way the government allows the inventor to require compensation.
Ideally, that would be how patents are "supposed" to be used: an idea you patent can be used by others in exchange for a royalty fee. In reality, that happens quite a bit, but we also get anti-competitive tech companies who wish to keep advances to themselves, and patent trolls who wait on violations to sue.
>Or do you expect this guy to work for free?
Apparently he did up until now, didn't he?
I agree he should get a percentage of the profits other companies make off of his invention, but it's not like he's entitled to any payments just because he liked to tinker around. Inventor's not a real profession.
> I agree he should get a percentage of the profits other companies make off of his invention, but it's not like he's entitled to any payments just because he liked to tinker around.
The first clause in your sentence is exactly what patents are for; the second clause is...tangential at best.
Everything is done for free up until it is paid for. That doesn't mean that it is fair to not pay for any work done in advance of payment. Not sure what kind of thought process led you to say this, but it is not a productive one.
> There are plenty of people who wouldn't touch anything patented at all, sometimes even with a patent clause.
Then that's their loss. This seems like the ideal scenario for patent protections: small inventor developing a genuinely novel and useful invention that big, rich companies would otherwise shamelessly copy.
The ideal scenario is inventions that _require_ billion-dollar investments to be discovered (aka big pharma). Anything that can be reinvented by chance independently without much effort if there is demand for it shouldn't be patentable. Probably everything concerning representation of floating point numbers should fall in this category.
Patents don't necessarily mean nobody else can use it. For example, he could license the technology which means people have to pay to use it.
He's even able to make more creative licensing terms: royalty free license or "royalty free if you use the BFPF trade organization logo."
And what about programmers that care about free as freedom?
What about them? You're free to discover new things, and not patent them, instead publishing them. If you have a problem with patents, take it up with the legislature and the judicial system, but there's no reason to attack inventors for using a common legal system for protecting their intellectual property.
> If you have a problem with patents, take it up with the legislature and the judicial system, but there's no reason to attack inventors for using a common legal system for protecting their intellectual property.
Many people attack companies that use tax loopholes even though the government does "allow" these loopholes. So if one is opposed to something, but it is formally allowed, one should attack both sides: The people doing it and the government.
OK it sounds like you just want to have an internet argument to win internet points.
Don't be grumpy, the "it is technically allowed, therefore morally good" argument is exceedingly weak, you shouldn't rely on it.
I didn't make that argument. I made the argument that patents are the least bad approach to the problem, and that as an incentive, it works well.
> but it is formally allowed
Your comparison doesn't hold up because tax loopholes are not formally allowed. You call that out in your previous sentence by scare-quoting "allow". The inventor uses a formal system (patents). The company uses an informal system (tax loopholes). In the case of a formal system, you do not attack the inventor because he is just using the proper channel. In the case of an informal system, you do attack the company because the company is unethically taking advantage of a channel that shouldn't be there in the first place.
They're "free as in freedom"[0] to publish their inventions without applying for a patent, or to license their patents in any way they see fit, including free-as-in-everything.
[0]: The original "Free as in speech" seems to be a much better way to express your sentiment.
doubt it affects programmers, just chips.
https://en.wikipedia.org/wiki/Alice_Corp._v._CLS_Bank_Intern...
> doubt it affects programmers, just chips.
The boundaries are fluid (FPGAs).
Also, keep in mind that patents don't mean the invention works, and they certainly don't mean the invention is useful for anything.
https://ploum.net/working-with-patents/
> Note that by « valid », I mean that the Patent Office didn’t found a trivial prior art for this. It doesn’t mean that there is no prior art or that I’m the real inventor or that my invention works.
An example of an insane patent:
https://www.google.com/patents/US6960975
> Space vehicle propelled by the pressure of inflationary vacuum state US 6960975 B1
(via https://www.metabunk.org/do-patents-mean-the-invention-works... )
At least patents expire someday.
So far. That statement used to be true of U.S. copyrights as well.
I don't think this is going to change any time soon. The last time patent lifetime was changed, it was to make it consistent with the rest of the world (20 years). It's unlikely we (the US) would change that; it would be a huge international battle for something that people have more or less accepted.
You don't patent things so that no one can use them. You patent things so that others have to pay you for using them. Patent holders often want others to use their inventions, and pay them money.
I understand the downside, but how else would he ever get paid for this?
They could have used trade secrets. Konrad Zuse tells the story that back then, optical equipment manufacturers used to contract him for computations. Instead of rooms full of mechanical calculators operated by humans (the original "computers") they would supply his company with data and algorithms and he would reply back with results.
Apparently, there were design reasons why for electronic calculations a different mathematical formulation was more efficient. The competing manufacturers would discover this fact one by one, and Zuse was worried that someone may question his integrity, thinking he was the source of the leak. But no one did.
Michael Hanack (the materials chemist) used a different strategy: he would not patent anything so his inventions could be used by any market participant, and he would consult for all of them.
On the other hand, everyone is looking forward to the day the aptamer patent runs out. Uptake is limited (and you'd think that CRISPR/CAS9 has the same problem) because of unreasonableness (in case of CRISPR uncertainty) around licensing.
Trade secrets are worse because then the design never gets disclosed, and has to be rediscovered. Time limited patents are a good and explicit trade off between the personal benefit of the inventor and the benefit of society.
> Michael Hanack (the materials chemist) used a different strategy: he would not patent anything so his inventions could be used by any market participant, and he would consult for all of them.
That won't work if the invention is easily copied. Chemistry is tricky and there's probably plenty of money to be made in consulting in it. A lot of industries aren't like that.
> Trade secrets are worse because then the design never gets disclosed, and has to be rediscovered
In the sciences you'll notice that simultaneous discoveries are nothing unusual; once the groundwork is laid, the idea hangs in the air, and you do nothing but reach out and catch it. The Zuse example is just another instance.
In the case of mathematics and computing, the invention can be copied easily, but development costs are minimal as well. There is no compelling reason why someone should reap disproportionate rewards from a government-supported artificial monopoly. Progress suffers and the economy as well.
> In the case of mathematics and computing, the invention can be copied easily, but development costs are minimal as well. There is no compelling reason why someone should reap disproportionate rewards from a government-supported artificial monopoly. Progress suffers and the economy as well.
Yes there is: because of the effort of invention. The effort required to copy is irrelevant. Also, by arguing against patents, you're basically arguing that Intel and the like should get a huge payday at the expense of this guy's efforts.
Patents only last for 20 years [1], and it's not like progress stops when they're in force. Have some patience.
That's much worse for tons of reasons. The extreme version of that is a guild system and a bunch of secret societies, which is basically why the ancient world never industrialized. Egypt, Greece, and Rome had the basic knowledge for the industrial revolution (including maybe even electricity, search for Baghdad battery) but it was locked up in cloistered priesthoods who kept it very close to their vest. In many cases leaking such knowledge was punishable by gruesome death.
Offering an alternative to that is why the patent system was created. The idea is you publish and in return get a monopoly for a short period of time. Of course the system is abused (all systems are abused), but the reasons for its creation were actually quite "liberal."
This is meant to be implemented in a processor. I'm not sure how you would keep it secret from end users, much less from the company tasked with implementing it at the chip level, and their engineers, who might find employment at different companies in the future. This seems like a very poor fit for a trade secret if you ask me.
He could use trade secrets and still have people on HN bitch about not liking what he did with his own idea.
Under a different system that doesn't incentivize holding back global progress for small amounts of individual enrichment.
There's a reason Linux and GNU utilities are so massively widely used, and overall they've probably provided billions in economic value. They do that freely, for any human to use, and in fact that's part of their main value proposition. Both were born out of the legal nightmare that was UNIX at the time.
How should he get paid for this? In a very-ideal world, people and corporations that used his idea and had spare capital would voluntarily give him donations. In a better-than-this world, governments (or some other entity) would pay bounties to inventors out of a pool of tax money, based on both perceived usefulness of the invention and how widespread its use came to be.
> Under a different system that doesn't incentivize holding back global progress for small amounts of individual enrichment.
Patents were explicitly created to avoid, "incentivize holding back global progress for small amounts of individual enrichment." The other realistic alternative is keeping the invention a trade secret, which is much worse for progress.
> How should he get paid for this? In a very-ideal world, people and corporations that used his idea and had spare capital would voluntarily give him donations.
That sounds exactly like how patents work, except with no protections for the inventor. With patents, the corporations that use the idea do give the inventor donations called license fees, but they can't screw him over and pay him less than he's worth (which they would otherwise do so they can keep the profits for themselves).
Basically, it sounds like you're proposing worse, more awkward systems because you irrationally hate the concept of patents for some reason.
A few things in no particular order.
1 - It's not holding back anything. If this invention has any merit, it will be licensed and society at large will benefit from it.
2 - Incentive. If this inventor could not make a guaranteed profit from this idea, I imagine he could have found more interesting ways to spend his time than playing with floating point numeric representations. It's not like he stumbled upon this on the road and then hid it from society. He set out to create things like this as a means of sustaining himself and improving the world at the same time.
3 - The reason markets are useful is because they work phenomenally well at determining value. As we swapped to currency-operated markets over barter, nobody ruled a chicken was worth 3 coins, a pig 10, and a cow 25. Instead market forces decided, and tend to do better than any other means of measuring value. Donations, let alone bureaucracy, are extremely poor substitutes in determining the value of anything. Their ideal would be to approach market-force levels of efficiency. This isn't to say markets are flawless, but rather they're a whole lot less flawed than any alternative.
The motivation for the patent system is that it incentivizes inventors to publish their inventions in exchange for time-limited exclusivity, which is certainly better for progress than inventions that remain unpublished trade secrets.
The issue with the patent system is that its real implementations are gameable, not the idea itself.
I agree with the sentiment but unfortunately we've never found a way to implement real socialism without creating a massive permanent welfare underclass that does nothing and a sclerotic self-serving bureaucracy to support them.
Most people are not little philosopher kings who work to advance a grand vision of life's unfolding in the universe. Most people just want to do the minimum to get by until they can get their next dose of entertainment. I hate to be such a cynic, but I have eyes.
Social democracy (Europe, USA to a lesser extent) is a hybrid that basically works but you still have to make money in that system.
> Social democracy (Europe, USA to a lesser extent) is a hybrid that basically works but you still have to make money in that system.
Whether it's capitalism with a large dose of socialism or socialism with a large dose of capitalism, neither extreme seems to work as well as approaches that adopt portions of both.
> Most people are not little philosopher kings who work to advance a grand vision of life's unfolding in the universe. Most people just want to do the minimum to get by until they can get their next dose of entertainment. I hate to be such a cynic, but I have eyes.
I think the only people that actually work to advance understanding of the universe do so for selfish reasons, and we all just get to benefit from the byproduct of their self-interest. If I had to guess whether Einstein or Newton did what they did because they wanted every human to be better off, or if it was primarily because that's what they enjoyed doing and they were lucky enough to be in circumstances that allowed them to live doing that (and become famous as well), I know where I'd place my bet.
It's the same for child rearing. There's a sense of happiness and contentment when your children are happy and safe (or at a minimum there's usually uneasiness and dread when your children are unhappy or unsafe), so you try to bring about that situation, the one that makes you the most comfortable and happy. It's an instinctual and emotional response, but it doesn't necessarily make sense from a purely rational self-interest point of view.
We aren't saints, we're machines with wacky firmware that imparts interesting and hard-to-evaluate value functions.
So he should have either hoped that corporations would feel charitable, or submitted himself as a guinea pig to a system that doesn't exist. Got it.
> Under a different system that doesn't incentivize holding back global progress for small amounts of individual enrichment.
Patents don't hold back global progress any more than capitalism keeps people in poverty. If we went back a few hundred years and impose alternatives, would we be better off today? I doubt it.
The whole point of the patent system is to foster innovation, and free exchange of ideas. Without the patent system, people are incentivized to keep their inventions secret, and hide how they work, but patents require publishing the details. Some companies don't patent some things just so they can keep it secret.
The problem with the patent system is not the concept of patents, but what they've been allowed to apply to and how long the terms have increased.
Him patenting the idea means he has control over his options. It also prevents another entity, such as Intel or AMD, from discovering it and patenting it. If he wants, he can make it free to use for non-profit or open source entities.
Edit: The capitalism / poverty comparison is meant to imply communism, as the alternative to a competition- and financially-motivated system. I don't think it's contentious to say capitalism ended up being the better system for raising people's standard of living, even if communism sounds good on paper. I don't think the relation to the patent system, in comparison to some ideal-sounding solution where people get some money somehow for giving their ideas to the world immediately, is all that strained.
> The whole point of the patent system is to foster innovation
And if it seems like the patent system is being gamed to hinder innovation (not really in this case, but yes absolutely in the case of pharma) it should be revisited. Intellectual property laws are based in pragmatism, not natural law. If the costs begin to outweigh the benefits, they should be changed.
There are areas of the patent system that are legitimately in need of reform, but that doesn't mean the underlying concept is not sound. Many people seem to be looking at the areas where reform is needed and erroneously concluding that the whole thing needs to be burned down. That's a shame, since the core idea of patents seems like a very good trade-off to me.
You can hate the patent system all you want but this specific case, assuming that his invention works, is a case where patents are being used correctly.
> Patents don't hold back global progress any more than capitalism keeps people in poverty.
Thanks. I was worried I would go the whole day without spitting out my drink.
Ignoring the weird comparison, he's right. Patents, as bad as the system is, are an improvement on no patents.
When people start talking about ideal and better-than-ideal worlds and voluntary payments or government disbursements, my mind goes to communism. While communism sounds like it provides a better outcome on paper, capitalism (with constraints) has shown it's a much better system for raising the living standard of everyone, even if there is quite a disparity between the lowest and the highest.
Would some ideal-world type situation for patents work? I doubt it. We have patents, and there are some upsides to that system, even if it has been abused recently. I think people have transferred a lot of their anger at the abuses to the concept of patents themselves. Here we have someone that patented something, and people are upset that he did that before knowing how he intends to use the patent. The patent could be free for non-commercial use. It could be free for many things. Or it could be that it's most likely to be used in hardware by a large corporation that prints chips and not individuals, and he'll license it to them and the most anyone will see of it is a few cents added to the production cost of each chip (not that anyone even knows what that is, since retail chip pricing is so crazy).
How we measure a system shouldn't be based on just the biggest successes and biggest problems (but those should be looked at), but on the long track record of what it does and how it performs. In that light, I think capitalism has shown itself a better system in the long run, and I think patents have shown their merit in the long run as well.
Why going back a few hundred years? Even a few decades ago the idea of patenting mathematics was still mostly considered absurd. Imagine a world where people like e.g. Dijkstra and Lamport patented all their concurrency algorithms. Hoare patented quicksort. And so on.
It looks to me like it's a process to be carried out in hardware which is patented, and not the mathematics in question. It's a physical apparatus.
Even if it were purely mathematics, I think I would rather have someone patent it if possible and make it freely available than to leave it out there and have someone else take a stab at patenting it, have the underfunded patent office fail to realize there is prior art, and grant the patent. Sure, you could effectively fight it, but until the system gets reformed enough to prevent most of these abuses, that's a lot of wasted resources (and having the patent lets them threaten others with it without actually bringing a case that could invalidate it).
> The problem with the patent system is not the concept of patents, but what they've been allowed to apply to and how long the terms have increased.
Patents aren't copyrights. The term for patents is only about 20 years.
You're right, I was conflating the two issues somewhat. I do think patent terms should be somewhat different for different classes of patents, or the industries they are used in. If patents for software are to be allowed, I'm not sure why they need to have a term of more than 3-5 years from issuance. Any industry moving at a similar clip could also benefit from reduced terms. 20 years isn't forever, but depending on industry momentum and advance rate, it ends up retarding innovation instead of helping it.
In any case, I don't think we should immediately vilify someone for using the patent system (and using it as originally intended, IMO), just because we are unhappy with the way it's been abused recently, as I think it has provided us great benefit over its existence.
I mean, it's unfortunate, but can you blame him?
Here's how I'm inclined to reason about it:
It depends on whether he would consider the filing's "invention" to be within a reasonable definition of what should be patentable.
If yes, then he's just playing his part in our society's overall machinations for technical progress, and there's nothing really blameworthy about the filing.
If no, then he's being deeply selfish: He's capitalizing on the government's unjustified encroachment on our individual liberties, via the patent system, for his own personal gain.
Yes.
Why? I'm genuinely curious why you blame him for patenting his work.
Because the whole patent system is broken and everyone who partakes in it shares blame.
This is an absolutist position that permits no subtlety. But the patent system is a subtle incentivization which seeks to balance a number of forces (in particular, it incentivizes people to invent new things that they can profit from, while also avoiding trade secrets).
> This is an absolutist position that permits no subtlety.
Where is the problem with such positions?
Simple: you are unlikely to effect any change by taking an absolutist position. Beyond making it look like you're inflexible, patents have extensive pre-existing legal support, so it's really unlikely anybody is going to change their mind just because you feel strongly.
It's better, when you're arguing on the internet, to limit yourself to reasonable arguments that people are receptive to, and working to convince people.
I used to feel very strongly about patents (that they were "wrong") but, over time I've come to believe they are the least unreasonable protection for IP that also encourages long-term sharing. Trade secrets are worse because society as a whole doesn't get to benefit.
If he didn't patent his work, someone else would, and collect the royalties, resulting in zero additional freedom. There is nothing gained by not playing.
If he publicly discloses it then (theoretically) nobody else can patent it.
Because it would be nice if someone, somewhere would do the right thing rather than whatever stupid nonsense benefits them personally the most.
Why is the right thing to do work for no compensation?
It is the right thing when you know that mathematics isn't patentable.
Everything is mathematics from the right perspective.
Someone like Nils Bohlin and Volvo for the three-point seatbelt patent?
It's a dog eat dog world.
Would you still feel this way if this was just a run of the mill patent by intel?
Of course.
If you want to keep others from patenting you work, you need to patent it yourself.
No you just need to publish it, then it becomes prior art.
It's fortunate and we should praise him. I understand backlash against patent trolls, but this is pure bullshit.
It doesn't actually sound like he "solved" it. More like he put error bounds around it and can detect when the error is more than X.
> When the calculated result is no longer sufficiently accurate the result is so marked, as are all further calculations made using that value.
Solving it would be a pretty big deal. This doesn't feel like it is, though I admit I haven't worked on a similar problem in a long time. Kinda feels like patent trolling as I imagine that lots of companies have put bounds on detecting floating point errors when they need it. There are certainly lots of papers on it: https://www.google.com/search?q=floating+point+error+bounds
IANAL, but if other companies have already done it and it's that easy to find, then it wouldn't be a good patent troll, because there's obvious and easily discoverable prior art (which would invalidate the patent anyway).
Without reading the patent, it sounds a lot like interval arithmetic [1], which sounds like a really good idea at first but is not without its own problems. For example, the inverse 1/x for an interval x like [-1,+1] containing 0 consists of two intervals (-∞,-1] and [+1,+∞).
"In addition, though the embodiment presented herein represents an apparatus and associated method for bounded floating point addition and subtraction, it is presented as an example of bounded floating point operations. By extension, the same inventive apparatus for calculating and retaining a bound on error during floating point operations can be used in other floating point operations such as multiplication, division, square root, multiply-add, and other floating point functions."
The hard part has been left as an exercise for the examiner.
In your example, this would correspond to having the number x = 0 +-1 and then wanting to compute 1/x. If your number can potentially be zero, why would you want to use it as a divisor?
The problem remains if you wrap the division in a non-zero check. Or maybe the interval [-1,+1] is already kind of a lie, i.e. x is known to be in the interval but you additionally know that x is non-zero when you are about to perform the division. The example is just meant to illustrate the problem that using a single interval is not good enough to track error bounds in the general case.
You could always just return nan in that case, like for normal arithmetic.
In which case you go from a potentially imprecise result to no result at all, which would arguably make things worse, especially if you happen to know that x is non-zero and division by zero is not an issue. The problem is that using a single interval is not generally good enough to track error bounds for arbitrary calculations while avoiding turning everything into the useless (-∞,+∞). The common solution is to use multi-interval arithmetic, but having to deal with a data structure of variable length is really painful for hardware implementations.
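A toy sketch of the problem (my own code, not from any interval library): the reciprocal of an interval straddling zero is two intervals, so a single-interval representation has to collapse it to (-∞,+∞).

    import math

    def reciprocal(lo, hi):
        """Return 1/[lo, hi] as a list of intervals."""
        if lo > 0 or hi < 0:                       # zero not inside: one interval
            return [(1 / hi, 1 / lo)]
        if lo == 0:
            return [(1 / hi, math.inf)]
        if hi == 0:
            return [(-math.inf, 1 / lo)]
        # zero strictly inside: the image splits into two unbounded pieces
        return [(-math.inf, 1 / lo), (1 / hi, math.inf)]

    print(reciprocal(2.0, 4.0))     # [(0.25, 0.5)]
    print(reciprocal(-1.0, 1.0))    # [(-inf, -1.0), (1.0, inf)]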
Even worse, a patent for a "processor design, which allows representation of real numbers accurate to the last digit" is obviously nonsense. Pi (=3.141...) is a real number where there is no "last digit".
I assume it means accurate to the last digit of the representation, not the number being represented. obviously the latter would be absurd to suggest.
I think he describes a procedure which guarantees that the last digit of the representation is exact. So that we (in decimal) can have 3.14 as a representation for pi, but never 3.13 or 3.15. But this is hardly 'solving floating point errors' and hardly novel.
The whole technique smells a bit fishy to me, but it might be genuine (in any case, the article reads more like marketing, since the technical merit is not immediately obvious and the difference from existing techniques is not immediately clear).
It's the patent for a circuit, and you can take the sentence the other way around: "the displayed number is the denoted one", not "any real number can be represented".
Is this https://en.wikipedia.org/wiki/Interval_arithmetic ? I.e. you carry the lower and upper bound all the way?
In the patent he contrasts his "apparatus" with interval arithmetic. He says IA greatly increases computation (while his method doesn't) and requires twice as much storage (while his method doesn't).
To me, it looks like a specific mechanism for encoding the bounds and scale of error into a floating point representation, along with a pipeline for processing operations on operands of this form (presumably efficiently). So to me it looks like a specific variant of IA.
It looks like the purpose is to be implemented as an alternative to conventional floating point libraries and CPU modules. E.g., Intel might license this and add a floating point module based on this + instructions to access it to a future CPU. (Well, even if it's great and all is as advertised, and proves to be generally useful, I'm not sure it would jump right into the CPU. It would probably have to grow more organically first, but that's another discussion.)
I mean, I have no idea if this does all of what it says or if it does, whether that would prove to be generally useful enough to make it out of niche cases.
But it's interesting.
> “In the current art, static error analysis requires significant mathematical analysis and cannot determine actual error in real time,” reads a section of the patent. “This work must be done by highly skilled mathematician programmers. Therefore, error analysis is only used for critical projects because of the greatly increased cost and time required. In contrast, the present invention provides error computation in real time with, at most, a small increase in computation time and a small increase in the maximum number of bits available for the significand.”
I'm not sure how much it increases computation time, but software for exactly this is freely available, see for instance Arb: https://github.com/fredrik-johansson/arb
I'm the main author of Arb. Note that it's an arbitrary-precision library. It's ~100x slower than hardware floating-point because it uses arbitrary-precision floating-point numbers implemented entirely in software. But if you have to do arbitrary-precision arithmetic to begin with, Arb's error tracking only adds negligible further overhead.
For machine precision, I believe ordinary interval arithmetic is still the best way to go. Unfortunately, this not only uses twice as much space; the time overhead can be enormous on current processors due to switching rounding modes (there are proposed processor improvements that will alleviate this problem). However, the better interval libraries batch operations to minimize such overhead, and it's even possible to write kernel routines for things like matrix multiplication and FFT that run just as fast as the ordinary floating-point versions (if you sacrifice some tightness of the error bounds).
Regarding the article, using a more compact encoding for intervals is a fairly old idea and I'm not really sure what is novel here.
> It's ~100x slower than hardware floating-point because it uses arbitrary-precision floating-point numbers implemented entirely in software.
Thanks for the numbers, how did you get that estimate? Did you consider SIMD?
Does anyone know a similar library for Rust?
Arb depends on lots of other numerical libraries (namely FLINT, MPFR and GMP or MPIR). If you want pure-Rust alternatives, the ecosystem is just not there yet.
It's a plain C library with an API very similar to GMP, so an option would be to wrap it from Rust, which should not be too difficult.
> “Apparatus for Calculating and Retaining a Bound on Error During Floating Point Operations and Methods Thereof”
It seems to be a system where the hardware design itself keeps track of the accuracy losses in floating point calculations, and provides them as part of the value itself.
The title is (predictably) exaggerated, but it's an interesting idea, and could potentially be a significant improvement in particular use cases.
Patent in case anyone is curious
Thanks for the link.
Looking at the claims, it looks like he's patented an augmented floating point unit (hardware) that does bounded arithmetic. The #1 claim is "A processing device [with a] FPU [and a] bounded floating point unit (BFPU)." All the following claims are "The processing device as recited in claim 1" (e.g. CPU+FPU+BFPU) with subsequent changes.
The usual advice w.r.t. patents is to not read them.
This may seem odd, but it can be the difference between knowing and unknowing infringement. Knowing infringement results in triple damages.
IANAL — just repeating consistent advice I have received
Unless you are planning to infringe (which is knowing in itself), this is very bizarre advice.
Reading a patent is more likely to make you not infringe upon it than to make you knowingly infringe upon it.
Reading a patent will make you not infringe in the short term, but will you remember you got the idea from that patent in ten years?
For people who are writing novel software it can be better to always avoid reading patents, that way they can honestly state they haven't read a specific patent.
Accidental infringement is possible whether you've read the patent or not. I was once told not to even mention the possibility of the existence of a patent as that would be evidence used to go for triple damages.
Perverse incentives indeed.
Mathematica has the cool ability to do symbolic tracking of numerical precision, so it can tell you when, for example, your differential equation solver is giving you meaningless results.
Is there a reason why we can't have that as a compiler warning?
Because it depends on the algorithms you put a float through.
Addition has a maximum error of 1 LSB. Makes sense: the last bit could have been "rounded off" (1.5 is binary 1.1; drop the last bit and you store 1), so 1.5+1.5 == 2 when really 3 should have been returned.
Subtraction has unlimited error bounds (!!!). Well, I guess there are only 53 bits in a double-precision float, so subtraction can theoretically create 53 bits of error.
In practice, you need to keep track of the error bounds during the runtime of the program. It's not something that can be computed at compile time. After all, addition of a positive and a negative number IS subtraction (so some subtractions are additions, with accuracy of 1 LSB, while some additions are subtractions, with unlimited error bounds).
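A quick demo of the subtraction case with plain Python floats (catastrophic cancellation; the printed values are what CPython actually produces):

    a = 1.0 + 1e-15
    b = 1.0

    # Each operand carries at most ~1 ULP of representation error, yet:
    diff = a - b
    print(diff)                          # 1.1102230246251565e-15, not 1e-15
    print(abs(diff - 1e-15) / 1e-15)     # ~0.11: about 11% relative error after one subtraction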
Yes you can, if your programming language supports dependent types: https://bluishcoder.co.nz/2013/05/07/ranged-integer-types-an...
At a glance this reads similar to Interval Arithmetic in that it places bounds on how much error a value carries.
Is there something more novel to his approach?
I think the novel part is in the encoding of the error in the bits of the value. It's hard to see how much value this patent really holds.
Here’s the issued patent: https://www.google.com/patents/US9817662
Note that it’s a claim on the processing unit implementation (e.g. the FPU), not the method.
Nonetheless, I’d be very surprised if this stands the test of interval arithmetic prior art.
The beauty of the patent is that it will never be tested, being of no practical value, for numerous reasons already mentioned in other comments, plus a few more not worth the bother of going into.
So the inventor gets a patent number for his LinkedIn profile, and USPTO get their fee, and that's the end of it. A win-win for all involved.
Earlier HN discussion of the phenomenon: https://news.ycombinator.com/item?id=16015371
"Win-win"...not at all.
Patents like this have "threat value", which is often happily exploited by "IP monetization" companies, contingent law firms, etc.
This is the kind of stuff that turns into 100x $50k settlement demands.
Unless the article is missing some important nuances, this is just "range arithmetic" or "interval arithmetic" from the 1950s. Here's a Wikipedia page explaining how it works: https://en.wikipedia.org/wiki/Interval_arithmetic
Wouldn't something like what Douglas Crockford built with DEC64 be more useful and practical?[0]
This looks so obvious. How could this be patented? The real question is why no one has already implemented it. It wouldn't surprise me if it already exists.
If no one's thought of it before, how can it be obvious? If they have thought of it, the prior art can be brought to the attention of the patent office and the patent invalidated.
Terrible title with a terrible description of the invention.
What he is doing appears to be interval arithmetic: https://en.wikipedia.org/wiki/Interval_arithmetic
Because we don't have infinite computer memory or processing power, numbers have to be finite, so no one will ever "solve the floating point error problem". However, being able to quantify the error is both extremely useful and extremely complex, because you have to try to determine how the error propagates through all of the operations applied over the original input values.
In science this is also done based on the precision of the raw data, roughly through selecting a sensible number of significant figures in the final calculation. In other words, they omit all of the digits they deem to be potentially outside of the precision provided by the raw data, e.g. your inputs are a: 123.456 and b: 789.012, but your result from some multistep calculation is 12.714625243422799; obviously the extra precision is artificial and should be reduced to something slightly less than the input precision (because it will have been rounded).
For floating point math this is about going a step further by calculating the propagation of error from the end of the maximum length significand provided by IEEE 754 (where anything longer causes rounding and thus error), and trying to quantify how that window opens wider and wider as those rounding errors propagate towards more significant digits as more operations are performed. With interval arithmetic this is done by keeping track of the upper and lower bounds of that window (the real number existing somewhere within that window).
This doesn't solve any of the many issues that floating point math has, but it allows whatever is consuming it to potentially assign significance to the output of a calculation more precisely, i.e. so that you can say 1369.462628234m is actually 1.4e3m (implying ± 100m), perhaps translating into understanding that your trajectory calculation isn't actually as accurate as the output looks, but instead the target has a variance of up to 100x100 meters.
I expect the patent details a hardware implementation to make this practical at the instruction level rather than a likely very slow software implementation.
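A minimal sketch of that significance-assignment step (my own helper function, just for illustration, using the numbers from the comment above):

    import math

    def to_sig_figs(x, n):
        """Round x to n significant figures."""
        if x == 0:
            return 0.0
        digits = n - 1 - math.floor(math.log10(abs(x)))
        return round(x, digits)

    # Keep a little less precision than the inputs justify:
    print(to_sig_figs(12.714625243422799, 5))   # 12.715
    print(to_sig_figs(1369.462628234, 2))       # 1400.0, i.e. 1.4e3 (± 100 m)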
Obvious crank.
I wrote an interval arithmetic package once too. It was slow, because it had to change the FP rounding flags multiple times for some operations.
In the end, it seemed like any substantial computation ended up having extremely wide bounds, much wider than they deserved. Trying to invert a matrix often resulted in [-Inf .. +Inf] bounds.
Here's a link to the patent https://patents.google.com/patent/US9817662B2/en?oq=No.+9%2c...
He reinvented interval arithmetic, it sounds like.
Funny. There was a project at Sun Labs in the early 2000s that went way down this road. Without looking at its specifics, I am still surprised that the patent was accepted.
This appears to be complete nonsense.
You can't patent math.
Has he solved http://0.30000000000000004.com/
Not really, the idea is to store the amount of error in the binary representation of the number. When converting from decimal "0.3" to this floating point representation, it's more like 0.30000000000000004 ± 0.00000000000000004
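For reference, the 0.30000000000000004 in that site's name comes from 0.1 + 0.2; these are the exact doubles Python is working with (standard library only):

    from decimal import Decimal

    print(0.1 + 0.2)             # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)      # False
    # The doubles actually stored are:
    print(Decimal(0.1 + 0.2))    # 0.3000000000000000444089209850062616169452667236328125
    print(Decimal(0.3))          # 0.299999999999999988897769753748434595763683319091796875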
I don't think he has. With his approach you would get results that are honest about their precision, like '3 plus-or-minus .0000000000000010'. That still doesn't help you decide whether or not the result is actually 3.