Show HN: Numbat – A programming language with physical dimensions as types (numbat.dev)
How does something like `let y: Time = 1 year` work? Does it take into consideration the idea of leap years and leap seconds, counting a single year as 365.2422 days[0]? Or does it count as 365 days?
I got curious and installed the CLI tool[1] and found that it does indeed account for leap seconds / leap years:
>>> let jahr: Time = 1 year
>>> let tage: Time = jahr -> days
>>> tage
= 365.243 day [Time]
I feel like that's unit confusion. Converting from year to day should require you to specify what calendar year you're in to resolve the ambiguity. Otherwise I set an alarm for today + 1 year and things are off by 6 hours.
Time is nasty because there’s lots of overloaded concepts and bugs hide in the implicit conversions between meanings.
I'm also kinda curious what the underlying type is. Is it a double, or do they use arbitrary-precision math and take the perf hit?
> I feel like that’s unit confusion. Converting from year to day should require you to specify what calendar year you’re in to resolve the ambiguity.
A year has two meanings - a calendar year, with all the leap days and seconds and timezones, or a duration of time. The latter is still useful, e.g. when someone states that Proxima Centauri is 4.2 light-years away, they don't want to deal with leap-days.
Decent time libraries have separate ways to deal with durations and dates.
Except there is also the sidereal year (https://en.m.wikipedia.org/wiki/Sidereal_year), so I would say they need explicitly different years.
So 'year' refers to the Gregorian year and is equal to 365.243 days [1]. We also have 'julian_year' which is equal to '365.25 days'.
We also have 'sidereal_day' equal to '23.9345 hours', and if you believe it is useful, we can also add 'sidereal_years'.
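For anyone who wants to check those values, a quick REPL sketch using the identifiers exactly as named above (I'm assuming 'hours' is accepted as a conversion target here, like 'days' earlier in the thread; output left out since it depends on the display precision):

>>> julian_year -> days
>>> sidereal_day -> hours

which should come back as 365.25 days and roughly 23.9345 hours, respectively.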
Do you have a calendar_year and a calendar_leap_year?
OK, so I did some reading and I like what I see. It's important, however, to properly disambiguate between the two kinds of time units.
Chronological time units and calendrical time units... These are fundamentally different concepts that overlap a lot in day-to-day life, but when you need to ensure technical accuracy they can be very different.
- Planck time, Stoney time, Second: Unambiguously valid for both chronological and calendrical usage. Since we define everything in terms of the second anyway it's basically the centre of the Venn diagram.
- Sidereal day: This isn't a fixed value over longer periods of time; it is getting longer at a rate on the order of 1.7 milliseconds per century [1]. A conversion of a short period like 7 sidereal days into seconds is therefore going to be off by something like 3.26×10^-7 seconds, which might be OK, particularly if you also track the precision of values to avoid introducing false precision in the output: you can then truncate the precision to above the error margin for a calculation like this one and treat it unambiguously as valid for both calendrical and chronological purposes. (A quick numeric check of that figure is sketched just after this list.)
- It's also worth noting, since you mentioned it, the slight difference between the tropical year (the time the earth takes to complete one cycle of the seasons, equinox to equinox) and the sidereal year (the time for the earth to complete one orbit relative to the fixed stars): the sidereal year is longer due to the precession of the equinoxes.
- Minute, Hour: These can vary in length by up to a second if a leap second is accounted for, so while conventionally fine for chronological calculations as fixed multiples, they don't have precise chronological values when used in calendar calculations. The exact number of minutes between now and 2030 is fixed, but the number of seconds in those minutes is not.
- Day: In addition to leap seconds, the length of a calendar day also has to deal with the ambiguity of daylight saving time, and this is where the significant differences between calendrical and chronological calculations really start to kick in.
- Week, Fortnight: All the problems of days, but magnified by seven and fourteen respectively. Also, there's the concept of standard business weeks and ISO week calendars, where some years wind up with more weeks than others due to the ISO-week calendar rules.
- Month: Obvious problem... "which month?" There are quite a few fewer seconds in February than in October.
- Julian year, Gregorian year: These are conventionally defined by how many days they have and by the leap-day rules, and then approximated to an average seconds value, so you can "pave over" the problem here, and a lot of people might not be as surprised as they would be if you averaged the length of a day or a month.
- Decade, Century, Millennium: These are all affected by leap-day rules, and over a given length of time you see the introduction of an unknown but somewhat predictable number of leap seconds. So while you can average it down yet again, the problems of anything bigger than a day have reached the point where over a millennium you're dealing with approximately 0.017 seconds of change in the rotation of the earth.
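As promised in the sidereal-day item, here is a back-of-the-envelope check of that 3.26×10^-7 s figure, done in Numbat itself (the 36525 days is just 100 Julian years; output omitted):

>>> 0.0017 seconds / 36525 days * 7 days -> seconds

which works out to roughly 3.26×10^-7 s, i.e. 1.7 ms of drift per century pro-rated over 7 days.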
Doing this right is basically incompatible with doing it the easy way. I'd at least re-label the averaged time units to make the use of average approximations more obvious, and ideally I'd split the time types into calendrical and chronological, and use more sophisticated (and, I'll be the first to admit, annoying to implement) calendar math for the calendrical calculations.
[1] - Dennis D. McCarthy; Kenneth P. Seidelmann (18 September 2009). Time: From Earth Rotation to Atomic Physics. John Wiley & Sons. p. 232. ISBN 978-3-527-62795-0.
The entire type system needs to be parameterized by the inertial frame of reference, too.
Some people may think you’re tossing out a sarcastic joke here… but unambiguously fuck yes … because doing this kind of preemptive typing, the forward thinking to “frame of reference” is basically the next step after overhauling everything to disambiguate between calendrical and chronological timekeeping and units…
Because fundamentally you can't correct for the reference frame if you can't work out whether you're dealing with chronological or calendrical units. Calendrical units are in a weird liminal space outside of the earth reference frame. We measure the history of most deep space missions by earth-reference-frame mission elapsed time, and do so by keeping a clock on earth and silently keeping records of the vehicle clock. But on Mars we have a per-mission Sol count that brings Mars time into the mix, and I know for a fact a lot of people neglect the barycentric gravity-gradient difference between Earth and Mars, because for literally 99.9% of things it doesn't matter... but if you measure a transit of an astronomical body from instruments on Mars and don't deal with the relative reference frames, your fractions of an arc second are basically pointless false precision.
It would be fun to play an interstellar mashup of Sid Meier's Civilization and Kerbal Space Program.
Isn't that Sid Meier's Alpha Centauri?
Alpha Centauri is simply a weird earth. I think that a game around long-lived creatures that colonize a galaxy and have to work with relativistic effects could be different and fun. At some level it builds on all these different ways of looking at time. Breathe some life via simulation into this tongue-in-cheek interstellar economics: https://www.princeton.edu/~pkrugman/interstellar.pdf
They will not have to work with relativistic effects. Nobody is going to fly faster than 0.1c. It is prohibitively expensive energy-wise and there is no point in doing that. Just accelerate to 0.01c, and arrive at the neighbouring system in 400 years. 400 years is a blink of an eye, anyway.
365·243 ought to be 365·2425 exactly:
Per 400 years, there is one leap day every 4 years (100 leap days), except when the year is divisible by 100 (so we overcounted by 4 and there are 100 – 4 = 96 leap days), except when the year is divisible by 400 (so we need to add that day back and arrive at 100 – 4 + 1 = 97). This gives us 97/400 = 0·2425.
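If you want to replay that arithmetic in Numbat itself (REPL output omitted here, since it gets rounded to six significant digits by default, as noted a bit further down):

>>> 97 / 400
>>> 365 + 97 / 400

which evaluate to 0.2425 and 365.2425 respectively; the latter is what shows up as 365.243 at the default display precision.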
The tropical year is about 365·24219 days long, but that's not relevant to timekeeping.
> 365·243 ought to be 365·2425 exactly:
Yes. This is also how it is defined: https://github.com/sharkdp/numbat/blob/ba9e97b1fbf6353d24695...
The calculation above is showing a rounded result (6 significant digits by default).
That's what I figured! But I thought the derivation would be fun to share with people reading the comments.
Just a fun related video from Neil deGrasse Tyson https://youtu.be/mKCvqzCrZfA?si=uW4FOXg2nrqic1ZP
How does Numbat infer the type/unit for operations that involve transcendental functions like the trigonometric functions and the natural logarithm? How about for exponents that are not integers (rational or irrational)?
I've seen, from time to time, some empirical equations that result in this. Usually it's just hand-waved away and the non-integer units are ignored or dropped when the transcendental function is evaluated. I guess the trivial answer is probably to add an extra factor with a value of 1 carrying the reciprocal units, so that the argument ends up being the dimensionless type 1?
Those types of functions are only allowed to take in dimensionless (ie scalar) values. Numbat seems to handle this correctly, see [1].
[1]: https://numbat.dev
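e.g. something like this (a sketch of what you can try in the web REPL; I haven't reproduced the exact error wording here):

>>> cos(30 deg)
>>> cos(30 cm)

The first evaluates to about 0.866; the second is rejected with a type error, since a Length is not convertible to a Scalar.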
I guess that's logical, still a bit tedious in converting values to scalars...
To be fair, these functions aren't defined for physical quantities.
As a physicist, if you ever see units on the parameter to a trigonometric function, you can be fairly certain something is wrong.
You can derive this from how e.g. cos(x) can be written as a power series, something like 1 - x^2/2 + x^4/24 - ...; and since 1 and x^2 would have different units if x is anything but unitless, you've goofed.
That's a big reason why units are used in calculations in the first place. They basically act as a sort of checksum for your calculations.
> As a physicist, if you ever see units on the parameter to a trigonometric function, you can be fairly certain something is wrong.
Exactly. Which is why trigonometric functions in Numbat have the signature

cos(x: Scalar) -> Scalar

and only take quantities as arguments that are implicitly convertible to a scalar, like 'cos(pi / 3)', 'cos(30 deg)' or 'cos(0.5 turn)'.

Whether or not angles should be considered dimensionless is actually a matter of academic dispute. If you want to know more, take a look at https://github.com/sharkdp/numbat/pull/167 and the references therein.
Radians vs degrees?
For radians, we can note that the length of a circular arc is rα, where r is the radius and α is the angle; since both the radius and the arc length have dimensions of length, we must infer that α is dimensionless.
Degrees follow a similar argument.
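Spelled out as a dimension check:

s = r\,\alpha \quad\Rightarrow\quad [\text{length}] = [\text{length}]\cdot[\alpha] \quad\Rightarrow\quad [\alpha] = 1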
Radians, gradians, degrees, minutes, seconds, thirds, revolutions are all ways of dividing up a circle
If the value passed to the function isn't dimensionless there's very likely a unit error somewhere.
I can't come up with an example where that would be a reasonable thing to do, but haven't thought too much about it.
You don't have to convert to scalars. Numbers have a type of 1 (= Scalar).
You can just call exp(3), for example: https://numbat.dev/?q=exp%283%29%E2%8F%8E
Or you could pass an angle quantity, which is convertible to a Scalar. Like cos(30 deg): https://numbat.dev/?q=exp%283%29%E2%8F%8Ecos%2830+deg%29%E2%...
If you find yourself wanting to do that, you're probably making the exact kind of error that the language is meant to prevent.
the classic problem for such number systems is linear algebra https://yosefk.com/blog/can-your-static-type-system-handle-l...
the issue is that each column and each row of a matrix can have different units. worse, gauss-jordan elimination chooses which rows to operate on dynamically. there is eventually a solution to this problem in c++ far down the comments thread
i don't see anything in https://numbat.dev/doc/type-system.html about aggregate types such as vectors, matrices, maps, arrays, lists, trees, records, etc., though it does have a suggestively named scalar type. i wonder how it handles this sort of thing
No, we do not have aggregate types in Numbat yet. But it is definitely something I would like to support.
Note that it is possible to construct a type system solution to this problem (vectors/matrices with non-uniform units). A colleague of mine has an excellent talk on this: https://www.youtube.com/watch?v=SLSTS-EvOx4
By 'Scalar', in the document you referenced, we mean a dimensionless quantity. Not a scalar in the scalar-vector-matrix-tensor sense. Maybe I should rethink that notation.
thanks, i'll take a look
i know it's possible (even in c++ apparently) but so far it seems difficult
Interesting article... though to be honest, I'm not sure I buy the premise.
In the article, he states: "Let's call the matrix of all (xi 1) `X` and let's call the vector of all yi `Y`", and then states that the units of `X` are (m 1).
But if you instead say the units of `X` are just m (the "1" being in units of "m"), then the problem goes away: the whole matrix has the same units ("m"), and everything works fine.
I'll be honest in that I've never thought of a matrix having multiple units per column... and I can't think of any example where you aren't just putting the units in the wrong place. Let me know if you have a concrete example that might work.
If you multiply by a transform where the first entry is 1, the second entry is d/dx, the third entry is d^2/dx^2, etc., you will get a different unit for each entry of the resulting vector.
Also most of economics consists of linear systems with non homogeneous units. The first entry could be bushels of wheat, the next demand for steel, etc.
One way to think of this is that there are always multiple ways to break up the units when moving a system of equations into matrix form. What I’m trying to say is that there is always one way to break up the units that leads to matrices and vectors with a consistent unit.
Transform vectors like you mentioned can have an implicit 1 with the right units in them in order to make the vector have a consistent unit.
> Also most of economics consists of linear systems with non homogeneous units.
The system of equations needs to have some sort of unit consistency. Which means there is some way to split up the units to keep the matrices and vectors with consistent units or the math doesn’t actually make sense.
I don't understand the point you are making. These systems are intrinsically non-homogeneous in their typing.
In input-output analysis in economics, for example, the elements of a vector represent the amounts of a commodity (coal, steel, electricity, etc.), and the columns of the demand matrix represent how much of each commodity is required to produce one unit of that commodity as output. So the type of row 1 is "kg of coal", the type of row 2 is "kg of steel", the type of row 3 is "kJ of energy", etc. Given this matrix and a vector representing the starting quantities, you can do some linear algebra to get a matrix representing how much of each quantity to allocate to each sector, and a vector representing the resulting output. The type of these vectors is "[kg Coal, kg Steel, kJ electricity, ...]".
I don't know how you can "have an implicit 1 with the right units in them in order to make the vector have a consistent unit" given this setup.
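To make that concrete, here is a toy two-commodity version of such a system, x = Ax + d, in my own notation (purely illustrative, not taken from any particular textbook); every cell of A carries its own unit:

\[
\begin{pmatrix} x_{\text{coal}} \\ x_{\text{steel}} \end{pmatrix}
=
\begin{pmatrix}
a_{11}\,\tfrac{\text{kg coal}}{\text{kg coal}} & a_{12}\,\tfrac{\text{kg coal}}{\text{kg steel}} \\
a_{21}\,\tfrac{\text{kg steel}}{\text{kg coal}} & a_{22}\,\tfrac{\text{kg steel}}{\text{kg steel}}
\end{pmatrix}
\begin{pmatrix} x_{\text{coal}} \\ x_{\text{steel}} \end{pmatrix}
+
\begin{pmatrix} d_{\text{coal}} \\ d_{\text{steel}} \end{pmatrix}
\]

There is no single unit you can attach to A as a whole; each cell's unit is fixed by its row and column.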
being implicitly non-homogeneous doesn't mean they're logically intractable
you can still imagine some kind of static analysis that gives you an error if you accidentally write an algorithm that might add kgs of steel to kilojoules, and which is sufficiently powerful to accept at least the commonly-used numerical algorithms; not just matrix multiply but, in your example, probably the simplex algorithm
(we could call that analysis a 'type system', but perhaps obviously, it is significantly different from the type systems we're most familiar with)
i agree that this doesn't end up with vectors of homogeneous types, but it does end up with vectors of the correct type, which might be what was meant by 'consistent'
Yes, ok I agree with this.
> there is always one way to break up the units that leads to matrices and vectors with a consistent unit.
maybe i don't understand what you're saying but it sounds like you don't understand the problem
it's not that the units are inconsistent in the sense of adding kilograms to kilojoules; they're just inhomogeneous
you have a potentially large matrix in which potentially every cell has different units. so type systems that require the matrix cells to all be of the same type aren't helpful unless that type is something machine-oriented like `real*8`
moreover this is intrinsic to the problem, or at least the decision to use linear algebra on the problem
an additional difficulty for things like *gemm or lu factorization is that you want them to be applicable to matrices of any size with any type of units of measurement, as long as they're consistent; so they're parametrically polymorphic over units of measurement
i hadn't thought of it either before auditing an introductory numerical methods class, but in retrospect the page i linked does explain this, repeatedly, in the comments; i just didn't understand it at the time
basically it's very very common
If I’m reading this correctly, the Frink language has similar features (and also seems darn useful!) https://frinklang.org/
I looked at Frink quite a bit and I agree that it looks cool. But as far as I can tell, it does not have a static type system. At least it's not based on physical dimensions.
And it is not open source.
A month cannot be unambiguously represented as a length of time, as the length of a month is variable.
It looks as if they're defining a month as some amount of days. (30.4369, given the following.)
>>> 1 day + 1 month
1 day + 1 month = 31.4369 day

Just playing around with this and really enjoying it.

>>> 1 month -> seconds
= 2_629_746 s [Time]
Still reading the article, and really intrigued by this idea; I love it. But can I just say that I find this intro extremely well written. I feel like the writer is reading my thought process almost. Every time I start to form questions about decisions made in the language, it seems to answer them one or two paragraphs later. I feel like this is incredibly rare and yet incredibly important in an intro piece. Nicely done on the author['s]/[s'] part.
This is really cool!
If the author is around, I notice in the README you mention the GNU units program, which I use quite a bit. I'm curious if you've made any notable divergences from it?
Thank you.
I don't think we diverge from units on purpose anywhere, but I am probably not very familiar with its syntax, to be honest. I did however look at its huge collection of units and checked if there were any important units missing. I think we have a pretty comprehensive list of units that are supported by now in Numbat [1].
First project I've ever seen that made me instantly sponsor it. Then to only find out you wrote fd and hexyl and some other classics. Great work.
In theory it could help me compute how many ore belts I need for producing a full belt of white science in Factorio?
Impressive!
One thing I find unintuitive is the implementation of variable length units (like months or years) as an averaged constant. Then again, perhaps that’s the point - and e.g. working with months without specifying a calendar is just unintuitive.
After having worked on accounting software for a living, which did need to precisely take into account the variable nature of time measurements, I greatly appreciate that Numbat has decided not to open that can of worms.
Using averaged constants like this isn't applicable to all use-cases, but has the virtues of being simple and easy to reason about.
IMO they did exactly the right thing by not trying to shoehorn an endless supply of complexity and pain into their language's type system. Users who do need to deal with date math can use a library.
Can it be used as a date calculator? I didn't see a way to specify "today" or "tomorrow". Like "today" + 14 days etc.
If you're looking for a general solution to that specific type of problem, then dateutils¹ is really useful. It deals with conversions, durations, matching, repeats, etc. It also has a clear and well-defined date input format, and being a collection of command line tools it is easy to mangle it into other tools.
thanks for the recommendation, I hadn't come across this before and it looks really useful!
I was also looking into this, and couldn't find a way to do it.
My guess is that it has support for time, but not for calendars. The fact that a "month" is a static length points in this direction.
Maybe there's going to be a calendar module in the future though?
Apparently this replaces https://github.com/sharkdp/insect
Yes. If you want to know more, you can read about it here: https://github.com/sharkdp/numbat/blob/master/assets/reasons...
It’s interesting that they support currency units. Wonder how the conversion works. Probably have to provide conversion factor as well?
It'll fetch current conversion rates from the internet.
Yes. We use up-to-date currency exchange rates from the European Central Bank [1].
https://github.com/sharkdp/numbat/blob/786512175b99c195a7d5b...
That's one of those things where you can spend quite some effort to get right, but it's not going to be useful for anything deeper. That's like saying "here, we have implemented Newton's mechanics in fundamental types, now you can in theory simulate a body up from its atoms!", and while that would sound compelling if you don't know much about physics, it would quickly become useless once you know more about physics.
Modeling compile-time units was one of the memes of c++ 11. One of many examples:
https://benjaminjurke.com/content/articles/2015/compile-time...
So if I try to get the Falcon to do the Kessel run in two Parsecs, Numbat will raise an exception.
Because that’s too short (12 parsecs being close to the theoretical maximum without being swallowed by a black hole) or because it thinks that you confused a unit of distance and speed?
The latter. FYI - Han Solo makes that mistake in the Original Star Wars (A New Hope).
Han Solo’s quote makes sense if you understand it as a unit of distance, optimized by the onboard computer to be as short as possible, by dodging black holes, and therefore fast.
Aha, Lucas claims he meant distance too.
This is great. I want this as an addon to JS and Python.
For Python, there is the Pint library: https://pint.readthedocs.io/en/stable/
Another Python alternative that works well with numpy arrays is astropy.units (also mentioned on the Pint website):
Cool that you can use actual 1 × 2 multiplication sign or 4 ⋅ 8 instead of that kludge of an asterisk *
> On the other hand, 3 months + 2 lightyears is ill-typed, because the right-hand side is of type Length. You can change ‘years’ to ‘lightyears’ in the example above to see the resulting compiler error.
Why not both? Make lightyears both Length and Time
I can't resist mentioning F# Units of Measure aka UoM :)
https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...
F# is the only language I looked at that has a really powerful unit system. The author of that unit system wrote his PhD thesis on the subject [1]. Numbat's type system is actually pretty close to what was suggested in that thesis (in contrast to F#'s system, which is based on units — not dimensions).
Or unitful.jl in Julia which evaluates the units at compile time resulting in no performance loss at runtime.
And the equivalent uom in Rust.
Thank you for not resisting. F# is very hard to beat for a tight domain model.
Super cool! I love when programming languages try to do more with types.
Julia has a neat little library called Unitful.jl[1] which does almost exactly what Numbat does by taking advantage of Julia's extremely flexible type system. Extending it to cover all of Numbat's functionality could be trivially accomplished in a few lines. In fact, fun type magic like this is pretty much my primary motivation for using Julia in the first place.
Not that I want to detract from the author's hard work, of course, I'm just very excited about Julia! Numbat is obviously an awesome project, and I can't wait to see where it leads!
I looked at Julia and Unitful.jl quite a bit when designing Numbat. It looks great.
> Extending it to cover all of Numbat's functionality could be trivially accomplished in a few lines.
Look, I'm not claiming that Numbat is superior. But I think Numbat might have its own little niche. And even if not, it's always good to have alternative solutions available.
- Numbat is specifically designed to handle physical dimensions and units. It has a special syntax to give developers the best experience when dealing with units. You can just type "KiB" and have it be parsed as kibibytes. You can use the "->" operator to convert to other units. You can directly annotate functions with physical dimensions ("fn kinetic_energy(m: Mass, v: Velocity) -> Energy = …"; a small example of this follows below this list). Other languages and their unit libraries might have the same functionality, but it has always been added as an afterthought.
- Numbat has a static type system, Julia's is dynamic.
- Numbat can be compiled to Web Assembly and you can run it in your browser. I'm not sure if that is possible with Julia?
- Numbat might eventually be able to infer (physical dimension) function parameter types with its Hindley-Milner-style type system [1]. The only language that I know of that can do this is F#. The author of F#'s unit system wrote his PhD thesis on the subject [2], and Numbat follows the original approach in the thesis quite closely.
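To illustrate the function-annotation point from the first bullet, a minimal sketch (the function body and the numbers are mine; only the signature comes from the bullet above):

fn kinetic_energy(m: Mass, v: Velocity) -> Energy = 1/2 * m * v^2

>>> kinetic_energy(80 kg, 10 m/s) -> J

which comes out to 4000 J, while a call like kinetic_energy(80 kg, 10 m) is rejected at type-checking time because a Length is not a Velocity.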
> Numbat can be compiled to Web Assembly and you can run it in your browser. I'm not sure if that is possible with Julia?
It's fine these days. https://tshort.github.io/WebAssemblyCompiler.jl/stable/examp...
I think the biggest issue with unit systems though is that algorithms are not generally unitful. I develop the Julia SciML solvers (https://docs.sciml.ai/Overview/stable/) and there's certain pieces like the initial dt calculation of an ODE solver which are not canonically correct in a unitful sense. So there's certain pieces where you have to opt out of unit checks which gets a bit messy. But other than that Unitful.jl is fine, and that's an algorithmic thing not a unit package thing.
Or you could just use Nim [0], where this sort of thing can be implemented in Nim's macro system. Then you have a regular programming language combined with CT safe units. :)
It even pretty much looks identical to those Numbat snippets!
import unchained
let earth_mass = 5.972168e24.kg
let solar_mass = 1.9885e30.kg
let lunar_mass = 7.342e22.kg
let distance_sun = 1.AU # astronomical unit
let distance_moon = 384_400.km
let force_sun = G_Newton * earth_mass * solar_mass / distance_sun
let force_moon = G_Newton * earth_mass * lunar_mass / distance_moon
echo force_sun / force_moon
# 69593.6 UnitLess
Sorry for the shameless plug. ;) Numbat looks quite cool though, and the article talks about a lot of things to think about when writing such a program / lib / programming language.

'can be implemented' is different from 'has been implemented'
how does unchained handle gaussian elimination
Gaussian elimination in what context, even? If your LA library supports generic types, it might work. But generally, generic math operations are tricky to get right, because math often does things that from a pure physical perspective don't make a whole lot of sense, and you run into trouble with too many competing types due to temporary multiplications / divisions etc. (which is a big issue in any statically typed language, because your container (vector, matrix, tensor, whatever) type is typically a single unit type!)
most linear algebra requires vectors of multiple unit types. think of runge-kutta for a second-order system, for example, or just about any multivariate system. see https://yosefk.com/blog/can-your-static-type-system-handle-l... for more information
if your static type system can't handle that, it can't handle unit types for basic linear algebra subroutines
I mean that article of yours highlights the difficulties one encounters fairly well, I would say. I don't disagree that this is (generally) a tricky problem!
Nim allows you to do a lot, e.g. derivatives of a unitful expression with measurement errors [0]. But other aspects run into the reality of dealing with a static type system. For example in Measuremancer [1], the library handling measurements with uncertainties, each `Measurement` is a single generic `Measurement[T]`. Each measurement stores the derivatives for error propagation. Obviously the derivatives have different units. Now, we could make `Measurement` a two-fold generic, `Measurement[T, U]`, but that just makes things more unwieldy in practice, for not much gain.
Without rewriting a majority of existing code you will always run into trouble where your perfect unit type system will either break or you'll need to work around it anyway (e.g. calling into some C library for part of the code).
just to clarify, i didn't write the article; yossi kreinin did
you seem to be saying that it's impossible to formulate a type system that handles units correctly in contexts like generic linear algebra algorithms
i think you are mistaken about that
I nowhere said impossible! All I say it is tricky and that likely the code you write will have to be adapted in certain cases (compared to current existing implementations of numerical algorithms).
if you have to adapt it in certain cases because of limitations in the type system, from my point of view that amounts to the type system handling units incorrectly
in the same way that pascal's type system handled array sizes 'incorrectly' because you couldn't write a subroutine that could operate on an array of any size, just arrays of one size or another
you could say 'yes, well, the code you write will have to be adapted in certain cases (compared to current existing implementations of string algorithms)', like say if your string has more than 20 characters in it, but the reason is that you're doing something the type system can't handle
Would you be able to do nice looking syntax like this in Nim?
x²
x÷y
Yeah, unicode characters in Nim code are supported. However, if by `x²` you'd want to square an identifier `x`, that won't work. The Nim lexer parses `x²` as a single identifier.
We do have infix unicode operators though. So `x ÷ y` (spacing required though!) could be implemented easily. I use this for `±` in Measuremancer [0] (which btw also supports Unchained units, for error propagation on unitful measurements).
Required spacing is a blocker / a point of friction. That's one of the benefits of DSLs, I guess: being able to eschew the conservative syntax of general-purpose languages (though Unicode support is a good step forward).
Thank you for the reference. I hadn't seen Unchained.
You missed the point about this example though. I wanted to show how Numbat can help prevent the exact error that you made in your program. 'G_Newton * earth_mass * solar_mass / distance_sun' is not a force. It's an energy. How would you write type annotations in this Nim example to help you find that same mistake?
Whoops, guilty as charged. I did indeed glance over that (both the text and the equation not actually being 1/r²).
In this case to get compiler help it's pretty much identical to Numbat. Annotating the "force" variables with a `Force` type. There's a few technicalities involved though. Normally the user is supposed to use explicit type annotations for variables. Quantities like `Force`, `Energy` etc. are mainly supported for function arguments (they are "concepts" in Nim lingo, a specific version of generics).
Technically you can abuse these concepts for type checking, by annotating with `Force`, e.g. `let force_sun: Force = ...`. That will correctly raise a CT error about mismatching units in this case. However, the code will /also/ not compile if you type `Energy` due to the underspecified type.
So what you're supposed to do is:

import unchained
let earth_mass = 5.972168e24.kg
let solar_mass = 1.9885e30.kg
let lunar_mass = 7.342e22.kg
let distance_sun = 1.AU # astronomical unit
let distance_moon = 384_400.km
let force_sun: N = G_Newton * earth_mass * solar_mass / distance_sun
let force_moon: N = G_Newton * earth_mass * lunar_mass / distance_moon
echo force_sun / force_moon

which will correctly raise a

Error: type mismatch: got 'Joule' for '...' but expected 'Newton = Alias'

(or any other unit of force; note that if it was a non-SI unit, e.g. `eV` for an energy, you'd need to explicitly convert the RHS expression into `eV` using `.to(eV)`. That will perform the CT check of whether the conversion is valid and, if so, hands you the value in `eV`. Implicit conversions to explicitly given types are not supported.)

So I think we more or less do the same things. :)
And just to clarify, I'm always happy to see more libraries / programs etc. that do units as types. But for the same reason you hadn't even heard about Unchained, is the reason I can't stop myself from mentioning it in such a context. :D (niche programming language + niche topic clearly doesn't help).
Frink (https://frinklang.org/) is an older language with similar design goals.
Frink runs on the JVM and is also available on Android.
I use it as a general purpose calculator on my smartphone.
It's really nice that Numbat is written in Rust :) Will have to try it out.
There's also Insect for interactive use: https://insect.sh/
Insect is the predecessor project. Numbat is the replacement. It also has an interactive version at https://numbat.dev/
Frink is not open source, unfortunately.
>> Frink is not open source, unfortunately.
True: https://frinklang.org/faq.html#OpenSource
Thanks for sharing Numbat with us.
It looks great!
Reminds me of GNU Units https://www.gnu.org/software/units/
Ada’s type system also allows compile-time checking of units like physical dimensions:
https://gcc.gnu.org/onlinedocs/gcc-4.9.4/gnat_ugn_unw/Perfor...
You don't need a new programming language for something that C++ templates with integer constants give you for free.
Some of us don't want to wait until the heat death of the universe for the code to compile (not a snipe at Boost as a whole - Boost.Units is quite bad iirc). There are also performance problems if the compiler isn't able to see through all the templates, although that's true for many things in this space. Finally, iirc it doesn't let you define your own custom units, which is a common challenge with dimensional analysis as a library.
If rust ever gets more complete const generics, then things like tiny-uom might work.
Can you provide examples?
https://crates.io/crates/uom is trivially convertible to cpp
- you define a template class accepting 1 constant integer for each dimension you want to consider (length and/or time and/or mass and/or temperature);
- you define addition and subtraction between objects of one type returning an object of the same type;
- you define multiplication and division as returning an object of a type with the dimensions summed or subtracted;
- and you get dimensional correctness via type safety.
In Scala https://github.com/typelevel/squants
Came here to say this. As soon as I saw the first example on the linked web page, I immediately thought of C++. Reinventing the wheel.
> perform computations with physical units.
It has units awareness --- not necessarily only physical units.
For example: light, temperature and time are measured in units, but they are not physical units --- aka 3-dimensional.
All are physical units. In particular, in special relativity time is coupled with position and you must use four-vectors (t, x, y, z).
Time exists independently of location and cannot be visualized in any "physical" way. The same is true for lots of other units of measure.
https://phys.org/news/2012-04-physicists-abolish-fourth-dime...
Physical can be understood to mean "of or relating to matter and energy or the sciences dealing with them, especially physics." Light, temperature, and time are described by physics, so I would consider "physical" to be an accurate adjective for those units. You seem to be using a more limited definition of physical, closer to the word "spatial."
Also, btw, you might want to read the article you linked. The article is discussing a fringe idea that some scientists have. Scientists are free to explore new ideas, and they make for good clickbait for science journalists. Until those ideas gain traction, they don't provide compelling arguments, though.
That paper is not good.
In Special Relativity you have many four-vectors, for example:
* time-space (t, x, y, z)
* energy-momentum (E, px, py, pz)
* electromagnetic potential (V, Ax, Ay, Az)
and even a few 4x4 matrices, like the electric and magnetic fields together.
All of them change with the same equations. You can't break (t, x, y, z) and keep all the others.
And there are extensions to particle physics that mix Special Relativity and Quantum Mechanics that use the same equations, and they have an agreement between theory and experiment of 8 digits. You can't break (t, x, y, z) and keep all the others!
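(For reference, the standard boost along x; every four-vector listed above transforms with this same linear map, with t→E, x→px, etc. for the energy-momentum one:

t' = \gamma\,(t - v x / c^2), \quad x' = \gamma\,(x - v t), \quad y' = y, \quad z' = z, \qquad \gamma = 1/\sqrt{1 - v^2/c^2}
)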
Also, the paper is published in a journal I had never heard about. It's impossible to be sure, but it has a lot of single-author articles, which is a big red flag, and the titles of the other articles are too weird (Nobel prize or crackpot, nothing in between).