A Brief History of Random Numbers

sr.ht

97 points by jayshua 3 years ago · 54 comments

ggm 3 years ago

These are mostly about PRNG, not true random sources.

Brief funny story: my dad built the 5th computer in the UK, the ICCE [0], and one of the tests they wanted to run was statistical analysis over random number fields.

They approached the General Post Office (GPO), which had an RNG called "ERNIE" [1] that ran the postal investment bond lottery. This is a true RNG, based on avalanche-diode-style behaviour (actually, neon tubes). It was dressed up to look like a computer, but it was basically a detector device and an A-to-D converter. They asked for a million truly random numbers to run some tests over. Interestingly, ERNIE was made by somebody who worked on Colossus at Bletchley.

The GPO refused to share a feed of numbers: they were concerned the team would discover some predictable pattern in the number field and either destroy the post office bond scheme by revealing it, or use it to make millions.

[0] https://en.wikipedia.org/wiki/Imperial_College_Computing_Eng...

[1] https://en.wikipedia.org/wiki/Premium_Bond#ERNIE

  • DougMerritt 3 years ago

    ERNIE sounds kind of fascinating. And somewhat amusing.

    Going off on a tangent: An oft-neglected issue is that, even when the random source (like avalanched diodes) is actually sufficiently random, any apparatus that captures that randomness for use inherently causes a bias in the observations.

    Even if everything else is perfect (it usually isn't), in terms of signal processing, any observation window (e.g. a finite length of time of measurement) is an aperture which ends up getting convolved with the signal source being observed.

    It sometimes helps to convert the skew into white noise with a "whitening" post-pass algorithm.

    Using real-life randomness is still a good thing to do, of course; it's just that there are always real-world issues with anything and everything.

  • defrost 3 years ago

    Interestingly, ERNIE was made by TWO somebodies who worked on Colossus at Bletchley.

    From your ERNIE link:

        The designers were Tommy Flowers and Harry Fensom and it derives from Colossus, one of the world's first digital computers ...
    
    https://en.wikipedia.org/wiki/Tommy_Flowers

    https://en.wikipedia.org/wiki/Harry_Fensom

    (I travelled to Canada from Australia in the 1980s and interviewed William Tutte about codebreaking and the war. I missed out on an opportunity to talk to Tommy Flowers the following year and didn't return to the UK until after his death.)

    • ggm 3 years ago

      Like a lot of early machines, ICCE was made in part from ex-GPO relays (i.e., it was electromechanical). Valves were more expensive, and relays were flooding the market post-war.

      Guess which WW2 activity used a very large number of relays, was built by the GPO (who used relays heavily in telephony), and was discontinued at war's end as the encoding-and-decoding work scaled back, thus flooding the UK electronics market with relays...

      Tommy Thomas (head of the ERCC in Edinburgh, where my dad wound up and I therefore lived) worked at Manchester on the Mark 1. He never talked about it and I never asked, subsequently. We both wound up in Australia at the CSIRO, where Radiophysics had been started by people in the radar space, and that bled into their interest in computing. It's a small world. He was the head of the IT division and I was a lowly researcher, so our paths didn't cross much. I wish I'd talked to him more, about this stuff and computer history.

      • defrost 3 years ago

        Seems like a click-baiting kind of question that I'd surely Bombe.

        I ended up mainly doing geophysical field work with a lot of acquisition, processing, and interpretation coding work ... but I dabbled in symbolic computer algebra systems (in Australia) for a while and loitered a little in technical history of the borderline classified - I interviewed "for the record" Leonard Beadell, Jack Wong Sue, Mark Oliphant, people associated with the Mungalalu Truscott Airbase, etc.

        I seem to recall a fair bit was done here with early over-the-horizon radar work, but I didn't go far in that direction, ending up more in radiometrics and resource | energy tracking.

  • layman51 3 years ago

    Wow! Also, I think [0] is the first Wikipedia article I have come across that doesn't have a lead section at all.

    • ggm 3 years ago

      s/doesn't/didn't/ but it's probably grossly inadequate.

camel-cdr 3 years ago

A history of PRNGs without mentioning George Marsaglia is heresy.

Also, PCG didn't stop development. Nowadays, modern PRNGs explore the use of chaotic designs (without a fixed period), which are often faster than non-chaotic ones. Notable examples are the Romu family [0] of PRNGs, sfc [1], and tylov's sfc derivative [2].

Another thing that would be nice to mention is that we have come full circle: the good old middle-square method already used by von Neumann has been found to work very well if you scale it up and add a Weyl sequence. [3]
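
For a flavour of how simple that is, here is a minimal C sketch along the lines of the middle-square Weyl-sequence generator in [3] (the increment constant follows the paper's example; treat this as an illustration, not a drop-in implementation):

    #include <stdint.h>
    /* State: x is the squared value, w is the Weyl counter, s is an odd
       increment (example constant; [3] discusses how to pick a good one). */
    static uint64_t x, w, s = 0xb5ad4eceda1ce2a9ULL;
    uint32_t msws32(void) {
        x *= x;                     /* middle-square step         */
        x += (w += s);              /* add the Weyl sequence      */
        x = (x >> 32) | (x << 32);  /* rotate to keep the middle  */
        return (uint32_t)x;
    }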

Edit: And how could I forget, there has also been a lot of effort in using SIMD, e.g. by SHISHUA. [4]

Another thing to consider is how to efficiently map the generated numbers onto a given distribution. I'm not aware of any recent improvements in that regard, other than some approximations that have probably been reinvented a bunch of times.

[0] https://www.romu-random.org/

[1] https://numpy.org/devdocs/reference/random/bit_generators/sf...

[2] https://github.com/tylov/STC/blob/master/docs/crandom_api.md

[3] https://arxiv.org/abs/1704.00358

[4] https://espadrine.github.io/blog/posts/shishua-the-fastest-p...

Edit: I had a few names mixed up

nemo1618 3 years ago

Pseudo-random!

A new programmer reading this article would come away with the impression that, if they need random numbers, they should use xorshift or PCG, when in reality they should be calling getentropy(), or, if a syscall is too expensive, using a CSPRNG (e.g. ChaCha or BLAKE3) seeded with getentropy(). We now have RNGs that are both secure and really, really fast -- multiple GB/s fast -- so there are very few circumstances where a PRNG is truly necessary.
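
For reference, the seeding step looks something like this (a minimal sketch; getentropy() takes a buffer and a length of at most 256 bytes and is available on the BSDs, macOS, and Linux with glibc >= 2.25 -- the CSPRNG it feeds is left abstract here):

    #include <stddef.h>
    #include <unistd.h>   /* getentropy() */
    /* Fill `seed` with kernel-provided randomness, to be handed to a
       CSPRNG of your choice (e.g. a ChaCha-based stream). len <= 256. */
    int seed_csprng(unsigned char *seed, size_t len) {
        if (getentropy(seed, len) != 0)
            return -1;    /* very unusual; handle or abort */
        return 0;
    }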

  • SideQuark 3 years ago

    If people call getentropy() when it's not needed, it lowers entropy for places that do need it. There's not an infinite amount of crypto-secure randomness available for all processes to go nuts sucking it up. This is one reason not to use getentropy as a PRNG - it will cause problems with other entropy needs. So advising people to just suck it up whenever they feel like it, without truly needing it, is a bad idea.

    If one needs fast PRNGs, say for simulation, monte carlo stuff, etc. then CSPRNGs are a terrible idea. They're literally orders of magnitude slower than fast PRNGs. Almost nothing needs CSPRNGs (only things needing crypto level security, which is a tiny amount of the uses for PRNGs).

    In short, use the right tool for the job.

    • nemo1618 3 years ago

      > There's not an infinite amount of crypto secure randomness available for all processes to go nuts sucking it up

      In fact, there is! The idea that your kernel has some finite amount of entropy that can be "used up" is a persistent myth. See https://www.2uo.de/myths-about-urandom/

      Okay, it's technically not infinite, but 128 bits of "true entropy" is enough to seed a CSPRNG that will generate as many random numbers as you will ever need.

      > Almost nothing needs CSPRNGs

      This is how we end up with, e.g., hashmap implementations that use non-cryptographic hash functions, and then an attacker DoS's your server with 10,000 keys that all end up in the same bucket. Or "random" automated tests that always use the same PRNG with the same seed, meaning certain code paths will never be taken.

      We used to live in a world where developers frequently had to choose between safety and performance. Except for a few niche applications, that is no longer the case: we can have both. I certainly agree that there are situations where maximum performance is required, and security is pointless -- but 99% of the time, developers should not even be asking that question. They should just be using the default RNG provided by their language, which should be a CSPRNG, not a PRNG.

      • SideQuark 3 years ago

        >The idea that your kernel has some finite amount of entropy that can be "used up" is a persistent myth

        Thomas' Digital Garden blog is not really the place to find good advice on this. Crypto researchers are the place to look. The quality of how Linux handles this is quite open to debate, and researchers routinely question the choices made. Here's [1] one of many. Use Google Scholar, search for urandom, and limit to recent results to see more.

        The way urandom does stretching does not provide entropy (which follows from the second law of thermo). And yes, there is a finite amount. Developers have decided to conflate actual entropy with "hard to compute," which is simply not true.

        Yes, you can get 128 bits of true entropy, and then use a stream cipher to reuse it over and over, but it's not adding or making more entropy. It's simply that the stream cipher is not (yet?) broken, at which point you'll realize there is not enough entropy.

        By the argument of entropy stretching, you could claim you only need 10, or 2, or 1 bit of entropy, and then use a stream cipher, and have unlimited entropy. Since that would be broken quickly, showing the lie, they choose 128 or 256 or so, and hope the cipher method doesn't break.

        And yes, getting more and more from that stream weakens the system, and eventually, like all crypto, the method will break.

        >This is how we end up with, e.g., hashmap implementations that use non-cryptographic hash functions, and then an attacker DoS's your server with 10,000 keys

        Conversely, if you use an algorithm that takes 100x the time to hash, then they simply DoS your server by sending it any large batch of things to hash. For many hash uses, the PRNG is a significant share of the time spent. This is why there is no one-size-fits-all method.

        I have worked in crypto (and high speed computing and numerics) for decades, and have patents in crypto stuff, and have even written articles about PRNGs (and given talks on how to break them via SAT solvers and Z3 stuff), so I am well aware of the uses of all these pieces.

        If some area needs crypto, assuming an amateur will simply throw in the proper function call and magically have security is a terrible way to design or implement systems. If an area needs crypto, have a senior person decide what to do and walk the less experienced person through how it works, rather than leaving them guessing.

        Your advice leads to people thinking (incorrectly) that since they called getentropy to seed, now they are secure, when there are a zillion other attacks they are still open to (timing, forgetting how to use block ciphers, choosing the wrong hash map type in the case of hash maps, not handling salt correctly, not using memory buffers properly, not ensuring contents don't leak, not ensuring things running in the cloud don't leak, and on and on... forgetting any of a ton of things needed to make the thing secure).

        If you're worried about DDoS, paying 100x the time for a PRNG call can lead to similar system failures - in gaming, rendering, simulation, science stuff.

        So the same advice: use the one that is most suited. I always recommend defaulting to the faster one, since someone who doesn't know about crypto should never be implementing things that accidentally need crypto at such a tiny scale. And the slowdowns hit all software.

        >Except for a few niche applications

        The majority of running code in the world is not facing attackers - it's stuff like embedded systems, medical devices, tools and toys, programs on my PC (only a tiny few face the world), data processing systems (finance, billing, logging, tracking, inventory...), and so on. The largest use of PRNG calls is simulation by far, which most certainly does not want to pay the 100x performance hit.

        The niche applications are those needing a CSPRNG, which is truly the smaller fraction of pieces of code written.

        >Or "random" automated tests that always use the same PRNG with the same seed, meaning certain code paths will never be taken

        Repeatable tests are by far the easiest to fix. I've seen tons of people do exactly what you say and get an error they never see again. The correct answer here is not to rely on randomness executing a path in your code for testing. Make sure that the code is tested properly, not randomly. Relying on the luck of random number generators is simply bad advice. Use them to get more values, or to stress test for load testing, but expecting a lucky number pick to happen on test 1 of a trillion runs is not very solid advice compared to simply running the test, repeatably, with enough runs to provide the level of stress you desire. At least then you can redo the test, find the bug, and fix it in reasonable time.

        >that is no longer the case: we can have both

        No, we do not. I regularly work on crypto code and often get called in to go over stuff with others. I also write lots of high performance code (mostly scientific, simulation, and the occasional rendering needs) that needs speed. There is no PRNG that meets both needs by a long shot.

        > They should just be using the default RNG provided by their language,

        Agreed.

        > which should be a CSPRNG, not a PRNG

        And that is not true for any language I am aware of for the reasons I mentioned.

            Python: Mersenne Twister
            C#: Xoshiro
            JavaScript:  xorshift128+
            Go: Additive Lagged Fibonacci
            C/C++: large range, from really bad to mostly bad
            Java: LCG
        
        And so on......

        [1] "An Empirical Study on the Quality of Entropy Sources in Linux Random Number Generator", https://ieeexplore.ieee.org/abstract/document/9839285

        • oconnor663 3 years ago

          > Thomas' Digital Garden blog is not really the place to find good advice on this.

          I prefer DJB's blog on this: https://blog.cr.yp.to/20140205-entropy.html

          >> The Linux /dev/urandom manual page claims that without new entropy the user is "theoretically vulnerable to a cryptographic attack", but (as I've mentioned in various venues) this is a ludicrous argument—how can anyone simultaneously believe that

          >> - we can't figure out how to deterministically expand one 256-bit secret into an endless stream of unpredictable keys (this is what we need from urandom), but

          >> - we can figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.)?

          • SideQuark 3 years ago

            So you'll believe a 2014 djb blog post over the following near decade of peer-reviewed research?

            That's how we got here.....

        • nemo1618 3 years ago

          > Developers have decided to conflate actual entropy with "hard to compute," which is simply not true.

          True, they are different, but there is no meaningful distinction between a value that is "truly random" and a value that could only be computed with a computer larger than the universe.

          > eventually, like all crypto, the method will break

          It's disheartening to see this claim being made by someone who regularly works on crypto code. When it comes to symmetric encryption, the war between cryptographers and cryptanalysts is over -- and the cryptographers have won. The security margin provided by modern ciphers like ChaCha20 is so high, and attacks on them so pitiful, that there are now calls for reducing the strength of ciphers in order to increase performance without sacrificing a meaningful amount of security: https://eprint.iacr.org/2019/1492

          ChaCha20 will not be broken in our lifetime; probably it will never be broken, in the sense that an attacker will be able to observe any subset of the keystream and predict the next block (which is what we need from a CSPRNG).

          Anyway, given these premises (which AFAIK we both agree on):

            1) There is no "one-size-fits-all" RNG
            2) Using a PRNG instead of a CSPRNG may lead to security vulnerabilities
            3) Using a CSPRNG instead of a PRNG may lead to performance degradation
          
          Which type of RNG should be the default (e.g. the one you get if you type 'import rng'), and which type should the programmer have to ask for specifically? That's the question at hand here.

          • SideQuark 3 years ago

            >True, they are different, but there is no meaningful distinction between a value that is "truly random" and a value that can be computed with a computer larger than the universe.

            Sloppy thinking and conflating different ideas are not a good way to think about computer security.

            >ChaCha20 will not be broken in our lifetime; probably it will never be broken

            As was said of the zillion currently broken cryptosystems, hashes, and all manner of security schemes......

            > the cryptographers have won.

            Is this why NIST routinely is asking for better crypto systems? Because crypto is solved?

            > there are now calls for reducing the strength https://eprint.iacr.org/2019/1492

            Yet followup papers often invent new methods of attack https://eprint.iacr.org/2022/695. It's almost as if theoretical advances can change the unproven-yet-assumed strength of previous methods.

            >Which type of RNG should be the default (e.g. the one you get if you type 'import rng')

            I already demonstrated that the answer is a PRNG for pretty much all widely used languages, which I agree with. There's simply no CSPRNG that ports across the widespread systems these languages are used on, so it's silly to continue to argue that they should default to a CSPRNG. CSPRNGs are not used by default, never have been, and there is no trend toward that which I can find, all for the reasons I gave in my very first reply in this thread.

        • camel-cdr 3 years ago

          Note that most of the defaults are actually not even good non-CS PRNGs.

          The JavaScript one is decent (only because it's only used to generate floats).

          The MT in C++ and Python isn't all that bad, but it's huge and it fails statistical tests, albeit pretty late.

          The others are slow and low quality (if I remember my tests correctly).

          • SideQuark 3 years ago

            >Note that most of the defaults are actually not even good non-CS PRNGs.

            Agreed - this is mostly due to age. Mersenne Twister appearing in so many places is terrible since it has bad mixing properties (especially around 0, but the same thing shows up elsewhere), is error-prone to seed as a result, and is huge and slow.

            I doubt default C++ rand() ever uses MT - it's far too slow. It appeared in Boost, then in the std (and like all C++ stuff, appeared a decade after it was obsolete, unfortunately).

            The xo* style ones are decent, but PCG seems to outperform them at just about everything someone needs from a fast PRNG with good statistical properties.

            >The JavaScript one decent (only because it generates floats using it).

            I can almost guarantee they make the usual errors trying to convert to float. :)

            Converting PRNG output to float is a notoriously error-prone minefield, so if a language makes it into a float, you can almost certainly assume it is not a good RNG value.

            The first problem many make is that the underlying source had better be uniform as integers, and I've run across many language implementations that were not. PRNGs with period 2^N-1 are a common source of error compared to those with 2^N periods. The PRNG should also have lots of other nice properties, such as k-equidistribution for high enough k. Many more fail that.

            EDIT: ha ha - called it. Here are [1] the JavaScript (V8) RNG functions. They at least admit the double [0,1) is not uniform :) They start off just as I claimed would happen: they use a 2^N-1 period and push this non-uniform value into the mantissa as raw bits (guaranteeing non-uniformity for anything downstream), then in other places in the code they take this at-most-53-bit mantissa, multiply by a 64-bit value, and use that as if it were truly 64 bits. What a mess... This is the state of most libraries when I inspect them... This type of crap shows up when you run large numerical simulations (weather, nuke testing, giant finance, physics/planetary sims...) and the underlying bias craps out your results. It's hard to test for events occurring once in trillions when the underlying code is so flaky.

            Then, and here is the great part: they quite often simply divide this by 2^N as a float, which means lots of possible floating point values are not represented, and they're certainly not represented with the frequency one needs them to be. For example, there are far more representable floats in [0,1/2) than in [1/2,1), and so on, so the division leaves out lots of possible values. A float has 24 bits of precision, a double has 53. Then a person often simply multiplies this [0,1) float back into an integer range, and suddenly you've lost tons of the properties you wanted: uniform distribution, all values equally likely, etc.

            So the float version is nearly always a bad choice.

            Whenever I have to provide some sources of (P)RNG for a library, I always make a few that do the common tasks so people don't roll their own: uniform 32- and 64-bit (if needed), uniform(M) for [0,M), uniform for [A,B), a proper float for [A,B) (since the usual get-[0,1)-then-scale approach loses values), a proper float for [0,1) and for [0,1] (which are different things), etc., and hopefully people looking for ways to get random numbers use these. They save a lot of issues. (The usual top-bits [0,1) conversion is sketched below for reference.)

            [1] https://github.com/v8/v8/blob/main/src/base/utils/random-num...
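
            (For concreteness, a minimal sketch of the common "top 53 bits" conversion: it is uniform and unbiased on a 2^53-point grid, but, as discussed above, it cannot produce most of the representable doubles in [0,1).)

                #include <stdint.h>
                /* Take the high 53 bits of a 64-bit PRNG output and scale by 2^-53.
                   Every output lands on a 2^53-point grid, so many representable
                   doubles in [0,1) are never produced. */
                double u01_from_u64(uint64_t x) {
                    return (x >> 11) * 0x1.0p-53;
                }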

            • camel-cdr 3 years ago

              Yeah, the number of times I've seen `rand() % range` and other non-uniform distributions is staggering (the usual rejection fix is sketched below).

              I've actually given a presentation on the topic, with a chapter on generating uniform floats. [1]

              There I also describe a simplified algorithm for generating uniform floats where every representable floating point value can occur with the appropriate probability.

              I'm still not 100% sure that my implementation is correct, but I did all I could think of to test it [2]

              [1] https://youtube.com/watch?v=VHJUlRiRDCY&t=2774s

              [2] https://github.com/camel-cdr/cauldron/blob/main/cauldron/ran...
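
              (A minimal sketch of that rejection fix for the modulo bias, assuming a hypothetical full-range 32-bit generator next_u32() and range > 0:)

                  #include <stdint.h>
                  uint32_t next_u32(void);  /* hypothetical full-range 32-bit PRNG */
                  /* Unbiased integer in [0, range): reject raw outputs at or above the
                     largest multiple of `range` that fits in 32 bits, then reduce.
                     Avoids the skew toward small values of plain `rand() % range`. */
                  uint32_t uniform_below(uint32_t range) {
                      uint32_t limit = UINT32_MAX - (UINT32_MAX % range);
                      uint32_t x;
                      do {
                          x = next_u32();
                      } while (x >= limit);
                      return x % range;
                  }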

              • SideQuark 3 years ago

                Nice - you are my new best PRNG friend - I've never met someone in real life that has done the work to make correct floats, although I've read work from a few. That code looks really solid.

                >I'm still not 100% sure that my implementation is correct

                It's super hard to do so. I've often thought of using Z3 to try to formally prove that my algorithms are correct, but have not yet done so. I had a version for about a decade then found a tiny bug and that type of stuff keeps me always thinking of how to formally derive things, but the formal proofs get hard. Z3 has a really nice proper IEEE floating point object that lets you do such things slightly less painfully.

  • thanatropism 3 years ago

    Many people actually want low-discrepancy sequences anyway.

    https://en.wikipedia.org/wiki/Low-discrepancy_sequence
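
    A minimal example of the idea is the van der Corput radical inverse, the 1-D building block of Halton sequences (a sketch, not tied to any particular library); successive outputs fill [0,1) far more evenly than independent random draws:

        /* Radical inverse: reverse the base-`base` digits of n and place them
           after the radix point. In base 2, n = 0,1,2,3,4,... yields
           0, 0.5, 0.25, 0.75, 0.125, ... */
        double radical_inverse(unsigned n, unsigned base) {
            double inv = 1.0 / base, f = inv, x = 0.0;
            while (n) {
                x += (double)(n % base) * f;
                n /= base;
                f *= inv;
            }
            return x;
        }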

  • denton-scratch 3 years ago

    > when in reality they should be calling getentropy()

    A new programmer shouldn't be meddling in cryptography, so they probably need neither cryptographically secure pseudo-random numbers nor true random numbers. True random numbers are tricky.

    • nemo1618 3 years ago

      My whole point is that cryptographically-secure should be the default, as there are many scenarios where a PRNG leads to a security vulnerability where a CSPRNG would not. It is precisely new programmers who should be using CSPRNGs for everything, because they are the least well-equipped to know when strong entropy is necessary! We should (almost) never be asking "Do you really need a CSPRNG?" but rather "Do you really need a PRNG?"

      • denton-scratch 3 years ago

        > because they are the least well-equipped to know when strong entropy is necessary!

        Yuh. I'm not sure what "strong entropy" means, in this context; entropy's usually reported as some number of bits of entropy. So perhaps "a lot of entropy" is clearer.

        At any rate, by default a (CS)PRNG doesn't have any entropy that isn't present in its seed. According to some, at least, that entropy is diminished every time you read from the RNG, so it depletes to nothing after a finite number of reads.

        I've finally come to the conclusion that entropy, whatever that means, is orthogonal to RNGs. Instead, RNGs should be classified by their unpredictability. A CSPRNG is one with high unpredictability. And I've given up on trying to build a DIY HWRNG. It was a misbegotten project.

  • whyever 3 years ago

    The linked text is from a Rust library for generating random numbers where predictability is acceptable, i.e. it does not concern itself with cryptographic security.

    The more popular rand library uses ChaCha and getentropy as you described.

spiffytech 3 years ago

It amused me that my college statistics textbook had an appendix of random numbers in the back of the book. Just a long list of numbers generated at random and then immortalized on the same medium as ancient texts like the Dead Sea Scrolls.

I guess that's the best we had for students before the widespread adoption of computers?

rgmerk 3 years ago

What this comes down to is that you just can't arbitrarily choose a random number generator and hope that it meets your needs.

You have to understand what properties you actually care about and choose a (P)RNG that has those properties.

  • dumpsterdiver 3 years ago

    It sounds like what you're saying is that any truly random source would prove vexing to those who would prefer properties that only appear random but are in fact reliably less random.

    Perhaps this is because true randomness doesn't always appear to be random enough. I would argue that this property is what makes it real. Sometimes true randomness might be six dice all showing the face of six.

  • whyever 3 years ago

    Nowadays, you can just choose a CSPRNG and be done with it. There are not many use cases where you might prefer a simpler PRNG.

    • pixelesque 3 years ago

      Are CSPRNGs as fast as general high-performance (and non-secure) PRNGs like MT, PCG or xoshiro256+?

      For many use cases in statistics / sampling / monte carlo simulations, you often need millions/billions of well-distributed random numbers with very low generation overhead.

      Even things like game AIs care about performance with regards to the RNGs they use.

      • espadrine 3 years ago

        Not all of them, but ChaCha8 (which many renowned cryptographers consider secure[0]) is in the same ballpark as the most common ones[1].

        (A few notes on the second link: I wouldn’t recommend xoshiro256+x8 since it is very weak statistically, same for xoshiro256 IMO. Also, disclaimer, I wrote SHISHUA.)

        [0]: https://eprint.iacr.org/2019/1492.pdf

        [1]: https://github.com/espadrine/shishua#comparison

        • pixelesque 3 years ago

          In fairly comprehensive comparisons of PRNGs (MT, MTSMT, basic LCG, PCG and xoshiro256+) for generating random numbers for Monte Carlo sampling in pathtracing and simulation, generating both unit-length float32s and shuffle indices, I found xoshiro256+ as good as the rest from a statistical sample-distribution point of view (in terms of not being biased and converging to a ground truth in many different simulations), and the fastest.

          I don't dispute that at the bit level, using something like PractRand, it has issues and there are better "quality" ones. But in a practical sense, generating 0.0f -> 1.0f float32 numbers and uint32_t indices, I couldn't actually notice any quality issues in very long-running Monte Carlo simulations using billions of random numbers, even though the weaker lower bits should have been causing issues with the integer numbers (although in practice most of the indices were < 16 bits in size, so that might explain it).

          I wasn't aware of SHISHUA though, I'll check it out.

        • camel-cdr 3 years ago

          Wasn't xoshiro256+ mostly weak in the lower bits, and recommended for generating floating point numbers?

          I suppose that this is probably indicative of a more fundamental weakness, but for reference the upper bits should be of way higher quality than the Mersenne Twister's (as that one fails PractRand while the upper bits of xoshiro256+ don't, IIRC).

          • espadrine 3 years ago

            Yes, but it is also weak in terms of seed correlation, which makes xoshiro256+x8 weak in many more bits.

            That said, there are certainly use-cases for it! I just like the idea that we can have our cake and eat it too: something closer to the Pareto frontier, that doesn’t have those caveats, and yet is faster.

      • NohatCoder 3 years ago

        Depends on what you compare, but with modern cryptography instructions you can now generate a few bytes per cycle, so billions of numbers is not an issue.

beyondCritics 3 years ago

Wow, reading this I am suddenly noticing that hiding latency in system code can be much simpler than I thought. Say I have a function

  X f(X u);
on which I want to iterate occasionally to get, in turn, f(u0), f(f(u0)), ... The "smart" way to do this is

  X u = f(u0);  // Initialize once
  ...
  X smart_f() {
    X w = u;
    u = f(w);
    return w;   // This line is not stalled by the previous one,
                // so a superscalar processor might be able to hide the latency of computing f(w)
  }
I doubt any compiler will be able to figure this out, and surely not if f has side effects.

vlmutolo 3 years ago

There’s some interesting discussion regarding xoshiro vs PCG.

https://news.ycombinator.com/item?id=24785572

makeworld 3 years ago

Learn more about PCG here: https://www.pcg-random.org/

denton-scratch 3 years ago

Why's he going on about slide-rules?

  • h2odragon 3 years ago

    good shorthand for "the time before computers, when math was done with meat and dinosaurs roamed the earth"

    • denton-scratch 3 years ago

      Oh, OK. I couldn't see what approximate calculations using logarithms had to do with random integers.

      Also, I learned to use a slide-rule in the sixties; I never touched one again until I inherited an antique a few years ago. Nobody was using slide-rules in the seventies, surely.

  • thanatropism 3 years ago

    Stylistic flair.
