When teaching mathematics, the traditional method of lecturing in front of a blackboard is still hard to improve upon, despite all the advances in modern technology. However, there are some nice things one can do in an electronic medium, such as this blog. Here, I would like to experiment with the ability to animate images, which I think can convey some mathematical concepts in ways that cannot be easily replicated by traditional static text and images. Given that many readers may find these animations annoying, I am placing the rest of the post below the fold.
Suppose we are in the classical (Kolmogorov) framework of probability theory, in which one has a probability space $\Omega$ representing all possible states $\omega$. One can make a distinction between deterministic quantities $x$ that do not depend on the state $\omega$, and random variables (or stochastic variables) $X = X(\omega)$ that do depend (in some measurable fashion) on the state $\omega$. (As discussed in this previous post, it is often helpful to adopt a perspective that suppresses the sample space $\Omega$ as much as possible, but we will not do so for the current discussion.)
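To make the distinction concrete, here is a minimal Python sketch in which the state is a die roll; the names Omega, x and X, and the particular constant returned by x, are of course just illustrative choices.

```python
import random

# Toy finite sample space: the six possible states of a die roll.
Omega = [1, 2, 3, 4, 5, 6]

def x(omega):
    # A deterministic quantity: it ignores the state entirely.
    return 3  # illustrative constant

def X(omega):
    # A random variable: a (measurable) function of the state.
    return omega

omega = random.choice(Omega)  # pick a state "at random"
print("state:", omega, "| deterministic x:", x(omega), "| random variable X:", X(omega))
```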
One can visualise the distinction as follows. If I pick a deterministic integer $x$ between $1$ and $6$, say some specific value fixed in advance, then this fixes the value of $x$ for the rest of the discussion:

[static image of the single chosen value of $x$]
However, if I pick a random integer $X$ uniformly from $\{1,\dots,6\}$ (e.g. by rolling a fair die), one can think of $X$ as a quantity that keeps changing as one flips from one state $\omega$ to the next:

[animation of $X$ flipping through the values $1$ to $6$]
Here, I have “faked” the randomness by looping together a finite number of images, each of which depicts one of the possible values $X$ could take. As such, one may notice that the above image eventually repeats in an endless loop. One could presumably write some more advanced code to render a more random-looking sequence of $X$'s, but the above imperfect rendering should hopefully suffice for the sake of illustration.
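For what it is worth, this kind of “faking” is easy to mimic in Python: one pre-samples a short, fixed list of values and then loops over it forever, which is why the animation eventually repeats. (The frame count and delay below are arbitrary choices.)

```python
import itertools
import random
import time

# Pre-sample a fixed, finite sequence of die rolls; cycling through it
# reproduces the looping behaviour of the animation above.
frames = [random.randint(1, 6) for _ in range(12)]

for value in itertools.islice(itertools.cycle(frames), 36):
    print(f"\rX = {value}", end="", flush=True)
    time.sleep(0.3)  # roughly the frame rate of the animation
print()
```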
Here is a (“faked” rendering of a) random variable that also takes values in $\{1,\dots,6\}$, but is non-uniformly distributed, being more biased towards smaller values than larger values:

[animation of the biased random variable]
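One possible way to generate the frames of such a biased animation in Python (the particular weights below are just an illustrative choice, not the distribution used in the image above):

```python
import random

# A non-uniform distribution on {1,...,6}, biased towards smaller values.
values = [1, 2, 3, 4, 5, 6]
weights = [6, 5, 4, 3, 2, 1]  # illustrative weights favouring small values

frames = random.choices(values, weights=weights, k=12)
print("one loop of the biased animation:", frames)
```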
For continuous random variables, taking values for instance in ${\bf R}^2$ with some distribution (e.g. uniform in a square, multivariate gaussian, etc.), one could display these random variables as a rapidly changing dot wandering over the plane; if one lets some “afterimages” of previous dots linger for some time on the screen, one can begin to see the probability density function emerge in the animation. This is unfortunately beyond my ability to quickly whip up as an image; but if someone with a bit more programming skill is willing to do so, I would be very happy to see the result :).
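In the meantime, here is a rough sketch of the sort of thing I have in mind, using matplotlib with a standard two-dimensional gaussian as the example distribution; the fading trail of recent samples plays the role of the “afterimages”. (The trail length, axis limits, and frame rate are arbitrary choices.)

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng()
trail_length = 30   # how many past samples linger as "afterimages"
points = []         # recent samples, newest last

fig, ax = plt.subplots()
ax.set_xlim(-4, 4)
ax.set_ylim(-4, 4)
ax.set_aspect("equal")
scatter = ax.scatter([], [])

def update(frame):
    # Draw a fresh sample from a standard 2D gaussian and keep a short trail.
    points.append(rng.standard_normal(2))
    del points[:-trail_length]
    # Older samples fade out: the alpha channel grows from faint to opaque.
    colors = np.zeros((len(points), 4))  # RGBA, black dots
    colors[:, 3] = np.linspace(0.05, 1.0, len(points))
    scatter.set_offsets(np.array(points))
    scatter.set_facecolors(colors)
    return (scatter,)

anim = FuncAnimation(fig, update, frames=300, interval=50)
plt.show()
```

With a long enough trail and enough frames, the cloud of lingering dots begins to trace out the underlying density.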
The operation of conditioning to an event corresponds to ignoring all states in the sample space outside of that event. For instance, if one takes the previous random variable and conditions it to a suitable event, one gets the conditioned random variable:

[animation of the conditioned random variable]
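In code, conditioning can be mimicked by simple rejection: keep drawing states and discard those lying outside the event. The event used below (that the value is even) is just a hypothetical example, not the one in the animation above.

```python
import random

values = [1, 2, 3, 4, 5, 6]
weights = [6, 5, 4, 3, 2, 1]  # the same illustrative biased distribution as before

def sample_conditioned(event):
    # Rejection sampling: ignore all states outside the event.
    while True:
        x = random.choices(values, weights=weights, k=1)[0]
        if event(x):
            return x

samples = [sample_conditioned(lambda v: v % 2 == 0) for _ in range(10)]
print("conditioned samples:", samples)
```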
One can use the animation to help illustrate concepts such as independence or correlation. If we revert to the unconditioned random variable $X$

[animation of $X$]

and let $Y$ be an independently sampled uniform random variable from $\{1,\dots,6\}$, one can sum the variables together to create a new random variable $X+Y$, ranging in $\{2,\dots,12\}$:

[synchronised animations of $X$, $Y$, and $X+Y$]
(In principle, the above images should be synchronised, so that the value of $X$ stays the same from line to line at any given point in time. Unfortunately, due to internet lag, caching, and other web artefacts, you may experience an unpleasant delay between the two. Closing the page, clearing your cache and returning to the page may help.)
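As a sanity check on the independent case, one can estimate the distribution of $X+Y$ empirically; assuming both variables are fair die rolls as above, sums near $7$ should appear most often and the extreme values $2$ and $12$ least often.

```python
import random
from collections import Counter

# Empirical distribution of X + Y for two independent fair die rolls.
n = 100_000
counts = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(n))

for s in range(2, 13):
    print(f"{s:2d}: {counts[s] / n:.3f}")
```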
If on the other hand one defines the random variable $Y$ to be a suitable deterministic function of $X$ (rather than sampling it independently), then $Y$ still has the same distribution as $X$ (they are both uniformly distributed on $\{1,\dots,6\}$), but now there is a very strong correlation between $X$ and $Y$, leading to completely different behaviour for $X+Y$:

[synchronised animations of the correlated $X$, $Y$, and $X+Y$]
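To see the contrast numerically, one can compare the empirical distribution of $X+Y$ in the two cases; the particular choice $Y := 7 - X$ below is just one hypothetical example of a function of $X$ that is still uniformly distributed on $\{1,\dots,6\}$.

```python
import random
from collections import Counter

n = 100_000

def independent_sum():
    # X and Y sampled independently, as in the earlier animation.
    return random.randint(1, 6) + random.randint(1, 6)

def correlated_sum():
    # Hypothetical choice: Y is completely determined by X, yet still
    # uniformly distributed on {1,...,6}.
    x = random.randint(1, 6)
    y = 7 - x
    return x + y

print("independent X + Y:", dict(sorted(Counter(independent_sum() for _ in range(n)).items())))
print("correlated  X + Y:", dict(sorted(Counter(correlated_sum() for _ in range(n)).items())))
```

In the correlated case the sum is always $7$, in sharp contrast to the spread-out distribution of the independent case.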