Random Walks: the mathematics in 1 dimension
One might ask the question: what is the probability that you will return to your starting position over the course of an infinite random walk? On a 1-dimensional or 2-dimensional lattice, that probability is 1. What's crazy though is that on a 3D lattice the probability is not 1: it's about 0.3405.
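Not from the comment, but the claim is easy to poke at empirically. A quick Monte Carlo sketch (my own, with arbitrary trial/step counts, and necessarily truncated at a finite number of steps, so it slightly underestimates the true 0.3405):

```python
import random

def returns_to_origin(steps, rng):
    """Walk `steps` steps on the 3D cubic lattice; report whether the origin is revisited."""
    x = y = z = 0
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(steps):
        dx, dy, dz = moves[rng.randrange(6)]
        x += dx; y += dy; z += dz
        if x == y == z == 0:
            return True
    return False

rng = random.Random(42)
trials = 2000
hits = sum(returns_to_origin(2000, rng) for _ in range(trials))
print(hits / trials)  # estimate of the return probability (truncation biases it slightly low)
```

Running the same sketch on a 1D or 2D lattice shows the fraction creeping toward 1 as the step cap grows, which is the contrast the comment is pointing at.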
I really love this proof. It's a great example of using maths to prove a counter-intuitive result. The way to prove it is rather clever, and made me appreciate what mathematicians do a lot more.
Shame I've never seen it shared online. I was actually hoping the submitted article was a proof of this, but you can't have everything in life.
Wow, do you have any proofs for this? I'm especially curious about the generalized n-dimensional case.
There is some more information and references here: http://mathworld.wolfram.com/PolyasRandomWalkConstants.html
If you find it, GP, I'd love to see where the constructed integral comes from, since that's the clever part rather than the evaluation.
Here's a reference I found for one way to do it: http://www.math.nus.edu.sg/~matsr/ProbII/Lec6.pdf (Theorem 2.1). You define the Green's function G(x, y) = \sum_n Pr_x(S_n=y), where x and y are 3-vectors and Pr_x(S_n=y) is the probability that an n-step random walk starting at x ends up at y. If you have an infinite random walk starting at 0, then G(0, 0) is the expected number of times that the walk returns to 0. That's what the mathworld link calls u(3). You can use Fourier inversion to compute G(0, 0) -- the link gives the gnarly details. It's pretty cool.
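To make the Fourier-inversion step concrete, here's a rough numerical sketch (mine, not from the linked notes): on Z^3, G(0,0) = (2*pi)^-3 * Integral over [-pi,pi]^3 of dk / (1 - phi(k)), where phi(k) = (cos k1 + cos k2 + cos k3)/3, and the return probability is 1 - 1/G(0,0). A midpoint grid conveniently never lands on the (integrable) singularity at k = 0:

```python
import math

N = 60                       # grid points per axis; midpoints avoid k = 0 exactly
h = 2 * math.pi / N
cos = [math.cos(-math.pi + (i + 0.5) * h) for i in range(N)]

total = 0.0
for a in cos:
    for b in cos:
        for c in cos:
            phi = (a + b + c) / 3.0
            total += 1.0 / (1.0 - phi)

G = total / N**3             # midpoint rule: cell volume h^3 over (2*pi)^3 is 1/N^3
p_return = 1 - 1 / G
print(G, p_return)           # G near 1.52 (mathworld's u(3) ~ 1.5164), p near 0.34
```

The crude grid is off by a percent or two near the singularity, but it's enough to see the Polya constant 0.3405 emerge from the integral rather than from simulation.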
You're a scholar and a gentleman, merci buckets
The probability of returning to the origin follows a very smooth logarithmic curve for dimensions 3-8 [values copied from the mathworld link]:
image: http://imgur.com/CL8MXej
Thanks a lot. Had to save this for further reference the moment I saw it.
Random walks have also been used to numerically solve differential equations. See e.g. [1].
[1] http://www.jstor.org/stable/3612176 "A Proof of the Random-Walk Method for Solving Laplace's Equation in 2-D"
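The idea behind that paper can be sketched in a few lines (my own toy version, not the paper's code): for Laplace's equation with Dirichlet boundary data, the solution at an interior grid point equals the expected boundary value at the point where a random walk first exits. Here the boundary data is u = x/n on an n x n grid, whose harmonic extension is just x/n, so the walk estimate at the center should be near 0.5:

```python
import random

def walk_to_boundary(ix, iy, n, boundary, rng):
    """From grid point (ix, iy), walk until hitting the edge of an n x n grid,
    then return the boundary value there. u(interior) = E[boundary value at exit]."""
    while 0 < ix < n and 0 < iy < n:
        step = rng.randrange(4)
        if step == 0:   ix += 1
        elif step == 1: ix -= 1
        elif step == 2: iy += 1
        else:           iy -= 1
    return boundary(ix, iy)

n = 20
bc = lambda ix, iy: ix / n          # boundary data u = x; harmonic solution is u(x, y) = x
rng = random.Random(0)
walks = 4000
est = sum(walk_to_boundary(n // 2, n // 2, n, bc, rng) for _ in range(walks)) / walks
print(est)                          # Monte Carlo estimate; true value at the center is 0.5
```

The averaging converges like 1/sqrt(walks), so it's slow compared to a direct solver, but it needs no mesh solve and works point-by-point.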
An overview of why the question of (and need for an explanation of) random walks arises:
http://www.mit.edu/~kardar/teaching/projects/chemotaxis(Andr...
How much of this led to the early stages of life?
Here is a related lecture from MIT: https://youtu.be/56iFMY8QW2k It mathematically shows why it is pretty much impossible to come away "happy" from gambling in a club, even though intuition says otherwise.
What does "happy" mean?
It's not hard to set up a bet that gives you an arbitrarily high chance of gaining money, despite a negative expected value.
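A minimal arithmetic sketch of such a bet (the numbers are my own illustration, not from the comment): win $1 with probability 0.99, lose $200 with probability 0.01. You walk away ahead 99% of the time, yet lose money on average:

```python
p_win, gain, loss = 0.99, 1.0, 200.0

p_gain = p_win                          # chance of ending a single play ahead: 99%
ev = p_win * gain - (1 - p_win) * loss  # expected profit per play
print(p_gain, ev)                       # 0.99, -1.01
```

Pushing p_win toward 1 while scaling up the rare loss makes the chance of gaining arbitrarily high while the expectation stays negative, which is the trick behind martingale-style betting systems.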
Isn't the expected distance (undirected) given by E[|d|], while sqrt(n) is the value of sqrt(E[d^2])?
They each measure the same thing, more or less, but it's easier to work analytically with squares than absolute values. Similarly, we tend to work with the variance rather than with expected absolute deviations, we calculate sums of squares rather than sums of absolute values, etc.
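The two quantities really do differ, and for a 1D walk both can be computed exactly from the binomial distribution (my own sketch): with n steps of +-1, the displacement is d = 2k - n where k ~ Binomial(n, 1/2), so E[d^2] = n exactly, while E[|d|] is approximately sqrt(2n/pi), about 80% of sqrt(n):

```python
import math

def mean_abs_and_rms(n):
    """Exact E[|d|] and sqrt(E[d^2]) for an n-step +-1 random walk."""
    e_abs = sum(math.comb(n, k) * abs(2 * k - n) for k in range(n + 1)) / 2**n
    e_sq  = sum(math.comb(n, k) * (2 * k - n) ** 2 for k in range(n + 1)) / 2**n
    return e_abs, math.sqrt(e_sq)

n = 100
e_abs, rms = mean_abs_and_rms(n)
print(e_abs, rms, math.sqrt(2 * n / math.pi))
# E[|d|] is close to sqrt(2n/pi) ~ 7.98, while the RMS is exactly sqrt(n) = 10
```

So sqrt(n) is the RMS distance, not the expected distance; they grow at the same sqrt(n) rate but with different constants, which is why the two are often used interchangeably in loose statements.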
More fundamentally, root-mean-square is the norm induced by the expectation inner product in the space of random variables. Norms generalize the geometric notion of length, so intuitively RMS is an appropriate measure of the "stochastic distance" from the origin of a random walk after a set number of steps. RMS can likewise be used as an analogue for geometric length for other purposes in a stochastic context, e.g., in calculating the similarity dimension of fractal stochastic processes like Brownian motion.