You know the Fibonacci numbers:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …
Each number is the sum of the previous two. Let’s say the zeroth Fibonacci number is zero, so:

fib(0) = 0, fib(1) = 1, and fib(n) = fib(n-2) + fib(n-1) for n ≥ 2.
And let’s say you want to write some code to compute this function. How would you do it? Perhaps something like this? (Python code)
def fib_rec(n):
    assert n >= 0
    if n < 2: return n
    return fib_rec(n-2) + fib_rec(n-1)
This is a pretty popular algorithm, which can be found in dozens of places online. It is also a strong candidate for the title of Worst Algorithm in the World. It’s not just bad in the way that Bubble sort is a bad sorting algorithm; it’s bad in the way that Bogosort is a bad sorting algorithm.
Why so bad? Well, mainly it is quite astonishingly slow. Computing fib_rec(40) takes about a minute and a half on my computer. To see why it’s so slow, let’s calculate how many calls are made to the fib_rec routine. If we write c(n) for the number of calls made when calculating fib_rec(n), then:

c(0) = c(1) = 1, and c(n) = c(n-2) + c(n-1) + 1 for n ≥ 2.

These are the Leonardo numbers, and c(n) = 2·fib(n+1) − 1. In other words, computing fib(n) using fib_rec takes time O(fib(n)), which grows exponentially with n.
So computing fib_rec(40) involves c(40) = 331,160,281 calls to the fib_rec routine, which is pretty crazy considering it’s only called with 40 different arguments. An obvious idea for improvement is to check whether it’s being called with an argument that we’ve seen before, and just return the result we got last time.
cached_results = {}
def fib_improved(n):
assert n >= 0
if n < 2: return n
if n not in cached_results:
cached_results[n] = fib_improved(n-2) + fib_improved(n-1)
return cached_results[n]
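(As an aside, modern Python can do this kind of caching for us: functools.lru_cache memoizes a function automatically. A sketch of the same idea; note it hits exactly the same recursion-depth problem as fib_improved does below.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result, as the dict version does
def fib_cached(n):
    assert n >= 0
    if n < 2: return n
    return fib_cached(n-2) + fib_cached(n-1)

print(fib_cached(40))  # 102334155
```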
That’s a huge improvement, and we can compute fib_improved(40) in a fraction of a millisecond, which is much better than a minute and a half. What about larger values?
>>> fib_improved(1000)
43466557686937456435688527675040625802564660517371780402481729089536555417949051890403879840079255169295922593080322634775209689623239873322471161642996440906533187938298969649928516003704476137795166849228875L
That looks good, and it’s still apparently instant. How about 10,000?
>>> fib_improved(10000)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in fib_improved
  File "<stdin>", line 6, in fib_improved
  [...]
  File "<stdin>", line 6, in fib_improved
RuntimeError: maximum recursion depth exceeded
Oh dear! We’ve blown the stack. You could blame Python here for having a hard (if configurable) limit on stack size, but that’s not the real point. The problem here is that this algorithm creates n stack frames when you call fib_improved(n), so it uses at least O(n) space.
(Actually it uses O(n²) space; later we’ll get into why that is. Thanks to CoderKyd on reddit for pointing that out.)
It’s easy enough to write a version that only uses constant space — well, not really: but it only uses twice as much space as we need for the end result, so it’s within a small constant factor of optimal — as long as we’re willing to forgo recursion:
def fib_iter(n):
assert n >= 0
i, fib_i, fib_i_minus_one = 0, 0, 1
while i < n:
i = i + 1
fib_i, fib_i_minus_one = fib_i_minus_one + fib_i, fib_i
return fib_i
This is much better. We can compute fib_iter(100000) in less than a second (on my computer, again), and fib_iter(1000000) in about 80 seconds. Asymptotically, this algorithm takes O(n²) time to compute fib(n).
(Maybe you think it should be O(n) time, because the loop runs for n iterations. But the numbers we’re adding are getting bigger exponentially, and you can’t add two arbitrarily-large numbers in constant time. If you think that sounds a bit theoretical, you might enjoy the post where Peteris Krumins measures the running time of fib_iter, and explains why it’s quadratic rather than linear.)
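You can see the quadratic behaviour coming just from the size of the numbers: fib(n) has about 0.209·n decimal digits (since log₁₀ φ ≈ 0.209), so the additions near the end of the loop handle numbers whose length is proportional to n, and summing linearly growing work over n iterations gives quadratic total work. A quick sketch:

```python
def fib_iter(n):
    i, fib_i, fib_i_minus_one = 0, 0, 1
    while i < n:
        i = i + 1
        fib_i, fib_i_minus_one = fib_i_minus_one + fib_i, fib_i
    return fib_i

# Digit counts grow linearly with n, roughly 209 digits per 1000 steps:
for n in (1000, 2000, 4000):
    print(n, len(str(fib_iter(n))))
# 1000 209
# 2000 418
# 4000 836
```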
That’s not the end of the story. Since the Fibonacci numbers are defined by a linear recurrence, we can express the recurrence as a matrix, and it’s easy to verify that

    | 0  1 |^n   | fib(n-1)  fib(n)   |
    | 1  1 |   = | fib(n)    fib(n+1) |

Since we can get from the nth power to the 2nth power by squaring, this suggests we can compute fib(n) using just about log2(n) iterations:
def bits(n):
"""Represent an integer as an array of binary digits.
"""
bits = []
while n > 0:
n, bit = divmod(n, 2)
bits.append(bit)
bits.reverse()
return bits
def fib_mat(n):
assert n >= 0
a, b, c = 1, 0, 1
for bit in bits(n):
a, b, c = a*a + b*b, a*b + b*c, b*b + c*c
if bit: a, b, c = b, c, b+c
return b
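To see why this works: the triple (a, b, c) stands for the symmetric matrix [[a, b], [b, c]]. The first line of the loop squares that matrix, and the second multiplies it by the recurrence matrix, so after the loop b holds fib(n). A standalone sanity check against a simple iteration (the definitions are repeated, and fib_simple is my name for the reference, so the sketch runs on its own):

```python
def bits(n):
    """Binary digits of n, most significant first."""
    bs = []
    while n > 0:
        n, bit = divmod(n, 2)
        bs.append(bit)
    bs.reverse()
    return bs

def fib_mat(n):
    a, b, c = 1, 0, 1                              # the identity matrix
    for bit in bits(n):
        a, b, c = a*a + b*b, a*b + b*c, b*b + c*c  # square
        if bit: a, b, c = b, c, b + c              # multiply by the recurrence matrix
    return b

def fib_simple(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert bits(6) == [1, 1, 0]  # most significant bit first
assert all(fib_mat(n) == fib_simple(n) for n in range(500))
```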
This is certainly much faster in practice. We can compute fib_mat(1000000) in less than a second and a half. The asymptotic complexity depends on the multiplication algorithm used. If it’s conventional long multiplication, then multiplying two k-bit numbers takes time O(k²), in which case this algorithm is actually still quadratic! I believe Python uses the Karatsuba algorithm, which makes it about O(n^1.6) in Python.
While we’re writing code, let’s improve the constant factor. Each step of fib_mat uses six multiplications, but we can halve that just by observing that c can always be computed as a+b and rearranging things a little:
def fib_fast(n):
assert n >= 0
a, b, c = 1, 0, 1
for bit in bits(n):
if bit: a, b = (a+c)*b, b*b + c*c
else: a, b = a*a + b*b, (a+c)*b
c = a + b
return b
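The rearrangement relies on the invariant c == a + b, which holds at the start (1 = 1 + 0) and is restored by the last line of each iteration. A quick check that the cheaper version still computes the same values (another standalone sketch; fib_simple is my reference, not from the original):

```python
def bits(n):
    """Binary digits of n, most significant first."""
    bs = []
    while n > 0:
        n, bit = divmod(n, 2)
        bs.append(bit)
    bs.reverse()
    return bs

def fib_fast(n):
    a, b, c = 1, 0, 1
    for bit in bits(n):
        if bit: a, b = (a+c)*b, b*b + c*c
        else:   a, b = a*a + b*b, (a+c)*b
        c = a + b        # restore the invariant c == a + b
    return b

def fib_simple(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Three big multiplications per bit instead of six, same answers:
assert all(fib_fast(n) == fib_simple(n) for n in range(500))
```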
And this does indeed run about twice as fast. Further improvement is possible, but I think the point has been made so let’s leave it there. If you want to see a super-efficient version, have a look at the algorithm in GMP.
Another good article on the subject: David Eppstein’s lecture notes, which cover similar ground to this.
If you’re thinking “This is silly! You can just compute Fibonacci numbers using the closed-form expression.” then you’ll probably enjoy the follow-up article on computing Fibonacci numbers using Binet’s formula.
Are you wondering what this has to do with mazes? We’ll get to that eventually, I hope…