The matrix equation in the last post can actually be whittled down a bit further to produce another equation that, in some ways, is easier to work with. The result is as follows:
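Namely (this is the closed form usually called Binet's formula):

```latex
F_k = \frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^k - \left(\frac{1-\sqrt{5}}{2}\right)^k\right]
```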
Fk is, of course, the kth Fibonacci number. Now, before I go on, I want to point out that I stumbled upon these two equations and their proofs in Gilbert Strang’s excellent Linear Algebra and Its Applications. All credit goes to him for coming up with this, and I recommend his book for an even more in-depth explanation.
Anyway, back to the equation, which I found interesting for a few reasons:
- This equation allows you to easily find the kth term using only a pocket calculator. The matrix equation from the last post requires something that can deal with matrices.
- This also makes it easier to use in programming applications. There’s no need to write functions or import libraries to deal with matrices. It’s also probably faster than a naive recursive function for computing the kth Fibonacci number.
- As Strang points out, that equation, amazingly, produces an integer, despite all the fractions and square roots.
- We can simplify that equation further for a very good approximation (so good that it’s not really an approximation).
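As a sketch of the programming point, the closed form drops straight into a few lines of Python (the function name is mine; the "exact integer" property survives floating point only up to roughly k = 70):

```python
import math

def fib(k):
    # Binet's formula: (phi^k - psi^k) / sqrt(5)
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2   # lambda_1, the golden ratio
    psi = (1 - sqrt5) / 2   # lambda_2
    # The result is mathematically an integer; round() absorbs the
    # small floating-point error (reliable up to roughly k = 70).
    return round((phi**k - psi**k) / sqrt5)

print([fib(k) for k in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```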
To see how we get that equation from the equation in the last post, let’s begin with the way Mr. Strang has it written in his book (it’s the same as mine, but flipped upside down).
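In that form, the recurrence is:

```latex
\begin{bmatrix} F_{k+2} \\ F_{k+1} \end{bmatrix}
=
\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} F_{k+1} \\ F_{k} \end{bmatrix}
```

The top row is the Fibonacci rule F_{k+2} = F_{k+1} + F_k, and the bottom row just copies F_{k+1} down.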
Where F0=0, F1=1, F2=1, etc. Let’s now make some substitutions:
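Specifically, let:

```latex
u_k = \begin{bmatrix} F_{k+1} \\ F_k \end{bmatrix},
\qquad
A = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
```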
We now have:
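Each step forward is one multiplication by A, so k steps from u0 give:

```latex
u_{k+1} = A u_k
\quad\Longrightarrow\quad
u_k = A^k u_0
```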
This hasn’t changed the content of the equation at all. We’ve just substituted in new symbols to represent the different terms.
The next step is to diagonalize the matrix A. Remember that diagonalizing A produces A = SΛS-1 where S contains A’s eigenvectors and Λ contains A’s eigenvalues. Substituting A = SΛS-1 into the previous equation yields:
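Each of the k copies of A contributes one copy of SΛS-1:

```latex
u_k = A^k u_0 = \left(S \Lambda S^{-1}\right)\left(S \Lambda S^{-1}\right) \cdots \left(S \Lambda S^{-1}\right) u_0
```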
Notice how each interior S-1S pair cancels, leaving only the first S and the last S-1 and giving the final form: SΛkS-1u0. This is important, because Λ is a diagonal matrix, making Λk very simple:
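Raising a diagonal matrix to a power just raises each diagonal entry to that power:

```latex
\Lambda^k
= \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}^k
= \begin{bmatrix} \lambda_1^k & 0 \\ 0 & \lambda_2^k \end{bmatrix}
```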
λ1 and λ2 are, of course, the eigenvalues of A. Let’s now make one last substitution and put all this diagonalization stuff together. First we define c as
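In other words, c is u0 rewritten in the eigenvector basis:

```latex
c = S^{-1} u_0
```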
Then we substitute it into the equation:
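Carrying out the substitution and writing each matrix explicitly:

```latex
u_k = S \Lambda^k S^{-1} u_0
    = S \Lambda^k c
    = \begin{bmatrix} x_1 & x_2 \end{bmatrix}
      \begin{bmatrix} \lambda_1^k & 0 \\ 0 & \lambda_2^k \end{bmatrix}
      \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
    = c_1 \lambda_1^k x_1 + c_2 \lambda_2^k x_2
```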
That might need some explaining. The first expression is simply the equation with c substituted in. The first matrix is S, written to show that it contains x1 and x2, the eigenvectors of A, as its two columns. The middle matrix is Λk and the last is the vector c. Finally, the matrices are multiplied out, yielding the final form: c1λ1kx1 + c2λ2kx2.
The final steps are simply to compute c and A’s eigenvalues and eigenvectors. I’ll spare you the tedious algebra and simply tell you that:
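In particular:

```latex
\lambda_1 = \frac{1+\sqrt{5}}{2}, \quad
\lambda_2 = \frac{1-\sqrt{5}}{2}, \quad
x_1 = \begin{bmatrix} \lambda_1 \\ 1 \end{bmatrix}, \quad
x_2 = \begin{bmatrix} \lambda_2 \\ 1 \end{bmatrix}, \quad
c = S^{-1} u_0 = \frac{1}{\sqrt{5}} \begin{bmatrix} 1 \\ -1 \end{bmatrix}
```

(The last one uses u0 = [F1, F0]T = [1, 0]T and the fact that λ1 − λ2 = √5.)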
The eigenvalues can be computed the standard way, by solving det(A−λI) = λ2 − λ − 1 = 0.
All the variables are then plugged into the equation for uk.
We want Fk, so we multiply, factor, and take the bottom row, giving the equation we want:
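Explicitly:

```latex
F_k = \frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^k - \left(\frac{1-\sqrt{5}}{2}\right)^k\right]
```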
That’s the full equation, but now notice that the magnitude of the second term is always less than 1/2 (since |λ2| < 1, the term never exceeds 1/√5 ≈ 0.447). This means we can simply drop it, yielding:
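That leaves:

```latex
F_k \approx \frac{1}{\sqrt{5}}\left(\frac{1+\sqrt{5}}{2}\right)^k
```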
In fact, since the second term is always less than 1/2 and the full equation always gives us an integer, we can take this a step further: Rounding the approximation to the nearest integer will always give you the exact value for Fk. Cool.
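A quick sketch to check that claim in Python (the iterative reference function is mine, used only as ground truth):

```python
import math

def fib_exact(k):
    # Plain iterative Fibonacci as a ground-truth reference.
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2
# Rounding phi^k / sqrt(5) to the nearest integer recovers Fk exactly.
for k in range(30):
    assert round(phi**k / math.sqrt(5)) == fib_exact(k)
print("approximation matches for k = 0..29")
```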