In the previous page you saw that once we have an eigenvector, every vector on the same line through the origin is also an eigenvector with the same eigenvalue:
You also saw the different arrangements of eigen-axes available for a matrix. In this page, you will see how we use those eigen-axes to model what happens to every vector in ℝ².
Eigen-axes are incredibly useful for modeling what matrices do to vectors. To see this clearly, we'll start with the very nice case where the eigen-axes are the x-axis and the y-axis.
Scaling the coordinates by the eigenvalues gives a very natural way to model what the matrix is doing.
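As a small numerical sketch of this idea (the examples' actual eigenvalues aren't reproduced here, so the values 3 and 0.5 below are assumptions for illustration), here is a diagonal matrix scaling each coordinate by the eigenvalue on its axis:

```python
import numpy as np

# A hypothetical matrix whose eigen-axes are the x- and y-axes.
# Its eigenvalues, 3 and 0.5, sit on the diagonal.
A = np.array([[3.0, 0.0],
              [0.0, 0.5]])

v = np.array([2.0, 4.0])  # x-coordinate 2, y-coordinate 4

# Applying A scales the x-coordinate by 3 and the y-coordinate by 0.5.
print(A @ v)  # [6. 2.]
```

No matrix-vector arithmetic is really needed here: each coordinate is just multiplied by its axis's eigenvalue.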
Example 1: and
Example 2: and
Example 3: Matrix powers, and
If and for a matrix , what does do to the vector ?
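The same model extends to matrix powers: applying the matrix n times multiplies each eigen-coordinate by its eigenvalue n times. A quick sketch, again with the assumed eigenvalues 3 and 0.5:

```python
import numpy as np

# Assumed setup: eigen-axes are the x- and y-axes, eigenvalues 3 and 0.5.
A = np.array([[3.0, 0.0],
              [0.0, 0.5]])
v = np.array([2.0, 4.0])

# Applying A five times scales the coordinates by 3^5 and 0.5^5:
# (3^5 * 2, 0.5^5 * 4) = (486, 0.125).
w = np.linalg.matrix_power(A, 5) @ v
print(w)  # [486.      0.125]
```

Notice how one coordinate explodes while the other shrinks toward 0, a pattern that returns in the Fibonacci example below.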
Even when the axes are different from the standard x and y axes, we can still use them to model what a matrix does. We just treat one eigen-axis as the new x-axis and the other as the new y-axis.
Example 1: and
The matrix in this question has the same eigenvalues as before, and , but the eigen-axes are slightly different. They are the spans of and . Using the diagram below, find for the vector .
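This question's actual eigen-axes and eigenvalues aren't reproduced here, so the sketch below assumes eigen-axes spanned by (1, 1) and (1, -1) with eigenvalues 3 and 0.5. The point is the method: write the vector in eigen-coordinates, scale each coordinate by its eigenvalue, and recombine.

```python
import numpy as np

# Assumed eigen-axes: spans of (1, 1) and (1, -1), as the matrix P's columns.
# Assumed eigenvalues: 3 and 0.5, on the diagonal of D.
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])
D = np.diag([3.0, 0.5])
A = P @ D @ np.linalg.inv(P)  # a matrix with exactly those eigen-axes

v = np.array([3.0, 1.0])      # v = 2*(1, 1) + 1*(1, -1), so eigen-coords (2, 1)

# Model: scale each eigen-coordinate by its eigenvalue, then recombine.
w_modeled = (2 * 3.0) * P[:, 0] + (1 * 0.5) * P[:, 1]
print(A @ v)       # [6.5 5.5]
print(w_modeled)   # [6.5 5.5] -- same answer, no matrix multiplication needed
```

The agreement of the two printed vectors is the whole story: the eigen-axes let us replace a matrix multiplication with two ordinary scalings.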
We are told that the ratios of consecutive Fibonacci numbers approach the golden ratio.
Let's examine this claim, representing these ratios as points whose coordinates are adjacent Fibonacci numbers. Our graphs will have the x and y axes switched so that a vector of the form lies on the line with slope .
There is an easy way to generate these Fibonacci pairs, using a matrix with just 0's and 1's. The top row of produces the next Fibonacci number by adding the entries of the previous pair:
The bottom row of carries the old top entry down to the bottom.
And there we go! The Fibonacci pairs are generated by powers of this matrix :
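The description above pins the matrix down: the top row adds the pair, and the bottom row copies the top entry down. A quick sketch (calling the matrix F is our label for it):

```python
import numpy as np

# Top row [1 1]: add the pair. Bottom row [1 0]: copy the top entry down.
F = np.array([[1, 1],
              [1, 0]])

pair = np.array([1, 1])   # the pair (F_2, F_1)
for _ in range(5):
    pair = F @ pair       # each application produces the next pair

print(pair)  # [13  8], the pair (F_7, F_6)
```

Each multiplication by F advances the sequence one step, so powers of F generate the whole list of pairs.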
Now let's use the eigen-axes of . It may not surprise you that one of these eigen-axes is a line whose slope is the golden ratio φ. As a quick reminder, φ = (1 + √5)/2 ≈ 1.618.
The other eigen-axis is a line whose slope is the conjugate of the golden ratio, ψ = (1 − √5)/2 ≈ −0.618.
Now here is something really neat about this setup. Not only is the golden ratio φ the slope of the first eigen-axis, but it is also its eigenvalue. Similarly, ψ is the eigenvalue for the second eigen-axis as well as its slope.
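We can check both facts numerically, assuming the 0-and-1 matrix described above. Remember that our axes are switched, so an eigenvector's "slope" here is its first entry over its second:

```python
import numpy as np

F = np.array([[1.0, 1.0],
              [1.0, 0.0]])
evals, evecs = np.linalg.eig(F)

phi = (1 + 5**0.5) / 2   # golden ratio, about 1.618
psi = (1 - 5**0.5) / 2   # its conjugate, about -0.618

# The eigenvalues are exactly the golden ratio and its conjugate.
print(np.sort(evals))    # [-0.618...  1.618...]

# And each eigen-axis's slope (top entry over bottom entry,
# because our axes are switched) equals its eigenvalue.
for lam, v in zip(evals, evecs.T):
    print(lam, v[0] / v[1])
</```

Both printed pairs match, which is the "really neat" coincidence: slope and eigenvalue agree on each eigen-axis.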
In conclusion, eigen-axes can be used to see the long-term behavior of these Fibonacci pairs. Because the eigenvalues are approximately 1.618 and −0.618, the first component grows by about 62% and the second shrinks by about 38% every time the matrix is applied, leading to rapid exponential growth along the first eigen-axis and rapid exponential decay along the second.
So it's easy to see why these ratios of Fibonacci numbers quickly approach the golden ratio. What I think is far more illuminating is that there was nothing magical about the Fibonacci sequence in particular! This same behavior occurs when you start with any two distinct integers. You can see this if you like by going back to the figure above, zooming back in to the original blue point, and dragging it around. The only thing connecting Fibonacci numbers to the golden ratio is the matrix , which encodes the recurrence relation (each new term is the sum of the previous two).
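A quick sketch of that claim, starting from an arbitrary non-Fibonacci pair (the starting values 7 and 4 are just an arbitrary choice), again assuming the 0-and-1 matrix described above:

```python
import numpy as np

F = np.array([[1.0, 1.0],
              [1.0, 0.0]])

pair = np.array([7.0, 4.0])   # any two distinct integers, not Fibonacci numbers

for _ in range(20):
    pair = F @ pair           # run the recurrence 20 steps

ratio = pair[0] / pair[1]
phi = (1 + 5**0.5) / 2
print(ratio, phi)             # both about 1.6180339...
```

The decaying eigen-coordinate dies off so fast that after a couple dozen steps only the golden-ratio eigen-axis is visible in the ratio.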
So far, you have only seen examples of eigen-axes that are perpendicular. I did this to ease the transition from the regular x and y axes to these new ones. But eigen-axes don't have to be perpendicular. We'll end this mostly-visual exploration of eigen-axis usage with a non-perpendicular example.
You might sense another question coming that asks you to find the eigen-coordinates for a given vector, but you're about to see how to compute them with a nice, easy trick. Instead, this is a great time to point out that when the eigen-axes aren't perpendicular, you get some unexpected behavior from the matrix. The eigenvalues are only 2 and -1, so it doesn't seem like the output should get much bigger than the input.
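And yet it can. Here is a sketch of the effect using the eigenvalues 2 and -1 from above, with assumed, nearly parallel eigen-axes (the page's actual eigen-axes aren't reproduced here). A vector caught between two nearly parallel eigen-axes needs huge eigen-coordinates of opposite sign, and those large pieces don't fully cancel in the output:

```python
import numpy as np

# Assumed nearly parallel eigen-axes: spans of (1, 0) and (1, 0.1),
# with the eigenvalues 2 and -1 mentioned above.
P = np.array([[1.0, 1.0],
              [0.0, 0.1]])
A = P @ np.diag([2.0, -1.0]) @ np.linalg.inv(P)

v = np.array([0.0, 1.0])   # a unit vector sitting between the eigen-axes;
                           # its eigen-coordinates are -10 and 10
w = A @ v
print(w)                   # [-30.  -1.]
print(np.linalg.norm(w))   # about 30 -- far larger than |2| or |-1|
```

With perpendicular eigen-axes this can't happen: the output's length is capped by the largest eigenvalue magnitude times the input's length.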
This means one of two things:
In the case of no eigenvectors, we can extend our scalars to the complex numbers and model the stretch and rotation that the matrix applies to every vector.
In the case of only one eigen-axis, we can extend our single axis of eigenvectors to a more general notion of eigenvector. This extension has a very nice parallel to repeated roots in differential equations.
Both extension methods are very closely related to how they will be used in Differential Equations, so they will be covered there. See complex conjugate roots and repeated roots for first order linear systems.
In the meantime, you will see a short page on how to compute eigen-coordinates and then the fundamental connection of eigenvectors to differential equations.