Now that you know what an eigenbasis is, you may be wondering how to find it. The process is actually not too difficult: it entails solving an equation with a determinant. Here it is. First, take a look at the matrix with entries 2, 1, 0, and 3, and at how it acts on these points around the square. This should make the entire transformation clear. As you've seen before, the two horizontal vectors get stretched by 2, and the diagonals get stretched by 3. And for the other points, this happens. Now compare this to another matrix, one that simply stretches the entire plane by a factor of 3 in every direction. This matrix has entries 3, 0, 0, and 3, and it's really 3 times the identity matrix. Notice one thing. These two transformations are not the same transformation, but they do coincide at many points. In other words, they act the exact same way on infinitely many points, all the points on this line. So to be more specific, on this diagonal, the two transformations do the exact same thing, so they match at infinitely many points. Now that is strange. See, two different transformations should only match at one point, the point 0, 0. When they match at infinitely many points, something singular is happening. And what's happening? Well, let's look at their difference. If these two transformations match at infinitely many points, that means their difference is 0 at infinitely many points. So if you apply the difference of these two matrices to any vector on this diagonal, you get the vector 0, 0. In other words, this matrix times the vector x, y is 0, 0 for infinitely many vectors. Now that is the trait of a singular transformation. Recall that a non-singular transformation has a unique solution to the equation matrix times vector equals 0, 0, and that's the vector 0, 0. So if this equation has infinitely many solutions, it means your matrix is singular. And you can verify that this is indeed a singular matrix, as its determinant is 0.
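If you'd like to check this numerically, here is a small sketch using NumPy (assuming it is installed) that builds the difference between the matrix and 3 times the identity, and confirms that the difference is singular:

```python
import numpy as np

# The matrix from the example, and pure scaling by 3 (3 times the identity).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = 3.0 * np.eye(2)

# Their difference should be singular: its determinant is 0.
D = A - S
print(abs(np.linalg.det(D)))  # 0.0

# Any vector on the diagonal line y = x is sent to (0, 0) by the difference.
v = np.array([5.0, 5.0])
print(D @ v)  # [0. 0.]
```

Sending a whole line of vectors to 0, 0 is exactly what a singular matrix does, which is the observation the rest of the lesson builds on.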
Now let's do something similar, but for another transformation, on the right. Our transformation acts exactly as it did before. And now let's compare it to the transformation that stretches the plane by 2 in every direction. These two are not the same, but they match on this entire line. So in other words, the matrix on the left times x, y is equal to the matrix on the right times x, y for any vector x, y on this line, and that is infinitely many points. So we can do the same procedure: take the difference, and that matrix times a vector equals 0, 0 for infinitely many vectors. That means that the matrix 0, 1, 0, 1, the difference between our matrix and 2 times the identity, is a singular matrix. And you can check that it is indeed singular because its determinant is 0. So what is special about the eigenvalues? What happened for the eigenvalue 2 and the eigenvalue 3? Let's think about it in general. If lambda is an eigenvalue, then the transformation given by our matrix and the transformation given by scaling the entire plane by a factor of lambda are equal for infinitely many vectors x, y. That means their difference times a vector is equal to 0, 0 for infinitely many vectors. It's an equation with infinitely many solutions. Therefore, the matrix 2 minus lambda, 1, 0, 3 minus lambda has to be a singular matrix, so its determinant is 0. Its determinant, when we expand it, gives this equation, and that's called the characteristic polynomial. So basically, to find the eigenvalues lambda, all we need to do is look at the characteristic polynomial and find its roots. The places where the characteristic polynomial is 0 are the eigenvalues. So in this case, they're going to be 2 and 3. Now that you have the eigenvalues, let's try to find the eigenvectors. Recall that an eigenvector is a vector that satisfies the equation matrix times vector equals eigenvalue times vector.
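The root-finding step can be sketched numerically. In NumPy (assuming it is installed), you can find the roots of the characteristic polynomial lambda squared minus 5 lambda plus 6 directly, and compare them with the eigenvalues NumPy computes from the matrix itself:

```python
import numpy as np

# Characteristic polynomial of [[2, 1], [0, 3]]:
# det([[2 - lam, 1], [0, 3 - lam]]) = (2 - lam)(3 - lam) = lam^2 - 5 lam + 6.
coeffs = [1, -5, 6]      # coefficients of lam^2 - 5 lam + 6
print(np.roots(coeffs))  # 3 and 2 (order may vary)

# NumPy computes the same eigenvalues directly from the matrix:
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(np.linalg.eigvals(A))  # 2 and 3 (order may vary)
```

Both routes give the same pair of eigenvalues, 2 and 3, matching the hand calculation.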
So if we expand this, we get these equations over here, and the solution for them is x equals 1, y equals 0, or any multiple of it. So here is one of the eigenvectors, the one corresponding to the eigenvalue 2. We do the same thing with 3, solve those equations, and get 1, 1. So the eigenvector 1, 1 corresponds to the eigenvalue 3. Now you're ready for a quiz: find the eigenvalues and eigenvectors of this matrix. The solution is that the eigenvalues are 11 and 1, and the eigenvectors are 2, 1, corresponding to the eigenvalue 11, and minus 1, 2, corresponding to the eigenvalue 1. Why? Well, the characteristic polynomial is the determinant of the matrix with entries 9 minus lambda, 4, 4, and 3 minus lambda. That expands as lambda squared minus 12 lambda plus 11, which factors as lambda minus 11 times lambda minus 1. Therefore, the eigenvalues are lambda equals 11 and lambda equals 1. I'll leave it as an exercise for you to solve the equations for the eigenvectors and verify that they are 2, 1, and minus 1, 2, or some multiple of them, because all that matters is the direction. The process of finding eigenvectors for a 3 by 3 matrix is very similar. Consider A equals 2, 1, minus 1, 1, 0, minus 3, minus 1, minus 3, 0. The characteristic polynomial is the determinant of A minus lambda times the identity matrix of whatever size you need. In this case, lambda times I will be a 3 by 3 matrix that looks like this. The characteristic polynomial will be the determinant of the difference of these two matrices. Using the diagonal method to calculate the determinant of a 3 by 3 matrix, you can construct the pieces of the characteristic polynomial. Then combine terms to get the polynomial negative lambda cubed plus 2 lambda squared plus 11 lambda minus 12 equals 0. Factoring this polynomial, which I won't show in detail here, gives you these three factors.
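You can verify the quiz answer numerically. This sketch (assuming NumPy is installed) checks both that the eigenvalues of the quiz matrix are 11 and 1, and that multiplying the matrix by each proposed eigenvector just rescales it by the matching eigenvalue:

```python
import numpy as np

# The quiz matrix with entries 9, 4, 4, 3.
A = np.array([[9.0, 4.0],
              [4.0, 3.0]])

# Roots of lam^2 - 12 lam + 11 = (lam - 11)(lam - 1):
print(np.linalg.eigvals(A))  # 11 and 1 (order may vary)

# Check A v = lam v for each eigenpair:
print(A @ np.array([2.0, 1.0]))   # [22. 11.], i.e. 11 times (2, 1)
print(A @ np.array([-1.0, 2.0]))  # [-1.  2.], i.e. 1 times (-1, 2)
```

Since any multiple of an eigenvector is also an eigenvector, (4, 2) or (-2, 4) would pass the same check.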
And now it's easy to find the zeros of this equation, which occur when lambda is minus 3, 1, or 4. So that gives you the three eigenvalues of this matrix. Now that you've found the three eigenvalues for matrix A, let's find the eigenvector associated with each eigenvalue. Let's begin with the last eigenvalue, 4. Remember that to find the eigenvector, you need to solve the equation Av equals lambda v. That means you need to solve this equation, where x1, x2, and x3 are the three components of the eigenvector. Multiplying the 4 by the vector gives the new vector 4x1, 4x2, 4x3. And carrying out the matrix-vector product on the left gives you this new vector. Now all you need to do is set these two vectors equal to one another and solve for x1, x2, and x3. I'll rewrite the system of equations up here. Now subtract the terms from the right-hand side, leaving just zeros. This gives the final system of equations you'll need to solve, which I'll label rows 1, 2, and 3. You can use these equations to solve for x1, x2, and x3. Adding rows 2 and 3, you get negative 7x2 minus 7x3 equals 0, which simplifies to x2 being equal to negative x3. Adding 3 times row 1 to row 3, you get negative 7x1 minus 7x3 equals 0, which simplifies to x1 also being equal to negative x3. This is actually as far as you can get solving this system, and the result is that there are infinitely many solutions. Any vector that satisfies x1 equals k, x2 equals k, x3 equals minus k will be a solution for any value of k. For example, 1, 1, minus 1 works, but 2, 2, minus 2 would work as well, and this makes sense: there is always an infinite number of potential eigenvectors, and they all lie along the same line. In this case, though, let's keep it simple and say the eigenvector is 1, 1, minus 1. And that's it, you've found the first eigenvector. If you want to find the other two eigenvectors, you can just repeat the process with the other two eigenvalues.
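Here is a quick numerical check of the 3 by 3 example (assuming NumPy is installed): the eigenvalues come out to minus 3, 1, and 4, and applying A to the hand-computed eigenvector 1, 1, minus 1 simply scales it by 4:

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [ 1.0,  0.0, -3.0],
              [-1.0, -3.0,  0.0]])

# The three roots of -lam^3 + 2 lam^2 + 11 lam - 12 are the eigenvalues:
print(sorted(np.linalg.eigvals(A)))  # approximately [-3, 1, 4]

# The eigenvector found by hand for lambda = 4:
v = np.array([1.0, 1.0, -1.0])
print(A @ v)        # [ 4.  4. -4.], which is 4 times v

# Any nonzero scalar multiple is also an eigenvector:
print(A @ (2 * v))  # [ 8.  8. -8.]
```

The scalar-multiple check mirrors the point in the lesson: 2, 2, minus 2 works just as well as 1, 1, minus 1, because only the direction matters.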
You already found that the eigenvalue 4 is associated with the eigenvector 1, 1, minus 1. And if you follow the same process for the eigenvalues 1 and negative 3, you will find their eigenvectors: 2, minus 1, 1 for the eigenvalue 1, and 0, 1, 1 for the eigenvalue negative 3. As you may have noticed, the two examples you worked on were for 2 by 2 and 3 by 3 matrices, both of which are square. Could you compute the eigenvalues and eigenvectors for a matrix of any shape? Remember, to get the eigenvalues, you need to solve for a determinant. But as you learned in previous lessons, the determinant is only defined for square matrices. So for any square matrix, you can find eigenvalues and eigenvectors. However, if the matrix is not square, like the one right here, then it doesn't have any eigenvalues or eigenvectors.
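As a last check (assuming NumPy is installed), this sketch verifies the remaining two eigenpairs and shows that NumPy refuses to compute eigenvalues for a non-square matrix, in line with the determinant argument above:

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [ 1.0,  0.0, -3.0],
              [-1.0, -3.0,  0.0]])

# Check the remaining eigenpairs:
print(A @ np.array([2.0, -1.0, 1.0]))  # [ 2. -1.  1.], eigenvalue 1
print(A @ np.array([0.0, 1.0, 1.0]))   # [ 0. -3. -3.], eigenvalue -3

# A non-square matrix has no eigenvalues; NumPy raises an error:
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
try:
    np.linalg.eig(M)
except np.linalg.LinAlgError:
    print("eig is only defined for square matrices")
```

The 2 by 3 matrix M here is just an arbitrary example of a non-square shape; any non-square array triggers the same error.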