By definition, a nonzero vector \( v \) is an eigenvector of a matrix \( A \) with eigenvalue \( \lambda \) if
\[ Av = \lambda v \]
\[ (A - \lambda I)v = 0 \]
and, since \( v \) is required to be nonzero, \( A - \lambda I \) must be singular, so this is equivalent to saying
\[ \det(A - \lambda I) = 0 \]
This is therefore the equation you solve to find the eigenvalues.
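If it helps to see this concretely, here is a small numerical sketch with numpy. The matrix is made up for illustration, not taken from your problem; for a 2x2 matrix the characteristic polynomial is just \( \lambda^2 - \operatorname{tr}(A)\lambda + \det(A) \).

```python
import numpy as np

# Hypothetical example matrix (not from the problem in this thread).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# For a 2x2 matrix, det(A - lam*I) expands to
# lam^2 - trace(A)*lam + det(A).
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
lams = np.sort(np.roots(coeffs))   # roots of the characteristic polynomial
print(lams)                        # eigenvalues: [1. 3.]

# Cross-check against numpy's built-in eigenvalue routine.
print(np.sort(np.linalg.eigvals(A)))
```

Solving the quadratic by hand gives the same two eigenvalues, 1 and 3.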
Once you have found the eigenvalues, \( \lambda_i \), you want to find the null space of the operator
\[ A - \lambda_i I \]
for each \( i \). This corresponds to the eigenspace for \( \lambda_i \).
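To make the null-space step concrete, here is a sketch continuing the made-up 2x2 example above; one standard way to compute a null space numerically is via the SVD (rows of \( V^H \) whose singular value is zero span it).

```python
import numpy as np

# Same hypothetical matrix as before; take the eigenvalue lam = 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = 3.0

M = A - lam * np.eye(2)          # the operator A - lam*I
# Rows of vh whose singular value is ~0 span the null space of M.
_, s, vh = np.linalg.svd(M)
null_vecs = vh[s < 1e-10]        # basis for the eigenspace of lam
v = null_vecs[0]
print(v)                          # proportional to (1, 1)

# Check it really is an eigenvector: A v should equal lam * v.
print(np.allclose(A @ v, lam * v))  # True
```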
This all probably seems a bit abstract, but it is absolutely correct. I STRONGLY recommend you go through your lecture notes and/or textbook to find some worked examples. Once you understand them, attempt your particular problem.
ok thx for the advice !
@JamesJ i can get to the part where you use matrix multiplication to get 4 separate equations: 2 with a, b and 2 with c, d. Do you just solve for the 4 unknowns and put them in as the matrix? sorry
You want to solve for \( \lambda \). If you look at a worked example, you'll see how.
Yeah, but in this problem it's given, so do you just work backwards to get the original matrix?
Then for each eigenvector \( v_j \), \( A v_j = \lambda_j v_j \).
Let \( v_1, v_2, \ldots, v_n \) be basis vectors for the eigenspaces. If we treat each of those as a column vector, then you can see that
\[ A [\, v_1 \; v_2 \; \cdots \; v_n \,] = [\, \lambda_1 v_1 \; \lambda_2 v_2 \; \cdots \; \lambda_n v_n \,] \]
where \( \lambda_j \) is the eigenvalue corresponding to \( v_j \). Call this matrix
\[ V = [\, v_1 \; v_2 \; \cdots \; v_n \,] \]
Notice that the right-hand side above is just \( V D \), where \( D \) is a diagonal matrix with the \( \lambda_j \) down the diagonal, so
\[ V^{-1} A V = V^{-1} [\, \lambda_1 v_1 \; \lambda_2 v_2 \; \cdots \; \lambda_n v_n \,] = D \]
and therefore
\[ A = V D V^{-1} \]
So that's how you recover \( A \) from the \( \lambda_j \) and the \( v_j \).
Again, read an example!
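And here is the "working backwards" step as a numerical sketch. The eigenvalues and eigenvectors below are made up for illustration; substitute the ones given in your problem.

```python
import numpy as np

# Hypothetical data: suppose the problem gives eigenvalues 1 and 3 with
# eigenvectors (1, -1) and (1, 1). These numbers are made up for illustration.
V = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])      # eigenvectors as columns
D = np.diag([1.0, 3.0])          # eigenvalues down the diagonal, same order

# Working backwards: A = V D V^-1
A = V @ D @ np.linalg.inv(V)
print(A)                          # [[2. 1.]
                                  #  [1. 2.]]

# Sanity check: A times each eigenvector is the eigenvalue times it.
print(np.allclose(A @ V[:, 0], 1.0 * V[:, 0]))  # True
print(np.allclose(A @ V[:, 1], 3.0 * V[:, 1]))  # True
```

Note the order matters: the j-th column of \( V \) must pair with the j-th diagonal entry of \( D \).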