anonymous
  • anonymous
Need help in Linear Algebra.
Mathematics
anonymous
  • anonymous
1 Attachment
anonymous
  • anonymous
I don't know where to begin solving for k, can anyone guide me please?
anonymous
  • anonymous
I know that I have to solve \((\lambda I-A)X=0\), where \(I\) is the identity matrix, \(\lambda = 3\), \(A\) is the matrix, and \(X\) is the eigenvector.
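A minimal SymPy sketch of this setup (assuming \(A\) is the matrix written out later in the thread, with \(k\) left symbolic and \(\lambda = 3\)):

```python
# Sketch of the setup: solve (lambda*I - A) X = 0 with lambda = 3.
# A is the matrix written out later in the thread; k stays symbolic.
from sympy import Matrix, symbols, eye

k = symbols('k')
A = Matrix([[3, -2, k, 0],
            [0,  1, 3, 1],
            [0,  0, 2, 1],
            [0,  0, 0, 3]])
lam = 3
M = lam * eye(4) - A   # eigenvectors for lambda = 3 live in the null space of M
print(M)
```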

More answers

thomas5267
  • thomas5267
Start with \(\det(A-3I)=0\)
anonymous
  • anonymous
does \[\det(A-3I)=\det(3I-A) \]?
thomas5267
  • thomas5267
\(\det(3I-A)=-\det(A-3I)\) for a square matrix of odd size, and \(\det(3I-A)=\det(A-3I)\) for a square matrix of even size.
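A quick numerical spot-check of this sign rule on random matrices of odd and even size (illustrative only, using NumPy):

```python
# Spot-check: det(3I - A) = (-1)^n * det(A - 3I) for random n x n matrices.
import numpy as np

rng = np.random.default_rng(0)
for n in (3, 4):                         # odd size, then even size
    A = rng.standard_normal((n, n))
    lhs = np.linalg.det(3 * np.eye(n) - A)
    rhs = (-1) ** n * np.linalg.det(A - 3 * np.eye(n))
    print(n, np.isclose(lhs, rhs))       # True, True
```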
thomas5267
  • thomas5267
Since you know \(\det(3I-A)=0\) it doesn't matter.
anonymous
  • anonymous
ok
anonymous
  • anonymous
[drawing of a matrix]
anonymous
  • anonymous
det of that equals 0
thomas5267
  • thomas5267
What is happening? That is not \(A-3I\) or \(3I-A\) for sure! The entries off the diagonal should be the same as in \(A\) (up to a sign)!
anonymous
  • anonymous
oh wait, what am I doing lol, I didn't multiply the identity by 3
anonymous
  • anonymous
sorry about that one sec
thomas5267
  • thomas5267
\[ A-3I= \begin{pmatrix} 3&-2&k&0\\ 0&1&3&1\\ 0&0&2&1\\ 0&0&0&3 \end{pmatrix} - \begin{pmatrix} 3&0&0&0\\ 0&3&0&0\\ 0&0&3&0\\ 0&0&0&3 \end{pmatrix} \]
anonymous
  • anonymous
[drawing of \(A-3I\)] is this right now?
jim_thompson5910
  • jim_thompson5910
This property may come in handy `A determinant with a row or column of zeros has value 0.` source: http://mathworld.wolfram.com/Determinant.html
anonymous
  • anonymous
so there's no point in putting in column 1 and row 4?
jim_thompson5910
  • jim_thompson5910
oh nvm you already found the eigenvalue. So no need for determinants now
jim_thompson5910
  • jim_thompson5910
I mean you already have the eigenvalue
anonymous
  • anonymous
yeah, it's given in the question
thomas5267
  • thomas5267
It seems like every k works since the determinant is necessarily 0.
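A short SymPy check of this: since the first column and last row of \(A-3I\) are zero, its determinant vanishes identically in \(k\):

```python
# det(A - 3I) is identically 0: the first column and the last row
# of A - 3I are all zeros, so every k gives a zero determinant.
from sympy import Matrix, symbols, eye

k = symbols('k')
A = Matrix([[3, -2, k, 0],
            [0,  1, 3, 1],
            [0,  0, 2, 1],
            [0,  0, 0, 3]])
print((A - 3 * eye(4)).det())   # 0, independent of k
```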
phi
  • phi
yes, but most k's give a repeated eigenvector. They want two different eigenvectors for eigenvalue 3. k = 4 works, but I'm not sure how you are supposed to find it.
thomas5267
  • thomas5267
What is a repeated eigenvector?
phi
  • phi
The way you find the eigenvectors for eigenvalue 3 is to find the vectors that lie in the null space of \(A - 3I\). Namely, start with
\[
\begin{pmatrix}
0 & -2 & k & 0\\
0 & -2 & 3 & 1\\
0 & 0 & -1 & 1\\
0 & 0 & 0 & 0
\end{pmatrix}
\]
and row reduce. We will get to this step:
\[
\begin{pmatrix}
0 & 1 & -k/2 & 0\\
0 & 0 & 3-k & 1\\
0 & 0 & -1 & 1\\
0 & 0 & 0 & 0
\end{pmatrix}
\]
Now we notice that if \(3-k\) happens to equal \(-1\), then we will have an extra row of zeros, meaning we will have more than 1 vector in the null space. In other words, for \(3-k = -1\) we will get two different eigenvectors for e-val 3.
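A small SymPy sketch of the same null-space computation, comparing a generic value of \(k\) with \(k=4\) (letting `nullspace()` do the row reduction):

```python
# Count null-space basis vectors of A - 3I for a generic k and for k = 4.
from sympy import Matrix, symbols

k = symbols('k')
B = Matrix([[0, -2,  k, 0],
            [0, -2,  3, 1],
            [0,  0, -1, 1],
            [0,  0,  0, 0]])

print(len(B.subs(k, 5).nullspace()))   # 1 basis vector for a generic k
print(len(B.subs(k, 4).nullspace()))   # 2 basis vectors when k = 4
```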
phi
  • phi
instead of "repeated eigenvector" I should say we have repeated eigenvalue 3 in this problem. (the eigenvalues lie on the main diagonal of a triangular matrix, so we see the repeated 3) ordinarily we would expect to see the same eigenvector for this repeated value. but if the null space happens to contain more than 1 vector, we can have different eigenvectors for the same eigenvalue.
anonymous
  • anonymous
How did you conclude that if there are 2 vectors in the null space of that matrix, you will get two different eigenvectors?
anonymous
  • anonymous
oh wait nvm lol
anonymous
  • anonymous
I see what you mean
anonymous
  • anonymous
Vector 1: \[4x_{3}+1x_{4}\] Vector 2: \[1x_{2}-2x_{3}\]
anonymous
  • anonymous
Would those be the 2 eigenvectors associated with e-value 3
anonymous
  • anonymous
whoops, in the first vector that should have been \[1x_{3}\]
anonymous
  • anonymous
-1*
anonymous
  • anonymous
btw I got the question right, 4 was the answer, but I'm still trying to understand it.
phi
  • phi
Once you get the matrix \(A-3I\), the problem reduces to finding all the vectors that lie in its null space. For most k values there is only one independent vector, but for k = 4 there are two. This means we need to know how to find the vectors in the null space of this matrix.
anonymous
  • anonymous
but how did you know that for k = 4 there are two vectors? did you just look at the matrix and notice that if 3-k = -1, row 3 will be all zeros?
phi
  • phi
Let's call \(A-3I\) matrix B. The rank of B plus the dimension of its null space equals 4. If we set k \( \ne\) 4, the rank of B is 3, so the dimension of the null space is 1. For k = 4, B's rank is 2, and so the null space's dimension is 2 (i.e. 2 basis vectors in it). You can find the rank of a matrix by counting the number of "pivots" (this is how Prof Strang describes it... see http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-7-solving-ax-0-pivot-variables-special-solutions/ ).
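A brief rank-nullity check along these lines with SymPy, sampling a few values of \(k\) (nothing matters here beyond whether \(k=4\)):

```python
# Rank-nullity check: rank(B) + dim(nullspace(B)) = 4 for B = A - 3I.
from sympy import Matrix, symbols

k = symbols('k')
B = Matrix([[0, -2,  k, 0],
            [0, -2,  3, 1],
            [0,  0, -1, 1],
            [0,  0,  0, 0]])

for val in (2, 4, 7):
    Bk = B.subs(k, val)
    print(val, Bk.rank(), len(Bk.nullspace()))
# k != 4: rank 3, nullity 1;  k = 4: rank 2, nullity 2
```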
anonymous
  • anonymous
ah ok I get it now
anonymous
  • anonymous
okay Thank you guys so much for your help! :)
ybarrap
  • ybarrap
$$ \begin{bmatrix} 3 &-2 &k&0 \\ 0 &1 &3&1 \\ 0 &0 &2&1 \\ 0 &0 &0&3 \end{bmatrix}\\ $$ Row reduces to $$ \begin{bmatrix} 0 &0 &0&-(3+k) \\ 0 &2 &0&-2 \\ 0 &0 &1&1 \\ 0 &0 &0&0 \end{bmatrix} $$ Note that if \(k=3\) then columns 2 and 3 will be independent and the eigenvectors will be $$ \vec{v}= a\begin{bmatrix} 0 \\ 2\\ 0 \\ 0 \end{bmatrix}+ b\begin{bmatrix} 0 \\ 0\\ 1 \\ 0 \end{bmatrix} $$ for all real a,b. In other words, if k=3 there will be two basic eigenvectors associated with \(\lambda=3\). Note that the last column of the matrix above will be equal to $$ \begin{bmatrix} 0 \\ -2\\ 1\\ 0 \end{bmatrix}= -1\begin{bmatrix} 0 \\ 2\\ 0 \\ 0 \end{bmatrix}+ 1\begin{bmatrix} 0 \\ 0\\ 1 \\ 0 \end{bmatrix} $$ when k=3. Hence, the last column will be dependent on columns 2 and 3 and we will have only two independent vectors.
phi
  • phi
@ybarrap row-reducing the original matrix does not get you the eigenvectors. You can verify this by checking if your vectors satisfy \[ A x = \lambda x \]
ybarrap
  • ybarrap
Ah yes thanks! Row-reduced the wrong matrix, but really not necessary - correction: $$ A \vec{x} = \lambda \vec{x}\\ A \vec{x} - \lambda \vec{x}=0\\ (A-\lambda I_n)\vec{x}=0\\ \begin{bmatrix} 3 &-2 &k&0 \\ 0 &1 &3&1 \\ 0 &0 &2&1 \\ 0 &0 &0&3 \end{bmatrix}- \begin{bmatrix} 3 &0&0&0 \\ 0 &3 &0&0 \\ 0 &0 &3&0 \\ 0 &0 &0&3 \end{bmatrix}= \begin{bmatrix} 0 &-2&k&0 \\ 0 &-2 &3&1 \\ 0 &0 &-1&1 \\ 0 &0 &0&0 \end{bmatrix} \\ \begin{bmatrix} 0 &-2&k&0 \\ 0 &-2 &3&1 \\ 0 &0 &-1&1 \\ 0 &0 &0&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2\\ x_3 \\ x_4 \end{bmatrix}\\ \implies\\ -2x_2+kx_3=0\\ -2x_2+3x_3+x_4=0\\ -x_3=-x_4\\ \text{Let }x_4=t\\ x_3=t\\ x_2=2t\\ k=\cfrac{2x_2}{x_3}=4\\ \implies\\ \vec{v_1}= t\begin{bmatrix} 0 \\ 2\\ 1 \\ 1 \end{bmatrix}\\ $$ For all real \(t\). And with \(x_1=1\): $$ \vec{v_2}= t\begin{bmatrix} 1 \\ 0\\ 0 \\ 0 \end{bmatrix}\\ $$ So $$ A \vec{v_1} =\\ \begin{bmatrix} 3 &-2 &4&0 \\ 0 &1 &3&1 \\ 0 &0 &2&1 \\ 0 &0 &0&3 \end{bmatrix} \begin{bmatrix} 0 \\ 2\\ 1 \\ 1 \end{bmatrix}= \begin{bmatrix} -4+4 \\ 2+3+1\\ 2+1 \\ 3 \end{bmatrix}= \begin{bmatrix} 0\\ 6\\ 3 \\ 3 \end{bmatrix}= 3 \begin{bmatrix} 0 \\ 2\\ 1 \\ 1 \end{bmatrix}= \lambda \vec{v_1}\\ $$ And also $$ A \vec{v_2} =\\ \begin{bmatrix} 3 &-2 &4&0 \\ 0 &1 &3&1 \\ 0 &0 &2&1 \\ 0 &0 &0&3 \end{bmatrix} \begin{bmatrix} 1 \\ 0\\ 0 \\ 0 \end{bmatrix}= \begin{bmatrix} 3 \\ 0\\ 0 \\ 0 \end{bmatrix}= 3 \begin{bmatrix} 1 \\ 0\\ 0 \\ 0 \end{bmatrix}= \lambda \vec{v_2}\\ $$ Therefore, with k=4 and \(\lambda=3\) we have our two independent basis vectors, \(\vec{v_1}\text{ & }\vec{v_2}\).
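A quick numerical confirmation of this verification with NumPy, assuming \(k=4\), \(\lambda=3\), and the two vectors above:

```python
# Check A v = 3 v for both eigenvectors when k = 4.
import numpy as np

A = np.array([[3., -2., 4., 0.],
              [0.,  1., 3., 1.],
              [0.,  0., 2., 1.],
              [0.,  0., 0., 3.]])
v1 = np.array([0., 2., 1., 1.])
v2 = np.array([1., 0., 0., 0.])

print(np.allclose(A @ v1, 3 * v1))   # True
print(np.allclose(A @ v2, 3 * v2))   # True

# Eigenvalues of a triangular matrix are its diagonal entries: 3, 1, 2, 3.
print(np.round(np.linalg.eigvals(A), 6))
```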