corymcox
If A = 2x2 matrix (1,1,1,1) and B is the 2x2 identity (1,0,0,1), find (A + 2B)^-1 (the inverse of the sum of A plus 2B)... I am getting lost, but I assume this question has some obvious pieces I have not gotten to in lectures, given that the identity is involved... plus (1,1,1,1) seems singular, and I'm not sure how that impacts things... any pointers to get me to a full understanding would be helpful.
\[A = \left[\begin{matrix}1 & 1 \\ 1 & 1\end{matrix}\right]\] \[B = \left[\begin{matrix}1 & 0 \\ 0 & 1\end{matrix}\right]\] What is \[(A + 2B)^{-1}\]
I tried doing the operations, so \[\left[\begin{matrix}1 & 1 \\ 1 & 1\end{matrix}\right] + \left[\begin{matrix}2 & 0 \\ 0 & 2\end{matrix}\right]\] gives \[\left[\begin{matrix}3 & 1 \\ 1 & 3\end{matrix}\right]\] then augment the matrix with the identity and do elimination using Gauss-Jordan... but I get lost.
Augmenting with the identity matrix: P = [3 1 1 0 ; 1 3 0 1]; R2 - (1/3)R1 gives [3 1 1 0 ; 0 8/3 -1/3 1]; R1/3 gives [1 1/3 1/3 0 ; 0 8/3 -1/3 1]; R2/(8/3) gives [1 1/3 1/3 0 ; 0 1 -1/8 3/8]; R1 - (1/3)R2 gives [1 0 3/8 -1/8 ; 0 1 -1/8 3/8]. So P = (A + 2B)inv = [3/8 -1/8 ; -1/8 3/8]. Sorry, I don't know how to use the equation editor... ";" means the start of a new row.
For MATLAB: p = [3 1; 1 3]; d = [p eye(2)]; k = rref(d); Answer = k(:, 3:4);
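If you don't have MATLAB handy, here is a rough Python equivalent of the same idea (row-reduce [M | I] until the left half is the identity), a sketch only: the function name `invert_2x2` is mine, and it uses exact fractions so no rounding muddies the pivots.

```python
from fractions import Fraction

def invert_2x2(m):
    """Invert a 2x2 matrix by row-reducing the augmented matrix [m | I]."""
    # Build [m | I] with exact rational arithmetic (no floating-point drift).
    aug = [[Fraction(m[i][j]) for j in range(2)] +
           [Fraction(int(i == j)) for j in range(2)] for i in range(2)]
    # If the first pivot is zero, swap the two rows.
    if aug[0][0] == 0:
        aug[0], aug[1] = aug[1], aug[0]
    for col in range(2):
        pivot = aug[col][col]
        if pivot == 0:
            raise ValueError("matrix is singular")
        # Scale the pivot row so the pivot becomes 1.
        aug[col] = [x / pivot for x in aug[col]]
        # Clear this column's entry in the other row.
        other = 1 - col
        f = aug[other][col]
        aug[other] = [a - f * p for a, p in zip(aug[other], aug[col])]
    # The right half of the reduced matrix is the inverse.
    return [row[2:] for row in aug]

print(invert_2x2([[3, 1], [1, 3]]))  # [[3/8, -1/8], [-1/8, 3/8]] as Fractions
```

This reproduces vinkle's hand calculation exactly, since Fractions keep 8/3 as 8/3 rather than 2.6667.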
Two steps: (1) first calculate A + 2B \[A + 2B = \left[\begin{matrix}1& 1 \\ 1 & 1\end{matrix}\right] + 2\left[\begin{matrix}1 & 0 \\0 & 1\end{matrix}\right]= \left[\begin{matrix}3 & 1 \\ 1 & 3\end{matrix}\right]\] (2) Find the inverse of that. Form the augmented matrix with the LHS as above and the RHS the identity: \[\left[\begin{array}{cc|cc}3 & 1 & 1 & 0\\ 1 & 3 & 0 & 1\end{array}\right]\] then do Gauss (row reduction with pivots to clear the lower triangle), then Jordan (effectively back substitution to clear the upper triangle), to reduce the LHS to the identity, and then the RHS is the desired inverse: \[\left[\begin{array}{cc|cc}1 & 0 & ? & ?\\ 0 & 1 & ? & ?\end{array}\right]\] Whatever you do to the LHS you do the same to the RHS; the augmented matrix is a notational shortcut for operating on two matrices at once. The logic is that if some matrix Q represents the totality of the steps that turn A + 2B into the identity, that is Q(A + 2B) = I, then \[Q = (A+2B)^{-1} \] so the RHS of the augmented matrix ends up equal to QI = Q. Ultimately this relies on those E matrices that Prof. Strang speaks of: the product of the matrices representing all those individual Gauss-Jordan steps becomes Q.
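That last point, that Q is the product of the elementary E matrices, can be checked directly. A small Python sketch, with the four row operations from the reduction above written as elementary matrices (the names `E1`..`E4` and the helper `matmul` are mine, and exact fractions are used throughout):

```python
from fractions import Fraction

def matmul(a, b):
    """2x2 matrix product with exact fractions."""
    return [[sum(Fraction(a[i][k]) * Fraction(b[k][j]) for k in range(2))
             for j in range(2)] for i in range(2)]

M = [[3, 1], [1, 3]]                  # A + 2B
# Each Gauss-Jordan step written as an elementary matrix:
E1 = [[1, 0], [Fraction(-1, 3), 1]]   # R2 <- R2 - (1/3) R1
E2 = [[Fraction(1, 3), 0], [0, 1]]    # R1 <- R1 / 3
E3 = [[1, 0], [0, Fraction(3, 8)]]    # R2 <- R2 / (8/3)
E4 = [[1, Fraction(-1, 3)], [0, 1]]   # R1 <- R1 - (1/3) R2
# Q is the product of all the steps (applied left to right, so E4 is outermost):
Q = matmul(E4, matmul(E3, matmul(E2, E1)))
print(Q)             # the inverse, [[3/8, -1/8], [-1/8, 3/8]]
print(matmul(Q, M))  # the 2x2 identity, confirming Q(A + 2B) = I
```

Multiplying the E's in the order the row operations were performed gives exactly the inverse from the hand calculation.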
I follow your logic, but the math seems to suggest this is not invertible. My last elimination step leaves me with \[\left[\begin{matrix}3 & 0 & | &\frac{ 9 }{ 8 } & -\frac{ 3 }{ 8 }\\ 0 & \frac{ 8 }{ 3 } & | &-\frac{ 1 }{ 3 } & 1\end{matrix}\right]\] I have not gotten to determinants in lectures yet, but I assume the problem is that 1) A is not invertible because its columns are not linearly independent, and 2) B is the identity, so 2B is still not useful.
According to vinkle, he multiplies each row by some constant, i.e. multiples of the row itself, not of other rows. Is this allowed? For some reason I am not fully understanding the proper elimination steps from the lectures. I did not think you could take multiples of the same row, as that seems to change the purpose of elimination (to force vectors into each other in proportions that maintain the overall value). Is this truly allowed? Maybe point me to the place in the textbook where I can see it, as I don't see it in the lectures yet.
It doesn't matter whether or not A and/or B are separately invertible. It is the sum A + 2B we are considering, meaning that \[(A+2B)^{-1}\ne A^{-1}+(2B)^{-1}\] It's analogous to, say: \[\frac{ 1 }{ (2 + 3) }\ne \frac{ 1 }{2}+ \frac{ 1 }{ 3 }\]

It is allowable to multiply one row by some constant and leave the others alone, because we do the same to the RHS of the augmented matrix. If you like, the information that the original column vectors (of A + 2B) represent is not lost, since the RHS is transformed likewise; it's a bit like the number 2 being stored as 1/2, provided that we remember that we used inversion.

You have brought up an important point, though. You can multiply matrices and yet lose information about the pathway that brought you there, e.g. \[\left[\begin{matrix}a & b \\ c & d\end{matrix}\right]\left[\begin{matrix}0 & 0 \\ 0 & 0\end{matrix}\right]= \left[\begin{matrix}0 & 0 \\ 0 & 0\end{matrix}\right]\] regardless of the values a, b, c, and d. If I then ask what LHS led to that RHS, you cannot uniquely specify it. In particular \[\left[\begin{matrix}a & b \\ c & d\end{matrix}\right]\left[\begin{matrix}0 & 0 \\ 0 & 0\end{matrix}\right]\ne \left[\begin{matrix}1 & 0 \\ 0 & 1\end{matrix}\right]\] and thus the matrix of all zeroes has no inverse.

We are familiar with number sets that have only one non-invertible member (zero) under multiplication. But there are infinitely many matrices for which there is no multiplicative inverse, and they are not all the matrix of all zeroes. For those matrices the row-reduction process will fail: a zero will appear in some pivot position, and there won't be a lower row available to swap in order to bring a non-zero value into that pivot position. That's the step where you discover that one of the columns was in fact a linear combination of the others. You may not have known that by inspection of your original matrix.
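That failure mode is easy to see in code. A minimal Python sketch, assuming exact fractions and a helper name (`has_full_pivots`) of my own: forward elimination reports whether both pivots survive. For A = (1,1,1,1) the second pivot vanishes, while A + 2B comes through fine.

```python
from fractions import Fraction

def has_full_pivots(m):
    """Forward-eliminate a copy of the 2x2 matrix m; report whether
    both pivot positions end up nonzero (i.e. m is invertible)."""
    rows = [[Fraction(x) for x in row] for row in m]
    if rows[0][0] == 0:               # try a row swap to rescue the first pivot
        rows[0], rows[1] = rows[1], rows[0]
    if rows[0][0] == 0:
        return False                  # the whole first column is zero
    f = rows[1][0] / rows[0][0]
    # R2 <- R2 - f * R1, clearing the entry below the first pivot.
    rows[1] = [b - f * a for a, b in zip(rows[0], rows[1])]
    return rows[1][1] != 0            # does the second pivot survive?

print(has_full_pivots([[1, 1], [1, 1]]))  # False: column 2 is a copy of column 1
print(has_full_pivots([[3, 1], [1, 3]]))  # True: A + 2B has two nonzero pivots
```

The `False` case is exactly the "zero in a pivot position with no lower row to swap" situation described above.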
An easy method to see whether a matrix is invertible is to check whether its determinant is nonzero: if the determinant is nonzero then it is invertible, otherwise it is not.
Did you get it, CORYMCOX?
The 'ease' depends on the matrix size (call it n x n), and certainly for the small examples here that's true. The problem is that a determinant calculation, where the 'atomic' operation considered is multiplication, grows like the factorial of n. So for larger sizes it's actually quicker to do Gauss-Jordan, which goes like the square of n. In systems where the determinant isn't even available, i.e. the number of rows is not equal to the number of columns, Gauss-Jordan is the recourse (or, for suitable systems/problems, the FFT, which goes like n log(n)). This is basically why there's not much emphasis on determinants in the course: for interesting real-world problems the efficiency wall is hit fairly readily if you use them.
Sorry, cube of n for Gauss-Jordan. Any particular power is swamped by factorial in short order, alas.
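The growth rates above can be made concrete by counting only the scalar multiplications in a naive Laplace (cofactor) expansion: the recurrence M(n) = n(M(n-1) + 1) is an idealisation of my own (it ignores additions and sign bookkeeping), but it shows the factorial-style blow-up against n^3.

```python
def cofactor_mults(n):
    """Scalar multiplications used by a naive Laplace (cofactor)
    expansion of an n x n determinant: M(n) = n * (M(n-1) + 1), M(1) = 0.
    Each of the n terms costs one multiply plus a full (n-1) x (n-1) det."""
    return 0 if n == 1 else n * (cofactor_mults(n - 1) + 1)

for n in range(2, 7):
    # Compare against n**3, the rough cost scale of Gauss-Jordan.
    print(n, cofactor_mults(n), n ** 3)
```

Already by n = 5 the cofactor count (205) has passed n^3 (125), and it only gets worse from there.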