GreenwoodMom
I just watched lecture 1. It seems to me that a key idea about the columns is that the vectors are not in x, y, z coordinates (since we are trying to find the x, y, and z values) but in a different coordinate scheme (say r, s, t). Is that right?
I actually do think of the columns as vectors in xyz space. The reason is that you can transpose the matrix, turning columns into rows, without changing any of the information in the matrix. The column space then becomes the row space. A matrix that is not square is more complicated, since the number of equations versus unknowns differs depending on whether or not the matrix is transposed.
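As a quick sketch of that point (using numpy purely as my own illustration, not something from the lecture): transposing swaps rows and columns but keeps every entry, so no information is lost.

```python
import numpy as np

# Illustrative 3x3 matrix (made-up numbers, not from the lecture).
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 10]])

# Transposing turns columns into rows; transposing again recovers A,
# so the entries (the "information") are unchanged.
print(np.array_equal(A.T.T, A))  # True
print(A[:, 0], A.T[0, :])        # first column of A == first row of A.T
```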
I guess what I'm thinking is not that the columns aren't in 3-d space, but that they can't be in the same coordinate space as x, y, z, since in the lecture, from the column perspective, the equation was x times a vector in 3-d space + y times a vector in 3-d space + z times a vector in 3-d space. So those vectors can't be in the xyz space. From the row perspective, the equations describe planes in xyz. It reminds me of how, in a database, certain data might be represented as the values of fields, while in a different representation the possible values become the field names. The information is the same, but the 'dimensions' in which it is expressed change.
In general the matrices that we transform with are not square, that is, the number of rows and columns are different. We use an m x n matrix on the left to multiply an n x 1 vector ( a vector is a matrix with short rows ! ) on the right, giving an m x 1 vector. For example : \[\left[\begin{matrix}1& 2 \\ 3 & 4 \\ 5 & 6\end{matrix}\right]\left[\begin{matrix}7\\ 8 \end{matrix}\right]=\left[\begin{matrix}23\\53\\83\end{matrix}\right]\]that is \[Ax = b\]This implies that b and the columns of A are in the same space of dimension m ( here 3 ), but x is of dimension n ( here 2 ). So you are right : in general x and b come from spaces of different dimension. We think of the components of a vector ( x ) from one space as the coefficients in a linear combination of vectors ( the columns of A ) from another space. Hence the comment that b is a linear combination of the columns of A, or b is in the column space of A, etc. The whole Ax = b business ( as matters are usually presented ) is finding suitable x's, if any, for a given A and b. Now if A is square then I guess it is a matter of language - or more probably of the underlying problem or application being described - whether x, b and the columns of A are 'in' the same space. Many physical problems have A producing b as a physical transform ( movement, rotation, reflection ) of x, for instance, and so they may all reasonably be lumped together. But you might be doing accounting, where the vectors are prices and quantities, i.e. not 'the same' in any natural sense.
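The 3 x 2 example can be checked directly; here is a minimal sketch ( numpy is my choice of tool here, not something from the thread ) showing b as a combination of A's columns:

```python
import numpy as np

# The m x n example above: A is 3 x 2, x has 2 components, b has 3.
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
x = np.array([7, 8])

b = A @ x
# b is the linear combination of A's columns with coefficients from x:
combo = 7 * A[:, 0] + 8 * A[:, 1]
print(b)      # [23 53 83]
print(combo)  # [23 53 83] -- b lives in the column space of A
```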
FourColor said: "As a row perspective the equations describe planes in the xyz." Certainly that is true. I think I did not explain my thoughts as thoroughly as I needed to, so please allow me to expand on them. Look at the first column, the x column: it describes the proportion of the x values in the planes. So we can think of these proportions as another coordinate system, but that is an arbitrary choice, just as arbitrary as the choice to view the rows as part of the xyz coordinates. In other words, if the columns are related to an rst coordinate system, then the rows contain exactly the same information about the rst coordinates as the columns, just in a different arrangement. I hope that makes sense.
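One way to see "same information, different arrangement" concretely ( again a numpy sketch of my own, not from the thread ): reading A by columns and reading A.T by rows give the same product.

```python
import numpy as np

# Same numbers as the 3 x 2 example earlier in the thread.
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
x = np.array([7, 8])

# Column picture: A @ x combines the columns of A.
# Transposed row picture: x @ A.T combines the rows of A.T,
# which are exactly the columns of A -- same information, rearranged.
print(np.array_equal(A @ x, x @ A.T))  # True
```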
Further thoughts on why rows and columns may get confusing: one of the challenges in linear algebra is discovering and managing certain dualities. What we call a row or a column is to some extent arbitrary, since one can take column statements/manipulations/theorems, suitably transpose all the entities, and wind up with equivalent truths as row statements/manipulations/theorems. Prof Strang will go into that in later lectures, especially, I would say, lecture 3 ( multiplication and inverse matrices ) and lecture 10 ( the four fundamental subspaces ).

One other useful point he makes, again later on, is that in higher dimensions you quickly lose geometric intuition about the entities being manipulated. Thousands or even millions of dimensions are dealt with routinely ( with computer help, of course ). So one reason he emphasises the column viewpoint is that it extends more easily as the degrees of freedom go up: if I expand a matrix A in both length and width, I can ( in my mind's eye ) consider that as more vectors ( A has more columns ), with each vector having more components ( A has more rows ). Otherwise one can get bogged down trying to visualise/manipulate/juggle all sorts of higher-dimensional analogues of planes, cubes and whatnot, which doesn't help and is not needed to solve real problems anyway. He could have chosen to begin the course's discussion and examples from a row viewpoint, and the same lessons would still apply.

[ Thanks to whoever awarded the medal! :-) ]
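A sketch of why the column view scales ( the 1000 x 500 dimensions and random entries are my own arbitrary illustration ):

```python
import numpy as np

# Grow A to 1000 x 500: no geometric picture is needed. A now has more
# columns (more vectors) and more rows (more components per vector),
# but A @ x is still just a linear combination of A's columns.
rng = np.random.default_rng(0)
m, n = 1000, 500
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)

b = A @ x
combo = sum(x[j] * A[:, j] for j in range(n))  # column-by-column sum
print(np.allclose(b, combo))  # True
```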