btw A is a 3x3 matrix if you cant tell :S
you familiar with paul's online notes
i totally agree with your comments on college
as far as paying 40 k to teach yourself, lol. ironic huh
got about 10 tabs opened up, but it's very boring to read, and im not very good at reading a bunch of boring stuff lol. for that i'd be able to just read a textbook :S i have ADD... short attention span ftl
paul is boring? hes better than a text book in some sense
and yep definitely but...makes for a nice degree lol
i think the future is teaching yourself, since we're doing it anyway
as in, online supplements like khan academy and pauls notes
uhh yea idk i find it to be a bit bleh maybe more because i have... about 10 tabs of it opened that im trying to learn at once and skim through to find what i actually need to re-teach myself lol
paul is a fluttering genius. but just with calculus and linear algebra. where are his probability notes, lol
oh cmon, he has all the examples done out
you want to do the eigenspace whachamakallit?
alright, one sec
we're going to start with a square matrix A and try to determine vectors x and scalars lambda so that we have
A x = lambda * x
so in other words, multiplying the vector x by A is equivalent to multiplying x by some scalar like 2 or 3. remember x is a column vector, A is a square matrix, and the product A x has to be defined (the sizes have to match up)
so it's like multiplying A by x is lengthening the arrow of x, or dilating it
x is the eigenvector, and lambda the scalar is the eigenvalue
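[editor's note: the definition above can be checked numerically. This is a minimal sketch using a hypothetical 3x3 matrix A, a candidate eigenvector v, and eigenvalue lambda = 2 — not the actual matrix from the worksheet in this conversation:]

```python
# Hypothetical 3x3 matrix (not the one from the worksheet):
A = [[2, 0, 0],
     [1, 3, 0],
     [0, 0, 3]]

def matvec(M, x):
    """Multiply matrix M by column vector x."""
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

v = [1, -1, 0]   # a candidate eigenvector of A
lam = 2          # the matching eigenvalue

print(matvec(A, v))            # [2, -2, 0]
print([lam * vi for vi in v])  # [2, -2, 0] -- same, so A v = lambda * v
```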
ok for the beginning im aware you subtract lambda I from the original matrix A then you take the determinant of that and set it = to 0 to find the eigenvalues of A
then you plug in each eigenvalue into the A-lambda I but then what do you do for the basis after that? the way i was taught was marked wrong on the worksheet i have
what i had done is i REF'd the matrix after i plugged in the eigenvalue, then i took the column with leading non-zero terms... the only thing i can think of is maybe i had to take that column but from the original matrix?
we did A x = lambda I x, so lambda I x - A x = 0
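[editor's note: the "subtract lambda I, set the determinant to 0" step described above can be sketched directly. With the same hypothetical matrix A as before, det(A - lambda I) vanishes exactly at the eigenvalues (2 and 3 here) and nowhere else:]

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def a_minus_lambda_i(M, lam):
    """Subtract lam down the diagonal: A - lambda * I."""
    return [[M[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]

# Hypothetical matrix, with eigenvalues 2 and 3:
A = [[2, 0, 0],
     [1, 3, 0],
     [0, 0, 3]]

for lam in (1, 2, 3, 4):
    print(lam, det3(a_minus_lambda_i(A, lam)))  # zero only at lam = 2 and lam = 3
```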
why not RREF ?
not necessary cuz you end up with 2 -3 1 and 2 rows of 0
also not possible
well not for the eigenvalue of 2 i didnt try 1
for 1 you end up with 1 - 1 0 for the first row, the second row is 0 0 1, and then a row of 0s
i know how to find the basis of the kernel... basis of the range... but not sure on just the basis of the eigenspace lol
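[editor's note: the eigenspace basis is exactly the kernel basis already mentioned — the eigenspace for lambda is the null space of A - lambda I, so you row reduce, read off the free variables, and get one basis vector per free variable. A sketch with the same hypothetical A, for lambda = 3:]

```python
# Eigenspace for lambda = basis of the kernel (null space) of A - lambda * I.
A = [[2, 0, 0],
     [1, 3, 0],
     [0, 0, 3]]
lam = 3
B = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]

# B row reduces to [1 0 0; 0 0 0; 0 0 0]: x1 = 0, x2 and x3 free.
# One basis vector per free variable (set that variable to 1, the rest to 0):
basis = [[0, 1, 0], [0, 0, 1]]

def matvec(M, x):
    """Multiply matrix M by column vector x."""
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

for v in basis:
    print(matvec(B, v))  # [0, 0, 0] for each -- both vectors are in the kernel
```

Rows of zeros in the reduced matrix are expected: they are what create the free variables, and each free variable contributes one vector to the eigenspace basis.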
ok that's ok, a row of zeroes is fine. brb, 10 minutes
kk im off to bed for the night ty anyway