Loser66
  • Loser66
On attachment: please help
Mathematics
Loser66
  • Loser66
1 Attachment
Loser66
  • Loser66
I would like to know where "Hence \(x(t) = e^{tA}x_0\)" comes from at 1.4, 1.5.
freckles
  • freckles
what are you asking?


freckles
  • freckles
like how did they get that equation?
Loser66
  • Loser66
It didn't mention \(x(t)\) before. It worked on \(e^{tA}\) and its derivative. Suddenly I have "Hence \(x(t) = \dots\)". Why?
ganeshie8
  • ganeshie8
plug x(t) into the IVP \[\dfrac{dx}{dt} = Ax~;~x(0)=x_0\] and see if it is really a solution
Loser66
  • Loser66
If "plug x(t) into..." means \(\dfrac{d}{dt}e^{tA}= Ae^{tA}\), then \(\dfrac{d}{dt}x(t)= Ax(t)\), right?
anonymous
  • anonymous
since we have that \(\dfrac{d}{dt}e^{At}=Ae^{At}\), it follows that \(x=e^{At}\) is clearly a solution to \(\dfrac{dx}{dt}=Ax\) (this is just a restatement of the previous fact)
ganeshie8
  • ganeshie8
\(x(t) = e^{tA}x_0\) right ?
Loser66
  • Loser66
@ganeshie8 That is what I asked! How and why do we have that equation?
Loser66
  • Loser66
and @oldrin.bataku gave the answer, but I didn't get how "it follows that \(x = e^{At}\)" :(
ganeshie8
  • ganeshie8
They are saying that function is a solution to the given IVP. I was asking you to check whether it really is by plugging \(x(t)\) into the differential equation.
ganeshie8
  • ganeshie8
btw, \(x_0\) is just a constant. Recall that if \(f(x)\) is a solution, then a constant multiple of it is also a solution.
anonymous
  • anonymous
in fact, since the derivative is linear, we actually have \(\dfrac{d}{dt}\left(ke^{At}\right)=kAe^{At}\) so the general solution is actually a whole parameterized family of solutions \(x(t)=kAe^{At}\) for an \(n\)-by-\(1\) constant matrix \(k\)
anonymous
  • anonymous
oops, i meant \(x(t)=ke^{At}\)
anonymous
  • anonymous
so if \(x(0)=x_0\), then \(e^{A\cdot0}k=x_0\implies Ik=x_0\implies k=x_0\), so \(x(t)=e^{At}x_0\) is the unique solution to \(\dot x=Ax\) subject to \(x(0)=x_0\)
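This is easy to check numerically. A minimal sketch in numpy, using a stand-in matrix \(A\) and initial condition \(x_0\) (the textbook's actual \(A\) is in the attachment, so these values are assumptions), with a truncated-series matrix exponential and a finite-difference derivative:

```python
import numpy as np

def expm_series(M, terms=40):
    # matrix exponential via the truncated power series I + M + M^2/2! + ...
    # (adequate for small matrices of modest norm; not a production algorithm)
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # stand-in for the textbook's A
x0 = np.array([2.0, -1.0])          # arbitrary initial condition

def x(t):
    return expm_series(t * A) @ x0  # candidate solution x(t) = e^{tA} x0

# x(0) should equal x0, and x'(t) should equal A x(t);
# estimate the derivative with a central difference
t0, h = 0.7, 1e-6
deriv = (x(t0 + h) - x(t0 - h)) / (2 * h)
```

Both checks come out as expected: \(x(0)=x_0\) and \(x'(t)\approx Ax(t)\).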
Loser66
  • Loser66
I got it. Thank you very much. I was confused by \(x_0\), but now I am OK with it.
ganeshie8
  • ganeshie8
In the textbook, \(x_0\) is a specific constant for the IVP. Maybe think of it like the usual \(c_0\), if past experience with simple ODEs helps.
Empty
  • Empty
I think it's cute how you can kinda pretend these aren't vectors and matrices and just kinda like \[\frac{dx}{dt} = Ax \] "separate" \[\frac{dx}{x} = Adt\] \[ \ln x = At +c\] \[x=ke^{At}\]
anonymous
  • anonymous
note the idea here is actually very general and underlies what we call flows of vector fields -- \(d/dt\) is an infinitesimal generator of the evolution of \(x\) through time, i.e. it gives rise to flows
Loser66
  • Loser66
I have another question. Please explain Ex 1 to me.
1 Attachment
anonymous
  • anonymous
which part is giving you trouble?
Loser66
  • Loser66
1.31
anonymous
  • anonymous
recall that for a matrix \(A\) with columns \(A_1,A_2,A_3,\dots,A_n\) the image of a column vector \(u=(u_1,\dots,u_n)^T\) is \(Au=u_1A_1+\dots+u_nA_n\)
anonymous
  • anonymous
i.e. the \(n\)-th column of the matrix is the action of the matrix on the \(n\)-th basis vector \(e_n\), a column vector of zeros with a \(1\) in row \(n\): $$e_1=\begin{bmatrix}1\\0\end{bmatrix},e_2=\begin{bmatrix}0\\1\end{bmatrix}$$so the matrix $$\begin{bmatrix}a&b\\c&d\end{bmatrix}$$represents the transformation that takes \(e_1\to \begin{bmatrix}a\\c\end{bmatrix}\) and \(e_2\to \begin{bmatrix}b\\d\end{bmatrix}\)
anonymous
  • anonymous
so to figure out the first column of the matrix representing \(e^{At}\), we just need to see how it behaves on \(\begin{bmatrix}1\\0\end{bmatrix}\), and similarly for the second column
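A quick numerical illustration of the column-extraction fact, sketched in numpy with a hypothetical 2-by-2 matrix (the actual matrix is in the attachment): applying \(e^{tA}\) to \(e_1\) and \(e_2\) reproduces its columns.

```python
import numpy as np

def expm_series(M, terms=40):
    # matrix exponential via the truncated power series I + M + M^2/2! + ...
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # stand-in matrix
E = expm_series(1.0 * A)        # e^{tA} at t = 1

e1 = np.array([1.0, 0.0])       # first standard basis vector
e2 = np.array([0.0, 1.0])       # second standard basis vector
col1 = E @ e1                   # action on e1: should be column 0 of E
col2 = E @ e2                   # action on e2: should be column 1 of E
```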
Loser66
  • Loser66
I got what you meant. How about 1.30?
Loser66
  • Loser66
1.33 is \(e^{tA}(1,0)^T\), that is, the first column of \(e^{tA}\). What is 1.34? Why not \(e^{tA}(0,1)^T\), which is the second column of \(e^{tA}\)? Why does it calculate \(e^{tA}(1,1)^T\)?
anonymous
  • anonymous
consider that if \(v\ne 0\) is an eigenvector of \(A\), with \(Av=\lambda v\), it follows that \(A^2v=\lambda^2 v\) and more generally \(A^nv=\lambda^n v\). so we have: $$e^{A}=\sum_{n=0}^\infty\frac1{n!} A^n\\e^Av=\left(\sum_{n=0}^\infty\frac1{n!} A^n\right)v=\sum_{n=0}^\infty\frac1{n!}\left(A^nv\right)=\sum_{n=0}^\infty\frac{\lambda^n}{n!}v=\left(\sum_{n=0}^\infty\frac{\lambda^n}{n!}\right)v=e^{\lambda}v$$
anonymous
  • anonymous
if \(v\) is an eigenvector of \(A\), it's also an eigenvector of \(tA\), since \((tA)v=t(Av)=t\lambda v=(t\lambda )v\), so:$$e^{tA}v=e^{t\lambda}v$$
anonymous
  • anonymous
so now we know the eigenvectors of \(e^{tA}\) are those of \(A\) and the eigenvalues \(\lambda\) become \(e^{t\lambda }\), which is what happened there
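The eigenvector relation is also easy to verify numerically. A sketch in numpy, assuming the example matrix \(A=\begin{bmatrix}0&1\\1&0\end{bmatrix}\) with eigenpairs \((1,(1,1)^T)\) and \((-1,(1,-1)^T)\) (the eigendata used later in the thread):

```python
import numpy as np

def expm_series(M, terms=40):
    # matrix exponential via the truncated power series I + M + M^2/2! + ...
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # eigenvalues 1 and -1
v1 = np.array([1.0, 1.0])       # eigenvector for lambda = 1
v2 = np.array([1.0, -1.0])      # eigenvector for lambda = -1

t = 0.5
E = expm_series(t * A)
lhs1, rhs1 = E @ v1, np.exp(t * 1.0) * v1    # e^{tA} v1 vs e^{t} v1
lhs2, rhs2 = E @ v2, np.exp(-t * 1.0) * v2   # e^{tA} v2 vs e^{-t} v2
```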
anonymous
  • anonymous
now, since we know the eigenvectors \(v_1,\dots,v_n\) of \(e^{tA}\), and we have a vector \(v\in\operatorname{span}\{v_1,\dots,v_n\}\), we can decompose \(v=c_1v_1+\dots+c_nv_n\) and it follows $$e^{tA}v=c_1 e^{tA}v_1+\dots+c_n e^{tA}v_n=c_1 e^{t\lambda_1}v_1+\dots+c_n e^{t\lambda_n} v_n$$ this gives us a way to determine how \(e^{tA}\) behaves on linear combinations of the eigenvectors
anonymous
  • anonymous
so to figure out how \(e^{tA}\) acts on \(e_1=\begin{bmatrix}1\\0\end{bmatrix}\), i.e. to figure out its first column, we can decompose \(e_1=\dfrac12v_1+\dfrac12v_2\) and \(e^{tA}e_1=\dfrac12 e^{tA}v_1+\dfrac12 e^{tA}v_2=\dfrac12e^{t\lambda_1}v_1+\dfrac12e^{t\lambda_2}v_2\)
Loser66
  • Loser66
I follow it.
anonymous
  • anonymous
since we have \(\lambda_1=1,\lambda_2=-1\) and \(v_1=\begin{bmatrix}1\\1\end{bmatrix},v_2=\begin{bmatrix}1\\-1\end{bmatrix}\) this says: $$e^{tA}\begin{bmatrix}1\\0\end{bmatrix}=\frac12e^t\begin{bmatrix}1\\1\end{bmatrix}+\frac12 e^{-t}\begin{bmatrix}1\\-1\end{bmatrix}=\begin{bmatrix}\frac12 e^t+\frac12 e^{-t}\\\frac12 e^t-\frac12 e^{-t}\end{bmatrix}$$ is our first column of \(e^{tA}\)
Loser66
  • Loser66
Yes, that is 1.33
anonymous
  • anonymous
in 1.34 they made a typo, they actually decomposed \(e_2=\frac12 v_1-\frac12 v_2\) and then compute similarly to the above $$e^{tA}\begin{bmatrix}0\\1\end{bmatrix}=\frac12 e^t\begin{bmatrix}1\\1\end{bmatrix}-\frac12e^{-t}\begin{bmatrix}1\\-1\end{bmatrix}=\begin{bmatrix}\frac12 e^t-\frac12e^{-t}\\\frac12e^t+\frac12e^{-t}\end{bmatrix}$$
anonymous
  • anonymous
so we've figured out the columns of \(e^{tA}\) and can write it now as $$e^{tA}=e^{tA}\begin{bmatrix}1&0\\0&1\end{bmatrix}=\begin{bmatrix}\frac12(e^t+e^{-t})&\frac12(e^t-e^{-t})\\\frac12(e^t-e^{-t})&\frac12(e^t+e^{-t})\end{bmatrix}$$
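The entries \(\frac12(e^t+e^{-t})\) and \(\frac12(e^t-e^{-t})\) are just \(\cosh t\) and \(\sinh t\). A numpy check of the final matrix, assuming \(A=\begin{bmatrix}0&1\\1&0\end{bmatrix}\) (the matrix with exactly the eigenpairs used above; the textbook's matrix itself is in the attachment):

```python
import numpy as np

def expm_series(M, terms=40):
    # matrix exponential via the truncated power series I + M + M^2/2! + ...
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # eigenpairs (1, (1,1)^T) and (-1, (1,-1)^T)
t = 1.3
E = expm_series(t * A)

# closed form derived above: entries (e^t +/- e^{-t})/2 = cosh t, sinh t
closed = np.array([[np.cosh(t), np.sinh(t)],
                   [np.sinh(t), np.cosh(t)]])
```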
Loser66
  • Loser66
Yes, I guessed it was a typo, but it tortured me a lot since I didn't understand it. Now I've got it. Thank you so, so much.
Loser66
  • Loser66
Can I have another answer for another problem? Same topic, same kind of problem; I just want to know the trick. Please.
Loser66
  • Loser66
1.41. Is there any way to quickly figure out the \(-\dfrac{1+i}{4}\) term? and \(-\dfrac{i}{2}\)?
1 Attachment
anonymous
  • anonymous
the easy way is to compute the inner products, since the eigenvectors \(\{v_j\}\) are orthogonal we have \(\langle v_i,v_j\rangle=0\text{ where }i\ne j,\text{ or }|v_i|^2\text{ where }i=j\)
anonymous
  • anonymous
so: $$\begin{bmatrix}1\\0\end{bmatrix}=c_1\begin{bmatrix}-2\\1+i\end{bmatrix}+c_2\begin{bmatrix}-2\\1-i\end{bmatrix}\\\begin{bmatrix}-2&1+i\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}=c_1\begin{bmatrix}-2&1+i\end{bmatrix}\begin{bmatrix}-2\\1+i\end{bmatrix}+c_2\begin{bmatrix}-2&1+i\end{bmatrix}\begin{bmatrix}-2\\1-i\end{bmatrix}\\-2\cdot1+(1+i)\cdot0=c_1((-2)^2+(1+i)^2)+c_2((-2)^2+(1+i)(1-i))\\-2=c_1(4+1+2i-1)+c_2(4+1+1)\\-2=(4+2i)c_1+6c_2$$similarly we get$$\begin{bmatrix}-2&1-i\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}=c_1\begin{bmatrix}-2&1-i\end{bmatrix}\begin{bmatrix}-2\\1+i\end{bmatrix}+c_2\begin{bmatrix}-2&1-i\end{bmatrix}\begin{bmatrix}-2\\1-i\end{bmatrix}\\-2\cdot1+(1-i)\cdot0=c_1((-2)^2+(1-i)(1+i))+c_2((-2)^2+(1-i)^2)\\-2=c_1(4+1+1)+c_2(4+1-2i-1)\\-2=6c_1+(4-2i)c_2$$now you can do substitution to solve \(c_1,c_2\)
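Rather than substituting by hand, the final 2-by-2 complex system can be handed to a linear solver. A numpy sketch of exactly the system derived above (with the given eigenvectors \((-2,1+i)^T\) and \((-2,1-i)^T\)), which recovers the \(-\frac{1+i}{4}\) coefficient from 1.41:

```python
import numpy as np

# the two equations derived above:
#   -2 = (4+2i) c1 + 6 c2
#   -2 = 6 c1 + (4-2i) c2
M = np.array([[4 + 2j, 6],
              [6, 4 - 2j]])
b = np.array([-2, -2], dtype=complex)
c1, c2 = np.linalg.solve(M, b)

# sanity check: c1 v1 + c2 v2 should reproduce (1, 0)^T
v1 = np.array([-2, 1 + 1j])
v2 = np.array([-2, 1 - 1j])
recon = c1 * v1 + c2 * v2
```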
Loser66
  • Loser66
I got it. Again, thank you so much.
anonymous
  • anonymous
oops, they aren't orthonormal or even orthogonal here, but they are linearly independent, so we can still solve for unique \(c_1,c_2\)
Loser66
  • Loser66
I did it as usual and got a different solution from the paper. I am checking.
anonymous
  • anonymous
http://www.wolframalpha.com/input/?i=-2%3D%284%2B2i%29x%2B6y%2C+-2%3D6x%2B%284-2i%29y
Loser66
  • Loser66
Wow!! it takes a long time to work with :)
