Empty
  • Empty
Is it possible for A^2 = 0 when A is not equal to 0?
Mathematics
Empty
  • Empty
I'll admit, I'm being ambiguous on purpose here haha.
ganeshie8
  • ganeshie8
\[A=\begin{bmatrix}0&\alpha \\0&0\end{bmatrix}\]
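Squaring it out to check: \[A^2=\begin{bmatrix}0&\alpha \\0&0\end{bmatrix}\begin{bmatrix}0&\alpha \\0&0\end{bmatrix}=\begin{bmatrix}0&0 \\0&0\end{bmatrix}\] so any \(\alpha\neq 0\) gives a nonzero \(A\) with \(A^2=0\).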
Empty
  • Empty
Alright you win, I can't pull a fast one on you for a nilpotent matrix.

Empty
  • Empty
https://en.wikipedia.org/wiki/Dual_number
ganeshie8
  • ganeshie8
interesting, is that the only matrix form with degree 2 ?
Empty
  • Empty
Yeah, including the transpose I think so. It's interesting because dual numbers are like the complex number representation of a matrix, only instead of giving us rotations they give us shearing transformations when we multiply them.
ganeshie8
  • ganeshie8
so they do stuff like transforming a square into rhombus etc is it
Empty
  • Empty
So for contrast, to see it more vividly: \[a+\epsilon b= a\begin{bmatrix} 1 & 0\\0 & 1\end{bmatrix} + b\begin{bmatrix} 0 & 1\\0 & 0\end{bmatrix} = \begin{bmatrix} a & b\\0 & a\end{bmatrix}\] \[a+i b= a\begin{bmatrix} 1 & 0\\0 & 1\end{bmatrix} + b\begin{bmatrix} 0 & 1\\-1 & 0\end{bmatrix} = \begin{bmatrix} a & b\\-b & a\end{bmatrix}\]
Empty
  • Empty
I just found this now so I don't really know much yet, but you can tell by simply multiplying out using the rule: \[(a+b \epsilon)(x+y \epsilon) = ax+(bx+ay)\epsilon\] The \(-by\) part we would normally have is thrown away, which is the difference between the rotation and the shear! Interesting!
ganeshie8
  • ganeshie8
Okay, I have a little experience with transformation matrices, so I'm trying to relate that with these nilpotent matrices. Consider a 2-dimensional point on the plane \(z=1\): \((x,y,1)\) \[\begin{pmatrix}x'\\y'\\1\end{pmatrix}=\begin{bmatrix} a&\color{red}{c}&e\\\color{red}{b}&d&f\\0&0&1\end{bmatrix}\begin{pmatrix}x\\y\\1\end{pmatrix}\] If I remember correctly, those red entries represent the shear effect.
Empty
  • Empty
Ahhh, perhaps what I've written is confusing for the same reason that if I write out: \[(a+b i)(x+y i) = (ax-by)+(bx+ay)i\] this represents not only a rotation but also a stretching. By the same reasoning the equation below is not strictly a shearing, but also an extension at the same time. \[(a+b \epsilon)(x+y \epsilon) = ax+(bx+ay)\epsilon\] I'm essentially using their line here to come to this conclusion from the wikipedia article: http://prntscr.com/7o8mzb Specifically, in my mind the only difference is that the real part (x axis) of the complex numbers is shifted to the left by \(yb\), while in the dual numbers we aren't shifting left by \(yb\) and are instead solely moving upwards (and stretching in this case), but in that article they do a pure shear I believe.
Empty
  • Empty
Specifically I'm interested in grasping the "geometry" section of the article here: https://en.wikipedia.org/wiki/Dual_number#Geometry which should answer our questions about shearing.
Empty
  • Empty
Just playing around right now, I noticed there's an interesting form for powers of dual numbers: \[d=a+b \epsilon\] \[d^n = (a+ b \epsilon)^n = a^n + n a^{n-1}b \epsilon\] which is just the first two terms of the binomial theorem. I'm still trying to figure this out since I don't really understand it yet, but it's interesting and new and might have some uses for solving problems since it's so simple to use.
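For instance, taking \(a=2\), \(b=3\), \(n=3\): \[(2+3\epsilon)^3 = 2^3 + 3\cdot 2^2\cdot 3\,\epsilon = 8+36\epsilon\] and multiplying it out longhand gives the same thing, since every \(\epsilon^2\) and \(\epsilon^3\) term dies.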
UsukiDoll
  • UsukiDoll
If you don't have an A matrix with 0 entries... then A^2 won't be equal to 0
UsukiDoll
  • UsukiDoll
A^2 = A(A) with matrix multiplication
Empty
  • Empty
I'm talking about this matrix specifically: \[\begin{bmatrix} 0 & 1\\0 & 0\end{bmatrix} \begin{bmatrix} 0 & 1\\0 & 0\end{bmatrix} = \begin{bmatrix} 0 & 0\\0 & 0\end{bmatrix}\]
UsukiDoll
  • UsukiDoll
ohhhhhhh
anonymous
  • anonymous
I thought it's not possible. I wasn't thinking about matrices.
ganeshie8
  • ganeshie8
nice, that's because \(\epsilon^k=0\) for \(k\gt 1\)
Empty
  • Empty
In effect it seems as though what I've really calculated is something more impressive looking haha: \[\begin{bmatrix} a & b\\0 & a\end{bmatrix}^n = a^n\begin{bmatrix} 1 & \frac{nb}{a}\\0 & 1\end{bmatrix}\]
anonymous
  • anonymous
What about: \[ \delta = \begin{bmatrix} 0&0 \\ 1&0 \end{bmatrix} \implies \delta^2= \begin{bmatrix} 0&0 \\ 1&0 \end{bmatrix}^2 =\begin{bmatrix} 0&0 \\ 0&0 \end{bmatrix} = 0 \]And we might think: \[ i = \epsilon - \delta \]
anonymous
  • anonymous
Is there any way to compare dual numbers?
anonymous
  • anonymous
Actually it seems \(\delta = \epsilon^T\), so we would say: \[ i = \epsilon - \epsilon^T \]
Empty
  • Empty
Interesting, one application of dual numbers is that they allow you to do automatic differentiation! Which makes sense I think when you look at them. http://prntscr.com/7o8sqt Specifically I'm guessing it's because we have \[(a+ b \epsilon)^n = a^n+na^{n-1} b \epsilon\] and we subtract \(a^n\) and divide by epsilon (well, whatever that means, since these are not invertible). I don't know, that's a good question @wio . One thing I'm noticing is we can possibly make a "dual conjugate" which makes the epsilon term negative, but what about if we transpose the matrix? That almost seems to correspond to another operation altogether too.
Empty
  • Empty
Let's suppose we define \[\epsilon^* = -\epsilon \] and allow \(\epsilon ^T\) to just mean the transposed matrix, nothing more. Then if we take the conjugate transpose of a dual number and add it to itself we get a complex number.
Empty
  • Empty
Example: \[(a+b \epsilon)^* = a-b \epsilon \]\[(a+b \epsilon)^T = a+b \epsilon^T\] Combine these operations to make a "Hermitian" operator:\[(a+ b\epsilon)^H = a-b \epsilon^T\] Now we add: \[(a+b \epsilon) + (a+ b \epsilon)^H = 2a+bi\] Hmm... good idea @wio!!!
Empty
  • Empty
Interesting thing I've just read is by considering the power series of a function it naturally falls out that: \[f(a+b \epsilon) = f(a)+b f'(a) \epsilon\]
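For example with \(f(x)=x^3\): \[f(2+\epsilon) = (2+\epsilon)^3 = 8+12\epsilon = f(2)+f'(2)\,\epsilon\] so the derivative just rides along in the \(\epsilon\) part.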
Empty
  • Empty
I wonder if there's a quaternion analog for dual numbers as well
anonymous
  • anonymous
\[ \epsilon^T\epsilon = \begin{bmatrix} 0&0 \\ 1&0 \end{bmatrix}\begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} = \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix} \]Therefore \[ \epsilon \epsilon^T +\epsilon^T\epsilon = 1 \]And we can say: \[ a\epsilon\epsilon^T+b\epsilon+c\epsilon^T+d\epsilon^T\epsilon = \begin{bmatrix} a&b \\ c&d \end{bmatrix} \]From \(\epsilon\) and its transpose, you can get a full \(2\times2\) matrix.
Empty
  • Empty
Interesting, we can get to all 4 parts by algebra, kind of throwing away the matrix in a sense. Hmmm, this seems more fundamental than complex numbers somehow.
anonymous
  • anonymous
I don't think there is a single means of generalizing dual numbers. A while back I came up with a way to generalize imaginary numbers that didn't lead to quaternions.
Empty
  • Empty
Hmmm, well in this light it almost feels as though we have a companion to the "minus sign" and we are using the "transpose" as sort of another minus sign in a different sense here. Specifically when I said the thing about quaternions I meant the Cayley–Dickson construction https://en.wikipedia.org/wiki/Cayley%E2%80%93Dickson_construction
anonymous
  • anonymous
What happens when you do \(\epsilon^0\)? Are we saying it is \(1\) or indeterminate?
Empty
  • Empty
When I say more fundamental I also mean that we can see that all rotations are really a specific form of combining two shears. Now that we've decoupled the shears into two parts, it seems like maybe we'd be able to rotate along an ellipse or hyperbola if we desire. According to the wikipedia article \(\epsilon^0 = 1\) from calculating it here: http://prntscr.com/7o8yrw
Empty
  • Empty
I am fine with the idea of interpreting it as the identity matrix I think.
Empty
  • Empty
I feel as though previously we were just throwing complex numbers into a matrix because it happened to work, but now it seems crucial, since we are giving each entry of the matrix a specific role to play in our algebra. One shears in one axis, one shears in the other axis, one stretches in one axis and the other stretches in the other axis. At least I believe this is what we can now understand a 2x2 matrix to represent.
Empty
  • Empty
Complex numbers are just the case when the shearing is identical and the stretching on the axes is identical which causes circular motion and uniform stretching that commutes with it. I could be wrong on this but this is my hunch, I don't know, this is the direction I'm sorta being dragged along I guess haha weird stuff.
anonymous
  • anonymous
Makes me wonder what happens if you do something like: \[ a+b\alpha = \begin{bmatrix} a+b& b \\ 0&a \end{bmatrix} \]It looks like \[ (b\alpha)^2= \begin{bmatrix} b&b \\ 0&0 \end{bmatrix}^2= \begin{bmatrix} b^2&b ^2\\ 0&0 \end{bmatrix}= b^2\alpha \]
anonymous
  • anonymous
But might not be closed under multiplication:
Empty
  • Empty
Interesting yeah. \[(aI+b \alpha)^2 = a^2I+2ab \alpha + b^2 \alpha^2\] We can easily and quickly show \[\alpha^2=\alpha\] so \[(aI+b \alpha)^2 = a^2I+(2ab + b^2) \alpha\]
anonymous
  • anonymous
I had just computed: \[ (a+b\alpha)(c+d\alpha) = ac+(ad+bc+bd)\alpha \]
Empty
  • Empty
Another thing that I see them saying is that a dual number with zero real part has no inverse, but you can still do division otherwise. I think this is essentially the same idea as Clifford algebra division of vectors, but it doesn't exactly translate to matrices as we know them, so I'd like to get a better foundation on this operation: \[\frac{a+b \epsilon}{x+ y \epsilon} = \frac{a+b \epsilon}{x+ y \epsilon} \frac{x- y \epsilon}{x- y \epsilon} = \frac{a}{x} + \frac{ bx-ay}{x^2} \epsilon\]
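For instance \[\frac{3+4\epsilon}{2+\epsilon} = \frac{3}{2} + \frac{4\cdot 2-3\cdot 1}{2^2}\,\epsilon = \frac{3}{2}+\frac{5}{4}\epsilon\] and multiplying back, \(\left(\frac{3}{2}+\frac{5}{4}\epsilon\right)(2+\epsilon) = 3+4\epsilon\), so it checks out.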
anonymous
  • anonymous
\[ (a+b\alpha)^n = \sum_{k=0}^n{n\choose k}a^{n-k}(b\alpha)^k = a^n\alpha^0+\sum_{k=1}^n{n\choose k}a^{n-k}b^k\alpha \]Not sure what \(\alpha^0\) would look like.
Empty
  • Empty
I think in general we can say that for matrices \(A^0 = I\)
Empty
  • Empty
As long as A isn't the zero matrix. The reasoning for why that would be a suitable definition is just by how we define the exponent rules: \(A^{n+m} = A^nA^m\)
anonymous
  • anonymous
Actually, that does make sense here: \[ (a+b\alpha)^n = a^n + \left(\sum_{k=1}^n{n\choose k}a^{n-k}b^k\right)\alpha \]Seems similar to what you got for \(n=2\).
anonymous
  • anonymous
How would you define modulus or magnitude for dual numbers?
anonymous
  • anonymous
I guess it's really asking what your inner product would be.
anonymous
  • anonymous
I think that imaginary numbers would use the determinant for the inner product.
Empty
  • Empty
To find out what an inverse dual matrix representation would be, I calculated it out by multiplying a matrix with values and plugged it in to get: \[\begin{bmatrix} \frac{1}{x} & -\frac{y}{x^2}\\0 & \frac{1}{x}\end{bmatrix}\] is the inverse of \[\begin{bmatrix} x & y\\0 & x\end{bmatrix}\] which supposedly doesn't exist, but now I'm curious if multiplying these gives us 1. Inner product is a good question, I don't know. Since we can turn a dual into a complex number we might be able to just hijack that
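Multiplying them out as a check: \[\begin{bmatrix} \frac{1}{x} & -\frac{y}{x^2}\\0 & \frac{1}{x}\end{bmatrix}\begin{bmatrix} x & y\\0 & x\end{bmatrix} = \begin{bmatrix} 1 & \frac{y}{x}-\frac{y}{x}\\0 & 1\end{bmatrix} = I\] so it really is the inverse whenever \(x\neq 0\).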
anonymous
  • anonymous
But for dual numbers, the determinant seems to give \(\langle a+b\epsilon, a+b\epsilon\rangle = a^2\).
anonymous
  • anonymous
For the alpha concept, it would seem the determinant inner product would give \[ \langle a+b\alpha, a+b\alpha\rangle = a^2+ab \]
Empty
  • Empty
I guess I don't know if I even understand these things geometrically yet in the "dual plane". While I can visualize complex number multiplication I can't quite visualize dual multiplication yet, so I guess that's what's to look into now for me.
anonymous
  • anonymous
Hmmm \[(a+b\alpha)^{-1}=\begin{bmatrix} a+b&b\\0&a \end{bmatrix}^{-1} =\frac{1}{a^2+ab}\begin{bmatrix} a&-b\\0&a+b \end{bmatrix} =\frac{(a+b)-b\alpha}{a^2+ab} \]
Empty
  • Empty
So while \(e^{i t} = \cos t + i \sin t\) it appears that we have \(e^{\epsilon t} = 1+ \epsilon t\) which is supposedly some kind of parabolic rotation. I'm not sure, I'm just about to download and skim this paper: http://arxiv.org/abs/0707.4024
Empty
  • Empty
See @wio I said earlier I was studying probability and now I'm onto this already haha
Empty
  • Empty
Interesting, another alternative is that we can use a different matrix other than i or epsilon to create a hyperbola.
anonymous
  • anonymous
\[ e^{\epsilon t} = \sum_{n=0}^\infty \frac{(\epsilon t)^n}{n!} = 1 + \epsilon t + \sum_{n=2}^\infty \frac{(\epsilon t)^n}{n!} \]Where \[ \sum_{n=2}^\infty \frac{(\epsilon t)^n}{n!} = \sum_{k=0}^\infty \frac{(\epsilon t)^{k+2}}{(k+2)!} = \sum_{k=0}^\infty \frac{0\cdot (\epsilon t)^k}{(k+2)!} = 0 \]
Empty
  • Empty
All of the shapes traced out by the "rotations" are defined by this single function: \[x^2-i^2y^2=1\] So if i is the traditional imaginary number we have \[x^2+y^2=1\] if i is our new dual epsilon we have: \[x^2=1\] and if we have this other alternative hyperbolic i we have \[x^2-y^2=1\] I wonder what matrix they're using for this hyperbolic i that squares to 1 but isn't 1.
Empty
  • Empty
Probably \[\begin{bmatrix} 0 & 1\\1 & 0\end{bmatrix}\]
Empty
  • Empty
This is illuminating: http://prntscr.com/7o9ctw
anonymous
  • anonymous
One way to generalize dual numbers: \[ \epsilon_3 = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 0&0&0 \end{bmatrix}, (\epsilon_3)^2 = \begin{bmatrix} 0&0&1 \\ 0&0&0 \\ 0&0&0 \end{bmatrix} \]We have: \[ (\epsilon_3)(\epsilon_3)= (\epsilon_3)^2 \\ (\epsilon_3)^3 = (\epsilon_3)^2(\epsilon_3) = 0 \]This is similar to my previous generalization of imaginary numbers, where I tried to start with an identity: \[ (i_3)^3 = -1 \]It turned out that \(i_3\) was just a 6th of a rotation around the unit circle, if you projected onto the imaginary plane.
anonymous
  • anonymous
complex plane
anonymous
  • anonymous
However, the 'quaternion' approach to it might be more along the lines of: \[ \epsilon = \begin{bmatrix} 0&0&1&1 \\ 0&0&1&1 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix} \]Or \[ \epsilon = \begin{bmatrix} 0&0&0&1 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix} \]
Empty
  • Empty
The amazing parabolic sine and cosine functions lol, and they satisfy the hilarious \[e^{\epsilon t} = \operatorname{cosp}(t) + \epsilon \operatorname{sinp}(t) = 1+\epsilon t\] \[\operatorname{sinp}(x) = x=\frac{e^{\epsilon x}-e^{-\epsilon x}}{2 \epsilon} \]\[\operatorname{cosp}(x)=1=\frac{e^{\epsilon x}+e^{-\epsilon x}}{2 }\] \[\frac{d}{dx} \operatorname{sinp}(x) = \operatorname{cosp}(x)\]\[\frac{d}{dx} \operatorname{cosp}(x) = 0\] So at least it feels natural in a stupid way.
Empty
  • Empty
Ah interesting I see what you're saying. I think I am sort of seeing how pauli spin matrices might just be how we write quaternions using matrices and complex numbers or something, I'm not sure.
Empty
  • Empty
Out of curiosity can we extend Fermat's Last theorem to matrices? \[A^n+B^n=C^n\]
anonymous
  • anonymous
\[ e^{\alpha t} = \sum_{n=0}^\infty \frac{(\alpha t)^n}{n!} = \sum_{n=0}^\infty \frac{\alpha^n t^n}{n!} = 1+ \sum_{n=1}^\infty \frac{\alpha t^n}{n!} \]We can start that \(n\) back at zero by adding and subtracting \(\alpha\):\[ 1+ \sum_{n=1}^\infty \frac{\alpha t^n}{n!} + \alpha t^0 - \alpha t^0 = 1 - \alpha + \alpha\sum_{n=0}^\infty \frac{t^n}{n!} = 1-\alpha + \alpha e^{t} \]In summary:\[ e^{\alpha t} =1-\alpha + \alpha e^{t} \]
Empty
  • Empty
This is very illuminating to me since the difference between these three matrices is simply the bottom left value being -1, 0 or 1. http://prntscr.com/7o9h0g It almost seems like we should consider all of these simultaneously with a value in the bottom left that we can change spaces with.
anonymous
  • anonymous
Which could also be said as: \[ e^{\alpha t} = 1 + (e^t-1)\alpha \]So we could say: \[ \cos\alpha(t) = 1 \\ \sin\alpha(t) = e^t-1 \\ \]Very weird result.
anonymous
  • anonymous
Well, not completely sure about that last one but
Empty
  • Empty
\[\begin{bmatrix} 0 & 1\\s & 0\end{bmatrix}^2 = s\begin{bmatrix} 1& 0\\0 & 1\end{bmatrix}\] So we have this matrix where the bottom left entry is the value of the square: 0, 1, or -1. How are you coming to that result wio?
anonymous
  • anonymous
I am just using the weird identity: \[ e^{\alpha t} = \cos_\alpha(t) + \alpha \sin_\alpha(t) = 1+(e^t-1)\alpha \]
anonymous
  • anonymous
Though I'm not sure what the underlying assumptions of that identity are.
anonymous
  • anonymous
It's more of a definition though, so...
Empty
  • Empty
well as long as you define sine and cosine by this structure: \[\operatorname{sinp}(x) = x=\frac{e^{\epsilon x}-e^{-\epsilon x}}{2 \epsilon} \]\[\operatorname{cosp}(x)=1=\frac{e^{\epsilon x}+e^{-\epsilon x}}{2 }\] I think you will be in a good position.
Empty
  • Empty
I'll try to play around with the alpha values a bit to see if I get to it as well
anonymous
  • anonymous
That structure? Why?
Empty
  • Empty
It produces an even and an odd function that add to the exponential function because: \[f(x) = \frac{f(x)+f(-x)}{2}+\frac{f(x)-f(-x)}{2}=g(x)+h(x)\] and we have the identities that g(x)=g(-x) and h(x)=-h(-x) like we expect from a sine or cosine.
Empty
  • Empty
Also I'm noticing we might be able to generalize your matrix a bit: \[\begin{bmatrix} s & 1\\0 & 0\end{bmatrix}^2=s\begin{bmatrix} s & 1\\0 & 0\end{bmatrix}\] or in other words \[\alpha_s^2=s \alpha_s\] So you have some control over what the square is; you can make s=0, 1, or -1 for different interesting but also easier-to-track properties. Something to play with.
anonymous
  • anonymous
Let's see \[ e^{-\alpha t} = 1 + (e^{-t}-1)\alpha \]So \[ e^{\alpha t}+e^{-\alpha t} = 2 + \alpha\bigg(e^t-1 + e^{-t}-1\bigg) = 2+\alpha (e^t+e^{-t}-2) \]So we have: \[ \frac{e^{\alpha t}+e^{-\alpha t} }{2} = 1+\alpha\big(\cosh(t)-1\big) \]For the other one: \[ e^{\alpha t}- e^{-\alpha t} =\alpha\bigg(e^t - e^{-t}\bigg) \]So \[ \frac{e^{\alpha t}- e^{-\alpha t}}{2\alpha} = \sinh(t) \]
anonymous
  • anonymous
Hmmm, the only issue though is that I know division by \(\alpha\) is problematic, just as it is for \(\epsilon\).
Empty
  • Empty
It is weird, I'll agree with you, hmm. \[\frac{a+b \alpha}{x+ y \alpha} = \frac{a+b \alpha}{x+ y \alpha}\, \frac{(x+y)- y \alpha}{(x+y)- y \alpha} = \frac{a(x+y)}{x^2+xy} + \alpha\, \frac{bx-ay}{x^2+xy}\]
anonymous
  • anonymous
For \(a+b\epsilon\) we have division problems at \(a=0\) (regardless of \(b\)), and for \(a+b\alpha\) we have division problems at \(a=-b\) (and again at \(a=0\)). You could say that for complex we have division problems at \(a^2+b^2=0\) for obvious reasons.
Empty
  • Empty
Yeah, I am not sure how I feel about "dividing by a matrix" but somehow it seems to just sorta work even though it definitely has problems going on.
anonymous
  • anonymous
I think we have to say that at these places, the magnitude ought to be defined as \(0\).
anonymous
  • anonymous
Everything we have done so far can be generalized as "Using a \(2\times 2\) matrix to define multiplication for a \(2\) component vector."
Empty
  • Empty
Hahaha yeah basically.
Empty
  • Empty
But I think we have some interesting relationship that the matrices satisfy a relation like this: \[A^2=sA\] or \[A^2=sI\] So there's some sort of structure here. Hmmm. It's sorta all nonsense in a way though unless we can get something out of it I think.
anonymous
  • anonymous
We could even do a Hadamard matrix for this: \[ a+bH = \begin{bmatrix} a+b & b \\ b & a-b \end{bmatrix} \]It appears that \(H^2 = 2I\), so: \[ (a+bH )(c+dH ) = (ac+2bd)+(ad+bc)H \]
anonymous
  • anonymous
That one quantum computing tutorial said something about Unitary matrices or something being special.
Empty
  • Empty
Oh interesting. I am sort of getting a new idea from this. Yeah a unitary matrix is basically just the complex version of an orthogonal matrix. An orthogonal matrix is the inverse of its transpose while a unitary matrix is the inverse of its hermitian, which is the conjugate transpose. The DFT uses this for instance.
Empty
  • Empty
\[M=\begin{bmatrix} a & b\\c & d\end{bmatrix}=a\begin{bmatrix} 1 & 0\\0 & 0\end{bmatrix}+b\begin{bmatrix} 0 & 1\\0 & 0\end{bmatrix}+c\begin{bmatrix} 0 & 0\\1 & 0\end{bmatrix}+d\begin{bmatrix} 0 & 0\\0 & 1\end{bmatrix}\] \[M=aA+bB+cC+dD\] So now if I square this I'm going to get a more in depth understanding of how each part influences the square rather than looking at our specific cases like we've been doing.
Empty
  • Empty
\[M= \begin{bmatrix} A & B & C & D\end{bmatrix}\begin{bmatrix} a\\ b\\ c\\ d\end{bmatrix}\] So now I've got a matrix of matrices.... Maybe this isn't the right path to take on this haha. Nevermind.
anonymous
  • anonymous
\[ H^n = \begin{cases} 2^{k}I & n=2k\\ 2^{k}H & n=2k+1 \end{cases} \]I think the power of \(n\) is important, since we can use it in the MacLaurin series of \(e^x\).
Empty
  • Empty
Yeah I think you're right about that. Splitting it up like you've done is also going to make it nice for thinking about the sine and cosine functions
anonymous
  • anonymous
\[ e^{tH} = \sum_{n=0}^\infty \frac{(tH)^n}{n!} = \sum_{k=0}^\infty \frac{2^k t^{2k}}{(2k)!} + H\sum_{k=0}^\infty \frac{2^k t^{2k+1}}{(2k+1)!} \]Hmmm, not quite trigonometric, since we don't have the \(-1\) coefficient.
anonymous
  • anonymous
Oh wow. \[ e^{tH} = \cosh(\sqrt{2}\, t) + \frac{H}{\sqrt{2}}\sinh(\sqrt{2}\, t) \]
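A minimal numpy/scipy sketch to check that identity numerically, assuming \(H\) is the \(2\times 2\) Hadamard matrix with rows \((1,1)\) and \((1,-1)\):

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 1.0],
              [1.0, -1.0]])  # Hadamard matrix, H^2 = 2I

t = 0.7
lhs = expm(t * H)  # matrix exponential e^{tH}
rhs = np.cosh(np.sqrt(2) * t) * np.eye(2) \
      + (np.sinh(np.sqrt(2) * t) / np.sqrt(2)) * H
print(np.allclose(lhs, rhs))  # True
```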
anonymous
  • anonymous
That is funky
anonymous
  • anonymous
I mean the fact that \(\cos_H (t)\) is basically a \(\cosh\). Seems really coincidental.
Empty
  • Empty
Haha weird, the ghost of Hadamard knew his name would coincide with Hyperbola or something XD
anonymous
  • anonymous
Okay, so here is my opinion. If you have some algebra: \[ a+bA \]where \(A\) is some \(2\times2\) matrix, then what if we define it so that: \[ a+bA = re^{A\theta} = r\bigg(\cos_A(\theta) + A\sin_A(\theta)\bigg) \]Somehow this makes sense to me because \(\theta\) is an angle and \(r\) is a magnitude. I suppose there are still some issues though, because multiplication of \(a+bA\) won't always give you some sort of rotational result. Maybe this pattern is only worthwhile for certain cases like complex numbers.
Empty
  • Empty
No, I think you're right, I was thinking something along the same lines: for every matrix we can essentially define a pair of sine and cosine functions. I am just wondering if we can perhaps extend this to nxn matrices, and instead of creating even and odd functions (the "even and odd" is the 2 of 2x2 matrices) we can make other "mod 3" functions for our 3x3 matrices, for instance.
UsukiDoll
  • UsukiDoll
You guys should publish a novel on this lol xD
Empty
  • Empty
Haha we just have to publish, writing the novel we already did it lol
anonymous
  • anonymous
Anyway, for rules mentioned before: \[ A^2=cA\implies A^n = c^{n}A \\ A^2 = cI \implies A^n = \begin{cases} c^kI &n=2k \\ c^kA &n=2k+1 \end{cases} \]
Empty
  • Empty
slight correction:\[ A^2=cA\implies A^n = c^{n-1}A \\ \]
anonymous
  • anonymous
Whoops, that looks right.
anonymous
  • anonymous
For that first case \[ e^{At} = \sum_{n=0}^\infty \frac{(At)^n}{n!} = I + A\sum_{n=1}^\infty \frac{c^{n-1}t^n}{n!} = I + \frac{A}{c}\left(e^{ct}-1\right) \]
anonymous
  • anonymous
For the second case, I think I'm getting: \[ e^{At} = \cosh(\sqrt{c}\, t) + \frac{A}{\sqrt{c}}\sinh(\sqrt{c}\, t) \](for \(c>0\))
anonymous
  • anonymous
Okay, the first case I think I was a bit too hand wavy though... let me double check.
Empty
  • Empty
Now that I'm thinking about these generalizations I thought why not make the matrix square to a different matrix altogether? But then I thought, what about if we use group theory? Let's instead just use group theory which already has multiplication defined and allows us to do the same exact things, but we're no longer confined to what a matrix multiplication gives us.
anonymous
  • anonymous
Can you find matrix such that: \[ A\neq I \land A^3=I \]I'm not sure, but my gut feeling is we need at least a \(3\times3\) to actually get that.
UsukiDoll
  • UsukiDoll
so we have to find... a matrix that makes this true, so we can't have matrix A equal to the identity and yet have A^3 equal to the identity matrix
UsukiDoll
  • UsukiDoll
so A can't be the identity matrix, but using matrix multiplication A(A)(A), that result should come out to the identity matrix
Empty
  • Empty
Yeah good point, just work it out with variables and solve the system of equations... but who wants to do that? haha
anonymous
  • anonymous
We're talking about a very ugly system of equations.
anonymous
  • anonymous
http://www.wolframalpha.com/input/?i=%7B%7Ba%2Cb%7D%2C%7Bc%2Cd%7D%7D%5E3+%3D+%7B%7B1%2C0%7D%2C%7B0%2C1%7D%7D
Empty
  • Empty
True, but we might be able to simplify it. I think tensor notation makes each equation simpler to look at in a sense. \[A^i_jA^j_kA^k_l = \delta ^i_l\] Or in terms of individual components (in linear algebra terms) where n is the row and m is the column of the entry: \[a_{n1}a_{11}a_{1m}+a_{n1}a_{12}a_{2m}+a_{n2}a_{21}a_{1m}+a_{n2}a_{22}a_{2m}=\delta_{nm}\] So this represents our 4 equations. Plug in n and m equal to 1 or 2 to get them.
ganeshie8
  • ganeshie8
\[A=\begin{bmatrix}1&0 \\0&e^{i2 \pi/3}\end{bmatrix}\]
anonymous
  • anonymous
I can easily find a solution for \(3\times 3\) though. \[ A = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} \]Then \[ A^3=I \]And I think: \[ A_c = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ c&0&0 \end{bmatrix} \implies (A_c)^3 = cI \]
anonymous
  • anonymous
@ganeshie8 In our context, the matrix you have provided technically would be something along the lines of a \(4\times 4\), once expanded to real numbers.
ganeshie8
  • ganeshie8
Ohk I see, I'm not able to follow the conversation completely as it is way too above my head lol
ganeshie8
  • ganeshie8
does this work http://www.wolframalpha.com/input/?i=%7B%7B1%2C2%7D%2C%7B-3%2F2%2C-2%7D%7D%5E3+
anonymous
  • anonymous
Yeah, that could work.
Empty
  • Empty
Ahhh how did you think up either of these matrices, I'm impressed @ganeshie8 !
ganeshie8
  • ganeshie8
I don't think, I let wolfram think lol
anonymous
  • anonymous
That matrix would seem to work like: \[ A^n = \begin{cases} I &n=3k \\ A &n=3k+1 \\ A^2 &n=3k+2\end{cases} \]
anonymous
  • anonymous
Maybe ganeshie can help identify the MacLaurin series we get?
anonymous
  • anonymous
\[ e^{At} = \sum_{k=0}^\infty \frac{t^{3k}}{(3k)!}+A\sum_{k=0}^\infty \frac{t^{3k+1}}{(3k+1)!}+A^2\sum_{k=0}^\infty \frac{t^{3k+2}}{(3k+2)!} \]
anonymous
  • anonymous
http://www.wolframalpha.com/input/?i=sum+t%5E%7B3n%7D%2F%283n%29%21++for+n+%3D+0+to+infty For the first one. This looks messy.
Empty
  • Empty
\[e^{At}=f(t)+A g(t)+ A^2 h(t)\] where we have \(\omega = e^{i 2 \pi /3}\) and analogously we have our ENTIRELY REAL FUNCTIONS satisfying these "even odd" mod 3 symmetries: \[f(t)=f( \omega t) = f( \omega ^2 t) \\ g(\omega t)=\omega g(t),\quad g(\omega^2 t)= \omega^2 g(t) \\ h(\omega t) = \omega^2 h(t),\quad h(\omega^2 t) = \omega h(t)\] And additionally we can represent: \[f(t) = \frac{e^{t}+e^{\omega t} + e^{\omega^2 t}}{3}\] and we have two more corresponding ones, where besides the 3 in the denominator we throw powers of \(\omega\) into the numerator to pick out the \(g\) and \(h\) parts, just like for the complex sine we have to divide by an \(i\).
Empty
  • Empty
Maybe I'm off a bit, since I believe g'(t)=f(t), h'(t)=g(t), and f'(t)=h(t), but I think what I've said is mostly correct. I am sorta just saying these from memory, I didn't actually calculate any of this right now, but I've done this particular thing before.
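A quick numerical sketch of the decomposition, using the 3x3 permutation matrix from above and truncated series (a sanity check under those assumptions, not a proof):

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])  # satisfies A^3 = I

t = 0.9
# the three "mod 3" pieces of the exponential series
f = sum(t**(3*k)     / factorial(3*k)     for k in range(15))
g = sum(t**(3*k + 1) / factorial(3*k + 1) for k in range(15))
h = sum(t**(3*k + 2) / factorial(3*k + 2) for k in range(15))

# e^{At} should decompose as f(t) I + g(t) A + h(t) A^2
print(np.allclose(expm(t * A), f * np.eye(3) + g * A + h * (A @ A)))  # True
```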
anonymous
  • anonymous
I think the link I gave will give you \(f(t)\), and it's messy.
anonymous
  • anonymous
I'm not sure though if there is a unique matrix solution to the equations anymore.
Empty
  • Empty
Now I feel like we should just throw away matrices and use group theory since the entries of the matrix are starting to hinder us when we really aren't using them for anything.
anonymous
  • anonymous
But without matrices, it's hard to check our results.
Empty
  • Empty
What's to check?
anonymous
  • anonymous
Matrices allow easy definitions of inverses.
Empty
  • Empty
Groups allow us to arbitrarily define inverses and we can ensure that it's closed simultaneously if we want
anonymous
  • anonymous
If \(A^3 = 1\), then how would you even go about \(A^{-1}\)?
Empty
  • Empty
It's its own inverse in that case. [drawing]
anonymous
  • anonymous
You're saying \(A^2 = A^{-1}\)?
Empty
  • Empty
No sorry, here, from this Cayley table [drawing] I should have just drawn out the whole table. In order to check that every element has a unique inverse we simply check to see that there is only one identity element per column and per row.
anonymous
  • anonymous
I think there is a good chance that you can find a matrix which represents any sort of variable you come up with.
anonymous
  • anonymous
Hmmm
Empty
  • Empty
I don't doubt it, but I don't think we particularly gain anything from playing with elements in the matrices except longer calculations. Plus it looks like we have multiple different types of matrices that end up obeying the same rules, so it doesn't feel very unique and this shifts the focus a bit I guess. I don't know, just something to think about.
Empty
  • Empty
We now have the ability to define noncommutative structures and other larger, weirder things, I think, while still being manageable, since we're looking it up in a chart rather than carrying out multiplication on nxn matrices, which becomes difficult for n=3 and higher. Suppose we want \(A^{8}=I\); I think we would be in trouble.
anonymous
  • anonymous
Hmm, to be really generalized for the 2 component case, we can define: \[ A^2 = aI + bA \]For the \(n\) component case , we can define: \[ \bigg( \sum_{k=0}^{n-1} a_kA^k \bigg)\bigg( \sum_{k=0}^{n-1} b_kA^k \bigg)=\bigg( \sum_{k=0}^{n-1} c_kA^k \bigg) \]For coefficients as vectors \(\mathbf a\) and \(\mathbf b\) we say \(\mathbf c =f(\mathbf a, \mathbf b)\).
Empty
  • Empty
Ultimately it seems like we're really just using these matrices to define how we multiply "complexish numbers" in our vector space and plugging them into power series to see if we get a cool set of "rotation" functions.
anonymous
  • anonymous
I suppose there might be some kinds of \(f\) which can't be achieved through matrix multiplication.
Empty
  • Empty
I'm sorta not sure how to raise A to the third power from this definition: \(A^2 = aI+bA \implies A^3 = ?\)
Empty
  • Empty
The convolution you've written there is interesting to me though, I would like to understand it more. I don't quite know what you're saying, but I think I like where it's going.
anonymous
  • anonymous
\[ A^3 = aA + b(aI+bA) = ab + (a+b^2)A \]
anonymous
  • anonymous
I'm trying to generalize, but hmmm
Empty
  • Empty
Interesting your definition of A^2 means we can write: \(A= \frac{1}{b}A^2 - \frac{a}{b}I\)
anonymous
  • anonymous
Okay, the most general I would ever want to get is to say we're looking for multiplication functions, which can be described as \(f:\mathbb R^d \times \mathbb R^d \to \mathbb R^d\). Obviously that will keep things closed.
anonymous
  • anonymous
But for \(d=2\), and working in a vector space, the only thing we really need to define is \(A^2\) because algebra will solve the rest.
Empty
  • Empty
Sounds good to me. I feel like from this perspective we can define multiplication between any of our "complexish" numbers to be anything from rotations, to shear, to whatever and then we can just go ahead and define the group theory elements to match what we want.
anonymous
  • anonymous
\(\color{blue}{\text{Originally Posted by}}\) @wio \[ A^3 = aA + b(aI+bA) = ab + (a+b^2)A \] \(\color{blue}{\text{End of Quote}}\) For complex numbers, we would say \((a,b) = (-1,0)\), and so: \[ i^3 = (-1)(0)+((-1)+(0)^2)i = -i \]
Empty
  • Empty
That way we can simply calculate stuff with algebra knowing it obeys our rules.
anonymous
  • anonymous
For dual numbers \((a,b) = (0,0)\), and so: \[ \epsilon^3 = 0 \]For \(\alpha\), we'd say \((a,b) = (0,1)\), and we get: \[ \alpha^3 = (0)(1) + ((0)+(1)^2)\alpha = \alpha \]
anonymous
  • anonymous
Hmmm, interesting.
anonymous
  • anonymous
If we consider only cases where \((a,b) \in \{-1,0,1\}^2\), that gives us about \(9\) combinations we can mess with.
anonymous
  • anonymous
Hmm, interesting.
Empty
  • Empty
Hmmm I'm trying to find some interesting multiplication definition.
anonymous
  • anonymous
Using: \[ A^2 = a+bA \]as a generalization, then: \[ (c+dA)(f+gA) = (cf+adg) + (cg+df+bdg)A \]Hmmm, it looks overly general, but I've been thinking something...
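As a sanity check, plugging in \((a,b)=(-1,0)\) (the complex case) gives \[(c+di)(f+gi) = (cf-dg)+(cg+df)i\] which is ordinary complex multiplication, and \((a,b)=(0,0)\) gives back the dual number rule from before.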
anonymous
  • anonymous
If we want to do \[ \frac{c+dA}{f+gA} \]All we need is to find a way to get rid of that \(A\) in the denominator.
anonymous
  • anonymous
I think the only way this can be fun is if we have something we want to try to find.
anonymous
  • anonymous
We have found \(\sin_i\) and \(\cos_i\) for the different conic sections, so that is off the table.
Empty
  • Empty
Ellipse or a torus is what I've been searching for, but I think I am too tired to think up something that'll do that.
anonymous
  • anonymous
But those are 3d objects, not conic sections
anonymous
  • anonymous
And ellipse corresponds to original trig functions
anonymous
  • anonymous
Well, hmmm, I suppose circle is a type of ellipse so...
Empty
  • Empty
I think an ellipse might be nice, to be able to have operations that let us rotate the ellipse itself like this: [drawing] Also what's stopping us from doing more than one? We can have something like quaternions with multiple parts interacting.
Empty
  • Empty
One idea is that whatever manages the ellipses will collapse down into complex numbers
Empty
  • Empty
But ellipses can stretch and rotate, something circles can't do. I mean you can rotate a circle, but it's meaningless.
Empty
  • Empty
I don't know, hmm.
anonymous
  • anonymous
Okay, now that I think about it... \(i\) corresponds to the circle, a class of an ellipse. \(\epsilon\) corresponds to a specific class of parabola. And then \(H\) corresponded to a class of hyperbola.
anonymous
  • anonymous
And \(H\) would be like \((2,0)\), I think. That is: \[ H^2 = (2) + (0)H = 2 \]
Empty
  • Empty
Yeah, actually since these are all sorta just separate sections of a conic maybe we can do a linear combination of these to get an ellipse?
anonymous
  • anonymous
So clearly we are getting conic sections for \((x,0)\) configurations.
Empty
  • Empty
I never really even bothered to try to calculate something like: \[(x+yi+z \epsilon)^2\]
anonymous
  • anonymous
Conic sections have a certain property, eccentricity or something.
Empty
  • Empty
Yeah I don't know either haha, I just know that elliptic integrals and finding arclength are hard; there's a possibility that what we're doing gets us something interesting to play with there.
anonymous
  • anonymous
https://en.wikipedia.org/wiki/Eccentricity_(mathematics)
anonymous
  • anonymous
For a circle, we use \((-1,0)\). For a hyperbola we use \((2,0)\). For a parabola we use \((0,0)\). Maybe it is just \((x-1,0)\) where \(x\) is the eccentricity. Just a hypothesis.
Empty
  • Empty
That sounds like a great idea to me
anonymous
  • anonymous
Eccentricity of an ellipse is \(\sqrt{1-b^2/a^2}\).
Empty
  • Empty
I'm too tired to really learn anything new right now but it'll give me something to think about later
anonymous
  • anonymous
Yeah, maybe another day we can experiment.
anonymous
  • anonymous
Come up with some way to test hypothesis
anonymous
  • anonymous
And perhaps figure out how \((0,y)\) changes things up.
anonymous
  • anonymous
It's clear that \(\alpha\) gave us some clear insight into how that might work.
Empty
  • Empty
Yeah definitely, I don't think we should throw away matrices they helped us quite a bit in concrete ways of determining what products _should_ be. For instance right now I'm just calculating \(\epsilon * i \) and \(i*\epsilon \) as something to explore with my brain later lol.
Empty
  • Empty
In some sense we're sorta thinking in terms of instead of functions of a complex variable, more like functions of a 2x2 matrix variable.
anonymous
  • anonymous
Looking at the eccentricity of a hyperbola, I think my hypothesis was wrong. Hadamard seemed to give us a standard hyperbola, so its eccentricity should be: \[ \sqrt{1+\frac {1^2}{1^2}} = \sqrt{2} \]And \(\sqrt{2} + 1 \neq 2\).
anonymous
  • anonymous
I'm going to bed, but now I have a goal.
Empty
  • Empty
In my mind I'm imagining the conic section. If you cut it one way you get a circle and if you cut it the other way you get a hyperbola. I'd like to rotate that plane that we're cutting through to somewhere inbetween there. Actually... I think I know how to do it, because earlier we had: \[\begin{bmatrix} 0 & 1\\s & 0\end{bmatrix}\] where s=1 representing a hyperbola, s=0 representing a parabola, and s=-1 representing a circle. And I know that this is like the same spacing, so if -1
anonymous
  • anonymous
It's clear to me that the matrix you have presented corresponds to \((s,0)\). Hmm
anonymous
  • anonymous
Is there something to correspond to \((0, t)\), now I wonder...
Empty
  • Empty
Haha well perhaps we have chosen the transpose arbitrarily, let's just choose the lower triangular version of this maybe?
anonymous
  • anonymous
I guess it would be going back to: \[ A^2 = tA \]We know for \(t=1\) we can use: \[ \begin{bmatrix}1&1\\0&0\end{bmatrix} \]
anonymous
  • anonymous
Since that is what was used for \(\alpha\).
Empty
  • Empty
Oh right
Empty
  • Empty
Woah absolutely fascinating. Instead of calling them real and imaginary parts they call it bosonic and fermionic directions for real and \(\epsilon\) directions. This is used for particle physics. http://prntscr.com/7ob3pm
anonymous
  • anonymous
I think I found it:\[ A = \begin{bmatrix} 1&1 \\ t-1&t-1 \end{bmatrix} \implies A^2 = tA \]
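Quick check: \[A^2 = \begin{bmatrix} 1&1 \\ t-1&t-1 \end{bmatrix}^2 = \begin{bmatrix} t&t \\ t(t-1)&t(t-1) \end{bmatrix} = tA \] so it works for every \(t\).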
UsukiDoll
  • UsukiDoll
This is probably the longest thread I've ever been on xD!
anonymous
  • anonymous
I wonder if there is a matrix such that \[ A^2 = sI + tA \]If we can find such a matrix, then matrices would be just fine.
anonymous
  • anonymous
It's kinda funny, because ultimately what we have here is: \[ A^2 -tA-sI = 0 \]which is like a quadratic equation for matrices.
anonymous
  • anonymous
This is an ugly solution, but: \[ A = \bigg(\frac{t+\sqrt{t^2+4s}}{2} \bigg) I \implies A^2 = sI +tA \]But since \(A=cI\), it sort of doesn't count. I wonder if there is another solution.
anonymous
  • anonymous
the reason it lets you do automatic differentiation is because dual numbers capture a property of nilpotency used by Fermat, Newton, etc., namely that \(\varepsilon^2=0\) (recall Fermat's https://en.wikipedia.org/wiki/Adequality ); this is used in functional languages to explicitly extend real computational functions to dual numbers and 'compute' their derivatives in a way unlike symbolic or numerical differentiation. It's mainly used in machine learning contexts to come up with derivatives of complicated functions (like a feedforward neural net) for error propagation in learning algorithms, fyi
anonymous
  • anonymous
$$f(a+b\varepsilon)=f(a)+b\varepsilon f'(a)+\frac12 b^2\varepsilon^2 f''(a)+\dots=f(a)+b\varepsilon f'(a)$$this is kinda the same idea behind the trick used in numerical differentiators of evaluating a function at \(x+ih\) for smaller error terms
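Here's a minimal sketch of that idea in Python, with a hypothetical `Dual` class carrying the pair \((a,b)\) for \(a+b\varepsilon\); only the handful of operations used below are implemented, just to show the mechanism:

```python
class Dual:
    """Dual number a + b*eps, where eps**2 == 0; b carries the derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b eps)(x + y eps) = ax + (ay + bx) eps; the eps^2 term vanishes
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__


def f(x):
    return 3 * x * x * x + 2 * x   # f(x) = 3x^3 + 2x


d = f(Dual(2.0, 1.0))              # evaluate at 2 + eps
print(d.a, d.b)                    # 28.0 38.0, i.e. f(2) and f'(2)
```

The \(\varepsilon\) coefficient comes out as \(f'(2)=38\) with no symbolic algebra and no finite differences, which is exactly the automatic differentiation trick described above.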
anonymous
  • anonymous
ps the 'hyperbolic' numbers you were talking about are just the split-complex numbers with \(j^2=1\)
anonymous
  • anonymous
that is the most straightforward way of encoding split-complex numbers in \(M_2(\mathbb{R})\) https://en.wikipedia.org/wiki/Split-complex_number#Matrix_representations
anonymous
  • anonymous
and as you noticed \(j^2=1\) ends up causing \(|a+bj|=1\) to give the unit hyperbola \(a^2-b^2=1\), and it's parametrized by \(\exp(bj)=\cosh(b)+j\sinh(b)\)
anonymous
  • anonymous
the problem with an ellipse is that there is no 'unit' ellipse, but we can define an ellipse as \(a^2+k^2b^2=1\) which just reduces to \(a+bw\) where \(w\) is some scaled kind of \(i\) (specifically \(w=ki\)), so it ends up giving rise to the same algebra \(\mathbb{C}\) as just \(i\) alone -- boring. let's rehash the different types of conics: 1) ellipses (using \(i^2=-1\)) 2) parabolas (using \(\varepsilon^2=0\)) 3) hyperbolas (using \(j^2=1\)) ... note the correspondence which is actually a consequence of the limitations we face when trying to come up with two-dimensional unital algebras over the reals https://en.wikipedia.org/wiki/Hypercomplex_number#Two-dimensional_real_algebras
Empty
  • Empty
\(\color{blue}{\text{Originally Posted by}}\) @wio I wonder if there is a matrix such that \[ A^2 = sI + tA \]If we can find such a matrix, then matrices would be just fine. \(\color{blue}{\text{End of Quote}}\) Yes, because the Cayley-Hamilton theorem says every square matrix satisfies its own characteristic equation. That is to say we can replace A with \(\lambda\), which represents the eigenvalues of A, and solve. \[\lambda^2 = s+t \lambda\] The solution to this quadratic has two roots, so we can throw them into a 2x2 diagonal matrix: \[\begin{bmatrix} \lambda_+ & 0\\0 & \lambda_-\end{bmatrix}\] So this is a nice trick for making symmetric, diagonal matrices that obey any polynomial equation.
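For example, with \(s=t=1\) (so we want \(A^2 = I + A\)) the roots are \(\lambda_\pm = \frac{1\pm\sqrt{5}}{2}\), and \[A=\begin{bmatrix}\frac{1+\sqrt{5}}{2} & 0\\0 & \frac{1-\sqrt{5}}{2}\end{bmatrix}\implies A^2 = A+I\] which you can check entry by entry, since each diagonal entry satisfies \(\lambda^2=\lambda+1\).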
nincompoop
  • nincompoop
great post
