I'll admit, I'm being ambiguous on purpose here haha.

\[A=\begin{bmatrix}0&\alpha \\0&0\end{bmatrix}\]

Alright you win, I can't pull a fast one on you for a nilpotent matrix.

https://en.wikipedia.org/wiki/Dual_number
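A quick sanity check of the nilpotent representation above (a minimal Python sketch; taking ε = [[0,1],[0,0]], i.e. the α = 1 case of the matrix A given earlier, is an illustrative choice):

```python
# Sketch: dual numbers via the nilpotent 2x2 matrix eps = [[0, 1], [0, 0]].
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
EPS = [[0, 1], [0, 0]]          # eps^2 = 0 (nilpotent of index 2)

def dual(a, b):
    """a + b*eps as a 2x2 matrix."""
    return [[a * I[i][j] + b * EPS[i][j] for j in range(2)] for i in range(2)]

assert matmul(EPS, EPS) == [[0, 0], [0, 0]]       # eps^2 = 0
# (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since the eps^2 term dies
assert matmul(dual(2, 3), dual(5, 7)) == dual(10, 2*7 + 3*5)
```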

interesting, is that the only matrix form with degree 2 ?

so they do stuff like transforming a square into a rhombus etc, is it

don't have an A matrix with 0 entries ...then A^2 won't be equal to 0

A^2 = A(A) with matrix multiplication

ohhhhhhh

I thought it's not possible. I wasn't thinking about matrices.

nice, that's because \(\epsilon^k=0\) for \(k\gt 1\)

Is there any way to compare dual numbers?

Actually it seems \(\delta = \epsilon^T\), so we would say: \[
i = \epsilon - \epsilon^T
\]

I wonder if there's a quaternion analog for dual numbers as well

What happens when you do \(\epsilon^0\)? Are we saying it is \(1\) or indeterminate?

I am fine with the idea of interpreting it as the identity matrix I think.

But it might not be closed under multiplication:

I had just computed: \[
(a+b\alpha)(c+d\alpha) = ac+(ad+bc+bd)\alpha
\]
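That product rule holds exactly when α² = α, i.e. α is idempotent. One concrete 2×2 idempotent to check against (an illustrative assumption, not a choice anyone made above) is α = [[1,1],[0,0]]:

```python
# Sketch: (a + b*alpha)(c + d*alpha) = ac + (ad + bc + bd)*alpha
# requires alpha^2 = alpha.  Illustrative idempotent: ALPHA = [[1, 1], [0, 0]].
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
ALPHA = [[1, 1], [0, 0]]

def lincomb(a, b):
    """a*I + b*ALPHA as a 2x2 matrix."""
    return [[a * I[i][j] + b * ALPHA[i][j] for j in range(2)] for i in range(2)]

assert matmul(ALPHA, ALPHA) == ALPHA               # alpha^2 = alpha
a, b, c, d = 2, 3, 5, 7
lhs = matmul(lincomb(a, b), lincomb(c, d))
assert lhs == lincomb(a*c, a*d + b*c + b*d)        # matches the rule above
```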

I think in general we can say that for matrices \(A^0 = I\)

How would you define modulus or magnitude for dual numbers?

I guess it's really asking what your inner product would be.

I think that imaginary numbers would use the determinant for the inner product.

Probably \[\begin{bmatrix}0 & 1\\1 & 0\end{bmatrix}\]

This is illuminating: http://prntscr.com/7o9ctw

complex plane

Out of curiosity can we extend Fermat's Last theorem to matrices?
\[A^n+B^n=C^n\]

Well, not completely sure about that last one but

Though I'm not sure what the underlying assumptions of that identity are.

It's more of a definition though, so...

I'll try to play around with the alpha values a bit to see if I get to it as well

That structure? Why?

I think we have to say that at these places, the magnitude ought to be defined as \(0\).

Hahaha yeah basically.

Oh wow. \[
e^{tH} = \cosh(t) + H\sinh(t)
\]
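That identity can be checked numerically. A minimal sketch, assuming H is the split-complex unit with H² = I, e.g. H = [[0,1],[1,0]] (an illustrative assumption; the chat later also plays with an H where H² = 2, for which the identity picks up extra factors):

```python
import math

# Sketch: if H^2 = I (e.g. H = [[0, 1], [1, 0]]), then
# e^{tH} = cosh(t) I + sinh(t) H, checked via a truncated power series.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=30):
    """Truncated power series sum_{n} A^n / n! for a 2x2 matrix."""
    power = [[1.0, 0.0], [0.0, 1.0]]   # A^0 = I
    fact = 1.0
    total = [[0.0, 0.0], [0.0, 0.0]]
    for n in range(terms):
        if n > 0:
            power = matmul(power, A)
            fact *= n
        total = [[total[i][j] + power[i][j] / fact for j in range(2)]
                 for i in range(2)]
    return total

t = 0.7
H = [[0.0, 1.0], [1.0, 0.0]]
E = expm([[t * x for x in row] for row in H])
expected = [[math.cosh(t), math.sinh(t)], [math.sinh(t), math.cosh(t)]]
assert all(abs(E[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```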

That is funky

I mean the fact that \(\cos_H (t) = \cosh\). Seems really coincidental.

Haha weird, the ghost of Hadamard knew his name would coincide with Hyperbola or something XD

You guys should publish a novel on this lol xD

Haha we just have to publish; we already wrote the novel lol

slight correction:\[
A^2=cA\implies A^n = c^{n-1}A
\]
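That power formula is easy to verify by repeated multiplication. A minimal sketch with the illustrative choice A = [[1,1],[1,1]], which satisfies A² = 2A (so c = 2):

```python
# Sketch: if A^2 = c*A, then A^n = c^(n-1) * A.
# Illustrative choice: A = [[1, 1], [1, 1]] with c = 2.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [1, 1]]
c = 2

assert matmul(A, A) == [[c * x for x in row] for row in A]   # A^2 = cA

# Check A^n = c^(n-1) * A for n = 2..6 by repeated multiplication.
P = A
for n in range(2, 7):
    P = matmul(P, A)
    assert P == [[c**(n - 1) * x for x in row] for row in A]
```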

Whoops, that looks right.

For the second case, I think I'm getting: \[
e^{At} = \cosh(ct) + A\sinh(ct)
\]

Okay, the first case I think I was a bit too hand wavy though... let me double check.

We're talking about a very ugly system of equations.

\[A=\begin{bmatrix}1&0 \\0&e^{i5 \pi/6}\end{bmatrix}\]

Ok I see, I'm not able to follow the conversation completely as it is way over my head lol

does this work
http://www.wolframalpha.com/input/?i=%7B%7B1%2C2%7D%2C%7B-3%2F2%2C-2%7D%7D%5E3+

Yeah, that could work.

Ahhh how did you think up either of these matrices, I'm impressed @ganeshie8 !

I don't think
I let wolfram think lol

Maybe ganeshie can help identify the MacLaurin series we get?

I think the link I gave will give you \(f(t)\), and it's messy.

I'm not sure if there is a unique matrix solution to the equations anymore, though.

But without matrices, it's hard to check our results.

What's to check?

Matrices allow easy definitions of inverses.

If \(A^3 = 1\), then how would you even go about \(A^{-1}\)?

It's its own inverse in that case.

You're saying \(A^2 = A^{-1}\)?
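Exactly: if A³ = I then A·A² = I, so A⁻¹ = A². A minimal sketch checking this for the matrix from the Wolfram link, A = [[1, 2], [-3/2, -2]], in exact arithmetic:

```python
from fractions import Fraction as F

# Sketch: A = [[1, 2], [-3/2, -2]] (the matrix from the Wolfram link)
# satisfies A^3 = I, so its inverse is A^2.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[F(1), F(2)], [F(-3, 2), F(-2)]]
I = [[F(1), F(0)], [F(0), F(1)]]

A2 = matmul(A, A)
assert matmul(A2, A) == I                    # A^3 = I
assert matmul(A, A2) == I                    # so A^-1 = A^2
```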

Hmmm

I suppose there might be some kinds of \(f\) which can't be achieved through matrix multiplication.

\[
A^3 = aA + b(aI+bA) = abI + (a+b^2)A
\]

I'm trying to generalize, but hmmm

Interesting, your definition of A^2 means we can write:
\(A= \frac{1}{b}A^2 - \frac{a}{b}I\)

That way we can simply calculate stuff with algebra knowing it obeys our rules.

Hmmm, interesting.

Hmm, interesting.

Hmmm I'm trying to find some interesting multiplication definition.

I think the only way this can be fun is if we have something we want to try to find.

We have found \(\sin_i\) and \(\cos_i\) for the different conic sections, so that is off the table.

But those are 3d objects, not conic sections

And ellipse corresponds to original trig functions

Well, hmmm, I suppose circle is a type of ellipse so...

One idea is that whatever manages the ellipses will collapse down into complex numbers

I don't know, hmm.

And \(H\) would be like \((2,0)\), I think. That is: \[
H^2 = (2) + (0)H = 2
\]

So clearly we are getting conic sections for \((x,0)\) configurations.

I never really even bothered to try to calculate something like:
\[(x+yi+z \epsilon)^2\]

Conic sections have a certain property, eccentricity or something.

https://en.wikipedia.org/wiki/Eccentricity_(mathematics)

That sounds like a great idea to me

Eccentricity of an ellipse is \(\sqrt{1-b^2/a^2}\).

Yeah, maybe another day we can experiment.

Come up with some way to test the hypothesis

And perhaps figure out how \((0,y)\) changes things up.

It's clear that \(\alpha\) gave us some clear insight into how that might work.

I'm going to bed, but now I have a goal.

In my mind I'm imagining the conic section. If you cut it one way you get a circle and if you cut it the other way you get a hyperbola. I'd like to rotate that plane that we're cutting through to somewhere in between. Actually... I think I know how to do it, because earlier we had:
\[\begin{bmatrix}0 & 1\\s & 0\end{bmatrix}\]
where s=1 represents a hyperbola, s=0 a parabola, and s=-1 a circle. And I know that this is like the same spacing, so if -1

It's clear to me that the matrix you have presented corresponds to \((s,0)\). Hmm
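The family from earlier in the thread is easy to check: A_s = [[0,1],[s,0]] squares to sI, interpolating circle (s = -1, complex), parabola (s = 0, dual), and hyperbola (s = 1, split-complex). A minimal sketch:

```python
# Sketch: A_s = [[0, 1], [s, 0]] satisfies A_s^2 = s*I for every s,
# giving complex (s = -1), dual (s = 0), and split-complex (s = 1) units.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for s in (-1, 0, 1):
    A = [[0, 1], [s, 0]]
    assert matmul(A, A) == [[s, 0], [0, s]]   # A_s^2 = s*I
```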

Is there something to correspond to \((0, t)\), now I wonder...

Since that is what was used for \(\alpha\).

Oh right

I think I found it:\[
A = \begin{bmatrix}
1&1 \\ t-1&t-1
\end{bmatrix} \implies
A^2 = tA
\]
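A quick check of that identity for a few sample values of t, in exact arithmetic (a sketch; the sample t values are arbitrary):

```python
from fractions import Fraction as F

# Sketch: A = [[1, 1], [t-1, t-1]] satisfies A^2 = t*A, since
# (t-1) + (t-1)^2 = t*(t-1).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for t in (F(-1), F(0), F(1, 2), F(3)):
    A = [[F(1), F(1)], [t - 1, t - 1]]
    assert matmul(A, A) == [[t * x for x in row] for row in A]   # A^2 = tA
```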

This is probably the longest thread I've ever been on xD!

ps the 'hyperbolic' numbers you were talking about are just the split-complex numbers with \(j^2=1\)

great post