Empty
  • Empty
Does \(A^3+B^3=C^3\) have any solutions as \(n\times n\) invertible matrices with integer entries?
Mathematics
ganeshie8
  • ganeshie8
1x1 matrices are out due to Fermat's last theorem
ganeshie8
  • ganeshie8
oh wait, trivial solutions with 0, 1, -1 etc. are allowed, right?
Empty
  • Empty
I don't know, does FLT allow negative integers? I would prefer only positive integers, but I don't really care; I just thought this would be a fun extension. :P

Empty
  • Empty
My first thought is to diagonalize these, which might end up essentially turning the problem into FLT in each of the entries, but I think that argument only rules out matrices diagonalized by the same \(P\). So what about nonsimilar matrices?
ganeshie8
  • ganeshie8
brute force spat out this http://www.wolframalpha.com/input/?i=%7B%7B1%2C3%7D%2C%7B0%2C1%7D%7D%5E3%2B%7B%7B-1%2C0%7D%2C%7B1%2C-1%7D%7D%5E3%3D%7B%7B0%2C3%7D%2C%7B1%2C0%7D%7D%5E3
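A quick numeric check of that candidate (a sketch in Python with numpy; `matrix_power` is exact on integer arrays):

```python
import numpy as np
from numpy.linalg import matrix_power

# ganeshie8's candidate: A^3 + B^3 = C^3 with invertible integer matrices
A = np.array([[1, 3], [0, 1]])    # det = 1
B = np.array([[-1, 0], [1, -1]])  # det = 1
C = np.array([[0, 3], [1, 0]])    # det = -3

lhs = matrix_power(A, 3) + matrix_power(B, 3)
rhs = matrix_power(C, 3)
print(np.array_equal(lhs, rhs))   # True: both equal [[0, 9], [3, 0]]
```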
Empty
  • Empty
More formally: if X, Y, Z are diagonal matrices similar to A, B, and C, related by \[A=PXP^{-1}\]\[B=PYP^{-1}\]\[C=PZP^{-1}\] then \[A^n+B^n=C^n\]\[PX^n P^{-1}+PY^nP^{-1}=PZ^nP^{-1}\]\[P(X^n +Y^n)P^{-1}=PZ^nP^{-1}\]\[X^n +Y^n=Z^n\] which is Fermat's Last Theorem in each of the diagonal entries, and that equation is known to have no positive integer solutions, so there is no solution of this form. There may still be solutions among matrices that aren't simultaneously diagonalizable. Hmm.
ganeshie8
  • ganeshie8
does that mean the eigenvectors need to be different?
anonymous
  • anonymous
Can you be certain that when you diagonalize them, their surrounding matrices will be the same?
Empty
  • Empty
@ganeshie8 Yeah, the eigenvalues must be different. @wio No, you can't, which is why this question isn't over yet haha.
ganeshie8
  • ganeshie8
looks like there are many solutions in 2x2
ganeshie8
  • ganeshie8
infinitely many it seems..
anonymous
  • anonymous
Wait, if two matrices have the same eigenvalues, does that mean they'll have the same surrounding matrices?
anonymous
  • anonymous
I guess I can't completely remember all the theorems about eigenvectors
anonymous
  • anonymous
Is there something else motivating this question?
Empty
  • Empty
Hmmm wait, yeah, I guess I'm not quite right. I was thinking that matrices with the same eigenvalues are similar, not the same eigenvectors like @ganeshie8 is saying, since the eigenvectors make up \(P\) and \(P^{-1}\).
anonymous
  • anonymous
Oh right, so \(P\) is a matrix of eigenvectors and the diagonal matrix just holds the eigenvalues.
Empty
  • Empty
No, nothing is really motivating this other than wanting to know whether FLT applies to matrices as well. I think it came up the other day when talking about the dual numbers, but I forget why I thought that; I'll probably go back and revisit it after this.
anonymous
  • anonymous
The invertible restriction does rule out a lot of matrices though
ganeshie8
  • ganeshie8
We found a few solutions, so clearly FLT is not applicable to matrices.
Empty
  • Empty
What solutions did we find?
ganeshie8
  • ganeshie8
scroll up
Empty
  • Empty
Ahhh ok, I missed it. Do you mind if we restrict ourselves to only positive integer entries?
anonymous
  • anonymous
Consider the matrix \[ A = [a_{i,j}], \qquad a_{i,j} = \begin{cases} 1 & i+1\equiv j\pmod n\\ 0 & \text{otherwise} \end{cases}\]which looks like this in the \(3\times3\) case: \[ A = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} \]It has the property \(A^n = I\). If you make the bottom-left entry \(s\) instead, then you have \(A^n = sI\).
ganeshie8
  • ganeshie8
so 0 is also not allowed?
Empty
  • Empty
Hmm. Good question, I don't know. I think having zero as an entry should be allowed as long as the matrix is invertible, but the zero matrix itself shouldn't be, for obvious trivial reasons: \(0^3+A^3=A^3\), for instance.
anonymous
  • anonymous
\[ A^3 + A^3 = 2I \]
anonymous
  • anonymous
Hmmm, maybe that doesn't help
anonymous
  • anonymous
\[ (A_7)^3 + (A_1)^3 = 8I = (2I)^3 \]
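To make that concrete, here's a sketch verifying the whole solution numerically (Python/numpy; `A(k)` is just a helper name for the shifted-cycle matrix described above):

```python
import numpy as np
from numpy.linalg import matrix_power

def A(k):
    """3x3 cyclic shift with its wrap-around 1 replaced by k, so A(k)**3 == k*I."""
    return np.array([[0, 1, 0],
                     [0, 0, 1],
                     [k, 0, 0]])

# (A_7)^3 + (A_1)^3 = 7I + I = 8I = (2I)^3, all invertible integer matrices
lhs = matrix_power(A(7), 3) + matrix_power(A(1), 3)
rhs = matrix_power(2 * np.eye(3, dtype=int), 3)
print(np.array_equal(lhs, rhs))  # True
```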
Empty
  • Empty
No, I think you're right, this solves it. FLT doesn't apply, awesome!
ganeshie8
  • ganeshie8
Nice! That shows FLT doesn't apply for power 3
ganeshie8
  • ganeshie8
or powers of the form \(3k\)
Empty
  • Empty
Oh true, but I think we can extend this to arbitrary powers maybe by just adding an extra column and row with all 0s and a 1 on the new diagonal entry.
ganeshie8
  • ganeshie8
you're talking about matrix dimensions?
anonymous
  • anonymous
You can define \(A_{n,k}\) as follows: \[ \begin{bmatrix} 0&I_{n-1}\\ k&0 \end{bmatrix} \]And \((A_{n,k})^n = kI_n\).
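A sketch of that family in code (the constructor name `A` is just for illustration):

```python
import numpy as np
from numpy.linalg import matrix_power

def A(n, k):
    """n x n block matrix [[0, I_{n-1}], [k, 0]]; satisfies A(n, k)**n == k * I_n."""
    M = np.zeros((n, n), dtype=int)
    M[:-1, 1:] = np.eye(n - 1, dtype=int)  # the I_{n-1} block
    M[-1, 0] = k                           # the wrap-around entry
    return M

for n, k in [(2, 5), (4, 3), (5, 2)]:
    assert np.array_equal(matrix_power(A(n, k), n), k * np.eye(n, dtype=int))
```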
Empty
  • Empty
Wait, yeah, sorry, I was off. I guess what we really showed is that this is true for 3x3 matrices with power 3. But really we need matrices of arbitrary dimension for arbitrary powers, and I accidentally started thinking of it the other way around.
Empty
  • Empty
Interesting, how did you find this solution, @wio?
anonymous
  • anonymous
When \(k=1\), it's just a permutation which shifts everything to the right. You need \(n\) shifts to get to the identity matrix. Fortunately the multiplication by \(k\) only gets applied to each row once.
anonymous
  • anonymous
Make that \(n-1\) shifts
ganeshie8
  • ganeshie8
beautiful!
anonymous
  • anonymous
Likewise, if you have \((A_{n,k})^2\), it's a permutation matrix that shifts twice for each multiplication. So it will take \(n/2-1\) shifts to get to the identity, I believe. Obviously you need even dimensions for this to get you back to the identity.
Empty
  • Empty
Nice, I really like this. The only problem I have with it is that it ties a specific matrix size to every power, but maybe there are exceptions where the powers don't match the dimension of the matrix. The reason this seems important to me is that the 1x1 case is an exception where it fails for power 3 and higher, so there could be others out there with special powers and dimensions that follow some pattern.
anonymous
  • anonymous
Whoops, I keep making little mistakes. Anyway, \[ \large [(A_{n,k})^p]^{n/p} = (A_{n,k})^n = kI \]If we let \(n/p=3\) then \(n=3p\), so we can do this method for \(3m\times 3m\) matrices. We would still need a method for other cases.
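So, for instance, on \(3m\times 3m\) matrices one can replay the earlier cube solution; a sketch (reusing the same hypothetical `A(n, k)` constructor as above):

```python
import numpy as np
from numpy.linalg import matrix_power

def A(n, k):
    """n x n cyclic shift with wrap-around entry k: A(n, k)**n == k * I_n."""
    M = np.zeros((n, n), dtype=int)
    M[:-1, 1:] = np.eye(n - 1, dtype=int)
    M[-1, 0] = k
    return M

m = 2
n = 3 * m                       # works for any 3m x 3m
X = matrix_power(A(n, 7), m)    # X**3 == 7I
Y = matrix_power(A(n, 1), m)    # Y**3 == I
Z = 2 * np.eye(n, dtype=int)    # Z**3 == 8I
print(np.array_equal(matrix_power(X, 3) + matrix_power(Y, 3),
                     matrix_power(Z, 3)))  # True
```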
Empty
  • Empty
Ok here's the table I'd like to sorta fill out for me to feel satisfied the problem is solved haha. $$A_{n \times n}^p+B_{n \times n}^p=C_{n \times n}^p$$ \begin{array}{|c|c|} \hline n & p \\ \hline 1 & p \ge 3 \\ \hline 2 & \\ \hline 3 & \\ \hline 4 & \\ \hline \end{array}
anonymous
  • anonymous
Hold on, we know it is false when \(n=1\).
anonymous
  • anonymous
I've only shown it for cases \(n=3p\).
Empty
  • Empty
Only for \(p>2\), since we have Pythagorean triples.
anonymous
  • anonymous
I found another interesting matrix. Let \(L_{n,k}\) be an \(n\times n\) matrix of all zeros except the bottom-left entry is \(k\). \[ (L_{n,k}+I_n)^n = nL_{n,k}+I_n \]I'm wondering if this could have good uses.
Empty
  • Empty
Interesting, I think this is good; that can definitely help us out.
anonymous
  • anonymous
You could also write that as: \[ (L_{n,k}+I_n)^n = L_{n,nk}+I_n \]
Empty
  • Empty
Do we have to place the entry at the bottom left, or can it be any arbitrary location as long as it's not on the main diagonal?
anonymous
  • anonymous
It doesn't have to be \(n\) either. \[ (L_{n,k}+I_n)^p = L_{n,pk}+I_n \]
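Since \(L^2=0\), the binomial expansion of \((L+I)^p\) collapses to \(I+pL\); a quick check (sketch, helper name `L` hypothetical):

```python
import numpy as np
from numpy.linalg import matrix_power

def L(n, k):
    """n x n zero matrix with k in the bottom-left corner; L(n, k) squares to 0."""
    M = np.zeros((n, n), dtype=int)
    M[-1, 0] = k
    return M

n, k, p = 4, 3, 7
lhs = matrix_power(L(n, k) + np.eye(n, dtype=int), p)
rhs = L(n, p * k) + np.eye(n, dtype=int)
print(np.array_equal(lhs, rhs))  # True: all higher binomial terms vanish
```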
anonymous
  • anonymous
It can be anywhere, but the corners are the only places we can expect it to work.
ganeshie8
  • ganeshie8
\[\begin{bmatrix}1&0\\k&1\end{bmatrix}^n=\begin{bmatrix}1&0\\nk&1\end{bmatrix}\]
anonymous
  • anonymous
Yeah, basically that's the 2x2 case
anonymous
  • anonymous
The only issue is that \(L^{2} = 0\), and \(L\) isn't invertible, so we can't use it as a candidate.
Empty
  • Empty
No, but we can make invertible matrices out of non-invertible ones by adding them, so they're not useless.
Empty
  • Empty
In fact any \(n\times n\) matrix that is all zeroes except for one single off-diagonal entry squares to 0. The proof is to look at the square in summation notation: $$ (A^2)_{ik} = \sum_j a_{ij}a_{jk} $$ Every term has a zero factor unless both \(a_{ij}\) and \(a_{jk}\) are the single nonzero entry, which would force that entry to sit on the diagonal, and we have excluded that.
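A brute-force check of that claim over every off-diagonal position (sketch):

```python
import numpy as np

n = 4
zero = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        E = zero.copy()
        E[i, j] = 5                         # any single off-diagonal entry
        assert np.array_equal(E @ E, zero)  # E squares to 0
```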
anonymous
  • anonymous
Found another interesting matrix. Suppose you reflect the rows of the identity matrix, so something like \(T = [\delta_{i,n-j+1}]\). For the \(3\times 3\) case we have: \[ T = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \]It is always the case that \(T^2=I\).
anonymous
  • anonymous
This matrix flips the elements
Empty
  • Empty
Ahhh yeah this is called the exchange matrix.
anonymous
  • anonymous
For cases where the dimension is even, we can set the bottom half of the antidiagonal entries to \(k\), and it will result in \(kI\) when squared.
anonymous
  • anonymous
Otherwise, having all of the antidiagonal entries be \(k\) gives \(k^2I\), obviously
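Quick checks of all three observations about the exchange matrix (sketch):

```python
import numpy as np

n, k = 4, 5
T = np.eye(n, dtype=int)[::-1]   # exchange matrix: identity with rows reversed
I = np.eye(n, dtype=int)

assert np.array_equal(T @ T, I)                      # T^2 = I
assert np.array_equal((k * T) @ (k * T), k * k * I)  # all entries k: squares to k^2 I

# Even dimension: only the bottom half of the antidiagonal entries set to k
Tk = T.copy()
Tk[n // 2:] *= k
assert np.array_equal(Tk @ Tk, k * I)                # squares to k I
```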
anonymous
  • anonymous
Okay, I found a really interesting matrix, but I've not completely formalized how to construct it. I think what you do is start with the \(A_{n,1}\) matrix, and then swap the \(n\)th row with the \(m\)th row. I think with this matrix, we'll call it \(B\), you have: \[ B^m = I \]
anonymous
  • anonymous
Not sure when it is certain to work.
Empty
  • Empty
The whole reason for proving that every \(n\times n\) zero matrix with a single off-diagonal element squares to 0 is that it lets us split a matrix into a diagonal matrix plus a linear combination of matrices that square to 0, which makes it easier to simplify powers, I think. Hmmmm, I think I like your reasoning better though; it looks like it's going to take us somewhere. This is a tricky thing to try to prove.
anonymous
  • anonymous
I am thinking you should always be able to find a permutation matrix such that \(P^m = I\). The shift case is just where \(m=n\), where \(n\) is the dimension
anonymous
  • anonymous
I just am not sure how to construct it
Empty
  • Empty
I think you're right, because the permutation matrices essentially correspond to modular arithmetic, so I think we can actually use Fermat's little theorem to show this. I'm not entirely sure how to construct them, but I am certain we can, which might be all we need.
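One way to construct such a \(P\) (a sketch, not necessarily the construction intended above): put an \(m\)-cycle on the first \(m\) indices and fix the rest, which requires \(m \le n\):

```python
import numpy as np
from numpy.linalg import matrix_power

def cycle_perm(n, m):
    """n x n permutation matrix: an m-cycle on the first m indices,
    fixed points elsewhere, so its order is exactly m (requires m <= n)."""
    perm = list(range(n))
    perm[:m] = [(i + 1) % m for i in range(m)]  # 0 -> 1 -> ... -> m-1 -> 0
    P = np.zeros((n, n), dtype=int)
    P[np.arange(n), perm] = 1
    return P

P = cycle_perm(6, 4)
assert np.array_equal(matrix_power(P, 4), np.eye(6, dtype=int))      # P^4 = I
assert not np.array_equal(matrix_power(P, 2), np.eye(6, dtype=int))  # order exactly 4
```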
anonymous
  • anonymous
Matrix multiplication is associative, right?
Empty
  • Empty
Yeah you better believe it. It's just not commutative in general.
Empty
  • Empty
Something I'm reading right now: http://math.stackexchange.com/questions/625140/a-fermats-flt-look-like-for-matrices
UsukiDoll
  • UsukiDoll
Matrix multiplication isn't commutative: in general \[AB\neq BA\] i.e., the result of \(AB\) doesn't equal the result of \(BA\)
Empty
  • Empty
If A and B are 2D rotation matrices they'll commute, for instance, or if the transformations are 'orthogonal' to each other; for instance, stretching and rotation are orthogonal. These are all more easily captured by the complex number form of these matrices: \[re^{i \theta}\]
anonymous
  • anonymous
Are you allowing non-integer matrices?
ganeshie8
  • ganeshie8
the below seems to work for all powers http://www.wolframalpha.com/input/?i=Table%5B%7B%7B%7B1%2C0%7D%2C%7B0%2C1%7D%7D%7B%7B0%2C1%7D%2C%7B1%2C0%7D%7D%7D%5En%2B%7B%7B%7B2%2C0%7D%2C%7B0%2C1%7D%7D%7B%7B0%2C1%7D%2C%7B1%2C0%7D%7D%7D%5En-%7B%7B%7B3%2C0%7D%2C%7B0%2C1%7D%7D%7B%7B0%2C1%7D%2C%7B1%2C0%7D%7D%7D%5En%2C%7Bn%2C1%2C10%7D%5D
ganeshie8
  • ganeshie8
wolfram must be lying, that doesn't look correct
Empty
  • Empty
\(\color{blue}{\text{Originally Posted by}}\) @wio Are you allowing non integer matrices? \(\color{blue}{\text{End of Quote}}\) No, I was just sorta adding a note for @UsukiDoll to add to her comment about 2x2 matrices and commutativity.
Empty
  • Empty
@ganeshie8 Hmmmm.... I don't know, I guess I'll just have to check them in MATLAB or something. All I have is the online compiler for it though. Weird.
ganeshie8
  • ganeshie8
I'm trying to construct the solutions using the method in that mse post http://gyazo.com/31615b52acb858db5875d66eaec0d715
anonymous
  • anonymous
I've been thinking of something... Note matrices like: \[ I = \begin{bmatrix} 1 \\ 2 \\ \vdots \\ n \end{bmatrix} \]where the number represents the column that has the \(1\) in that row, and all else is \(0\). The shift matrix mentioned earlier is expressed as: \[ R = \begin{bmatrix} 2 \\ 3 \\ \vdots \\ n \\ 1 \end{bmatrix} = \begin{bmatrix} 1+1 \\ 2+1 \\ \vdots \\ n+1 \end{bmatrix} \]Obviously, when a number is \(>n\), we just subtract \(n\) to get its actual value. This lets us denote permutation matrices compactly.
anonymous
  • anonymous
I put \(R\) because it shifts right. We get a left shift with \(R^{n-1} = L\). Notice that\[ R^k = \begin{bmatrix} 1+k \\ 2+k \\ \vdots \\ n+ k \end{bmatrix} \]
Empty
  • Empty
I see, essentially we're cycling through with these matrices.
anonymous
  • anonymous
We can have a swapping matrix, which always swaps the \(k\)th row with the \(1\)st row: \[ S_{k} = \begin{bmatrix} k \\ 2 \\ \vdots \\k-1 \\ 1 \\ \vdots \\ n \end{bmatrix} \]We can swap the \(k\)th row with the \(j\)th row by using \(S_kS_jS_k\).
anonymous
  • anonymous
It turns out that \(R = S_2S_3\dots S_n\).
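Checking both swap-matrix identities numerically (sketch; `S(n, k)` is a hypothetical helper for \(S_k\)):

```python
import numpy as np

def S(n, k):
    """Permutation matrix that swaps row 1 with row k (1-indexed, k >= 2)."""
    P = np.eye(n, dtype=int)
    P[[0, k - 1]] = P[[k - 1, 0]]
    return P

n = 5
R = np.roll(np.eye(n, dtype=int), 1, axis=1)  # right shift: row i has its 1 in column i+1 (mod n)

prod = np.eye(n, dtype=int)
for k in range(2, n + 1):
    prod = prod @ S(n, k)
print(np.array_equal(prod, R))   # True: R = S_2 S_3 ... S_n

# S_k S_j S_k swaps rows j and k
j, k = 2, 4
expected = np.eye(n, dtype=int)
expected[[j - 1, k - 1]] = expected[[k - 1, j - 1]]
print(np.array_equal(S(n, k) @ S(n, j) @ S(n, k), expected))  # True
```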
anonymous
  • anonymous
So we need to find a permutation that takes 3 cycles to return to the identity
Empty
  • Empty
Well, we can represent every permutation matrix by permuting the columns of the identity matrix, so as long as we have an nxn matrix we can shift the elements to make the permutation matrix that cycles by 1. Since it's the permutation matrix on n elements, we know its nth power will be the identity. It's weird and fascinating because permuting the permutation matrix is the same permutation we're performing. I found it amazing when I learned this.
anonymous
  • anonymous
The nice thing about these swap matrices is that it doesn't matter how large the matrix is. We can just ignore the larger dimensions and focus on the top row.
anonymous
  • anonymous
So, for example, \((S_2)^2 = I\), whereas \((S_2S_3)^3 = I\); note \((S_3S_2)(S_2S_3)=I\), so \(S_3S_2\) is the inverse of \(S_2S_3\).
anonymous
  • anonymous
And I believe that \((S_2S_3\ldots S_m)^m = I\).
anonymous
  • anonymous
You are basically making a shift matrix out of the first m rows, leaving the rest as is.
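And a quick check of \((S_2S_3\ldots S_m)^m = I\) (sketch, self-contained):

```python
import numpy as np
from numpy.linalg import matrix_power

def S(n, k):
    """Permutation matrix that swaps row 1 with row k (1-indexed, k >= 2)."""
    P = np.eye(n, dtype=int)
    P[[0, k - 1]] = P[[k - 1, 0]]
    return P

n = 6
for m in range(2, n + 1):          # requires m <= n
    prod = np.eye(n, dtype=int)
    for k in range(2, m + 1):
        prod = prod @ S(n, k)      # prod = S_2 S_3 ... S_m
    assert np.array_equal(matrix_power(prod, m), np.eye(n, dtype=int))
```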
Empty
  • Empty
I see, so m...
anonymous
  • anonymous
\[ m\leq n \]
Empty
  • Empty
Ahhh ok, right of course
anonymous
  • anonymous
However, permuting this around, I'm not sure what happens with non-one values.
anonymous
  • anonymous
Suppose we have \(k^m = p\). Then we can use a matrix of this form: \[ \begin{bmatrix} A_{m,p} & 0 \\ 0 & kI_{n-m} \end{bmatrix}^m =\begin{bmatrix} pI_{m} & 0 \\ 0 & pI_{n-m} \end{bmatrix}= pI \]
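A sketch of that block construction (requires \(k^m = p\); `A(m, p)` is the same hypothetical shift-matrix helper as before):

```python
import numpy as np
from numpy.linalg import matrix_power

def A(m, p):
    """m x m cyclic shift with wrap-around entry p: A(m, p)**m == p * I_m."""
    M = np.zeros((m, m), dtype=int)
    M[:-1, 1:] = np.eye(m - 1, dtype=int)
    M[-1, 0] = p
    return M

n, m, k = 5, 3, 2
p = k ** m                                # 8, so k**m == p as required
M = np.zeros((n, n), dtype=int)
M[:m, :m] = A(m, p)                       # top-left block
M[m:, m:] = k * np.eye(n - m, dtype=int)  # bottom-right block k * I
assert np.array_equal(matrix_power(M, m), p * np.eye(n, dtype=int))  # M^m = p I
```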
Empty
  • Empty
I'm pretty confident that a permutation matrix that has an m-cycle in it (and fixes everything else) will equal the identity matrix when raised to the mth power.
anonymous
  • anonymous
Now, for a \(2\times 2\) matrix, we have a problem. It's only going to have 2-cycle permutations. So we won't be able to find one for odd cases.
anonymous
  • anonymous
So far we have \(m\leq n\) and \(2 | m\) as conditions.
Empty
  • Empty
A small example of the intuition I have when thinking of raising a 6x6 matrix with a 4-cycle to the 4th power: [whiteboard drawing]
