anonymous
  • anonymous
I'm trying to mathematically formalize the "green eyed logic puzzle" problem described in this video: https://www.youtube.com/watch?v=98TQv5IAtY8
Mathematics
anonymous
  • anonymous
Let me rephrase the problem in a somewhat equivalent way:
1) There are 100 robots.
2) Every robot knows everything that can be logically deduced from what it currently knows.
3) Every robot knows the color of every other robot.
4) Every robot knows whether or not any other robot has shut down.
5) If a robot knows its own color to be red, it will shut down at the first second of the next minute.
6) Every robot knows the facts listed above.
7) Every robot is red.
We introduce the following fact:
8) Every robot receives a message, which it knows has been sent to all the other robots, stating that at least one robot is red.
The question of the problem is then: what happens to the robots? The answer presented by the video is that all robots shut down after 100 minutes. I've been thinking about how to formalize the problem.
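As a sanity check on that answer, here's a small Python sketch of the shutdown dynamics. The counting rule it encodes (shut down at minute \(k+1\) if you see \(k\) red robots and nobody has shut down yet) is my paraphrase of the standard inductive argument, not something fixed by the problem statement:

```python
def shutdown_minutes(colors):
    """Simulate the robots. colors[i] is 'r' (red) or 'n' (not red).
    Returns a dict {robot index: minute it shut down}."""
    n = len(colors)
    down = {}  # robot -> minute of shutdown
    for minute in range(1, n + 2):
        newly = []
        for i in range(n):
            if i in down:
                continue
            reds_seen = sum(1 for j in range(n) if j != i and colors[j] == 'r')
            # Inductive rule: if I see k red robots and none has shut
            # down by the end of minute k, I must be red myself, so I
            # shut down at minute k + 1.
            if minute == reds_seen + 1 and not down:
                newly.append(i)
        for i in newly:
            down[i] = minute
    return down

result = shutdown_minutes(['r'] * 100)
print(set(result.values()))  # {100}: all 100 robots shut down at minute 100
```

With a non-red robot mixed in, the same rule leaves it running: the reds disappear one minute before it would have concluded anything about itself.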
anonymous
  • anonymous
I'm going to start by considering only the colors of the robots, and only \(n = 4\) robots. We'll give each robot a number from \(1\) to \(n\). We'll represent a robot's knowledge base as a tuple of sets: the \(i\)th element of the tuple is the set of states that robot thinks the \(i\)th robot could be in (\(n\) = not red, \(r\) = red). So for the \(1\)st robot, we will represent its knowledge base as follows\[ (\{n,r\}, \{r\}, \{r\}, \{r\}) \]Another way we could represent this knowledge base is as follows\[ \{(n,r,r,r), (r,r,r,r)\} \]
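As a quick illustration (a Python sketch, with the strings `'n'`/`'r'` standing in for the colors), the two representations are related by a Cartesian product:

```python
from itertools import product

# Robot 1's knowledge base as a tuple of per-robot possibility sets:
# it is unsure about its own color, but sees that the others are red.
kb_tuple = ({'n', 'r'}, {'r'}, {'r'}, {'r'})

# The equivalent set-of-states representation is the Cartesian
# product of the per-robot sets.
kb_states = set(product(*kb_tuple))
print(sorted(kb_states))
# [('n', 'r', 'r', 'r'), ('r', 'r', 'r', 'r')]
```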
anonymous
  • anonymous
Now I'm going to introduce a new concept: a means of combining two knowledge bases. We can think of it as what robot \(1\) can deduce robot \(2\)'s knowledge base to be. First, consider robot \(2\)'s knowledge base: \[ (\{r\}, \{n,r\}, \{r\},\{r\}) \]However, when robot \(1\) is deducing what robot \(2\) can know, it must add its own ignorance to \(2\)'s ignorance. Even though robot \(1\) knows that robot \(2\) knows robot \(1\)'s color, robot \(1\) doesn't know its own color, so it has to consider both possibilities. The result is as follows: \[ (\{n,r\}, \{n,r\}, \{r\}, \{r\}) \]Another way to represent this is: \[ \{(n,n,r,r), (n,r,r,r), (r,n,r,r), (r,r,r,r)\} \]
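In code this combination is just widening the observer's own slot to full uncertainty before taking the product (a sketch; `deduced_kb` and its argument names are my own, and robot \(i\) in the thread is index \(i-1\) here):

```python
from itertools import product

def deduced_kb(observer, target, colors):
    """What `observer` can deduce `target`'s knowledge base to be.
    Each robot's slot holds its true color, except the target's own
    slot and the observer's slot, which are fully uncertain."""
    unknown = {observer, target}
    return tuple({'n', 'r'} if i in unknown else {colors[i]}
                 for i in range(len(colors)))

kb = deduced_kb(observer=0, target=1, colors='rrrr')
states = set(product(*kb))
print(sorted(states))
# [('n', 'n', 'r', 'r'), ('n', 'r', 'r', 'r'),
#  ('r', 'n', 'r', 'r'), ('r', 'r', 'r', 'r')]
```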
anonymous
  • anonymous
Finally we can consider what robot \(1\) must deduce that \(2\) must deduce that \(3\) must deduce that \(4\) must deduce, and the result is: \[ (\{n,r\},\{n,r\},\{n,r\},\{n,r\}) \]And another way to represent this is as: \[ \{n,r\}\times\{n,r\}\times\{n,r\}\times\{n,r\} \]Where \(\times\) is the Cartesian product.
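Iterating the combination down a chain of robots just keeps widening slots, one per robot in the chain, so for the chain \(1 \to 2 \to 3 \to 4\) every slot becomes \(\{n,r\}\). A sketch (the function name is mine):

```python
from itertools import product

def chain_kb(chain, colors):
    """Knowledge base for 'chain[0] deduces what chain[1] deduces what
    chain[2] ... can know': every robot in the chain contributes its
    own slot's uncertainty."""
    return tuple({'n', 'r'} if i in chain else {colors[i]}
                 for i in range(len(colors)))

states = set(product(*chain_kb([0, 1, 2, 3], 'rrrr')))
print(len(states))  # 16, the full Cartesian product {n,r}^4
```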
anonymous
  • anonymous
I will call this the "global knowledge". It represents the set of all states a robot can deduce the other robots could think possible. Remember that a larger knowledge set means more uncertainty, so you are actually more ignorant. An omniscient robot would have a knowledge set of one state (the truth), while a completely ignorant robot would have a knowledge set of all states (every possibility). Knowledge lets you eliminate states. The set \(\{n, r\}^4\) is also, incidentally, the set of all possible states, meaning that the global knowledge is completely ignorant.
anonymous
  • anonymous
The local knowledge of a robot is its own knowledge set. A statement is said to introduce no information if it doesn't modify any robot's local knowledge. However, though the message sent out to all robots did not modify any local knowledge, it did modify the global knowledge. It made the global knowledge: \[\{n,r\}^4 \setminus \{(n,n,n,n)\}\]
anonymous
  • anonymous
Even \(2^4 = 16\) states is a lot to write out, so let's just start with the case of \(2\) robots. The knowledge set of robot \(1\): \[ \{(n, r),(r,r)\} \]The knowledge set of robot \(2\): \[ \{(r, n),(r,r)\} \]The knowledge set corresponding to the message sent out to the robots: \[ \{(n, r), (r, n), (r,r)\} = \{n,r\} \times \{n,r\} \setminus \{(n,n)\} \]That is, since at least one robot is red, the state \((n,n)\) can be ruled out. Each robot intersects this with its own knowledge set and sees no change, so technically neither of them learned anything new from it. However, let's consider what robot \(1\) could consider robot \(2\) to know before the message: \[ \{(n,n), (n,r), (r,n), (r,r)\} = \{n,r\}\times \{n,r\} \]To be clear, robot \(1\) knows that robot \(2\) knows robot \(1\)'s color, whether red or not red... but robot \(1\) doesn't know which of those robot \(2\) knows. Hence this is what robot \(1\) can deduce robot \(2\) might know. The message doesn't modify robot \(1\)'s own knowledge set, but it does modify this knowledge set, making it \[ \{(n, r), (r, n), (r,r)\} \]since robot \(1\) knows the message was sent to robot \(2\) as well.
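The \(n=2\) bookkeeping above is small enough to check directly (a Python sketch; set intersection plays the role of incorporating the message):

```python
from itertools import product

full = set(product({'n', 'r'}, repeat=2))
message = full - {('n', 'n')}      # "at least one robot is red"

kb1 = {('n', 'r'), ('r', 'r')}     # robot 1's own knowledge set
kb2 = {('r', 'n'), ('r', 'r')}     # robot 2's own knowledge set

# Neither local knowledge set changes: the message is already
# implied by what each robot sees directly.
assert kb1 & message == kb1
assert kb2 & message == kb2

# But robot 1's model of what robot 2 might know does shrink:
model_1_of_2 = full                # before the message: anything
model_1_of_2 &= message            # after the public message
print(sorted(model_1_of_2))
# [('n', 'r'), ('r', 'n'), ('r', 'r')]
```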
anonymous
  • anonymous
Now let's go to the \(n=3\) case. Here is the global knowledge set, partitioned based on whether robot \(1\) is red or not:\[ \{(n,n,r), (n,r,n), (n,r,r)\} \cup\{ (r,n,n),(r,n,r), (r,r,n), (r,r,r)\} = A\cup B \]The \(A\) case is very familiar: it is as if the \(n=2\) case were happening for robots \(2\) and \(3\).
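That resemblance can be checked mechanically (sketch): stripping robot 1's coordinate from the \(A\) half leaves exactly the \(n=2\) message set.

```python
from itertools import product

full = set(product({'n', 'r'}, repeat=3))
message = full - {('n', 'n', 'n')}       # "at least one robot is red"
A = {s for s in message if s[0] == 'n'}  # robot 1 not red
B = {s for s in message if s[0] == 'r'}  # robot 1 red

# Dropping robot 1's coordinate from A gives the n = 2 message set:
tails = {s[1:] for s in A}
print(sorted(tails))  # [('n', 'r'), ('r', 'n'), ('r', 'r')]
```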
anonymous
  • anonymous
Guys, I'm open to suggestions, questions, etc.
anonymous
  • anonymous
I'm not completely sure what I'm doing here.
UsukiDoll
  • UsukiDoll
I see set theory being used.. like \(A\) or \(B\). So the set \(A\) is the states where robot 1 is not red, and the set \(B\) is the states where robot 1 is red?
anonymous
  • anonymous
Yes
anonymous
  • anonymous
\((r,n,n)\) means robot 1 is red, robot 2 is not red, robot 3 is not red.
anonymous
  • anonymous
Robot 1 can deduce \((r,r,r)\) and \((n,r,r)\) because it knows the other two are red.
anonymous
  • anonymous
Robot 2 can deduce \((r,n,r)\) and \((r,r,r)\) because it knows the other two are red.
anonymous
  • anonymous
Robot 1 asks the question: What can both myself and robot 2 know? It realizes that both know the third robot is red. It deduces: \((n,r,r),(n,n,r),(r,n,r), (r,r,r) \). That is, it deduces every state where the final coordinate is \(r\).
UsukiDoll
  • UsukiDoll
(r,r,r) means robot 1 is red... and so are robots 2 and 3
anonymous
  • anonymous
Then robot 1 asks the question: what do we all know together? Sadly, there is nothing that they all collectively know.
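The same slot-widening trick computes these shared knowledge sets (a sketch; `shared_kb` is my own name, and robot \(i\) in the thread is index \(i-1\) here): widening the slots of everyone in the group gives what the group can jointly count on.

```python
from itertools import product

def shared_kb(group, colors):
    """States compatible with what every robot in `group` might
    consider possible: each group member's own slot is uncertain."""
    return set(product(*({'n', 'r'} if i in group else {colors[i]}
                         for i in range(len(colors)))))

# Robots 1 and 2 together can only count on robot 3 being red:
both = shared_kb({0, 1}, 'rrr')
assert all(s[2] == 'r' for s in both) and len(both) == 4

# All three together know nothing: the full state space.
everyone = shared_kb({0, 1, 2}, 'rrr')
print(len(everyone))  # 8 = |{n,r}^3|
```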
dan815
  • dan815
here's another way to go about it: as time passes they know it's not just at least one, but at least 2 and 3 and so on
dan815
  • dan815
see if you can model that
dan815
  • dan815
for example the 3 case: once a day passes and no one has left, all 3 now know that at least 2 of them must have green eyes
dan815
  • dan815
how to model that hmm
anonymous
  • anonymous
Yeah, I was thinking shutdown needs to be a part of it as well, so we need to have \(\{n,r,s\}\), where \(s\) is the shutdown state. We can also have time simply be \(t\in \mathbb N\).
anonymous
  • anonymous
I've also noticed that if I added a robot with a non-red light, it would see 100 robots with red lights, while every other robot would see 99 robots with red lights. So by the 100th minute, it would not deduce itself to have a red light.
anonymous
  • anonymous
There is one thing about this puzzle which I've always found extremely dubious, and that is the assumption the robots make about the other robots. They assume that the other robots will use inductive reasoning, but that assumption is only safe if the other robots make the same assumption. It's circular deduction, isn't it?
anonymous
  • anonymous
It makes sense in the n=2 case
anonymous
  • anonymous
Hmmm, I guess it does make a bit more sense. My model is getting a bit better. So an example state might be:\[ (s,s,s,s,4) \]meaning that all four robots shut down at minute 4.
anonymous
  • anonymous
A robot only needs to consider what it thinks a red robot is thinking.
anonymous
  • anonymous
So at the time of the message, robot 1 looks at red robot 2 and deduces its knowledge set to be: \[ (\{n,r\}, \{n,r\}, \{r\}, \{r\}, \{r\}, \dots) \]All the red robots have to be counting the same number of red robots.
anonymous
  • anonymous
They just need to be able to deduce whether there is \(n-1\) or \(n\) red robots.
dan815
  • dan815
for example if it sees 2 red and 1 non-red, then it knows: if it's red, the 2 reds should disappear in 3 days (along with it); if they disappear in 2 days, then it is also non-red
dan815
  • dan815
so every robot should keep a count of the number of red and non-red robots it sees. If it sees anyone disappear on day = (number of reds it sees), then it knows it is non-red and continues to function; if that many days pass and no one has disappeared, then it is red and shuts off. All robots follow the same method.
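dan815's rule can be written as one decision function per robot (a sketch; the day convention, shut down on day \(k+1\) after seeing \(k\) reds and no disappearances, is chosen to match the 3-robot example above):

```python
def decide(day, reds_i_see, someone_disappeared):
    """One robot's decision at the start of `day`.
    reds_i_see: how many red robots this robot observes.
    someone_disappeared: whether any robot shut down on an earlier day."""
    if someone_disappeared:
        return 'safe'        # the reds left without me: I'm not red
    if day == reds_i_see + 1:
        return 'shutdown'    # reds_i_see days passed with no shutdown
    return 'wait'

# A red robot among 100 reds sees 99 others and shuts down at day 100:
assert decide(100, 99, False) == 'shutdown'
# A non-red observer sees all 100 reds; once they vanish it knows it's safe:
assert decide(101, 100, True) == 'safe'
```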
dan815
  • dan815
i wonder how it would work, if new robots were added at different times into the mix
anonymous
  • anonymous
It would fall apart
dan815
  • dan815
ya, but i wonder if there is still some way to keep track, or at least be close. Like if the new robot sees someone disappear, then it knows the previous count of the other robots, give or take 1
anonymous
  • anonymous
Adding non-red robots won't change anything, because it won't change any robot's count, and the robots know non-red robots won't shut down.
dan815
  • dan815
right
anonymous
  • anonymous
The effect of adding a red robot will depend on what the red robot knows and what the other robots know the red robot knows.
anonymous
  • anonymous
If the robots know that the new red robot got the message and knows how much time has passed, it's no different than if it had been there from the beginning.
anonymous
  • anonymous
Otherwise, I think it would work as if the robot hadn't been added.