Go to this website: http://en.wikipedia.org/wiki/Scratch_(programming_language)
To see how 0's and 1's map to instructions, check out assembly language.
It's done by implementing mathematics with transistors and diodes. In short, it's a chain: a game of 0's and 1's (electronics), to opcodes (electronic codes), to mnemonics (assembly language), to higher-level languages (where the compiler maps keywords, identifiers, and their order to mnemonics). Go through compiler design to understand it completely.
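That chain of translations can be sketched with a toy assembler. The mnemonics and opcodes below are invented for illustration (they don't belong to any real processor); the point is the mechanical mapping from human-readable names to bit patterns:

```python
# A toy assembler. OPCODES, LOAD, ADD, etc. are hypothetical names made up
# for this sketch; real instruction sets are far larger and more irregular.
OPCODES = {
    "LOAD":  0b0001,   # load a value into the accumulator
    "ADD":   0b0010,   # add a value to the accumulator
    "STORE": 0b0011,   # write the accumulator to memory
    "HALT":  0b1111,   # stop execution
}

def assemble(line):
    """Translate one 'MNEMONIC operand' line into an 8-bit machine word:
    the high 4 bits are the opcode, the low 4 bits the operand."""
    parts = line.split()
    mnemonic = parts[0]
    operand = int(parts[1]) if len(parts) > 1 else 0
    return (OPCODES[mnemonic] << 4) | (operand & 0b1111)

program = ["LOAD 7", "ADD 3", "STORE 12", "HALT"]
machine_code = [assemble(line) for line in program]
print([f"{word:08b}" for word in machine_code])
# Each printed string is the pattern of 1's and 0's the CPU would decode.
```

Running it prints `['00010111', '00100011', '00111100', '11110000']` — the same program, just with the human-readable names stripped away.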
I struggle with this a lot and would love to know what it is about 1's and 0's and a CPU that makes something happen. How are instructions built into the CPU such that a human-readable language can make sense to it? My reading makes me better informed about many things, but this topic seems so much like one of those circular philosophical questions. Aaaagggh!!! Or was this one of those trick questions: "if only DEAD people understand hexadecimal, how many people understand hexadecimal?" etc.?
The answer would fill many books; you'll just have to find out for yourself.
Yes, that I am finding out! But it's a good experience.
Higher-level languages are basically superimposed over languages the machine already understands.
@Cryptic: On the electronic level, 1's and 0's just tell a circuit to be on or off. The processor knows which switches to flip through machine language, which is our 1's and 0's. This is the lowest-level programming language, and it is what all other, higher-level languages are converted to at some point.

Assembly language introduces mnemonics (stuff other than 1's and 0's). Assembly language is a low-level language too, but it is also converted to ML, because that is the only language a processor can understand. Both machine language and assembly language are architecture-specific; that is to say, to program in them you must know the processor you are working on pretty well, and programs written in these languages will only work on the processor they were written for.

High-level languages like C introduce English words into the language. This makes them much easier to understand from a human perspective, with the trade-off that they are slower than hand-written ML and ASM. These languages have to be compiled into ML for the processor to understand them. The languages themselves are not architecture-specific, but their compilers are (sort of: one compiler can target many different processors, but compilers are platform-specific, i.e. a PC compiler will not work on a Mac).

Interpreted languages (like Java, for instance) are high-level languages that also use English words to be more programmer-friendly. However, interpreted languages work much differently than compiled languages: the source code is compiled into an intermediate language. In Java this intermediate language is called bytecode. For the intermediate language to run, there has to be an interpreter that translates the bytecode into ML for the processor to execute. In Java this interpretation is handled by the Java Virtual Machine (JVM), which ships as part of the Java Runtime Environment (JRE).
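Python follows the same bytecode model as the Java description above, and its standard-library `dis` module lets you look at the intermediate instructions directly. A minimal sketch (the exact instruction names vary between Python versions):

```python
import dis

def add(a, b):
    return a + b

# Show the bytecode the interpreter actually executes for add().
# You'll see instructions such as LOAD_FAST and a binary-add operation:
# human-readable names for the numeric opcodes the virtual machine decodes.
dis.dis(add)
```

Each line of the disassembly is one intermediate-language instruction; the virtual machine walks through them and does the corresponding work on the real processor.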
The advantage of interpreted languages is that the same compiled program can run on any platform, with the caveat that the machine attempting to run the program must have the interpreter installed (the JVM). This makes interpreted languages quite useful when one needs an application to run on any machine. The trade-off for this cross-platform functionality is processing speed: interpreted languages are slower than compiled languages, and far slower than low-level languages. This is an overly simplified explanation, and there are a lot of little nuances in different languages that make them unique, but in a nutshell, this is how a processor 'knows' how to execute commands that are something other than 1's and 0's.
Thanks, I'll try it again tomorrow. My head still can't get there - how does the CPU know? Is it the physical structure that only allows certain pathways to open? I know I'll get there in the end but it hurts my head at present, anyway off to my IT class for the day!
Think of it like this: as the electricity travels through the wires (as 1's, because remember, 1 means on) of the IC, it comes to gateways. These gateways are triggered by the electricity, which opens the gateway for another 1. That 1 can travel further until it reaches another gateway, triggering it so that another 1 can get through, until finally a 1 gets to the part of the IC that needs to be ultimately switched on. In this way the electricity is routed to the various parts of the processor that handle different functions.
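The routing described above can be sketched as a 2-to-4 decoder, a standard building block in real processors. The helper names below are invented for this sketch; the idea is that a pattern of input bits opens exactly one of four output gateways:

```python
# Gate "gateways" modeled as tiny functions over 0/1 values.
def AND(a, b):
    return a & b

def NOT(a):
    return 1 - a

def decoder_2to4(b1, b0):
    """Return four output lines; exactly one is 1, selected by the two
    input bits, just as an opcode's bits route current inside a CPU."""
    return (
        AND(NOT(b1), NOT(b0)),  # line 0 opens when the input is 00
        AND(NOT(b1), b0),       # line 1 opens when the input is 01
        AND(b1, NOT(b0)),       # line 2 opens when the input is 10
        AND(b1, b0),            # line 3 opens when the input is 11
    )

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(bits, "->", decoder_2to4(*bits))
```

Each input pattern switches on a different output line: there is no lookup or "knowing" involved, just wiring that physically lets a 1 through one path and blocks the others.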