Distributed processing means the instructions are spread across n cores (though usually people mean n computers, connected by some means), while threading means the instructions are spread across n threads. A thread is a sequence of instructions "that can be managed independently by an operating system scheduler" (Wikipedia). A CPU core can only run one instruction per tick, so distributed processing can run n instructions at a time (where n is the number of cores), achieving true concurrency. A downside is that to increase performance (allowing more instructions per tick), one often has to buy more hardware, which is expensive. Threading, on the other hand, achieves concurrency by "rotating" between threads: the core performs one instruction from one thread, then moves on to another. This is a much more feasible solution under limited hardware, but on a single core it is not true concurrency. A CPU and a core are essentially the same thing; a multi-core processor just packs 2 or more CPUs into one chip.
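The "rotating between threads" idea above can be sketched in a few lines of Python (the language here is my choice; the names `worker`, `results`, and the counts are just for illustration). Two threads share one process, and the OS scheduler decides how their instructions interleave; we only guarantee the combined result with a lock:

```python
# Minimal sketch: two threads interleaved by the OS scheduler.
import threading

results = []
lock = threading.Lock()

def worker(name, count):
    for i in range(count):
        with lock:  # serialize access to the shared list
            results.append((name, i))

t1 = threading.Thread(target=worker, args=("A", 3))
t2 = threading.Thread(target=worker, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()

# All six items arrive, but the interleaving order is up to the scheduler.
print(len(results))  # 6
```

Note that on a single core the two workers never truly run at the same instant; the scheduler just alternates between them, which is exactly the distinction the post is drawing.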
As tyteen describes, I would think of distributed processing as a compute problem that is solved using software that distributes parts of the problem data across multiple computers, connected on a network, and then combines the results after processing is complete. Condor is an example, as is grid computing. See these links for more info: http://research.cs.wisc.edu/condor/ http://en.wikipedia.org/wiki/Grid_computing

Distributed processing is used extensively in financial modeling, molecular and genetic analysis, weather and climate modeling, and many other scientific areas. It is also used extensively in Hollywood, where you may have heard of render farms created by studios to process each frame of a film. At 2k x 2k per frame and 24 frames per second, it quickly makes sense to use a large network of lower-cost machines to make this processing happen faster. If someone says a distributed job is using 32 cores, they most likely are using "core" to refer to 32 single-core machines distributed on a network.

In the last 5 years or so, CPU designers developed multi-core architectures that include additional processing cores on a single chip. Dual- and quad-core designs are prevalent today in both desktop and mobile CPUs. Graphics card companies exploit multi-core architectures even more, mainly because there is currently more benefit to parallelism in graphics processing than there is for general-purpose CPUs. So while OS designs have long supported multi-threading (independent chunks of code that run simultaneously) on single cores, now that a CPU has multiple cores, threads can be delivered to separate cores, providing even more speed-up. This Wikipedia article needs some work but can give the general idea: http://en.wikipedia.org/wiki/Multi-core_CPU

Finally, the best part is that software like CUDA provides libraries that allow any application to leverage the hundreds of compute cores found in your graphics card. This promises massive speed-ups in apps besides video and 3D, allowing developers to access more cores than ever on a single machine. http://en.wikipedia.org/wiki/CUDA Sorry it got so long, but I hope that helps. Very exciting times my friend, very exciting!
thanks guys, does the summary look valid:
1) "core" or "cpu": both are used to refer to the same thing.
2) "multi-core processor": has many cores/CPUs in one single chip.
3) distributed processing: we say it's a distributed job when the job is distributed across "multiple COMPUTERS/MACHINES."
4) threading: we say a job is running in multiple threads when it is running on the "SAME MACHINE/COMPUTER," but taking advantage of the parallelism offered by "multiple on-chip cores"?
Multi-threading doesn't always provide parallelism. It depends on what type of threads your process has, i.e. user threads or kernel threads. Kernel threads are responsible for parallelism on a multi-processor system.