A deep controversy in the emerging programming world isn't about Java vs C#. It's not about whether to adopt web 2.0 or to embrace open source. There is something more basic than any of this.
The first key decision a developer needs to make when adapting an architecture to parallelism is: how should I express it? Which programming paradigm best captures my users' current and future needs, while also fitting the clear trend of modern computer architectures toward more cores? And frankly, which technology can I master the fastest and get quick results with?
Back in the early 80s, the most practical way to write parallel code was to use the process model in Unix. The fork()/pipe()/exec() model was easy to use and understand. Or you could write your modules in a dataflow style: reading standard input, processing, and writing standard output, then stringing these independent programs together with pipes. Another nice feature was that you could run these processes on the same system or spread them around a group of systems, without changing a line of code.
Not long after, though, threading models started appearing in both Windows and Unix. Today, all of the major programming platforms offer some form of native threading.
However, the process model isn't dead! Look at SOAs, for example. Service Oriented Architectures essentially use the process model to leverage the investment people have made in existing solutions. By wrapping those solutions in XML and SOAP and turning them into services, you can compose new applications and solutions much more quickly. These compositions have the beauty of either running entirely on one system across multiple cores, or running on separate systems in a grid, something that is hard to achieve in a threaded application. And from my perspective, writing good threaded code demands a fair amount of discipline that the process model doesn't require.
So if I want to adapt my code to multi-core, should I thread? Or use the process model?
One way to decide is based on the synchronization needs of the solution. If you have a client application that needs to drive a display as the main I/O device, then you have a single point of synchronization (the screen update) that can't easily be shared across process boundaries; threading in this case would be the best choice. But if you are processing across I/O devices that can be updated in parallel, like disk storage, then a process model might be the best choice. There may be hybrid solutions as well: services in an SOA environment that are themselves compute-intensive and thus more amenable to internal threading.
Whichever direction you decide to go, there are plenty of Intel resources to help you out.
The opinions in this piece are mine alone and do not reflect the official position of Intel on products or strategies.