Originally published in Queue vol. 12, no. 10—see this item in the ACM Digital Library
Mohamed Zahran - Heterogeneous Computing: Here to Stay
Hardware and Software Perspectives
Hans-J. Boehm, Sarita V. Adve - You Don't Know Jack about Shared Variables or Memory Models
Data races are evil.
Satnam Singh - Computing without Processors
Heterogeneous systems allow us to target our programming to the appropriate environment.
Dorian Birsan - On Plug-ins and Extensible Architectures
Extensible application architectures such as Eclipse offer many advantages, but one must be careful to avoid "plug-in hell."
I genuinely hope to be able to prove this wrong at some point. I'm not looking for today's or tomorrow's claimed raw performance; that's a limiting perspective - hardware can and will improve and be optimized for any purpose. It is good to give hardware people something to do.
The software and the microarchitecture are the limiting factors here. We are too lazy to re-implement an entire platform from scratch (Itanium, anyone? Or all the countless obscure or failed chipsets?)... we want hardware that will run our existing applications. So we virtualize, translate, emulate, etc., because we're lazy. But that's fine - what works, works.
But if your applications were all just libraries in your environment, and errors created data, and there was no such thing as compiling...
Can I compile almost arbitrary programs and run them on the GPU these days? Sure. But they won't be nearly as efficient. Compute-intensive codes might enjoy a speedup vs. the host CPU, but unless they fit a certain profile, you won't see the same orders-of-magnitude efficiency gains. (Efficiency includes energy efficiency as well as raw performance.) To me, that makes the GPU special purpose.
What's a general purpose processor then? It's a processor that doesn't really stand out as orders of magnitude more efficient for any particular task. Some general purpose processors are more efficient than others (the A7 is more efficient at pointer chasing than the A15, for example), but they're all within small factors of each other. Some are tuned for peak raw performance, others for peak energy efficiency, but they don't have a particular application they're driving to extreme efficiencies.
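The contrast these comments draw between workload profiles can be sketched in plain C. This is only an illustrative sketch (the function names are made up, and no actual GPU code is shown): one loop whose iterations are independent, which is the profile that maps well onto wide data-parallel hardware, and one loop that chases pointers, where each load depends on the previous one.

```c
#include <assert.h>
#include <stddef.h>

/* Data-parallel: each iteration is independent of the others, so a GPU
   (or a SIMD unit) can run many of them at once. This is the "certain
   profile" that sees large efficiency gains. */
static void saxpy(float a, const float *x, float *y, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Pointer chasing: each load depends on the result of the previous one,
   so there is no parallelism to exploit. This is latency-bound work,
   the kind where no processor stands out as orders of magnitude more
   efficient than another. */
struct node { struct node *next; int value; };

static int chase(const struct node *head) {
    int sum = 0;
    for (const struct node *p = head; p != NULL; p = p->next)
        sum += p->value;
    return sum;
}
```

The saxpy loop can be split across thousands of GPU threads; the chase loop serializes on every load, which is why the comment's point about small in-order cores holding their own on such code holds.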
I also agree with Benoit's point above that at the SoC level, an SoC is easily special purpose even if one or more processors on board are general purpose, if the peripheral mix is specialized. In the business I'm in, we combine ARMs, DSPs, and custom accelerators for FFTs, networking, etc. because we're specializing those chips for a certain set of markets. The same ARM on a different chip could go into a general purpose Linux server too.
One last note about the definition "war". For me, the separation between a micro-controller and a microprocessor lies more in the extra built-in peripherals of a micro-controller than in the internal architecture. Taking the peripherals into account, a micro-controller is never a general purpose processor, even if it contains one.
The industry is trending away from special purpose architectures (the transputer, Freescale DSPs, PIC...) toward more standard 32-bit "multi purpose" architectures, most of the time based on an ARM core. I think we lost a lot of diversity in the process.
Finally, an article has to be a little bit catchy, and it only represents the point of view of the author.
While this article offers some interesting analysis, I feel that it would be stronger if you dropped the argument with the "general purpose" definition. It might make for a catchy title, but otherwise is off-putting.