
Computer Architecture




Originally published in Queue vol. 12, no. 10
See this item in the ACM Digital Library



Mohamed Zahran - Heterogeneous Computing: Here to Stay
Hardware and Software Perspectives

Hans-J Boehm, Sarita V. Adve - You Don't Know Jack about Shared Variables or Memory Models
Data races are evil.

Satnam Singh - Computing without Processors
Heterogeneous systems allow us to target our programming to the appropriate environment.

Dorian Birsan - On Plug-ins and Extensible Architectures
Extensible application architectures such as Eclipse offer many advantages, but one must be careful to avoid "plug-in hell."


Comments (newest first)

Jacob Munoz | Wed, 12 Nov 2014 05:41:38 UTC

I genuinely hope to be able to prove this wrong at some point. I'm not looking for today's or tomorrow's claimed raw performance, that's a limiting perspective - hardware can and will improve and be optimized for any purpose. It is good to give hardware people something to do.

The software and microarchitecture are the limiting factors here. We are too lazy to re-implement an entire platform from scratch (Itanium, anyone? Or all the countless obscure or failed chipsets?) .. we want hardware that will run our existing applications. So we virtualize, translate, emulate, etc.. because we're lazy. But that's fine - what works, works.

But if your applications were all just libraries in your environment, and errors created data, and there was no such thing as compiling...


Joe Zbiciak | Tue, 11 Nov 2014 16:09:12 UTC

When I think "special purpose", I think "provides an order of magnitude or more advantage at the particular thing it's useful for." That doesn't exclude using it for other purposes. For example, GPUs are very, very good at graphics, easily a couple orders of magnitude beyond the host CPU in terms of efficiency.

Can I compile almost arbitrary programs and run them on the GPU these days? Sure. But they won't be nearly as efficient. Compute-intensive codes might enjoy a speedup vs. the host CPU, but unless they fit a certain profile, you won't see the same orders-of-magnitude efficiency gains. (Efficiency includes energy efficiency as well as raw performance.) To me, that makes the GPU special purpose.
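[Editor's note: the "certain profile" Joe describes can be made concrete with the roofline model, which caps a kernel's attainable throughput by its arithmetic intensity (FLOPs per byte of memory traffic). A minimal sketch, using illustrative rather than measured device numbers:]

```python
# Roofline-style estimate: a kernel only approaches a GPU's headline
# throughput if its arithmetic intensity (FLOPs per byte moved) is
# high enough that it isn't memory-bandwidth-bound.
def attainable_gflops(intensity_flops_per_byte, peak_gflops, bandwidth_gbps):
    # Attainable performance is the lesser of the compute ceiling and
    # what the memory system can feed at this intensity.
    return min(peak_gflops, intensity_flops_per_byte * bandwidth_gbps)

# Hypothetical GPU-class figures (assumptions, not benchmarks):
PEAK, BW = 4000.0, 300.0  # GFLOP/s, GB/s

# A streaming kernel (e.g. vector add, ~0.1 FLOP/byte) is memory-bound:
low = attainable_gflops(0.1, PEAK, BW)    # 30 GFLOP/s, far below peak

# Dense matrix multiply (tens of FLOPs/byte) can approach the ceiling:
high = attainable_gflops(50.0, PEAK, BW)  # capped at 4000 GFLOP/s
```

On these assumed numbers, only the high-intensity kernel sees the orders-of-magnitude advantage; the streaming kernel would run little better than on the host CPU.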

What's a general purpose processor then? It's a processor that doesn't really stand out as orders of magnitude more efficient for any particular task. Some general purpose processors are more efficient than others (A7 more efficient at pointer chasing than A15, for example), but they're all within small factors of each other. Some are tuned for peak raw performance, others for peak energy efficiency, but they don't have a particular application they're driving to extreme efficiencies.

I also agree with Benoit's point above that at the SoC level, an SoC is easily special purpose even if one or more processors on board are general purpose, if the peripheral mix is specialized. In the business I'm in, we combine ARMs, DSPs, and custom accelerators for FFTs, networking, etc. because we're specializing those chips for a certain set of markets. The same ARM on a different chip could go into a general purpose Linux server too.

Benoit Callebaut | Sun, 09 Nov 2014 17:10:54 UTC

Very interesting article. It covers a lot of concepts for such a "short" article. I think it is long enough to give a good overview of the situation but far too short to explain every aspect of the problem in detail. A lot of interesting points, like a history of chip/computer companies' strategies (from purpose-built mainframes to off-the-shelf computers in clusters...), HPC considerations & benchmarks, and operating system and compiler design, could have given more insight/arguments, but they are well out of the scope of this article.

One last note about the definition "war". For me, the separation between a micro-controller and a microprocessor lies more in the addition of extra built-in peripherals in a micro-controller than in the internal architecture. Taking the peripherals into account, a micro-controller is never a general purpose processor even if it contains one.

The industry tends to move away from special purpose architectures (transputer, Freescale DSP, PIC...) toward more standard 32-bit "multi purpose" architectures, most of the time based on an ARM core. I think we lost a lot of diversity in the process.

Finally, an article has to be a little bit catchy, and it only represents the point of view of the author.

YetAnotherBob | Sun, 09 Nov 2014 15:45:07 UTC

General Purpose means that the processor can be used for any purpose that can fit into its available memory. If it is Turing Complete, then it is General Purpose. This is as opposed to a Special Purpose processor. Such things can be built to perform specific functions much faster than any General Purpose Processor. Some are built including both general and special purpose processors.

The math co-processor implemented its math functions in gate logic to speed up the mathematical operations. It did not, however, include some of the functions required for Turing completeness. Early Linux systems often included a library to mimic the math co-processor for systems that didn't include one. From the 80486 onward, Intel processors have included the math co-processor on the same chip with the general purpose chip.

To me, the author here is pointing out that most of the current 'special purpose' processors are actually also general purpose. The distinction thus becomes meaningless.
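[Editor's note: the soft-float fallback mentioned above — a library standing in for a missing hardware unit — can be illustrated in miniature. This is a hedged sketch, not the actual kernel emulation code: emulating a multiply unit using only the additions and bit shifts a minimal ALU provides.]

```python
def soft_mul(a, b):
    # Shift-and-add multiplication: emulate a hardware multiplier with
    # only additions and bit shifts, the same way soft-float libraries
    # emulated a missing 80387 co-processor with integer operations.
    negative = (a < 0) != (b < 0)
    a, b = abs(a), abs(b)
    result = 0
    while b:
        if b & 1:          # if this bit of b is set,
            result += a    # add the correspondingly shifted a
        a <<= 1            # shift a up one place
        b >>= 1            # consume one bit of b
    return -result if negative else result
```

The emulation is functionally identical to the hardware operation, just far slower per call — which is exactly the general-vs-special purpose trade-off Bob describes.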

ARaybould | Sun, 09 Nov 2014 13:51:01 UTC

@ET3D: In the essay form of writing, it is not uncommon for the title to be merely the starting point for a possibly wide-ranging discourse. This can result in a discursive ramble, but when done well, it can lead readers to a point of view they had not considered before. If you approach such essays with the attitude of grading them on how closely they adhere to the issue raised by the title, you will miss some interesting reading.

ET3D | Sun, 09 Nov 2014 08:04:51 UTC

That's an awfully long article to write on a definition. You say that the definition of "general purpose processor" has changed over time, and now doesn't include microcontrollers. I feel that it's your own definition and you're trying to press a point that you yourself have created. If you look at "general purpose" in the more usual sense of "not geared toward a narrow field of operation", then there is a lot of sense in general purpose processors. That is why microcontrollers, which are general purpose processors, are so successful. "General purpose" doesn't mean "one size fits all".

While this article offers some interesting analysis, I feel that it would be stronger if you dropped the argument with the "general purpose" definition. It might make for a catchy title, but otherwise is off-putting.


© 2018 ACM, Inc. All Rights Reserved.