A Conversation with John Hennessy and David Patterson:
They wrote the book on computing.
As authors of the seminal textbook “Computer Architecture: A Quantitative Approach,” John Hennessy and David Patterson need no introduction. You’ve probably read their book in college or, if you were lucky, even attended one of their classes. Since rethinking, and then rewriting, the way computer architecture is taught, both have remained committed to educating a new generation of engineers with the skills to tackle today’s tough problems in the field: Patterson as a professor at Berkeley, and Hennessy as a professor, dean, and now president of Stanford University.
Chip Measuring Contest:
The benefits of purpose-built chips
Alan Kay once said, "People who are really serious about software should make their own hardware." We are now seeing product companies genuinely live up to this maxim. It is exciting to watch the incumbent chip vendors being outdone, in the very technology that is their bread and butter, by their former customers. Let's dive into some of the interesting aspects of these purpose-built chips: the economic, user-experience, and performance benefits they bring to the companies building them.
Computing without Processors:
Heterogeneous systems allow us to target our programming to the appropriate environment.
From the programmer’s perspective, the distinction between hardware and software is being blurred. As programmers struggle to meet the performance requirements of today’s systems, they will face an ever-increasing need to exploit alternative computing elements such as GPUs (graphics processing units), which are graphics cards subverted for data-parallel computing, and FPGAs (field-programmable gate arrays), or soft hardware.
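To make the distinction concrete, here is a minimal sketch (in Python with NumPy, our choice; the article itself is not tied to any language) of the same computation written two ways. The whole-array form exposes the per-element independence that data-parallel hardware such as GPUs exploits:

    import numpy as np

    # Scalar formulation: one element at a time, as a single CPU core
    # would step through it.
    def saxpy_scalar(a, x, y):
        out = [0.0] * len(x)
        for i in range(len(x)):
            out[i] = a * x[i] + y[i]
        return out

    # Data-parallel formulation: the whole-array expression exposes the
    # per-element independence that GPUs and vector units exploit.
    def saxpy_parallel(a, x, y):
        return a * x + y

    x = np.arange(8, dtype=np.float32)
    y = np.ones_like(x)
    assert np.allclose(saxpy_scalar(2.0, x, y), saxpy_parallel(2.0, x, y))

The scalar loop fixes an execution order that a compiler must work to undo; the array expression makes the parallelism visible from the start.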
DOA with SOA:
Adopting this architectural style is no cure-all.
It looks like today is finally the day that we all knew was coming. It was only a matter of time. An ambulance has just pulled up to haul away Marty the Software Manager after his boss pummeled him for failing to deliver on promises of money savings, improved software reuse, and reduced time to market that had been virtually guaranteed merely by adopting SOA (service-oriented architecture). Everything could have been so different for Marty. If only there had been a red-hot market for a software application that fetched the price of London gold, converted the price from pounds to dollars, calculated the shipping costs for the desired quantity, and then returned a random verse from the King James Bible.
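For flavor, here is a tongue-in-cheek Python sketch of Marty’s composite application. Every service below is hypothetical, a stand-in for whatever real endpoints such an app would call:

    # Every "service" below is hypothetical: a stand-in for whatever
    # real endpoints Marty's composite application would call.

    def get_gold_price_gbp():           # hypothetical pricing service
        return 1520.0

    def gbp_to_usd(amount_gbp):         # hypothetical currency service
        return amount_gbp * 1.27

    def shipping_cost_usd(quantity):    # hypothetical logistics service
        return 25.0 + 4.0 * quantity

    def random_kjv_verse():             # hypothetical scripture service
        return "Psalm 119:105"

    def quote(quantity):
        # The grand composition: four independent services, one answer.
        unit_usd = gbp_to_usd(get_gold_price_gbp())
        return {
            "total_usd": unit_usd * quantity + shipping_cost_usd(quantity),
            "verse": random_kjv_verse(),
        }

    print(quote(3))

Composing the services is the easy part, which is precisely the point: adopting the style by itself guarantees none of the promised savings.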
Discipline and Focus:
Transcript of interview with Werner Vogels, CTO of Amazon
When it comes to managing and deploying large-scale systems and networks, discipline and focus matter more than specific technologies. In a conversation with ACM Queuecast host Mike Vizard, Amazon CTO Werner Vogels says the key to success is a relentless commitment to a modular computer architecture that makes it possible for the people who build the applications to also be responsible for running and deploying those systems within a common IT framework.
Heterogeneous Computing: Here to Stay:
Hardware and Software Perspectives
Mentions of the buzzword “heterogeneous computing” have been on the rise in the past few years and will continue for years to come, because heterogeneous computing is here to stay. What is heterogeneous computing, and why is it becoming the norm? How do we deal with it, from both the software side and the hardware side? This article provides answers to some of these questions and presents different points of view on others.
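As a toy illustration of the software-side question, here is a Python sketch of a dispatch decision under an invented cost model; the device names and numbers are made up for illustration only:

    # A toy dispatch decision: which device should run a given task?
    # Device names and cost numbers are invented for illustration.

    DEVICES = {
        # name: (cost per element, fixed offload latency)
        "cpu": (1.0, 0.0),
        "gpu": (0.05, 5_000.0),  # fast per element, costly to launch
    }

    def pick_device(n_elements):
        # Choose the device with the lowest modeled total cost.
        return min(DEVICES, key=lambda d: DEVICES[d][0] * n_elements + DEVICES[d][1])

    print(pick_device(100))         # small task -> cpu
    print(pick_device(1_000_000))   # large task -> gpu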
How to Design an ISA:
The popularity of RISC-V has led many to try designing instruction sets.
Over the past decade I've been involved in several projects that have designed either ISA (instruction set architecture) extensions or clean-slate ISAs for various kinds of processors (you'll even find my name in the acknowledgments for the RISC-V spec, right back to the first public version). When I started, I had very little idea about what makes a good ISA, and, as far as I can tell, this isn't formally taught anywhere.
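As one example of the kind of decision involved, here is a toy, RISC-V-flavored fixed-width encoding in Python. The field widths are illustrative, not those of the RISC-V spec; real ISAs weigh such choices against opcode space, register count, and immediate range:

    # A toy fixed-width instruction word: [opcode:7 | rd:5 | rs1:5 | rs2:5]
    # packed into 32 bits with every field at a fixed position, so a
    # decoder can extract them without first parsing the opcode.

    def encode(opcode, rd, rs1, rs2):
        assert opcode < 128 and rd < 32 and rs1 < 32 and rs2 < 32
        return opcode | (rd << 7) | (rs1 << 12) | (rs2 << 17)

    def decode(word):
        return {
            "opcode": word & 0x7F,
            "rd": (word >> 7) & 0x1F,
            "rs1": (word >> 12) & 0x1F,
            "rs2": (word >> 17) & 0x1F,
        }

    insn = encode(0b0110011, rd=1, rs1=2, rs2=3)  # an ADD-like operation
    assert decode(insn)["rd"] == 1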
OS Scheduling:
Better scheduling policies for modern computing systems
In any system that multiplexes resources, the problem of scheduling what computations run where and when is perhaps the most fundamental. Yet, like many other essential problems in computing (e.g., query optimization in databases), academic research in scheduling moves like a pendulum, with periods of intense activity followed by periods of dormancy when it is considered a "solved" problem. These three papers make significant contributions to an ongoing effort to develop better scheduling policies for modern computing systems.
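As a reminder of why policy choice matters, here is a toy single-core comparison in Python of FIFO against shortest-job-first on an invented workload, with all jobs arriving at time zero:

    # One core, all jobs arriving at time zero; run times are invented.

    def mean_turnaround(jobs):
        # Execute jobs in the given order; turnaround = completion time.
        clock, total = 0, 0
        for run_time in jobs:
            clock += run_time
            total += clock
        return total / len(jobs)

    jobs = [8, 1, 2, 4]
    print(mean_turnaround(jobs))           # FIFO order: 10.75
    print(mean_turnaround(sorted(jobs)))   # shortest-job-first: 6.5

Merely reordering the same four jobs cuts mean turnaround from 10.75 to 6.5, which is the kind of gap that keeps the scheduling pendulum swinging.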
On Plug-ins and Extensible Architectures:
Extensible application architectures such as Eclipse offer many advantages, but one must be careful to avoid “plug-in hell.”
In a world of increasingly complex computing requirements, we as software developers are continually searching for that ultimate, universal architecture that allows us to productively develop high-quality applications. This quest has led to the adoption of many new abstractions and tools. Some of the most promising recent developments are the new pure plug-in architectures. What began as a callback mechanism to extend an application has become the very foundation of applications themselves. Plug-ins are no longer just add-ons to applications; today’s applications are made entirely of plug-ins.
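To see how small the core of such an architecture can be, here is a minimal plug-in registry sketched in Python. The names are illustrative, not Eclipse’s actual extension-point API:

    # A host that knows nothing about its features, plus plug-ins that
    # contribute to named extension points. Names are illustrative, not
    # Eclipse's actual API.

    REGISTRY = {}  # extension point -> list of contributed callables

    def plugin(extension_point):
        def register(func):
            REGISTRY.setdefault(extension_point, []).append(func)
            return func
        return register

    @plugin("app.commands")
    def hello():
        return "hello from a plug-in"

    def run(extension_point):
        # The host simply invokes whatever was contributed.
        return [contribution() for contribution in REGISTRY.get(extension_point, [])]

    print(run("app.commands"))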
There’s No Such Thing as a General-purpose Processor:
And the belief in such a device is harmful
There is an increasing trend in computer architecture to categorize processors and accelerators as "general purpose." Of the papers published at this year’s International Symposium on Computer Architecture (ISCA 2014), nine out of 45 explicitly referred to general-purpose processors; one additionally referred to general-purpose FPGAs (field-programmable gate arrays), and another referred to general-purpose MIMD (multiple instruction, multiple data) supercomputers, stretching the definition to the breaking point. This article presents the argument that there is no such thing as a truly general-purpose processor and that the belief in such a device is harmful.
To PiM or Not to PiM:
The case for in-memory inferencing of quantized CNNs at the edge
As artificial intelligence becomes a pervasive tool for the billions of IoT (Internet of things) devices at the edge, the data movement bottleneck imposes severe limitations on the performance and autonomy of these systems. PiM (processing-in-memory) is emerging as a way of mitigating the data movement bottleneck while satisfying the stringent performance, energy efficiency, and accuracy requirements of edge imaging applications that rely on CNNs (convolutional neural networks).
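To ground the “quantized” part, here is a minimal Python sketch of the uniform 8-bit weight quantization that edge CNN inference typically relies on; the scheme is simplified (no zero points or per-channel scales), so treat it as illustrative only:

    import numpy as np

    # Uniform symmetric 8-bit quantization, simplified: no zero point,
    # one scale for the whole tensor rather than per channel.

    def quantize(w, bits=8):
        scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
        q = np.clip(np.round(w / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
        return q.astype(np.int8), scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize(w)
    print(np.abs(w - dequantize(q, s)).max())  # small reconstruction error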