Custom Processing - Transcript

Transcript of interview with IBM chief scientist Peter Hofstee

MICHAEL VIZARD: Hello, and welcome to this edition of the ACM Queuecast with your host, Mike Vizard. That's me. Today we're going to talk about system on a chip and some of the design issues that go with it, and more importantly, some of the newer trends, such as the work that IBM is doing around the Cell processor to advance the whole system on a chip approach. To that end, we've invited Peter Hofstee, Chief Scientist for the Cell processor project that is being funded by IBM, Toshiba, and Sony, to talk to us today about how the whole system on a chip marketplace might change with the advent of the Cell processor, and what technology is driving that. Welcome, Peter.

PETER HOFSTEE: Hi, Mike. Good talking to you.

MV: Good talking to you. I guess one of the things that drives me a little bit crazy, as someone who's been watching the industry for a while, is that we've been talking about system on a chip design for what feels like the better part of five to ten years now, and yet when I look for the adoption and the impact on the marketplace, it's hard to put my finger on any one thing that says, "Wow! This is how it changed the universe." What do you think the issues have been with adoption of system on a chip historically, and where do you think we are in its life cycle?

PH: Yes. Well, I think system on a chip may be one of these things that sort of creeps up on you. So on the one hand, if you are in the middle of it, you might be looking at it and say, "Oh, this is taking forever, and when will we get there?" At the same time, at some point you stand still and you look around you and you say, "Hey, gosh, a lot of processors now have integrated memory controllers, for example. Processors typically have multiple cores on a chip and are integrating more IO interface-type functions, as well." So I think in many ways the system on a chip era has already arrived, and Cell, I think, is an example of that -- maybe one of the clearer examples.

MV: So what do you think are some of the more successful system on a chip designs out there that people may not recognize because, you know, all they see is what the thing does as its function, but they don't really get a good look inside in terms of how it runs?

PH: There are a lot of examples, and of course systems on a chip have been around for a long, long time, right? So, microcontrollers from way back when, when they were introduced, I'm sure were aggregating multiple functions that were present in those systems. And in some sense, I think some of the more recent interesting examples are of course all the communication functions that have become integrated onto chips -- much of the function that used to be in separate LSIs, in things like cell phones and so forth.

So I do very much look at it as a continuous process. I am very intrigued. Of course being a microprocessor architect I look at microprocessors and I am very intrigued by the levels of integration that we are now achieving in microprocessors and what it does to the architecture of microprocessor systems, really.

MV: What are the alternative choices? Let's say if I was going to go build something, and I was looking at a system on a chip design, is there some other way I would go about this maybe using general purpose processors from an AMD or an Intel? And what are kind of the tradeoffs that I should be thinking about when I make that kind of decision?

PH: I think everybody is being forced in more or less the same direction. You know, some companies are faster about this than others. There are also some technology hurdles that may keep people from integrating things at the rate they would like to. But I don't believe that longer term there is a real alternative. I think ultimately the pace of integration, and the fact that we can no longer really scale our performance at the rate we would like by keeping all the function in similar types of units, mean that we do have to follow some path toward differentiation and include things like the synergistic processors, or other more accelerated types of functions, in our systems. I think we will have to find ways to bring these functions onto the chip and, at least for high-volume applications, create single-chip solutions.

Now, there's always a tradeoff between the volumes that you will produce and the cost of packaging your components individually. So if you need to pursue 16 different systems with a handful of different components, and none of those 16 systems has the volume that warrants integrating its components into a single chip, then you will stay with the individual components -- and I think we can probably still improve the effectiveness of our packaging and assembly techniques for such systems.

But I'd say the only force that sort of fundamentally keeps systems fragmented instead of integrated into SOCs is volumes. And as we become more able to capture large market segments with these integrated solutions -- and it looks like Cell is able to capture quite a few different market segments -- then you will see a strong drive toward integration and system on a chip.

MV: It feels like the tipping point may have been when people discovered that as they tried to improve the clock speeds on processors, they kept running into more and more heat issues, so then they figured out that the next best way to go to achieve the performance levels they're looking for was to move to a system on a chip design, where they could get the memory closer to the core processor. And that in turn happened last year, so maybe the end effect of that is we won't really see it take place in terms of actual products on the market using that design for another six to 12 months out, and we're kind of in the middle of that tipping point. Is that about right?

PH: I think we are in the middle of this. You know, we were placing a bet on this five years ago, and it's come about the way we thought, similarly to integrating memory controllers to deal with the challenge of increased memory latency. And I think the power limit that we have is very much driving people toward multi-core, right? If you have a problem that can be effectively executed on a concurrent system with multiple cores, that is a much more energy-efficient way of going about computation than trying to make one core really fast. So this is what is driving us to multi-core.
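Hofstee's energy-efficiency argument can be made concrete with a back-of-envelope calculation. The numbers below are purely illustrative (not from the interview): dynamic power scales roughly with switched capacitance times voltage squared times frequency, and supply voltage can typically be lowered along with frequency, so two cores at half frequency can match one fast core's throughput on a parallel workload at a fraction of the power.

```python
# Illustrative sketch of why multi-core beats one very fast core on power.
# Dynamic power ~ C * V^2 * f; voltage can drop roughly with frequency.
# All numbers are normalized, made-up values for illustration only.

def dynamic_power(capacitance, voltage, frequency):
    """Classic CMOS dynamic power approximation: C * V^2 * f."""
    return capacitance * voltage**2 * frequency

C = 1.0  # normalized switched capacitance per core

# One core at full frequency and full voltage:
p_single = dynamic_power(C, voltage=1.0, frequency=1.0)

# Two cores, each at half frequency and a correspondingly lower voltage
# (0.6x, an assumed scaling), delivering the same total throughput on a
# workload that parallelizes across both cores:
p_dual = 2 * dynamic_power(C, voltage=0.6, frequency=0.5)

print(p_single, round(p_dual, 2))  # 1.0 vs 0.36 in normalized units
```

Under these assumed numbers the dual-core configuration delivers the same aggregate work for roughly a third of the power, which is the pressure Hofstee describes.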

And I think that going even further -- and we are a bit ahead of that in the case of Cell -- it also will drive us toward specialization of cores, where different cores will be optimized for different functions. In the case of Cell, one core is focused on the control function and another is focused on the compute function.

MV: Now, why don't we --

PH: I think that really drives a system on a chip type architecture, rather than trying to make a monolithic processor, very big, and keep all the other stuff around it on separate LSIs.

MV: Why don't we try to get into what exactly a Cell processor is, because most people, at least in my opinion, are just now wrapping their heads around dual-core and multi-core. They have a vague idea of exactly what a system on a chip design means, and then out you come with the next generation of something that, you know, when you first hear about it sounds more like science fiction than reality. (Laughter)

PH: Yes. So what Cell really is, is a microprocessor with an integrated memory controller -- in our case, a dual-channel XDR DRAM-based memory controller with a lot of bandwidth, 25.6 gigabytes per second -- and an on-chip coherence fabric, so the organization of the processor is like an SMP on a chip. The architecture is based on the Power Architecture. As you know, this is the IBM line of RISC processors that we use in everything from embedded systems to servers. And then the new element that we've added is the synergistic processor element: a separate, independent RISC processor with a memory unit -- the DMA unit -- that moves data between shared system memory and something we call a local store, and the RISC processor really operates on this local store. And we have eight of these synergistic processors on the single die as well.

And then also connected to the chip fabric is an IO controller -- a highly configurable IO subsystem that we can use either as a coherent interface, so we can build multiprocessor systems with multiple Cell chips, or to connect to bridge chips and get to more standard IOs in the system. So, yes, a lot of different components within a single chip.

MV: So, all right, the core concept, it feels like, is that there are specific processors that can be used for specific functions, and then there are other processors orchestrating the interaction of all those different elements so that you get this holistic system on a chip.

PH: Yes. So the power processor is very much the orchestrator in the system and within the chip. And the synergistic processors are the more compute-oriented, or data-plane, processors. So the memory controller, the power processor, the eight synergistic processors, and the IO controller are all within a single chip. So again, a lot of different components, and very much, I'd say, a strong example of system on a chip. Also -- and this is where Cell may be a bit different from other SOCs -- it is a modular design, but a very highly customized one, so each of these blocks, like the synergistic processor or the power processor, is fully custom designed.

MV: Now, as part of moving to this, am I facing the equivalent of the move from, say, CISC to RISC-type processors, where I'm going to see a significant change in the way a compiler needs to behave? And how does that change the way I need to think about creating applications around this?

PH: Yes. Just as the transition to RISC did, this kind of transition to a non-homogeneous multi-core architecture very much needs its complement in an effort on the software and compiler side. You know, RISC defined a new contract between the hardware and the software. And to some extent, Cell does this, too.

Part of this is a continuation of a path that we're already very familiar with in our server systems. So if you look at Cell as an SMP-type system, there's a whole set of programming techniques that go along with that, which at least in the server world are very well known. But then in addition to that, the synergistic processors have the data movers. You know, some of the very old compilers sometimes knew how to deal with this, but the more modern compilers have to learn and re-learn some of these things.

MV: So are we having a back-to-the-future moment here, where we're going to combine some of the older compiler techniques with the newer compiler techniques?

PH: Yes. Actually, somebody asked me a question not too long ago, what I thought the ultimate architecture was. And my answer was that I didn't believe that such a thing existed because what architecture does is it tries to bridge between the physical properties of the technology that you're working with and the programmer. And as these technologies change, the architecture has to adapt to fill that gap.

And in the case of Cell -- when the microprocessor was introduced, system memory was only a few cycles away, and a demand-driven model of going after that memory and bringing in the data that you needed was fine. The model that we have with Cell, which is more of a shopping-list type of approach, where you go out and get the things you need before you operate on them, is actually similar to what you had in the very early days, when you may have had your main store on a spinning drum or something like that. At that time, your processing capability, though much, much lower than it is today, was also significantly faster than your data store. So some of the techniques that people used to deal with those kinds of systems indeed come back into relevance right now, which is very interesting.
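The shopping-list model Hofstee describes can be sketched in ordinary Python. This is a simulation only -- real SPE code is written in C against the Cell SDK's DMA intrinsics, not like this -- but it shows the shape of the programming model: explicitly fetch the data you need into a small local store, compute on it there, and write the results back to shared memory.

```python
# Simulation of the SPE "shopping list" model: fetch data into a small
# local store, compute on it locally, then write results back.
# Plain Python for illustration; not real SPE code.

LOCAL_STORE_SIZE = 16  # each Cell SPE has a 256KB local store; tiny here

system_memory = list(range(100))  # stand-in for shared system memory

def dma_get(src, offset, n):
    """Fetch a chunk of system memory into the local store."""
    assert n <= LOCAL_STORE_SIZE, "chunk must fit in the local store"
    return src[offset:offset + n]

def dma_put(dst, offset, chunk):
    """Write a processed chunk back out to system memory."""
    dst[offset:offset + len(chunk)] = chunk

def spe_kernel(chunk):
    """The compute step operates only on local-store data."""
    return [x * x for x in chunk]

# Walk system memory one local-store-sized chunk at a time: get the
# "shopping list" of data first, then operate on it.
for offset in range(0, len(system_memory), LOCAL_STORE_SIZE):
    n = min(LOCAL_STORE_SIZE, len(system_memory) - offset)
    local = spe_kernel(dma_get(system_memory, offset, n))
    dma_put(system_memory, offset, local)

print(system_memory[:5])  # first few squared values: [0, 1, 4, 9, 16]
```

In practice the get for the next chunk is issued while the current one is being computed (double buffering), which is exactly the kind of latency-hiding technique that, as Hofstee notes, dates back to drum-memory days.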

MV: How do I build a network out of a Cell processor-based system on a chip design? I've heard some people describe the firmware that sits on top of this architecture as, quote/unquote, floating.

PH: I think that for the most part, once you get to sort of a socket level and go up to systems from there, Cell is a fairly conventional processor. The comment that you heard maybe referred to the fact that these synergistic processors are very easily reconfigured to take on tasks that in other systems might have been taken on by LSIs, which were much less flexible. So you get the kind of system reconfiguration that may have required you to reprogram an FPGA, or even replace LSIs in the system, by simply rededicating an SPE to a different type of task, which gives you the impression of a very flexible and fluid kind of system -- so maybe that was what was behind that.

MV: Are there any downstream unique benefits to security in this architecture? Because a lot of times people point to Microsoft as being the source of insecurity in the systems, but there's a conversation that says the operating system is just dealing with the way the hardware from Intel was built in the first place. So if I look at this architecture, given that there are cell processors for different kinds of functions, can I embed or have a different approach to security?

PH: Yes, we actually do. This is another example of the reconfigurability that I just talked about: we have a function in the architecture -- it's called isolate load -- where a synergistic processor element first clears its internal state and then brings in a piece of code and/or data, authenticates it, and can decrypt it as well. Then, if that code authenticates, it will start executing it, but in a mode where it's very much isolated from the rest of the system. It can still control data coming in and out, but the rest of the system has very little control over the processor at that point.

This has the advantage that even if you do not trust the operating system -- and we actually have a level underneath that, the hypervisor, which allows multiple operating systems to run -- or even the hypervisor itself, the integrity of the synergistic processor in this state depends only on the integrity of the application code running on that processor, rather than the integrity of the operating system or the hypervisor. I think that's a lot more tractable than trying to write a secure operating system or even a secure hypervisor.
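The isolate-load flow can be sketched at a high level as follows. Every name here is hypothetical, and the real mechanism is enforced by the SPE hardware and firmware (with cryptographic authentication), not by application-level software like this; the sketch only shows the order of operations Hofstee describes: clear state, cut off outside control, authenticate, then run.

```python
# High-level sketch of the "isolate load" sequence described above.
# Illustrative names and logic only; the real facility is hardware-enforced.
import hashlib

# Digest of the code the system is willing to run (assumed provisioned
# ahead of time, e.g. burned in or signed by a trusted key).
TRUSTED_DIGEST = hashlib.sha256(b"trusted-application-code").hexdigest()

class SynergisticProcessor:
    def __init__(self):
        self.state = {}
        self.isolated = False

    def isolate_load(self, code_blob):
        self.state.clear()            # 1. clear all internal state
        self.isolated = True          # 2. isolate from the rest of the system
        digest = hashlib.sha256(code_blob).hexdigest()
        if digest != TRUSTED_DIGEST:  # 3. authenticate the incoming code
            self.isolated = False
            raise ValueError("code failed authentication; refusing to run")
        return self.run(code_blob)    # 4. execute only authenticated code

    def run(self, code_blob):
        # Integrity now depends only on this code, not on OS or hypervisor.
        return f"ran {len(code_blob)} bytes in isolation"

spe = SynergisticProcessor()
print(spe.isolate_load(b"trusted-application-code"))
```

The key property, as in the interview, is that once isolation is entered, a compromised OS or hypervisor can no longer tamper with the processor's state; only the authenticated code itself is in the trust boundary.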

MV: So, given all that, what is your expectation for when Cell processor-based system on a chip designs are going to be the mainstream way people go about building the next generation of at least dedicated computers, if not all computers?

PH: I imagine it may take us a little while. The Cell processor, even though it is a very modular design and therefore reasonably easy to reconfigure for different applications, still takes effort -- you know, not as much effort as a new full-custom design, but not as little as a conventional SOC. So I think we would have to improve our ability to quickly build derivatives to change this balance between the volume you need to justify a new design and the number of such designs you can support. But I do believe that the SOC concepts incorporated in Cell will lead us in that direction.

MV: So do you think at the end of the day we'll get system on a chip price points down to a place where they start to essentially marginalize general purpose processor approaches, and maybe over time the general purpose approach disappears or becomes a minor part of the market? Or is it more of a 50/50 split? How do you think it will shake out?

PH: That's a very interesting question, and I don't want to claim to have the answer. I do believe that the answer is probably going to depend more on software than on hardware. I mean, there are a number of fixed costs, right, for doing a design. In particular you need a mask set. But then again, you can put multiple SOCs, if they're small enough, on a single vehicle. So even for relatively low volumes, once your methodology is robust enough that you can count on having first-time-right designs and so on, I can envision a lot of different SOCs being designed and manufactured.

I think what ultimately drives people toward more general purpose systems is software that does not support the different kinds of elements included in such an SOC approach. Just as with processor architecture, to move forward you need to define these contracts between software and hardware. I think that to really move forward, SOCs need to drive toward a similar software/hardware contract. And I don't quite see that yet.

MV: So there needs to be more standardization on the software side, especially among the compilers, around the SOCs.

PH: I believe so. I think that if you really want to change the balance between what is done with SOCs versus general purpose processors -- processors that may look more like SOCs, but would still be recognized as general purpose -- a lot of focus on software, as well as on chip methodology, is going to be needed.

MV: So, last question. What is your best piece of advice for somebody who's going out today to start a project around an SOC approach? What are the "gotchas" that they should be looking for, and what do you think they should be thinking about?

PH: If you can get your hands on an idea that is just fundamentally very sound, there tend to be ways of making it work, and making it work financially. So my advice -- very much colored by my own past, I guess -- is to make sure that you really deeply understand the fundamentals and how they are going to lead you into the future. And if you really understand that, I think it's hard to go wrong.

MV: Great. Peter, thanks for your time today and thanks for joining us, and we wish you the best of luck going forward.

PH: Thanks so much, Mike. I enjoyed talking to you.


Originally published in Queue vol. 5, no. 1

© ACM, Inc. All Rights Reserved.