With the release of Toy Story in 1995, Pixar Animation Studios President Ed Catmull achieved a lifelong goal: to make the world's first feature-length, fully computer-generated movie. It was the culmination of two decades of work, beginning at the legendary University of Utah computer graphics program in the early 1970s, with important stops along the way at the New York Institute of Technology, Lucasfilm, and finally Pixar, which he cofounded with Steve Jobs and John Lasseter in 1986. Since then, Pixar has become a household name, and Catmull's original dream has extended into a string of successful computer-animated movies. Each stage in his storied career presented new challenges, and on the other side of them, new lessons. In our interview this month, Catmull shares some of the insights he has gained over the past 40 years, from the best way to model curved surfaces to how art and science interact at Pixar.
Interviewing Catmull is Stanford computer graphics professor Pat Hanrahan, a former Pixar employee who worked with Catmull on Pixar's acclaimed RenderMan rendering software, for which they share a Scientific and Engineering Oscar. Hanrahan's current research at Stanford focuses on visualization, image synthesis, virtual worlds, and graphics systems and architectures. He was elected to the National Academy of Engineering and the American Academy of Arts and Sciences and is a member of ACM Queue's Editorial Advisory Board.
PAT HANRAHAN You're one of those lucky guys who got to go to the University of Utah during what was probably one of the most creative times in computing. All these great people were there. You were advised by Ivan Sutherland, and Jim Clark was there, as were Alan Kay, John Warnock, and many other incredible people. What was it like? Did you all hang out together and work together?
ED CATMULL We were all quite close. The program was funded by ARPA (Advanced Research Projects Agency), which had an enlightened approach, and our offices were close together, so it was a great social environment.
Dave Evans was the chairman of the department and Ivan was teaching, but their company, Evans and Sutherland, took all their excess time. The students were pretty much independent, which I took as a real positive in that the students had to do something on their own. We were expected to create original work. We were at the frontier, and our job was to expand it. They basically said, "You can consult with us every once in a while, and we'll check in with you, but we're off running this company."
I thought that worked great! It set up this environment of supportive, collegial work with each other.
PH It seems that you published papers on every area of computer graphics during that era. Your thesis—and I don't even know if you ever published a lot of what was in there—described z-buffering, texture-mapping algorithms, and methods for the display of bicubic surfaces. You wrote papers on hidden-surface algorithms, anti-aliasing, and, of course, computer animation and geometrical modeling. Your interests seem so wide-ranging—not the way typical grad students today approach research, where they'll home in on one particular subtopic and drill down to bedrock on it.
EC I guess I didn't know any better.
PH Did you have a favorite from all of that work? What do you think was your most inspirational thought during that period?
EC For me, the challenge was to figure out how to do real curved surfaces, because other than quadric surfaces, which were way too limited, everything was made up of polygons. So, my first idea was to find a way to actually bend polygons to have them look right.
PH How do you bend a polygon?
EC Well, it was kind of ad hoc. And it largely worked, but as you might imagine, if you don't have a well-defined surface, then you're going to come up with cases that break. So, I figured out how to work with B-spline surfaces, but the difficulty was that it would take way too long to render a picture. At the time, I thought my most ingenious idea was a new method for subdividing a surface. It was the equivalent of a finite difference equation, except that it split the curve in half each iteration. Think of a curve: with a difference equation, every four adds gets you the next point on a cubic curve. I came up with a method such that every four adds got you the point in the middle of the curve. It turns out that for doing subdivision surfaces, it's really fast.
PH Can it be made recursive?
EC It was recursive. I thought this was my neatest idea at the time, and to my knowledge, it actually was. It's an original contribution for difference equations. But as computers got faster, it just wasn't the main problem. I implemented it, and sure enough, it was fast; but it still took 45 minutes to make pictures on the PDP-10.
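The recursive halving Catmull describes can be illustrated with a small sketch. This is not his exact difference-equation formulation; it is a de Casteljau split of a cubic Bézier curve at t = 1/2, which likewise produces the curve's midpoint using only averages (each one an add plus a halving) and can be applied recursively until the pieces are flat enough to draw. The curve and function names here are illustrative, not from the interview.

```python
# Midpoint subdivision of a cubic Bezier curve using only averages.
# Each average is one add and one halving, so the cost per split is
# a handful of cheap operations, in the spirit of the "four adds"
# difference-equation method described in the interview (this is a
# de Casteljau split, not Catmull's exact formulation).

def midpoint_split(p0, p1, p2, p3):
    """Split a cubic Bezier with control points p0..p3 at its midpoint."""
    avg = lambda a, b: tuple((x + y) / 2 for x, y in zip(a, b))
    a = avg(p0, p1); b = avg(p1, p2); c = avg(p2, p3)
    d = avg(a, b);   e = avg(b, c)
    m = avg(d, e)                       # the point at t = 1/2 on the curve
    return (p0, a, d, m), (m, e, c, p3)

def flatten(ctrl, depth):
    """Recursively subdivide; at depth 0 emit the endpoints as a polyline."""
    if depth == 0:
        return [ctrl[0], ctrl[3]]
    left, right = midpoint_split(*ctrl)
    return flatten(left, depth - 1) + flatten(right, depth - 1)[1:]

# Example: subdivide an arch-shaped cubic three times -> 8 segments.
pts = flatten(((0, 0), (1, 2), (3, 2), (4, 0)), depth=3)
print(len(pts))  # -> 9
```

Each level of recursion doubles the number of segments, so a few levels are enough to approximate the curve closely, which is why the approach is so fast for rendering.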
PH Evans and Sutherland Corporation placed heavy emphasis on realtime graphics, but you were willing to step back a little and say, "I know there's this emphasis on making things efficient, but we should explore this frontier of non-realtime stuff, and see what's possible—not just what we can do in realtime at the time."
EC It's true there was a division, but the breadth of the support from Dave and Ivan encompassed both approaches. At the time, I wanted to develop the technology so we could make motion pictures. I had a very clear goal.
PH Was your goal different from the rest of them?
EC Yes, but that was perfectly fine. In fact, Ivan at one point started a film company called the Electric Picture Company. He was going to hire Gary Demos and me, but they couldn't get the funding, so it fell apart. Ivan had an interest in pushing the direction of motion pictures, and he knew that's what my drive was. I was working on the good-looking pictures, and they were working on interactivity.
PH When I first got interested in graphics in grad school, I heard about this quest to make a full-length computer-generated picture. At the time I was very interested in artificial intelligence, which has this idea of a Turing test and emulating the mind. I thought the idea of making a computer-generated picture was a prelim to, or at least as complicated as, modeling the human mind, because you would have to model this whole virtual world, and you would have to have people in that world—and if the virtual world and the people in it didn't seem intelligent, then that world would not pass the Turing test and therefore wouldn't seem plausible.
I guess I was savvy enough to think we weren't actually going to be able to model human intelligence in my lifetime. So, one of the reasons I was interested in graphics is I thought it had a good long-term career potential. I never thought when I entered the field that by the time I died we would have made a fully computer-generated picture, but I thought it would be great fun, and eventually it would happen. Did you ever have thoughts like that?
EC Oh yes, but when I graduated my goal was not as lofty as emulating all of reality; it was to make an animated film. That was more achievable, and I thought that it would take 10 years. This was 1974, so I thought by 1984, we might be able to do it. I was off by a factor of two: it took 20 years. I remember giving talks and saying at the time, "Look at the table in front of you. Nobody has been able to make a picture that comes anywhere close to capturing the complexity of even just that table in front of you." As we started to get close to that, it stopped being a meaningful thing to say. And, of course, now we're way past that.
We believed that achieving the appearance of reality was a great technical goal—not because we were trying to emulate reality, but because doing it is so hard that it would help drive us forward. That is, in fact, what happened. We were trying to match the physics of the real world, and in doing that we finally reached the point where we can create convincingly realistic images. Reality was a great goal for a while. Now we have non-photorealistic rendering and other goals that have supplemented it. For a number of years, animation and matching reality were very useful goals, but I never thought of them as the ultimate goal.
We were also fairly good at analyzing how much compute power it would take, and this was at a time when others were buying Cray computers. We were at Lucasfilm at the time, and the feeling was that if anybody could afford a Cray computer it would be Lucasfilm. But from our point of view it was nuts, because, we asked, "How much compute power will it take to do a whole movie?" The answer was that it would take 100 Cray-1s. We realized we were so far away that we shouldn't even waste time being jealous. There were other things we had to do.
PH That's an interesting story because you had this 10-year vision and you didn't try to do it too soon. If something is too far in the future, typically you can't enumerate everything that must be done to achieve your goal; but you seem to have figured out the steps, and you kept building toward that ultimate goal.
EC I always believed that you need to look at the steps—we had to consider what the computing would provide, the economics, and the software solutions.
And if you look at the underlying infrastructure, even at that time we all knew Moore's law and that it was going to continue into the foreseeable future. We knew there were a lot of things we didn't know how to do, and we could list what they were: modeling, animation, and simulation. Back then, those were the clear problems we had in front of us. We didn't want to waste money on a Cray, because we hadn't solved these other problems.
PH That was a very brilliant analysis. But how did you get the funding? These days it would be really hard to convince somebody to fund me for 10 years. Yet, you've worked with various people—first Alexander Schure, then George Lucas, and then Steve Jobs—and they all seemed willing to invest in long-term development.
EC Interestingly enough, when I graduated from Utah, I tried to go to another university, but I couldn't find one that would buy into that long-term plan. Of course, I had just come out of a place where they did do that, and I always looked at ARPA at that time as being a spectacularly successful example of governmental policy moving things in the right direction. It had very low bureaucracy and trusted that if you give funding to smart people, some interesting things will happen. I came up out of that environment, and to this day I believe it's a great thing to do. That doesn't mean you won't have some failures or some abuses along the way, but that model of funding universities was spectacularly successful.
Unfortunately that wasn't the way it worked at the rest of the schools I was applying to, so I originally got a job doing CAD work at Applicon. Then Alex Schure [founder of the New York Institute of Technology] came along, and he wanted to invest in animation and make me head of the computer graphics department.
He didn't have all of the pieces necessary to do it, but he was the only person willing to invest in it. We had this remarkable group of software people coming to New York Tech, and Alex was essentially supporting them, but the technical people there knew that an element was missing: they didn't have the artists or the other components of filmmaking. Alex didn't understand that. He thought we were the filmmakers, and that rather than being part of a larger thing, we were the solution. Unfortunately, he never got full credit for what he did because of that little bit of a blind spot on his part. He certainly made a lot happen for which he hasn't gotten a lot of credit.
Eventually, I moved on to Lucasfilm, where a very interesting thing happened, which I don't think people quite understand. At the time Lucasfilm was making the second Star Wars film, and the people at ILM [Industrial Light and Magic, Lucasfilm's special effects division] were the best guys in the world at special effects. So George [Lucas] took me to them and said, "OK, we're going to do some computer graphics." These guys were very friendly and very open, but it was extremely clear that what I was doing was absolutely irrelevant to what they were doing.
PH They didn't get it?
EC They didn't think it was relevant. In their minds, we were working on computer-generated images—and for them, what was a computer-generated image? What was an image they saw on a CRT? It was television.
Even if you made good television, it looked crappy by their standards. However, from their point of view, it was George's decision and he could do with his money what he wanted. It wasn't as though he was taking anything away from them—it's just that computer graphics was not relevant to what they were doing on that film.
I look back at this and see that what we had—and this is very important—was protection. When you're doing something new, it's actually fairly fragile. People don't quite get what it is. Sometimes even we don't necessarily get what it is. When people don't get it, and they've got their immediate concerns, it's hard for them to see the relevance. In ILM's case, the immediate concern was to make a movie, and we truly weren't relevant to the job at hand.
Because of that, what we needed was protection at that early stage. That's what we'd had at the University of Utah. ARPA was essentially coming in and protecting "the new," even though we didn't know what "the new" was. It's kind of a hard concept because when most people talk about "the new," they're actually talking about it after the fact. They look back and say how brilliant you were at seeing all this, and so forth. Well, it's all nonsense. When it is new, you don't know it. You're creating something for the future, and you don't know exactly what it is. It's hard to protect that. What we got from George was that protection.
The reason I was thinking of that model was that we had a software project here at Pixar to come up with the next generation of tools, and we assigned that task to a development group. But we had a different problem. The people responsible for our films didn't look at that development group as this sort of odd thing that somebody else was paying for; they looked at the group as a source of smart people that they could use for the film. We had given the development group a charter to come up with new software, and a year later I found out that that whole group had been subverted into providing tools for the existing production.
PH I see. So, they didn't get quite enough protection?
EC They didn't get enough protection, so I started it up again and put a different person in charge. A year later I came back and found that that group had been entirely subverted by production again. So, I thought, "OK, the forces here are much more powerful than I realized."
When we did this the third time, we put in really strong mechanisms to protect the group. We also did one other thing: we brought in a person from the outside who was very experienced in delivering bulletproof software. As we went through this, however, we found that everything was on schedule, but that the deliverables were shrinking in order to stay on schedule. At some point you don't deliver enough to make a film, and in that case the schedule slips. We had someone to keep things on schedule, but he didn't want to deliver the software until it was perfect.
Now we had gone to the opposite extreme, where the protection was keeping it isolated from engaging with the user. That's when we put Eben Ostby in charge because Eben knows what it means to engage. Now we're going through the bloody process of taking new software and putting it in production. But we've been through this before, and we know it's a painful, messy process.
To me the trick is that you've got to realize you have two extremes—full engagement and full protection—and you have to know what you're doing to be able to move back and forth between the two. For me, R&D is something you really need to protect, but you don't set it up with an impermeable wall. There comes a time when you need to go into the messy arena, where you actually begin to engage.
PH It seems that this is true not just in your business but for any software project.
EC Yes, this idea can be applied everywhere.
PH Among the many things that are inspiring about Pixar, and one way you've had a huge impact on the world, is that you changed many people's views of what computing is all about. A lot of people think of computing as number crunching whose main application is business and engineering. Pixar added an artistic side to computing. I've talked to many students who realize that art can be part of computing; that creativity can be part of computing; that they can merge their interests in art and science. They think of computing as a very fulfilling pursuit.
I think you've inspired them because you have these incredible artistic people here, and you have incredible technologists here, and you obviously have an interest in both. What's your view on how art and science interact in a place like Pixar?
EC Two things pop into my mind. The first one comes from being in a position where I can see world-class people on both the art and technical sides. With both groups of people, there's a creative axis and there's an organizational axis of managing and making things happen. If you look at these criteria, the distribution of creative and organizational skills is the same in both groups. People might think that artists are less organized. It turns out that's all nonsense.
PH I agree completely. Most people think scientists are these really precise, rational, organized people, and artists are these imaginative, emotional, unpredictable people.
EC What you find is that some artists actually are that way, and some are extremely precise and know what they want. They're organized. They lead others. They're inspirational.
PH There's an incredible craft to it, too. Both the craft of programming and the craft of art can be very detailed and precise.
EC If you think about the craft of laying out a major software system, you have an architect, and you have a lot of people contributing to it. Well, in a film the director is an architect who is orchestrating contributions from a lot of people and seeing how it all fits together. The organizational skills to do that are similar.
We have production people on films. Well, we have production managers in software who help organize and put things together. They're not writing the code, but they're making sure that the people work together and that they're communicating. There are good ones and bad ones, but the structure is the same.
And just as you can have a bug in software, you can also have a bug in your story. You look and say, "Well, gee, that's stupid!" or "That doesn't make any sense!" Well, yeah, it's a bug!
My second observation about the interaction of art and science is related to the early days of Disney when filmmaking and animation were brand new. This was also part of a technical revolution at that time. They had to figure out how to do color and sound and matting and so forth. They were working out all of those things for many years before the technology matured. Now people look back historically and all they see is the art that came out of it. Very few people pay attention to the role of the changing technology and the excitement of what went on there.
When computer graphics became practical, it reintroduced technical change into this field and invigorated it. I believe that technical change is an important part of keeping this industry vital and healthy.
Yet the tendency of most people is to try to get to a stable place. They just want the right process, which I think is the wrong goal. You actually want to be in a place where you are continually changing things. We're writing our new software system now—strictly speaking, we don't have to do that. I believe the primary reason for doing it is to change what we're doing. We're keeping ourselves off balance, and that's difficult to explain to people. Most people don't want to be in an unstable place; they want to go to the comfort zone. So you're fighting the natural inclinations of most people when you say that where we want to be is a place that's unstable.
PH You're in a very unusual situation to have so much art and science mixed together. It would be nice if software companies and technology companies had more of those two kinds of people involved.
At Stanford we have an arts initiative in place right now, and one reason it's popular is not because everybody is going to become an artist, but because everybody should learn about art and the processes that artists use, such as drawing and sketching and brainstorming. We think that even if you're a mechanical engineer or a computer scientist, if you get exposed to the arts, you'll be more innovative. Art adds so much to technology and science just by encouraging a few different ways of thinking.
EC Here are the things I would say in support of that. One of them, which I think is really important—and this is true especially of the elementary schools—is that training in drawing is teaching people to observe.
PH Which is what you want in scientists, right?
EC That's right. Or doctors or lawyers. You want people who are observant. I think most people were not trained under artists, so they have an incorrect image of what an artist actually does. There's a complete disconnect from what they do. But there are places where this understanding comes across, such as in that famous book by Betty Edwards [Drawing on the Right Side of the Brain].
The notion is that if you learn to draw, it doesn't necessarily mean that you are an artist. In fact, you can learn how to play basketball, but that doesn't mean you can play for the Lakers. But there's a skill you can learn. The same is true with art. This is a skill to learn—for observation, for communication—that we all should have.
The second thing is that there is a notion that creativity applies only to the arts. Those of us who are in the technical areas realize the creative component in those areas. The things that make people creative are very much the same in both fields. We may have different underlying skills or backgrounds, but the notion of letting go and opening up to the new applies in the same way.
PH When I used to run into you, you would always be carrying around some math book, and I always got the sense you wanted to work on a few more cool math and technical problems. Did you ever have a chance to do that? I know you're busy, but is there anything like that you're looking forward to working on?
EC Well, yes, there still is one. I spent a lot of time trying to get an intuitive feeling for complex numbers. Even though I got good grades in complex analysis when I was in college, I always felt that I was missing some feeling for it. I wanted to have the feeling for how it worked, but all the books explained things in the same way. Part of it was the terminology; for example, I think the word imaginary is the wrong word. If they called them rotation numbers it would have been an easier thing to get. Basically, I was trying to get at it, and I couldn't find it in a book, although I did find one exception: Visual Complex Analysis, by Tristan Needham [Oxford University Press, 1999]. He had been a student of Roger Penrose [at the Mathematical Institute].
PH I remember you pointed that book out. That is a fabulous book.
EC The unfortunate thing was that he got pulled into administration. I felt he should have written another book, because he was such an inspiration in the way he thought about those things.
PH It was such a visual book, too. It had lots of great diagrams. You got the essence of the ideas; it wasn't just a bunch of derivations.
EC I got concepts from it that I never got from any other work on analysis. So, I sat down and wrote a program to try to understand this space. I graphed complex series, and then I would look at the complex plane. In doing that, I began to see things, and understand things, that I didn't see before.
I recognize that a lot of physicists actually get that. It took a while to realize that some of the terminology was just about the rotation, but because of the way they approached it, I didn't make that mental leap to get there.
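The "rotation numbers" intuition is easy to see in code. The sketch below (my example, not from the interview) uses Python's built-in complex type to show that multiplying by i rotates a point 90 degrees counterclockwise in the complex plane, and that multiplying by e^(i*theta) rotates by any angle theta.

```python
# Complex multiplication as rotation: the intuition behind thinking of
# imaginary numbers as "rotation numbers."
import cmath

z = complex(3, 1)            # the point (3, 1) in the plane
print(z * 1j)                # -> (-1+3j): the point rotated 90 degrees

# More generally, multiplying by e^(i*theta) rotates by theta.
theta = cmath.pi / 2         # 90 degrees, so the result should match above
rot = cmath.exp(1j * theta)  # approximately i (up to floating-point error)
w = z * rot
print(round(w.real, 9), round(w.imag, 9))  # -> -1.0 3.0
```

Graphing a series of such products, as Catmull describes doing, makes the rotational structure of the complex plane visible in a way that symbolic manipulation alone does not.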
Then I was trying to get to the next place, which was: what does it mean to think about matrices of complex numbers? That's when I ran out of time, but there were some interesting things that I wanted to do in hyperbolic spaces having to do with relativity and physics. For years I wanted to do that, and I still believe there's something there.
© 2010 ACM 1542-7730/10/1100 $10.00
Originally published in Queue vol. 8, no. 11.