Interviews


A Conversation with Mario Mazzola

To peek into the future of networking, you don’t need a crystal ball. You just need a bit of time with Mario Mazzola, chief development officer at Cisco. Mazzola lives on the bleeding edge of networking technology, so his present is very likely to be our future. He agreed to sit down with Queue to share some of his visions of the future and the implications he anticipates for software developers working with such rapidly evolving technologies as wireless networking, network security, and network scalability.

Currently responsible for leading Cisco’s overall R&D strategy, as well as managing Cisco’s entire engineering organization, Mazzola has also served both as senior vice president of new business ventures and senior vice president of the enterprise line of business during the 10 years he has been at Cisco. He first came to Cisco when the company he co-founded, Crescendo, was acquired in 1993. Before that, he also co-founded David Systems, which set out to integrate PBX and LAN technologies.

Holding up Queue’s side of the conversation is Stu Feldman, a man with his own impressive communications credentials. Currently the vice president of Internet technology at IBM, Feldman also gained a wealth of experience as a computer science researcher and research manager at Bell Labs, Bellcore, and IBM Research.

STU FELDMAN At this juncture, what’s your overall vision of networking, looking out over the next five years?

MARIO MAZZOLA At Cisco, obviously, networking is really our core business. Even though we’re mostly known for routing and switching technology, we try as much as possible to take a broader view of the networking infrastructure, and we certainly spend much of our time thinking of ways to evolve and expand the reach of networking.

With regard to the global networking infrastructure in particular, we believe it’s going to be important, first of all, to provide a much higher level of scalability. We also think it’s important to embed the level of intelligence required to support new services. In this respect, we believe security capabilities are among the most important.

We, of course, already have security products, including a line of appliances for firewalling and for virtual private networks (VPNs), as well as for intrusion detection and prevention. But moving forward, our intention is to build more and more preventive measures into the networking infrastructure itself. We’ve already started to work closely with a few customers in this respect.

SF What are the hot networking issues you think software developers ought to be thinking about over the next couple of years?

MM The big upcoming challenges, I think, have to do with scalability and flexibility. Solutions that are optimized for a specific environment in a specific context, I think, are probably too limiting in a highly networked context. What’s needed are applications that can scale readily and adapt dynamically to rapidly changing requirements.

Clearly, performance and effectiveness and time-to-market remain very important considerations, but I think global architectures that offer considerable scalability and flexibility in meeting requirements are becoming increasingly crucial.

SF Could you give a few specific examples of the opportunities you’d be looking at if you were developing applications for clients or servers operating in these vast networks?

MM Even within Cisco, we’ve seen our scalability requirements grow from thousands of nodes to many hundreds of thousands of nodes—even millions in some instances. Providing for this sort of scalability will clearly challenge developers. And to whatever degree possible, you want to do this without compromising performance or efficiency.

Personally, I think finding ways to balance between the simplicity and cost-effectiveness required in a specific context and the scalability and flexibility required for work across the network will prove to be quite challenging for the developer.

SF And then there’s the matter of security to tangle with as well. With Cisco’s recent purchase of Linksys, you’re likely to have a major impact on both the enterprise and consumer markets for wireless. Traditionally, you were mainly on the enterprise side of the wireless picture, but now suddenly you’re everywhere. So can I get your views on how application developers should be looking to address both the enterprise and consumer sides of the wireless market—particularly with regard to security?

MM I think it’s obviously going to be very difficult, if not impossible, to address the security requirements of both large enterprises and the home market with a single business model and a single technology.

Still, we believe that given our presence not only in large and medium enterprises but also in the service arena, we have an opportunity with our acquisition of Linksys to expand the value we bring to the party in terms of coherent manageability, coherent provisioning, and consistent security. There are trade-offs, of course, that we are going to address. But ideally, we’ll be able to provide an extra level of manageability and capability that leverages our presence among service providers, while also bridging to the professional environment within enterprises, providing essentially the same level of connectivity and, in certain cases, a number of services that actually connect enterprise and home.

SF Well, that obviously has huge market implications, assuming you really do manage to bridge the telecommute, the home, and the place of business. But what do you think that implies for the rise of new applications or new server requirements behind them?

MM Again, provisioning on a large scale is an area where we are already making important investments. We see that the combination of wireless technology and broadband—with fast access over DSL lines, cable, and long-reach Ethernet—will allow for an important synergy.

My own belief is that wireless will become truly pervasive at the home level. Because all the different types of appliances are so close together in the home, wireless will be very convenient. But we think this calls for a non-intrusive but very effective remote monitoring and management capability—not only to ensure continuity, but also to provide for secure access to resources.

In that regard, we think technologies that so far have been only narrowly applied within certain sectors—storage management in support of virtualization, for example—will become more pervasive so as to allow logical resources to be mapped to physical resources in a way that is completely transparent to the end user. We think all these layers of software will be vital in providing services that will be very important to clients. And we think that the global network, both at the consumer level as well as within large enterprises, has an important role to play in this technical evolution.

We believe it’s important to provide for the mapping of virtualization services and security services from within the network itself, for example, because we feel this will provide for a level of scalability you couldn’t possibly achieve with a single-server type of solution. We also believe that one of the inherent advantages of implementing these capabilities within the network is that you get better mechanisms for not only monitoring performance, but also enforcing policies.
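To make the virtualization idea concrete, the following minimal sketch (all names are hypothetical; this is not a description of any Cisco product) shows a resolver that maps logical resource names to physical targets and applies an access check at the point of resolution, so both placement changes and policy enforcement stay inside the mapping layer rather than at individual clients.

# Minimal illustrative sketch (hypothetical names, not a Cisco API): a resolver
# maps logical resource names to physical locations and applies an access policy
# before revealing the mapping, so physical placement stays transparent to users.

from dataclasses import dataclass

@dataclass
class PhysicalTarget:
    host: str   # physical device that currently backs the resource
    port: int

class VirtualResourceMap:
    def __init__(self):
        self._map = {}      # logical name -> PhysicalTarget
        self._policy = {}   # logical name -> set of allowed roles

    def register(self, logical_name, target, allowed_roles):
        self._map[logical_name] = target
        self._policy[logical_name] = set(allowed_roles)

    def migrate(self, logical_name, new_target):
        # Remapping is invisible to clients: they keep using the logical name.
        self._map[logical_name] = new_target

    def resolve(self, logical_name, role):
        # Policy is enforced where the mapping happens, not at the client.
        if role not in self._policy.get(logical_name, set()):
            raise PermissionError(f"role {role!r} may not access {logical_name!r}")
        return self._map[logical_name]

# Example: a volume moves between arrays without clients noticing.
vmap = VirtualResourceMap()
vmap.register("vol-finance", PhysicalTarget("array-a.example", 3260), {"finance"})
print(vmap.resolve("vol-finance", "finance"))   # served by array-a
vmap.migrate("vol-finance", PhysicalTarget("array-b.example", 3260))
print(vmap.resolve("vol-finance", "finance"))   # same logical name, now array-b

The point of the sketch is that a migration changes only the mapping table; clients keep addressing the same logical name, which is the transparency Mazzola describes.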

SF Switching, if I may, to some more specific security issues, what aspects do you feel—as a practical matter—you’ll actually be able to address: encryption, cryptography choices, management of network nodes and devices? We all know about the default and configuration problems, the choices of which kind of wireless networking to use, for example. But then it gets more interesting as the multiple types of wireless—ranging from house wiring for consumer uses to the 802.11 family—evolve to the 3G-flavored stuff. So, basically, I have two questions for you. First, what’s your view of how the various fundamental modes will evolve? And then, based on that, how do you think people will manage to handle the security problem?

MM This is clearly a very complex, global subject. We think the technology today is reasonably simple, architecturally speaking. For example, it’s possible to provide bulk encryption capability not only for external, very high-speed optical connectivity—as in 10 gigabits and faster—but it’s also possible to have global encryption for all of the internal infrastructure of a company because everything has to funnel through a single switch or router. At the same time, it’s possible to support client encryption at a high level of efficiency. For example, deploying a novel infrastructure that offers the proper level of privacy for each client is achievable in a cost-effective way.

Obviously more challenging are the issues related to the identification and authorization of services and the potential within the context of an enterprise to create distinct classes of users and access privileges—and to be able to handle all this in a very dynamic way, allowing for the benefits of plug-and-play in terms of adds, moves, and changes. Now that is obviously much more challenging to accomplish. And no less challenging is finding a way to provide for a proper level of manageability. We tend to find among our larger customers a preference for interfaces that are more structured, whereas among many small or medium-sized businesses (and even some large enterprises), we find more of a bias toward graphical user interfaces and more intuitive ways of enabling and disabling services.
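As a rough illustration of distinct classes of users and privileges handled dynamically, the sketch below (hypothetical names, not any particular directory product or Cisco interface) hangs privileges off roles, so adds, moves, and changes touch only the user-to-role assignment and never the privilege definitions themselves.

# Hypothetical sketch of role-based access with dynamic adds/moves/changes:
# privileges belong to roles, so moving a user between roles is one operation.

class AccessDirectory:
    def __init__(self):
        self.role_privileges = {}   # role -> set of privileges
        self.user_roles = {}        # user -> role

    def define_role(self, role, privileges):
        self.role_privileges[role] = set(privileges)

    def add_user(self, user, role):
        self.user_roles[user] = role          # "add"

    def move_user(self, user, new_role):
        self.user_roles[user] = new_role      # "move/change"

    def remove_user(self, user):
        self.user_roles.pop(user, None)       # "remove"

    def is_allowed(self, user, privilege):
        role = self.user_roles.get(user)
        return role is not None and privilege in self.role_privileges.get(role, set())

directory = AccessDirectory()
directory.define_role("engineering", {"lab-vlan", "source-repo"})
directory.define_role("guest", {"internet-only"})
directory.add_user("alice", "engineering")
assert directory.is_allowed("alice", "source-repo")
directory.move_user("alice", "guest")         # a plug-and-play style change
assert not directory.is_allowed("alice", "source-repo")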

So one of the challenges for us comes in terms of providing something that is semantically coherent but still manages to accommodate different management styles. And in this respect I have to say security represents one of our greatest challenges. Here again, I have to stress the importance of tightly coupling the management tools we provide with the global security infrastructure. That’s because, traditionally, companies like ours and others have been dealing with security management as part of the global management of a network. In principle, there may be a benefit in doing that, but generally you find that the depth and level of understanding of security requirements don’t typically travel well from a security-oriented group to another group that’s broadly focused on network management. So this is another area for us that is quite delicate and loaded with lots of triggers. But I expect our industry as a whole to make important advances in this crucial area.

Again, I apologize for insisting continuously that scalability is another dimension that must be accounted for when assessing the security challenge. The difficulties encountered in an environment of even several thousand nodes are really quite different from those we can anticipate in an environment in which security and provisioning of services must be handled for millions of nodes.

SF Can I ask you then to comment on just how it is people are going to manage that transition from 10³ nodes to 10⁶ nodes? On the other hand, how can developers and individual homeowners be reasonably assured that their applications are going to run in a secure environment—particularly if they’re not security experts themselves?

MM We tend to think of appliance-based security as being more applicable to smaller environments. Whereas, when you look at fairly large networks, we think real scalability is achievable when the proper level of enforcement and management is embedded in the network infrastructure. So one of our priorities in terms of migration is to provide an appropriate underlying operating system and algorithms and features and commands and management tools. Ideally, that will provide for a high degree of convergence between the basic technologies used in appliances and the basic technologies found in the global infrastructure.
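One way to picture that convergence, purely as a hypothetical sketch (the function and class names below are invented for illustration), is a single rule-evaluation core shared by a standalone appliance and by a module embedded in a switch, so the policy semantics stay identical no matter where enforcement runs.

# Hypothetical illustration of converging appliance and infrastructure security:
# one rule-evaluation core is reused by a standalone appliance and by a module
# embedded in a switch, so policy behaves the same at both scales.

def evaluate(rules, packet):
    """Return the action of the first matching rule, or 'deny' by default."""
    for rule in rules:
        if (rule["src"] in (packet["src"], "any")
                and rule["dst"] in (packet["dst"], "any")
                and rule["port"] in (packet["port"], "any")):
            return rule["action"]
    return "deny"

class FirewallAppliance:
    """Standalone box for a small site: one rule set, one enforcement point."""
    def __init__(self, rules):
        self.rules = rules
    def filter(self, packet):
        return evaluate(self.rules, packet)

class EmbeddedFirewallModule:
    """Module inside a switch: same core logic, applied per ingress port."""
    def __init__(self, rules_by_port):
        self.rules_by_port = rules_by_port
    def filter(self, ingress_port, packet):
        return evaluate(self.rules_by_port.get(ingress_port, []), packet)

rules = [{"src": "any", "dst": "10.0.0.5", "port": 443, "action": "permit"}]
appliance = FirewallAppliance(rules)
switch_module = EmbeddedFirewallModule({"Gi0/1": rules})
pkt = {"src": "192.0.2.7", "dst": "10.0.0.5", "port": 443}
assert appliance.filter(pkt) == switch_module.filter("Gi0/1", pkt) == "permit"

In a real deployment the embedded path would of course run in hardware or in the network operating system rather than in Python; the shared semantics across the small appliance and the large infrastructure are the point of the sketch.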

Toward this end, we already have an interesting program in partnership with other companies to provide firewalling and pervasive intrusion protection in the global infrastructure.

And it’s my personal opinion that with particularly large software projects such as this, it’s typically easier to design for the most general case and the highest degree of flexibility and then to try to scale all this down to smaller environments. To do the opposite, I think, represents a much more difficult challenge.

The tricky part is that it’s difficult to apply the same principle to hardware and systems. Because technologies advance, it’s often easier to increase performance by first applying the new technologies to small systems and then working your way up to the larger systems. In any event, that’s much more likely than drastically reducing the price of something that was built as a very large infrastructure. So what we have is an instance where the signposts don’t necessarily point in opposite directions, but also are not necessarily homogeneous on both the software and hardware fronts.

SF That’s a fascinating observation about bringing technology down on the software side versus pushing it up on the hardware side.

MM We have the benefit of some experience. We have found that it often becomes possible with the proper investments to increase performance and port density and capabilities in a more cost-effective way when you push new technologies up from the smaller platforms rather than the other way around. And generally speaking, the overall interconnect structure of systems tends to be the most difficult to modify and streamline.

We are still very optimistic about advances in silicon, so an approach that realizes interconnect performance improvements through very high-speed serial links, as opposed to major improvements in the infrastructure of interconnected backplanes, is likely to prove more cost-effective.

On the other hand, with software, a design that is too optimized for a certain platform tends to be too rigid and not open enough to the level of expandability and scalability that’s really required.

SF I’d like to ask you to address a couple more topics. One has to do with the huge family of 802.11 standards. We all live on 802.11b and we know 802.11a is coming. But we also know the security provisions of both are pretty bad. Can you tell me what Cisco’s commitment is to 802.11 in all its guises, and what you think the implications of 802.11 are for overall mobile security as we continue to move through all the letters? Or do you think 802.11 is just a bad idea?

MM I will admit this is all somewhat confusing. And there are certainly many people in Cisco who are much closer to all these standards than I am. But, in a nutshell, I think the combination of 802.11a and g will probably end up proving the most pervasive going forward. This is at least what our Ethernet Access Technology Group believes right now.

But we’re actually quite agnostic in these matters. We’ll obviously place our bets as we proceed in allocating our development resources. But the difference here is that we never make such things a question of principle.

Anyway, for now we think the combination of 802.11a and 802.11g has the best chance of being broadly deployed. That said, we’ve also built support for all the other flavors of 802.11 into our products.

SF On the security side, I’d say you face some pretty clear risks, because all of the protocols—except maybe 802.11i—have plenty of known security holes. And then as we move, as you argue, toward network-based, large-scale security and intrusion management, you introduce new targets and new platforms for launching denial-of-service attacks and the like. Are you taking any steps toward reducing these new risks?

MM Yes, this is an area that is particularly important, and I think it’s also an area where we still need to see a lot of innovation. Algorithms that have been applied to devices for years now need to be mapped in a cost-effective way to the large infrastructures. We are only at the very beginning of this, to be frank. But this is one area in which I think we’ll see a great deal of emphasis and effort in the years ahead. I agree with you that one of the current impediments to full deployment of wireless technology within enterprises clearly has to do with an insufficient level of security. So this is an area that will challenge us greatly. Part of that challenge for us will be to push the envelope with new technologies, while at the same time trying to make our new technologies available for standardization.

Therefore, I have to say that if we look at the global challenges ahead of us both in terms of software protocols and hardware technologies, we’re just at the very beginning. And I think that, if anything, the rate at which changes are being made is going to accelerate, not diminish. But that, I think, just means we’re in a very exciting environment. Q


Originally published in Queue vol. 1, no. 3
