Arm Your Applications for Bulletproof Deployment: A Conversation with Tom Spalthoff

Companies can achieve a reliable desktop environment while reducing the time and cost spent preparing high-quality application packages.

Queue: Hello, this is another edition of the ACM Queuecast Premium Edition with your host, Mike Vizard. Joining me today is Tom Spalthoff, a systems engineer at Macrovision, the leading provider of application packaging solutions. Tom, welcome to the show.

Spalthoff: Thanks, Mike.

Queue: What is the fundamental problem that people run into when they distribute applications for deployment? We spend an inordinate amount of time focusing on building applications, and yet what seems to hang people up most is when the application actually gets rolled out and deployed.

Spalthoff: Sure, I mean it's great to use the automated deployment solutions on the market today, the SMSes and ZENworks and LANDesks and Tivolis of the world, but what people tend not to pay close enough attention to is making sure that what they're deploying works and won't cause problems once it gets installed on those desktops. When you push that button to deploy to hundreds or thousands of workstations all at once, you want confidence that the application has been tested, that it's not going to put you into DLL hell and conflict with other apps that are already on those machines.

So it's really the customization and testing of those packages that becomes critical.

Queue: So why is it that we don't do enough of that testing? Is that just an overlooked part of the process? Or, is there something in the way that the process is structured that kind of pushes us to overlook this part of the process?

Spalthoff: Well, it's hard, right? The bottom line is that accurately testing a new application against all of your baseline images and all of the applications it could potentially coexist with is a very hard problem. That's where taking advantage of Windows Installer technology and Microsoft's work in the MSI realm makes sense, because by moving to MSI, the details of an installation are made public. They are stored in the tables of an MSI, and through solutions like Macrovision's AdminStudio, we can load those details into a central repository -- what we call an application catalog -- which facilitates testing.

So now that I know everything about what an installation is going to do, the files it's going to lay down, the registry settings -- all of that stuff -- I can essentially run queries on that to say if I were to lay down this new application that I've just gotten with this baseline image, with these patches, with these other applications, what will happen? Will it overwrite things? Will it conflict with things?
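To make that concrete, here is a minimal sketch of the kind of query being described, using Python's standard-library msilib module (Windows-only, and removed as of Python 3.13) to list the files and registry settings an MSI would lay down. The path app.msi is a placeholder, and this illustrates reading MSI tables generally, not Macrovision's actual tooling:

```python
# Sketch: enumerate what an MSI will install by reading its File and
# Registry tables. Requires Windows and Python's msilib module.
import msilib

def fetch_all(db, sql):
    """Run an MSI SQL query and yield each result record."""
    view = db.OpenView(sql)
    view.Execute(None)
    while True:
        try:
            rec = view.Fetch()
        except msilib.MSIError:   # older Pythons raise at the end of the rowset
            break
        if rec is None:           # newer Pythons return None instead
            break
        yield rec
    view.Close()

db = msilib.OpenDatabase("app.msi", msilib.MSIDBOPEN_READONLY)  # placeholder path

print("Files this package lays down:")
for rec in fetch_all(db, "SELECT FileName, Component_ FROM File"):
    # FileName may be stored in 'short|long' form; keep the long name.
    print("  {} (component {})".format(rec.GetString(1).split("|")[-1], rec.GetString(2)))

print("Registry settings it writes:")
for rec in fetch_all(db, "SELECT Root, Key, Name FROM Registry"):
    print("  root={} {}\\{}".format(rec.GetInteger(1), rec.GetString(2), rec.GetString(3)))
```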

And then there are more basic testing things: is it a well-formed MSI, does it pass the Microsoft validation rules, is it ready for Vista? Are there corporate standards about not putting icons on the desktop or things in the Startup folder? Can I test all of these things so that when I do push that button to distribute those applications out to the desktops or workstations, I have confidence it's going to work the way I intended it to and look the way I want it to look?
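One such corporate-standards check can be sketched directly against the MSI's Shortcut table, whose Directory_ column records which folder each shortcut lands in; flagging the standard DesktopFolder and StartupFolder properties catches the violations just mentioned. Again, this is an illustration, not a real product feature:

```python
# Sketch: flag shortcuts an MSI would place on the desktop or in the
# Startup folder -- one example of enforcing a corporate packaging standard.
import msilib

FORBIDDEN_DIRS = {"DesktopFolder", "StartupFolder"}  # standard MSI folder properties

def check_shortcuts(msi_path):
    issues = []
    db = msilib.OpenDatabase(msi_path, msilib.MSIDBOPEN_READONLY)
    try:
        view = db.OpenView("SELECT Shortcut, Directory_ FROM Shortcut")
    except msilib.MSIError:
        return issues             # the package defines no shortcuts at all
    view.Execute(None)
    while True:
        try:
            rec = view.Fetch()
        except msilib.MSIError:   # older Pythons raise at the end of the rowset
            break
        if rec is None:           # newer Pythons return None instead
            break
        if rec.GetString(2) in FORBIDDEN_DIRS:
            issues.append("shortcut '{}' lands in {}".format(rec.GetString(1), rec.GetString(2)))
    view.Close()
    return issues

for issue in check_shortcuts("app.msi"):  # placeholder path
    print("STANDARDS VIOLATION:", issue)
```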

Queue: I guess I wonder if there's something fundamentally amiss in the process, because on the one hand we have application developers who spend all their time building applications. Then we have a different set of people, I guess we'd call them setup authors, who actually manage the deployment of the applications, and they kind of work in different environments and don't really communicate a lot.

Spalthoff: Yes. I mean, that's certainly one half of the equation. Particularly with financial institutions, we see a significant amount of internally developed applications occupying the desktops; as much as 50 percent of what a financial institution might be managing is internally developed.

Other organizations may see a much smaller number and deal mostly with commercial apps.

But using tools like InstallShield, which is part of AdminStudio, you can use wizards to take development projects and build them directly into MSIs. There's nothing worse than an internal group developing an application and, thinking they're doing you a favor, creating a nifty setup for you that you then have to go back and repackage, because you don't know what it contains, you don't know what it's going to do, and you just can't release it without understanding that stuff first.

So yes, taking advantage of tools that automatically build MSIs, and then feeding those into your testing process along with the rest of your MSIs and applications, is usually a more streamlined process and will get those apps out quicker.

Queue: Is there a value proposition in coming up with a standard set of tools for doing the testing, one that helps bridge that divide?

Spalthoff: Yes. What we typically see is that the testing process is being circumvented: 'hey, we just don't have time,' 'we don't have the luxury of re-imaging workstations, loading the apps, and testing them six ways to Sunday.' So either it's not getting done, which is more of that hand grenade approach: just throw it out there and hope the Help Desk is prepared to handle all the calls that might come in.

Or you've got to make use of automated tools like the ones we provide, in order to do that testing in real time and still meet your commitments, service-level agreements, and things like that with your end customers.

But you said something interesting as well. The process becomes critical, particularly as packaging teams grow. Once you've got more than one or two people doing this, how you communicate what you're working on, where it is in the process, and how you report back to the folks who are requesting these things becomes more and more important. We see folks employing solutions like our Workflow Manager tool, which integrates with the packaging tools and allows you to institutionalize your processes right into the tools. As you bring new packagers on, or bring in consultants or contractors to do packaging work, you can really dictate the process they're going to follow, so that regardless of whether it's Jane or Joe or Ginny or Jim doing the packaging, you have confidence that what comes out the back end is going to be constructed the same way, tested the same way, and will behave in the fashion that you expect.

Queue: You mentioned that hand grenade approach, and I guess maybe 10 years ago you might have been able to get away with that, but my impression at the moment is that there's a 'patch du jour,' an update every week that needs to roll out because of some security issue, and you just can't get away with rolling out the hand grenades any more.

Spalthoff: Well, you just can't take the chances, right? With the compliance rules that are now in effect, you can't not know what an install is going to do to a machine when it gets there. You have to know what files it's going to lay down. You have to know what kind of potential holes it might open up. We see that with our Patch Impact Manager tool as well: if I can get the details of those Microsoft patches and load them into that central repository that contains all the information about my baseline OSes and my applications, I can know with a great deal of certainty what kind of impact those patches are going to have. If a patch is not going to wreak havoc on my environment, I can get it out as quickly as possible, and if it is going to impact things, at least I know where those impacts are and I can focus my testing.

With patches, we see the same thing. Patch Tuesday rolls around, and people get the information from Microsoft. They have a maintenance window on Saturday night, so they've got three days to test. They do as much testing as they can reasonably do in that time, and they hope for the best. With tools like ours, you can have a much higher degree of confidence that you're not throwing that hand grenade, that you're actually doing exactly what you intend to do when you roll out those patches, and that makes your environment safer.
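A hedged sketch of that patch-impact idea: assume the patch's file payload and the application catalog have already been extracted (for example, with MSI table queries like those sketched earlier). The package names, file lists, and data structures here are all illustrative:

```python
# Sketch: which deployed packages does a patch touch? The catalog and the
# patch payload below are illustrative stand-ins for data a real tool
# would extract from MSI tables and from the patch itself.

# package name -> set of (lowercased) file names it installs
catalog = {
    "AccountingApp 3.1": {"acct.exe", "gdiplus.dll", "msvcr71.dll"},
    "HR Portal 2.0":     {"hrp.exe", "xmllite.dll"},
}

patch_payload = {"gdiplus.dll", "xmllite.dll"}  # files the patch replaces

for package, files in sorted(catalog.items()):
    touched = files & patch_payload
    if touched:
        print("{}: focus testing here; the patch replaces {}".format(package, sorted(touched)))
    else:
        print("{}: no overlap with this patch".format(package))
```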

Queue: Right, and beyond the security risks, the Help Desk isn't going to take that kind of abuse any more. The minute there's a problem, they start screaming up and down the line that there's a problem with the application. They don't distinguish between a problem with a deployment methodology and a problem with the application. They just say the application is broken.

Spalthoff: Well, yes, and the lines get very fuzzy, right? What went wrong? Was it the deployment tool that did something bad, or was it the application itself? So you're right. Increasing that confidence makes everybody's life a little easier, because there are far better things for an IT team to be doing than running to desktops and trying to fix problems that might have been prevented with better testing.

Queue: How do people solve those problems today if they don't have a testing tool like yours in place? Are they doing a lot of individual testing, more of a hit-or-miss approach? I guess my question is, if you take that approach, how do you scale when you're talking about hundreds and sometimes thousands of applications?

Spalthoff: Yes, it's very difficult. What we typically hear is that folks have a ghost image with their base OS and their core applications on it. If they get a new app, they'll ghost a machine, load the app, and see what happens, and if everything looks okay, maybe they'll grab another baseline OS and try it on that as well.

Typically then they pilot it. So they'll send it out to a handful of people, and if that goes okay, then they'll broaden it.

But the end result is that it just takes longer and longer to do all these things. So it's a difficult public relations issue for these internal groups, where, for better or for worse, folks say, look, I get a CD at home, I load it up on my machine, and it just works. It takes 10 minutes and I'm done. Why does it take two weeks when I get a new application upgrade or a new patch before I can get it loaded onto my desktop?

So to the extent that you can shorten that cycle and do as much testing as possible in as short a period of time, the happier people are and, ultimately, the smaller the packaging team you need. By using automated tools, those economies of scale pay off both in the productivity of your IT team and in improved uptime for the apps and the desktops that you're supporting.

Queue: There's also a new trend out there where people are talking about the need for a Web-based model built around iterative development, where I'm sending out new features on an almost monthly or quarterly basis, almost like a software-as-a-service kind of model, rather than doing a full-bore 'here's my upgrade to this application' once every 18 months. That means the testing cycle has to be continuous, right?

Spalthoff: Yes. The idea that these are one-time deals is a fallacy. You've got to be prepared, and you've got to have a solid process in place, because these updates are coming, whether it's monthly or quarterly or annually. The goal is to not have them be disruptive to your workflow and your day-to-day operations, and to just have them go through a prescribed process.

If an application vendor is producing an MSI, even if it's monthly, then loading it in, running a battery of automated tests against it, and deploying it is not an onerous process if you're using automation. If you're doing that stuff manually and it's going to take you a week to test, and you've only got three more weeks before the next one comes out, now you've got serious productivity issues.
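That battery of automated tests can be as simple as a list of check functions run over each incoming MSI. A minimal sketch, again using Python's msilib; the single check shown (required Property-table entries) is illustrative, and real suites are far richer:

```python
# Sketch: a tiny test-battery driver. Each check takes an MSI path and
# returns a list of issue strings; an empty report means "ready to deploy."
import msilib

def get_property(msi_path, prop):
    """Read one entry from the MSI Property table, or None if absent."""
    db = msilib.OpenDatabase(msi_path, msilib.MSIDBOPEN_READONLY)
    view = db.OpenView("SELECT Value FROM Property WHERE Property = '%s'" % prop)
    view.Execute(None)
    try:
        rec = view.Fetch()
    except msilib.MSIError:   # older Pythons raise when no row matches
        rec = None
    view.Close()
    return rec.GetString(1) if rec is not None else None

def check_required_properties(msi_path):
    required = ("ProductName", "ProductVersion", "ProductCode")
    return ["missing required property " + p
            for p in required if not get_property(msi_path, p)]

def run_battery(msi_path, checks):
    report = []
    for check in checks:
        report.extend(check(msi_path))
    return report

issues = run_battery("vendor_update.msi", [check_required_properties])  # placeholder
print("PASS" if not issues else "\n".join(issues))
```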

Queue: So how does your tool actually work in terms of testing and what's the process and how long does it actually take to test something? Is that dependent upon the application size or what?

Spalthoff: There are a couple of variables, but the time is measured in minutes and hours, not days and weeks. A typical testing scenario for a decent-sized enterprise usually starts with repackaging: if it's still a legacy app and not an MSI, you need to get it into MSI format. So we provide repackaging tools that will essentially do that conversion.

Once it's an MSI, there are typically standards within an enterprise that say, look, here's how we want the entry in Add/Remove Programs to look; we don't want anything in the Startup folder, we don't want desktop icons, we don't want Quick Launch buttons -- we don't want any of that stuff.

So we're going to set up standards and a template and apply that to every application we're going to deploy. Typically there's some customization that goes on. Once it's set up like that, the MSI gets loaded into our application catalog so that validation rules can be run against it -- is it a well-formed MSI by the standards Microsoft has established? -- and we'll run it through our conflict analysis so that we can say, hey, what applications might this conflict with if it were to coexist with them on a desktop?

And then we've got a variety of ways we can remediate those conflicts so that problems don't occur.
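A greatly simplified, hedged sketch of file-level conflict analysis: compare two MSIs and flag files that would land in the same directory under the same name but with different versions -- the classic DLL-hell pattern. The real analysis covers registry keys, components, merge modules, and much more:

```python
# Sketch: naive file-level conflict analysis between two MSIs.
import msilib

def file_map(msi_path):
    """Map (directory, long file name) -> file version for one package."""
    db = msilib.OpenDatabase(msi_path, msilib.MSIDBOPEN_READONLY)
    view = db.OpenView(
        "SELECT File.FileName, File.Version, Component.Directory_ "
        "FROM File, Component WHERE File.Component_ = Component.Component")
    view.Execute(None)
    files = {}
    while True:
        try:
            rec = view.Fetch()
        except msilib.MSIError:   # older Pythons raise at the end of the rowset
            break
        if rec is None:           # newer Pythons return None instead
            break
        name = rec.GetString(1).split("|")[-1].lower()
        files[(rec.GetString(3), name)] = rec.GetString(2)  # version ('' if unversioned)
    view.Close()
    return files

def conflicts(new_msi, deployed_msi):
    new, old = file_map(new_msi), file_map(deployed_msi)
    for directory, name in new.keys() & old.keys():
        if new[(directory, name)] != old[(directory, name)]:
            yield "{} in {}: {!r} would become {!r}".format(
                name, directory, old[(directory, name)], new[(directory, name)])

for c in conflicts("new_app.msi", "deployed_app.msi"):  # placeholder paths
    print("CONFLICT:", c)
```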

From there, we've got a couple of other testing tools to do lockdown testing, to do Vista-readiness testing, and to check that the file associations work right.

And what's nice is that through some of our automated tools and our package expert feature, you're able to automate that stuff and say, look, here's the battery of tests I want to run against any application that comes through here, and post the results. In many cases, if there are common issues with the MSI, we can automatically fix them.

So you might have 80 problems or 80 issues with something, and 77 of them can be automatically resolved, which allows you to focus your remediation efforts on just the three remaining issues.

So, at the end of the day, we're going to automate pretty much everything that you would otherwise do by loading the app onto a clean machine or onto your baseline OS, and keep the results with the application in the application catalog, so you've got a nice history, an audit trail of what happened, what changed, why it changed, who made the changes, and all that stuff.

That process is somewhat contingent on how many applications there are. If you're testing against a couple of dozen apps, it runs quicker than if you're testing against a few thousand, but you're still talking about processes that run for minutes and potentially hours, not days and weeks.

Queue: You mentioned Vista. So I guess this is a good time to bring up a pending shift that we're seeing, moving from Windows XP to Vista.

Is there anything unique about rolling out applications in a Vista environment from a testing perspective that people should be aware of?

Spalthoff: Well, one of the more important things you'll want to pay attention to is the User Account Control stuff. Vista adds additional granularity to the permissions that each user can have on a machine, so it gives administrators a lot of flexibility in how to lock down a machine. It's more than just an on-off, locked-down-or-not switch; you've got different levels in there.

So you do need to pay attention to how an application interacts with User Account Control. The Restart Manager feature in Vista also gives you some functionality that may prevent you from having to restart a workstation when you're installing your applications.

Checking for those things and taking advantage of them is typically something you're going to want to do during that repackaging process. With our tools, we run a battery of tests and we'll flag things: if an install is going to try to force a restart, can we make some changes so that the Restart Manager is employed and that doesn't happen?

And if there are going to be issues with permissions and User Account Control, we can flag those before you actually deploy rather than after.
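Both signals mentioned here are real Windows Installer features that can be probed from an MSI before deployment, though the script itself is only a sketch: the MSIRESTARTMANAGERCONTROL property can opt a package out of Restart Manager (forcing legacy reboot behavior), and bit 3 of the summary-information Word Count declares that the package installs without elevated privileges under UAC:

```python
# Sketch: two quick Vista-readiness probes on an MSI.
import msilib

db = msilib.OpenDatabase("app.msi", msilib.MSIDBOPEN_READONLY)  # placeholder path

# 1. Does the package opt out of Restart Manager (forcing legacy reboots)?
view = db.OpenView(
    "SELECT Value FROM Property WHERE Property = 'MSIRESTARTMANAGERCONTROL'")
view.Execute(None)
try:
    rec = view.Fetch()
except msilib.MSIError:   # older Pythons raise when no row matches
    rec = None
view.Close()
if rec is not None and rec.GetString(1) in ("Disable", "DisableShutdown"):
    print("Restart Manager integration is disabled; expect reboots.")

# 2. Does the package declare UAC compliance? Bit 3 (value 8) of the
# summary-information Word Count means no elevated privileges are required.
suminfo = db.GetSummaryInformation(0)
word_count = suminfo.GetProperty(msilib.PID_WORDCOUNT) or 0
if not (word_count & 8):
    print("Package does not declare UAC compliance; it may trigger elevation.")
```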

And what's nice is that as we talk to folks, we hear everything from 'yes, we're going to Vista as soon as we can; this summer we're going to start rolling it out' to 'maybe someday, in a couple of years, we'll do that.'

But if you're using these kinds of tools, getting your apps ready for Vista now can just be part of the process you go through, so that when somebody from above says, okay, now is the time to go to Vista, at least the application portfolio you're managing is ready, and it's not a massive undertaking to migrate all of those apps from XP to Vista.

You can say, well, as part of our packaging process, we've been making sure, as new updates come in, that the apps are ready for Vista now.

Queue: Are there going to be challenges in supporting a mixed Vista/Windows XP environment? Because I don't think I know of anybody who's going to go all Vista overnight.

Spalthoff: You know, it's similar to when people went to XP. The same kind of drawn-out migration we heard about when people were moving from 2000 to XP is what we're expecting when people move to Vista. That's why having your testing plans in place, and having your different baseline OSes loaded into an application catalog so you can test for all of those environments, will save you a lot of time.

If you think about an environment where you do have multiple OSes and multiple baselines for different groups within the organization, that testing process can be pretty onerous if you're trying to do it manually.

Queue: So last question, Tom. What would be your best advice to people as they approach the entire testing process and think about how to actually minimize their cost, their security exposure, and just the amount of time that they themselves have to put into the project?

Spalthoff: Well, I would start by focusing on the process. There are a good number of resources available today on how to prepare applications for deployment. Microsoft's Business Desktop Deployment materials and its Solution Accelerator program go a long way toward establishing baselines for the things you'll want to do to prepare applications for deployment. So look at those, and then automate as much as you can. The tools are available; embrace some of these testing platforms so that you can do the testing as quickly as possible and don't have to skip that step before you deploy.

Focus on the process so that it's repeatable and measurable. That way, when you realize you've got way too much work for the people on your staff to perform, you've got metrics that people can point to and say, yes, we need to add staff, or, conversely, we've got headcount we don't need. At least you've got some hard numbers you can point to. Then let the technology do the heavy lifting wherever possible.

Automate the tests, make sure that you've got a central repository of all your applications to test against, and of course always use MSI.

Queue: This has been another edition of the ACM Queuecast with your host Mike Vizard. This edition has been sponsored by Macrovision, the leading provider of application packaging solutions. Please visit www.Macrovision.com for more information, and Tom, I'd like to thank you for being on the show!

Spalthoff: My pleasure, Mike, thanks for having us.


Originally published in Queue vol. 5, no. 2

© ACM, Inc. All Rights Reserved.