
interview: steven b. lipner


Steven B. Lipner is a senior security analyst for the Microsoft Corporation.
Among all the current concerns about the security of information technology, you keep hearing suggestions that big companies like Microsoft should be taking a lead by improving the security of the software that they sell. How valid is that?

Well, I would say that companies are obligated to provide secure software for customers who want to operate on the internet, and that's something we're doing. One of the big things that we did with Windows 2000, the new operating system that we shipped in February, was to commit to, and then honor the commitment, that we would not ship with any known security vulnerabilities. Basically, security was a showstopper issue for that product. If there was a security vulnerability that was discovered in the product, the development team stopped ship or delayed ship until they had resolved that issue.

But you don't even need a password for Windows 2000?

Windows 2000 supports encrypted passwords and a wide variety of security features. The user can configure a Windows 2000 security feature so that it doesn't require a password. That is something that we leave to the user. What we say is that we don't want to legislate morality for our customers. If you have a system that's in your home, that's not connected to the internet or that's used only as an internet client, under certain circumstances, you know that not having a password may be a perfectly reasonable thing for you to do. On the other hand, on my system that operates on the internet 8, 10, 12 hours a day, and can be a server as well as a client, I have a strong password, and we've configured that system using out-of-the-box features to require a very strong, complex, and long password. So we give our customers the flexibility to use our systems as securely as they need them. But we also give them the choice to make that trade-off between functionality and security. And that's a fact of life.

. . . Another criticism is that when Windows NT is delivered to the end user, all ports are open, and that creates a situation where any burglar could slip through in many, many places. Is that putting too much of an onus on the person who buys the product?

First, the specifics of services that are running by default are something that's a little more complicated than "all ports are open." But more to the point, we include a set of security templates in the box, and in particular with Windows 2000. There are security configuration editor scripts, so that with a single click, you can say, "This is a system I'm going to use in this particular way as a domain controller, as a server, as a workstation, with high security, medium security, low security." Then basically we apply those configuration templates, those scripts, to lock the system down in a way that's appropriate to its usage, and we give customers that flexibility.

We also operate one of the major web sites on Microsoft.com, Microsoft Security. We disseminate configuration templates and scripts through that web site so that customers can see what their operations are and have that choice.

. . . The consequences of any sort of slip-up lie almost exclusively on the buyer. Because of the warranty and liability issues, how long is that situation going to continue? How long before some outside agency says, "Well, I think it's time for the company to take on a bit more of the risk?"

The risk of security issues is always a shared one between the user and the supplier. Technology users have a responsibility because they're setting up the systems. They know what the environment is. They know what their requirements are. They know how sensitive their information is. And so they have the ultimate choice of configuring the system and installing it. We have an obligation to provide products that can be used appropriately by our customers, and that definitely includes security. We believe that if we don't provide a level of security that our customers need and demand, the market will tell us that. And in terms of security, the market certainly has increased the priority of security over the last five to ten years of explosive growth of the internet. . . .

In a prior job at another company, I had built what the US government called an A-1 system, a system that was as secure as the US Defense Department knew how to make it. And we put years and millions of dollars into doing that. And then, at the end of that development project, I made the decision to cancel it, because nobody wanted to buy it.

And the moral of the story is . . .

The moral of the story is that usability, flexibility, and security are a set of trade-offs and customers don't want systems that are so secure that they can't use them. They want systems that are secure and that they can use. . . .

One frequent consumer criticism is that the business is driven by marketing, and that security is assigned a really low priority on the front end.

Today, customers need secure systems. That sort of criticism would basically say that it would be silly for us to have a policy that we're going to stop ship if we have a security vulnerability. But that's a policy that we did have with Windows 2000.

Are you telling me that the criticism is obsolete?

I can't talk about the past. I've only been with Microsoft about a year. But certainly, in my experience, there has been the policy, particularly with Windows 2000, but with other products as well. If we find a vulnerability, that's a showstopper condition. We won't ship.

. . . Can there ever be a totally secure system or a bug-free system?

I don't believe so. When we attempted to build an A-1 system as secure as man knows how to build, I'm sure that we had vulnerabilities and bugs left in it. What we know how to do is to make it better. Every year . . . our tools get better and then we commit ourselves, as we have, to correcting the vulnerabilities that are left. . . .

Hackers say that a virus a month is being developed. Can you keep up with that?

We do keep up with that. We'd rather have fewer vulnerabilities, and we're making progress on that score through some of the tools that we apply during development. But when vulnerabilities are found, the test then for a vendor is what do you do about it? And we don't cover them up. We don't try to deny them. We acknowledge them. We fix them as fast as we can. We find other vulnerabilities that are related to that one that we may not have considered, and we fix those. Then we send email out to 120,000 people that says, "We found this vulnerability. Here's what the details are, protect yourself. Go download the patch." And we do that in a very open and forthright way.

Just today, a critic of the software industry was pointing out a couple of bugs out there, like "buffer overrun" . . . that hackers can exploit. How come they're getting away . . . with that? You haven't been able to get at them?

Buffer overrun is just a fancy way of saying that I supply more data than you allocated storage for. And when I do that, bad things can happen. We've built automated tools into our development process that allow us, as we build new versions of our software, to automatically detect and eliminate buffer overruns. Windows 2000 benefited from that to some extent. . . . I'm not going to say there's never going to be another buffer overrun in a Microsoft piece of software. But what I will say is that technology is a response to buffer overruns. . . . We're applying technology and we're making real good progress on that promise.

A cynic might say that bugs are good for business--that Microsoft will keep developing new products for each bug.

Well, in fact a cynic might say that, but I would love to never issue another security bulletin in my life.

What more can you do about issuing patches? . . . The patches may be there, but the average person is not hearing about the patches, and isn't using them.

That is a real concern for us. One of the things that makes me sad is when somebody gets hit with a vulnerability that we have already corrected. When we do a security patch, we post it to our web site. We send out email to a list of folks who subscribe to our mailing list, which 125,000 people have done. In addition, we send out that email to the security interest mailing list, and they re-distribute it. The other thing that we do is to take those security updates and post them to what I call the "Windows Updates" and "Office Updates" sites. Those are automated web sites. A customer's machine will pretty automatically go to those sites, check for the availability of the new patches, and install them. I would love to have more mechanisms. That is one of the things that we worry about constantly--how do we get the word out better? How do we get customers to install the patches? For the average consumer, signing up for Windows Update, getting the . . . notification, and installing the patches that Windows Update tells you to install is a real good way to keep safe. . . .

Hackers frequently find bugs in Microsoft products before you do. How important are those hackers in the whole picture?

We want to find vulnerabilities and issues in our product from any source, and we want to take action to keep our customers safe. We welcome the reports from those customers. They send mail to secure@Microsoft.com. We correspond with them. We evaluate every report that comes in, and if it is a bona fide vulnerability, we fix it. So they're a real source of information and ways that we can help keep our customers safe. We do ask them when they report to keep those vulnerabilities private until we can fix the problem, assuming there is one. We do that because we think our customers are best served by having a complete packaged finished solution that we put out on our web site. If the hacker, if the security researcher works with us, we acknowledge him in the bulletin that results. Microsoft works with hackers to protect our customers, and we like protecting our customers.

I started off this whole project with the sense that a hacker was a kind of graffiti spray painter, or vandal. What is a realistic profile of the hacker community?

The hacker community is so wide, so varied in composition, competence, and motivation, that it is not possible to generalize, to put sort of a sound bite of the hacker or the hacker community. There is a wide range of folks. We work very cooperatively with a lot of them. Others do things that we wish they would not, but our bottom line is protecting our customers, and we will work with anybody who reports information to us that we need to know to protect our customers.

The public is currently hearing a lot about . . . the wide dispersion of technology and the dangers of terrorism and mischief. What is a realistic level of alarm?

The key thing for customers is to be prudent, to take best practices. Security is something that you can do. . . . So I think to say that security is a non-issue is oversimplifying and trivializing, and I won't say that. At the same time, to say that the sky is falling is alarmist, and I think it is an overstatement. And I won't say that.

FRONTLINE · wgbh · pbs online

some photos copyright ©2001 photodisc
web site copyright 1995-2014 WGBH educational foundation