Thursday, June 08, 2006


Dawn of the Dead

Yesterday we talked about how attackers can break into a web server and install software that can infect visitors' computers. But we've been told that before we enter personal data at a web site, we should look for the little "lock" symbol in the browser window. That lock symbol, we're told, means that we're at a secure web site, and only then can we be sure that our information is safe. Surely, our computers can't be turned into "zombies" when we use secure web sites.

To respond to that, we have to understand what a "secure web site" is — that is, what the lock symbol means. Simply put, when we visit a web site whose URL has the prefix "http:", we send requests to that server, and receive its responses, "in the clear" — unencrypted and unverified. Our request might be rerouted to another address on the Internet without our knowledge, and anyone who can intercept the data as it travels the Internet can read it. But when we use a site whose URL prefix is "https:", we use a protocol called "SSL" (or its standardized successor, "TLS"). That protocol encrypts all data that passes between the web browser and the server — end-to-end encryption — and checks the credentials of the server (and, optionally, of the user's computer, though that's seldom used in the environment we're talking about). The lock symbol tells us that for the web page we're looking at, we got there using SSL/TLS and the server credentials were verified.
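In case it helps to see it spelled out in code, here's a minimal sketch, using Python's standard ssl module, of what a browser does behind that "https:" prefix: open a connection, perform the TLS handshake (which encrypts the channel and checks the server's certificate), and only then send the request. The host name is just a placeholder, and a real browser's internals will of course differ.

    import socket
    import ssl

    host = "www.example.com"   # placeholder; any HTTPS site would do

    context = ssl.create_default_context()   # trusts the system's list of CAs
    with socket.create_connection((host, 443)) as raw_sock:
        # The TLS handshake happens inside wrap_socket: key exchange, the
        # certificate check against "host", and cipher negotiation.
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            print("Protocol:", tls_sock.version())
            print("Cipher:  ", tls_sock.cipher())
            # From here on, everything sent or received is encrypted.
            tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode()
                             + b"\r\nConnection: close\r\n\r\n")
            print(tls_sock.recv(300).decode(errors="replace"))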

But what does that really mean? For one thing, it means that what we send to the server and what we receive from it are encrypted, and that's good (but see below). And what about those "server credentials"? That's what we call a "certificate" (defined by the ITU's X.509 standard). A certificate is a digital "package" that contains, among other things, an encryption key, a digital identity, and "signatures" by one or more "certificate authorities", which vouch for the identity. The web browser makes sure that the identity in the web server's certificate matches the URL you're visiting. It checks that it knows and trusts at least one of the certificate authorities that signed the certificate. And it uses the encryption key to secure your session with the server.
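Here's a small sketch of those pieces in Python: pull down a server's certificate and print the identity it claims and the authority that vouches for it. The hostname comparison itself is done automatically by the library during the handshake, and again the host name is only a placeholder.

    import socket
    import ssl

    host = "www.example.com"   # placeholder; any HTTPS site would do

    context = ssl.create_default_context()
    with socket.create_connection((host, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            cert = tls_sock.getpeercert()
            print("Subject (the identity):        ", cert["subject"])
            print("Issuer (the CA that signed it):", cert["issuer"])
            print("Valid until:                   ", cert["notAfter"])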

Note that none of that assures you that the URL you went to is trustworthy. For one thing, anyone can register a domain name and get a (legitimate) certificate for that domain. If I get you to visit www.cheat-you-blind.biz, and I've gotten a certificate for it from one of the major CAs, my server's credentials will verify. If you're a little warier, I might have to use a domain like "b1gbank.biz" when I'm pretending to be "bigbank.biz"; would you always notice the difference? And I can even make my own certificate for "bigbank.biz" that's not signed by a trusted CA. Your browser will give you a warning pop-up, sure... but a good portion of users will click "accept it anyway". Apart from all that, if, as appears to be so in the Circuit City case, the server itself is compromised, then nothing need be faked at all — SSL/TLS has no bearing on whether the server is safe to use.
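To make that last bit concrete, here's a sketch of what clicking "accept it anyway" amounts to in code. With the default settings, a certificate that no trusted CA vouches for makes the handshake fail, which is the programmatic version of the browser's warning; the second context below simply turns those checks off. The host name is hypothetical, standing in for any server that presents a home-made certificate.

    import socket
    import ssl

    host = "self-signed.example"   # hypothetical server with a home-made cert

    strict = ssl.create_default_context()
    try:
        with socket.create_connection((host, 443)) as raw_sock:
            strict.wrap_socket(raw_sock, server_hostname=host)
    except ssl.SSLCertVerificationError as err:
        print("The browser-style warning:", err)   # no trusted CA vouches for it

    lax = ssl.create_default_context()
    lax.check_hostname = False        # don't care what name the cert claims
    lax.verify_mode = ssl.CERT_NONE   # don't care who signed it, or nobody
    # Connecting with "lax" would succeed against that same server: the
    # traffic is still encrypted, but you no longer know who is on the
    # other end of it.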

What's more, the use of SSL, in itself, doesn't assure you that your data can't be snooped. There are two reasons for that: weaknesses in older versions of SSL, and weaknesses in some encryption techniques that are still in use. For the first, SSL 1.0 and 2.0 are obsolete and should no longer be used. Yet support for SSL 2.0 is built into all web browsers and web servers, and is generally enabled by default. To see this, go to your web browser's options and find the security options (in Internet Explorer it's on the "Advanced" tab, at the bottom of the list; in Firefox it's on the "Advanced" page, on the "Security" tab). While you're there, why not un-check the "Use SSL 2.0" box? Nearly all web servers support SSL 3.0 or TLS, and you shouldn't trust the very few that don't. And yet a colleague's study shows that well over 90% of web servers still allow SSL 2.0.
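For the programmatically inclined, the same "un-check SSL 2.0" idea looks roughly like this in Python's ssl module. Modern OpenSSL builds have dropped SSL 2.0 altogether, so the knob available today is the minimum TLS version, but the principle is the same: refuse to negotiate anything older than you're willing to trust. The host name is a placeholder.

    import socket
    import ssl

    host = "www.example.com"   # placeholder; any HTTPS site would do

    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything older

    with socket.create_connection((host, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            # If the server can only speak something older, the handshake
            # fails right here instead of quietly settling for it.
            print("Agreed on:", tls_sock.version())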

Related to that is the fact that there are several encryption methods that may be used, and an assortment of key lengths. Some of the methods, particularly with shorter key lengths, are fairly easily cracked. There's little reason, these days, to make the weaker choices, and yet some servers do — sometimes even when both sides would support a stronger choice.
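Here's a sketch of that side of the negotiation: tell the client not to offer the weak choices at all, then look at what the server actually agrees to, including the key length. The cipher string follows OpenSSL's syntax, and the host is again just a placeholder.

    import socket
    import ssl

    host = "www.example.com"   # placeholder; any HTTPS site would do

    context = ssl.create_default_context()
    # Don't even offer the weak families: no export-grade suites, no NULL
    # encryption, no DES/3DES, no RC4.
    context.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!RC4")

    with socket.create_connection((host, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            name, protocol, bits = tls_sock.cipher()
            print(f"Cipher: {name} ({protocol}), {bits}-bit key")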

Why are SSL 2.0 and the option for too-short keys still around? Migration. When SSL 3.0 was introduced, software was set up to support both, so everything kept working until everyone got the new software. But that's the problem with staying compatible: by now pretty much everyone has had the new software for a while, yet it's all too easy to leave the compatibility features switched on. And those features leave open the possibility of "downgrade attacks", in which the attacker tricks the server and the browser into negotiating a weaker connection than they're both capable of, a connection the attacker can then tap into.
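If it helps, here's a toy model of that downgrade. It is not the real SSL handshake, just a few lines showing how "pick the best version we both support" goes wrong when an attacker can quietly edit the offer, and how turning the old version off makes the tampered handshake fail outright instead.

    ORDER = ["SSLv2", "SSLv3", "TLSv1"]          # weakest to strongest

    def negotiate(client_offer, server_offer):
        """Pick the strongest version both sides say they support."""
        common = set(client_offer) & set(server_offer)
        return max(common, key=ORDER.index) if common else None

    def attacker_strips(offer):
        """A man in the middle quietly drops everything but the weakest."""
        return [v for v in offer if v == "SSLv2"]

    client = ["SSLv2", "SSLv3", "TLSv1"]         # SSL 2.0 left on "for compatibility"
    server = ["SSLv2", "SSLv3", "TLSv1"]

    print(negotiate(client, server))                    # TLSv1 -- all is well
    print(negotiate(attacker_strips(client), server))   # SSLv2 -- downgraded

    # With the old protocol simply turned off, the tampered handshake
    # fails outright instead of succeeding at the weaker level.
    print(negotiate(attacker_strips(["SSLv3", "TLSv1"]), server))   # None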

 

Tomorrow: "OK, so I'm a zombie, damn. They still can't get through my firewall, can they?"
