My anti-spam colleague John Levine wrote last week about why we can’t make the Internet secure.
Read the post. The gist of it is that the pieces that make up the Internet were not originally designed with security in mind, and, while we’ve tried to change a lot of that since then, no one wants to foot the costs of making the necessary changes now. In a discussion about a recent denial-of-service attack against Twitter, someone asked:

“Some class of suppliers must be making money off of the weaknesses. Anybody out there have a prescription for the cure?”

John’s answer:

“The basic answer to your question is that the people who run the net, all umpteen million of us, have collectively decided that it’s cheaper to live with the damage that criminals cause than to deal with the problems that let them do it. Change that attitude, then we can talk.”
That last point is central. There’s no one thing that we can close up to fix things. There’s no one weak point, but many, many weak points all over the Internet, from the machines we connect to the network, to the protocols at the lower network layers, to the application protocols on top, to the applications that use them. Attackers can and do attack every piece of the system.
And trying to close it all is expensive, in more than one sense. It’s very costly to produce bug-free software, and no matter how hard you try and how much money you spend, you’ll ultimately fail: some bugs will slip through, and some of those will be exploitable as security holes. It’s costly to block users with insecure computers or software; it’s costly to eschew free* but buggy software; it’s costly to upgrade everything to use newer, more secure protocols.
Then, too, as we secure the protocols, we see reluctance to deploy the new ones, and even more reluctance to demand them and to cut off participants who don’t use them. We see an example of the deployment problems in the long delays in implementing DNSSEC, which defends the Domain Name System (a crucial, basic part of locating addresses on the Internet) against tampering. Further examples abound. My service provider, for example, does not support secure, TLS-protected access to email through POP3, nor secure submission through SMTP (and so I don’t use my ISP’s email service).
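For concreteness, here’s a minimal Python sketch of the difference between the two kinds of POP3 access. The host name is a placeholder, and the port numbers are the standard ones (110 for cleartext POP3, 995 for POP3 over TLS); this is an illustration, not a recommendation of any particular provider’s setup:

```python
import poplib

def fetch_message_count(host, user, password, secure=True):
    """Log in to a POP3 mailbox and report how many messages are waiting."""
    if secure:
        # POP3_SSL wraps the whole session in TLS, so the password and
        # message contents are encrypted in transit (standard port 995).
        conn = poplib.POP3_SSL(host, poplib.POP3_SSL_PORT)
    else:
        # Plain POP3 sends USER/PASS in cleartext on port 110 -- exactly
        # the kind of weakness described above.
        conn = poplib.POP3(host, poplib.POP3_PORT)
    try:
        conn.user(user)
        conn.pass_(password)
        count, _mailbox_size = conn.stat()
        return count
    finally:
        conn.quit()
```

A provider that offers only the `secure=False` path is asking every customer to broadcast a password each time they check mail.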
As we develop security-related protocols such as DKIM (DomainKeys Identified Mail), we have to answer concerns from the Internet community about partitioning the Internet (the domains that use the new protocols vs. those that don’t), about “flag days” (a set time after which the new protocols become mandatory, and we no longer accept things done “the old way”), and about whether the changes will be compatible with old software that hasn’t been updated in years.
Service providers could enforce minimum security standards for computers on their networks, something that many companies already do for their employees’ machines. Computers that are detectably infected, or detectably vulnerable to infection, could be blocked from the network until they’re secured. It wouldn’t be perfect, but it might be a good step. It requires automated scanning, a staff to follow up, and, probably most importantly, a willingness to anger paying customers by telling them that you won’t serve them. To vary what John says: as long as businesses would rather have zombie computers on their networks than risk offending customers by chucking them, we can’t solve much.
On top of all of that, we users ourselves are weak points, refusing to follow advice that’s given to us every day. We continue to use weak passwords, we continue to share passwords, we continue to visit rogue web sites when we should know better, and we continue to open email attachments that install malicious software on our computers. We continue to use public computers in places like Internet cafes to log into secure systems, such as our email and credit-card accounts.
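To see why “weak password” is more than a scolding, a back-of-the-envelope estimate helps. This sketch (my own illustration, not a real strength meter) counts the character pool a password draws from and converts pool-size-to-the-length into bits of guessing work:

```python
import math
import string

def estimate_entropy_bits(password: str) -> float:
    """Crude upper bound on password entropy: length * log2(pool size).

    Real crackers do far better than brute force against dictionary
    words, so this flatters weak passwords if anything.
    """
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0
```

An eight-letter lowercase password comes out under 40 bits even by this generous measure; a longer, mixed-character one lands far higher. The gap is the difference between an afternoon of cracking and an impractical amount of it.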
The other day, as I was searching the Gmail forums for something else, I found the following question posted:
“Since I started using Firefox 3.51, my Gmail has been painfully slow. Opening messages, opening Gmail, even coming to the Help page are all s l o w. I saw a reply to a similar problem here that recommended changing back to an http rather than https connection. That seems to have made it faster. Can anyone tell me why this is, and is Google working on this so I can go back to a secure connection in the future?”

Right, that seems to have worked: turning off the security made it a little faster (it turns out, of course, that that wasn’t his basic problem).
But that recommendation is rather like saying, “Oh, it takes you a little longer to get into your house when you have to fiddle with the keys? I tell you what: why don’t you just leave the door unlocked? You’ll be able to get in faster that way.” And yet the user was happy to do it.
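What the user gave up is worth spelling out. In Python terms (a minimal sketch using the standard ssl module, which is what urllib and most HTTP libraries rely on for https URLs), the “s” in https means the client encrypts the session and, by default, verifies that the server is who it claims to be:

```python
import ssl

# The default client-side context used for https connections.
ctx = ssl.create_default_context()

# It requires the server to present a valid certificate chain...
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
# ...and it checks that the certificate matches the hostname requested.
print(ctx.check_hostname)                     # True

# Plain http does neither: no encryption, no proof of who is on the
# other end -- the unlocked front door of the analogy.
```

Dropping back to http discards both properties at once, not just the “slow” one.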
When you think about asking the question that the fellow in John’s post asked, consider how much more you’re willing to pay for your computer, for all the software you use on it, and for your Internet service. And consider whether you’re willing to have chunks of the Internet become inaccessible to you because they haven’t been secured.
* I don’t mean to imply that all free software is terribly buggy, nor that software you pay for is necessarily better. It’s just one of the trade-offs.