Thursday, June 11, 2009

“Branding” for email and web sites

John Levine notes that using secure branding can be an effective way to combat phishing. “Secure” branding, in this sense, means using trusted authorities to verify the credentials of a web site or email sender, and then displaying branding information, such as a logo, in a trusted area of the application window, done in a way that would be hard to attack.

John is certainly right, in principle, but the difficulty is that principle doesn’t necessarily translate into reality, for a number of reasons.

First, I’ll note that the “extended” security certificates that newer browsers recognize and use to display additional visual cues (the “green bar” he refers to) are really what the standard certificates were originally meant to be. That is, in the beginning, Internet certificate authorities were supposed to do reasonable vetting of businesses before issuing them certificates.

We can see how well that worked: not at all. Anyone can get a certificate for any domain they own (and sometimes for ones they don’t). Beyond that, anyone can create their own certificates, and then convince users to accept them... that part is easy enough, because when a browser or email program runs across a certificate it doesn’t trust, it asks the user what to do, and most users have simply learned to say “trust it,” for reasons I’ve discussed before.
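The trust decision the browser makes here is the same one any TLS client makes: does the certificate chain to a trusted authority, and does it match the hostname? As a minimal Python sketch of that check (the hostname is a placeholder, and this uses the modern `ssl` module rather than anything browsers of the time shipped):

```python
import socket
import ssl

def make_strict_context():
    """Build a TLS context that behaves like a browser's default:
    the certificate chain must lead to a trusted CA and must match
    the hostname we asked for."""
    context = ssl.create_default_context()
    # These are already the defaults for create_default_context(),
    # stated explicitly: refuse untrusted or mismatched certificates
    # outright instead of asking the user what to do.
    context.verify_mode = ssl.CERT_REQUIRED
    context.check_hostname = True
    return context

def get_verified_cert(hostname, port=443):
    """Connect to hostname over TLS and return its certificate,
    raising an SSL error if verification fails."""
    with socket.create_connection((hostname, port), timeout=10) as sock:
        ctx = make_strict_context()
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()
```

Note what passing this check proves: only that someone controls the domain named in the certificate. A certificate for poypol.com verifies just as cleanly as one for paypal.com, and that gap is exactly what the extended-validation scheme tries to close.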

OK, so the theory is that now we have a version that gets it right, and that these extended certificates really will only be issued to properly vetted organizations. That is, paypal.com can get one, but poypol.com cannot, even though it is a properly registered domain as of this writing (registered on 30 March 2009, with fake “whois” information). I’m not sure how we assure ourselves that the vetting will last, and will not fall into the “anyone who pays can get one” trap, nor how we can prevent a bad guy from tricking the system somehow. But let’s stipulate, for now, that it’s true, and that no phisher will ever get a “green bar” in the browser frame.
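To make concrete how close poypol.com sits to the real name: plain Levenshtein edit distance, the classic dynamic-programming measure of how many single-character changes separate two strings, puts them two substitutions apart. This is my illustration, not anything the certificate system actually uses, but it is the kind of heuristic a vetting process or mail filter could apply to flag lookalike registrations:

```python
def edit_distance(a, b):
    """Levenshtein distance: the minimum number of single-character
    insertions, deletions, and substitutions turning a into b,
    computed row by row with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # delete ca
                           cur[j - 1] + 1,       # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

# "poypol.com" differs from "paypal.com" by just two substitutions.
print(edit_distance("paypal.com", "poypol.com"))  # 2
```

A threshold like “distance of 2 or less from a well-known brand” would catch this particular registration, though of course a determined phisher can always move further away while still looking plausible to a human eye.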

So let’s look at how different browsers show this. Click the image below to see it full-sized.

Browser samples
The first three images show three browsers — Firefox, Safari, and Opera, all run on MacOS — and you can see (the red-circled bits) that they each use a different way to display the green “this has been verified” information. Firefox puts it at the beginning of the address field, Opera puts it at the end, and Safari puts it at the top right of the window, in a shade of green that barely looks green to me, actually. Personally, I think the Firefox way looks the coolest. But cool or not, the key is for it to convey crucial information, and they all do that.

Only, because each browser does it in a different way, things are open to confusion. We might think this wouldn’t be a problem, since most users only use one browser and would get used to its cues. In practice, though, the difference does confuse the situation, because most users are less aware than we think about what the browsers do, what the cues are, and what those cues mean.

In Why Phishing Works, a study presented at CHI 2006, by Rachna Dhamija of Harvard University and two University of California colleagues, users were asked questions about what browser cues they noticed, and what the users thought they meant. They were also asked to explain what they thought they had agreed to when they clicked on security pop-ups that warned them of problems with certificates.

The results show that we cannot assume that users will understand what the computer is trying to tell them, or how. For example, some users in the study did not notice the padlock symbol — the symbol in the browser frame that tells you that you have an encrypted “SSL” connection to the server — at all. Of those who did, some didn’t understand what the padlock symbol was telling them. Some didn’t realize that there’s any difference between symbols in the browser frame (under control of the browser) and those in the web page itself (under control of the web site, good or bad). And some actually “gave more credence to padlock icons that appeared within the content of the page.”

There’s no reason to think that the “green bar” situation will be any different. The fourth image above shows a mock-up that I made of how a fake PayPal web site could, using the domain poypol.com, put a fake paypal.com green bar within the web page itself. A smart web site could even use the identification that it receives from the browser to make a browser-specific fake. It won’t fool everyone, of course, but it’s likely to fool a pretty high proportion of the users.

We’ve also seen very clever junk web sites that use scripts that pop up small, menu-less browser sub-windows positioned to cover key parts of the main window. One such site very effectively covered the address bar with its own replacement, a technique that could fool all but the most savvy users into thinking that there was a legitimate green “OK” in the trusted browser frame.

Of course, a well-designed system such as John describes will help some users, and I’m not saying that we shouldn’t do it. But it will take a lot of cooperation from software vendors, and a great deal of user education. And what I worry about is that, while we may protect some portion of the user community, we could well make things much worse for the users who don’t understand, but who get false security from their misunderstanding.

2 comments:

Sue VanHattum said...

I didn't even realize this green bar you speak of was there. (Just one data point ...)

Ray said...

And green is a terrible colour to choose for those of us who are colour-challenged. Green, red, brown... all the same. Just another blob of data on the screen, but it doesn't stand out at all.