Tuesday, January 26, 2010


Follow-up on credit-card fraud

I noted this, in my recent post about credit-card fraud:

[...] and the bank will just write off the $1000 loss as a cost of doing business. It’s small change, compared with what some crooks get away with.

Why don’t the banks do something about this?

Well, they do. They block transactions, freeze accounts, and the like. They have their equivalent to email anti-spam software: fraud-detection software that looks at every transaction and decides when things are suspicious — and when they’re sufficiently suspicious that it warrants an immediate freeze on the account.

And, as with anti-spam software, the banks’ fraud-detection software runs into false positives. Many, many false positives. The way the banks control their losses is to set their fraud detection to be very sensitive, and to handle the resulting false positives by, essentially, not worrying about them at all.
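To make that trade-off concrete, here’s a minimal sketch of threshold-based transaction scoring. Every rule, weight, and the freeze threshold below is made up for illustration; no bank’s actual system is this simple.

    # Illustrative threshold-based fraud scoring (all rules, weights, and the
    # threshold are hypothetical, not any bank's real system).
    from dataclasses import dataclass

    @dataclass
    class Transaction:
        amount: float
        country: str
        merchant_category: str
        home_country: str = "US"

    def suspicion_score(tx: Transaction) -> float:
        """Add up simple heuristic rules into a single suspicion score."""
        score = 0.0
        if tx.country != tx.home_country:
            score += 0.4   # card used far from home
        if tx.amount > 500:
            score += 0.3   # unusually large purchase
        if tx.merchant_category in {"wire_transfer", "gift_cards"}:
            score += 0.3   # categories favored by fraudsters
        return score

    # Lowering the threshold catches more fraud but freezes more legitimate
    # customers (false positives); raising it does the opposite.
    FREEZE_THRESHOLD = 0.6

    def should_freeze(tx: Transaction) -> bool:
        return suspicion_score(tx) >= FREEZE_THRESHOLD

    # Example: a large foreign purchase in a risky category trips the freeze.
    print(should_freeze(Transaction(700.0, "RO", "gift_cards")))  # True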

I find it interesting that this is a customer inconvenience that they’re willing to inflict, because banks are generally terrified of useful security mechanisms that would cause change and inconvenience for their customers. Banks would much rather eat the fraud losses than make inconvenient changes, even when those changes would be truly effective and would be something people would soon get used to.

That’s why they’ve been introducing feel-good “security” that adds little real value — things such as login images and arrays of “security questions” (which, as I’ve said before, most often make things worse) — and will never adopt something like two-way SSL/TLS authentication, which actually does.

Two-way authentication would be a good step toward stopping phishing, by making knowledge of a user’s account number and password insufficient to break into the account. Credentials would be stored on the user’s computer (and/or cell phone or other device), and those credentials would be used to validate a secure connection to the bank, using asymmetric cryptography. The “password” that the user enters would only unlock the credentials on the device; neither the password nor the credentials would themselves ever be sent over the network. An attacker would need to steal both the password and the device in order to log in.
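As a rough sketch of what that looks like on the client side, here is mutual TLS using Python’s standard ssl module. The host name, the credential file names, and the passphrase prompt are assumptions for illustration; in practice the bank would provision the certificate and the encrypted private key onto the user’s device.

    # Minimal sketch of two-way (mutual) TLS from the client side.
    # "online.example-bank.com" and the .pem file names are hypothetical.
    import socket
    import ssl
    from getpass import getpass

    HOST = "online.example-bank.com"
    PORT = 443

    # Standard client context: verifies the *server's* certificate.
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

    # Load the per-device credential. The private key file is encrypted, so
    # the password the user types only unlocks it locally; neither the
    # password nor the key is ever transmitted.
    context.load_cert_chain(
        certfile="client_cert.pem",
        keyfile="client_key.pem",
        password=lambda: getpass("Unlock banking credential: "),
    )

    with socket.create_connection((HOST, PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            # The server verifies the client certificate during the handshake;
            # without both the key file and its passphrase, login fails.
            print("Negotiated", tls.version())

The point is that the handshake proves possession of the private key without ever transmitting it or the passphrase, so a phisher who captures everything the user types still can’t log in from another machine.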

But it would require the user to install something — at least the security credentials, and perhaps also some software, depending upon the implementation — on every device that could log into the bank. And the user could no longer access her bank account from someone else’s computer (nor, for example, from an Internet cafe, but users shouldn’t be doing that anyway).

And that’s where things become inconvenient at a level the bank isn’t willing to deal with. It’s one thing to tell a customer to call the customer service line and confirm a transaction. It’s another to expect the customer to install security certificates or software, and it’s still another to limit where she can log in from — and to no longer be able to say that you provide online access from anywhere.

There’s a similar situation with U.S. banks’ reluctance to distribute credit cards equipped with smart chips — which, as Nathaniel points out in the comments, doesn’t stop bogus “card not present” transactions, but which does address skimming, which is most likely what happened to me. This reluctance mystifies me: the inconvenience to the customer seems limited to receiving a new card to replace the old one and having to remember and use a PIN. Users in Europe and Asia seem to have had no problem switching.

Meanwhile, estimates put the collective cost of credit card fraud in the billions of dollars.

1 comment:

The Ridger, FCD said...

I couldn't use my card in shops in England last year because it wasn't chipped.