Following on from a previous post about how the Verified by Visa and MasterCard SecureCode schemes are training users to give up their identity details to anyone who asks for them, apparently some lovely boffins at Cambridge have written a paper on it. (via)
MySpace stores users' original passwords in clear text, and returns them by email on request. Enough said, really. FAIL.
For reference purposes, there are better ways to do this:
One step better: don’t return the original password (which reveals extra information to an attacker); instead, generate a new one, or send a one-off link that lets the user choose a new one themselves.
Two steps better: don’t store the original password at all; store a one-way hash instead. That way even an attacker who compromises the database can’t recover it (assuming you do it right — salt each hash, and use a deliberately slow algorithm).
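For illustration, here’s a rough Python sketch of the “two steps better” approach (the function names and parameters are mine, not anyone’s actual code): a salted PBKDF2 hash for storage, plus an unguessable one-off token for password-reset links.

```python
import hashlib
import os
import secrets

def hash_password(password: str) -> str:
    """Store a salted one-way hash instead of the password itself."""
    salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex() + ":" + digest.hex()

def verify_password(password: str, stored: str) -> bool:
    """Re-hash the candidate with the stored salt and compare."""
    salt_hex, digest_hex = stored.split(":")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), 100_000
    )
    # Constant-time comparison, to avoid leaking how many bytes matched.
    return secrets.compare_digest(candidate.hex(), digest_hex)

def make_reset_token() -> str:
    """One-off reset link token: random and unguessable, never derived
    from the password itself. Mark it used after the first visit."""
    return secrets.token_urlsafe(32)
```

Note that the database never sees the password in recoverable form, and the reset token is independent of it, so leaking either reveals nothing about the other.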
While that’s fairly common for lower-risk web apps from cash-strapped start-ups and solo developers, for someone like Monster it seems inappropriate. Monster run sites all over the world, have a clear revenue stream, and they store an awful lot of personal information. Exactly the kind of information that’d be useful to identity thieves.
A few SSL certificates won’t break their bank account — and if it would, the bank account was breaking anyway.
It’s got me wondering though. What proportion of sites actually bother with SSL? Sadly the only stats I’ve found on SSL adoption are some vague hints at data from Netcraft (scroll to the bottom). These stats seem to indicate that only 60 out of the “top 1000” sites use SSL, but I’m not sure exactly what criteria they’re using to gather those numbers.
Has anyone got any idea what proportion of sites use HTTPS for login?
The short version is that these mechanisms train consumers to provide their private account data to anyone claiming to be the card issuer. The problem is that there’s no way for the user to know that the data is being transmitted to the vendor rather than an imposter.
It makes you wonder what requirements were given to the people who designed the process. In my experience these verification processes cause a not insignificant drop-off in the success rate of payment flows, i.e. fewer sales. With that (and the security problem) in mind, it’s a fair bet that this particular family of verification mechanisms won’t last long.
International job site Monster has suffered a serious security breach and an undisclosed portion of its user database is now in criminal hands.
Prompted by Monster’s warning to change your passwords if you use their site, I decided out of curiosity to see if there were any tell-tale signs that they store actual passwords, rather than hashes. Following their instructions, I dutifully changed my password. They didn’t send the original, only a link to change it (which suggests they probably store hashes rather than plain text passwords). The link only worked once, subsequent attempts were blocked, which is good.
Then I found a security hole, which for the sake of responsible disclosure I will not reveal now. I’ve emailed Monster and asked them to get in touch with me to sort the problem out.
[Updated – Monday 26th Jan] On further thought, the barrier to entry for this exploit is so high (it requires a man-in-the-middle position) and the time pressure so immediate (lots of people will be changing their passwords right now) that I think it’s right to publish, and responsible disclosure weighs on my side. It’s pretty simple: Monster’s forgotten-password tool transmits your password over HTTP, rather than HTTPS.
Whoever developed this obviously thought a little about man-in-the-middle, as the password parameter is ‘obscured’.
Below is the unencrypted new password, visible to anyone between my web browser and Monster’s servers.
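To illustrate why obscuring is not the same as encrypting — and I’m guessing at the encoding here, with Base64 standing in for whatever Monster actually uses — anyone sitting between the browser and the server can reverse it trivially:

```python
import base64

# Hypothetical sketch: a reversible client-side encoding (Base64 here)
# applied before sending over plain HTTP. The eavesdropper captures the
# "obscured" parameter off the wire...
captured = base64.b64encode(b"newPassword123")

# ...and undoes it with one function call. No key, no secret involved.
print(base64.b64decode(captured).decode())  # → newPassword123
```

Only TLS (i.e. HTTPS) actually keeps the value from a man-in-the-middle; client-side encoding just changes what the password looks like in transit.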
This somewhat scary story could escalate the arms race between software producers and black-hat hackers. The concept is that by comparing two versions of the same program — one with a flaw, and one with that flaw patched — you can automatically generate code that exploits that flaw.
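A toy sketch of the idea (my own made-up example, nothing like the paper’s actual machinery, which works on binaries): diff the unpatched and patched versions of a function, and the check the patch added tells you exactly what the old code fails to validate.

```python
import difflib

# Hypothetical unpatched function: no bounds check on `length`.
unpatched = [
    "def read_field(buf, length):",
    "    return buf[:length]",
]

# Hypothetical patched version: the vendor added a check.
patched = [
    "def read_field(buf, length):",
    "    if length > len(buf):",
    "        raise ValueError('length out of range')",
    "    return buf[:length]",
]

# Lines the patch added (prefixed "+ " by ndiff) pinpoint the flaw:
# the old version accepted an out-of-range length unchecked.
added = [line[2:] for line in difflib.ndiff(unpatched, patched)
         if line.startswith("+ ")]
for line in added:
    print(line)
```

The added lines point straight at the missing validation, which is why shipping a patch can itself be a roadmap to exploiting the unpatched population.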
It’s (hopefully) a way off being an immediate and active threat, but it could mean that services such as Windows Update could themselves act as a resource for those looking for exploits.
Possible repercussions might (and I’m guessing here) include measures that make this technique more difficult or expensive for hackers: reducing users’ right to choose when to install security updates, introducing false positives to slow attackers down, or increasing the number of changes bundled with each release.