Monster security FAIL

Sunday 25th January, 2009

International job site Monster has suffered a serious security breach and an undisclosed portion of its user database is now in criminal hands.

Prompted by Monster’s warning to change your passwords if you use their site, I decided out of curiosity to see if there were any tell-tale signs that they store actual passwords, rather than hashes. Following their instructions, I dutifully changed my password. They didn’t send the original password, only a link to change it (which suggests they probably store hashes rather than plain-text passwords). The link only worked once; subsequent attempts were blocked, which is good.
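For context, here’s roughly what storing hashes instead of passwords looks like. This is a minimal sketch using Python’s standard library; I have no knowledge of Monster’s actual scheme, and the function names are my own:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash; the site stores only (salt, digest), never the password."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the hash from the attempt; the original is never recoverable."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)
```

A site built this way can verify your password but can never email it back to you, which is why a one-time reset link is the tell-tale sign of hashing done properly.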

Then I found a security hole, which for the sake of responsible disclosure I will not reveal now. I’ve emailed Monster and asked them to get in touch with me to sort the problem out.

[Updated – Monday 26th Jan] On further thought, the barrier to entry on this exploit is so high (it requires a man-in-the-middle position) and the time pressure is so immediate (lots of people will be changing their passwords right now) that I think it’s right to publish it, and responsible disclosure weighs on my side. It’s pretty simple: Monster’s forgotten-password tool transmits your password over HTTP, rather than HTTPS.

Whoever developed this obviously thought a little about man-in-the-middle attacks, as the password parameter is ‘obscured’.
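I won’t speculate on the exact scheme, but any reversible client-side encoding — take base64 as a stand-in example — is worthless against a man in the middle, because anyone who captures the HTTP request can decode it (the password below is made up):

```python
import base64

# What an 'obscured' parameter might look like on the wire over plain HTTP:
obscured = base64.b64encode("MyNewPassword1".encode("utf-8")).decode("ascii")
print(obscured)   # TXlOZXdQYXNzd29yZDE=

# An eavesdropper reverses it in one line:
recovered = base64.b64decode(obscured).decode("utf-8")
print(recovered)  # MyNewPassword1
```

Obscuring only stops casual shoulder-surfing; only TLS (HTTPS) actually protects the value in transit.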

Below is the unencrypted new password, visible to anyone between my web browser and Monster’s servers.


Daily routines in programming

Wednesday 21st January, 2009

Today I came across Daily Routines, a list of the habits and discipline of notable figures. The post about Ernest Hemingway caught my eye because it reminded me of part of my old chum Simon’s routine.

Hemingway used to stop writing at points where he knew what was going to happen next. The logic was that he’d then never face blank-page syndrome: by the time he came to a difficult problem he’d already coasted into the zone with an easy stretch of work.

Simon’s logic is an exact translation of Hemingway’s. He makes sure he leaves himself halfway through an easy stretch of programming, the major problems of which have already been solved. That way he’s left with a smooth ramp into the zone.

Recognising specific products in images

Wednesday 21st January, 2009

Yesterday Andrew Stromberg pointed me to the excellent iPhone app by image-matching outfit Snaptell.

Snaptell’s application takes an input image (of an album, DVD, or book) supplied by the user and identifies that product, linking to third-party services. This is equivalent to the impressive TinEye Music but with a broader scope. As Andrew points out, the app performs very well at recognising these products.

Algorithmically the main problems faced by someone designing a system to do this are occlusions (e.g. someone covering a DVD cover with their thumb as they hold it) and transformations (e.g. skewed camera angle, or a product that’s rotated in the frame).

There are a number of techniques to solve these problems (e.g. the SIFT and SURF algorithms), most of which involve using repeatable methods to find key points or patterns within images, and then encoding those features in a way that is invariant to rotation (i.e. will still match when upside-down) and to an acceptable level of distortion. At query time the search algorithm can then find the images with the most relevant clusters of matching keypoints.
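As a toy illustration of the query-time matching step (not Snaptell’s actual code; real SIFT descriptors are 128-dimensional vectors, and I’m assuming they’ve already been extracted), a Lowe-style ratio test looks roughly like this:

```python
import math

def distance(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_keypoints(query, index, ratio=0.8):
    """Accept a match only if the best candidate is clearly closer than the
    second best; ambiguous keypoints (near-duplicate candidates) are dropped."""
    matches = []
    for qi, qdesc in enumerate(query):
        dists = sorted((distance(qdesc, d), i) for i, d in enumerate(index))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

Filtering out ambiguous descriptors this way is what lets a third of the keypoints disappear under a thumb without dragging in false matches from the rest of the catalogue.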

It seems like Snaptell have mastered a version of these techniques. When I tested the app’s behaviour (using my copy of Lucene in Action) I chose an awkward camera angle and obscured around a third of the cover with my hand, and it still worked perfectly. Well done Snaptell.

More Outlook abuse

Tuesday 20th January, 2009

It’s been a while since I complained about Outlook, but don’t panic, it’s not all better now. I am still very annoyed with it; I’ve just been trying to concentrate on more productive blog posts.

Today’s Outlook rant is about the attachment previewer. Outlook 2007 has this feature that (quite rightly) allows you to preview attachments in the reading pane. This works brilliantly for text files and Office documents.

Can you guess what happens if someone sends, oh I don’t know, a “.sql” file, or a “.java” file? These are plain text files that could be displayed even more easily than an HTML email.

Instead you get presented with a link to find and download more previewers. What’s on the page at the other end of that link? Downloadable previewers perhaps? No, you naive young scamp, there’s absolutely nothing of use whatsoever. That’s right, it’s impossible to preview them without writing your own file previewer in .NET, or possibly attempting some pith-helmeted registry botching (via).

I love you Outlook!

Welcome to the image link revolution

Monday 19th January, 2009

The hyperlink revolution allowed text documents to be joined together. This created usable relationships between data that have enabled one of the biggest technological shifts of the recent age… large scale adoption of the internet. Try to imagine Wikipedia or Google without hyperlinks and you’ll see how critical this technique is to the web.

We’re on the verge of another revolution, this time in computer vision.

Imagine a world where the phone in your pocket could be used to find or create links in the physical world. You could get reviews for a restaurant you were standing outside without even knowing its name, or where you were. You could listen to snippets of an album before you bought it, or find out which nearby shop has the same item for less. You could read about the history of an otherwise unmarked and anonymous building, get visual directions, or use your camera phone as a window into a virtual game in the real world.

A team at the University of Ljubljana (the J is pronounced like a Y, for anyone unfamiliar) have released a compelling video demonstrating their implementation of visual linking. They use techniques that I assume are derived from SIFT to match known buildings in an unconstrained walk through a neighbourhood. These image segments are then converted into links to contextually relevant information.

Combine this with other techniques, such as the contour-based work being done by Jamie Shotton of MSR, and you start to see how that future will appear. Bring in the mass adoption of GPS handsets, driven by the iPhone amongst others, and it’s pretty clear there’s going to be a change in the way people create and access information.

The only questions are who, and when.

New web platform

Tuesday 13th January, 2009

At Pixsta this week we chose our development platform for the next stage of the team’s development. We evaluated a lot of tools under a variety of criteria, and debated long and hard.

In the end we chose Grails, the Rails-like web framework for the Groovy language. Reasons include:

  • It runs on a familiar Java stack (Hibernate, Spring, etc.)
  • Groovy scripts are compiled to Java bytecode
  • It uses convention over configuration
  • We have a few projects under our belts with this framework already

Goodbye PHP, we will not meet again.

Incogna monetise pure image search

Monday 12th January, 2009

I must have missed the launch of this feature, but Incogna’s most recent blog post talks about how they’ve implemented visual advertising. The results vary, but overall they’ve implemented it well.

I’ve written about Incogna’s image search before, but there’s more to add: when using this tool, as a user you have no visibility into the depth or type of data available to you. Nor does the app currently give you control over movement, other than via text search and query images.

Establishing context (or, lost in the supermarket)

Any fans of Steve Krug’s usability classic will recognise the metaphor here. If you’re in an aisle in a supermarket you can see both the length of the aisle and the contents of the shelves (at least the ones near you). You also know your rough position in the store, and can see the overhead signs.

Using that input data you can navigate (with a few hiccups) anywhere in the store.

Incogna’s app currently allows you to compare visually, and to search using text, but the depth and type of results remains hidden. As such there’s no real way to effectively navigate within the data set.

I should be clear at this point that this isn’t a criticism of Incogna’s app. This is not a problem with an easy or obvious solution. What I’m suggesting is that there’s still scope for some killer navigation features in this area.

Making money

The monetisation feature on Incogna appears only when their system thinks it can produce a good match between your search and the sponsored products. This is a wise move, since irrelevant ads would ruin the user experience.

It seems like the results mainly use visual comparison data, possibly with some categorisation thrown in. It worked brilliantly with pictures of trucks, but curiously, while I was browsing Canon cameras it presented sponsored ads for televisions (both are rectangular, I suppose).

Having fun

The main issue standing in the way of Incogna’s revenue stream is that their app is not yet fun to use. As mentioned above, there’s no sense of position or direction. You can’t learn anything about the images you find without clicking through to the source site, and you can’t properly refine your search… you have to start again, which means there’s no big advantage over Google or any other text-based image search.

More another time.