Terminology, similarity search, and other animals

Thursday 30th April, 2009

In the walkway-level study room of my old Physics department there’s a desk where I once found this timeless conversation etched into the surface like a prehistoric wooden version of Twitter:

Protagonist: – “You’re a mook”

Antagonist: – “What’s a mook?”

Protagonist: – “Only a mook would say that”

Aside from any revelations about the emotional maturity of undergrad physicists, I think the lesson here is that it speeds up communication if both parties use the same terminology and know what it means.

My area of the CBIR (content-based image retrieval) industry has a terminology problem. I’d like to have a vocabulary of terms to describe the apps that are emerging weekly.

Visual Search, Image Search, or Visual Image Search

We’re working on image search, of a sort, although the image isn’t necessarily the object of the search, nor does “image search” describe only CBIR-enabled apps. We’re searching using visual attributes of images, but “visual search” as a term has already been marked out by companies that visualise text search.

Similarity search

This one seems to hit the consumer-facing nail on the head, for some apps at least. Technologically I’d include audio-search and image-fingerprinting apps like Shazam and SnapTell under this term, but for consumers there may be no obvious connection, so perhaps this is a runner.

Media As Search Term (MAST)

Media As Search Term describes for me the group of apps that use a media object, such as an image or an audio clip, as a search query to generate results, either of similar objects or of instances of the same object. I think MAST sums up what I’d describe as my software peer group (media-similarity and media-fingerprinting apps), although it doesn’t seem as snappy as AJAX. Ah well.


Recognising specific products in images

Wednesday 21st January, 2009

Yesterday Andrew Stromberg pointed me to the excellent iPhone app by image-matching outfit SnapTell.

SnapTell’s application takes an input image (of an album, DVD, or book) supplied by the user and identifies that product, linking out to third-party services. This is equivalent to the impressive TinEye Music but with a broader scope. As Andrew points out, the app performs very well at recognising these products.

Algorithmically, the main problems faced by someone designing a system to do this are occlusions (e.g. someone covering a DVD cover with their thumb as they hold it) and transformations (e.g. a skewed camera angle, or a product that’s rotated in the frame).

There are a number of techniques to solve these problems (e.g. the SIFT and SURF algorithms), most of which involve using repeatable methods to find key points or patterns within images, and then encoding those features in a way that is invariant to rotation (i.e. a feature will still match when upside-down) and to an acceptable level of distortion. At query time the search algorithm can then find the images with the most relevant clusters of matching keypoints.
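For the curious, here’s a minimal sketch of that keypoint pipeline using OpenCV’s SIFT implementation. The filenames, the 0.75 ratio-test threshold, and the RANSAC tolerance are illustrative assumptions, and this is of course not SnapTell’s actual system:

```python
import cv2
import numpy as np

# Load the user's photo and a candidate catalogue image as greyscale.
query = cv2.imread("photo_of_cover.jpg", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("catalogue_cover.jpg", cv2.IMREAD_GRAYSCALE)

# Find keypoints and compute descriptors that are invariant to rotation
# and scale, and robust to moderate distortion.
sift = cv2.SIFT_create()
kp_q, desc_q = sift.detectAndCompute(query, None)
kp_r, desc_r = sift.detectAndCompute(reference, None)

# Match descriptors, keeping only matches that pass Lowe's ratio test.
matches = cv2.BFMatcher().knnMatch(desc_q, desc_r, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Geometric verification: fit a homography with RANSAC and count the
# inliers. A large, consistent cluster of matching keypoints suggests
# the product really is in the photo, even under partial occlusion.
if len(good) >= 4:
    src = np.float32([kp_q[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = int(mask.sum()) if mask is not None else 0
    print(f"{inliers} geometrically consistent matches")
else:
    print("Not enough matches to verify")
```

Note how occlusion only removes some of the keypoints; as long as enough of them survive and agree geometrically, the match still holds, which fits what I saw in the test below.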

It seems like SnapTell have mastered a version of these techniques. When I tested the app’s behaviour (using my copy of Lucene in Action) I chose an awkward camera angle and obscured around a third of the cover with my hand, and it still worked perfectly. Well done, SnapTell.