Progress, Trust, and Going to the Pub: London Search Social write-up

Thursday 14th January, 2010
The Elgin on Ladbroke Grove

The Elgin, via flickr/Ewan-M

There’s a theory that expertise and the associated salaries increase more rapidly in cities than in the country because physical proximity decreases the cost of sharing ideas (I’m desperately trying to dig up the source amongst the noise, which is either ironic or poetic depending on how you look at it).

The interwebs are a different beast. Proximity doesn’t exist in the same way, perhaps instead becoming a cultural rather than a geographic measure of separation. The cost of spreading an idea on the internet is more related to trust than physical distance.

On the internet, that trust must be earned through expertise and clear communication. In the physical world people behave differently: they tend to trust each other more quickly and be more open when they can look each other in the eye, or buy each other a beer.

[Only on a technology blog would you find a justification this obscure for going to the pub.]

On Tuesday we held this month’s London Open Source Search Social at The Elgin in Notting Hill. This was the first time we’d used our shiny new Meetup account to organise the event, so it was nice not to have to send out reminders manually (laziness #ftw).

A few notes from the evening, for those whose memories are as bad as mine.

There’s plenty missing, and some of this may be fictitious.

Bruno from Jobomix talked about his use of Hadoop to detect duplicate job data, leading to a conversation about Pig and Cascading, and from there to JVM languages like Scala. Ben from OneIS brought up Duby, a Ruby-like-but-tidier language targeting the JVM, and when prompted gave us an outline of his company’s free-text graph store.

We talked about duplicate detection in various fields, thresholds, and the cost of false positives. We also touched on human relevance testing; Richard told us he’d found that people generally need to be paid to do it, and that they can’t usefully do it for more than 30 minutes at a time.

Joao from the Royal Library of the Netherlands told us how they digitise and index millions of pre-digital documents per month. Ben told us about a method of querying Xapian from Postgres using an SQL JOIN.
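I didn’t catch the details of Ben’s JOIN, but the general idea translates to a few lines: pull matching document IDs out of Xapian, then join them against relational data in Postgres. A rough sketch from the application side, with the index path, table, and field names all invented:

    import xapian
    import psycopg2

    # full-text query against a Xapian index to collect matching doc ids
    db = xapian.Database("/var/indexes/products")
    parser = xapian.QueryParser()
    parser.set_database(db)
    enquire = xapian.Enquire(db)
    enquire.set_query(parser.parse_query("red shoes"))
    doc_ids = [match.docid for match in enquire.get_mset(0, 50)]

    # join the ids against a hypothetical products table in Postgres
    conn = psycopg2.connect("dbname=shop")
    cur = conn.cursor()
    cur.execute(
        "SELECT id, title, price FROM products WHERE id = ANY(%s)",
        (doc_ids,),
    )
    for row in cur.fetchall():
        print(row)

Ben’s version did the join inside Postgres itself, which is neater; the sketch above only approximates the effect from outside.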


Shopachu – Incogna’s new visual product browser

Tuesday 5th January, 2010

In the back half of last year visual search outfit Incogna released their visual shopping browser Shopachu. I’ve followed some of Incogna’s previous releases so I thought I’d share some thoughts on this one too.

What does it do?

This site has a very similar model to our own consumer-facing MAST app, Empora. It makes money by sending consumers to retailers, who for obvious reasons are willing to pay for suitable traffic. The main forces that influence the design of a site like this are retention, and the clickthrough and conversion rates of your traffic:

Retention – you need to impress people, then ideally remind them to come back to you

Clickthrough – you need to send a good proportion of visitors to retailers in order to make money

Conversion – if the visitors you send aren’t interested in buying the clicked product then the retailers won’t want to pay for that traffic on a per-click basis (although they might be interested in the CPA model, which doesn’t pay until someone buys)

First Impressions

People’s first impressions are usually determined by a combination of design and how well a site conforms to their expectations. I’ve probably got distorted expectations given my experience working with this type of application, but in that respect I was pleasantly surprised: Shopachu has some good features and makes them known. In terms of design I was less impressed: the icons and gel effects don’t seem to fit, and I think there are whitespace and emphasis issues (sorry guys, trying to be constructive).

Finding stuff

It’s fairly easy to find things on Shopachu. The filters are easy to use (although I couldn’t get the brand filter to work; it could be a glitch). The navigation is pretty easy, although it doesn’t currently provide second-generation retail search features like facet counts (i.e. showing the number of products in a category before you click on it).
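For the uninitiated, facet counts are standard fare in the open source stacks we talk about at the socials. A minimal sketch against a hypothetical Solr index, with the URL and field name invented:

    import json
    import urllib.request

    # ask Solr for per-category counts without fetching any documents
    url = ("http://localhost:8983/solr/products/select"
           "?q=*:*&rows=0&wt=json&facet=true&facet.field=category")
    response = json.load(urllib.request.urlopen(url))

    # the classic facet API returns a flat [value, count, value, count, ...] list
    pairs = response["facet_counts"]["facet_fields"]["category"]
    for name, count in zip(pairs[::2], pairs[1::2]):
        print(name, count)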

The most interesting technological problem I’ve noticed with their navigation is the colour definitions. There’s a big difference between a colour being present in an image, and the eye interpreting that colour as being present in an image. I think there are some improvements to be made in the way colours are attributed to images (e.g. here I’ve applied a pink filter but am seeing products with no pink returned). Similarly, there’ll be another marked improvement with better background removal (e.g. here I’ve applied a light blue filter and am seeing products with blue backgrounds).
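To see why “present in an image” and “perceived as present” diverge, consider the naive approach: quantise the pixels and count them. A sketch assuming Pillow, where trimming the border is a crude stand-in for real background removal:

    from collections import Counter
    from PIL import Image

    def dominant_colours(path, n=3, margin=8):
        # shrink first: cheap, and it smooths out noise pixels
        img = Image.open(path).convert("RGB").resize((64, 64))
        width, height = img.size
        counts = Counter()
        for y in range(margin, height - margin):
            for x in range(margin, width - margin):
                r, g, b = img.getpixel((x, y))
                # coarse quantisation into a 4x4x4 grid of colour bins
                counts[(r // 64, g // 64, b // 64)] += 1
        return counts.most_common(n)

A count like this happily reports pink for a scattering of pixels across a print that the eye never registers, which is roughly the failure mode in the examples above.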

Similarity search

Shopachu’s similarity search is quite different to Empora’s. They’ve opted for maximum simplicity in the interface rather than user control, resulting in a single set of similarity search results. In contrast, Empora allows users to determine whether they’re interested in colour similarity, or shape similarity, or both. Simplicity often wins over functionality (iPod example #yawn), so it’ll be interesting to see how they do.
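Neither company publishes its similarity metric, but the control Empora exposes boils down to a weighting between distance measures. A toy sketch of the idea, with the descriptors and field names as stand-ins:

    import math

    def euclidean(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    def combined_distance(item_a, item_b, colour_weight=0.5):
        # Shopachu effectively fixes the weight; Empora hands the dial
        # to the user (all colour, all shape, or somewhere in between)
        d_colour = euclidean(item_a["colour_histogram"], item_b["colour_histogram"])
        d_shape = euclidean(item_a["shape_descriptor"], item_b["shape_descriptor"])
        return colour_weight * d_colour + (1.0 - colour_weight) * d_shape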

Another issue is the quality of the input data. This challenge is the same for Empora, or anyone else aggregating data from third parties, in that category information is inconsistent. One effect of this is that the similarity results for an often poorly-classified item like a belt may also include jewellery or other items filed under “accessories” or “miscellaneous” in the retailer’s data; another is that you often see duplicate items.
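Most aggregators end up running some variant of key-based de-duplication over the feeds, and it only works as well as the keys. A rough sketch with invented field names shows why near-misses slip through:

    def dedup_key(item):
        # normalise the fields retailers most often agree on
        title = " ".join(item["title"].lower().split())
        return (item["brand"].lower(), title, round(item["price"], 2))

    def deduplicate(items):
        seen = {}
        for item in items:
            # keep the first occurrence of each key; the same belt listed
            # under "accessories" by two retailers collapses here, but a
            # one-character title difference sails straight through
            seen.setdefault(dedup_key(item), item)
        return list(seen.values())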

Keeping the traffic quality high

An interesting design decision for me is that the default image action on Shopachu is a similarity search, i.e. when you click on an image it takes you to an internal page featuring more information and similar products. This is in contrast to the default action on Empora or Like.com, which is to send the visitor straight to the retailer’s product page.

The design trade-off here is between clickthrough and conversion rates. If you make it easy to get to the retailer your clickthrough rate goes up, but you run the risk of a smaller proportion converting from a visit to a purchase. Here Shopachu are reducing that risk (and also the potential up-side) by keeping visitors on their site until they explicitly signal the intent to buy (the user has to click “buy” before they’re allowed through to the retailer).
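Back-of-envelope, with every number below invented for illustration, the trade-off looks something like this:

    visitors = 10_000
    cpc = 0.15     # pay-per-click rate
    payout = 3.00  # pay-per-acquisition rate

    # scenario A: image click goes straight to the retailer
    ctr_direct, conv_direct = 0.30, 0.02
    # scenario B: an extra "buy" click filters out the merely curious
    ctr_gated, conv_gated = 0.12, 0.06

    # per-click revenue favours the direct route
    print(visitors * ctr_direct * cpc)  # 450.0
    print(visitors * ctr_gated * cpc)   # 180.0

    # per-acquisition revenue can flip the other way
    print(visitors * ctr_direct * conv_direct * payout)  # 180.0
    print(visitors * ctr_gated * conv_gated * payout)    # 216.0

Under CPC the gate just costs you clicks; under CPA, higher-quality traffic can more than pay for itself.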

Getting people hooked

There are a few features on Shopachu aimed at retention, namely Price Alerts and the ability to save outfits (Polyvore style). These features seem pretty usable, although I think they’re still lacking that level of polish that inspires passionate users. I’d be interested to know what the uptake statistics look like.

In summary

I think this implementation shows that Incogna have thought about all the right problems, and that they clearly have the capability to solve the technological issues. On the down-side, cleaning up retailers’ data is a tough, time-consuming business, and I think they need to find a little inspiration on the visual design side.


Open Source Search Social

Thursday 5th November, 2009

It’s been a little while since the last Open Source Search Social, so we’re getting really imaginative and holding another one, this time on Wednesday the 18th of November. As usual the event is in the Pelican pub just off London’s face-bleedingly trendy Portobello Road.

The format is staying roughly the same. No agenda, no attitude, just some geeks talking about search and related topics in the presence of intoxicating substances.

Please come along if you can: just get in touch or sign up on the Upcoming page.


Bing – abbreviation of b(or)ing?

Wednesday 3rd June, 2009

I can’t comment on the technical architecture of Bing, but ultimately it doesn’t matter that much. Bing is trying to make a solved problem look new by adding a big photograph of a man standing on a mountain. Bing is b(or)ing. As Hugh MacLeod says, Microsoft: change the world or go home.

Maybe they can get some traffic by doing deals, but ultimately a mousetrap is just a mousetrap unless it can do something other mousetraps can’t. Until then I don’t see it making any waves.


Open Source Search Social

Thursday 28th May, 2009

Following on from the undeniably interesting Search/Lucene social in London last month we’re organising another one… this time broadening the scope a little to other open source search projects and related geekery… Solr, Hadoop, Mahout, etc.

We’re meeting up on Monday the 15th of June, at The Pelican pub (nearest tube Westbourne Park).

If you’re working in the search field and fancy an informal chat then come along: please sign up on Upcoming or drop me a line.

Update 28th May, 16:19: Added Upcoming link


Terminology, similarity search, and other animals

Thursday 30th April, 2009

In the walkway-level study room of my old Physics department there’s a desk, where I once found this timeless conversation etched into the surface like a prehistoric wooden version of Twitter:

Protagonist: – “You’re a mook”

Antagonist: – “What’s a mook?”

Protagonist: – “Only a mook would say that”

Aside from any revelations about the emotional maturity of undergrad physicists, I think the lesson here is that communication speeds up when both parties use the same terminology and know what it means.

My area of the CBIR industry has a terminology problem. I’d like to have a vocabulary of terms to describe the apps that are emerging weekly.

Visual Search, Image Search, or Visual Image Search

We’re working on image search, of a sort, although the image isn’t necessarily the object of the search, nor does image search describe only CBIR-enabled apps. We’re searching using visual attributes of images, but “visual search” as a term has already been marked out by companies that visualise text search.

Similarity search

This one seems to hit the consumer-facing nail on the head, for some apps at least. Technologically I’d include audio search and image fingerprinting apps like Shazam and SnapTell under the term, but for consumers there may be no obvious connection, so perhaps this is a runner.

Media As Search Term (MAST)

Media As Search Term describes, for me, the group of apps that use a media object such as an image or an audio clip as a search query to generate results, either of similar objects or of instances of the same object. I think MAST sums up what I’d describe as my software peer group (media similarity and media fingerprinting apps), although it doesn’t seem as snappy as AJAX. Ah well.


Google Image Similarity first impressions

Tuesday 21st April, 2009

Right in line with my too-obvious-to-be-worth-anything prediction, Google have just released a Labs image similarity feature for Google Images. Others have commented on this already, but obviously it’s hugely interesting for me because of my current work on Empora‘s exploratory visual search, so I’m going to throw my tuppence into the ring as well.

Below are my first impressions.

Product impact

Google Similar Images (GSI) offers just one piece of functionality: the ability to find images that are similar to your selected image. You may only select images from their chosen set; there’s no dynamic image search capacity yet. Similar images are displayed either as a conventional result set when you click on “similar images”, or as a list of thumbnails in the header when you click through to see the original source.

The aims of this work will be (broadly):

  1. Keeping up with the Joneses. The other major search engines are working on similar functionality and Google can’t be seen to fall behind.
  2. User engagement. The more time you spend exploring on Google, the more their brand is burned into your subconscious.
  3. Later expansion of search monetisation. AdSense and AdWords get a better CTR than untargeted advertising because they adapt to the context of your search. If context can also be established visually, there seems to be strong potential for revenue.

Getting results

The quality of results for a project like this is always going to be variable, as the compromises between precision, recall, performance, and cost will continue to be sketched out in crayon until more mature vocabularies and toolsets are available. That said, Google need to keep users impressed, and they’ve done pretty well.

A few good examples:

A few bad examples:

Under the hood

Once the “qtype=similar” parameter is set in the URL, the only parameter that affects the set of similar images is “tbnid”, which identifies the query image. The text query parameter does not seem to change the result set, only the accompanying UI. While this doesn’t allow us to draw any dramatic conclusions, it would allow them to pre-compute the results for each image.
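If the result set really does depend only on tbnid, then serving “similar images” is a lookup rather than a live search. A toy sketch of the precomputation idea, not a claim about Google’s architecture:

    # offline: compute each image's neighbours once (toy data, invented ids)
    def compute_neighbours(image_id, index):
        # stand-in for the real similarity computation
        return [other for other in index if other != image_id][:3]

    index = ["img_a", "img_b", "img_c", "img_d"]
    similar = {image_id: compute_neighbours(image_id, index) for image_id in index}

    # online: a "similar images" request is a dictionary lookup keyed on
    # the image id; the text query only dresses the page around the results
    def handle_request(tbnid):
        return similar.get(tbnid, [])

    print(handle_request("img_b"))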

The first clear conclusion is metadata. Google have obviously been leveraging their formidable text index, and why not. The image similarity behaviour indicates that the textual metadata associated with images is being used to influence the results. One of the clearest indicators is that they’re capable of recognising the same individual’s face as long as that person’s name is mentioned; unnamed models don’t benefit from the same functionality.

My second insight is that they’re almost certainly using a structural technique such as Wavelet Decomposition to detect shapes within images. The dead give-away here is that search results are strongly biased towards photographs taken from the same angle.
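That style of technique goes back at least to Jacobs, Finkelstein and Salesin’s “Fast Multiresolution Image Querying”, and the spirit of it fits in a few lines. A sketch assuming PyWavelets and Pillow, and emphatically not Google’s pipeline:

    import numpy as np
    import pywt
    from PIL import Image

    def wavelet_signature(path, keep=60):
        # greyscale at a fixed size so signatures are comparable
        img = Image.open(path).convert("L").resize((128, 128))
        pixels = np.asarray(img, dtype=float)
        # multi-level 2D Haar decomposition of the image
        coeffs = pywt.wavedec2(pixels, "haar", level=4)
        flat, _ = pywt.coeffs_to_array(coeffs)
        flat = flat.ravel()
        # keep only the positions and signs of the largest coefficients;
        # photos taken from the same angle share much of this structure
        top = np.argsort(np.abs(flat))[-keep:]
        return {int(i): int(np.sign(flat[i])) for i in top}

    def signature_overlap(sig_a, sig_b):
        # count coefficient positions kept by both, with matching sign
        return sum(1 for i, s in sig_a.items() if sig_b.get(i) == s)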

I suspect that they’re not yet using a visual fingerprinting technique (such as FAST) to recognise photographs of the same object. If they were, I’d expect them to have already used it to remove duplicate images. This may well come later.
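Even short of full fingerprinting, a blunt perceptual hash catches exact and near-exact copies, which is most of the duplicate problem in the result sets. A sketch of a difference hash, assuming Pillow:

    from PIL import Image

    def dhash(path, size=8):
        # greyscale thumbnail one pixel wider than it is tall
        img = Image.open(path).convert("L").resize((size + 1, size))
        pixels = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = pixels[row * (size + 1) + col]
                right = pixels[row * (size + 1) + col + 1]
                # each bit records whether brightness rises left-to-right
                bits = (bits << 1) | int(left > right)
        return bits

    def hamming(a, b):
        # near-duplicates land within a few bits of each other
        return bin(a ^ b).count("1")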

Finally

All in all my impression is that they’ve implemented this stuff well, but that there’s a lot more yet to come. Namely:

  • Handling of duplicates, i.e. separation between searching for similar images and searching for instances of the same image
  • A revenue stream