Shopachu – Incogna’s new visual product browser

Tuesday 5th January, 2010

In the back half of last year, visual search outfit Incogna released their visual shopping browser, Shopachu. I’ve followed some of Incogna’s previous releases, so I thought I’d share some thoughts on this one too.

What does it do?

This site has a very similar model to our own consumer-facing MAST app, Empora. It makes money by sending consumers to retailer sites, and retailers are, for obvious reasons, willing to pay for suitable traffic. The main forces that influence the design of a site like this are retention, and the clickthrough and conversion rates of your traffic (I’ve sketched how the three fit together after the list):

Retention – you need to impress people, then ideally remind them to come back to you

Clickthrough – you need to send a good proportion of visitors to retailers in order to make money

Conversion – if the visitors you send aren’t interested in buying the clicked product then the retailers won’t want to pay for that traffic on a per-click basis (although they might be interested in the CPA model, which doesn’t pay until someone buys)
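
To make those three numbers concrete, here’s a minimal sketch of how they relate and how the two payment models differ. Every figure, rate, and commission value below is invented for illustration; nothing comes from Shopachu or Empora.

```python
def clickthrough_rate(retailer_clicks, visits):
    """Proportion of visits that result in a click out to a retailer."""
    return retailer_clicks / visits

def conversion_rate(purchases, retailer_clicks):
    """Proportion of clicked-through visitors who go on to buy."""
    return purchases / retailer_clicks

def cpc_revenue(retailer_clicks, price_per_click):
    """Cost-per-click: the retailer pays for every visitor sent."""
    return retailer_clicks * price_per_click

def cpa_revenue(purchases, avg_order_value, commission):
    """Cost-per-acquisition: the retailer pays only when someone buys."""
    return purchases * avg_order_value * commission

visits, clicks, purchases = 10_000, 800, 24
print(f"clickthrough {clickthrough_rate(clicks, visits):.1%}")   # 8.0%
print(f"conversion   {conversion_rate(purchases, clicks):.1%}")  # 3.0%
print(f"CPC revenue  {cpc_revenue(clicks, 0.15):.2f}")           # 120.00
print(f"CPA revenue  {cpa_revenue(purchases, 60.0, 0.08):.2f}")  # 115.20
```

The point of the comparison is that CPC pays on every click regardless of intent, which is exactly why low-quality traffic eventually gets priced down or rejected.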

First Impressions

People’s first impressions are usually determined by a combination of design and how well a site conforms to their expectations. I’ve probably got distorted expectations given my experience working with this type of application, but in that respect I was pleasantly surprised: Shopachu has some good features and makes them known. In terms of design I was less impressed; the icons and gel effects don’t seem to fit, and I think there are whitespace and emphasis issues (sorry guys, trying to be constructive).

Finding stuff

It’s fairly easy to find things on Shopachu. The filters are easy to use (although I couldn’t get the brand filter to work, which could be a glitch). The navigation is pretty easy too, although it doesn’t currently provide second-generation retail search features like facet counts (i.e. showing the number of products in a category before you click on it).
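
A facet count is simple to sketch: tally the products behind each filter value so the number can be shown before the click. The product list and field names below are invented for illustration.

```python
from collections import Counter

products = [
    {"category": "Dresses", "brand": "Acme",  "colour": "red"},
    {"category": "Dresses", "brand": "Blott", "colour": "blue"},
    {"category": "Shoes",   "brand": "Acme",  "colour": "black"},
]

def facet_counts(items, field):
    """Count products per distinct value of a field, e.g. per category."""
    return Counter(item[field] for item in items)

print(facet_counts(products, "category"))  # Counter({'Dresses': 2, 'Shoes': 1})
print(facet_counts(products, "brand"))     # Counter({'Acme': 2, 'Blott': 1})
```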

The most interesting technological problem I’ve noticed with their navigation is the colour definitions. There’s a big difference between a colour being present in an image and the eye interpreting that colour as being present in the image. I think there are some improvements to be made in the way colours are attributed to images (e.g. here I’ve applied a pink filter but am seeing products with no pink in them). Similarly, there’ll be another marked improvement with better background removal (e.g. here I’ve applied a light blue filter and am seeing products with blue backgrounds).
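
To show what I mean by the gap between a colour being in an image and a shopper perceiving it, here’s a deliberately naive sketch: guess the background from the corner pixels, discard anything close to it, and only then pick a dominant colour, so a blue backdrop doesn’t get attributed to a white shirt. Pillow and NumPy are assumed; real systems use far better segmentation than this.

```python
import numpy as np
from PIL import Image

def dominant_colour(path, bg_tolerance=40):
    """Return a coarse dominant RGB colour, ignoring a guessed background."""
    img = np.asarray(Image.open(path).convert("RGB").resize((128, 128)), dtype=int)

    # Estimate the background colour from the four corner pixels.
    corners = np.array([img[0, 0], img[0, -1], img[-1, 0], img[-1, -1]])
    background = corners.mean(axis=0)

    # Keep only pixels that differ enough from that background estimate.
    distance = np.abs(img - background).sum(axis=2)
    foreground = img[distance > bg_tolerance]
    if foreground.size == 0:              # the whole image looks like background
        foreground = img.reshape(-1, 3)

    # Quantise to coarse bins and return the centre of the most common bin.
    bins = (foreground // 64) * 64 + 32
    values, counts = np.unique(bins, axis=0, return_counts=True)
    return tuple(int(v) for v in values[counts.argmax()])

# print(dominant_colour("product.jpg"))   # e.g. (224, 224, 224) for a white shirt
```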

Similarity search

Shopachu’s similarity search is quite different to Empora’s. They’ve opted for maximum simplicity in the interface rather than user control, resulting in a single set of similarity search results. In contrast, Empora allows users to decide whether they’re interested in colour similarity, shape similarity, or both. Simplicity often wins over functionality (iPod example #yawn), so it’ll be interesting to see how they do.
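
The design difference boils down to whether the blend of colour and shape is fixed or exposed to the user. Here’s a sketch; the feature vectors and the 50/50 default are stand-ins, since neither site has published how it actually weights things.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity(query, item, colour_weight=0.5):
    """Blend colour and shape similarity into a single score.

    A fixed colour_weight gives one result list (the simple interface);
    letting the user move it gives the colour/shape/both style of control."""
    colour_sim = cosine(query["colour"], item["colour"])
    shape_sim = cosine(query["shape"], item["shape"])
    return colour_weight * colour_sim + (1 - colour_weight) * shape_sim

query = {"colour": np.array([0.9, 0.1, 0.0]), "shape": np.array([0.2, 0.8])}
item  = {"colour": np.array([0.8, 0.2, 0.1]), "shape": np.array([0.7, 0.3])}
print(similarity(query, item, colour_weight=1.0))  # colour only
print(similarity(query, item, colour_weight=0.0))  # shape only
print(similarity(query, item))                     # blended
```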

Another issue is the quality of the input data. This challenge is the same for Empora, or anyone else aggregating data from third parties: category information is inconsistent. One effect is that when looking at the similarity results for an often poorly-classified item like a belt, you may also see jewellery or other items that have been classified as “accessories” or “miscellaneous” in the retailers’ data; another is that you often see duplicate items.
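
A rough sketch of the duplicate side of the problem: the same product often arrives from several feeds with slightly different text, so you need some normalised key to collapse them. The field names and normalisation here are invented; real matching would also lean on images and prices.

```python
import re

def dedupe_key(item):
    """Normalise brand + title so near-identical feed entries collide."""
    norm = lambda s: re.sub(r"[^a-z0-9]+", " ", s.lower()).strip()
    return (norm(item["brand"]), norm(item["title"]))

def dedupe(items):
    """Keep the first occurrence of each normalised product key."""
    seen, unique = set(), []
    for item in items:
        key = dedupe_key(item)
        if key not in seen:
            seen.add(key)
            unique.append(item)
    return unique

feed = [
    {"brand": "Acme", "title": "Leather Belt - Brown"},
    {"brand": "ACME", "title": "Leather belt (brown)"},
]
print(len(dedupe(feed)))  # 1
```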

Keeping the traffic quality high

An interesting design decision for me is that the default image action on Shopachu is a similarity search, i.e. when you click on an image it takes you to an internal page featuring more information and similar products. This is in contrast to the default action on Empora or Like.com, which is to send the visitor to the retailer’s product page.

The design trade-off here is between clickthrough and conversion rates. If you make it easy to get to the retailer your clickthrough rate goes up, but you run the risk of a smaller proportion converting from a visit to a purchase. Here Shopachu are reducing that risk (and also the potential upside) by keeping visitors on their site until they explicitly signal an intent to buy: the user has to click “buy” before they’re allowed through to the retailer.

Getting people hooked

There are a few features on Shopachu aimed at retention, namely Price Alerts and the ability to save outfits (Polyvore style). These features seem pretty usable, although I think they’re still lacking that level of polish that inspires passionate users. I’d be interested to know what the uptake statistics look like.

In summary

I think this implementation shows that Incogna have thought about all the right problems, and that they clearly have the capability to solve the technological issues. On the downside: cleaning up retailers’ data is a tough, time-consuming business, and I think they need to find a little inspiration on the visual design side.


Incogna monetise pure image search

Monday 12th January, 2009

I must have missed the launch of this feature, but Incogna’s most recent blog post describes how they’ve implemented visual advertising. The results vary, but overall it’s well done.

I’ve written about Incogna’s image search before, but there’s more to add: as a user of this tool you have no visibility into the depth or type of data available to you, and the app doesn’t currently give you much control over where you go, other than via text search and query images.

Establishing context (or, lost in the supermarket)

Any fans of Steve Krug’s usability classic will recognise the metaphor here. If you’re standing in a supermarket aisle you can see both the length of the aisle and the contents of the shelves (at least the ones near you). You also know your rough position in the store, and can see the signs over each aisle.

Using that input data you can navigate (with a few hiccups) anywhere in the store.

Incogna’s app currently allows you to compare visually, and to search using text, but the depth and type of results remains hidden. As such there’s no real way to effectively navigate within the data set.

I should be clear at this point that this isn’t a criticism of Incogna’s app. This is not a problem with an easy or obvious solution. What I’m suggesting is that there’s still scope for some killer navigation features in this area.

Making money

The monetisation feature on Incogna appears only when their system thinks it can produce a good match between your search and the sponsored products. This is a wise move, since irrelevant ads would ruin the user experience.
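
I don’t know how they gate this, but the general shape of “only show sponsored results when the match looks good” is easy to sketch: score the sponsored catalogue against the query and suppress the ad slot below some confidence threshold. The score function, threshold, and item fields below are all invented for illustration, not Incogna’s actual method.

```python
def sponsored_matches(query_vec, sponsored_items, score_fn, threshold=0.8, limit=3):
    """Return the best-matching sponsored items, or nothing if none are good enough."""
    scored = sorted(
        ((score_fn(query_vec, item["features"]), item) for item in sponsored_items),
        key=lambda pair: pair[0],
        reverse=True,
    )
    good = [item for score, item in scored if score >= threshold]
    return good[:limit]      # an empty list means: show no ad slot at all

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
ads = [
    {"name": "pickup truck", "features": [1.0, 0.0]},
    {"name": "television",   "features": [0.1, 0.9]},
]
print(sponsored_matches([1.0, 0.0], ads, dot))  # only the truck clears the bar
```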

It seems like the results use mainly visual comparison data, possibly with some categorisation thrown in. It worked brilliantly with pictures of trucks, but curiously, while I was browsing Canon cameras, it presented sponsored ads for televisions (both are rectangular, I suppose).

Having fun

The main issue standing in the way of Incogna’s revenue stream is that their app is not yet fun to use. As mentioned above, there’s no sense of position or direction. You can’t learn anything about the images you find without clicking through to the source site, and you can’t properly refine your search… you have to start again, which means there’s no big advantage over Google or any other text-based image search.

More another time.


Incogna goes live

Tuesday 25th November, 2008

Congratulations to the Incogna guys for getting their visual search engine out of Alpha. Good work.


Image search and Incogna’s beta

Saturday 15th November, 2008

I’ve written previously about work by Canadian image search technologists Incogna to harness the power of graphics processors to index images, but I’ve only just found time to try out their image search beta.

A bit of context

The challenges surrounding image search are significant and numerous (just like the opportunities). To start with, an image search engine has to have most of the main features and properties of a text search engine, so the problems include some degree of natural language processing, semantic indexing, and everything that comes with creating a distributed search index.

When you add image comparison into the mix, the load on your infrastructure grows again: bandwidth, storage, the CPU required to extract usable data from images, and the data structures required to hold that visual data.

While those aspects of a search engine design can be tricky to get right, I don’t see them as the hardest problem. I think the hardest problem is caused by the fact that a picture paints a thousand words. An image contains so much information that it’s hard to know exactly what aspect of it a user is looking at… and hence what they mean by similarity and relevance. For example, if someone uses a holiday snap from Yosemite as a query image, are they looking for other pictures of Yosemite, or for shirts like the one the guy in the picture is wearing?

The point I’m getting to is that nobody has really nailed this problem yet, because it’s hard.

Incogna Beta

Playing with this search engine is fun. The user interface is slick and responsive, and search results are returned in a fraction of a second (it’s hard to overstate how important the perception of speed is to user experience). Looking under the hood, the interface uses Prototype to handle animation and talk to a JSON API, which is a solid choice.

Getting results

Judging by the results of a few similarity searches, they seem to be calculating similarity using some sort of shape analysis and some colour data, while also making significant use of non-visual metadata.

The input data they’ve crawled seems to have come from a fairly wide range of sources, although it’s hard to see how big their index is in total since they return quite a tight result set, opting for precision rather than recall.

That leads me to my only criticism of the beta, which is that precision is only useful if you can be sure you have a reasonable understanding of what is relevant to the user. The beta doesn’t offer the user a mechanism to help determine relevance: the “thousand words” problem I described above. For the engine to become really useful, users have to be able to help it decide what is relevant. That said, I don’t have all the answers myself, and Incogna seem like a smart bunch. I expect them to have some very interesting ideas over the next 12 months.


Visual search indexing using parallel GPUs

Tuesday 23rd September, 2008

Building an image search index takes quite a lot of processing power. Apart from all the usual mucking about that building a regular search index entails, you also have to download, resize, and analyse every image you want in the index. That analysis itself consists of many different tasks, usually including visual features to describe colour, texture, shape and so on, and classifiers to recognise specific objects.
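
As a rough illustration of the per-image work involved (not Incogna’s pipeline), here’s a minimal sketch that fetches an image, resizes it, and reduces it to a small colour histogram; a real engine would extract far richer texture, shape, and classifier features, and would do all of this in parallel. Pillow and NumPy are assumed.

```python
import io
import urllib.request

import numpy as np
from PIL import Image

def fetch(url):
    """Download an image and normalise it to RGB."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return Image.open(io.BytesIO(resp.read())).convert("RGB")

def colour_histogram(img, bins=4):
    """Reduce an image to a 64-dimensional normalised colour descriptor."""
    pixels = np.asarray(img.resize((128, 128))).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return (hist / hist.sum()).ravel()

def index_images(urls):
    """Map each URL to its feature vector, skipping anything that fails."""
    index = {}
    for url in urls:
        try:
            index[url] = colour_histogram(fetch(url))
        except Exception as exc:          # dead links and broken files are routine
            print(f"skipped {url}: {exc}")
    return index
```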

Canadian visual search outfit Incogna have taken an interesting approach to image processing: from what I can tell, they’re building their image search indexes using massively parallel GPUs. From asking around the team, the technique has anecdotally produced some very successful tests, so I’ll be keeping an eye on these guys in future.
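
To be clear about why GPUs suit this workload (a toy illustration, not anything Incogna have published): image feature extraction applies the same arithmetic to millions of independent pixels, which is exactly what a GPU parallelises well. The sketch below assumes CuPy purely because it mirrors the NumPy API; replacing cupy with numpy gives the CPU version of the same thing.

```python
import cupy as cp

def mean_colours(batch):
    """batch: (n_images, height, width, 3) uint8 -> (n_images, 3) mean RGB."""
    return batch.reshape(batch.shape[0], -1, 3).mean(axis=1)

# A fake batch of 512 images, processed in one parallel pass on the GPU.
images = cp.random.randint(0, 256, size=(512, 256, 256, 3), dtype=cp.uint8)
print(mean_colours(images)[:2])
```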