It’s great to see fellow Pixstanaut Tomás Lin speaking at the forthcoming Grails Exchange conference in December. He’ll be talking about building rich GUI apps with Flex and Grails. There are still a few tickets left if you can make it.
Today I’m introducing my first ever guest post, written by Pixsta’s own Rohit Patange about some great work he’s been doing with the guidance of Tuncer Aysal. You’ll be able to see the results of their work shortly on our consumer-facing site Empora. – RM
We at Pixsta are interested in understanding what is in an image (recognising and extracting it) in an automated way that requires minimal human input.
Our raw data (images and associated textual information) come from a variety of retailers with considerable variation in terms of data formats and quality. Some retailer images are squeaky clean with white backgrounds and a clear product depiction while others have multiple views of the product, very noisy backgrounds, models, mannequins and other such distracting objects. Since we only care about the product, an essential processing step involves identification of all image parts and the isolation of individual products, if several are present in the retailer image.
The n-shoe case
Let’s take the case of retailer images with multiple product views. This is most commonly encountered in shoe images. Let us call each of the product views a ‘sub-image’.
When we talk about similar shoes, we mean one shoe being similar to another (note the singular). We have to disregard how the shoe is presented in the image: the position of the sub-images, their orientation and other noise. If we do not, image matching technology tends to pick out images with similar presentation rather than similar shoes. Typically a retailer image (of a shoe they are trying to sell) will contain a pair of sub-images showing the shoe from different viewing angles. With standard image matching, the query image on the left produces the following results:
Even though the image database contains images like:
These are not in the result set, despite being much closer matches, because of the presentation and the varying number of sub-images. To overcome this drawback, we have to extract the sub-image that best represents the product in each image and then compare those sub-images. To extract it, each image needs to go through the following processing steps:
- Determine which of the sub-images best represents the shoe.
- Extract that sub-image.
- Determine the shoe orientation in that sub-image.
- Standardise the image by rotation, flipping and scaling.
All the product images (shoes in this case) go through this process of standardisation, resulting in a uniform set of images. Pictorially, the input and output of the standardisation process are:
Let’s look at the procedure in more detail assuming that the image has been segmented into background and foreground.
- The first step is to identify all the sub-images in the foreground. The foreground pixels of the image are labelled so that different sub-images receive different labels, marking them as distinct.
- After the first iteration of labelling it is quite likely that a sub-image has been marked with two or more labels, so all connected labels have to be merged.
- The third step is to determine which of the sub-images is of interest; that is, picking the right label.
- Once the right sub-image has been extracted, it is transformed to match a predefined standard, removing differences in the size of the product, its orientation (the direction the shoe points) and the position of the shoe within the image. A rough code sketch of these steps follows below.
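As a minimal sketch of the steps above, assuming the segmentation stage has already produced a boolean foreground mask and assuming a NumPy/SciPy stack (the post doesn’t say what the real pipeline is built on), the processing might look like this. The “largest region wins” and “more mass on the left” heuristics are illustrative stand-ins, not necessarily the rules used in production.

```python
# Minimal sketch of the standardisation steps, assuming the image has already
# been segmented into a boolean foreground mask (True = product pixels).
# The "largest region" and "more mass on the left" heuristics are illustrative
# assumptions, not necessarily the rules used in the real pipeline.
import numpy as np
from scipy import ndimage

def standardise(foreground, out_size=(128, 256)):
    # Steps 1 & 2: label connected foreground regions. ndimage.label joins
    # touching pixels into one label, which covers the "merge connected
    # labels" pass described above.
    labels, n = ndimage.label(foreground)
    if n == 0:
        raise ValueError("no foreground found")

    # Step 3: pick the sub-image of interest -- here, simply the region with
    # the most pixels (an assumed heuristic).
    sizes = ndimage.sum(foreground, labels, index=np.arange(1, n + 1))
    mask = labels == (int(np.argmax(sizes)) + 1)

    # Crop to the bounding box of the chosen sub-image.
    ys, xs = np.nonzero(mask)
    sub = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # Step 4a: estimate the shoe's orientation as the principal axis of the
    # foreground pixel coordinates, then rotate it to the horizontal.
    pts = np.column_stack(np.nonzero(sub)).astype(float)
    pts -= pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    dy, dx = evecs[:, np.argmax(evals)]
    angle = np.degrees(np.arctan2(dy, dx))
    sub = ndimage.rotate(sub.astype(float), angle, reshape=True) > 0.5

    # Step 4b: flip left-right so the bulk of the shoe always sits on the
    # same side (a stand-in for "make every shoe point the same way").
    half = sub.shape[1] // 2
    if sub[:, half:].sum() > sub[:, :half].sum():
        sub = sub[:, ::-1]

    # Step 4c: scale to a fixed canvas size.
    zoom = (out_size[0] / sub.shape[0], out_size[1] / sub.shape[1])
    return ndimage.zoom(sub.astype(float), zoom, order=1) > 0.5
```

Running every catalogue image through a function along these lines is what yields the uniform set of images described next.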
All product images (shoes in this case) go through this process before the representative information from the image is extracted for comparison. Now the results for the query image will look like:
Generally there are two shoes in an image, but the method extends naturally to n shoes.
Now that all the excitement of the launch has settled down and we’re back into a routine, I think it’s time for a quick walk through the functionality (which won’t take long, since we haven’t put that much live yet; there’s a lot of interesting functionality still to come).
Hunting vs. gathering
Plenty of people go into a shop armed with a plan. They know what they want, or at least what specific need they need to fill. Others like to browse, look at what there is, what other people are doing, and generally wait for inspiration or recommendation. We’ve tried to fulfil both of those patterns using the standard “search vs. browse” split, but have tried to improve on each.
When you view an item, for example this orange Ghibli bag, we obviously show a picture, description, etc. and a link to the retailer. All standard stuff for a shopping aggregator. What we’ve added is that we also show the most visually similar items in our collection, according to three different sets of criteria:
- We show the most similar bags by shape, so that anyone who’s interested in a particular style or type of bag can see them straight away.
- We show bags in the most similar colours, so anyone who was drawn to that bag because of its colour can see lots of other bags that they may also be interested in (a sketch of one way such a list could be built follows after this list).
- We show products from other categories in the same colour, in case users want to colour-coordinate.
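For illustration only, here is one way the “similar colours” lists could be produced, assuming each product image already has a foreground mask: a coarse RGB histogram over the product pixels, compared by histogram intersection. The bin count, colour space and distance measure are assumptions made for this sketch, not a description of what actually runs on Empora.

```python
# For illustration only: coarse RGB histograms per product, compared with
# histogram intersection. Bin count, colour space and distance measure are
# assumptions for this sketch, not a description of the live system.
import numpy as np

def colour_histogram(image, mask, bins=4):
    """image: HxWx3 uint8 array; mask: HxW boolean foreground mask."""
    pixels = image[mask]                                  # N x 3 product pixels
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist.ravel() / max(len(pixels), 1)             # normalised histogram

def most_similar(query_hist, catalogue_hists, k=5):
    """Indices of the k catalogue items closest in colour to the query."""
    scores = [np.minimum(query_hist, h).sum() for h in catalogue_hists]
    return np.argsort(scores)[::-1][:k]
```

The cross-category list in the last bullet would simply compare the query histogram against products drawn from other categories instead of the same one.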
In addition to the regular search options you’d expect (category, keywords, etc.) we also allow people to search by the overall colour of the item (from the top right corner of any page). Now in terms of technology I’m not particularly happy with this functionality yet, but I’m a perfectionist. It already performs a lot better visually than the Amazon equivalent*, and I know that we’ve got big improvements in the pipeline.
* To be fair to Amazon their results are better than they look. The products they show are available in the query colour, they just choose to show only the first image, so their results look broken by visual inspection.
Back to the physical shop metaphor
What we’re trying to do is help the searchers by enabling them to search using visual data, effectively the equivalent of training all the staff in a shop to answer questions like “have you got anything that goes with these shoes?”.
At the same time we’re trying to help the browsers by sorting each department by type and colour, so they always know where they’re going.
Obviously this is fairly fresh territory so there’ll always be wrinkles that need ironing out, but on the whole I think the trend towards smarter indexing is inevitable, and the indexing of visual information is part of that (that’s a whole other post).
Last night we finally broke a bottle of champagne against the side of the good ship Empora and watched her slide out of the dock. We’ve been working on the project for the past couple of months, so it’s a pleasure to see it go live.
As well as the usual search functionality you’d expect on a retail site, Empora enables searching and browsing using the content of product images (currently either women’s clothes or men’s clothes). When you view a product you’re also shown items that may relate to it visually, either in terms of shape or colour.
As with any project there are always things I’d change, and things that aren’t done yet, but overall I’m pretty chuffed with what our team has accomplished so far. We’re by no means finished though. Expect big things in the near future.
At Pixsta this week we chose our development platform for the next stage of the team’s work. We evaluated a lot of tools against a variety of criteria, and debated long and hard. The winner ticked the boxes that mattered most to us:
- It runs on a familiar Java stack (Hibernate, Spring, etc.)
- Scripts are compiled to Java bytecode
- It uses convention over configuration
- We have a few projects under our belts with this framework already
Goodbye PHP, we will not meet again.
I must have missed the launch of this feature, but Incogna’s most recent blog post talks about how they’ve implemented visual advertising. The results vary, but overall it’s well done.
I’ve written about Incogna’s image search before, but there’s more to add. When using this tool, as a user you have no visibility into the depth or type of data available to you, nor does the app currently give you any control over how you move through it, other than text search and query images.
Establishing context (or, lost in the supermarket)
Any fans of Steve Krug’s usability classic will recognise the metaphor here. If you’re in an aisle in a supermarket you can see both the length of the aisle and the contents of the shelves (at least the ones near you). You also know your rough position in the store, and can see the signs around you.
Using that input data you can navigate (with a few hiccups) anywhere in the store.
Incogna’s app currently allows you to compare visually, and to search using text, but the depth and type of results remains hidden. As such there’s no real way to effectively navigate within the data set.
I should be clear at this point that this isn’t a criticism of Incogna’s app. This is not a problem with an easy or obvious solution. What I’m suggesting is that there’s still scope for some killer navigation features in this area.
The monetisation feature on Incogna appears only when their system thinks it can produce a good match between your search and the sponsored products. This is a wise move, since irrelevant ads would ruin the user experience.
It seems like the results use mainly visual comparison data, possibly with some categorisation thrown in. It worked brilliantly with pictures of trucks, but curiously while I was browsing Canon cameras it presented sponsored ads for televisions (both are rectangular I suppose).
The main issue standing in the way of Incogna’s revenue stream is that their app is not yet fun to use. As mentioned above there’s no sense of position or direction. You can’t learn anything about the images you find without clicking through to the source site, and you can’t properly refine your search… you have to start again, which means that there’s no big advantage over Google, or any other text-based image search.
More another time.
Back in May, Alex Iskold over on ReadWriteWeb kicked off a discussion of how “semantic search” technologies are doing, and where they’re headed. I came across the article again recently and it prompted me to write this.
Semantic search has often been named as the successor to Google. This is a prediction which I think misses two key points.
You don’t have to be a semantic search company to do it
Extracting and presenting structured data from unstructured or partially structured sources is part of the top-down approach to the Semantic Web (aka. Web 3.0, apparently). The basic idea is that using language analysis, machine learning and databases of entities you can understand content, rather than just processing it statistically like 20th century search engines. This gives you the possibility of a richer and tighter search experience, e.g. an initial search for “bush” could then be easily narrowed to only include articles about the Australian bush rather than George W.
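As a toy illustration (hypothetical data and function names, not any particular engine’s internals), entity annotations turn that kind of narrowing into a simple filter over the initial result set:

```python
# Toy illustration only (hypothetical data and functions, not any engine's
# real API): entity annotations let an ambiguous "bush" query be narrowed
# after the fact with a simple filter.
docs = [
    {"text": "Walking in the Australian bush", "entities": {"Australian bush"}},
    {"text": "Bush addresses the press",       "entities": {"George W. Bush"}},
    {"text": "Bush fire warnings issued",      "entities": {"Australian bush"}},
]

def search(term, entity=None):
    hits = [d for d in docs if term.lower() in d["text"].lower()]
    if entity is not None:                    # the semantic narrowing step
        hits = [d for d in hits if entity in d["entities"]]
    return hits

print(len(search("bush")))                            # 3: the ambiguous result set
print(len(search("bush", entity="Australian bush")))  # 2: narrowed to the Australian bush
```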
While semantically-driven faceted search is still the domain of Grapeshot, Clusty and the like, the underlying technologies are already in use by mainstream search engines. Even image search engines such as Pixsta use semantic technology to extract structured data from unstructured documents (in our case, the documents happen to be images).
Google will not be killed with minor features
When was the last time you had to click through to the second page of search results? In fact, when was the last time you had to scroll past the fold to the 10th result? Maybe some of you have recently, but I’d bet it doesn’t happen often.
What this says is that for the main search engine use case, text-driven statistical search is good enough. Without a killer feature for mainstream users semantic search engines will not be able to tempt them away from the very simple tool they’ve already learned how to use. I agree with Iskold’s point that these companies need to create a very good user interface… although I disagree that this will be enough to win search market share.
It’s not all doom and gloom though. Semantic technology is impressive. If you get a chance to try out a tool like Silobreaker you’ll find some very interesting user interface work and some impressive data analysis happening behind the scenes. In my opinion it’s niches like these (Silobreaker is a semantic tool for news search and political research) where users have enough motivation and specialisation to move away from the top 5 search results on Google/Yahoo/Live.
I’ve got a couple more iPhone apps to add to my previous list.
This is the iPhone version of the popular phone service. Just open up the app, tap on “tag music” and it’ll have a crack at identifying whatever music you’re listening to. Seems to work well so far, although I haven’t tested it with anything obscure or in noisy areas.
Conceptually this is pretty close to what we’re working on here at Pixsta, but with audio rather than pictures.
A free e-book reader. You can get access to lots of public domain content, as well as selected paid content from some publishers. Very handy for reading those stubborn classics on a cramped tube train. I’m currently reading a Bertrand Russell book that I otherwise would never get round to.