It’s great to see fellow Pixstanaut Tomás Lin talking at the forthcoming Grails Exchange conference in December. He’ll be talking about building rich GUI apps with Flex and Grails. There are still a few tickets left if you can make it.
Today I’m introducing my first ever guest post, written by Pixsta’s own Rohit Patange about some great work he’s been doing with the guidance of Tuncer Aysal. You’ll be able to see the results of their work shortly on our consumer-facing site Empora. – RM
We at Pixsta are interested in understanding what is in an image (recognising and extracting its contents) in an automated way that requires minimal human input.
Our raw data (images and associated textual information) come from a variety of retailers with considerable variation in terms of data formats and quality. Some retailer images are squeaky clean with white backgrounds and a clear product depiction while others have multiple views of the product, very noisy backgrounds, models, mannequins and other such distracting objects. Since we only care about the product, an essential processing step involves identification of all image parts and the isolation of individual products, if several are present in the retailer image.
The n-shoe case
Let’s take the case of retailer images with multiple product views. This is most commonly encountered in shoe images. Let us call each of the product views a ‘sub-image’.
When we talk about similar shoes, we mean one shoe being similar to another (note the singular). We have to disregard how the shoe is presented in the image: the position of the sub-images, the orientation and other noise. If we do not, image matching technology tends to pick out images with similar presentation rather than similar shoes. Typically a retailer image (a shoe they are trying to sell) will have a pair of sub-images of shoes at different viewing angles. Pictorially, with standard image matching we get the following results for a query image on the left:
Even though the image database contains images like:
These are not in the result set despite being much closer matches, because of the presentation and the varying number of sub-images. To overcome this drawback, we have to extract the sub-image which best represents the product for each image and then compare these sub-images. For the sub-image to be extracted, the image needs to go through the following processing steps:
- Determine which of the sub-images best represents the shoe.
- Extract that sub-image.
- Determine the shoe orientation in that sub-image.
- Standardise the image by rotation, flipping and scaling.
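The four steps above can be sketched in a few lines. This is a toy illustration only (numpy, a hypothetical `standardise` helper, and "toe points left" as an assumed standard orientation — none of these are the production pipeline); it crops the foreground to its bounding box and flips it to a canonical direction, omitting the scaling step for brevity:

```python
import numpy as np

def standardise(mask):
    """Toy sketch of the standardisation steps.

    mask: 2-D boolean array, True = foreground (shoe) pixels,
    assumed to already contain only the chosen sub-image.
    Returns a cropped, consistently-oriented version of it.
    """
    # Crop to the bounding box of the foreground pixels.
    ys, xs = np.nonzero(mask)
    sub = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # Crude orientation test: if more foreground mass sits in the
    # right half, assume the toe points right ...
    h, w = sub.shape
    if sub[:, w // 2:].sum() > sub[:, :w // 2].sum():
        # ... and flip horizontally so every shoe points the same way.
        sub = sub[:, ::-1]

    # (A real pipeline would also scale to a standard size here.)
    return sub
```

In practice the orientation step would use something more robust than a left/right mass comparison (image moments, for instance), but the flow of crop, orient, normalise is the same.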
All the product images (shoes in this case) go through this process of standardisation, resulting in a uniform set of images. Pictorially the input and the output image of the standardisation process are:
Let’s look at the procedure in more detail assuming that the image has been segmented into background and foreground.
- The first step is to identify all the sub-images in the foreground. The foreground pixels are labelled in such a way that different sub-images get different labels, marking them as distinct.
- After the first pass of labelling it is quite possible that a single sub-image carries two or more labels. All connected labels therefore have to be merged.
- The third step is to determine which of the sub-images is of interest; that is, picking the right label.
- Once the right sub-image is extracted, its orientation is corrected to match a predefined standard, removing differences in the size of the product image, the orientation (the direction the shoe is pointing) and the position of the shoe (sub-image) within the image.
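The first two steps are a classic connected-component labelling problem. A minimal sketch of one way to do it (pure Python, 4-connectivity, union-find for the merge step — the production system need not work this way):

```python
def label_components(mask):
    """Two-pass connected-component labelling, 4-connectivity.

    mask: list of lists of 0/1, 1 = foreground.
    Returns a grid of labels (0 = background) in which each
    distinct sub-image ends up with one distinct label.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]  # union-find forest; index 0 is the background

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    next_label = 1
    # First pass: assign provisional labels from the up/left neighbours.
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up or left:
                labels[y][x] = min(l for l in (up, left) if l)
                if up and left and up != left:
                    # One component seen under two labels: merge them.
                    parent[find(max(up, left))] = find(min(up, left))
            else:
                parent.append(next_label)
                labels[y][x] = next_label
                next_label += 1

    # Second pass: collapse every provisional label to its root.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

The union-find structure is exactly the "merge connected labels" step: U- and L-shaped regions pick up multiple provisional labels on the first pass, and the second pass collapses them to one.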
All product images (shoes in this case) go through this process before the representative information from the image is extracted for comparison. Now the results for the query image will look like:
Generally there are two shoes in an image, but the method extends naturally to ‘n’ shoes.
So now that all the excitement of the launch has settled down and we’re back into routine, I think it’s time for a quick walk through the functionality (which won’t take long, since we haven’t put that much live yet; there’s a lot of interesting functionality still to come).
Hunting vs. gathering
Plenty of people go into a shop armed with a plan. They know what they want, or at least what specific need they want to fill. Others like to browse, look at what there is and what other people are doing, and generally wait for inspiration or recommendation. We’ve tried to serve both of those patterns using the standard “search vs. browse” split, but have tried to improve on both.
When you view an item, for example this orange Ghibli bag, we obviously show a picture, description, etc. and link to the retailer. All standard stuff for a shopping aggregator. What we’ve added is that we also show the most visually similar items in our collection, according to three different sets of criteria:
- We show the most similar bags by shape, so that anyone who’s interested in a particular style or type of bag can see them straight away.
- We show bags in the most similar colours, so anyone who was drawn to that bag because of its colour can see lots of other bags that they may also be interested in.
- We show products from other categories in the same colour, in case users want to colour-coordinate.
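By way of illustration of colour matching in general (a generic sketch, not necessarily what runs behind Empora), similarity by overall colour can be approximated by comparing coarse RGB histograms:

```python
import numpy as np

def colour_histogram(pixels, bins=4):
    """Coarse RGB histogram of an image given as an (N, 3) array of
    0-255 values, normalised so images of different sizes compare."""
    idx = np.asarray(pixels) * bins // 256        # per-channel bin, 0..bins-1
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def similarity(h1, h2):
    """Histogram intersection: 1.0 means an identical colour profile."""
    return float(np.minimum(h1, h2).sum())
```

Ranking a collection by `similarity` against a query histogram gives a basic “most similar colours” list; real systems refine this with perceptual colour spaces and spatial information, which is where the improvements in the pipeline come in.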
In addition to the regular search options you’d expect (category, keywords, etc.) we also allow people to search by the overall colour of the item (from the top right corner of any page). Now in terms of technology I’m not particularly happy with this functionality yet, but I’m a perfectionist. It already performs a lot better visually than the Amazon equivalent*, and I know that we’ve got big improvements in the pipeline.
* To be fair to Amazon their results are better than they look. The products they show are available in the query colour, they just choose to show only the first image, so their results look broken by visual inspection.
Back to the physical shop metaphor
What we’re trying to do is help the searchers search by letting them use visual data, effectively the equivalent of training all the staff in a shop to answer questions like “have you got anything that goes with these shoes?”.
At the same time we’re trying to help the browsers by sorting each department by type and colour, so they always know where they’re going.
Obviously this is fairly fresh territory so there’ll always be wrinkles that need ironing out, but on the whole I think the trend towards smarter indexing is inevitable, and the indexing of visual information is part of that (that’s a whole other post).
Last night we finally broke a bottle of champagne against the side of the good ship Empora and watched her slide out of the dock. We’ve been working on the project for the past couple of months, so it’s a pleasure to see it go live.
As well as the usual search functionality you’d expect on a retail site, Empora enables searching and browsing using the content of product images (currently either women’s clothes or men’s clothes). When you view a product you’re also shown items that may relate to it visually, either in terms of shape or colour.
As with any project there are always things I’d change, and things that aren’t done yet, but overall I’m pretty chuffed with what our team has accomplished so far. We’re by no means finished though. Expect big things in the near future.
At Pixsta this week we chose our development platform for the next stage of the team’s development. We evaluated a lot of tools under a variety of criteria, and debated long and hard. The winner stood out for a few reasons:
- It runs on a familiar Java stack (Hibernate, Spring, etc.)
- Scripts are compiled to Java bytecode
- It uses convention over configuration
- We have a few projects under our belts with this framework already
Goodbye PHP, we will not meet again.