Progress, Trust, and Going to the Pub: London Search Social write-up

Thursday 14th January, 2010
The Elgin on Ladbroke Grove

The Elgin, via flickr/Ewan-M

There’s a theory that claims expertise and associated salaries increase more rapidly in cities than in the country, because physical proximity decreases the cost of sharing ideas (I’m desperately trying to dig up the source amongst the noise, which is either ironic or poetic depending on how you look at it).

The interwebs are a different beast. Proximity doesn’t exist in the same way, perhaps instead becoming a cultural rather than a geographic measure of separation. The cost of spreading an idea on the internet is more related to trust than physical distance.

On the internet, that trust must be earned through expertise and clear communication. In the physical world people behave differently: they tend to trust each other more quickly and be more open when they can look each other in the eye, or buy each other a beer.

[Only on a technology blog would you find a justification this obscure for going to the pub.]

On Tuesday we held this month’s London Open Source Search Social at The Elgin in Notting Hill. This was the first time we’d used our shiny new Meetup account to organise the event, so it was nice not to have to send out reminders manually (laziness #ftw).

A few notes from the evening, for those whose memories are as bad as mine.

There’s plenty missing, and some of this may be fictitious.

Bruno from Jobomix talked about his use of Hadoop to detect duplicate job data, leading to a conversation about Pig and Cascading, then on to other JVM technologies like Scala. Ben from OneIS brought up the subject of Duby, a Ruby-like-but-tidier language targeting the JVM, and when prompted gave us an outline of his company’s free-text graph store.

We talked about duplicate detection in various fields, thresholds, and the cost of false positives. We touched on human relevance testing; Richard told us he’d found people generally need to be paid to do it and not for more than 30 minutes at a time.

Joao from the Royal Library of the Netherlands told us how they digitise and index millions of pre-digital documents per month. Ben told us about a method of querying Xapian from Postgres using an SQL JOIN.


As the browser war hots up, Google has Bing in its sights

Monday 11th January, 2010

Google Chrome advertising (via flickr/iainpurdie)

As any self-respecting nerd will have noticed, and others have already noted, Google recently started advertising its Chrome web browser on billboards and in newspapers around the UK. This represents an escalation of the second phase of the browser wars, and one of the few occasions Google has resorted to billboards to advertise a product.

Why bother advertising a free product?

The answer to why Google are advertising Chrome (which is a free download) is unsurprisingly similar to the answer to the bigger question: why bother building and supporting a free product?

Google make money by monetising users’ searches. People are great at optimising and finding short-cuts, and modern browsers have built-in search bars. In short, more people using your search bar means more money, and Chrome (like Firefox) defaults to searching on Google.

Billboards – dated but still relevant

Let’s face it, it’s not Google’s style to put up great big billboards. It’s not smart, it’s not targeted, it’s not high-tech. However, ironically those attributes are exactly why they work in this situation.

Google’s main competitor in the search space is Microsoft (who, incidentally, have been advertising their search engine Bing heavily), and Microsoft’s largest user-base is the slow-moving majority who get Internet Explorer bundled with their PC. Via its default status in Internet Explorer, Bing reaches that same slow-moving majority.

Since that majority is too big to be worth the extra cost of targeting, the common-or-garden billboard is a suitable way to get through to them (while also reinforcing the brand with nerds who already know about it).


Shopachu – Incogna’s new visual product browser

Tuesday 5th January, 2010

In the back half of last year visual search outfit Incogna released their visual shopping browser Shopachu. I’ve followed some of Incogna’s previous releases so I thought I’d share some thoughts on this one too.

What does it do?

This site has a very similar model to our own consumer-facing MAST app, Empora. It makes money by sending consumers to retailer sites, who for obvious reasons are willing to pay for suitable traffic. The main forces that influence the design of a site like this are retention, and the clickthrough and conversion rates of your traffic:

Retention – you need to impress people, then ideally remind them to come back to you

Clickthrough – you need to send a good proportion of visitors to retailers in order to make money

Conversion – if the visitors you send aren’t interested in buying the clicked product then the retailers won’t want to pay for that traffic on a per-click basis (although they might be interested in the CPA model, which doesn’t pay until someone buys); a toy sketch of these economics follows this list
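
To make the trade-off concrete, here’s a toy sketch of the two payment models. All the rates and values are invented for illustration, not real Empora or Shopachu figures:

```python
# Illustrative sketch of the CPC vs CPA economics described above.
# All numbers are invented assumptions, not real Empora/Shopachu figures.

def revenue_per_visitor_cpc(clickthrough_rate: float, cost_per_click: float) -> float:
    """Under CPC the retailer pays for every click sent through."""
    return clickthrough_rate * cost_per_click

def revenue_per_visitor_cpa(clickthrough_rate: float, conversion_rate: float,
                            order_value: float, commission: float) -> float:
    """Under CPA nothing is earned until a visitor actually buys."""
    return clickthrough_rate * conversion_rate * order_value * commission

# A site that pushes visitors out quickly: high clickthrough, low conversion.
print(revenue_per_visitor_cpc(0.30, 0.10))              # 0.03 per visitor
print(revenue_per_visitor_cpa(0.30, 0.01, 50.0, 0.05))  # 0.0075 per visitor

# A site that qualifies intent first: lower clickthrough, higher conversion.
print(revenue_per_visitor_cpc(0.10, 0.10))              # 0.01 per visitor
print(revenue_per_visitor_cpa(0.10, 0.05, 50.0, 0.05))  # 0.0125 per visitor
```

Under CPC the leaky site earns more; under CPA the site that qualifies intent first earns more, which is exactly the tension the rest of this post circles around.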

First Impressions

People’s first impressions are usually determined by a combination of design and how well a site conforms to their expectations. I’ve probably got distorted expectations given my experience working with this type of application, but in that respect I was pleasantly surprised; Shopachu has some good features and makes them known. In terms of design I was less impressed: the icons and gel effects don’t seem to fit, and I think there are whitespace and emphasis issues (sorry guys, trying to be constructive).

Finding stuff

It’s fairly easy to find things on Shopachu. The filters are easy to use (although I couldn’t get the brand filter to work; it could be a glitch). The navigation is pretty easy, although it doesn’t currently provide second-generation retail search features like facet counts (i.e. showing the number of products in a category before you click on it).
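
For what it’s worth, the idea behind facet counts is simple enough to sketch. In practice you’d get these numbers from the search engine rather than counting in application code, and the product list here is obviously invented:

```python
from collections import Counter

# Minimal sketch of facet counts: show how many products sit behind each
# filter value *before* the user clicks it. Product data is invented.
products = [
    {"category": "shoes",   "brand": "Acme",  "colour": "pink"},
    {"category": "shoes",   "brand": "Bravo", "colour": "blue"},
    {"category": "belts",   "brand": "Acme",  "colour": "brown"},
    {"category": "dresses", "brand": "Bravo", "colour": "pink"},
]

def facet_counts(items, field):
    """Count how many items carry each value of the given field."""
    return Counter(item[field] for item in items)

print(facet_counts(products, "category"))  # Counter({'shoes': 2, 'belts': 1, 'dresses': 1})
print(facet_counts(products, "colour"))    # Counter({'pink': 2, 'blue': 1, 'brown': 1})
```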

The most interesting technological problem I’ve noticed with their navigation is colour definition. There’s a big difference between a colour being present in an image and the eye interpreting that colour as being present in an image. I think there are some improvements to be made in the way colours are attributed to images (e.g. here I’ve applied a pink filter but am seeing products with no pink returned). Similarly, there’ll be another marked improvement with better background removal (e.g. here I’ve applied a light blue filter and am seeing products with blue backgrounds).
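
To illustrate the gap, here’s a rough sketch showing how a naive pixel count can tag a grey product on a blue background as “blue”. The tolerance values and the crude border-based background heuristic are my assumptions, not Incogna’s method:

```python
import numpy as np

# Rough sketch of the colour-attribution gap described above: a naive
# pixel count tags an image as "blue" purely because of a blue
# background. All thresholds and the border heuristic are assumptions.

def colour_fraction(image: np.ndarray, target_rgb, tolerance: float = 60.0,
                    ignore_background: bool = False) -> float:
    """Fraction of pixels within `tolerance` of target_rgb (Euclidean in RGB)."""
    pixels = image.reshape(-1, 3).astype(float)
    if ignore_background:
        # Crude background model: treat the mean border colour as the
        # background and drop pixels close to it.
        border = np.concatenate([image[0], image[-1], image[:, 0], image[:, -1]])
        bg = border.astype(float).mean(axis=0)
        pixels = pixels[np.linalg.norm(pixels - bg, axis=1) > tolerance]
        if len(pixels) == 0:
            return 0.0
    close = np.linalg.norm(pixels - np.array(target_rgb, float), axis=1) < tolerance
    return close.mean()

# A grey "product" on a light blue background: blue dominates naively,
# but disappears once the background is discounted.
img = np.full((100, 100, 3), (173, 216, 230), dtype=np.uint8)  # light blue
img[30:70, 20:80] = (120, 120, 120)                            # grey product
print(colour_fraction(img, (173, 216, 230)))                         # ~0.76
print(colour_fraction(img, (173, 216, 230), ignore_background=True)) # 0.0
```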

Similarity search

Shopachu’s similarity search is quite different to Empora’s. They’ve opted for maximum simplicity in the interface rather than user control, resulting in a single set of similarity search results. In contrast, Empora allows users to determine whether they’re interested in colour similarity, shape similarity, or both. Simplicity often wins over functionality (iPod example #yawn) so it’ll be interesting to see how they do.
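
The difference between the two interfaces can be sketched as a single weight over two feature distances. The feature vectors and the linear blend here are my assumptions, not either company’s actual method:

```python
import numpy as np

# Toy sketch of the interface difference described above. Assumes each
# product has precomputed colour and shape feature vectors; neither the
# features nor the linear blend are Incogna's or Empora's actual method.

def blended_distance(query, item, colour_weight):
    """Blend colour and shape distances; colour_weight in [0, 1]."""
    colour_d = np.linalg.norm(query["colour"] - item["colour"])
    shape_d = np.linalg.norm(query["shape"] - item["shape"])
    return colour_weight * colour_d + (1.0 - colour_weight) * shape_d

def similar_items(query, catalogue, colour_weight=0.5, k=3):
    """Shopachu-style fixes the weight; Empora-style lets users pick
    colour only (1.0), shape only (0.0), or both (anything between)."""
    return sorted(catalogue,
                  key=lambda item: blended_distance(query, item, colour_weight))[:k]

# Invented two-dimensional features, purely for demonstration.
query = {"name": "pink court shoe",
         "colour": np.array([0.9, 0.2]), "shape": np.array([0.3, 0.7])}
catalogue = [
    {"name": "pink wellington",
     "colour": np.array([0.9, 0.2]), "shape": np.array([0.9, 0.1])},
    {"name": "black court shoe",
     "colour": np.array([0.1, 0.1]), "shape": np.array([0.3, 0.7])},
]

print([i["name"] for i in similar_items(query, catalogue, colour_weight=1.0)])  # colour wins
print([i["name"] for i in similar_items(query, catalogue, colour_weight=0.0)])  # shape wins
```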

Another issue is the quality of the input data. This challenge is the same for Empora, or anyone else aggregating data from third parties: category information is inconsistent. One effect is that when looking at the similarity results for an often poorly-classified item like a belt, you may also see jewellery or other items that have been classified as “accessories” or “miscellaneous” in the retailer’s data; another is that you often see duplicate items.
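
A minimal sketch of one common approach to the duplicate side of this is token-set Jaccard similarity over titles with a tunable threshold. The threshold and choice of fields are assumptions, and this is nobody’s production de-duplication:

```python
# Minimal sketch of near-duplicate detection over product titles:
# token-set Jaccard similarity with a tunable threshold. The threshold
# and the choice of fields are assumptions.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the two titles' lower-cased token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def looks_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    # A high threshold trades missed duplicates for fewer false
    # positives, the cost balance the threshold is there to tune.
    return jaccard(a, b) >= threshold

print(looks_duplicate("Acme leather belt brown",
                      "Acme Leather Belt Brown M"))   # True (4/5 = 0.8)
print(looks_duplicate("Acme leather belt brown",
                      "Bravo silver charm bracelet")) # False
```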

Keeping the traffic quality high

An interesting design decision for me is that the default image action on Shopachu is a similarity search, i.e. when you click on the image it takes you to an internal page featuring more information and similar products. This is in contrast to the default action on Empora or Like.com, which is to send the visitor to the retailer’s product page.

The design trade-off here is between clickthrough and conversion rates. If you make it easy to get to the retailer your clickthrough rate goes up, but you run the risk of a smaller proportion of visits converting into purchases. Here Shopachu are reducing that risk (and also the potential up-side) by keeping visitors on their site until they explicitly signal the intent to buy (the user has to click “buy” before they’re allowed through to the retailer).

Getting people hooked

There are a few features on Shopachu aimed at retention, namely Price Alerts and the ability to save outfits (Polyvore style). These features seem pretty usable, although I think they’re still lacking that level of polish that inspires passionate users. I’d be interested to know what the uptake statistics look like.

In summary

I think this implementation shows that Incogna have thought about all the right problems, and clearly have the capability to solve the technological issues. On the down-side, cleaning up retailers’ data is a tough, time-consuming business, and I think they need to find a little inspiration on the visual design side.


New Home for the London Search Social

Wednesday 16th December, 2009

To avoid the somewhat annoying (and hopefully temporary) problem that not everyone in the world reads my blog, I’ve created a new home for our search social meet-ups over on Meetup.com.

Sign up on the London Search Social page to get notifications of events.


Open Source Search Social

Thursday 5th November, 2009

It’s been a little while since the last Open Source Search Social, so we’re getting really imaginative and holding another one, this time on Wednesday the 18th of November. As usual the event is in the Pelican pub just off London’s face-bleedingly trendy Portobello Road.

The format is staying roughly the same. No agenda, no attitude, just some geeks talking about search and related topics in the presence of intoxicating substances.

Please come along if you can, just get in touch or sign up on the Upcoming page.


Guest post – Similarity search: The Two Shoe Problem

Thursday 30th July, 2009

Today I’m introducing my first ever guest post, written by Pixsta’s own Rohit Patange about some great work he’s been doing with the guidance of Tuncer Aysal. You’ll be able to see the results of their work shortly on our consumer-facing site Empora. – RM

We at Pixsta are interested in understanding what is in an image (recognising and extracting it) in an automated way that involves a minimum amount of human input.

Our raw data (images and associated textual information) come from a variety of retailers with considerable variation in terms of data formats and quality. Some retailer images are squeaky clean with white backgrounds and a clear product depiction while others have multiple views of the product, very noisy backgrounds, models, mannequins and other such distracting objects. Since we only care about the product, an essential processing step involves identification of all image parts and the isolation of individual products, if several are present in the retailer image.

The n-shoe case:

Let’s take the case of retailer images with multiple product views. This is most commonly encountered in shoe images.  Let us call each of the product views a ‘sub-image’.

When we talk about similar shoes we mean one shoe being similar to another (note the singular). We have to disregard how the shoe is presented in the image: the position of the sub-images, the orientation and other noise. If we do not, image matching technology tends to pick out images with similar presentation rather than similar shoes. Typically a retailer image (a shoe they are trying to sell) will have a pair of sub-images showing the shoe from different viewing angles. Pictorially, with standard image matching we get the following results for the query image on the left:

Visual similarity query showing product presentation affecting results

Even though the image database contains images like:

Two shoes pointing to the right

These are not in the result set, despite being much closer matches, because of the presentation and the varying number of sub-images. To overcome this drawback, we have to extract the sub-image which best represents the product for each image and then compare these sub-images. For the sub-image to be extracted, the image needs to go through the following processing steps:

  • Determine which of the sub-images best represents the shoe.
  • Extract that sub-image.
  • Determine the shoe orientation in that sub-image.
  • Standardise the image by rotation, flipping and scaling.

All the product images (shoes in this case) go through this process of standardisation, resulting in a uniform set of images. Pictorially, the input and output images of the standardisation process are:

Shoes segmented and standardised to point right

Let’s look at the procedure in more detail, assuming that the image has already been segmented into background and foreground; a rough code sketch follows the list.

  • The first step is to identify all the sub-images in the foreground. The foreground pixels are labelled in such a way that different sub-images get different labels, marking them as distinct.
  • After the first iteration of labelling there is a high possibility that a sub-image is marked with two or more labels, so all connected labels have to be merged.
    Segmented shoe images
  • The third step is to determine which of the sub-images is of interest; that is picking the right label.
    Choosing an image segment
  • Once the right sub-image has been extracted, its orientation is corrected to match a predefined standard, removing differences in the size of the product image, the orientation (the direction the shoe is pointing) and the position of the shoe (sub-image) within the image.
    Single shoe pointing to the right
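
For the curious, here is a rough sketch of those steps. It assumes segmentation has already produced a boolean foreground mask; scipy’s labelling performs the label-merging step internally, “best sub-image” is approximated as the largest connected region, and the toe-direction fix is left as a placeholder since the exact heuristic isn’t spelled out above:

```python
import numpy as np
from scipy import ndimage

# Rough sketch of the sub-image standardisation procedure described
# above, starting from a boolean foreground mask. Assumptions: scipy's
# ndimage.label does the labelling and merging in one call; "best
# sub-image" is taken to be the largest region; the rotation/flip that
# standardises which way the shoe points is left as a placeholder.

def extract_main_subimage(mask: np.ndarray, out_size=(128, 128)) -> np.ndarray:
    # Steps 1 & 2: label connected foreground regions (merging included).
    labels, count = ndimage.label(mask)
    if count == 0:
        raise ValueError("no foreground found")

    # Step 3: pick the label of interest; here, simply the largest region.
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0  # ignore the background label
    best = sizes.argmax()

    # Step 4 (partial): crop to the chosen sub-image and rescale, so size
    # and position are standardised; rotation/flipping would go here.
    ys, xs = np.nonzero(labels == best)
    crop = (labels == best)[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    zoom = (out_size[0] / crop.shape[0], out_size[1] / crop.shape[1])
    return ndimage.zoom(crop.astype(float), zoom, order=1)

# Two blobs; the larger one is extracted and standardised to 128x128.
mask = np.zeros((100, 200), dtype=bool)
mask[20:60, 10:90] = True    # bigger "shoe"
mask[70:90, 120:150] = True  # smaller "shoe"
print(extract_main_subimage(mask).shape)  # (128, 128)
```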

All product images (shoes in this case) go through this process before the representative information from the image is extracted for comparison. Now the results for the query image will look like:

Resulting query showing standardised similarity

Generally there are two shoes in an image, but the method can be extended to ‘n’ shoes.


Bing: abbreviation of b(or)ing?

Wednesday 3rd June, 2009

I can’t comment on the technical architecture of Bing, but ultimately it doesn’t matter that much. Bing is trying to make solving a solved problem look new by adding a big photograph of a man standing on a mountain. Bing is b(or)ing. As Hugh MacLeod says, Microsoft: change the world or go home.

Maybe they can get some traffic by doing deals, but ultimately a mousetrap is just a mousetrap unless it can do something other mousetraps can’t. Until then I don’t see it making any waves.