Semantic Vectors – semantic indexing on top of Lucene

Thursday 23rd April, 2009

For anyone interested in adding semantic structure on top of their unstructured or semi-structured data, I recently came across Dominic Widdows’ Semantic Vectors project.

It’s not a big enough project to survive the ‘contributor departure’ test, but it’s in active development and reading the code didn’t make my eyes bleed, so it may be worth a look if that’s your bag.
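
The core trick in Semantic Vectors, as I understand it, is random indexing: terms pick up meaning by summing near-orthogonal random vectors for the contexts they appear in, projected straight out of a Lucene index. Here’s a toy Python sketch of that general idea… a rough illustration, not the project’s actual Java implementation:

```python
import numpy as np

def random_index_vectors(n_docs, dim=512, seed=0):
    """Give each document a sparse random 'index vector': mostly zeros,
    with a few +/-1 entries. In high dimensions these are nearly
    orthogonal by construction."""
    rng = np.random.default_rng(seed)
    vecs = np.zeros((n_docs, dim))
    for v in vecs:
        slots = rng.choice(dim, size=10, replace=False)
        v[slots] = rng.choice([-1.0, 1.0], size=10)
    return vecs

def term_vectors(docs, dim=512):
    """A term's semantic vector is the sum of the index vectors of the
    documents it occurs in; terms sharing contexts end up nearby."""
    doc_vecs = random_index_vectors(len(docs), dim)
    terms = {}
    for doc_vec, text in zip(doc_vecs, docs):
        for term in set(text.lower().split()):
            terms.setdefault(term, np.zeros(dim))
            terms[term] += doc_vec
    return terms

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

docs = ["the quick brown fox", "the lazy brown dog", "dogs chase foxes"]
tv = term_vectors(docs)
print(cosine(tv["brown"], tv["lazy"]))   # co-occurring terms score higher
print(cosine(tv["quick"], tv["chase"]))  # disjoint contexts score near zero
```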


Welcome to the image link revolution

Monday 19th January, 2009

The hyperlink revolution allowed text documents to be joined together. This created usable relationships between data that enabled one of the biggest technological shifts of our age… large-scale adoption of the internet. Try to imagine Wikipedia or Google without hyperlinks and you’ll see how critical this technique is to the web.

We’re on the verge of another revolution, this time in computer vision.

Imagine a world where the phone in your pocket could be used to find or create links in the physical world. You could get reviews for a restaurant you were standing outside without even knowing its name, or where you were. You could listen to snippets of an album before you bought it, or find out which shop nearby has the same item for less. You could read about the history of an otherwise unmarked and anonymous building, get visual directions, or use your camera phone as a window into a virtual game in the real world.

A team at the University of Ljubljana (the J is pronounced like a Y, for anyone unfamiliar) have released a compelling video demonstrating their implementation of visual linking. They use techniques that I assume are derived from SIFT to match known buildings in an unconstrained walk through a neighbourhood. These image segments are then turned into links that surface contextually relevant information.
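
To give a feel for how that kind of matching works, here’s a minimal sketch using the SIFT bindings in today’s OpenCV… my guess at the general approach rather than the team’s actual pipeline, with the image files and the match threshold as placeholders:

```python
import cv2

# Placeholder images: a reference photo of a known building, and a
# frame grabbed from a walk through the neighbourhood.
reference = cv2.imread("building_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("street_frame.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(reference, None)
kp_frm, des_frm = sift.detectAndCompute(frame, None)

# Match descriptors, keeping only pairs that pass Lowe's ratio test,
# which discards ambiguous correspondences.
matcher = cv2.BFMatcher()
pairs = matcher.knnMatch(des_ref, des_frm, k=2)
good = [m for m, n in (p for p in pairs if len(p) == 2)
        if m.distance < 0.75 * n.distance]

# Enough surviving matches suggests the known building is in the frame,
# and that image region can become an anchor for a link.
if len(good) > 20:  # threshold is a placeholder, not a tuned value
    print(f"building recognised with {len(good)} matches")
```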

Combine this with other techniques, such as the contour-based work being done by Jamie Shotton of MSR, and you start to see how that future will appear. Add in the mass adoption of GPS handsets, driven by the iPhone amongst others, and it’s pretty clear there’s going to be a change in the way people create and access information.

The only questions are who, and when.


Incogna monetise pure image search

Monday 12th January, 2009

I must have missed the launch of this feature, but Incogna’s most recent blog post talks about how they’ve implemented visual advertising. The results vary, but overall it’s well done.

I’ve written about Incogna’s image search before, but there’s more to add: as a user of the tool you have no visibility into the depth or type of data available to you, and the app currently gives you no control over how you move through that data, other than text searches and query images.

Establishing context (or, lost in the supermarket)

Any fans of Steve Krug’s usability classic Don’t Make Me Think will recognise the metaphor here. If you’re in an aisle in a supermarket you can see both the length of the aisle and the contents of the shelves (at least the ones near you). You also know your rough position in the store, and can see the overhead signs.

Using that input data you can navigate (with a few hiccups) anywhere in the store.

Incogna’s app currently allows you to compare visually, and to search using text, but the depth and type of results remains hidden. As such, there’s no real way to navigate effectively within the data set.

I should be clear at this point that this isn’t a criticism of Incogna’s app. This is not a problem with an easy or obvious solution. What I’m suggesting is that there’s still scope for some killer navigation features in this area.

Making money

The monetisation feature on Incogna appears only when their system thinks it can produce a good match between your search and the sponsored products. This is a wise move, since irrelevant ads would ruin the user experience.

It seems like the results rely mainly on visual comparison, possibly with some categorisation thrown in. It worked brilliantly with pictures of trucks, but curiously, while I was browsing Canon cameras, it presented sponsored ads for televisions (both are rectangular, I suppose).
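
Incogna haven’t said how they decide when to show an ad, but a plausible sketch (entirely my own guess, with invented names) is a similarity threshold plus a category check… and the camera/television mix-up reads like the category signal was weak or absent:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def pick_sponsored(query_vec, query_category, ads, threshold=0.8):
    """Return the best sponsored product, or None if nothing matches well.

    ads is a list of (feature_vector, category, product) tuples. Showing
    no ad beats showing a bad one, so we gate on visual similarity AND
    category agreement; skip the category check and visually similar but
    unrelated items (rectangular cameras, rectangular televisions) slip
    through.
    """
    best = None
    for vec, category, product in ads:
        score = cosine(query_vec, vec)
        if score >= threshold and category == query_category:
            if best is None or score > best[0]:
                best = (score, product)
    return best
```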

Having fun

The main issue standing in the way of Incogna’s revenue stream is that their app is not yet fun to use. As mentioned above, there’s no sense of position or direction. You can’t learn anything about the images you find without clicking through to the source site, and you can’t properly refine your search… you have to start again, which means there’s no big advantage over Google, or any other text-based image search.

More another time.


Semantic search is not a “Google killer”

Sunday 11th January, 2009

Back in May, Alex Iskold over on ReadWriteWeb kicked off a discussion of how “semantic search” technologies are doing, and where they’re headed. I came across the article again recently and it prompted me to write this.

Semantic search has often been named as the successor to Google, a prediction which I think misses two key points.

You don’t have to be a semantic search company to do it

Extracting and presenting structured data from unstructured or partially structured sources is part of the top-down approach to the Semantic Web (a.k.a. Web 3.0, apparently). The basic idea is that by using language analysis, machine learning and databases of entities you can understand content, rather than just processing it statistically like 20th-century search engines. This gives you the possibility of a richer and tighter search experience, e.g. an initial search for “bush” could then be easily narrowed to only include articles about the Australian bush rather than George W.
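
To make the “bush” example concrete, here’s a toy sketch of entity-driven narrowing; the documents and entity tags below are invented:

```python
# Toy search hits, each carrying entity tags that an upstream
# language-analysis step would have produced (invented here).
hits = [
    {"title": "Walking the Australian bush", "entities": {"Bush (landscape)"}},
    {"title": "Bush addresses Congress",     "entities": {"George W. Bush"}},
    {"title": "Bushfire season outlook",     "entities": {"Bush (landscape)"}},
]

def narrow(results, facet):
    """Filter hits to the chosen entity facet, so refining a search is
    one click rather than a round of keyword guessing."""
    return [h for h in results if facet in h["entities"]]

for hit in narrow(hits, "Bush (landscape)"):
    print(hit["title"])  # only the landscape articles remain
```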

While semantically driven faceted search is still the domain of Grapeshot, Clusty, etc., the underlying technologies are already in use by mainstream search engines. Even image search engines such as Pixsta use semantic technology to extract structured data from unstructured documents (in our case, the documents happen to be images).

Google will not be killed with minor features

When was the last time you had to click through to the second page of search results? In fact, when was the last time you had to scroll past the fold to the 10th result? Maybe some of you have recently, but I’d bet it doesn’t happen often.

What this says is that, for the main search engine use case, text-driven statistical search is good enough. Without a killer feature for mainstream users, semantic search engines will not be able to tempt them away from the very simple tool they’ve already learned how to use. I agree with Iskold’s point that these companies need to create a very good user interface… although I disagree that this will be enough to win search market share.

It’s not all doom and gloom though. Semantic technology is impressive. If you get a chance to try out a tool like Silobreaker you’ll find some very interesting user interface work and some impressive data analysis happening behind the scenes. In my opinion it’s in niches like these (Silobreaker is a semantic tool for news search and political research) that users have enough motivation and specialisation to move away from the top five search results on Google/Yahoo/Live.