Visualising activity on a project

Friday 5th March, 2010

We’ve got a few new faces here and are saying goodbye to some old ones, so now seems as good a time as any to look back at what we’ve been doing. Below is a clip from YouTube showing activity in our code repository since we started a fresh one in late 2008.

If you’re interested it was created using Code Swarm and MEncoder.

Automated tests for GSP views in Grails

Friday 26th February, 2010

Test-driven development (TDD) is handy if used sensibly, and we’re feeling the need to make our automated tests a little broader. The Grails site has great documentation on setting up tests for Controllers and Services, but I couldn’t find a decent explanation of how to set up tidy automated tests for GSPs… so, without further ado, this is what I did.

I want tests to be:

  • Easy to write (and hence read)
  • Low maintenance, i.e. I don’t want to have to update tests whenever I make changes that aren’t important to the test
  • Good at picking up unexpected behaviour rather than changes to HTML structure

Running your GSP from TestApp

Grails ships with a handy class called GroovyPagesTestCase, which lets a test render a given GSP file against a defined model, like this:

// Read the GSP under test and render it against a known model
def file = new File("grails-app/views/myview.gsp")
def model = [someVariable: 12345]
def htmlString = applyTemplate( file.text, model )

We’re passing in the text from the GSP file as a template, along with a model comprised of whatever variables and mock objects the view should need.

Now I’ve got a string containing a bunch of HTML. Okay, that’s the right direction, but my lazy gene isn’t satisfied yet.

Note 1: If your template calls other templates, it makes life easier to use absolute template paths in your <g:render> tags rather than relative ones.

Note 2: This method assumes you’ll specify an explicit model for each sub-view via the model attribute of <g:render>.
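Putting those two notes together, a sub-view call in the GSP might look something like this (the template path and model variable here are hypothetical, just to show the shape):

```gsp
<%-- Absolute template path, so rendering works the same way under test --%>
<%-- Explicit model, so the sub-view doesn't rely on ambient page state --%>
<g:render template="/myfolder/subtemplate" model="${[myObj: myObj]}"/>
```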

From a sticky HTML mess to something useful

The easiest way to get from unparsed HTML to a useful searchable structure seems to be Groovy’s XmlSlurper. It parses XML rather than HTML by default, but you can instantiate it with a more HTML-friendly parser such as TagSoup:

// TagSoup turns real-world HTML into well-formed XML for the slurper
def slurper = new XmlSlurper( new org.ccil.cowan.tagsoup.Parser() )
def parsedHtml = slurper.parseText(text)

Easy.
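Once parsed, you query the result with GPath expressions that mirror the document structure. A minimal sketch (the markup here is invented for illustration; plain XmlSlurper is enough because this sample is well-formed, so swap in the TagSoup parser for real-world HTML):

```groovy
// Parse a small well-formed document with the stock XmlSlurper
def html = new XmlSlurper().parseText('''
<html>
  <head><meta name="robots" content="noindex"/></head>
  <body>
    <div id="products">
      <span>Widget</span>
      <span>Gadget</span>
    </div>
  </body>
</html>''')

// GPath navigation mirrors the HTML structure
assert html.head.meta.@name.text() == 'robots'

// find/findAll and size() let assertions target content, not layout
assert html.body.div.find { it.@id == 'products' }.span.size() == 2
assert html.body.div.span*.text() == ['Widget', 'Gadget']
```

Because the assertions target attributes and text rather than exact markup, they survive cosmetic changes to the HTML, which is exactly the resilience I was after.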

Pulling it all together

import grails.test.GroovyPagesTestCase
import org.ccil.cowan.tagsoup.Parser

class GspXyzTests extends GroovyPagesTestCase
{
	boolean transactional = false
	def slurper = new XmlSlurper( new Parser() )

	// This test looks for a specific thing in the resulting parsed HTML
	void testSomeImportantOutput() {

		// Open the file containing the GSP under test
		def file = new File("grails-app/views/myfolder/template.gsp")
		assertTrue(file.exists())

		// Render the GSP against a model of mock objects (plain maps)
		def text = applyTemplate(file.text, [
			pagenumber:123,
			pagesize:10,
			someMockObject:[
				foo:"bar",
				nestedMockObject:[
					[id:12345],
					[id:67890]
				]
			]
		])

		def html = slurper.parseText( text )

		// Test some aspect of the parsed structure; the trick is to make
		// the test resilient to a degree of cosmetic change
		assertEquals 1, html.head.meta.list().findAll{ it.@name?.text().toLowerCase() == "robots" }.size()

	}

}

Sad to see Modista die

Monday 11th January, 2010

I should state at this point that these are personal opinions, not those of my employer.

While the first reaction when a potential competitor folds is to celebrate, that reaction is both immature and short-sighted. Not only is it a sad state of affairs that one company can destroy another using just the costs of the patent infringement process (regardless of whether the patent is valid or the infringement is demonstrable), but since competition is the fire that drives progress and innovation, seeing a competitor fold for any reason other than poor products is always disappointing.

Daniel Tunkelang has a eulogy of Modista over on the Noisy Channel.

Respect to AJ Shankar, Arlo Faria, and any others on the team; you guys did some impressive work.


As the browser war hots up, Google has Bing in its sights

Monday 11th January, 2010

Google Chrome advertising (via flickr/iainpurdie)

As any self-respecting nerd will have noticed, and others have already noted, Google recently started advertising its Chrome web browser on billboards and in newspapers around the UK. This represents an escalation of the second phase of the browser wars, and one of the few occasions Google has resorted to billboards to advertise a product.

Why bother advertising a free product?

The answer to why Google are advertising Chrome (which is a free download) is unsurprisingly similar to the answer to the bigger question: why bother building and supporting a free product at all?

Google make money by monetising users’ searches. People are great at optimising; they find and use short-cuts, and modern browsers have built-in search bars. In short, more people using your search bar means more money, and Chrome (like Firefox) defaults to searching on Google.

Billboards – dated but still relevant

Let’s face it, it’s not Google’s style to put up great big billboards. It’s not smart, it’s not targeted, it’s not high-tech. Ironically, though, those attributes are exactly why billboards work in this situation.

Google’s main competitor in the search space is Microsoft (who have, incidentally, been advertising their search engine Bing heavily), and Microsoft’s largest user-base is the slow-moving majority who get Internet Explorer bundled with their PC. Via its default status in Internet Explorer, Bing is used by that same slow-moving majority.

Since the majority is too big to be worth the extra cost of targeting, the common-or-garden billboard is a suitable way to get through to them (while at the same time reinforcing the brand with nerds who already know about it).


Shopachu – Incogna’s new visual product browser

Tuesday 5th January, 2010

In the back half of last year visual search outfit Incogna released their visual shopping browser Shopachu. I’ve followed some of Incogna’s previous releases so I thought I’d share some thoughts on this one too.

What does it do?

This site has a very similar model to our own consumer-facing MAST app, Empora. It makes money by sending consumers to retailer sites, who for obvious reasons are willing to pay for suitable traffic. The main forces that influence the design of a site like this are retention, and the clickthrough and conversion rates of your traffic:

Retention – you need to impress people, then ideally remind them to come back to you

Clickthrough – you need to send a good proportion of visitors to retailers in order to make money

Conversion – if the visitors you send aren’t interested in buying the clicked product then the retailers won’t want to pay for that traffic on a per-click basis (although they might be interested in the CPA model, which doesn’t pay until someone buys)

First Impressions

People’s first impressions are usually determined by a combination of design and how well a site conforms to their expectations. I’ve probably got distorted expectations considering my experience working with this type of application, but in that respect I was pleasantly surprised; Shopachu has some good features and makes them known. In terms of design I was less impressed: the icons and gel effects don’t seem to fit, and I think there are whitespace and emphasis issues (sorry guys, trying to be constructive).

Finding stuff

It’s fairly easy to find things on Shopachu. The filters are easy to use (although I couldn’t get the brand filter to work, which could be a glitch). The navigation is pretty easy, although it doesn’t currently provide second-generation retail search features like facet counts (i.e. showing the number of products in a category before you click on it).

The most interesting technological problem I’ve noticed with their navigation is the colour definitions. There’s a big difference between a colour being present in an image and the eye interpreting that colour as being present in an image. I think there are some improvements to be made in the way colours are attributed to images (e.g. here I’ve applied a pink filter but am seeing products with no pink returned). Similarly, there’ll be another marked improvement with better background removal (e.g. here I’d applied a light blue filter and am seeing products with blue backgrounds).

Similarity search

Shopachu’s similarity search is quite different to Empora’s. They’ve chosen to opt for maximum simplicity in the interface rather than user control, resulting in a single set of similarity search results. In contrast, Empora allows users to determine whether they’re interested in colour similarity, or shape similarity, or both. Simplicity often wins over functionality (iPod example #yawn), so it’ll be interesting to see how they do.

Another issue is the quality of the input data. This challenge is the same for Empora, or anyone else aggregating data from third parties: category information is inconsistent. One effect of this is that the similarity results for an often poorly-classified item like a belt may also include jewellery or other items that have been classified as “accessories” or “miscellaneous” in the retailer’s data; another is that you often see duplicate items.

Keeping the traffic quality high

An interesting design decision for me is that the default image action on Shopachu is a similarity search, i.e. when you click on the image it takes you to an internal page featuring more information and similar products. This is in contrast to the default action on Empora or Like.com, which is to send the visitor to the retailer’s product page.

The design trade-off here is between clickthrough and conversion rates. If you make it easy to get to the retailer, your clickthrough rate goes up, but you run the risk of a smaller proportion converting from a visit to a purchase. Here Shopachu are reducing that risk (and also the potential up-side) by keeping visitors on their site until they explicitly signal the intent to buy (the user has to click “buy” before they’re allowed through to the retailer).

Getting people hooked

There are a few features on Shopachu aimed at retention, namely Price Alerts and the ability to save outfits (Polyvore style). These features seem pretty usable, although I think they’re still lacking that level of polish that inspires passionate users. I’d be interested to know what the uptake statistics look like.

In summary

I think this implementation shows that Incogna have thought about all the right problems, and they clearly have the capability to solve the technological issues. On the down-side, cleaning up retailers’ data is a tough, time-consuming business, and I think they need to find a little inspiration on the visual design side.


New Home for the London Search Social

Wednesday 16th December, 2009

To avoid the somewhat annoying (and hopefully temporary) problem that not everyone in the world reads my blog, I’ve created a new home for our search social meet-ups over on Meetup.com.

Sign up on the London Search Social page to get notifications of events.


Grails Exchange in December

Monday 23rd November, 2009

It’s great to see fellow Pixstanaut Tomás Lin talking at the forthcoming Grails Exchange conference in December. He’ll be talking about building rich GUI apps with Flex and Grails. There are still a few tickets left if you can make it.