Gertrude Stein was right. There is no there there. There is no Facebook. There is no Google. There is no Amazon. There is no such thing as a Zynga game. There isn’t even a Bing.
I’m not talking about what it’s like living off-grid, by choice or necessity.
I’m talking about the fact that when we interact online with any of these major services, we interact with one of the local reality zones of their multiverses. The dominant large-scale consumer internet apps and platforms do not exist in a single version. They all deploy multiple variant versions of themselves simultaneously, to different people, and pit the variants against each other to see which ones work better. The results of these tests are used to improve the design. Or so goes the theory. (It should be borne in mind that such a process was probably responsible for the “design” of the human backbone.)
This test-driven approach to design development and refinement has been promoted and democratised as a “must-have” for all software-based startups. Eric Ries, of Lean Startup fame, is probably its best-known evangelist. (Andrew Chen is worth checking out, too, for a pragmatic view from the trenches.)
How do the big platform providers do it? Lashings of secret sauce are probably involved. But there is a lot of published commentary lying around from which the main ingredients of the sauce can be discerned – even if the exact formulation isn’t printed on the label. Here are some resources I’ve found useful:
- Greg Linden, the inventor of Amazon’s first recommendation engine, has a nice collection of fireside tales about his early work at Amazon on his blog, including how he got shopping cart recommendations deployed (spoiler: by disobeying an order – and testing it in the wild)
- Josh Wills, ex-Google, now Director of Data Science at Cloudera, talks about Experimenting at Scale at the 2012 Workshop on Algorithms for Modern Massive Data Sets, and provides some analytical and experimental techniques for meeting the challenges involved
- Ron Kohavi, ex-Amazon, now Microsoft, has a recent talk and a recent paper on puzzling results from experimentation and how his team resolved them: his 2012 ACM RecSys keynote speech and his 2012 KDD paper (the sketch just after this list shows the kind of basic significance test these analyses start from).
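Before digging into those, it helps to keep the workhorse in view: split users between variants, count conversions, and test whether the difference is real. Here is a minimal sketch of the classical two-proportion z-test in Python – the traffic and conversion numbers are invented, and the platforms above layer far more machinery (and many pitfalls) on top of this:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    Returns the z statistic and p-value for the null hypothesis
    that both variants convert at the same underlying rate.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# 2.0% vs 2.3% conversion, 100,000 users per arm (numbers invented).
z, p = two_proportion_z_test(2000, 100_000, 2300, 100_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```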
There are some commonalities of approach across these big platforms. Assignment of people to experiments, and to experimental treatments, is done via a system of independent layers, so that an individual user can be in multiple experimental treatments at once. Kohavi talks about how this can go wrong, and about some ways of designing around it using a modified, localised layer structure.
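To make the layering idea concrete, here is a toy sketch of how salted hashing can give independent assignment per layer. The layer names, treatments, and function are illustrative assumptions on my part, not any platform’s actual infrastructure:

```python
import hashlib

# Hypothetical layer definitions: each layer partitions the whole user
# base independently, so one user can be in a UI experiment *and* a
# ranking experiment at the same time. Names are made up.
LAYERS = {
    "ui-layer":      ["control", "blue-button", "green-button"],
    "ranking-layer": ["control", "new-ranker"],
}

def assign_all_layers(user_id):
    """Assign a user to one treatment in every layer.

    Salting the hash with the layer name makes the bucketings
    statistically independent across layers: knowing a user's UI
    bucket tells you nothing about their ranking bucket.
    """
    assignments = {}
    for layer, treatments in LAYERS.items():
        digest = hashlib.sha256(f"{layer}:{user_id}".encode()).hexdigest()
        assignments[layer] = treatments[int(digest, 16) % len(treatments)]
    return assignments

print(assign_all_layers("user-42"))
```

Because each layer salts the hash differently, a user’s bucket in one layer is independent of their bucket in every other – which is what lets many experiments share the same traffic, and is also the source of the interaction effects Kohavi warns about.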
Another efficiency-boosting practice is the use of Bayesian bandit algorithms to decide the size of experimental groups and the length of the experiment. The approach is most familiar from clinical trials, where adaptive experimentation ensures that a trial is halted as soon as a robust effect has been found – the ethically desirable outcome being that beneficial treatments are not withheld from those who would benefit, and injurious treatments are stopped as soon as they are identified as such. It’s so much the flavour of the month that there is now a SaaS provider, Conductrics, which lets you use it as a plugin. They also have a great blog, so check it out if you’re interested in this topic. Google Analytics Content Experiments also provides support for this, in a more constrained way.
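For a flavour of how the adaptive allocation works, here is a toy Thompson-sampling simulation – one common Bayesian bandit scheme. The conversion rates are invented, and a real deployment would add an explicit stopping rule (e.g. halt once one arm’s posterior probability of being best crosses a threshold); the sketch only shows the adaptive traffic-shifting:

```python
import random

# True rates are made up for the simulation; in production they are,
# of course, unknown. Beta(1, 1) priors on each arm's conversion rate.
TRUE_RATES = {"control": 0.020, "treatment": 0.023}

successes = {arm: 1 for arm in TRUE_RATES}   # Beta prior alpha
failures  = {arm: 1 for arm in TRUE_RATES}   # Beta prior beta

for _ in range(100_000):
    # Sample a plausible conversion rate for each arm from its posterior
    # and show the arm whose sample is highest: arms that look better get
    # more traffic, and arms that look worse are starved automatically.
    draws = {arm: random.betavariate(successes[arm], failures[arm])
             for arm in TRUE_RATES}
    arm = max(draws, key=draws.get)
    if random.random() < TRUE_RATES[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

for arm in TRUE_RATES:
    shown = successes[arm] + failures[arm] - 2   # subtract prior pseudo-counts
    mean = successes[arm] / (successes[arm] + failures[arm])
    print(f"{arm}: shown {shown} times, posterior mean {mean:.4f}")
```

Run it a few times and you’ll see the better arm soak up the bulk of the traffic – the web-scale analogue of the clinical trial that stops withholding a beneficial treatment.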
So there are lots of hints and tips about optimising the mechanics of running a test. But much less is said about what to test, and how to organise a series of tests – which is, for most people, the $64 million question. This is something I’ve been thinking about, talking about, and advising on. I’m still working it through, though – and if you are too, and you know of any interesting resources I’ve missed, do share them with us.