When there is no there there: going large with A/B testing and MVT

Gertrude Stein was right.   There is no there there.   There is no Facebook.  There is no Google.  There is no Amazon.  There is no such thing as a Zynga game.   There isn’t even a  Bing.

I’m not talking about what it’s like living off-grid, by choice or necessity.

I’m talking about the fact that when we interact online with any of these major services, we interact in one of the local reality zones of their multiverses.  The dominant large-scale consumer internet apps and platforms do not exist in a single version.  They all deploy multiple variant versions of themselves simultaneously, to different people, and pit them against each other to see which ones work better.  The results of these tests are used to improve the design.  Or so goes the theory.  (It should be borne in mind that such a process was probably responsible for the “design” of the human backbone.)

This test-driven approach to design development and refinement has been promoted and democratised as a “must-have” for all software-based startups.  Eric Ries, of The Lean Startup, is probably its most famous advocate.  (Andrew Chen is worth checking out, too, for a pragmatic view from the trenches.)

How do the big platform providers do it? Lashings of secret sauce are probably involved.    But there is a lot of published commentary lying around from which the main ingredients of the sauce can be discerned –  even if the exact formulation isn’t printed on the label.   Here are some resources I’ve found useful:

  • Greg Linden, the inventor of Amazon’s first recommendation engine, has a nice collection of fireside tales about his early work at Amazon in his blog, including how he got shopping cart recommendations deployed (spoiler:  by disobeying an order – and testing it in the wild)
  • Josh Wills, ex-Google,  now Director of Data Science at Cloudera,  talks about  Experimenting at Scale at the 2012 Workshop on Algorithms for Modern Massive Data Sets, and provides some analytical and experimental techniques for meeting the challenges involved
  • Ron Kohavi, ex-Amazon, now Microsoft, has a recent talk and a recent paper about puzzling results from experimentation and how his team has solved them: his 2012 ACM RecSys keynote speech, and his 2012 KDD paper.

There are some commonalities of approach across these talks.  Assignment of people to experiments, and to experimental treatments, is done via a system of independent layers, so that an individual user can be in multiple experimental treatments at once.  Kohavi talks about how this can go wrong, and some ways of designing around it using a modified localised layer structure.
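
The layer idea can be sketched in a few lines of Python.  This is a toy illustration, not any platform’s actual implementation (the function and layer names are invented): hashing the user ID with a per-layer salt gives each layer its own independent bucketing, so one user can sit in several experiments at once without the assignments being correlated across layers.

```python
import hashlib

def assign(user_id: str, layer: str, treatments: list):
    """Deterministically assign a user to a treatment within a layer.

    Salting the hash with the layer name makes each layer's bucketing
    (effectively) independent of every other layer's, so concurrent
    experiments in different layers don't bias one another.
    """
    digest = hashlib.sha256(f"{layer}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(treatments)
    return treatments[bucket]

# The same user gets an independent, stable assignment in each layer.
user = "user-42"
print(assign(user, "ranking-layer", ["control", "new_ranker"]))
print(assign(user, "ui-layer", ["control", "blue_button", "red_button"]))
```

Because the assignment is a pure function of (user, layer), there is no state to store, and a user sees a consistent treatment on every visit.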

Another efficiency-boosting practice is the use of Bayesian bandit algorithms to decide on the size of experimental groups and the length of the experiment.  This practice is most familiar from clinical trials, where adaptive experimentation is used to halt a trial as soon as a robust effect has been found.  This enables the ethically desirable outcome that beneficial treatments are not withheld from those who would benefit, and injurious treatments are stopped as soon as they are identified as such.  It’s so much flavour of the month that there is now a SaaS provider, Conductrics, which will let you use it as a plugin.  They also have a great blog, so check it out if you’re interested in this topic.  Google Analytics Content Experiments also provides support for this, in a more constrained way.
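
One common Bayesian bandit scheme is Thompson sampling, which can be sketched in a few lines of Python.  This is a toy illustration, not necessarily what Conductrics or Google Analytics run under the hood, and the conversion rates below are invented: each round, sample a plausible conversion rate for every variant from its Beta posterior and serve the variant whose sample is highest, so traffic drifts towards the winner while uncertain variants still get explored.

```python
import random

def thompson_step(successes, failures):
    """One round of Thompson sampling over Beta posteriors.

    successes[i] / failures[i] count conversions and non-conversions
    for variant i.  Draw one sample from each variant's Beta(1+s, 1+f)
    posterior and return the index of the largest sample.
    """
    samples = [random.betavariate(1 + s, 1 + f)
               for s, f in zip(successes, failures)]
    return samples.index(max(samples))

# Toy simulation: variant 1 truly converts at 12%, variant 0 at 8%.
true_rates = [0.08, 0.12]
s, f = [0, 0], [0, 0]
for _ in range(5000):
    arm = thompson_step(s, f)
    if random.random() < true_rates[arm]:
        s[arm] += 1
    else:
        f[arm] += 1
print("traffic share served to the better arm:", (s[1] + f[1]) / 5000)
```

Run long enough, most of the traffic ends up on the better variant, which is exactly the “stop withholding the good treatment” property that makes the adaptive approach attractive.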

So there are lots of hints and tips about optimising the mechanics of running a test.  But there isn’t as much said about what to test, and how to organise a series of tests.  Which is, for most people, the $64 million question.  This is something I’ve been thinking about, talking about, and advising on.  I’m still working it through, though – and if you are too, and you know of any interesting resources I’ve missed – do share them with us.

Social games: getting smarter faster

The terms ‘games’ and ‘data warehousing’ used to hang about in different semantic ‘hoods.  No longer.  Social games publishers are bringing business intelligence bods on board by the bucketload.

There is even a stealth-mode startup dedicated to social games optimisation, Turiya Media.  The CEO, Chetan Ramachandran, and the CTO, Shalom Tsur, both have experience of Big Data and predictive analytics.  Amongst other things, they are reportedly planning to develop a revenue stream based on the uplift they provide to vendors’ virtual goods revenue, through personalised recommendations.  Any service provider willing to put skin in the game in this way is worth watching.

Of the developers’ actual and prospective hires, Zynga’s seems the most directional.  Yes, it all boils down to KPIs – once you’ve boiled it all down.  But the gotcha is that there is no set recipe for exactly how this boiling ought to be done.  (Lighting a fire underneath your data is sometimes tempting but is fundamentally not productive.)  It’s the conceptual route you take to understanding what drives your KPIs that is important.  It’s what winnows the wheat from the chaff, analysis-wise.  And that’s where it starts to get fun.  The Zynga hire is interesting because the fact that players are part of a social network is interesting, and important.  Understanding how it is important is… interesting.

Any app worth the hassle of building it probably deserves to get some analysis that assesses the extent to which it is achieving its goals.     But, for social games, the incentive to investigate user behaviour intelligently is even greater:

  • barriers to leaving are low: customers can ever so easily let their fingers do the walking, to another game, or to another leisure activity altogether, so high retention is vital for vendor success
  • products are not physically distributed, so what’s on offer can be changed on the fly, to fix what’s wrong,  to test variant configurations, or, in the limit, to dynamically personalise and optimise the rules of the game experience
  • inter-player game play is a driver for distribution, satisfaction, and retention

There are some massive challenges associated with doing design-driven analytics for social games.  But since the penalty for getting it wrong is death, everyone’s doing it.  Vendors are under evolutionary pressure to get smarter faster – just like their games.

Money-making mechanics in social games

The most popular social games on Facebook and other social networks are free to play.   So… where’s the money?  

TechCrunch recently published a ‘teardown’ of social games market leader Zynga, which estimated Zynga could be achieving a 30% net margin.  This estimate is probably wrong in some of its details, but the overall answer is perhaps not wrong by a great deal.  (I’ve done this kind of stuff myself, and I think being ‘perhaps not wrong by a great deal’ is pretty good.)

Of course, it’s not just a matter of build it and they will come.  According to Facebook’s official statistics (on 18 May 2010), there were over 550,000 applications on the platform.  It goes without saying that not all developers are profitable (or even revenue-generating).  And, according to the ever-interesting AppData site, on 18 May 2010 there were 68,867 Facebook applications with fewer than 10k monthly active users (MAU).  And that’s only the ones they are bothering to track.

Interestingly, there seem to be even more small-userbase applications on Facebook than you’d expect based on Zipf’s law (which is a good predictor of the frequency distribution of a wild and wonderful range of natural phenomena, from city size, through income distribution and earthquake magnitude, to company size).
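
As a rough sketch of how such a comparison might be made: under Zipf’s law with exponent 1, the size of the r-th largest item is proportional to 1/r, so you can generate the predicted rank-size curve and hold it up against the observed app sizes.  The totals and app counts below are invented purely for illustration.

```python
def zipf_expected(total, n_ranks, exponent=1.0):
    """Expected size at each rank under Zipf's law.

    The size of the r-th largest item is proportional to 1/r**exponent;
    the weights are normalised so the sizes sum to `total`.
    """
    weights = [1 / r ** exponent for r in range(1, n_ranks + 1)]
    norm = total / sum(weights)
    return [norm * w for w in weights]

# Hypothetical illustration: predicted MAU of the top 5 apps if their
# combined traffic were 100M users and Zipf's law held exactly.
for rank, mau in enumerate(zipf_expected(100e6, 5), start=1):
    print(rank, round(mau / 1e6, 1), "M MAU")
```

Deviations from this curve at the low-rank tail – far more tiny apps than the straight line on a log-log plot predicts – are exactly the kind of surplus of plankton described below.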


So, there are lots of plankton swimming in the sea – maybe even more than you’d expect.  With your glass half full, you could attribute this to the perceived attractiveness of the medium.  But it’s not all plankton out there.  There are also some blue whales.  As of 18 May 2010, there were 24 Facebook applications with more than 10 million MAU, and, of these, 16 were games.

These (and other) top social games developers are  extracting value from their users in a variety of ways, both direct and indirect:  

  • Playdom, who are on MySpace and, to a lesser extent, Facebook, get 70% of their revenue from direct user payments, 10% from advertising, and 20% from ‘offers’ (according to TechCrunch, Nov 2009)
  • Zynga, the Big Kahuna of social games developers, gets one third of its revenue from direct user payments, one third from advertising, and one third from ‘offers’ (according to TechCrunch, Nov 2009) – although since Zynga cross-promotes its own applications, it’s important to understand whether this advertising revenue is net or gross.  (That said, a more recent BusinessWeek article claimed 90% of Zynga’s revenue came from virtual currency.)
  • Serious Business, which was acquired by Zynga earlier this year, used to have a large proportion of revenue from advertising, but as of January 2010 was getting 90% of its revenue from virtual goods, and 90% of that from direct payments, aiming for 100% direct payments, according to an interview quoted by one of its VC backers, LightSpeed Venture Partners
  • Moshi Monsters, a social game environment aimed at children, had 14 million users as of April 2010, is growing revenue 20% per month, and is ‘very profitable’, according to a recent news article.  At present its main revenue earner appears to be subscriptions, which deliver premium membership privileges, but it is also looking at offering real-world goods (such as cuddly toys and T-shirts) based on characters in its virtual world.

Overwhelmingly, the money developers make from direct user payments comes from the purchase of in-game credit, which can be used to fund the purchase of virtual goods and services.  Premium content subscriptions are also used, but are less common.  They make particular sense for developers of games such as Moshi Monsters, where the ultimate end-user (aka ‘the kid’) is not usually the person who actually pays (aka ‘the parent’).  In this case subscriptions lower the friction involved in the transaction, and also act as a welcome control on total expenditure.

There appear to be two main indirect methods of monetising game players.  The first is the sale of in-game advertising space, which usually goes through a third-party broker (e.g., AdKnowledge).  The success of this option relies on its ability to take attention away from the game, which carries the risk of weakening the user’s involvement with the game.  So, it does have some drawbacks.  But it appears to be a welcome enough source of revenue.

In theory, the ads served up as a side dish by a game could be so attractive that they would actually reinforce players’ desire to play.  However, in practice this is rare, because of a lack of tailored inventory on the supply side and on the contextual delivery side.  The potential exists to make much more sophisticated use of games as an ad medium, but the underlying logic of when and what to serve up to whom would have to be radically re-thought.  Knowledge of people’s game behaviour, both contextual (‘oh zut, I just lost again!’) and characteristic (‘I am a hoarder and a risk avoider’), could potentially deliver a unique form of behavioural targeting, over and above that offered by other media.  In practice, nobody on the creative or the technical side appears to be making the effort to design for this in a skilful way.  Yet.  (The same could be said of the targeting opportunities inherent in Facebook advertising, mind you.  But it will come.)  [Sez me.]

The second indirect way of making money from users is via ‘offers’, where another company, brokered by a third party, pays the developer when the user takes up an offer, and the user is rewarded with in-game credits.  These offers vary, and include participation in market research, or taking up a free trial.  TechCrunch made a lot of noise last year about scammy offers – a campaign which seems to have been quite effective.

All methods of making money rely on the developer having  ‘enough’ users who convert in the desired way.  (Even plain old advertising has its day of measurement reckoning these days.)    A lot of craft and graft goes into engineering the timing and nature of invitations to convert.

Exactly what counts as a ‘large enough’ conversion rate is a much-cherished secret in this market.  (TechCrunch claims Zynga is making 1-2%.)  All methods of making money rely on a large enough portion of the user base getting involved enough in the game world to spend, at the very least, time, and at the most, money.  (Sometimes lots and lots of money, as in the case of the over-eager 12-year-old who ‘invested’ £600 in Farmville, maxing out his life savings and making his mother’s credit card glow in the dark.)

Exactly what motivates people to do this is a very interesting question.  I’ve previously talked about how social game environments can be pleasant, and how they grow by colonising existing schemas for communicating and exchanging social meaning.  I think my analyses are right – but I also think they don’t provide a full explanation of the compulsion to play.