Come and hear me talk at @developconf Weds July 11th

If you’re at @DevelopConf next week, please drop by and say hi. I’m talking at 4pm on Wednesday, about UX and BI in games analytics, and the tools and processes behind them.

Heckle, ask questions, throw Nerfs – let’s make it lively.

Er, yes. I do seem to be talking more than I’m writing these days…!

Exploiting competitor information via the Facebook API

Interesting day at Facebook’s London Mobile Hack yesterday. Disclosure: I skipped the hack part, which was scheduled for 5pm to 9.30pm, but I attended a full day of lectures before I snuck off. So I hope I can wear the t-shirt without shame.

At one point I asked Simon Cross (who built the Graph Explorer) about the relationship between:

  • the custom actions you can create, make available for users to perform, record inside Facebook on the user’s Graph, and make available for publication to your users’ Timelines, Tickers and Newsfeeds (subject to Facebook’s algorithmic discretion, and subject to the user having granted your app the requisite write permissions)

and

  • the fine-grained activity permissions that users grant to applications, which you can inspect (and troubleshoot) via Graph Explorer

I wasn’t really sure why I was asking the question – I just had a nagging feeling I was missing something. I didn’t understand how custom actions, which are extensible, related to activity permissions, which are (presumably, but not necessarily) defined in the same way for every app, though not necessarily set to the same values.

In retrospect I was just confused. My thinking now is that there is no necessary relationship. The custom actions, which are graph extensions, arise from a combination of app design and Facebook approval; then, at run time, they are instantiated and populated through user activity within the app. These custom actions don’t necessarily have any link to the Facebook app action permission schema, although they might, if the action involved corresponds, at a more abstract level, to one of the action types in the permission schema. OK, well, that’s sorted then. Maybe. Unless I’m actually wrong here, which of course I might be. In which case, tell me.

Setting aside for the moment whatever ontological muddles I might have gotten into,  the answer I got from Simon was, I think, much more interesting than the question I asked.   What he said was that subject to the appropriate permissions having been granted by the user, it was possible for an app to read data stored in the user’s graph by other apps.    He explained that this was because Facebook viewed all data as the user’s data, and it was, therefore, for the user to decide who could view it.

Here’s an example which I just fished out of the docset:

“If the user has granted your app with the user_games_activity permission then this api will give you scores for all apps for that user. Otherwise it will give you scores only for your app.”  (source:  https://developers.facebook.com/docs/reference/api/user/#friends accessed 13.34 GMT 6 March 2012)
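To make that concrete, here’s a sketch of consuming such a scores response. The JSON shape and the app names below are my own assumptions for illustration – not a verbatim copy of Facebook’s schema – but the point stands: with the right permission, one app can read what other apps have recorded.

```python
import json
from collections import defaultdict

# A hypothetical scores response for a user who has granted our app
# user_games_activity. Field names are illustrative assumptions.
response = json.loads("""
{
  "data": [
    {"user": {"name": "Alice"}, "score": 120, "application": {"name": "OurGame"}},
    {"user": {"name": "Alice"}, "score": 340, "application": {"name": "RivalGame"}},
    {"user": {"name": "Alice"}, "score": 95,  "application": {"name": "OtherGame"}}
  ]
}
""")

# Group the user's scores by app: with the permission granted, we see
# rivals' entries too, not just our own app's.
scores_by_app = defaultdict(int)
for entry in response["data"]:
    scores_by_app[entry["application"]["name"]] = entry["score"]

print(dict(scores_by_app))
```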

This has a variety of interesting potential uses, which I am sure you are busy thinking about right this minute.

Of course – what’s sauce for the goose is sauce for the gander. If the user has granted permission, you can see the trail other apps have left, but other apps will be able to see what YOU have salted away in the graph, too.

Facebook DAU and MAU: what they tell you (and what they don’t)

DAU (daily active users) and MAU (monthly active users) data for Facebook application usage is easy to come by:  InsideNetwork publish daily leaderboards of DAU and MAU online, on Appdata.  But what does knowing DAU and MAU tell you?  

Is the DAU metric more like the “DAU-Jones index”, “The DAU of Pooh”, or “The DAU te Ching”?  

Shanti Bergel, in Designing For Monetization: How To Apply THE Key Metric In Social Gaming, argues that DAU “has emerged as the key metric determining the popularity and potential of a social game”, and that the ratio of DAU to MAU, looked at over time, is important too. As I understand it, part of Bergel’s argument is that since commercially interesting metrics such as ARPU (average revenue per user) or LTV (lifetime customer value) are difficult, if not impossible, to come by on a comparative basis, we should enthusiastically squeeze whatever meaning we can out of the data we can get, which is DAU and MAU. In other words: love the one you’re with.

For Bergel, DAU and MAU are useful proxies (i.e. measures which stand in for measurements we are more interested in, but can’t get at directly). He interprets the DAU/MAU ratio as something that gives you a sense of “how well a game retains its users”, and says it “is an indicator of potential” which “demonstrates that the game is compelling and can successfully drive engagement”, although it “does not speak directly to sales or earnings”. (Since writing that, he’s joined Playfish.)

Another interpreter of the DAU, Eric von Coelln, has trademarked his own special name for the DAU/MAU ratio – he calls it “The Social Game Sticky Factor”. In his post on Inside SocialGames, he compares a number of leading apps in terms of “stickiness”, and says that the sticky factor allows you to benchmark applications’ ability to retain their users. (Since writing that, he’s joined OMGPop, a social game developer, as VP.)

Yeah, but. Both Bergel’s and von Coelln’s interpretations of these metrics have serious flaws. I should add that both articles are worth reading despite this, as they have lots of interesting stuff to say about the design determinants of in-game user behaviour. On the subject of metrics, though, they are misleading.

DAU is certainly an index of popularity, as Bergel says.   But for sure it isn’t a determinant.  (Or at least not in an obvious way.)      DAU is not really an indication of the potential of a game, either.     (Or at least not in an obvious way.)

The way DAU changes from day to day, and how this rate of change itself changes, is perhaps a better indicator of popularity, and a better illustration of potential, in the sense that this momentum information can give a hint of where an app might be in its lifecycle: on the up, peaking, or starting to flag. But even here, there is an interpretation challenge. One problem is that the standard model you might be tempted to use (if you are into reading tea leaves based on 2nd derivatives) is probably inappropriate. Another, less technical, problem is that many different roads lead to Rome.
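For the tea-leaf readers, the “momentum” being discussed is just the first and second differences of the DAU series. A minimal sketch, with made-up numbers:

```python
# First and second differences of a DAU series as a crude momentum
# indicator. The DAU figures below are invented for illustration.
dau = [100, 120, 150, 190, 240, 300]

growth = [b - a for a, b in zip(dau, dau[1:])]              # 1st difference
acceleration = [b - a for a, b in zip(growth, growth[1:])]  # 2nd difference

print(growth)        # [20, 30, 40, 50, 60]
print(acceleration)  # [10, 10, 10, 10] -- steadily accelerating
```

A constant, positive second difference is exactly the “ever-accelerating” pattern discussed below: a signal worth watching, but silent about its cause.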

If a game is gaining DAU at an ever-accelerating rate,  this is certainly a signal worth watching.  Some light on some dashboard somewhere should be lighting up.   Depending on your taste in alerts, perhaps a siren should be honking.   What you know  is that something interesting is happening.      But here’s the catch.  You don’t know exactly what it is.  

You don’t know whether accelerating growth is due to new users joining because of massive ad spend, at the same time as old users are dropping out from boredom. You don’t know if accelerating growth is happening because existing users have been massively incentivised with in-game rewards to recruit friends, and they, in turn, are recruiting new friends. Both these possibilities are very interesting. But they are interestingly different.

The same ‘devil in the detail’  gotcha is true of DAU/MAU, as a metric, which von Coelln has dubbed “The Social Games Sticky Factor”.  He says “If your application has a 33% sticky, it means that for every new user you bring in, you have a 33% shot at turning them into a daily user.”    Not so.    DAU/MAU tells you nothing whatsoever about whether any particular user ever comes back to the game, ever.  

Let’s say, as an extreme example, my app delivers a one-way-ticket-to-the-moon. (And once you are on the moon, you can’t play the game again. But it’s a good game, so people play it.) As shown in the Moon Shot Game graphic below, a game in which no user ever played twice could still have a DAU/MAU of 33%. This is the case on Day 30 of the Moon Shot game: DAU is 1, MAU is 3, and DAU/MAU is 33%.
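The Moon Shot arithmetic is easy to check in code. I’m assuming (as a reconstruction of the graphic) one new player on days 10, 20 and 30, each playing exactly once:

```python
# Moon Shot game: every player plays exactly once, then leaves for the
# moon. Assumed schedule: one new player on days 10, 20 and 30.
plays = {10: "alice", 20: "bob", 30: "carol"}  # day -> player

day = 30
dau = sum(1 for d in plays if d == day)                          # played today
mau = len({p for d, p in plays.items() if day - 30 < d <= day})  # last 30 days

print(dau, mau, round(dau / mau, 2))  # 1 3 0.33
```

A 33% “sticky factor”, and yet not a single user has ever returned.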

When looking at active user numbers over the course of a specific period of time, whether a day or a month, what you see is the additive result of two types of event: ‘existing user plays game’ and ‘new user plays game’. There are lots of different determinants of the probability of these two behaviours (and, as I said earlier, both Bergel and von Coelln are worth reading on this subject). But unless you actually get explicit about modelling these two events as distinct, and unless you get specific about modelling how (and why) they are changing, you won’t know what assumptions you are making. And if you don’t know what assumptions you are making, you won’t be able to test whether or not your assumptions are sensible, or – even better – right.
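Here’s the decomposition in miniature. The numbers are invented, but they show how two very different mixes of the two event types collapse into the same headline DAU:

```python
# Same DAU, two very different stories. Each scenario splits a day's
# active users into 'new user plays' and 'existing user plays'.
ad_blitz   = {"new": 900, "existing": 100}   # paid installs, heavy churn
viral_loop = {"new": 300, "existing": 700}   # incentivised recruitment, retention

dau_a = sum(ad_blitz.values())
dau_b = sum(viral_loop.values())

print(dau_a, dau_b)                    # identical headline DAU: 1000 1000
print(ad_blitz["existing"] / dau_a)    # retained share: 0.1
print(viral_loop["existing"] / dau_b)  # retained share: 0.7
```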

What would such a model look like? Jon Radoff’s Social Application/Game Growth Model is an example of a growth model for social game use which uses customer lifecycles, in combination with viral recruitment, to model “total active customer” numbers. His particular model doesn’t map directly onto DAU or MAU, because it doesn’t explicitly model the probability of a given customer playing on a given day, or in a given month (or a given minute, or year). Rather, being a customer is treated as a simple attribute which has a lifespan. This is similar to the modelling assumptions that go into typical epidemiological models of disease spread: they don’t usually model how often you cough, just whether you get sick, and whether you get better. To map to DAU, you would have to model the coughing. (Also, in Radoff’s model, it might be interesting to assess the impact of modelling K as a random variable with a power-law distribution. But for that I would definitely need more coffee.)
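A toy lifecycle-plus-virality model in that spirit can fit in a few lines. This is my own sketch, not Radoff’s actual spreadsheet: each active customer recruits K new customers per period, and stays active for a fixed lifespan before churning out.

```python
# Toy viral growth model (illustrative, not Radoff's spreadsheet).
K = 0.2         # viral factor: recruits per active customer per period
LIFESPAN = 5    # periods a customer stays active before churning
SEED = 1000     # initial cohort of customers

cohorts = [SEED]        # new customers acquired in each period
total_active = []
for period in range(12):
    active = sum(cohorts[-LIFESPAN:])  # cohorts still within their lifespan
    total_active.append(active)
    cohorts.append(K * active)         # next period's viral recruits

print([round(a) for a in total_active])
```

Note that K × LIFESPAN plays the role of a reproduction number: below 1 the game eventually fades, above 1 it compounds. Nothing here says anything yet about how often a given customer actually plays, which is the extra step you’d need to get to DAU.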

Even more comprehensive customer lifecycle modelling can be found in Andrew Chen’s blog post, How to Create a profitable Freemium startup (spreadsheet model included!), in which he drags cost and revenue into app use metrics, and comes up with some interesting conclusions about which differences make a difference. Amongst other things, he demonstrates how customer LTV (lifetime value) is immensely sensitive to changes in retention rate. Not surprising, but important. However, much as you might like to believe it, retention rate just isn’t something you can really tell for sure from MAU, or from DAU, or from their ratio. It would be nice if you could. But you can’t. That’s not the DAU’s fault. It’s its nature.
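The retention-sensitivity point is easy to see under a simple geometric-churn model (my own back-of-envelope assumption here, not Chen’s exact spreadsheet): expected customer lifetime is 1/churn, so LTV scales with 1/(1 − retention).

```python
# LTV under geometric churn: LTV = ARPU per period / churn rate.
# ARPU figure is illustrative.
arpu = 2.0  # revenue per user per month

def ltv(monthly_retention):
    churn = 1 - monthly_retention
    return arpu / churn

print(ltv(0.80))  # 80% retention -> 10.0
print(ltv(0.90))  # 90% retention -> 20.0: ten points of retention doubles LTV
```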

What, then, of the true nature of the DAU? Is it more like the Dow-Jones, The Tao of Pooh, or the Tao te Ching?

My vote is that it’s a cross between the Dow-Jones and the Tao te Ching: a hard number that needs creative interpretation. But this interpretation process has many pitfalls for the unwary. There are many routes to the same DAU, and you have to try to figure out which one you think you’re on, to understand what it might mean.