Sticky Smartphone

How to Calculate Customer Lifetime Value with Flurry and Apsalar

While I was looking around the web to see how people calculate Customer Lifetime Value, one thing became painfully clear: most websites assume you already know your customer lifespan, or customer lifetime, aka how long an average user will keep using your mobile app (or website or service).

Well, there’s an easy way to calculate it, and I’ll show you how. You can even get more advanced with segments. Here’s the simple way first.

First off, calculate your Expected Average Customer Lifetime

Using Flurry

Open up Lifecycle Metrics and you will see retention rates over different time periods in the rolling retention view.

(*Data is for illustrative purposes)

Usually the first month will have a large drop-off (the people who try your app only once and decide against using it again). You can either factor this in and use it as your worst-case scenario, or average it out over the months. In the above graph, the worst-case scenario is a monthly retention rate of 60%; averaging out gives a retention rate of around 75%.

The calculation is Lifetime Expectancy = 1/(1-RR)

= 1/(1-0.75) = 4 months (for a 60% retention rate it is 2.5 months)

*RR = Retention Rate

You can do the same for different time periods where necessary (e.g. weeks) and can get more accurate results when you use segmentation.
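If you prefer to script it, here is the same formula as a minimal Python sketch (the retention rates are the illustrative ones from the graph, not real data):

    def lifetime_expectancy(retention_rate):
        """Expected customer lifetime in periods, given a per-period retention rate."""
        return 1 / (1 - retention_rate)

    print(lifetime_expectancy(0.75))  # averaged-out rate -> 4.0 months
    print(lifetime_expectancy(0.60))  # worst-case rate   -> 2.5 months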

Using Apsalar

(*Data is for illustrative purposes)

This is the section you need to get your retention rates. Apsalar cohorts your data into weeks or days, so you will have to average out each column: in the example table, (34.18+37.59+36.47+33.83+31.4) / 5 comes to around 34.7%, with the following weeks at around 90% (so for this 5-week period it's around a 78% average retention rate and a 34% worst-case scenario). Do the same for all other weeks and you will be able to calculate the average retention. Then run the same calculation as you did for Flurry and you will get a lifetime expectancy. Note that Apsalar will only let you do this for daily or weekly retention.
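As a sketch, the same averaging in Python (the week-one column comes from the example table above; the later weeks are the illustrative ~90% figures, not real data):

    # Week-one retention (%) for the five cohorts in the example table
    week1 = [34.18, 37.59, 36.47, 33.83, 31.4]
    week1_avg = sum(week1) / len(week1)
    print(round(week1_avg, 1))  # ~34.7 -> the worst-case scenario

    # Average each later column the same way (around 90% each in the example),
    # then average the weekly figures for the overall retention rate
    weekly_avgs = [week1_avg, 90, 90, 90, 90]  # illustrative later weeks
    print(round(sum(weekly_avgs) / len(weekly_avgs), 1))  # ~78.9 -> the "around 78%"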

In Apsalar, because you will have already created cohorts for your tables, you can calculate a lifetime expectancy for every cohort group, which means you can compare the retention rates of the different groups.

Lifetime Value

Calculate your monthly revenue, divide it by the number of active users that month, and that is the average revenue per user. If you have other variable costs per unit sale (e.g. packaging), deduct those from the average revenue to work out the average marginal profit per user. Multiply the average marginal profit per user by the expected lifetime and that is your customer lifetime value.

*If you ran marketing campaigns to retain or acquire users, you need to deduct that spend from your average revenue per user.
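Put together as a Python sketch, with made-up numbers for every input:

    # All figures are hypothetical, for illustration only
    monthly_revenue = 50_000.0
    marketing_spend = 5_000.0        # retention/acquisition campaigns that month
    active_users = 20_000
    variable_cost_per_user = 0.30    # e.g. packaging
    expected_lifetime_months = 4     # from the retention calculation above

    arpu = (monthly_revenue - marketing_spend) / active_users
    marginal_profit_per_user = arpu - variable_cost_per_user
    lifetime_value = marginal_profit_per_user * expected_lifetime_months
    print(round(lifetime_value, 2))  # 7.8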

NOTE: Pick with care; each analytics tool defines and measures retention in a different way. For Apsalar, it depends on the 'cohort' event you pick; for Flurry it's an active session. Frequency also matters: someone who uses an app 2 times a month will appear in monthly retention figures, but will not appear in some of the weekly retention figures.

Segments

Both Apsalar and Flurry allow you to create segments, which means you can calculate a lifetime expectancy and lifetime value for each segment you create. Each segment may produce a different amount of revenue and have a different lifetime expectancy, so you can focus on the segments that create the most value for you. Be careful though: different segments may well have different acquisition costs, so you need to calculate those to make sure you don't spend more than the lifetime value of the customer.
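As a quick sketch of that comparison in Python (all figures hypothetical):

    # Hypothetical per-segment lifetime values and acquisition costs
    segments = {
        "organic":  {"ltv": 7.80, "cac": 0.50},
        "paid_ads": {"ltv": 5.20, "cac": 6.50},
    }

    for name, s in segments.items():
        margin = s["ltv"] - s["cac"]
        status = "profitable" if margin > 0 else "losing money"
        print(name, round(margin, 2), status)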

Filed under lifetime value flurry apsalar lifetime expectancy customer lifetime value customer lifetime expectancy


How to use Flurry for split testing and engagement metrics for your mobile app

In my previous post, I talked about getting retention and engagement metrics out of split testing.

Here’s a practical example of how to do A/B testing using Flurry.

Create an App_Launch event that fires whenever your app is started or brought back from the background.

When you log the event, pass it a parameter of A(name of split test) or B(name of split test). You can decide in advance whether the app should use the 'A' version or the 'B' version using some device variable such as the MAC address or UDID.
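One common way to make that decision deterministic is to hash the identifier, so the same device always lands in the same group. A minimal Python sketch of the idea (the identifier and test name are hypothetical):

    import hashlib

    def assign_variant(device_id, test_name):
        """Deterministically bucket a device into 'A' or 'B' for a named test."""
        digest = hashlib.md5((device_id + test_name).encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # Pass the result as the event parameter, e.g. "A(iap_button_position)"
    print(assign_variant("00:11:22:33:44:55", "iap_button_position"))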

For the purpose of this post, I will use a conventional 'marketing' conversion split test, with the position of the in-app purchase button as the illustration. However, it could be anything: the number of coins a new user receives on starting a game, the order of the tabs at the bottom, the layout of a particular screen, etc. In this example, people in group A have the in-app purchase at the top of the screen; people in group B have it pop up. Marketing wants to know which positioning maximises conversions, but we also want to see the impact on engagement and retention, which I will talk about in a later post.

1) Create 2 segments inside Flurry

  • Go to the Manage -> Segments and press Create New Segment.
  • Press Add Custom Event and click on Only include users who triggered the event with these parameters and values
  • In the triggering event name put in App_Launch (or whatever you called it)
  • In the Parameter name put in A(name of split test)
  • Do the same for B

You have now created 2 segments that let you dissect the user behaviour of both groups.

2) Check the conversion

  • Go to the Usage -> New Users Table and select the App version for the split test. This gives you the total number of new users who used the app with the split test in place. But you want to segment them, so select the ‘A’ Segment. In my example there are 6604 new users in segment ‘A’.

  • Go to the Events Summary screen and select the A segment. Click on the Event Statistics icon and you will get the number of people who were part of Group A and clicked on the in-app purchase.

This shows 608 people converted. That's a conversion rate of just below 10%.

Do the same for B, and now you can compare the conversion rates. In my example, B has 6320 new users and 478 conversions. Use a tool such as this online calculator and we find that the difference is statistically significant.
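If you'd rather run the check yourself, a two-proportion z-test in plain Python reproduces the result with the numbers above:

    from math import sqrt, erf

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """z-score for the difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    z = two_proportion_z(608, 6604, 478, 6320)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
    print(round(z, 2), round(p_value, 4))  # z ~ 3.4, p < 0.001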

3) The Bonus Engagement and Retention Metrics

If you go back to the event summary, you can download all the A and B data in CSV format. Go ahead and do that.

Then create a spreadsheet with 3 sheets: A, B, and statistical significance. By adding a statistical test to each event, you can test your whole app across all its events. I used this basic formula:

=(((0.5*('Split Test A'!B2+'Split Test B'!B2))-'Split Test A'!B2)^2)/(0.5*('Split Test A'!B2+'Split Test B'!B2))+(((0.5*('Split Test A'!B2+'Split Test B'!B2))-'Split Test B'!B2)^2)/(0.5*('Split Test A'!B2+'Split Test B'!B2))

But there are plenty of other formulas that may be more suitable for you.
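You can also script the same test over the exported CSVs instead of a spreadsheet. A Python sketch, assuming each CSV has the event name in the first column and the user count in the second with no header row (adjust to match Flurry's actual export layout):

    import csv

    def load_counts(path):
        """Map event name -> user count from an exported CSV."""
        with open(path, newline="") as f:
            return {row[0]: float(row[1]) for row in csv.reader(f) if row}

    a = load_counts("split_test_a.csv")  # hypothetical file names
    b = load_counts("split_test_b.csv")

    for event in sorted(a.keys() & b.keys()):
        expected = 0.5 * (a[event] + b[event])
        chi2 = ((expected - a[event]) ** 2 + (expected - b[event]) ** 2) / expected
        if chi2 > 3.84:  # chi-square critical value, 1 df, p < 0.05
            print(event, round(chi2, 2), "significant")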

Next, colour-code the spreadsheet so that any significant differences are highlighted, and then you can see the impact of the A/B test beyond just the conversion.

This can bring you many different insights. For example, conversion may be higher for in-app purchases, but the number of people recommending or sharing the app using a Tweet or Facebook button decreases for that group.

Remember to check in which direction the result is statistically significant.


Filed under Flurry Mobile analytics split testing mobile app testing Mobile conversion Engagement metrics