Sticky Smartphone

Posts tagged mobile analytics

How to forecast active user growth in your app (Shareable spreadsheet)

One of the most important things when developing your mobile app (or web app) is determining how many people are returning to use it, and measuring whether the changes you are making are really improving it for your users. The number of registered users is not a great measure of how your app is engaging users: you could have 1 million users signing up, but none of them may return to use the app after the first time.

A better measure is to see how many people are actively using your app. However, this number is easily inflated by brand-new users, so you need to work out how many people stop using your app out of each group of new users, and when they stop using it. This makes it complicated to forecast the growth (or non-growth) of your app, which in turn makes it hard to measure whether your changes are having the desired effect of making your app sticky.

When is it time to get more users?

If you’re losing lots of users, you may not want to focus on acquiring lots of new ones, because they will likely only use your app once. So you need a good indication of when to start applying resources to getting more users.

Spreadsheet

I’ve created a basic Google spreadsheet that lets you set up some active user goals and makes forecasts for you. You can set a target number of active users for a certain point in the future and understand how many people you need to acquire to hit that goal.

This lets you decide whether it is an appropriate time to turn on the taps for user acquisition. For example, if achieving 100,000 active users in 50 weeks means acquiring 10,000,000 new users, it’s clearly not the time to plough financial resources into acquiring new users.

The Output


It will produce a chart like the above.

  • The red line is the forecast of active users.
  • The orange line is your real data.
  • The blue bars are the number of new users you are acquiring for each data point (day, week, month… it’s up to you).
  • Target Active Users is the number of active users you hope to achieve.
  • Target Period is the time by which you are looking to reach that number of active users.
  • User Acquisition Growth Rate is the linear % increase in the number of users you need to acquire each period.
  • Total Users is the overall number of users you need to have acquired by the final period to hit your target number of active users.

What do you need to input?

You need to type in the target number of active users and the target period you hope to achieve it by. To generate the graph, fill in the fields described below.

The first two fields are your baseline: the number of new users you acquired in the last period and the number of active users you had in the last period.

Churn rates

Churn rate is the % of users you lose every period. For example, if you acquired 100 users in one period and 30 of them are still active in the next period, your churn rate is 70%. Usually the churn rate is high for the first period and lower in subsequent periods; in the first period people are trying out your app, and quite a few of them will decide it is not for them. The subsequent periods tend to have a more ‘natural’ churn.

So in this chart you need to apply a first-period churn rate and a second churn rate for all other periods. You should be able to extrapolate these figures from your analytics tool.
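
To make the mechanics concrete, here is a minimal Python sketch of the kind of recurrence the forecast is built on. This is an assumed model based on the description above, not the sheet’s exact formulas: new users count as active in the period they arrive, their cohort loses the first-period churn after one period, and the surviving pool loses the ongoing churn every period after that.

    def forecast_active_users(baseline_active, baseline_new, new_users_per_period,
                              first_period_churn, ongoing_churn):
        """Forecast active users per period from new-user inflows and two churn rates.

        Assumed model (a sketch, not necessarily the spreadsheet's exact formula).
        """
        retained = baseline_active - baseline_new  # users already past their first period
        prev_new = baseline_new                    # last period's cohort
        actives = []
        for new in new_users_per_period:
            retained = retained * (1 - ongoing_churn) + prev_new * (1 - first_period_churn)
            actives.append(retained + new)
            prev_new = new
        return actives

    # Example: baseline of 5,000 actives (1,000 of them new last period),
    # acquisitions growing 10% a week, 60% first-period churn, 10% thereafter.
    weekly_new = [round(1000 * 1.10 ** i) for i in range(10)]
    print(forecast_active_users(5000, 1000, weekly_new, 0.60, 0.10))

Plugging in your historical new-user numbers and comparing the output against your actual active-user counts is exactly the validation step described below.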

New Users and Active Users

The last thing to fill out is the number of new users you get every period. You can start by plugging in old data to see if your churn rate estimates are accurate: fill in the actual number of active users and you can see how good the prediction is, based on the churn rates you entered.

Fill in your new user forecasts and it should give you a forecast of the number of active users.

Testing your performance

When you get new data for a period, you can input the number of new users and the actual active user number. Look at the numbers and the chart: if the real data starts diverging upwards from the forecast, then the changes you have made to your app are having a positive effect on your churn rate.

You can reuse the chart with your new churn rates to re-estimate the targets you are trying to achieve.

If you find this useful, or know someone who may find this useful, spread it around.

The example spreadsheet can be found here

Image from Cillian Storm

Filed under active users mobile analytics churn rates user acquistion forecast predictive analytics

How to use Flurry for split testing and engagement metrics for your mobile app

In my previous post, I talked about getting retention and engagement metrics out of split testing.

Here’s a practical example of how to do A/B testing using Flurry.

Create an App_Launch event that happens whenever your app is started or brought back from the background.

When you log the event, pass it a parameter of A(name of split test) or B(name of split test). You can decide in advance whether the app should use the ‘A’ version or the ‘B’ version using some device variable such as the MAC address or UDID.
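
The assignment just needs to be deterministic per device, so a user always sees the same version. Here is a minimal Python sketch of that bucketing logic; the device id and test name are placeholders, and the actual Flurry event logging happens in your app’s platform code.

    import hashlib

    def split_test_group(device_id: str, test_name: str) -> str:
        """Deterministically assign a device to 'A' or 'B' for a named test.

        Hashing the device id together with the test name keeps the assignment
        stable across launches while letting different tests split the user
        base along different lines.
        """
        digest = hashlib.md5(f"{test_name}:{device_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # The parameter value you would attach to the App_Launch event:
    group = split_test_group("example-device-id", "purchase_button_test")
    print(f"{group}(purchase_button_test)")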

For the purpose of this post, I will use a conventional ‘marketing’ conversion split test, with the position of the in-app purchase button as the illustration. However, it could be anything: the number of coins a new user receives on starting a game, the order of the tabs at the bottom, the layout of a particular screen, etc. In this example, people in group A have the in-app purchase button at the top of the screen; people in group B have it pop up. Marketing wants to know which positioning maximises conversions… but we also want to see the impact on engagement and retention, which I will talk about in a later post.

1) Create 2 segments inside Flurry

  • Go to Manage -> Segments and press Create New Segment.
  • Press Add Custom Event and click on Only include users who triggered the event with these parameters and values.
  • In the triggering event name, put in App_Launch (or whatever you called it).
  • In the parameter name, put in A(name of split test).
  • Do the same for B.

You have now created 2 segments that let you dissect the user behaviour of both of these groups.

2) Check the conversion

  • Go to the Usage -> New Users table and select the app version for the split test. This gives you the total number of new users who used the app with the split test in place. But you want to segment them, so select the ‘A’ segment. In my example there are 6604 new users in segment ‘A’.

  • Go to the Events Summary screen and select the A segment. Click on the Event Statistics icon and you will get the number of people who were part of Group A who clicked on the in-app purchase.

This shows 608 people converted. That’s a conversion rate of about 9.2%.

Do the same for B, and now you can compare the conversion rates. In my example, B has 6320 new users and 478 conversions. Use a tool such as this online calculator and we find that the difference is statistically significant.
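
If you’d rather check the numbers yourself, a standard two-proportion z-test (one common choice; online calculators typically use this or a chi-square test) reproduces the result. A minimal Python sketch using the figures above:

    from math import sqrt

    users_a, conv_a = 6604, 608  # group A: new users, conversions
    users_b, conv_b = 6320, 478  # group B: new users, conversions

    p_a, p_b = conv_a / users_a, conv_b / users_b
    pooled = (conv_a + conv_b) / (users_a + users_b)
    se = sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    z = (p_a - p_b) / se

    print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}")
    # z comes out around 3.4, well above the 1.96 cutoff for p < 0.05,
    # so the difference in conversion rates is statistically significant.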

3) The Bonus: Engagement Metrics

If you go back to the event summary, you can download all the A and B data in CSV format. Go ahead and do that.

Then create a spreadsheet with three sheets: A, B, and statistical significance. By adding a statistical test to each event, you can test your whole app across all its events. I used this basic formula:

=(((0.5*('Split Test A'!B2+'Split Test B'!B2))-'Split Test A'!B2)^2)/(0.5*('Split Test A'!B2+'Split Test B'!B2))+(((0.5*('Split Test A'!B2+'Split Test B'!B2))-'Split Test B'!B2)^2)/(0.5*('Split Test A'!B2+'Split Test B'!B2))

But there are plenty of other formulas that may be more suitable for you.
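
For reference, that formula is a one-degree-of-freedom chi-square statistic that takes the average of the A and B counts as the expected value, so it implicitly assumes both groups are roughly the same size. A sketch of the same calculation in Python:

    def chi_square_equal_split(count_a: float, count_b: float) -> float:
        """Chi-square statistic for two event counts, assuming the two
        test groups are (roughly) the same size."""
        expected = 0.5 * (count_a + count_b)
        return ((expected - count_a) ** 2 + (expected - count_b) ** 2) / expected

    # With 1 degree of freedom, a value above ~3.84 is significant at p < 0.05.
    print(chi_square_equal_split(608, 478))  # about 15.6 for the conversions above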

Next, colour code the spreadsheet so that any significant differences are highlighted, and then you can see the impact of the A/B test beyond just the conversion under test.

This can bring you many different insights. For example, conversion may be higher for in-app purchases, but the number of people recommending or sharing the app using a tweet or Facebook button decreases for that group.

Remember to check in which direction the result is statistically significant.


Filed under Flurry Mobile analytics split testing mobile app testing Mobile conversion Engagement metrics

How to get retention & engagement numbers out of mobile analytics A/B testing

For most people, A/B testing analytics (or split testing) is all about conversion. How can I redesign this webpage to get more people to click through to my goal? How can I get more people to sign up? It’s all about acquisition, acquisition, acquisition… actually, there’s more to it than that.

For mobile app analytics, A/B testing tools generally support campaign or content optimization focused on conversions, clicks or triggers. This is fine if what you are doing is marketing focused, acquisition focused or activation focused, but it doesn’t really help much with engagement or retention. What if you wanted to find out the impact of:

  • Giving new players in your game different starting statistics?
  • Changing the order in which the app screens are presented?
  • Modifying a tutorial page?

What change will increase retention, not just conversions? What change will increase use of other features?

What conventional A/B testing doesn’t tell you - 

  • Engagement - Which users are more engaged with my app as a whole, or with feature X of my app, because of A or B (besides the feature/layout under A/B test)?
  • Retention - Which users are more likely to be retained (increased retention) because of A or B?
  • Features - Given that A or B increased retention, what features are used less or more by each group?
  • Deeper Activity - Does A or B increase activity anywhere else in my app?

In fact there’s a very easy way to do this using event attributes or event parameters (depending on the app analytics tool you use). In your app framework, tag people who are in the ‘A’ group with an event attribute/parameter such as ‘Test Group A’ on their App Launch event, and likewise for ‘B’ (or whatever naming convention makes sense for you). You can use more specific names for different tests.

By doing this, you have segmented your users into A & B and now you can test the impact across every event/metric in your app.

Segment and extract all the data into a spreadsheet and you can then statistically test them like this -

In the above chart, anything in green tested as statistically significant. This gives a deeper insight into what other changes occurred due to the A/B test, and means your A/B tests can go beyond top-level ‘click’ or ‘tap’ conversions, for example to remove features that don’t add to engagement, retention or revenue.
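
As a sketch of that spreadsheet workflow in code, here is how you could flag significant events across the whole app. The event names and counts are invented for illustration, and the statistic is the same equal-split chi-square formula from the Flurry post above:

    # Per-event counts extracted from the A and B segments (invented numbers).
    events = {
        "App_Launch":        (6500, 6480),
        "IAP_Clicked":       (608, 478),
        "Shared_To_Twitter": (112, 169),
    }

    CRITICAL_95 = 3.84  # chi-square cutoff, 1 degree of freedom, p < 0.05

    for name, (a, b) in events.items():
        expected = 0.5 * (a + b)
        chi2 = ((expected - a) ** 2 + (expected - b) ** 2) / expected
        flag = "significant" if chi2 > CRITICAL_95 else "-"
        print(f"{name:20s} A={a:5d} B={b:5d} chi2={chi2:6.2f} {flag}")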

Secondly, using tools which provide retention or lifecycle metrics, you can create a segment using the event parameter and see which version really does provide more retention. This allows you to make more discoveries, such as:

  • Conversions were higher with A, but yielded lower retention. Customer Lifetime Value may be lower because of this, so long-term revenue could decrease.
  • Conversion was higher with A, but a higher-value conversion on a deeper feature is higher with B.
  • There is no significant difference in conversion between A and B, but retention is higher with A.
  • There is no significant difference in conversion, but engagement is higher with B.

These are some of the potential discoveries just looking beyond the basic A/B test.

Do you have any mobile analytics tricks to share?

Image by Search Engine People Blog

Filed under a/b testing app analytics conversion engagement mobile a/b testing mobile analytics mobile apps mobile metrics retention retention and engagement split testing

How to use analytics and segmentation to find value from users with no account

Engaged Mobile User

I was reading this blog post by Fred Wilson, and it occurred to me that engaged users are not just the people who create accounts, regularly log in and contribute, but also the different cross-sections of people who engage with your app passively on their first or returning visits. The gist of the linked blog post is about Twitter and how many people visit it in a month.

  • 400M active users per month
  • 100M users who log in
  • 60M users who tweet

Usually, the 300M people who didn’t create an account are not measured as active or engaged users. If we ignore them, we miss out on the potential insights those 300M users could give us. If we look only at the drop-out points along the account creation funnels and see them only as failed conversions, we are ignoring them as active users and failing to recognise the value they are gaining from just observing. These 300M should be segmented further (e.g. into returning and non-returning visitors), and we should analyse what they are doing to gain insights into their behaviour.

How About Mobile Apps?

One key difference between websites and mobile apps is re-discoverability. For many people, if they don’t like an app, they will delete it. For a website, a new blog post, a link from a trusted source, or a search engine can bring people back; but for apps, once you have deleted one, the hurdles to re-finding and re-installing it are much higher. This poll shows that 26% of people uninstall an app after only using it once.

For mobile apps that require an account before the app can be used, this can be a problem. The app could be redesigned to provide value to users before they create an account. Then we can segment the groups and do some deeper analysis, and potentially reduce the number of app uninstalls. This opens up the chance to retain these “unconverted” users.

What are we measuring?

We are segmenting the users into different levels of engagement, similar to how games can segment people into different ‘levels’ based on their progress. This way we can discover different types of behaviour and insights, and potentially convert users to higher-value actions.

The segments we are effectively using here are

  • Used the app once only
  • Used the app more than once (or on multiple days) but never created an account
  • Created an account
  • Contributed to the system

These segments give us insights into the behaviour of each group, and allow us to optimize the app for each group to increase retention. Engagement is the key; turning that engagement into a revenue event or some other high-value event can come later. The more people you keep engaged in these channels, the more high-value events can occur. An example is referral: people who may not have created an account but are regular passive users may go on to contribute to its organic growth.
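
As a sketch of how you might derive these segments from a per-user analytics export (the field names here are placeholders; your export will differ):

    from collections import Counter

    def engagement_segment(sessions: int, has_account: bool, contributions: int) -> str:
        """Bucket a user into one of the four engagement segments above.
        The inputs are placeholders for whatever your export provides per user."""
        if contributions > 0:
            return "contributed"
        if has_account:
            return "created account"
        if sessions > 1:
            return "returned, no account"
        return "used once"

    # Example: tally segment sizes from (sessions, has_account, contributions) rows.
    users = [(1, False, 0), (4, False, 0), (7, True, 0), (12, True, 3)]
    print(Counter(engagement_segment(*u) for u in users))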

Secondly, the high-value events for each of these segments may be very different. They are different user groups with different behaviours, and what they find valuable may differ for each group.

How do I track it and what actionable insights can I get?

Check the number of people who return to your app. Some analytics tools, such as Flurry, can provide you with the number of users who are one-session users. Compare this across each iteration of your app and see if you can reduce the percentage of people who are one-session users. Segment this group so you can drill down into their behaviour. What events are they triggering? What events are they not triggering, compared to returning users who have not created an account? Can you improve the app to make one-session users less likely to leave? Some examples of things you may want to optimize based on what you discover (a small measurement sketch follows this list) -

  • Improve ways for non-account owners to refer your app or content.
  • Increase the visibility of non-account owners to account owners.
  • Increase the accessibility of account holders’ public contributions to non-account holders.
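
Here is a minimal sketch of that one-session measurement, assuming you can export session counts per user per app version (the row format is invented):

    from collections import defaultdict

    # (user_id, app_version, session_count) rows, as you might export them.
    rows = [("u1", "1.0", 1), ("u2", "1.0", 5), ("u3", "1.1", 1),
            ("u4", "1.1", 3), ("u5", "1.1", 8)]

    by_version = defaultdict(lambda: [0, 0])  # version -> [one-session users, total]
    for _, version, sessions in rows:
        by_version[version][1] += 1
        if sessions == 1:
            by_version[version][0] += 1

    for version, (one, total) in sorted(by_version.items()):
        print(f"v{version}: {one}/{total} users ({one / total:.0%}) had one session")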

Just providing more value at the start of your app experience can help you retain users and gain referrals. By identifying and segmenting these users, drilling down into their behaviours, and comparing them to ‘active users’, you should gain insights into what to focus on to improve your app (or website). By engaging ‘non-active’ users, it’s possible to increase value to ‘active’ users as well.

Image courtesy of Flickr, Ed Yourdon

Filed under analytics mobile analytics engaged users active users inactive users