The royalty rate that Motorola Mobility wants from Apple for using its patents is 2.25%, and Google are stating they will not change the policy. There are cries of “unfair”, but the patents are of a FRAND (Fair, Reasonable and Non-Discriminatory) nature, so is 2.25% excessive? Even Samsung are demanding 2.4%!
Apple have complained to ETSI (the European Telecommunications Standards Institute), and Microsoft are supporting Apple’s complaint, claiming that the rate is far too high.
So the real question is what is considered fair and is Apple’s complaint valid?
Why isn’t mobile the same as the wired world?
Mobile has been different from the outset. Spectrum is owned by governments, is licensed for specific uses and is a limited resource. This means the technology required to deliver ever larger amounts of data over such a narrow range of available frequencies requires investment, and a return on that investment.
Apple and Microsoft do not have substantial patents in this area. They have both come from the wired world, which has a different value chain and different (strong) players at each layer. There is no equivalent of Qualcomm in the wired world, and no gatekeepers, such as carriers, that can prevent your phone from being easily supported.
Is it really fair, what has history told us?
Apple previously lost a patent dispute with Nokia, probably involving patents of a FRAND nature. The settlement is believed to be worth around 4.5% of the cost of building each device; previous patent agreements have been around 5%.
Qualcomm won a patent case against Nokia, with Nokia settling for an alleged $1.8 billion and agreeing to pay a royalty of approximately 2%. Qualcomm’s average royalty rate is around 3.5% according to this chart. At the time, Nokia were also complaining that Qualcomm were not complying with FRAND.
The main thing to note is that these royalty percentages are above what Motorola and Samsung are demanding. In this context they seem fair… however, the main difference (in the Motorola case) is: a percentage of what? Motorola are asking for a % of the average sales price, whereas previous cases were about a % of the cost of building a device.
Here is a BOM breakdown of the iPhone 4S -
With a sales price of $599 for the 16GB and $699 for the 32GB version, Motorola are asking for around $13.48 and $15.72 per device respectively. Whereas at Qualcomm’s estimated 3.5% of BOM, those figures are $6.58 and $7.25 respectively. This is likely the driver of the ‘unfairness’ claim.
It’s not the % in itself, it’s what it’s a % of.
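The difference is easy to see with a quick calculation. Here is a minimal sketch of the two royalty bases; the BOM costs are back-calculated from the per-device figures above, so treat them as approximations:

```python
def royalty(rate, base):
    """Royalty due per device: a rate applied to a base price."""
    return rate * base

# Approximate iPhone 4S figures; BOM costs are back-calculated
# estimates from the $6.58/$7.25 numbers above.
SALES_PRICE = {"16GB": 599.0, "32GB": 699.0}   # average sales price ($)
BOM_COST = {"16GB": 188.0, "32GB": 207.0}      # bill-of-materials cost ($)

for model in SALES_PRICE:
    asp_fee = royalty(0.0225, SALES_PRICE[model])  # 2.25% of sales price
    bom_fee = royalty(0.035, BOM_COST[model])      # ~3.5% of BOM
    print(f"{model}: ${asp_fee:.2f} (of ASP) vs ${bom_fee:.2f} (of BOM)")
```

The same modest-sounding percentage yields roughly double the per-device fee when applied to the sales price rather than the build cost.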
How much are they asking?
Smartphone growth is forecast to slow to 22% in 2012. If Apple maintain their current market share, then the amount Motorola are looking for in 2012 is worth $1.7B.
Compare that to the BOM version, which is worth only around half a billion dollars and is likely to be less due to the decreasing prices of components over time.
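To see where numbers of this magnitude come from, here is a rough back-of-the-envelope sketch. The unit volume, blended sales price and blended BOM below are my own illustrative assumptions, not figures from the post, chosen only to show how a ~$1.7B versus ~$0.5B gap can arise:

```python
# Hypothetical 2012 assumptions - illustrative only
UNITS = 125_000_000   # assumed iPhone unit sales for the year
ASP = 620.0           # assumed blended average sales price ($)
BOM = 195.0           # assumed blended bill-of-materials cost ($)

asp_royalty = 0.0225 * ASP * UNITS   # 2.25% of sales price
bom_royalty = 0.0225 * BOM * UNITS   # the same 2.25% applied to BOM

print(f"ASP basis: ${asp_royalty / 1e9:.2f}B")   # ASP basis: $1.74B
print(f"BOM basis: ${bom_royalty / 1e9:.2f}B")   # BOM basis: $0.55B
```

The gap comes entirely from the base the percentage is applied to, not the percentage itself.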
Number of Patents
If we look at the LTE patent pool and see who owns what, this is another argument for the unfairness of the request. Although the dispute is not about LTE (where Apple has acquired Nortel’s patent pool), it puts the size of the patent pool, and the amount Qualcomm request versus what Motorola/Google are requesting, into perspective.
Is asking for $1.7B fair?
(Image from 401K)
One of the most important things when developing your mobile app (or web app) is determining how many people are returning to use it, and measuring whether the changes you are making are really improving it for the users. The number of registered users is not a great measure of how your app is engaging users. You could have 1 million users signing up but none of them may return to use the app after the first time.
A better measure is to see how many people are actively using your app. However, this number can get confused with the number of people who are new users, so you need to work out how many people stop using your app for each group of new users and when they stop using it. This makes it complicated to forecast the growth (or non-growth) of your app, making it hard to measure if your changes are having the desired effect of making your app sticky.
When is it time to get more users?
If you’re losing lots of users, you may not want to focus on acquiring lots of new ones, because they will likely only use your app once. So you need a good indication of when to start applying resources to getting more users.
I’ve created a basic Google spreadsheet that lets you set up some active-user goals and makes some forecasts for you. You can set a target number of active users for a certain point in the future and understand how many people you need to acquire to hit your goal.
This lets you decide if it is an appropriate time to turn on the taps for user acquisition. e.g. if achieving 100,000 active users in 50 weeks means acquiring 10,000,000 new users, it’s clearly not the time to plough financial resources into acquiring new users.
It will produce a chart like the above.
- The red line is the forecast of active users.
- The orange line is your real data.
- The blue bars are the number of new users you are acquiring for each data point (day, week, month.. it’s up to you).
- Target Active Users is the number of active users you hope to achieve
- Target Period is the time by which you are looking to reach that number of active users
- User Acquisition Growth Rate is the linear % increase in the number of users you need to acquire each time period.
- Total Users is the number of overall users you need to have acquired by the final period to hit your target number of active users
What do you need to Input?
You need to type in the target number of active users and the target period you hope to achieve it by. To generate the graph, you need to fill in the fields described below.
The first 2 fields are your baseline. It’s the number of new users you acquired in the last period and the number of active users you had in the last period.
Churn rate is the % of users you lose every period. e.g. if you acquired 100 users in a period and, in the next period, 30 of them are still active, your churn rate is 70%. Usually the churn rate is high for the first period and lower in subsequent periods; this is because in the first period people are trying out your app, and quite a few of them will decide it is not for them. Subsequent periods tend to have a more ‘natural’ churn.
So in this chart you need to apply a first-period churn rate and a second churn rate for all other periods. You should be able to extrapolate these figures from your analytics tool.
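The forecasting logic behind the spreadsheet can be sketched in a few lines. This is my own minimal reimplementation, assuming each cohort loses `first_churn` of its users after its first period and `ongoing_churn` of the remainder in every period after that:

```python
def forecast_active(new_users, first_churn, ongoing_churn):
    """new_users: list of users acquired in each period.
    Returns the forecast active-user count for each period."""
    active = []
    for t in range(len(new_users)):
        total = new_users[t]  # this period's new cohort is fully active
        for age, cohort in enumerate(reversed(new_users[:t]), start=1):
            # survivors: first-period churn once, then ongoing churn
            # applied (age - 1) more times
            total += cohort * (1 - first_churn) * (1 - ongoing_churn) ** (age - 1)
        active.append(round(total))
    return active

# e.g. acquiring 1,000 new users a week, 70% first-week churn,
# 10% weekly churn afterwards:
print(forecast_active([1000] * 6, 0.70, 0.10))
# [1000, 1300, 1570, 1813, 2032, 2229]
```

Note how, with constant acquisition, growth flattens over time: the active-user count approaches a steady state once the users lost to ongoing churn each period roughly equal the new users coming in.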
New Users and Active Users
The last thing to fill out is the number of new users you get every period. You can start by plugging in old data to see if your churn-rate estimates are accurate. Fill in the actual number of active users and you can see how good the prediction is, based on the churn rates you entered.
Fill in your new user forecasts and it should give you a forecast on the number of active users.
Testing your performance
When you get new data for a period, you can input the number of new users and the actual active-user number. Look at the numbers and the chart: if the real data starts diverging upwards from the forecast, then the changes you have made to your app are having a positive effect on your churn rate.
You can reuse the chart with your new churn rates to re-estimate the targets you are trying to achieve.
If you find this useful, or know someone who may find this useful, spread it around.
The example spreadsheet can be found here
Image from Cillian Storm
Blackberry is dead! Not enough apps! Actually, part of the reason for their demise is that changing enterprise and carrier trends caught up with RIM’s previous decisions, so let’s begin by delving into RIM’s history.
History & Strategic Shift
RIM focused on enterprise early. Starting out with a 2-way paging device, they created push email and transformed into a smartphone company in 2003 by focusing on it. This was their first critical decision: they moved away from reseller channels and relied completely on carriers to sell and distribute their phones.
RIM positioned its BlackBerry as a carrier‐friendly platform via its data efficiency. Most of the world had low bandwidth data connectivity and blackberry was dominant with their compression algorithms.
With its smaller bandwidth footprint, it costs enterprises less when there is no all-you-can-eat data plan, and carriers save money because they do not need to spend as much to increase the capacity of their networks for this service (see this article for more info - http://www.rysavy.com/Articles/2009_01_27_Rysavy_EMail_Efficiency.pdf)
Then, in 2006, they introduced Blackberry Messenger (BBM), an excellent solution for enterprise: it allowed secure messaging for enterprise customers, reducing their costs, which increased the incentive to purchase data packages from the carriers. Security and email became synonymous with Blackberry.
Chasing the golden egg
RIM started to chase the consumer market and relied heavily on BBM to attract the youth market. The attraction for this market was reduced SMS costs. RIM got caught up in the consumer smartphone battle losing market share.
By spreading themselves out, they lost focus on how the smartphone world was changing, causing delays in execution.
- Enterprise is shifting away from owning the mobile device for its employees towards allowing employees to use their own phones.
- BBM in the youth market cannibalizes carrier SMS revenues, reducing the incentive for carriers to push Blackberries. After Android came along, Verizon didn’t push Blackberries with the same vigor.
- Apps such as WhatsApp, Line, Talkbox etc. give the youth a ‘newer’ phone with an equivalent low cost messaging system so they can shift away from BBM. Moreover, it’s not locked down to only one manufacturer which means they can ‘free message’ friends using different devices.
The new CEO has stated they will have a consumer focus, particularly at the entry point where people upgrade from feature phones to smartphones. This will be incredibly tough for Blackberry who have to garner more support for the consumer app ecosystem. Smaller development houses do not have the resources to support multiple platforms and they will be selective over which platform to develop for first. Blackberry is not the first on the list.
Part of the reasoning for their focus on consumers is that they rely heavily on carriers to sell their devices, and the carrier battlefield is currently focused on the consumer rather than the enterprise market. What might exacerbate the problem is if they plough their resources into consumers and end up losing the enterprise customer.
Cloud, multiple devices and security.
These are areas that are becoming increasingly important. For enterprise: the deployment of private cloud solutions, online collaboration and working on the go. For security: cyber attacks and ‘cyber espionage’ are on the increase. And for the prosumer: carrying multiple devices is a pain. Three distinct areas that need addressing, which RIM can approach.
Here’s what they could try -
- Expand the scope of their cloud services beyond messaging, email and IT administration. Consider CRM, or 3rd party developer solutions that can wrap around their cloud architecture
- Wrap up corporate collaborative productivity tools within their ‘secure cloud’. They already have a great reputation for security.
- Split the blackberry so that it’s both a personal and enterprise tool. If a phone gets stolen, there is extra security for the ‘enterprise’ half of the phone. The prosumer can then install apps etc. onto the ‘consumer’ half of the phone. Many people don’t want to mix their professional contacts with personal friends.
- Separate billing for different profiles on one device. An extra value service which carriers can benefit from which provides effective cost management for IT departments, and still allows the phone to be used as a personal device.
This addresses a relevant market for RIM with their Blackberry products, and targets both the enterprise and prosumer markets with incentives for carriers to continue to push the device. If more data solutions go via Blackberry’s cloud, it provides even more incentive for carriers because of RIM’s data efficiency. Even recently, carriers such as Docomo have been looking for data-efficient solutions.
Photo by arrayexception
There was a time when smartphones had many different form factors: sometimes iconic, sometimes innovative, even crazy designs. However, in the past few years designs have become boring. Almost every new smartphone in the past 3 years is either a black/white touchscreen slab or a QWERTY keypad.
I will look back at some of the designs that made people look twice at your smartphone, maybe in awe, sometimes in comedy. (I’ll try not to repeat similar designs)
The Kyocera 6035, flip phone with touch screen (left)
The Nokia 9210 is a clamshell with keyboard (right)
Sony Ericsson P800 a flip phone with a touchscreen (left)
One of the first camera enabled smartphones was the Nokia 7650 with a slide (right)
Nokia 3650 with its crazy rotary keypad (left)
Palm Treo 180 with a full QWERTY keyboard and see-through flip (middle)
Blackberry 5810, full QWERTY slab (right)
Nokia N-Gage aka the Taco. Designed to be a handheld gaming device (left)
Motorola A920 - Touchscreen slab (middle)
Siemens SX1 - With the numerical keypad on the sides (right)
Nokia 7610 - Although a candybar form factor, it has a funky keypad design.
Nokia N90 is a clamshell with a rotatable camera barrel and rotatable screen.
HTC Startrek - thin clamshell smartphone (left)
Nokia N93 - Transformer phone, camcorder, flip and laptop modes with a 3X optical zoom
Nokia N95 - Double slider
iPhone 2G - Slab
T-Mobile G1 (HTC Dream) - Sliding QWERTY
This is the era of Touchscreen Slabs.
I think it’s about time smartphone industrial design had a shake-up and we get some of the crazy, interesting head-turning designs back.
In my previous post, I talked about getting retention and engagement metrics out of split testing.
Here’s a practical example of how to do A/B testing using Flurry.
Create an App_Launch event that happens whenever your app is started or brought back from the background
When you log the event, pass it with a parameter A(name of split test) or B(name of split test). You can decide in advance if the app should use the ‘A’ version or the ‘B’ version using some device variable such as the MAC address or UDID.
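One simple way to make that decision is a deterministic hash of a stable device identifier, so a device always falls in the same group without storing any state. A sketch of the bucketing logic (the identifier and test name here are hypothetical; the actual logging call depends on your platform SDK):

```python
import hashlib

def split_test_group(device_id: str, test_name: str) -> str:
    """Return 'A(test name)' or 'B(test name)' for this device."""
    # Hash the test name together with the device id so different
    # tests split the same users independently of each other.
    digest = hashlib.md5(f"{test_name}:{device_id}".encode()).hexdigest()
    bucket = "A" if int(digest, 16) % 2 == 0 else "B"
    return f"{bucket}({test_name})"

# Log this string as the App_Launch event parameter:
group = split_test_group("example-device-udid", "iap button position")
print(group)
```

Because the assignment is a pure function of the identifier, it survives app restarts and reinstalls of the analytics layer without any extra bookkeeping.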
For the purpose of this post, I will use a conventional ‘marketing’ conversion split test - the position of the in-app purchase button - as the illustration. However, it could be anything: the number of coins a new user receives on starting a game, the order of the tabs at the bottom, the layout of a particular screen, etc. In this example, people in group A have the in-app purchase at the top of the screen, while for people in group B it pops up. Marketing wants to know which positioning maximises conversions… but we also want to see the impact on engagement and retention, which I will talk about in a later post.
1) Create 2 segments inside Flurry
- Go to the Manage -> Segments and press Create New Segment.
- Press Add Custom Event and click on Only include users who triggered the event with these parameters and values
- In the triggering event name put in App_Launch (or whatever you called it)
- In the Parameter name put in A(name of split test)
- Do the same for B
You have now created 2 segments that can dissect the user behaviour of both of these parties.
2) Check the conversion
- Go to the Usage -> New Users Table and select the App version for the split test. This gives you the total number of new users who used the app with the split test in place. But you want to segment them, so select the ‘A’ Segment. In my example there are 6604 new users in segment ‘A’.
- Go to the Events Summary screen and select the A segment. Click on the Event Statistics icon and you will get the number of people who were part of group A who clicked on the in-app purchase.
This shows 608 people converted. That’s a conversion rate of just below 10%.
Do the same for B, and now you can compare the conversion rates. In my example, B has 6320 users and 478 conversions. Use a tool such as this online calculator and we find that the difference is statistically significant.
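If you would rather check the result yourself than rely on an online calculator, a two-proportion z-test on the same numbers is straightforward. This is a standard statistical test, not anything Flurry-specific; a sketch using only the standard library:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Group A: 608 conversions out of 6,604; group B: 478 out of 6,320
z = two_proportion_z(608, 6604, 478, 6320)
print(f"z = {z:.2f}")  # z = 3.37; |z| > 1.96 is significant at the 95% level
```

A z of about 3.4 corresponds to a p-value well below 0.05, agreeing with the calculator's verdict.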
3) The Bonus: Engagement and Retention Metrics
If you go back to the event summary, you can download all the A and B data in CSV format. Go ahead and do that.
Then create a spreadsheet with 3 sheets: A, B and statistical significance. You can then build a spreadsheet that tests your whole app across all its events by adding a statistical test to each event. I used this basic formula -
=(((0.5*('Split Test A'!B2+'Split Test B'!B2))-'Split Test A'!B2)^2)/(0.5*('Split Test A'!B2+'Split Test B'!B2))+(((0.5*('Split Test A'!B2+'Split Test B'!B2))-'Split Test B'!B2)^2)/(0.5*('Split Test A'!B2+'Split Test B'!B2))
But there are plenty of other formulas that may be more suitable for you.
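For reference, the spreadsheet formula above is a chi-square statistic with one degree of freedom, using the mean of the A and B counts as the expected value. The same computation in Python, so you can check individual cells against it:

```python
def chi_square_ab(a, b):
    """Chi-square statistic (1 d.f.) comparing counts a and b,
    with the expected count taken as their mean."""
    expected = 0.5 * (a + b)
    if expected == 0:
        return 0.0
    return (expected - a) ** 2 / expected + (expected - b) ** 2 / expected

# 3.84 is the 95% critical value at one degree of freedom
print(f"{chi_square_ab(608, 478):.2f}")  # 15.56 -> significant
print(f"{chi_square_ab(120, 118):.2f}")  # 0.02  -> not significant
```

Note that this treats the raw event counts as directly comparable, which assumes the A and B groups are of similar size; if they are not, normalise the counts first.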
Next, colour code the spreadsheet so that any significant differences are highlighted and then you can see the impact of the A/B test beyond the scope of just the conversion.
This can bring you many different insights. For example, conversion may be higher for in-app purchases, but the number of people recommending or sharing the app via a tweet or Facebook button decreases for that group.
Remember to check in which direction the result is statistically significant.
For most people, A/B testing analytics (or split testing) is all about conversion. How can I redesign this webpage to get more people to click through to my goal? How can I get more people to sign up? It’s all about acquisition, acquisition, acquisition… actually, there’s more to it than that.
For mobile app analytics, A/B testing tools generally support campaign or content optimization focused on conversions, clicks or triggers. This is fine if what you are doing is marketing, acquisition or activation focused, but it doesn’t really help much with engagement or retention. What if you wanted to find out the impact of:
- Giving new players to your game different starting statistics?
- Changing the order in which the app screens are presented?
- Modifying a tutorial page?
What change will increase retention, not just conversions? What change will increase use of other features?
What conventional A/B testing doesn’t tell you -
- Engagement - Which users are more engaged with my app as a whole, or with feature X of my app, because of A or B (besides the feature/layout under A/B test)?
- Retention - Which users are more likely to be retained (increased retention) because of A or B?
- Features - Given that A or B increased retention, what features are used less or more with each group?
- Deeper Activity - Does A or B increase activity anywhere else in my app?
In fact, there’s a very easy way to do this using event attributes or event parameters (depending on the app analytics tool you use). In your app framework, assign people in the ‘A’ group an event attribute/parameter of ‘Test Group A’ (and likewise for ‘B’) in their App Launch event, or whatever naming convention makes sense for you. You can give unique names to different tests.
By doing this, you have segmented your users into A & B and now you can test the impact across every event/metric in your app.
Segment and extract all the data into a spreadsheet and you can then statistically test them like this -
In the above chart, anything in green was tested as statistically significant. This gives a deeper insight into what other changes occurred due to the A/B test, and means you can design A/B tests that go beyond ‘click’ or ‘tap’ conversions at the top level, and remove features that don’t add to engagement, retention or revenue.
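The colour-coded spreadsheet can equally be reproduced in code. A sketch that flags significant events, assuming you have exported per-event counts for each segment (the event names and counts below are hypothetical, and the chi-square-on-means test mirrors the spreadsheet approach):

```python
def chi_square_ab(a, b):
    """Chi-square statistic (1 d.f.) with expected count = mean of a, b."""
    expected = 0.5 * (a + b)
    if expected == 0:
        return 0.0
    return (expected - a) ** 2 / expected + (expected - b) ** 2 / expected

def flag_significant(events, critical=3.84):
    """events: {event_name: (count_A, count_B)}.
    Returns {event_name: (statistic, significant?)}; 3.84 is the
    95% critical value at one degree of freedom."""
    return {name: (round(chi_square_ab(a, b), 2),
                   chi_square_ab(a, b) > critical)
            for name, (a, b) in events.items()}

events = {  # hypothetical exported counts
    "iap_click": (608, 478),
    "share_tweet": (231, 274),
    "tutorial_done": (450, 441),
}
print(flag_significant(events))
```

Scanning every event this way is what surfaces the side effects: an event you never intended to test may move significantly between the groups.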
Secondly, using tools which provide retention or lifecycle metrics, you can create a segment using the event parameter and see which version really does provide more retention, allowing you to make more discoveries -
- Conversions were higher with A, but yielded lower retention. Customer lifetime value may be lower because of this, so long-term revenue could decrease
- Conversion was higher with A, but higher value conversion with a deeper feature is higher in B
- There is no significant difference in conversion between A or B, but retention is higher with A
- There is no significant difference in conversion but engagement is higher with B
These are some of the potential discoveries just looking beyond the basic A/B test.
Do you have any mobile analytics tricks to share?
Image by Search Engine People Blog
Techcrunch made this claim (for the US market), but it’s far too early to call, and history tells us that the mobile and PC industries are not the same. Android and iOS have only been around a few years and the market has been swinging around wildly in its young history. If we look at mobile market share for the ‘top end’ over the past 20 years, there have probably been 5 different dominant OSs/manufacturers.
Motorola -> Palm -> Blackberry -> Apple -> Android
In the USA, Motorola dominated early on and then the ‘advanced’ phones marched in. By 2004 Palm were the dominant smartphone platform.
Fast forward 3 more years to 2007 and Blackberry take the US title.
The iPhone was introduced in 2007 and took 2 years to take the mantle, by September 2010 it was number 1 in the USA
Currently, it’s Android, which assumed the position in early 2011
The current incumbents (Apple & Android) are less than 5 years old! There is still plenty of time for change, and change has been happening fast. That’s an average of just over 4 years for each incumbent, but the trend suggests change is happening faster.
Apps make them sticky
“It’s all about the apps”. Actually, apps are only a part of the equation, and while having an ‘ecosystem’ worked well in the PC industry… this isn’t the PC industry. Here are a few examples and reasons why apps are not the killer feature that makes a platform sticky, as well as some other strategic factors.
Game Consoles are a prime example of people abandoning platforms, upgrading to a new one and buying a bunch of new games (apps). The next cycle of gaming consoles is coming soon, that means another new sales cycle of new games and apps.
Developers will always be looking for recurring revenue, and upgrading or changing an ecosystem is a good way to generate sales. As platforms evolve, developers want to tie users into their systems in much the same way that platforms want to control their users. Apps are the platform way; Facebook do it via their social graph. The larger developers will want to find a way to wrestle some control over ‘their’ users. Potential methods include web apps; these are still not mature enough for mobile because ‘native apps’ perform better, but when they are ready they will give users a lower hurdle to switch platforms.
Carriers do not want a single ecosystem in control. As soon as this happens, all the revenue will be taken away from them because they will lose bargaining power if everyone demands the same device. It’s in their interest to promote competition and they do this with their subsidies. They are the biggest difference between the PC industry and the mobile industry because they act as the gatekeeper to your wireless service. They can and will help swing platform battles. Nokia failed with Symbian in the USA… how many Nokia smartphones were subsidised by US carriers?
Manufacturers do not want to be owned by a platform that they have no control over. If a platform dominates, then the profits will be extracted at the platform level and they lose loyalty towards their brand. Nokia abandoned Symbian and Meego, yet they still announce that they will work on Meltemi. Samsung are working on Bada and now Tizen. In China there are Tapas and OMS. Manufacturers also do not want to be controlled by the carrier; a prime example is the amount of control that Docomo has over Japanese manufacturers - they set the specifications and the standards, and take most of the profits.
There are far too many layers battling over control for it to come down to consumer choice. Strategic alliances and enmities are created all the time. The original formation of Symbian was a strategic alliance against Microsoft. The split of Symbian and the move to Android was a strategic move away from the control of Nokia (the major Symbian shareholder).
The smartphone war has not ended.. it’s still in its early stages.
I was reading this blog post by Fred Wilson and it occurred to me that engaged users are not just the people who create accounts, regularly log in and contribute, but also the different cross-sections of people who engage with your app on their first visit, or on returning visits at a passive level. The gist of the linked blog post is about Twitter and how many people visit it in a month.
- 400M active users per month
- 100M users who log in
- 60M users who tweet
Usually, the 300M people who didn’t create an account are not measured as active or engaged users. If we ignore them, we miss out on the potential insights from those 300M users. If we look only at the drop-out points along the account-creation funnel and see them only as failed conversions, we ignore them as active users and fail to recognise the value they are gaining from just observing. These 300M should be segmented further (e.g. into returning and non-returning visitors) and we should analyse what they are doing to gain insights into their behaviour.
How About Mobile Apps?
One key difference between websites and mobile apps is re-discoverability. For many people, if they don’t like an app, they will delete it. For websites, a new blog post, a link from a trusted source or a search engine can bring people back, but for apps, once you have deleted one, the hurdles to re-find and re-install it are much higher. This poll shows that 26% of people uninstall an app after using it only once.
For mobile apps that require an account before the app can be used, this is a problem. The app could be redesigned to provide value to a user before they create an account. Then we can segment the groups, do some deeper analysis, and potentially reduce the number of app uninstalls. This opens up the chance to retain these “unconverted” users.
What are we measuring?
We are segmenting the users into different levels of engagement, in a similar way to how games can segment people into different ‘levels’ based on their progress in a game. This way we can discover different types of behaviours and insights to potentially convert to higher value actions.
The segments we are effectively using here are
- Used the app once only
- Used the app more than once (or on multiple days) but never created an account
- Created an account
- Contributed to system
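These segments are simple enough to assign in code. A sketch, assuming you can extract per-user session counts, account status and contribution counts from your analytics data (the function name and labels are my own):

```python
def engagement_segment(sessions: int, has_account: bool,
                       contributions: int) -> str:
    """Bucket a user into one of the four engagement segments above,
    from most to least engaged."""
    if contributions > 0:
        return "contributor"
    if has_account:
        return "account holder"
    if sessions > 1:
        return "returning, no account"
    return "one-session user"

print(engagement_segment(sessions=1, has_account=False, contributions=0))
print(engagement_segment(sessions=5, has_account=False, contributions=0))
print(engagement_segment(sessions=8, has_account=True, contributions=3))
```

Running each user through a classifier like this gives you segment sizes over time, which is the raw material for the per-segment analysis below.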
These segments give us insights into the behaviour of each group, and allow us to optimize the app for each group to increase retention. Engagement is the key; changing that engagement into a revenue event or some other high-value event can come later. The more people you keep engaged in these channels, the more high-value events can occur. An example is referral: people who may not have created an account but are regular passive users may go on to contribute to the app’s organic growth.
Secondly, high-value events for each of these segments may be very different. They are different user groups with different behaviours; what they find high-value may differ for each group.
How do I track it and what actionable insights can I get?
Check the number of people who return to your app. Some analytics tools, such as Flurry, can provide the number of users who are one-session users. Compare this across each iteration of your app and see if you can reduce the % of one-session users. Segment this group so you can drill down into their behaviour. What events are they triggering? What events are they not triggering, compared with returning users who have not created an account? Can you improve the app to make one-session users less likely to leave? Some examples of things you may want to optimize based on what you discover -
- Improve ways for non-account owners to refer your app or content.
- Increase visibility of non-account owners to account owners.
- Increase accessibility of public contributions by account holders to non account holders
Just providing more value at the start of your app can help you retain users and gain referrals. By identifying, segmenting and drilling down into their behaviours, and comparing them to those of ‘active users’, you should gain insights on what to focus on to improve your app (or website). By engaging ‘non-active’ users, it’s possible to increase value for ‘active’ users as well.
Image courtesy of Flickr, Ed Yourdon