Engagement score is an at-a-glance metric that Totango adds to each account.
Its purpose is to give you a sense of how active and engaged a particular account is with your cloud application.
An account’s Engagement Score is calculated from the aggregate amount of time that all of the account’s users spent in the application over the last 14 days.
This number is then normalized across all accounts of the same life-cycle stage to produce a number between 0 – 100.
Accounts with an engagement score of 0 were the least active during this period, whereas those with a score of 100 were the most active.
Example: For a setup with three lifecycle stages:
An account in the Paying stage with an Engagement Score of 99 is the most active Paying customer (together with any other accounts with the score of 99)
An account in the Activated stage with an Engagement Score of 50 is more active than half of the current Activated accounts
A New Trial account with an Engagement Score of 0 is the least active at this stage (usually reflecting a period of 14 days of no activity)
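The normalization described above can be sketched as a percentile-style ranking within each lifecycle stage. This is a minimal sketch, assuming each account is a dict with `id`, `stage`, and a `minutes_14d` total of time spent in the app over the last 14 days (the field names are ours, not Totango’s):

```python
from bisect import bisect_left

def engagement_scores(accounts):
    """Score each account 0-100 by its activity rank within its
    lifecycle stage, over a 14-day window of aggregate time spent."""
    by_stage = {}
    for acct in accounts:
        by_stage.setdefault(acct["stage"], []).append(acct)
    scores = {}
    for group in by_stage.values():
        times = sorted(a["minutes_14d"] for a in group)
        denom = max(len(times) - 1, 1)  # avoid division by zero for a lone account
        for a in group:
            # fraction of same-stage accounts that were less active;
            # tied accounts share the same score
            rank = bisect_left(times, a["minutes_14d"])
            scores[a["id"]] = round(100 * rank / denom)
    return scores
```

The least active account in a stage lands at 0 and the most active at 100, matching the examples above.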
Take a step back and you’ll realize that user engagement is the single most important metric in any SaaS business.
In a world where customers evaluate your offering at their own pace and can cancel their subscription at any time, the best way to maximize your business potential is to make sure users are engaged and see value in your offering.
And the best way to ensure that is to create a metric which can be monitored for change and improvement on an on-going basis.
It has been somewhat surprising for us to see that most SaaS companies (and we’ve spoken to hundreds by now) are at a loss as to how to actually measure their users’ engagement with their offering and application. In fact, when pressed, many admit that, important as it is to their business, they actually *don’t* measure user engagement, simply because they could not figure out a systematic way to do so.
Since we’re here to help SaaS companies do better in this area, here’s our 3-step guide to getting started.
Step 1: Segment your users into lifecycle stages
The signals of engagement for a trial user who has just signed up are very different from those of an established customer. Trying to come up with a single engagement metric that applies across all lifecycle stages is practically impossible. Consider the following (in a fictitious SaaS application):
User1: Signed up last week, has logged in 5 times, created a project with some content, and reviewed our knowledge base 3 times
User2: Paying customer for a year. Last week, logged in 5 times, created a project with some content, and reviewed our knowledge base 3 times
Clearly User1, as a new trial user, is exhibiting a good level of engagement, whereas the behavior of User2, a year into their subscription, is concerning at best.
We recommend you break down the user lifecycle into at least the following stages. We also suggest some things you’d want to look at as you compute engagement at each stage.
You’d want to apply a different engagement metric to users depending on where they are in the process.
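To make that concrete, here is a toy sketch of per-stage rules. The thresholds and return labels are invented for illustration; the point is only that the identical week of activity reads differently depending on the stage:

```python
def trial_engagement(logins, projects, kb_visits):
    """For a brand-new trial user, meaningful exploration is a good sign.
    Thresholds are made up for illustration."""
    return "high" if logins >= 3 and projects >= 1 else "low"

def paying_engagement(logins, projects, kb_visits):
    """For a year-old paying customer, the same light usage is a warning
    sign rather than a success. Thresholds are made up for illustration."""
    return "high" if logins >= 10 and projects >= 3 else "concerning"
```

Feeding User1’s and User2’s identical week (5 logins, 1 project, 3 knowledge-base visits) into the stage-appropriate rule yields “high” for the trial user and “concerning” for the paying customer, as argued above.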
Step 2: Create a scale
Engagement is not a binary metric. Users are not simply engaged or unengaged; they fluctuate along a scale. We recommend creating the following buckets:
The time window to measure varies; we typically suggest 14 to 30 days, depending on the application’s complexity.
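As a hedged sketch of such bucketing, using bucket names that appear elsewhere in this post (“Highly Engaged”, “Lightly Engaged”, “Fading”, “Gone”) with made-up day-count thresholds and a data layout (a set of distinct active dates per user) that we assume for illustration:

```python
from datetime import date, timedelta

WINDOW_DAYS = 14  # anywhere from 14 to 30 days, depending on app complexity

def bucket(active_dates, today):
    """Assign a user to an engagement bucket based on how many distinct
    days they were active inside the rolling window. Bucket names follow
    this post; the thresholds are invented."""
    start = today - timedelta(days=WINDOW_DAYS)
    recent = {d for d in active_dates if d >= start}
    if not recent:
        return "Gone"
    if len(recent) >= 8:
        return "Highly Engaged"
    if len(recent) >= 3:
        return "Lightly Engaged"
    return "Fading"
```

Tuning the window and thresholds per application is exactly the kind of refinement Step 3 below calls for.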
For a top-line view, you eventually want to end up with a dashboard similar to that shown below.
The chart shows the number of total, highly engaged, and lightly engaged users over time. For convenience, we overlay important milestones (product releases, marketing campaigns, etc.) so we can see their effect on our users.
For example, we see a good pickup of total activated users after launch. Growth, however, is mainly in lightly engaged users. Important milestone 1 made almost no impact (maybe it wasn’t that important after all?). Important milestone 2, on the other hand, clearly increased the number of highly engaged users (we should do more of that).
Step 3: Constantly refine & improve
Your engagement metrics should not be static but should evolve over time. You should constantly “test” them against users’ eventual decisions to purchase or cancel their subscription. If they don’t provide a good enough prediction of what a user is likely to do with their account, the metric and its underlying formula should be tweaked.
“Highly Engaged” Trial users should convert to paying accounts at a very high rate
“Gone” and “Fading” Paying users will tend to churn if left unchecked
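One simple way to run that test is to compare eventual outcomes across buckets and check that the expected ordering holds. A sketch, with illustrative field names of our own:

```python
def conversion_by_bucket(users):
    """Fraction of users in each engagement bucket who later converted.
    Each user is a dict with a "bucket" label and a "converted" flag
    (field names are ours, for illustration)."""
    totals, wins = {}, {}
    for u in users:
        b = u["bucket"]
        totals[b] = totals.get(b, 0) + 1
        wins[b] = wins.get(b, 0) + int(u["converted"])
    return {b: wins[b] / totals[b] for b in totals}
```

If “Highly Engaged” trials do not convert at a markedly higher rate than “Gone” ones, the underlying formula is not predictive and needs tweaking.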
Measuring engagement can be tricky, but is absolutely essential for success in a SaaS environment.
A corporate-wide engagement metric helps:
Product teams improve the product’s value to customers
Sales teams focus on the trial accounts that matter most
Customer-success teams identify and proactively manage paying customers
Marketing teams bring in more qualified, relevant leads
It should be part of every SaaS organization’s core competency. Get it implemented in yours today!
Do you know how to measure your Customer Engagement? Our SaaS Dashboard can easily do that for you! Try it now for FREE
Over the next few weeks, Totango will be posting a blog series on best practices for measuring conversion rates of trial usage for Software-as-a-Service (SaaS). Trial conversion is arguably the single most important business metric for SaaS companies, since the model is based on two key parameters: customer acquisition cost and customer lifetime value. Trial conversion moves customers from the acquisition phase to the lifetime-value phase, and as more potential customers become paying customers, the customer acquisition cost goes down while the customer lifetime value goes up. Simply put, the ratio between customer lifetime value and customer acquisition cost is the entire profit of a SaaS company.
It is important to make sure that the measurement of trial conversion satisfies three basic criteria:
Simple to measure;
Simple to understand;
Unfortunately, trial conversion is not that simple to measure correctly (most organizations do it, but haphazardly) because there is no “single source of truth” per se. That is, trial conversion comes from multiple business processes (marketing and lead generation, in-house sales, and the product itself), which muddies the ability to measure it definitively. As a result, to get an accurate trial conversion number, organizations need to make sure that all the data collected is aligned among the business processes mentioned above.
The second challenge is “noise,” or trials that are “dead on arrival.” These users may have signed up for a trial but have no intention of buying. They are just playing with the software because they can; it could be for educational reasons, or it could be for other reasons. Taking these “dead on arrival” trials into account creates a very blurry picture, which is difficult to take action on.
Considering the challenges of measuring trial conversions (and the need for simplicity), the first step is to define the active, or effective, trials (trials that came with the intention to buy and are now evaluating the service) and weed out the “dead on arrival” trials. There are different ways to do this, of course, but one example could be counting a trial as active based on a second day of usage, or perhaps based on what the user is actually doing. Once the SaaS organization defines an active user, a baseline can be established: take the current number of trial conversions (perhaps taking historical information into account as well, if available) and set metrics around that.
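A minimal sketch of the second-day-of-usage filter mentioned above (the data layout, a list of active dates per trial, is our assumption, not a prescribed schema):

```python
def effective_trials(trials):
    """Keep only trials whose user came back for a second distinct day
    of usage; single-day visitors are treated as "dead on arrival"."""
    return [t for t in trials if len(set(t["active_dates"])) >= 2]
```

Conversion metrics and the baseline are then computed over this filtered list rather than over raw signups.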
With a baseline set that weeds out “dead on arrival” trials, organizations can tweak the service they sell or the various parts of their sales and marketing processes to improve trial conversions. Perhaps the organization needs to focus on marketing to get better leads because the current leads aren’t good enough. It could be that the sales process is not effective and it needs to be improved. Or it could be that the service itself needs improvement. Ultimately, the SaaS organization needs to measure continuously in order to put a finger on the right problem.
Imagine an organization that had, for the duration of July, 1,000 new signups for trial. Out of those accounts, 10 ended up “converting”. On the face of it, the conversion rate is 1%.
However, dig a bit deeper and in many cases you will see that many, if not most, of those 1,000 trials never had any buying potential at all, as evidenced by the fact that they never did a serious evaluation of the service.
(Note: it would be nice if numbers in real life were so round and simple to calculate in one’s head!)
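The arithmetic, using the post’s round numbers. The count of 100 effective trials is a hypothetical figure we supply for illustration, consistent with succeeding with 1 out of every 10 real prospects:

```python
signups = 1000        # trials started during July
conversions = 10      # trials that became paying customers
raw_rate = conversions / signups            # 10 / 1000 = 1%

# After weeding out "dead on arrival" trials, suppose 100 effective
# trials remain (hypothetical, for illustration):
effective = 100
effective_rate = conversions / effective    # 10 / 100 = 10%
```

The same 10 conversions look like a 1% funnel against raw signups but a 10% funnel against effective trials.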
Why is this important? First, because it gives a truer indication of what is going on in the sales team’s pipeline (they are succeeding in selling to 1 out of every 10 prospects, not 1 out of every 100), and it is easier to motivate people to improve a metric they intuitively feel is true.
But that is not all: in our next post, we’ll explore what the trial conversion metrics mean and how SaaS companies can best act on the data that is collected to increase conversion rates.