Back in September of last year, we asked people on our email list just one question:

▸ Considering your experience as a whole, how would you rate Saylor Academy?

From the 750 people who got back to us, here are the numbers (1 is a “very low” rating; 10 is “very high”):

Rating:  1      2      3      4      5      6      7      8      9      10
Share:   2.1%   1.3%   2.9%   2.4%   5.2%   6.2%   10.9%  21.1%  21.1%  26.9%

In early March of this year, we repeated the survey with our email list, but changed the question to one that gets at the same idea (how the Saylor Academy community feels about Saylor Academy) but is actually pretty different; namely:

▸ How likely are you to recommend Saylor Academy to a friend?

And here are the aggregate numbers from 1,537 votes cast:

Rating:  1      2      3      4      5      6      7      8      9      10
Share:   6.8%   2.3%   1.3%   1.5%   2.3%   2.7%   5.8%   11.6%  12.0%  53.7%

Okay. So…why did we do this? What does it all mean? What good is a single-question survey that doesn’t really go deep on anything or try to segment the responders in any way?

Well, for one thing, a single-question survey gets responses (80.3% of those who viewed the first survey ultimately answered it, which is a solid response rate). Once you start asking too many questions, your response rate goes down; pretty soon, only the people who are particularly enthusiastic, particularly invested, or particularly fond of surveys bother to respond. People with criticism to get off their chest will also be disproportionately represented. In short, a one-question survey is more likely to get a broad range of responses; a longer survey is more likely to garner committed and polarized responses.

To get the most accurate rating, we wanted a lot of people to respond, but the ratings above come with these caveats: the recipients are those who have chosen to be registered students with Saylor Academy, have chosen to receive additional emails from us, have chosen to open one of those emails, and have chosen to provide a response. That is a fairly committed and pro-Academy group of people, but still a pretty diverse one, and the range of responses bears that assumption out pretty well; we know, for instance, that many people on our list have not yet started a Saylor Academy course, or signed up but ultimately chose to invest their study time elsewhere.

For another thing, we learned of a concept called Net Promoter Score courtesy of some friends of ours. The basic idea is pretty simple: 9s and 10s are probably “promoters”; 1s through 6s are probably “detractors”; 7s and 8s are favorable but relatively neutral in terms of how or if they talk about you to others. You just add up the total percentage of your promoters, subtract the total percentage of your detractors, and you have a number within a range of -100 to +100 that tells you roughly how well you are doing. Note we say “roughly”, lest some of you 6s, 7s, or 8s out there say “I recommend you guys all the time!” This ain’t science. Perhaps it seems like the 7s and 8s are getting ignored entirely, but they are your ready reserve of promoters-or-detractors-to-be and the size of this group affects the possible range of your NPS® significantly — in this system, everyone matters.

This way of grouping ratings also tells a slightly different story than a simple bar graph would. A bar graph of these ratings is a nice way of patting ourselves on the back — it’s a skyward trend as one heads from 1 to 10! But that long tail from 1 to 6 conceals a sizable segment.

NPS from September 2015: 28

NPS from March 2016: 48.9
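If you want to check the arithmetic yourself, here is a minimal sketch in Python that applies the promoter-minus-detractor formula to the rounded percentages above (the function and variable names are ours, purely for illustration; this is not a tool we actually use). Working from the rounded percentages lands at roughly 27.9 and 48.8; the published 28 and 48.9 reflect the unrounded vote counts.

    # A back-of-the-envelope Net Promoter Score check: promoters (9-10)
    # minus detractors (1-6), using the rounded percentages from the tables above.
    def nps(percent_by_rating):
        promoters = sum(p for rating, p in percent_by_rating.items() if rating >= 9)
        detractors = sum(p for rating, p in percent_by_rating.items() if rating <= 6)
        return promoters - detractors

    september_2015 = {1: 2.1, 2: 1.3, 3: 2.9, 4: 2.4, 5: 5.2,
                      6: 6.2, 7: 10.9, 8: 21.1, 9: 21.1, 10: 26.9}
    march_2016 = {1: 6.8, 2: 2.3, 3: 1.3, 4: 1.5, 5: 2.3,
                  6: 2.7, 7: 5.8, 8: 11.6, 9: 12.0, 10: 53.7}

    print(round(nps(september_2015), 1))  # 27.9, reported above as 28
    print(round(nps(march_2016), 1))      # 48.8, vs. 48.9 from the unrounded counts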

Here are the three primary groups visualized in charts:

[Charts: promoters, neutrals, and detractors for September 2015 and March 2016]

Some thoughts on these numbers

First, you might want some context for what these numbers mean in comparison to other organizations and other industries, including major brands you’re familiar with and might love…or not. The website NPS Benchmarks lets you browse scores; check out the “Industry overview” section for the major categories.

What do Saylor Academy’s numbers mean in comparison to each other? Direct comparisons are difficult for a few reasons. First, back in September, we knowingly asked the “wrong” question. What we were supposed to ask is the question we posed in March, the likelihood that you would recommend Saylor Academy to a friend or colleague (or whomever).

That is, probably, a better question. For one thing, it takes the burden of making a direct value judgement off the respondent. Maybe someone wouldn’t recommend Saylor Academy to a friend, but they enjoy it just fine for themselves. Maybe another person doesn’t like Saylor Academy all that much, but knows people who probably would. That is, the question absolves the respondent from having to say “I like you a lot” or “I do not like you a lot”; rather, the person can say “I think my friends would like you a lot” or “I do not think my friends would like you a lot”. That short emotional distance can make it a bit easier to deliver bad news (“Sorry, I really would not recommend you to someone else”) without feeling too cruel.

For another thing, for all its emotional distance, this question cuts right to the chase: are you, the student, most likely promoting us, trash-talking us, or staying mum?

Of course, we didn’t ask this question in September; we asked a different one, because we wanted, first, to know how people in our community feel about their experiences with Saylor Academy in the most holistic, stop-and-take-stock fashion. We figured that this probably still tells us a little bit about whether people are singing our praises or counting up our sins. We could always ask the “right” question in the future…and now, in the future, we have.

The most recent survey has a couple of interesting hiccups. First, we accidentally sent the survey to an out-of-date list, one that did not include thousands of recently registered students. We made up for that error, a few days later, by sending the survey to all the recent additions to our email list. In short, we have two clear sets of numbers: one made up primarily of veterans and one made up primarily of new students.

Moreover, the first group received the survey before we announced the closure of our eportfolio system and before we announced our most recent college credit opportunities. The second group received the survey immediately after those announcements. That’s a lot of confounding factors, so if this wasn’t exactly science before, it’s even less science now.

Interestingly, the two March groups give a fairly similar breakdown of ratings: the greatest variance was 2.5 percentage points, and most ratings varied between groups by less than one percentage point.

The March group shows a clear pull ahead from September. The bulk of the difference is in those who gave us a 10, with nearly all of that increase coming from the “neutrals”. Given some of the changes we made last year (finally bringing an end to majors and shutting down certificate options for legacy courses), the level of ambivalence in September is not too surprising. Our growth rate is such that many of those polled in March were not around last year to experience these changes.

A final thought: in September, we made an invitation to comment part of the survey itself, and we got several hundred illuminating responses. In March, comments were also invited, but leaving one required replying to us directly (thus surrendering relative anonymity as well as requiring some extra effort). Here are a couple of word clouds created from those September responses (generated a few days apart from one another).

[Two word clouds generated from the September 2015 comments]

Your comments are most welcome; we will run this survey again in September, taking into account lessons learned from the two polls above.