
How to efficiently work through survey feedback


December 20, 2021 ∙ 11 min read


Early last week I was sharing with Crista the process I use to efficiently work through survey feedback, which reminded me of a quote from Matt Mochary:

If you find yourself explaining something more than once, it’s worth writing down as you’ll probably need to explain it again.

Matt Mochary, The Great CEO Within

Hence this blog. I hope it reads as a reflection: this is by no means the only or necessarily the right way to do it; I just wasn’t satisfied with the current process.

The problem I’m trying to solve arises when a product is still in the MVP stage and I’ve aggressively acquired a variety of users via every organic/DTC method in the book: I’ve got everyone from my aunt and uncle to my postie and his dog using the product. So how do I find out whose feedback I should prioritise? Who is actually best suited for my product? What market is my product the best fit for?

This problem is further exacerbated because the current process also falls victim to information overload. We want as much feedback as possible, but once it has come in we don’t know where to start unpacking it, and worse, when it comes time to share it back we overshare, with no overarching narrative tying the information together.

The method I describe below is how I efficiently work through survey feedback and craft a story that best highlights the key information:

  1. Trace the lines I want to cut — Asking the right questions
  2. Cut to find the outliers — Who loves our product and why?
  3. Have an opinion — Articulating our observations
  4. Tell your story, own the narrative — Effectively communicating back to stakeholders


Trace the lines I want to cut

No survey will be perfect or give better insight than talking to customers, but I believe this method can be used to refine how we approach feedback analysis.

The most important step before sending out a feedback survey is to make a note of the specific knowledge it will provide. Think about the purpose of the product: what success metrics or hypotheses do I have, and what do I need to quantify to help me understand whether I've achieved them?

The most common metric we use at Startmate is NPS (Net Promoter Score). In short, the purpose of NPS is to gauge the likelihood that an individual would be a promoter of a program or product. For context, at Startmate we aim for a program outside of the MVP stage to be pushing an NPS of 75+.

NPS by itself is only really good for external stakeholders and leaders to get a snapshot of how a program or product is tracking. The problem is that this doesn’t tell me what is going on or why, which means we need to add additional questions to help refine our analysis process.

Measure twice, cut once.

Every survey should have one to three categorical questions that allow us to cut and segment the information to help decipher the feedback. The questions we use should inform the answers to our success criteria.

Ideally, before I onboard users, I would ask them categorical questions during onboarding that I can later tag to their feedback for more context. As an added benefit, this lets me ask fewer questions in the final feedback survey (a better response conversion rate) and instead focus on questions that let me benchmark a user’s progress over a timeframe.

Disclaimer: These categorical questions are in addition to the typical feedback questions we would ask in our survey.
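
A minimal Python sketch of that tagging step, with hypothetical field names and user ids, might look like this:

```python
# Hypothetical categorical answers captured once, during onboarding.
onboarding = {
    "user_1": {"occupation": "Founder", "industry": "Fintech"},
    "user_2": {"occupation": "Engineer", "industry": "Health"},
}

# The later feedback survey then only needs an id plus the feedback itself.
survey = [
    {"user": "user_1", "nps": 9, "comment": "Loved my explorer group"},
    {"user": "user_2", "nps": 7, "comment": "Onboarding was confusing"},
]

# Tag each response with the context we already hold, keeping the survey short.
tagged = [{**response, **onboarding.get(response["user"], {})} for response in survey]
print(tagged[0])  # includes nps, comment, occupation, and industry
```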

One categorical question should be related to a user’s product experience. NPS is a good example of a categorical product experience question: although we ask people for a rating on a scale of 0-10, there are really only three categories, which gives us a good place to start segmenting the data later on:

  1. 😍 Promoter (9-10)
  2. 😐 Passive (7-8)
  3. 😕 Detractor (0-6)
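
If you’re scoring this in code rather than a spreadsheet, a rough Python sketch of the bucketing, plus the standard NPS formula (the percentage of promoters minus the percentage of detractors), looks like:

```python
def nps_bucket(score: int) -> str:
    """Map a 0-10 rating onto the three NPS categories."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def nps(scores: list[int]) -> float:
    """Standard NPS: % promoters minus % detractors, on a -100 to 100 scale."""
    buckets = [nps_bucket(s) for s in scores]
    promoters = buckets.count("promoter") / len(buckets)
    detractors = buckets.count("detractor") / len(buckets)
    return round(100 * (promoters - detractors), 1)

print(nps([10, 9, 9, 8, 7, 10, 6, 9]))  # 50.0 for this made-up sample
```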

Another question that we experimented with for the MVP of the Startmate Founder's Fellowship, alongside NPS, was Rahul Vohra’s method, used at Superhuman, where we instead asked:

“How would you feel if you could no longer use the product?”

  • Very Disappointed
  • Somewhat Disappointed
  • Not Disappointed

-Rahul Vohra, How Superhuman Built an Engine to Find Product Market Fit

The reason I like this question is in the framing. Instead of using a push question such as NPS, where we ask the user if they will promote our product (a question that is fundamentally idealistic and would fail the Mom Test), this method frames the question as a pull: it metaphorically pulls the rug out from under a user to see how they would feel, and gauges that response.

It’s not anyone else’s responsibility to show us the truth. It’s our responsibility to find it. We do that by asking good questions.

-Rob Fitzpatrick, The Mom Test


Cut to find the outliers

Segment to find your supporters and paint a picture of your high-expectation customers.

-Rahul Vohra, How Superhuman Built an Engine to Find Product Market Fit

We’ve traced our lines, the survey has been sent out, and the feedback has come in. Time to start cutting! The first place to start is with those categorical questions above, grouping users by their shared experience score, i.e. promoters, passives, and detractors.

Start by measuring the percentage who were promoters (gave an NPS of 9-10) or, if you’re using Rahul’s method, those who selected “Very Disappointed.” Rahul suggests that companies that struggled to find growth almost always had less than 40% of users categorised as promoters or responding “very disappointed.” If all has gone well with the user test, we’ll see a healthy 40%+ of users in that promoter group, and we can then shift our focus to figuring out how to get the rest of the fence-sitters (passives) into the promoter camp. This is where qualitative feedback becomes important.
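
As an illustrative sketch with made-up responses, checking the group against Rahul’s 40% benchmark takes only a few lines:

```python
from collections import Counter

# Made-up answers to "How would you feel if you could no longer use the product?"
responses = [
    "Very Disappointed", "Somewhat Disappointed", "Very Disappointed",
    "Not Disappointed", "Very Disappointed", "Somewhat Disappointed",
]

share = Counter(responses)["Very Disappointed"] / len(responses)

# Under 40% "Very Disappointed" usually signals a product that will struggle to grow.
print(f"{share:.0%} very disappointed -> {'healthy' if share >= 0.40 else 'below the 40% bar'}")
```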

What type(s) of people love the product?

Let’s have a more thorough look at our promoter group: what type(s) of people love the product? This should be relatively straightforward, but here are some good starting questions:

  • What occupation do they work in?
  • What industry is their company in?
  • What was their usage of the product/platform? (assuming you have relevant data for this)
  • What stage was the user at before and after using the product? This can be an indicator of a group of users who experienced what is called the Hero's Journey while using the product.
  • Age, gender, etc.; there are countless examples.

If there are significant groups in this segment, make a note of these outliers, as they will come in handy later. There's also a good chance there won't be any significant groups; don't worry, as this is something we can optimise the user groups for in future product tests and surveys.
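
Reusing the hypothetical tagged responses from earlier, a quick way to spot over-represented segments is to count each categorical answer within the promoter group:

```python
from collections import Counter

# Hypothetical promoter responses, already tagged with onboarding categories.
promoters = [
    {"occupation": "Founder", "industry": "Fintech"},
    {"occupation": "Founder", "industry": "Health"},
    {"occupation": "Engineer", "industry": "Fintech"},
]

# Rank each categorical answer by how often it appears among promoters.
for field in ("occupation", "industry"):
    print(field, Counter(r[field] for r in promoters).most_common())
# occupation [('Founder', 2), ('Engineer', 1)]
# industry [('Fintech', 2), ('Health', 1)]
```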

and why do they love the product?

We now want to look through that promoter group and read the qualitative responses to our questions; as mentioned above, these will differ from product to product. The overarching goal is to understand why promoters love the product. This can be one of the more manual parts of the process, but to avoid being overwhelmed I like to use a tally system.

I put the survey question being answered at the top of the page and then read each response. As I read, I bullet-point the themes that show up, and as I come across more feedback related to a theme, I add a tally. For example:

“What would make the product better for you?”

  • Faster support communication - IIIII
  • Clarity of onboarding instructions - II

Quite often I’ll read a response and be unsure whether it fits an existing theme. In this case, I make a new theme for the point; later, when more context is relevant, I can always merge it into another theme or make it a sub-theme under an existing one.

Once I’ve completed this process, I add up my tallies and rank them from most to least, but before I start drawing any conclusions I need to go through the passive group and repeat this process.
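
If the tallying happens digitally rather than on paper, the same ranking falls out of a few lines of Python (the themes here are illustrative):

```python
from collections import Counter

# One entry per response that touched a theme, assigned while reading.
mentions = [
    "faster support communication", "clarity of onboarding instructions",
    "faster support communication", "faster support communication",
    "clarity of onboarding instructions", "faster support communication",
    "faster support communication",
]

# Rank themes from most to least mentioned, just like the paper tally above.
for theme, count in Counter(mentions).most_common():
    print(f"{theme}: {count}")
```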


Passive users give better insight into our product than promoters.

This may be a controversial opinion, but I strongly believe that without the contrast of haters and fence-sitters against our promoters and evangelists, we will never truly understand what makes our product great. In the wise words of Ted Mosby:

Every night can’t be legendary. If all nights are legendary, no nights are legendary.

-Ted Mosby, How I Met Your Mother

From going through our promoter group’s feedback, we have already benchmarked what users love about the product. Now, by following the same process of analysing the what and why for our passive user group, we can compare the data against the promoter group to see what made them rate their experience differently, which ideally will lead us to our best product wins.

Don’t be alarmed if, for the most part, we don’t see anything too different. Most people will give the same feedback, but passives may have a stronger disposition towards pessimism in the way they respond to questions (which isn’t helped if you’re prone to a negativity bias). What we want to look out for are the outliers: groupings of feedback that are significantly different from our promoter group.
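
One rough way to surface those outliers in code, sketched here with the illustrative counts from the example later in this post, is to normalise each theme’s tally by group size and rank the gaps:

```python
from collections import Counter

# Illustrative theme tallies (counts of responses mentioning each theme).
promoter_themes = Counter({"founder connection": 17, "explorer group issues": 6})
passive_themes = Counter({"founder connection": 21, "explorer group issues": 15})
promoter_n, passive_n = 21, 25  # group sizes

# Compare each theme's share of responses across the two groups and rank
# by the size of the gap; big gaps are the outliers worth a closer look.
gaps = {
    theme: passive_themes[theme] / passive_n - promoter_themes[theme] / promoter_n
    for theme in promoter_themes | passive_themes
}
for theme, gap in sorted(gaps.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{theme}: {gap:+.0%} (passive share minus promoter share)")
```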

Significant differences are subjective and will vary from product to product, but someone who’s spent enough time on a product should know when they see themes emerging that contrast with the promoter group. I suggest working through the questions below to get the ol’ noggin working:

  1. In what areas did the passive group have a significantly different experience from the promoter group?
  2. and why did they have a different experience?
  3. What would push someone off the fence (passive) and into the promoter camp?

Hopefully, by this point, conclusions and theories are starting to form. Make a note of them under each group of feedback to deal with later; we want to work through this efficiently and keep moving through each category.

By the end, if we haven’t found any significant experience differences between the two groups, I suggest grouping the passives and promoters differently to see if something significant emerges. If that fails, you may not have enough user feedback, or you may need to change the persona definitions of your users, which usually requires more information from them.


What do I do with my detractor user group?

Some will advise that you should politely disregard those who would be detractors of your product: they are so far from loving you that they are essentially a lost cause. I agree with this in principle, but I prefer to put this group to the side until after I've finished my analysis. I don’t want to be ignorant of a potentially poor user experience, especially when you're unable to see the user in the digital world.


Have an opinion

We’ve cut our feedback into groups, and understood which users we have and why they love our product. We probably have a few theories at this point, based on the significant statistical differences we’ve observed between our promoter and passive user groups. We can now turn these into informed opinions and actions that we want to implement to improve the product.

Start by articulating your observation as a statement, for instance:

The level of founder connection and strength of their explorer group was the most important deciding factor to a fellow rating their experience highly

Now, using your data observations from the last step, provide evidence to back up this statement. Examples of data that can be used: percentage comparisons, or the difference in the number of users who reported an issue.

The level of founder connection and strength of their explorer group was the most important deciding factor to a fellow rating their experience highly

  • 17/21 promoters said the connection to other fellows was their favourite part with only 6/21 saying they had an average or worse experience with Explorer groups.
  • In comparison, 21/25 passives said connecting with founders was the best part of the fellowship, but a higher number, 15/25, said they had an average or worse experience with Explorer groups.

and lastly, to drive the point home, use quotes to reinforce the reader’s belief in the theory:

“My Explorer group has been a huge part of why I enjoyed the SFF1. Being surrounded by people in the same stage and mindset is very helpful.”

-Promoter respondent


Tell your story, own the narrative

In my last blog, I mentioned that in a decentralised team like Startmate, effectively communicating with team members and leaders is a necessary skill. Unfortunately, a bunch of opinions in a Google Doc won’t get us anywhere; purposefully owning the narrative to tell the story we want to tell is how we give magic to digital words and lead.

Effectively communicating our report is a lot easier when we remember that, in most instances, the reader will not have had first-hand experience working with this set of users, so it’s best to craft the report around these questions:

  • What impression do we want the reader to come away with after reading the report?
  • If a reader can only take one thing away from the report, what would we want it to be?
  • What actions (if any) will we make from this report?
    • Optional: How will we keep ourselves accountable?

At the end of the day, a fancy report with a bunch of stats and pull-out quotes won’t convey the message by itself; only when we stand up and deliver a compelling story will we have a chance at exciting our team and successfully sharing our learnings.
