Improve survey analysis
1. Define what your company needs to get out of customer surveys.
For example, your customer feedback survey analysis might aim to identify which issues to tackle so the company can increase revenue, or which issues to tackle to keep customers happy.
2. Design a customer feedback survey where the questions are structured so that the respondent can elaborate on answers they give. This is a laddered approach.
Opt for simple and effective question structures, such as an open-ended follow-up after a closed question. For example: “How would you rate your satisfaction with ACME bank?” followed by “How would you rate the following from ACME bank: fees, interest rates, phone service, branch service, online service, ATM availability?”
3. Clean your data before analysis by identifying outliers, deleting duplicate reports, and identifying contradictory, invalid or dodgy responses.
Identify respondents who muck up your data, such as speedsters (those who did not take enough time to complete the survey) and flatliners (those who picked the same answer over and over). Remove respondents who did not complete the survey in a plausible length of time; an industry standard is to remove anyone who finishes in less than one-third of the median completion time. Review inputs where a respondent has provided gibberish answers, like random letters or numbers, on mandatory open-ended questions, and use your judgement to decide whether to remove them. Finally, flag and remove inattentive respondents who selected two or more fake items on red-herring questions that include made-up brands or products in the answer list.
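The speedster and flatliner checks above are easy to automate. Here is a minimal sketch in Python with pandas, using a small made-up survey export (the column names and data are illustrative, not a real survey format):

```python
import pandas as pd

# Hypothetical survey export: respondent id, completion time in seconds,
# and answers to five 0-10 rating questions.
df = pd.DataFrame({
    "respondent": [1, 2, 3, 4, 5],
    "seconds": [410, 95, 380, 520, 60],
    "q1": [8, 5, 7, 7, 3],
    "q2": [9, 5, 6, 7, 3],
    "q3": [7, 5, 8, 7, 3],
    "q4": [8, 5, 6, 7, 3],
    "q5": [9, 5, 7, 7, 3],
})
rating_cols = ["q1", "q2", "q3", "q4", "q5"]

# Speedsters: finished in less than one-third of the median completion time.
speed_cutoff = df["seconds"].median() / 3
speedsters = df["seconds"] < speed_cutoff

# Flatliners: gave the same answer on every rating question.
flatliners = df[rating_cols].nunique(axis=1) == 1

# Keep only respondents who trip neither check.
clean = df[~(speedsters | flatliners)]
```

Gibberish and red-herring checks usually need question-specific logic, so they are left out of this sketch.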
4. Manually code open-ended questions for small surveys and use computed sentiment analysis if you have thousands of responses.
Pick 200 randomly selected responses for a small survey and create a code frame with specific categories for the answers you received. Use your judgement to manually match each response to the corresponding code. For example, “I think this brand is fun” would fall under code 1 (fun), while a response saying the brand looked innovative would fall under code 2 (innovative). If you run algorithmic sentiment analysis on a big survey, choose clean, direct questions that do not lead respondents, avoid including follow-up questions in the analysis, and be wary of questions that invite multiple opinions in a single answer. For example, use “How do you feel about George Clooney?” instead of “What do you think about George Clooney?”, and avoid questions like “Why did you give us a low score?”, which already lean toward a negative sentiment.
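A code frame can also be applied semi-automatically with simple keyword matching before a human reviews the edge cases. This is a rough sketch, not a substitute for manual coding or real sentiment analysis; the code frame and keywords are invented for illustration:

```python
# Hypothetical code frame: code number -> (label, trigger keywords).
CODE_FRAME = {
    1: ("fun", ["fun", "playful", "enjoy"]),
    2: ("innovative", ["innovative", "cutting-edge", "modern"]),
}

def code_response(text):
    """Return the first matching code, or 0 for 'other' (needs manual review)."""
    lowered = text.lower()
    for code, (_label, keywords) in CODE_FRAME.items():
        if any(k in lowered for k in keywords):
            return code
    return 0

responses = [
    "I think this brand is fun",
    "They look really innovative to me",
    "No opinion",
]
codes = [code_response(r) for r in responses]
```

Anything coded 0 goes back to a human coder, which keeps the manual workload focused on genuinely ambiguous answers.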
5. Summarize your cleaned and coded data to calculate Net Promoter Score (NPS) and customer satisfaction score.
Calculate your Net Promoter Score (NPS) by dividing respondents into three groups based on their 0-10 score: 9-10 are promoters, 7-8 are passives, and 0-6 are detractors. Then use this formula: % promoters - % detractors = NPS. Because NPS reduces an 11-point scale to three categories (detractors, passives, and promoters), recode your raw data so you can freely run stats testing in software: turn detractor values (0-6) into -100, passives (7-8) into 0, and promoters (9-10) into 100. For a customer satisfaction score, ask customers to rate satisfaction on a scale from 1 to 5, with 5 being “very satisfied”, on questions like “How would you rate your overall satisfaction with the service you received?”, and use this formula to calculate the percentage of satisfied customers: (customers who rated 4-5 / total responses) x 100 = % satisfied customers.
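Both formulas above translate directly into a few lines of Python. The scores below are made up to keep the example self-contained:

```python
# Hypothetical 0-10 likelihood-to-recommend scores and 1-5 satisfaction ratings.
nps_scores = [10, 9, 8, 7, 6, 3, 10, 2, 9, 7]
sat_scores = [5, 4, 3, 4, 2, 5, 4, 1, 5, 4]

n = len(nps_scores)
promoters = sum(1 for s in nps_scores if s >= 9)    # 9-10
detractors = sum(1 for s in nps_scores if s <= 6)   # 0-6

# NPS = % promoters - % detractors
nps = (promoters - detractors) / n * 100

# Recode to -100 / 0 / 100 for stats testing in other software.
recoded = [100 if s >= 9 else -100 if s <= 6 else 0 for s in nps_scores]

# CSAT = (customers who rated 4-5 / total responses) x 100
csat = sum(1 for s in sat_scores if s >= 4) / len(sat_scores) * 100
```

With 4 promoters and 3 detractors out of 10 responses, this sample yields an NPS of 10 and a CSAT of 70%.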
6. Begin a driver analysis by stacking your data to find Aha moments.
Use a stacked data format where the first column is your quantitative metric, like NPS, while the second, third, and fourth columns are coded responses to open-ended follow-up questions. Use driver analysis to answer questions like “Should we focus on reducing prices or improving the quality of our products?” or “Should we focus our positioning as being innovative or reliable?”.
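The stacked layout described above might look like this in pandas. The column names and code numbers are hypothetical, matching the one-metric-plus-coded-follow-ups idea:

```python
import pandas as pd

# Hypothetical stacked layout: one quantitative metric (the NPS score),
# then coded answers to open-ended follow-up questions.
stacked = pd.DataFrame({
    "nps":        [10, 3, 9, 6, 8],
    "likes":      [1, 0, 2, 0, 1],   # illustrative codes: 1 = fun, 2 = innovative, 0 = other
    "dislikes":   [0, 3, 0, 3, 0],   # 3 = fees
    "suggestion": [0, 4, 0, 4, 2],   # 4 = lower prices
})

# A quick "Aha moment" check: do low scorers mention a particular dislike?
low_scorer_dislikes = stacked.loc[stacked["nps"] <= 6, "dislikes"].mean()
```

In this toy data, every low scorer mentioned fees, which is exactly the kind of pattern a full driver analysis would then test formally.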
7. Choose the type of regression model you'll use for your driver analysis: linear regression or logistic regression.
Use linear regression when the outcome variable is continuous or numeric, such as NPS. Use logistic regression when the outcome variable is binary, that is, has only two categories, as with “Do you prefer coffee or tea?” Consult ebooks or tutorials to expand your knowledge of regression analysis.
8. Use analysis software such as Displayr, Stata, SAS, or SPSS to run your regression and see which variables influence the NPS score the most.
For example, a driver regression can help you determine which brand attributes were most important in determining the NPS score: the further a predictor's t value is from 0, the stronger that predictor (such as Fun) is for the outcome variable (NPS). If you work in Excel, use an add-on like XLSTAT, and look for tutorial videos on conducting regression in Excel. Run a second analysis to identify possible shortcomings and mistakes.
9. Use statistical significance testing, automated and built into most statistical packages, to learn when quantitative changes in your customer satisfaction feedback are significant.
Use guides on A/B testing statistics to work out statistical significance manually, and familiarize yourself with the limitations of statistical significance.
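As a sketch of working it out manually, here is a standard two-proportion z-test comparing CSAT before and after a change. The counts are made up; the 1.96 threshold corresponds to a two-sided 5% significance level:

```python
from math import sqrt

# Hypothetical CSAT results: satisfied respondents out of total responses.
before_sat, before_n = 140, 200   # 70% satisfied before the change
after_sat, after_n = 170, 200     # 85% satisfied after the change

p1 = before_sat / before_n
p2 = after_sat / after_n

# Pooled proportion and standard error for the difference in proportions.
pooled = (before_sat + after_sat) / (before_n + after_n)
se = sqrt(pooled * (1 - pooled) * (1 / before_n + 1 / after_n))
z = (p2 - p1) / se

# |z| > 1.96 -> significant at the 5% level (two-sided).
significant = abs(z) > 1.96
```

Here the 15-point jump in CSAT clears the threshold comfortably, so the change would count as statistically significant at the 5% level.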