Note: This post originally appeared on the GRBN Blog, published Sept. 4, 2017.
If you’ve been paying attention, you will have noticed the movement in the MR industry to return to a place of mutual respect with our respondents (and if you’re reading this post on the GRBN, most likely you have been). With that, you will also have heard the plethora of calls to shorten surveys. The reasons for this are widely documented, but if you need a refresher, google “survey length best practices” and you’ll find a wealth of information.
However, while we wholeheartedly agree with that recommendation, I would also suggest that you reserve room to add (and analyze) two additional questions at the end of your survey. The first asks the respondent to rate the survey experience, e.g., on a scale of 1–5. The second is an open-ended question asking the respondent to share any feedback they’d like about the survey experience.
These two questions can provide some pretty rich insights to either validate or improve your respondents’ survey experience. If we expect participants to keep giving up 10, 15, dare we ask 20(!) minutes of their time, we would be remiss not to move toward making the experience as pleasant as possible.
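To show how little effort analyzing that closing rating question takes, here is a minimal sketch in Python. The data is entirely hypothetical; the summary metrics (mean rating and a top-two-box share) are common ways to track an experience score over time, not something prescribed by this post:

```python
# Hypothetical 1-5 survey-experience ratings collected at the end of a survey.
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

# Average rating across all respondents.
mean_rating = sum(ratings) / len(ratings)

# Top-two-box: share of respondents who rated the experience a 4 or 5.
top_two_box = sum(1 for r in ratings if r >= 4) / len(ratings)

print(f"Mean rating: {mean_rating:.2f}")   # 3.90
print(f"Top-two-box: {top_two_box:.0%}")   # 70%
```

Tracking these two numbers wave over wave is usually enough to see whether changes to a questionnaire are helping or hurting the experience.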
Consider the websites where you spend most of your time: Facebook, Twitter, Instagram, shopping sites, a well-written and beautifully laid-out news or blog site. Whatever it may be, you can be sure the experience is clean, visually appealing, simple to navigate, and designed to keep you moving through the site with minimal clicking or mouse movement.
Surveys should be the same.
I won’t touch here on mobile-first design, but know that this is also of the utmost importance. My thoughts on making your research device/source agnostic for best representativeness can be found here: Good To Know Blog: State Of The Industry.
One big caveat here is that the respondent’s perception of the experience may be biased simply by whether or not they achieved “complete” status. To mitigate this bias, ask for feedback from survey terminations as well. In addition, closely monitor your drop-out rate, which is the best leading indicator you have as to whether the survey is resonating (and working) with respondents. A sample supplier worth their salt will also be monitoring both drop-out rates and respondent feedback and sharing that with you so that incremental improvements can be made.
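To make “drop-out rate” concrete, here is a minimal sketch with entirely hypothetical counts. It computes the overall rate and the per-checkpoint attrition, which shows not just how many respondents abandon the survey but where they do it:

```python
# Hypothetical counts: respondents who started the survey and how many
# were still answering at a few checkpoint questions.
starts = 1000
reached = {"Q1": 1000, "Q5": 930, "Q10": 820, "Q15": 640, "completed": 600}

# Overall drop-out rate: share of starters who never completed.
drop_out_rate = 1 - reached["completed"] / starts
print(f"Overall drop-out: {drop_out_rate:.0%}")  # 40%

# Attrition between checkpoints highlights where the survey stops
# resonating with respondents.
checkpoints = list(reached.items())
for (prev_q, prev_n), (q, n) in zip(checkpoints, checkpoints[1:]):
    lost = (prev_n - n) / starts
    print(f"{prev_q} -> {q}: lost {lost:.0%} of starters")
```

In this made-up example the biggest loss comes between Q10 and Q15, which is exactly the kind of signal that tells you which section of the questionnaire to fix first.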
Now I have just 2 questions for you:
- How would you rate this article? 🙂 or 🙁
- What could we do better the next time?
Please comment below with your answers.
Link to original post: Just 2 More Questions