Response Bias: Revealing Why You Always Lie In Surveys
Zora Neale Hurston said, "Research is formalized curiosity. It is poking and prying with a purpose."
MENTAL MODEL
Response bias is our tendency to give misleading answers to self-report questions. Think surveys, questionnaires, and interviews. This has gigantic implications for research that relies on self-report methods for data collection. Questionnaires have to be put together in a way that makes it likely the respondents will answer honestly, which is nearly impossible. For instance, imagine a researcher set out to investigate drug use on college campuses. How honest do you think the answers to such a survey would be? Would the students overreport or downplay their use? Exactly.
This bias occurs because we are trying to fit ourselves into the mold of societal norms. Basically, we answer not with the truth, but with what we think the truth “should be” or what the scientist “wants to hear”. In the case of research, it results in inaccurate data. In terms of ourselves, we squander the opportunity to see ourselves critically and improve. It’s a lose-lose. Hence, good researchers have to proceed with caution when designing self-report-centered studies. If respondents skew their answers, the study is useless or even harmful: the findings can lead to inaccurate conclusions, which spawn more studies trying to prove the same phenomenon, and so on.
Note, however, that response bias is not just one bias. Rather, it’s a cocktail of cognitive tendencies that mingle together and keep our answers from reflecting our true feelings and behaviors. The foremost of these has to be the social desirability bias: we provide answers that are socially “acceptable” or “favorable”, rather than what we truly think. For example, in a survey about charity donations, we might overstate our donation frequency or size to appear generous. There is also the extremity bias: we have an inclination to use the extremes of most scales, rating items as very high or very low instead of the middle ground. Think customer satisfaction surveys where respondents consistently pick five or one star without considering moderate options.
But those are just our personal biases. There are also things the researcher has a direct influence on that can skew our answers. The acquiescence bias: we simply agree with statements or questions regardless of their content, because that’s an easier and faster way to get through a survey. Wording, framing, and order effects: the way questions are phrased and sequenced influences responses. If a survey asks, “Don’t you agree our service is excellent?”, it is more likely to elicit a favorable response. The context matters as well. We are more polite and cautious when answering questions in face-to-face interviews. Contrast that with an anonymous online survey, where we let it rip.
Real-world examples of response bias:
Market Research: a small business surveys its customers about satisfaction with a new product. What happens? Customers overstate satisfaction due to social desirability and fear of offending the founder. The company overestimates the quality of its product, and this results in misguided strategy down the line.
Employee Survey: workers are asked about workplace morale and leadership effectiveness. What happens? Employees provide overly positive responses to avoid consequences, especially in non-anonymous surveys. Management doesn’t receive an accurate picture of the issues, and necessary organizational improvements are not made.
Political Polls: voters are surveyed about their political views and opinions on controversial issues. What happens? Respondents tailor their answers to what they perceive as politically “correct” or socially “acceptable” within their peer group. The poll results are skewed, and the campaign ends up addressing the wrong issues.
How you might use (mitigate) response bias as a mental model:
(1) Make it anonymous: design surveys and interviews to be anonymous to reduce the pressure to conform to socially desirable responses, for example via confidential interviews or anonymous online surveys.
(2) Carefully craft questions: keep them neutral, clear, and unbiased, with no ambiguity or one-sidedness.
(3) Give them a scale, not extremities: include positive, negative, and neutral options to pick from.
(4) Shake it up: randomize the order of questions to minimize how the sequence affects respondents (see the short sketch after this list).
(5) Trick them by asking indirectly: use indirect questions about sensitive topics, for example by asking what respondents think of a particular trend rather than about their personal behavior, so they are less tempted to give a socially “acceptable” answer.
(6) Tell everyone about it: train your interviewers with this knowledge to gather accurate data!
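If you were to generate such a survey programmatically, points (3), (4), and (5) translate into a few lines of code. Below is a minimal Python sketch, purely illustrative: the statements, scale labels, and the build_survey function are made up for this example, not part of any real survey tool. It randomizes question order per respondent, offers a balanced scale with a neutral midpoint, and words a sensitive item indirectly (about peers rather than the respondent).

```python
import random

# Hypothetical survey items, worded neutrally (no leading phrases like
# "Don't you agree...") and indirectly (about peers, not the respondent).
STATEMENTS = [
    "Students I know talk openly about this topic.",
    "Information about campus health resources is easy to find.",
    "The general attitude toward this topic on campus is relaxed.",
]

# A balanced 5-point scale with clear positive, negative, and neutral
# options, so respondents are not pushed toward the extremes.
SCALE = [
    "Strongly disagree",
    "Disagree",
    "Neither agree nor disagree",
    "Agree",
    "Strongly agree",
]

def build_survey(statements, scale, seed=None):
    """Return one respondent's survey with the question order randomized,
    so sequence effects average out across the whole sample."""
    rng = random.Random(seed)
    order = statements[:]   # copy so the original list stays untouched
    rng.shuffle(order)      # each respondent sees a different order
    return [{"statement": s, "options": scale} for s in order]

if __name__ == "__main__":
    # Each call simulates the form shown to one (anonymous) respondent.
    for item in build_survey(STATEMENTS, SCALE, seed=42):
        print(item["statement"])
        for i, option in enumerate(item["options"], start=1):
            print(f"  {i}. {option}")
        print()
```

Each call to build_survey simulates the form shown to one anonymous respondent; across many respondents, the randomized ordering helps sequence effects cancel out.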