The importance of methodological rigor in UX research
Is your data worth anything?
Mostly good — sometimes awful
As UX research has emerged as a distinct discipline, I’ve noticed increasing variation in the quality of the studies being conducted. Many do it very well; many do it far better than I ever could. But I also come across studies and reports that leave me astonished. This is not the place to point fingers. But sometimes the dropout rate among respondents in a study is so high that the odds of drawing a correct conclusion would be better if you flipped a coin. And I’ve seen examples where statistical conclusions about hundreds of thousands of users are drawn from quantitative data based on just a handful of respondents.
For the quantitative insights delivered by UX research to be reliable and valuable, the work must be anchored in scientific methods. It’s not just about avoiding incorrect conclusions — it’s also about preserving the credibility of the UX research profession.
Methodology forms the foundation for reliable insights
I wouldn’t go so far as to claim that all UX research has the ambition to be conducted according to strict scientific methods. And that’s a bit of a shame, because applying methodological principles is exactly what gives us the opportunity to create truly useful and sustainable design solutions. Different disciplines and theoretical frameworks vary in their approach to research methodology, and in my view there’s rarely full consensus on methods, at least within the behavioral and social sciences. However, there are a number of fundamental concepts that most would agree are crucial to ensuring robust results. It can be debated whether every concept is relevant for every type of study. But I think any decent study should at least be able to reflect on how these methodological aspects have been taken into consideration:
Reflect your population (sample size and representativeness)
Choosing the right participants is crucial to ensure that your results are representative of the broader user group. An insufficient or non-representative sample can lead to conclusions that are not applicable to the intended target audience.
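To make this concrete, here is a small sketch of my own (the 95% confidence level and 5% margin of error are assumptions for illustration, not values from any particular study) of how the sample size needed to report a proportion, such as a task success rate, can be estimated:

```python
import math

def sample_size_for_proportion(margin_of_error=0.05, confidence_z=1.96, expected_p=0.5):
    """Minimum sample size to estimate a proportion (e.g. a task success rate).

    Uses the standard formula n = z^2 * p * (1 - p) / e^2.
    expected_p = 0.5 is the most conservative assumption.
    """
    n = (confidence_z ** 2) * expected_p * (1 - expected_p) / margin_of_error ** 2
    return math.ceil(n)

# A 95% confidence level (z = 1.96) and a 5% margin of error
# require roughly 385 respondents.
print(sample_size_for_proportion())  # 385
```

Even this rough calculation shows that a handful of respondents is nowhere near enough to make claims about hundreds of thousands of users.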
Avoid bias (keep your results objective)
Bias occurs when unconscious assumptions or systematic errors influence the results of a study. Being aware of bias, and controlling for it, is central to ensuring that your results are not misleading.
Ensure stability (reliability)
Reliability means that a study yields consistent results when repeated under the same conditions. If a study lacks reliability, we cannot be sure that the results are trustworthy.
Measure what you intend to measure (validity)
Validity is about ensuring that your methods actually measure what they are intended to measure. Without validity, the insights from your research are questionable, no matter how carefully the work was otherwise conducted.
Place results in the right context (ecological validity)
Ecological validity is crucial to ensure that the results of a study are applicable in real user scenarios. If a study is conducted in an artificial environment, the results may prove irrelevant when applied to a user’s everyday context.
Think big (generalizability)
Generalizability refers to how well the results of a study can be transferred to a broader user group or other situations. This is an important factor to consider if the results are to be useful outside the specific context in which the study was conducted.
Build credibility (reproducibility)
Reproducibility means that others should be able to repeat your study and obtain the same results. This is a cornerstone for building credibility and ensuring that your insights are solid.
Test assumptions (hypothesis and hypothesis testing)
A hypothesis is a prediction that can be tested empirically. Hypothesis testing is the process of systematically testing whether the hypothesis holds, which is fundamental for understanding if assumptions are correct.
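As a hedged illustration (the designs and numbers below are invented for the example), a hypothesis such as “the redesigned flow changes the task success rate” could be tested like this:

```python
from scipy.stats import chi2_contingency

# Hypothetical task outcomes from a usability test:
# design A: 30 of 50 participants completed the task,
# design B: 42 of 50 participants completed the task.
observed = [[30, 20],   # design A: successes, failures
            [42, 8]]    # design B: successes, failures

# H0: the success rate is the same for both designs.
# H1: the success rate differs between the designs.
chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject H0: the difference is unlikely to be due to chance alone.")
else:
    print("Cannot reject H0: the data does not support the hypothesis.")
```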
Make the abstract concrete (operationalization)
Operationalization is about turning abstract concepts into measurable variables. It is necessary to empirically investigate complex phenomena.
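A small, invented example of what this can look like in practice: an abstract concept such as “ease of use” might be operationalized into a handful of measurable variables.

```python
# Hypothetical operationalization of the abstract concept "ease of use"
# into concrete, measurable variables for a quantitative study.
ease_of_use_metrics = {
    "task_success_rate": "share of participants completing the task (0-1)",
    "time_on_task_seconds": "median time to complete the task",
    "error_count": "number of user errors per task",
    "sus_score": "System Usability Scale questionnaire score (0-100)",
}
```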
Compare correctly (control group)
A control group is used to compare the effect of a specific manipulation against a group that has not been exposed to it. This is important for determining whether changes in the experimental group are truly due to the factor being studied.
Keep track of other influences (confounding variables)
Confounding variables can distort the results of a study if not controlled. By identifying and managing these factors, you ensure that conclusions are accurate.
Understand the statistics of your results (significance)
Statistical significance is a way to determine whether the results of a study are likely due to a real effect rather than by chance. This is central to knowing whether a hypothesis can be accepted or should be rejected.
Measure the impact (effect size)
Effect size measures the strength of the relationship between two variables and complements statistical significance by showing how meaningful a result is.
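To tie these last two concepts together, here is a minimal sketch (all numbers are simulated, not taken from any real study) that computes both a p-value and Cohen’s d for a hypothetical comparison of time on task between two design variants:

```python
import numpy as np
from scipy.stats import ttest_ind

# Simulated time-on-task measurements (seconds) for two design variants.
rng = np.random.default_rng(42)
times_a = rng.normal(loc=60, scale=15, size=40)  # current design
times_b = rng.normal(loc=52, scale=15, size=40)  # redesigned flow

# Statistical significance: is the observed difference likely due to chance?
t_stat, p_value = ttest_ind(times_a, times_b)

# Effect size (Cohen's d): how large is the difference in practical terms?
pooled_sd = np.sqrt((times_a.var(ddof=1) + times_b.var(ddof=1)) / 2)
cohens_d = (times_a.mean() - times_b.mean()) / pooled_sd

print(f"p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")
# A small p-value suggests a real effect; d around 0.2 / 0.5 / 0.8 is
# commonly read as a small / medium / large effect.
```

A result can be statistically significant and still too small to matter for the user experience, which is exactly why effect size belongs next to significance in any report.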
What are the consequences of sloppy methodology?
Ignoring or compromising on methodology in quantitative UX research can have serious consequences. When we don’t take these principles seriously, we risk drawing conclusions that don’t reflect reality, which can lead to design decisions that are ineffective or even harmful to the user experience. This, in turn, can lead to a loss of trust in UX research and UX design, both within organizations and among users.
How to avoid sloppy UX research?
Many UX researchers do it right and do it well. By drawing inspiration from and learning from those who already conduct their research based on well-founded methodological principles, we can ensure that our insights are both reliable and useful. Here are some important steps in that work:
Invest in preparation — plan carefully
A well-planned study takes time. What questions should the study answer? How should the study be designed to avoid bias? How can a representative sample be ensured? And much more. This is the time to bring out a whiteboard and brainstorm with the team.
Be systematic and transparent — analyze with care
Data analysis also requires time. It needs to be done with care to ensure that all assumptions, choices, and decisions are well-founded. It’s important to document the analysis process and communicate it so that others can understand the basis for the study’s conclusions and challenge them if needed. A well-documented process also makes it possible to replicate the study.
And perhaps most importantly: not every study is successful. Sometimes things just don’t work out, and it’s crucial to be honest and transparent about that. If the study doesn’t allow you to draw well-founded conclusions, don’t draw them. It’s better to redo the study than to present questionable or incorrect results.
Leverage interdisciplinary knowledge — learn from others
There are many methods in disciplines such as psychology, sociology, and social anthropology that can serve as inspiration. By applying established scientific principles to UX research, we can improve the quality of the results and ensure that they stand the test of time.
Data is serious business
Scientific methodology is not a burden but an asset that gives us the tools to create better and more reliable insights. By investing time and care in our methods, we can ensure that our work is not only user-centered but also based on a solid scientific foundation. This is likely necessary to maintain trust in UX research and to continue developing our field.
By raising the standard of our methodology, we not only create better user experiences — we also build a stronger future for UX research as a credible and valuable discipline.
And not everyone who consumes the collected data has the time or ability to critically evaluate the results presented. As soon as a result or conclusion becomes a graph or a chart in a presentation, it becomes a truth for many, regardless of the quality of the underlying study. That’s why it’s so important that UX research is conducted according to established methodology and delivers reliable insights. I would even dare to say: better no UX research than sloppy UX research. I would rather guess and rely on gut feeling than act on unfounded assumptions.
Maybe it’s also a question of UX researchers demanding the time and resources to do it right. If asked to just push out that simple survey, maybe the answer should be “no”, or at least “why?”
But what about qualitative research?
Yes, qualitative research differs in its approach and in the types of insights it gathers. But qualitative research must also be based on credible methods. However, this text focuses on the quantitative aspects of UX research.
I am one of three company mothers of One Day Interact. As a digital design agency, we seek to contribute to a positive development in a larger sense — for the individual, for humanity, the environment and the planet.