ChatGPT in User Research — an exciting collaboration or a dangerous temptation?
User research is an empathetic discipline that puts humans at the centre of the practice. Can tools like ChatGPT, which claim to replicate human behaviours, augment or hinder the practice? What are the situations where computational efficiencies can boost the process of user research, and what safeguards are needed to make the most of the algorithm?
Co-written with Kesta Kemp, User Research Consultant.
Our main purpose and goal, as user researchers, are to be the voice of the user in our projects and with our clients. We use a variety of research methods, from interviews to observational studies, to collect, analyse and present data and insights about users. However, these methods can be time-consuming, especially when analysing large volumes of data.
This is where ChatGPT comes in. ChatGPT is an artificial intelligence (AI) program that has been developed using advanced machine learning techniques and trained on vast amounts of textual data. ChatGPT is designed to generate human-like responses to user inputs by analysing and understanding the patterns and structures of human language. It can generate responses to a wide range of topics, from simple questions to complex conversations.
To illustrate this, we asked ChatGPT to write an introduction to this article. The previous two paragraphs consist of parts of its response.
We recently came together in one of our UR community crits to discuss the different points of view on ChatGPT and explore how user researchers are using it in both their practices and their personal lives. It soon became apparent that there were two distinct schools of thought within the community, which can be represented by Moore’s Technology Adoption Curve.
Our community was split into the ‘Early Market’ and the ‘Mainstream Market’. Our ‘Early Market’ researchers have investigated ChatGPT and are learning how to formulate the right prompts to get the best out of it. Our ‘Mainstream Market’ researchers were more cautious. They wanted to understand the impact that it may have on user research and see further evidence of its effectiveness before widely or openly adopting the tool.
In this next section, we have collated the good, the bad and the future of ChatGPT in user research (according to us).
The Good
Learning from our ‘Early Market’ researchers, these are the top 3 areas where ChatGPT has added value to the user research practice.
1. Ideating to reduce blind spots — how can we generate hypotheses for further research?
ChatGPT can be seen as a “sounding board” to generate potential new ideas and hypotheses for further research, providing an additional point of view. This can be valuable when there is only one user researcher on a project, and they require instantaneous crit on their research. This additional perspective can help to reduce blind spots in research data and add layers to recurring user needs and insights.
2. ChatGPT, the UR Planner — how can we create templates for user research artefacts more efficiently?
ChatGPT can be used to create templates for a variety of research activities, including but not limited to research plans, note-taking sheets and formats, invitations for research communications and presentations for research playbacks. ChatGPT can be a powerful assistant for admin tasks such as recruitment outreach, spellchecking, creating contextual information for stakeholders, and designing new evidence-based ways to document research. All these applications can give researchers space to focus on the creative and subjective aspects of their roles without being burdened by repetitive admin tasks.
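As a concrete sketch of this workflow, a reusable prompt can be filled in per study and then pasted into ChatGPT. The helper below is purely illustrative: the function name, fields and section list are our own assumptions, not a ChatGPT feature or API.

```python
# Hypothetical helper that assembles a reusable prompt asking ChatGPT to
# draft a research-plan template. The fields and sections are illustrative
# only; adapt them to your own study and playback conventions.

def research_plan_prompt(product: str, method: str, participants: int) -> str:
    """Build a prompt asking ChatGPT for a research plan template."""
    return (
        f"Draft a user research plan template for a study of {product}. "
        f"The method is {method} with {participants} participants. "
        "Include sections for objectives, hypotheses, recruitment criteria, "
        "a discussion guide outline and a playback format."
    )

prompt = research_plan_prompt(
    "a council tax payment service", "usability testing", 6
)
print(prompt)
```

The point is not the code itself but the habit it encodes: keeping the repeatable parts of a request fixed so that each new study only needs the details swapped in.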
3. Faster quantitative analysis — how can we benefit from fast data processing functionalities to scale up research?
ChatGPT has the power to process large volumes of textual data and to perform complex analyses such as sentiment analysis, topic modelling or text classification, turning open-ended responses into quantifiable trends and patterns. ChatGPT makes these identification methods less time-consuming and more accessible for user research, augmenting the quantitative analysis we are currently doing. This application of ChatGPT was controversial within the community. While we all agreed that using the algorithm would make quantitative analysis much quicker, some felt that it was a riskier application, one which requires structured governance and strict privacy processes.
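For contrast, the simplest algorithmic form of one of these techniques, lexicon-based sentiment analysis, can be sketched in a few lines. The word lists below are hypothetical rather than a validated lexicon, and ChatGPT performs a far more sophisticated version of the same task; the sketch only shows what "scoring open-text responses" means mechanically.

```python
# Minimal lexicon-based sentiment scoring over open-text survey responses.
# The word lists are illustrative only; a real study would use a validated
# lexicon or a model-based classifier.

POSITIVE = {"easy", "clear", "helpful", "fast", "love"}
NEGATIVE = {"confusing", "slow", "broken", "frustrating", "hate"}

def sentiment(response: str) -> str:
    """Label a response positive, negative or neutral by counting words."""
    words = response.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

responses = [
    "The new flow was easy and fast",
    "Checkout is confusing and slow",
    "I used the search page",
]
summary = {label: 0 for label in ("positive", "negative", "neutral")}
for r in responses:
    summary[sentiment(r)] += 1
print(summary)  # -> {'positive': 1, 'negative': 1, 'neutral': 1}
```

Even this toy version makes the governance concern tangible: the labels depend entirely on the word lists chosen, which is exactly the kind of hidden assumption that needs validating before any output reaches a client.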
The Bad
Whilst there is evidence for its benefits, our ‘Mainstream Market’ researchers outlined some of the concerns they had around the tool’s impact. These concerns ranged from the risk of skewing research data to the perception of the role of humans in research.
1. A risk of bias — how can we ensure that the data received from the algorithm is reliable?
ChatGPT is only as good as the data that it learns from. If there is no quality assurance on its training data, there is a risk that the outputs from ChatGPT may be biased or skewed. We believe it is crucial that, if researchers use ChatGPT as a starting point for their practice, its output is layered, caveated and, most importantly, validated before it is distributed. The value that we bring to our clients, and the trust that they place in our practice, revolves around our skills as researchers being augmented, not replaced, by tools like ChatGPT.
2. Machines are not sentient, we are — should we be learning from them about human-centred design?
ChatGPT is just lines of code; it does not have the emotional capacity to understand humans and our complexities. It uses a library of data to produce outputs and is not capable of quantifying the deeply subjective human mind and its connections. Should we then be using it to support research analysis, which can sometimes be just as subjective? We believe it is crucial that entering research data into the algorithm does not lead to researchers becoming too far removed from the most important factor — their users. There is a risk that the intent of the experience shared by users is lost in an analysis conducted by AI. This is where the superpower of researchers is critical — making sure that people’s lived experiences are treated with empathy and dignity.
3. The ‘threat’ of AI — what will be the impact on the value of user research?
It is still not uncommon for researchers to have to ‘sell’ their practice to prospective clients. By crediting ChatGPT’s potential powers and benefits in research, do we run the risk of diminishing the perceived value of the researcher? Could this collaboration with AI be misinterpreted as a replacement? These are questions that we will have to navigate and learn to tackle as ChatGPT enters our sphere, to ensure UR as a human-centred practice is respected and valued. AI by itself will not replace humans doing their jobs; humans using AI, however, may take over those jobs.
The Future
As user researchers, we understand the caution around the use of ChatGPT as a language model, as our jobs revolve around understanding the full range of the human experience. ChatGPT cannot perceive the non-verbal cues and feedback that are crucial to understanding these behaviours. It can, however, be a valuable tool for generating insights from large datasets quickly. We should keep in mind its limitations and review the information generated by the algorithm.
As two technology nerds, we are excited to see how the power of artificial intelligence tools like ChatGPT can transform our craft and improve the products and services that we create.
Over the coming months, we will continue to experiment with ChatGPT in various hypothetical scenarios and activities. Please share your own experiences with us so we can learn from each other. We are particularly excited about the latest release, GPT-4, where the algorithm can process image as well as text inputs, and the launch of Microsoft 365 Copilot.
Special thanks to Avik Kar, Florian Mayer, Zaynah Hawa and Tamanna Akther for their insights.