AffinityGPT
An AI UX researcher
for Affinity Diagramming
Aug 2022
4 weeks
Team
Solo Project
Figma Plugin
AI Development
Web Development
Tools
Figma
ChatGPT
Next.js
Goal
To make data-driven decisions, UX researchers regularly face the demanding task of navigating vast qualitative data sets. Recognizing the potential of generative AI to ease that burden, I embarked on this project to explore the possibilities it could unlock.
Design and deploy the solution entirely from scratch.
Timeline
Total Duration: 4 weeks
Solution
Simply drag all the sticky notes and hit the button. The generative AI will automatically cluster and categorize the data.
Users can instantly generate sample data with a single click, making it ideal for testing and verification purposes.
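To make this concrete, below is a minimal sketch of the plugin side, assuming the board is a FigJam file where sticky notes live. The helper names are hypothetical; only the figma.* calls come from the Figma Plugin API.

```ts
// Sketch of the plugin's main thread (hypothetical helper names; figma.* calls
// are from the Figma Plugin API for FigJam boards).

// Collect the text of every sticky note the user has selected.
function collectSelectedStickies(): string[] {
  return figma.currentPage.selection
    .filter((node): node is StickyNode => node.type === "STICKY")
    .map((sticky) => sticky.text.characters);
}

// Create a handful of sample sticky notes for testing the clustering flow.
async function createSampleStickies(samples: string[]): Promise<void> {
  // Sticky text uses Inter Medium by default, which must be loaded before editing.
  await figma.loadFontAsync({ family: "Inter", style: "Medium" });
  samples.forEach((text, i) => {
    const sticky = figma.createSticky();
    sticky.text.characters = text;
    sticky.x = i * 260; // lay the samples out in a row
  });
}
```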
Research
To gain insight into the research workflow and its associated challenges, I conducted one-on-one, semi-structured interviews with five UX researchers and professors, each lasting 30 minutes.

To understand the potential and limitations of various generative AI frameworks, I conducted a comprehensive study, evaluating each for its practical application in UX research.
Key Takeaways
Ideate
Video source: Nielsen Norman Group
Affinity diagramming demands significant data analysis and clustering, both time-intensive and effortful tasks. Because AI can expedite exactly this kind of work, affinity diagramming was the prime candidate for this project.
Both Miro and Figma are popular platforms among researchers. Figma, in particular, has a robust plugin ecosystem already in use by many designers. Given its widespread adoption and the potential for real-world deployment, I opted to build a Figma plugin.
Design
AI can automate many tasks from a single input. To build on this strength, the user flow was designed to minimize user actions: the entire affinity diagramming process takes only two steps.
Consistent with this philosophy, the UI was pared down to its essentials: each screen is dedicated to a single, clear task and requires no more than two taps.
Develop
To guarantee smooth transitions from user input to meaningful output, I built a dedicated web server with Next.js and TypeScript, deployed on Vercel.
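As an illustration, the clustering endpoint might look like the Next.js API route below. The route path, request shape, and the clusterNotes helper are assumptions made for this sketch, not the actual implementation; clusterNotes itself is sketched after the next paragraph.

```ts
// pages/api/cluster.ts — illustrative Next.js API route (names and request
// shape are assumptions, not the project's actual code).
import type { NextApiRequest, NextApiResponse } from "next";
// Hypothetical module wrapping the LLM call; see the LangChain sketch below.
import { clusterNotes } from "../../lib/clusterNotes";

interface ClusterResponse {
  clusters: { label: string; notes: string[] }[];
}

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse<ClusterResponse | { error: string }>
) {
  if (req.method !== "POST") {
    return res.status(405).json({ error: "Method not allowed" });
  }

  const { notes } = req.body as { notes: string[] };
  if (!Array.isArray(notes) || notes.length === 0) {
    return res.status(400).json({ error: "No sticky notes provided" });
  }

  const clusters = await clusterNotes(notes);
  return res.status(200).json({ clusters });
}
```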
Using the ChatGPT API and the LangChain library, the server refines user input into a detailed prompt, ensuring that AI responses are not only accurate but also user-friendly and easy to understand.
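Here is a minimal sketch of that prompt-refinement step using the classic LangChain JS LLMChain pattern. The prompt wording is an assumption, and the import paths vary across LangChain versions.

```ts
// lib/clusterNotes.ts — sketch of prompt construction with the classic
// LangChain JS API (import paths differ in newer versions).
import { ChatOpenAI } from "langchain/chat_models/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

// Illustrative prompt; literal braces are escaped as {{ }} in LangChain templates.
const prompt = PromptTemplate.fromTemplate(
  `You are a UX researcher running an affinity diagramming session.
Group the following sticky notes into themed clusters and name each cluster.
Respond with JSON: [{{"label": "...", "notes": ["..."]}}].

Sticky notes:
{notes}`
);

export async function clusterNotes(notes: string[]) {
  // Reads OPENAI_API_KEY from the environment.
  const llm = new ChatOpenAI({ temperature: 0 });
  const chain = new LLMChain({ llm, prompt });
  // LLMChain returns { text: string }; we expect the model to emit JSON.
  const { text } = await chain.call({ notes: notes.join("\n") });
  return JSON.parse(text) as { label: string; notes: string[] }[];
}
```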
The Figma plugin then parses the structured JSON returned by the server and lays the results out on the canvas, so users receive their insights in a visually organized way.
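For example, given the JSON shape from the server sketch, the plugin could lay each cluster out as a labeled column of stickies. This column layout is just one possible rendering, not necessarily the exact one shipped in the plugin.

```ts
// Sketch of rendering clustered results on the FigJam canvas; the JSON shape
// mirrors the server sketch above, and the layout is one possible design.
interface Cluster {
  label: string;
  notes: string[];
}

async function renderClusters(clusters: Cluster[]): Promise<void> {
  await figma.loadFontAsync({ family: "Inter", style: "Medium" });

  clusters.forEach((cluster, col) => {
    const x = col * 320;

    // Label the cluster with a sticky at the top of its column.
    const header = figma.createSticky();
    header.text.characters = cluster.label;
    header.x = x;
    header.y = 0;

    // Stack the cluster's notes beneath the label.
    cluster.notes.forEach((note, row) => {
      const sticky = figma.createSticky();
      sticky.text.characters = note;
      sticky.x = x;
      sticky.y = (row + 1) * 260;
    });
  });

  // Bring the freshly created diagram into view.
  figma.viewport.scrollAndZoomIntoView(figma.currentPage.children);
}
```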
User Test
The Figma plugin's compact UI made it difficult to present comprehensive information. Based on user feedback, the UI was refined to guide users more clearly and intuitively at each step.
A notable limitation of the AI application is loading time: a server response can take up to 30 seconds, which risks user disengagement. The solution was to enhance the loading screen to stream data in real time, keeping users engaged and informed while they wait.
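As a rough sketch, the plugin's UI layer could consume a streamed response with the standard fetch and ReadableStream APIs; the endpoint URL and the updateLoadingScreen helper are placeholders.

```ts
// Sketch of the plugin UI reading a streamed response so the loading screen
// can show progress (endpoint URL and message handling are assumptions).
async function streamClusteringProgress(notes: string[]): Promise<string> {
  const response = await fetch("https://affinitygpt.example.com/api/cluster", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ notes }),
  });

  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffered = "";

  // Read chunks as they arrive and surface them on the loading screen.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    updateLoadingScreen(buffered);
  }
  return buffered;
}

// Hypothetical helper that writes partial output into the loading view.
function updateLoadingScreen(partial: string): void {
  document.getElementById("progress")!.textContent = partial;
}
```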
Reflection
The AI revolution is just beginning.
The AI revolution in UX research is only beginning. Large Language Models (LLMs) like ChatGPT are introducing a new era of human-computer interaction. Instead of treating machines purely as tools, we're now approaching a phase where we can genuinely collaborate and seek inferences from them. Here are some reflections from the project:
Broadening AI's Role in UX Research:
I chose affinity diagramming as the method for integrating AI, and the project's success demonstrated its effectiveness. But AI's potential isn't limited to this one method; it can be applied to other UX research methods too. For instance, AI can help evaluate interview questions, create sample responses, and even craft personas. In line with this, I've started another project, PersonaGPT, which aims to generate product personas from just a single input.
The Importance of Reliability and Stability:
While AffinityGPT produces fascinating outputs, these results aren't always expert-validated and can sometimes fall short. Hallucination is a current limitation of LLMs that affects their trustworthiness, but this can be overcome with time and ongoing advancements. And while it wasn't tackled in this project, fine-tuning the model in the future could further ensure stability and accuracy.