My Exploration of Emerging Tech in Interior Design: The Research Bits
Introduction and context
Before the year ends, I wanted to work on another extended reality (XR) related side project. (P.S. XR is an umbrella term for AR/VR/MR.) So I looked within my daily life for inspiration. As a growing adult, I am approaching the life stage where I need to think about housing. In Singapore's context, that means getting a flat from the Housing & Development Board (HDB). These flats come either as build-to-order (BTO) units or resale units. Either way, they likely need to be renovated.
With XR becoming more mainstream through the introduction of the Meta Quest 3 and the Apple Vision Pro, and with generative AI (genAI) getting better and better, I started to wonder whether there are opportunities to tap into in the interior design and renovation space. Instead of replacing interior designers (IDs), could we empower them?
How I carried out user research
Creating a wireframe from competitor research
First, I carried out some light competitor research on existing tools and came up with some rough ideas, which took the form of the wireframe you see below.

The initial idea was to help IDs go from 0 to 1 with the use of genAI as well as communicate their vision to prospective homeowners (let’s call them clients) via XR.
The ID would feed in a generic floor plan along with mood images from Pinterest, and a concept would be generated as a 3D model. This 3D model could then be edited until the ID is satisfied. Subsequently, they could pan around and view the model on desktop, or view it on an XR headset with a hand UI to switch between different rooms. This would then be shared with the clients.
Although XR is getting more mainstream, clients may not own an XR headset. Therefore, I imagined the ID or the firm being the one with the headset. They would bring it to the meeting and let the client experience an immersive version of their future home. The clients could also leave asynchronous feedback via comments, akin to Figma.
Methodology
I set out to do some user research on this topic, trying to understand the following things:
- What does an interior designer’s design process look like when renovating a house?
- What do they think of using an XR solution to show their clients?
I carried out 4 semi-structured user interviews, each with an interview portion to understand their process and a concept test to get their feedback on this wireframe.
These interviewees were recruited via my network: 3 of them were architects with interior design experience and 1 was an interior designer herself.
Insights
After analysing the results via affinity mapping, 3 major insights emerged.
Insight 1: IDs and clients have multiple meetings until the design is finalised

The interior designer would have multiple meetings with the client. The 1st meeting tends to be a non-committal one for the ID to understand the clients’ vision for their future home, along with nitty-gritty details such as the key collection date for their HDB flat. The ID would also present some initial ideas on the layout of the house and how to optimise the space, usually in the form of a 2D floor plan.
In the 2nd meeting, the ID would start showing the client mood images (usually collated from Pinterest) to get an idea of the aesthetic they want. They would also collate a material board, which, as the name implies, consists of the materials used for tiling, walls and different parts of the house. They would show another iteration of the proposed design, together with a detailed quotation of the renovation cost for the client to accept or decline.
After the client accepts, more detailed designs are done to figure out the nitty-gritty of the renovation. This is where 3D models and their renders (detailed, realistic images produced from the model) come in. Simultaneously, the ID would apply for the renovation permit and actually start the renovation work. This is when they start wearing the hat of the project manager, working with different contractors to build different parts of the house: lighting, furniture, kitchens, toilets and what have you.
Insight 2: Clients tend to struggle to visualise 3D spaces from a 2D floor plan

As mentioned before, the IDs would show the client a 2D floor plan of their proposed layout. IDs and architects often work with 2D drawings and have a lot of practice translating them into 3D space, but clients do not have this practice. Therefore, clients tend to struggle to visualise the 3D space from the floor plan.
So, why not just make a 3D model? That should help with the visualisation, right? Well, 3D modelling, especially when done properly, is a manual and time-consuming job. This extra effort means IDs have to go back and take extra time to do up the model. In fact, they would even charge an additional fee to show realistic 3D renders of the proposed house.
So you have a 2D plan, then you must convert it into 3D. So, you have to model each element, figure out how to model them and then actually model it in a way that can be used later
However, they might show their clients a non-rendered view of their 3D model. Their existing tools, such as Google Sketchup, allow them to have a first-person view and to pan around. This is usually done off the record, where clients cannot take pictures of this non-rendered 3D view.
Due to the back-and-forth nature of client-ID meetings, the participants mentioned wanting to generate options and reflect changes quickly for feedback. This would allow them to validate ideas on the fly, which would be especially useful since new ideas might come up during discussion, whether trying different colours, materials or even layouts.
(After presenting to them) Then they will come back with their feedback. Yeah, so they might say oh, they like this direction. They don’t like this direction because of a certain thing. Sometimes during the discussion, there are ideas that come up.
Furthermore, participants suggested that XR technologies would be even better for visualisation, as they would give clients the opportunity to perceive their height and relative position in the space. And as XR headsets become more mainstream, clients would be increasingly receptive to wearing one and trying it out. In fact, participants said it would feel like a value-added service to the client. And of course, if the client does not want to wear the headset, a desktop or tablet version would be a good fallback.
(Regarding the ID preparing an XR headset for the client to visualise the house) If I am to serve my customer like that, if I were the customer, I can instantly see what my house would look like, whether I like it or not.
Insight 3: Don’t truncate the process!

In the original wireframe, I went from feeding in an empty floor plan and some images straight to a 3D model with the materials and colour scheme applied to the furnished layout. I learnt from participants that this was a huge leap in logic, as they normally see the process as 2 separate steps:
- Figure out how to best furnish and optimise the space (usually via a 2D floor plan)
- Figure out the theme, i.e. materials and colours, and how to apply it to the layout (usually a mood board, a material board and even 3D renders are needed here)
It is the same for like SIMs. You build the house you like first, bare or whatever. You roughly have an idea where the bed’s gonna go but you don’t choose the bed, nor do you choose the wallpaper until the very end right?
They would want to explore different options for the layout, such as what happens if you were to hack down a wall, without applying the theme. Applying the theme can come in a later step.
Furthermore, when shown the wireframe, participants saw the tool as a starting point, not something for the whole process. They would rather stick to the tools they are already comfortable with, such as Google Sketchup, for their 3D modelling.
Finalised user stories
With that, I finalised the insights into 2 user stories, each with multiple HMWs (how-might-we statements) that fall under it.

Lessons learnt
Recruiting fails
To recruit participants for the user interview and concept test, I first tried cold emailing a bunch of ID firms. To the surprise of no one, none of them got back to me.
What ended up being more successful was tapping into my existing network. I had friends who were architects, so I checked whether they had interior design experience. These were my first-degree connections. Another effective tactic was sending cold connection requests with personalised messages on LinkedIn. If there are mutual connections (making them second-degree connections), you can mention that in your personalised message. This makes the outreach more authentic and more likely to succeed.
AI & affinity mapping: NOT a match made in heaven
Having translated my transcripts into sticky notes, I wanted to try FigJam AI’s sort stickies feature for affinity mapping, mainly because it advertises being able to group sticky notes by meaning using generative AI. However, the groups it produced were unusable.

The groupings were far too generic for any useful analysis. What’s more, doing the grouping yourself as the researcher helps to reinforce your understanding of what the participants said. Therefore, I did it the manual way. Perhaps some things should not be automated by AI.
Wrapping up
It was an eye-opening experience talking to all these people and learning about this problem space. In early 2024, I’ll start working on design explorations to solve some of the problems identified here, which will serve as part 2 of this project. See you then, and thanks for reading!