Future of UX and UI Design

In this article I will try to predict the future of UX and UI Design. It will consist of two parts.

Mateusz Małys
Bootcamp

The first part will be dedicated to the new and emerging platforms we are designing for: wearables like smartwatches, and augmented and virtual reality. The second will cover the future of the tools and methods we all use to design interfaces and experiences.

You can also “watch” this article on my YouTube channel.

Current state of UX and UI Design and platforms

Over the last 15 years our job as designers has become more and more complicated. The first major disruption in the field came in 2007 with the introduction of the first iPhone, followed by the popularisation of smartphones with fully fledged apps and a “true” Internet experience.

Macworld Conference & Expo 2007, San Francisco: Steven P. Jobs presents the first iPhone, source: Wikimedia

From this point forward, we had to think about and prepare our designs in the context of new platforms, and create new ways of designing flexible layouts that would work on both computers and smartphones. This led to the birth of an entirely new specialisation, mobile design, and to new approaches to our work such as adaptive and responsive design.
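To make the idea of flexible layouts concrete, here is a minimal sketch of breakpoint handling in TypeScript using the standard window.matchMedia browser API; the 768px threshold and the layout names are illustrative assumptions rather than recommendations.

```typescript
// Minimal breakpoint watcher: switches between a phone and a desktop layout.
// The 768px threshold and the layout names are illustrative assumptions.
type Layout = "phone" | "desktop";

const desktopQuery = window.matchMedia("(min-width: 768px)");

function applyLayout(isDesktop: boolean): Layout {
  const layout: Layout = isDesktop ? "desktop" : "phone";
  // A real app would swap navigation patterns, column counts and so on;
  // here we just expose the current layout as a data attribute for CSS.
  document.body.dataset.layout = layout;
  return layout;
}

// Apply once on load, then react whenever the viewport crosses the breakpoint.
applyLayout(desktopQuery.matches);
desktopQuery.addEventListener("change", (event) => applyLayout(event.matches));
```

In practice most of this lives in CSS media queries; the point is simply that one design now has to describe several layouts and the rules for switching between them.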

Responsive Design

It wasn’t just the resolution or screen space that we had to adapt to; it was an entirely new approach to interactions between users and their devices. It’s also worth mentioning that over those 15 years, the total number of users with access to the Internet more than doubled, from around 1.3 billion in 2008 to over 3 billion today.

Share of the population using the Internet, source: Wikimedia

That growth led to the emergence and continued refinement of UX and UI patterns, most often referred to as “best practices”: in simple terms, guidelines for designing common usage scenarios and interactions (like input forms) that are easily recognisable by users and give them an optimal experience.
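As one small example of such a pattern, here is a sketch of inline form validation in TypeScript: check the field when the user leaves it and show a specific, human-readable message right next to it. The element IDs and the error copy are illustrative assumptions.

```typescript
// Inline-validation pattern: validate on blur and surface a specific message
// next to the field. The element IDs are illustrative.
const emailInput = document.querySelector<HTMLInputElement>("#email")!;
const emailError = document.querySelector<HTMLElement>("#email-error")!;

function validateEmail(value: string): string | null {
  if (value.trim() === "") return "Please enter your email address.";
  // Deliberately simple check; real projects usually lean on the built-in
  // validation of input[type=email] or a well-tested library.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)) {
    return "This doesn't look like a valid email address.";
  }
  return null;
}

emailInput.addEventListener("blur", () => {
  const error = validateEmail(emailInput.value);
  emailError.textContent = error ?? "";
  emailInput.setAttribute("aria-invalid", error ? "true" : "false");
});
```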

Next came tablets and smartwatches, which complicated our work even further. Smartwatches left us with even less space to work with, to the point where even the simplest tasks became genuinely challenging to design for. That forced us to adopt and propose new ways of interacting, like voice control and gestures.

Woman controlling home devices with voice commands, source: Envato Elements

Then we have VR and AR, where the boundary between physical and digital space starts to blur. But even in AR and VR we still rely mostly on existing concepts like buttons, content boxes, text inputs and virtual keyboards, simply adapting them to the new platforms. Sometimes you encounter gestures or voice controls, but in the majority of cases we rely on existing patterns.

Virtual environment — same “old” keyboard for inputs, source: Steam

Present day & Future of UX and UI Design

Finally, we have arrived at the present day. Smartphones, smartwatches and other personal devices are well established, as are the patterns we use in our designs.

AR and VR are gaining traction and becoming more and more sophisticated. For the majority of interactions with our devices we rely on our sight, and I don’t think that will ever change. But with the rise of companies like Elon Musk’s Neuralink, we are on the very edge of controlling our devices with our minds.

Neural Implant and Electrode Array, source: Neuralink

I think that in the future we will still rely on visual interfaces; the question is where those interfaces will be displayed. Will it be on wearable devices like glasses or contact lenses? Directly on our retinas through implants? Or maybe there won’t be a need to display anything at all, if implants in our brains simply create an interface in our thoughts?

Augmented Reality Glasses — Microsoft HoloLens 2, source: Microsoft

I assume that not everyone will be willing to implant something into their body, so I predict that physical screens will stay with us for a long time.

Future of Design Tools

Now that we have covered the future of interfaces and interactions, let’s move on to the ways we will design them. The biggest concern I keep hearing about is the risk of designers being replaced by A.I. I think that for now, and for the near future, we are safe.

UX design is just too complex to be fully automated. Machines would need to understand the intent, reasoning and goals we want to achieve through our designs.

They would need to combine various data inputs and propose and test flows or interfaces, which is simply beyond current technology. But I can see a future where some of our tasks as designers are enhanced or automated by A.I.-based tools.

A.I. could, for example, help us perform research by analysing the data we feed it, or help us build interfaces by generating them from our low-fidelity sketches. Some tools of this kind already exist on the market, and I imagine this trend is just getting started.

With a service like Uizard you can translate your low-fidelity wireframes into a mockup of an app in minutes.

Headlime can write copy for your designs, as well as for other things like ads.

Remove background can remove the background from any photo, and Fronty can generate HTML and CSS from a single, flat image.

Generated Photos allows you to generate people’s faces with A.I.

Finally, Generated Photos lets you generate people’s faces with A.I. so you can use them in your projects, worry-free.
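To illustrate how a tool like Uizard could plug into a design workflow, here is a minimal TypeScript sketch that uploads a low-fidelity sketch to a generation service and saves the returned mockup. The endpoint URL, form fields, parameters and response shape are all hypothetical assumptions, not any real product’s API.

```typescript
// Hypothetical sketch-to-mockup client (Node 18+). The endpoint, form fields
// and response shape are illustrative assumptions, not a real vendor API.
import { readFile, writeFile } from "node:fs/promises";

interface MockupResponse {
  // Assumed shape: the service returns a URL to the generated mockup image.
  mockupUrl: string;
}

async function generateMockup(sketchPath: string, outPath: string): Promise<void> {
  const sketch = await readFile(sketchPath);

  const form = new FormData();
  form.append("sketch", new Blob([sketch], { type: "image/png" }), "sketch.png");
  form.append("style", "high-fidelity"); // assumed parameter

  // Hypothetical endpoint standing in for a Uizard-like service.
  const response = await fetch("https://api.design-ai.example.com/v1/mockups", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.DESIGN_AI_TOKEN}` },
    body: form,
  });
  if (!response.ok) throw new Error(`Generation failed: ${response.status}`);

  const { mockupUrl } = (await response.json()) as MockupResponse;
  const mockup = await fetch(mockupUrl);
  await writeFile(outPath, Buffer.from(await mockup.arrayBuffer()));
}

generateMockup("wireframe.png", "mockup.png").catch(console.error);
```

The designer still decides what to feed in and which generated result to keep; the tool only shortens the distance between a rough sketch and something testable.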

I think that in the future we will use a set of A.I.-powered tools to automate or enhance our design processes, with us as designers acting more as operators who guide machine-generated outcomes.

Unless someone comes up with an A.I. with consciousness, I think we can sleep without fear for our jobs or… our lives.
