The digital interfaces in ‘Moonshot’ are nice to look at, but implausible from a UX perspective

Moonshot, HBO Max’s latest offering, is an outer-space rom-com about the human desire to find a place in the universe. The film is unexpectedly existential, drawing an interesting analogy between human settlers on Mars and the Pilgrims in Massachusetts. But for a film whose Martian-settlement premise feels within reach, the digital interfaces it showcases are strikingly implausible.
Now, let me preface this by saying that Moonshot’s imagined technology of the future isn’t meant to be analyzed too seriously. As a rom-com, the movie’s main goals are to entertain and to make for a fun date-night watch. But because I spent the past year working as a UX Designer, I couldn’t help but view the film’s speculative technology through a critical lens.
So let me set the scene. The year is 2049. Mars is terraformed, and citizens of Earth can book a one-way ticket to start a new life on the red planet for the small price of $1 million. Walt (Cole Sprouse) and Sophie (Lana Condor) both take the plunge. He’s a laid-back wanderer who wants to voyage into space to find himself, and she’s a type-A scientist who needs to get over her aerophobia to be reunited with her boyfriend and foster family. What follows is your typical enemies-to-lovers romance.
Back to UX: one device with no discernible value is an orb that Sophie and her longtime boyfriend Calvin (Mason Gooding) each own. The orb contains a holographic, Bitmoji-like avatar that serves as a stand-in for the real person. While the orb looks cool, it offers no real user value. Why would someone so far away from their partner of eight years choose to reminisce with a Bitmoji instead of just looking at a photograph? Stranger still, the avatars speak in digitized chipmunk voices, hardly a viable substitute for an authentic human conversation. If Sophie really missed Calvin’s voice, calling him or watching videos of him would be a much better option.
We don’t get to see much of Sophie’s orb in action, since Walt accidentally breaks it during their first meeting. That’s not to say the orb isn’t conceptually a good idea; it definitely has potential. If I were moving to Mars and leaving behind a holographic device for my parents to remember me by, I would 1) base the hologram on motion capture of my face, drawn from video and photographs, and 2) use my own voice as the system’s voice. I would also record signature responses of things I often say, personalizing the device for my parents.
Gary is a particularly fascinating machine featured in the movie. Users communicate with it mainly through voice. “Gary” seems to be the wake word, but unlike Amazon’s Alexa, Gary allows multi-turn conversations by default: conversations in which Gary remains “on” and actively listens for a response instead of forcing the user to repeat the wake word. The system’s persona is also intriguing: Gary sounds like a middle-aged British man with an appetite for wit and a tendency toward passive-aggression. Why use a British persona on an American college campus? There are no clear answers, but I’d like to believe the film’s robot creator, Dan Sudick, made this decision on purpose. Consider that Gary works as a barista at a university coffee shop. Since coffee shops are popular study spots, a robot mirroring the “brilliant British professor” archetype (think Oxford) could serve students well. Perhaps Gary doubles as an on-demand professor whom students can question outside office hours, with knowledge spanning the university’s course offerings. Now that would be a useful way to extend students’ learning beyond the classroom. Not to mention it would be super cool.
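To make that interaction pattern concrete, here is a minimal sketch in Python of the difference between a single-turn, wake-word-per-request assistant and a Gary-style assistant that keeps the conversation open after it replies. The names, the eight-second follow-up window, and the canned responses are all my own assumptions rather than anything the film establishes.

```python
# A minimal sketch, not from the film: a single-turn assistant needs the wake
# word before every request, while a multi-turn assistant keeps listening for
# a short window after it replies. All names, timings, and responses are
# hypothetical.

import time

WAKE_WORD = "gary"
FOLLOW_UP_WINDOW = 8.0  # seconds the mic stays "hot" after a reply (assumed)


def respond(utterance: str) -> str:
    # Stand-in for speech recognition plus dialogue management.
    return f"(Gary considers: {utterance!r})"


def single_turn(utterance: str) -> str | None:
    """Alexa-style: ignore anything that doesn't start with the wake word."""
    if not utterance.lower().startswith(WAKE_WORD):
        return None
    return respond(utterance)


class MultiTurnAssistant:
    """Gary-style: once woken, stay in the conversation until it goes quiet."""

    def __init__(self) -> None:
        self.listening_until = 0.0

    def hear(self, utterance: str) -> str | None:
        now = time.monotonic()
        woken = utterance.lower().startswith(WAKE_WORD)
        if not woken and now >= self.listening_until:
            return None  # not addressed to the assistant
        # Keep the conversation open for the next follow-up question.
        self.listening_until = now + FOLLOW_UP_WINDOW
        return respond(utterance)


if __name__ == "__main__":
    gary = MultiTurnAssistant()
    print(gary.hear("Gary, one oat-milk latte, please"))  # wake word opens the conversation
    print(gary.hear("actually, make it two"))             # follow-up lands without the wake word
    print(single_turn("actually, make it two"))           # a single-turn assistant ignores this
```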
Gary’s sarcastic personality also complements Walt’s naïveté in a way that serves the film’s light-hearted tone well, but it probably wouldn’t fly in real life. In one scene, Gary uses his facial-recognition feature to tease Walt, informing him that a pretty girl Walt thought was looking his way was actually looking at Gary. While Walt doesn’t seem bothered by the sarcasm, other users might be.
I wonder whether Gary’s personality adapts to the user, or whether users can customize the interface to match their preferences. If someone as no-nonsense as Sophie worked at the coffee shop, for example, her personality would clash with Gary’s; for Gary’s use case, a voice persona with wider appeal would be important. In one scene, Sophie, already shaken after her boyfriend’s holographic Bitmoji breaks, wishes malware upon Gary. The robot retaliates by locking the doors of the coffee shop, trapping Sophie inside. Visibly affected by her words, Gary says to Walt, “I hope she fails to find long-term companionship and learns the pain of a solitary life.” Gary is no doubt entertaining, but with behavior like that, he risks harming people’s mental health.
In another scene, Walt uses facial-recognition software on Sophie’s tablet to analyze whether another character, Ginny, is into him. The software weighs factors like her smile, her nervousness, how wide she opens her eyes, and her heart rate to reach a conclusion. While products that claim to analyze human emotions may well appear in the future, I think they will die out pretty quickly. Why? Because AI is most useful at objective, logical, methodical tasks, the kind usually associated with the left brain. Emotions and other right-brain matters are too ambiguous and subjective for AI to predict. Sure, there are algorithms that can guess at behavior for advertising purposes, but I don’t think machines will match the intrapersonal knowledge and creativity that come with introspection anytime soon. For instance, I don’t think we are at a place where AI can listen to people’s stories and then reflect on its own upbringing to develop a work of art; that would require a depth of self-knowledge machines simply don’t have. There’s also no real use case for emotion-recognition software: why would someone trust a robot over real people with matters of the heart? People today would rather dissect their emotions with therapists or friends than with interfaces like Siri, Alexa, or Google Assistant, and I don’t see that changing. Because Walt doesn’t have any friends or family, he relies on talking with robots to make sense of his emotional confusion. Surely this leads to even more confusion, and perhaps even unintentional gaslighting, as he may be led to believe things that aren’t true.
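For what it’s worth, the analysis the film implies would reduce to something like the sketch below: a weighted sum over a few crude physiological proxies. Every signal, weight, and range here is my invention rather than anything shown on screen, and that arbitrariness is rather the point, since the same inputs could just as easily indicate nervousness, caffeine, or a cold room.

```python
# A deliberately naive sketch of the "is she into him?" scoring the film
# implies. All of the signals, weights, and ranges below are invented for
# illustration only.

from dataclasses import dataclass


@dataclass
class FaceReading:
    smile_intensity: float  # 0.0-1.0, from a hypothetical smile classifier
    nervousness: float      # 0.0-1.0, e.g. fidgeting or gaze aversion
    eye_openness: float     # 0.0-1.0, widened eyes
    heart_rate_bpm: float   # estimated remotely, however the tablet manages that


def interest_score(r: FaceReading) -> float:
    """Collapse ambiguous human signals into one tidy, overconfident number."""
    elevated_hr = min(max((r.heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    return (
        0.4 * r.smile_intensity
        + 0.2 * r.nervousness
        + 0.2 * r.eye_openness
        + 0.2 * elevated_hr
    )


if __name__ == "__main__":
    ginny = FaceReading(smile_intensity=0.8, nervousness=0.6,
                        eye_openness=0.7, heart_rate_bpm=92)
    print(f"interest: {interest_score(ginny):.0%}")  # nervous? excited? chilly room? the score can't tell
```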
Aside from the futuristic gadgets and gizmos, the movie features existing technology such as cell phones and video-calling apps. But even the reinterpretations of these classic graphical interfaces are incohesive. Walt’s phone lock screen crams text in four different font sizes onto a roughly 3-inch display, which is simply unattractive. The sparsely spaced buttons are also no bigger than about 20 points in diameter; for reference, Apple’s Human Interface Guidelines recommend tappable targets of at least 44×44 points, so I can only imagine how many mis-taps this layout would cause. Shrinking the usable screen real estate only limits what each feature can do, serving no purpose for the user. There is no button to share or edit videos either, two very useful functions that I don’t see going anywhere anytime soon. And the phone’s video-calling app contains just three buttons: rewind, fast forward, and pause/play.
The video player’s visual design also differs greatly from that of Sophie’s tablet, even though the devices appear to come from the same company. The Earth-Mars video-calling interface, in contrast, is classically futuristic, with semi-transparent blue parallelogram buttons setting the new-age tone. However, the ‘delete’ and ‘call’ buttons on the contact cards are virtually microscopic and sparsely spaced, so I’d be surprised if users didn’t wipe out their contact lists in a matter of seconds.
The only technology referenced in the film that feels genuinely marketable is a hypothetical one: Sophie’s idea for an algae-based waste-reduction product, which we hear her mention but never see. The digital products we do see are more concerned with aesthetics than with usefulness and usability.
The design aesthetics of the movie’s technology are almost as all over the place as its functionality. Sophie’s orb, for example, looks like a cloudy crystal ball that glows in saturated rainbow colors, just like party strip lights. While it looks futuristic, like 2049’s equivalent of retro, the rest of the movie’s technology draws on design influences spanning five other decades, from the 1980s to the 2020s, resulting in an eclectic hodgepodge of styles. Gary is like a talking beige Xerox machine with a graphical user interface styled after an ’80s arcade game. Sophie’s desktop keyboard is dull, with bulky, cube-shaped keys. And the video player on Walt’s phone looks like the one on my LG Chocolate from 15 years ago.
The interfaces shown are highly unlikely to sway your opinion of the film as a whole. Still, it’s clear that director Christopher Winterbauer cares about technology, so going the extra mile and working with a UX Designer to guide the designs wouldn’t have hurt. A greater emphasis on the user experience wouldn’t have changed how we view the film, but it could have offered a more compelling look at technology’s future, letting us ruminate on the possibilities of our world.