How designers could code (in the future)
Have you noticed how complex Figma projects are becoming nowadays? You need color styles, typography styles, visual effects styles, components, AutoLayout, and animations for the prototype. And all of these things will also be present in the project code, in the same way you described them in Figma!
This led me to an idea: why couldn’t we involve a designer in writing some of the code connected with the interface, to simplify this process? It could turn out to be a reasonable investment, saving time not only on “double coding” the visuals of a new feature or screen, but also on updating old ones and documenting the design.
In this small article, I’d like to fantasize about how designers could code interfaces. I believe modern front-end frameworks and libraries will evolve to the point where anyone even shallowly involved in programming can do work within their competence in a single yet discrete space.
I am going to use an imaginary programming language (resembling SwiftUI) in my examples to demonstrate my thoughts.

Level 0. Colors and Typography
If you know a bit of programming, you can guess these are just variables or simple class objects. This amount of work could easily be done by a programmer, so there’s no need to involve a designer here to create and deliver prototypes faster: copying ten lines of code doesn’t seriously change anything.
Still, to move further, we should acknowledge that any designer might save their styles as variables in a separate file, accessible globally in a project. Here’s an example of how I see this being written in code.
// Generally, create a structure fully copying Figma's
// representation of text styles, and save it to a variable
let title1 = TextStyle(
    fontSize: 32,
    lineHeight: 1.1,
    letterSpacing: -0.02
)
// Create a group of associated colors, which is also possible
// in Figma. But in code it is more accessible, controllable,
// and very easy to edit and refer to. You don't have to look
// through the styles hierarchy in a GUI list and tap multiple
// buttons to edit something.
let black = ColorStyleGroup(
    name: "black",
    [
        ColorStyle(
            name: "primary",
            color: "#000",
            opacity: 0.9,
            comment: "Use it for all main texts which are not titles"),
        ColorStyle(
            name: "secondary",
            color: "#000",
            opacity: 0.6,
            comment: "Use it for titles or captions")
    ]
)
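Once the styles live in a globally accessible file, applying them could be a one-liner. Here’s a minimal usage sketch in the same imaginary syntax (the .textStyle and .colorStyle modifiers are the same ones the later examples rely on):
Text("Hello, designers!")
    .textStyle(.title1)         // the text style declared above
    .colorStyle(.black.primary) // a color style from the group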
Level 1. Layout
Two years ago, when I was working in a freelance studio, I tried coding some screens of my interface design in Flutter to present the prototype to the client faster. And it worked fine! I couldn’t handle routing, animations, and state management at that time, but it was pretty easy for developers to add them to the static prototype I prepared. Needless to say, before producing any sane code I spent some time just exploring the new framework and our team’s practices. However, you don’t commonly change your whole development stack while working on the same product, so as a designer on a stable team, you could learn to code interfaces using one technology, and implement the visual part of your interfaces faster next time!
Let’s imagine the ideal world:
- You’re done with sketching, and your customer (or product manager) is okay with the solution. They cannot wait to see a prototype!
- Open VS Code and code some responsive interfaces. It should lead to the same result as some upgraded Figma AutoLayout would, and it shouldn’t be harder than that.
- Tag your product manager to unblock the coding tasks. Your team can now see and use the implemented design.
- Oh no!!! Seems like you need to change your interface… How could you miss that detail!? Well, just open the project in VS Code again and fix it! Write down a change list for your team, and you’re done. No need to wait until your front-end developers are free to take a task on updating the screen… They implement logic! The visuals are not their job.
- What about versioning then? It is all on GitLab now! (or GitHub, whatever)
- Do you need to document your design? Surely yes, but now you can use comments in your code! And when your design is responsive, you don’t need examples, so a lot of documentation can be omitted.
Syntax
Let’s look at an Instagram app header and imagine how it could be coded.

Row(justify: .spaceBetween) {
    Row(gap: 8) {
        Logo()
        ArrowBottomIcon()
    }
    Row(gap: 24) {
        NewPostIcon()
        MessagesIcon()
    }
}
This example uses a declarative syntax, which makes it so easy to comprehend that you don’t even need to understand what is “behind the scenes” (the semantics). A lot of syntactic sugar is involved in the snippet, and I believe that if we want to see designers involved in coding, we need syntactic sugar. Swift and Kotlin have a lot of it, and they weren’t even designed for “non-tech guys”. The snippet above is 100% real in Swift, and completely replaces Figma’s AutoLayout. Why would you need to keep a full copy of your project in one place (Figma), when the very same responsive copy exists in code, and you completely understand it?
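To make the role of that sugar visible, here’s a hypothetical desugaring of the same header: roughly what the snippet might expand to if the language had no trailing-closure shorthand (the children: parameter is my assumption, not part of the examples above):
Row(justify: .spaceBetween, children: [
    Row(gap: 8, children: [Logo(), ArrowBottomIcon()]),
    Row(gap: 24, children: [NewPostIcon(), MessagesIcon()])
])
Both spellings describe the same layout; the sugared one simply reads more like a document outline.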
Level 2. Logic
We don’t only draw interfaces, we consider cases: “what if a button is pressed” or “what if the list is empty” and so on. To implement Figma’s properties or variants, we need conditional branching.
I love examples, so let’s look at the way logical conditions could be used.

if list.isEmpty() { // Check the condition
    Column() {
        Text("Notes")
            .textStyle(.title1) // Use the text style we created earlier
            .colorStyle(.black.secondary) // Use the color style
        Column(height: .fill) {
            Text("Seems like you don't have notes")
                .textStyle(.title4)
                .colorStyle(.black.primary)
            PrimaryButton("Create a note")
        }
    }
}
Here we meet the already familiar structure: columns (not rows this time), arguments explaining the behavior of the container, modifiers adding text and color styles, and an if-statement on the very first line checking whether some list (we’re going to talk about lists later) is empty.
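Just to make the branching complete, here’s a hedged sketch of what the opposite branch might look like in the same imaginary syntax; the NotesList component is hypothetical, and lists themselves are covered at Level 4:
else { // The list is not empty: show the notes themselves
    Column() {
        Text("Notes")
            .textStyle(.title1)
            .colorStyle(.black.secondary)
        NotesList(list) // a hypothetical component rendering the items
    }
}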
But I was also thinking of an interesting feature borrowed from Haskell: pattern matching.
// Let's imagine a structure describing a button
// while it is pressed. We add the keyword "where",
// and now this version of Button will be applied
// to pressed Buttons only.
struct Button(context: Context, text: String) where
    context.object.isActive {
    Text(text)
        .padding(.vertical, 12)
        .padding(.horizontal, 16)
        .cornerRadius(16)
        .background(.blue.light)
        // The first time we encounter something new
        // which is not present in Figma's toolkit.
        // Here I add an action to the button to be
        // launched by clicking. Here (and only here) our
        // design meets the logical part of the app.
        .onClick(context.object.action)
}
This whole Level 2 is about implementing Figma, but in code. I demonstrated that it’s realistic to have an easy way to code a project’s screens and components, still using styles, but with all the advantages described at Level 1 available.
Moreover, I am not satisfied with how Figma works with components. Sometimes a component is just a wrapper over another component, representing it properly. But you are never allowed to control which parameters or modifiers are associated with a component; you can’t save and fix an essential list of modifications. Whereas in code you are free to implement whatever structure or logic you want.
This problem leads to vagueness while editing components, because you don’t know if the updated component will look correct on every other frame where you used it.
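For instance, the PrimaryButton used at Level 2 could be defined as exactly such a wrapper, with its essential modifiers fixed in one place. A minimal sketch in the same imaginary syntax (the style names here are my assumptions, not ones declared earlier):
// A wrapper over Button with a frozen set of modifiers.
// Anyone using PrimaryButton cannot accidentally change
// its text style or colors: they are fixed right here.
struct PrimaryButton(context: Context, text: String) {
    Button(context, text)
        .textStyle(.title4)
        .colorStyle(.white.primary)
        .background(.blue.primary)
}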
It is so fascinating for me to notice that the most useful and efficient techniques are borrowed into Figma from programming, and presented in a truncated, lightweight way. It always feels like balancing between “being a serious tool to build an interconnected complex system, a functioning prototype” and “being just storage for mood boards and quickly drawn sketches”. I feel like Figma is going to become much more complex, but at some point it is going to hit the ceiling.
So let’s go ahead to the next level of consciousness, currently impossible in Figma, but still having a lot to do with design.
Level 3. View Models
There is an idea that any screen (view) can be described by a data model, like a database. As an example: when you open a weather app, it shows you the current temperature, humidity, and wind speed, but if you want to see more, you open a bottom sheet with details. Behind the curtains, your app has probably already downloaded all the data it needs at the moment you opened it, and saved it in a model. The first screen (view) has its own (view) model, which is the data to be shown only on this specific screen.
A view model might differ from the data loaded from the Internet. In the example above, the view model for the first screen should contain the current temperature, humidity, and wind speed only, and nothing more. It may additionally hold the user’s name, or some advertising, which was surely downloaded from a different source.

Therefore, when we code a screen’s visual (the view), we use the data stored in the corresponding view model. That means referring to some real data instead of writing “Lorem Ipsum” templates. So when the design part is over, programmers just pass a correct view model to your view, and the prototype works as it should!
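To make this concrete, here’s a hedged sketch of what the weather screen’s view model from the example above might look like in the same imaginary syntax (the structure and field names are my assumptions):
// Only the data this specific screen shows, nothing more.
struct WeatherPageViewModel(
    temperature: String, // already formatted for display, e.g. "21°"
    humidity: String,
    windSpeed: String,
    userName: String,    // came from a different source than the weather
    ad: Banner?          // optional advertising, may be absent
)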
This approach is called MVVM: Model, View, ViewModel. I have already given some lectures on this topic, and this is my favorite picture.

What does it change in the design process? Generally, you need to be more specific: what data you want to see, where to locate it, and whether you need to show more or less than you expected. Usually, all the data each screen holds, and the user flow, are already defined after wireframing and negotiation with the development team, so I expect this to be a small but pleasant and effective addition to the development process.
Let’s see an example of this approach.
// Our view is called AppPage; we will use the view model
// created for this exact screen, the AppPageViewModel
struct AppPage(context: Context, viewmodel: AppPageViewModel) {
    Column() {
        Row(justify: .spaceBetween) {
            // Extract the user's avatar from the view model
            UserAvatar(viewmodel.userAvatar)
            Button("Edit")
        }
        // Some view code here
        // And the last line, where the view model defines the
        // background color of the whole view. For example,
        // depending on the time or weather.
    }.background(viewmodel.background)
}
So the snippet above is not much more complicated than the Level 2 code examples. You still don’t need to compute or calculate anything; all the data you need is stored in the view model: colors, pictures, texts… Even other views could be stored in the view model.
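For completeness, here’s a guess at what AppPageViewModel itself might contain, matching the fields the view above refers to (the types are my assumptions):
struct AppPageViewModel(
    userAvatar: Image, // shown in the header row
    background: Color  // chosen by the logic, e.g. by time or weather
)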
Level 4. Lists
There are a lot of situations when we need to show lists. Each programming language has a structure for storing homogeneous objects sequentially, usually called a list or an array. When you need to go through the elements of a list, you could use a for-loop, but following declarative programming principles, we’d better use map.
Map is a function applying a transformation to each element of a sequential collection. The transformation in our case is the view we need to obtain for each element of a list. The sequential collection is some raw data.
If you have a list of names, then each element is probably a word, which programming languages perceive as a string. We need to transform each string into a view, so that we get a list of views, which can be put into a Column, so that we see a regular list of words located vertically.

Column() {
    // Now let's use the map function, converting
    // each name into a view. We show that we will
    // refer to each element in the list "names" as
    // "name" while describing the view.
    viewmodel.names.map() { name in
        // Here's the view we need to get. Just a Text.
        Text(name)
    }
}
You might ask what to do if you need to sort or filter the data. There certainly must be a trigger for this kind of behavior if the screen’s data is not shown that way initially. So when the trigger event occurs, the logical part of the front end processes it, and the final result is pushed to the view model, which triggers updating the view that uses this model. This is how the separation works: the logic is on one side and the view is on another, yet they are tied together via events and callbacks, which are practically invisible to a designer working on a view.
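A small sketch of how that separation might look from the designer’s side, in the same imaginary syntax (the sortAction callback is hypothetical): the view only fires an event and renders whatever the view model currently holds; the sorting itself happens in the logic.
Column() {
    // The button only announces the event; it does not sort anything.
    Button("Sort A-Z")
        .onClick(viewmodel.sortAction) // a callback supplied by the logic
    // The logic sorts the names and pushes them back into the view model,
    // so this map re-renders with the new order automatically.
    viewmodel.names.map() { name in
        Text(name)
    }
}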
The problem with coding screens now, in my opinion, is that each view is strongly coupled with its logic, which makes the code space (the file everything is written in) shared. For a developer, it means that an unskilled designer is going to intrude on the fragile, architectural, peer-reviewed order built by the front-end team (a reasonable fear). Meanwhile, the designer could easily ruin something, and generally gets too much responsibility assigned, accessing very critical parts of the app.
Level 5. Animations, scroll, and gesture events
Up to this point, I completely understood the basic idea behind my concept and the implementation itself. But animations and implicit events have many possible implementations, none of which, as far as I can imagine, achieves a perfect separation between logic and visuals.
Animations
Any animation can be presented as a transition from one state to another, and this is what is usually done in Figma out of the box, where each frame is a different state. If you need to make a button rotate on a click, or change a screen with a push animation, you just set transition properties for the whole frame, and voilà.
The problem arises when we have specific properties animated with different durations and timing functions. The paradigm changes a bit: now we perceive animations as transitions between states of properties, not whole frame states anymore. In prototyping tools like After Effects, or in plugins like Figmotion, we can just pin a parameter with a specific property on the timeline, while in code we need to assign those states by hand.
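Purely as a speculation, a per-property version of this might read as follows in the same imaginary syntax (the .animate modifier, Card, and isExpanded are my inventions, not something the earlier examples define):
Card()
    // Each property carries its own duration and timing function,
    // mirroring pinning a parameter on a timeline.
    .opacity(isExpanded ? 1.0 : 0.6)
    .animate(.opacity, duration: 0.2, timing: .easeIn)
    .height(isExpanded ? 320 : 120)
    .animate(.height, duration: 0.4, timing: .spring)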
I will not try to imagine a complete example of how you could animate parameters for each object, because it needs more research and case checking, but as far as I know, it always requires some coding skills.
Scroll and gestures
Usually, you have a gesture or scroll listener which triggers some transformations, and those transformations could also be animated. As we understand now, these transformations are reflected in the view model, but there are SO many cases you need to consider.
As an example, imagine reordering icons on your home screen. You pull one so that it perfectly fits another cell, and this new location is processed by the gesture listener, which reorders the icons in the view model to free up space for the moved icon. It sounds perfect, but what is going on while an icon is moving? It should move together with your fingertip, and that means we have to position the icon using absolute values.
Thus the gesture listener should remove the moving icon from the view model as soon as it has been selected by a long tap, and when the icon fits a new place, update the view model again with the very same icon in its new place. In parallel, the gesture listener renders an overlay with the selected icon, which moves together with the user’s finger. And don’t forget, we need this to be ✨animated✨. So I am a bit unsure whether this is going to be as fun and easy as the other levels, but I can’t come up with a better idea for now.
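Here’s a hedged sketch of how that flow might be spelled out in the same imaginary syntax; every name here (IconGrid, onLongPress, onDrag, onDrop, and the view model methods) is hypothetical:
IconGrid(viewmodel.icons)
    .onLongPress { icon in
        // Take the icon out of the grid in the view model
        viewmodel.pickUp(icon)
    }
    .onDrag { position in
        // Render an overlay that follows the finger, positioned absolutely
        Overlay(viewmodel.pickedIcon)
            .position(position)
    }
    .onDrop { cell in
        // Put the very same icon back into the view model in its new place
        viewmodel.place(in: cell)
    }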
Summarizing the advantages
I started with the preconditions, and now I’m going to add more background on why coding at least some parts of your designs (≤ Level 4) might be a good idea. I’ve already pointed out some advantages:
- The prototype is already responsive, and ready to be used in the app
- Git as the version control system
- Omit documentation about edge cases and the location of elements: you type it yourself
- Update the design whenever it is approved; it is separated from the rest of the code
- You implement components in code and keep the structure yourself
- Less time to integrate a feature or update old screens
- More control over your design, unlimited power to implement whatever you need
- In addition, more productive work without a GUI after getting used to code
Hidden advantages
What is Figma? It is a website, or a web app, for sketching and prototyping. When you draw a screen using Figma’s UI, “behind the curtains” it saves your work as code or, simpler, text. Text is lightweight, but when you open the app again, add more visual effects, or start prototyping, Figma renders the saved state and loads it into RAM, which is slow and resource-intensive. This surely makes Figma slower than any code-based solution a priori.
If we work with code, we need two things:
- An app to view, edit and save the code. No limits to lightness, you can do it in the terminal if you wish.
- A tool that builds and runs the prototype. The prototype runs offline on your computer’s hardware. It’s unlikely Figma is going to implement something like this.
So, generally, anything you do in Figma is going to work slower compared to the same thing in code.
The Interface Problem
I have experience coding in VS Code, Notepad, and the terminal, working with JS, Bash, XML, Haskell, and many other, more casual tools. I don’t have a problem switching to a new environment.
But what if you haven’t experienced coding yet? I was developing an online course for designers (surely not alone), and one of my tasks was writing a bunch of lessons about the HTML and CSS helpful for junior designers. You know it: when you open a vacancy and see “would be good to know some HTML and CSS”, that part was mine. And after those lessons came the final project, the largest and most significant assignment of the whole topic.
It was a little rash to think that you can just hand over a project on GitHub and a Figma project, and ask someone to update the CSS and HTML to make it look like the prototype. It turns out you need to know how a lot of things work in IDEs or the terminal. I spent 6 months improving and simplifying the assignment. I was adding more exercises that were supposed to check the theory, or that asked to code something in an already fully prepared environment, but still, the most effective learning measure we discovered for this topic was a Zoom meeting where I simply show:
- how to open the terminal,
- how to write commands in the terminal,
- how to find a file in the project’s structure,
- how to save the project and open it next time,
- how to keep your HTML structure clear so you find a closing tag quicker.
It is shocking to realize that a one-hour reading lesson full of new information and coherent exercises is just a useless mess if you don’t know the basics. So we had to teach these basics too.
I call it the “Interface problem”. You see, there is an incredibly huge number of no-code solutions for those who don’t know programming, in basically any sphere (the ones I encountered were website builders). And I had never understood before how huge the gap between “experienced once” and “never tried” is. It seems non-programmers are afraid of coding. So are designers.
If coding were that easy, there wouldn’t be so many ways to avoid it and solve your problem using a GUI.
Conclusion
I am not inventing anything new; this is my perception, as a designer, of modern frameworks like SwiftUI, Flutter, or Jetpack Compose. I see how Figma increases its functionality, trying to add ever smarter and more effective approaches borrowed from programming, while programming languages and frameworks become easier and more readable. Here I presented the way I think it might work most effectively, even though it may not come true, and there is one really close yet different path:
- An integration of Figma with coding tools that allows you to export your design as a bunch of coded views. This would require Figma to upgrade AutoLayout, and probably a lot of other things…
What I showed above is real in Swift, but not in SwiftUI. The main reason is the lack of separation between logic and views. You can always try to maintain it by hand, but at some point you will have to violate the rule and add some logic right where you describe the view. I haven’t tried other frameworks extensively enough to be sure, but I guess the situation is the same for all of them. The reason is that they are not meant for designers.
My approach supposes some technology purposely developed to be used in tandem with designers. I understand this is a complicated system, and it would need at least a year to be presented as a first draft, and even more time to be able to compete with other design tools.
So what do you think? Does it look like a utopia to you?