When the robots come for the creative jobs

What does it mean to have gen AI in Figma?

At Config '24, Figma announced the introduction of gen AI features. The announcement—and the resultant backlash—illustrates a lot about what we think AI can and should be able to do.

In case you didn't hear, Dylan Field, founder & CEO of Figma, announced that generative AI would be coming to Figma. Here's Jordan Singer and Dan Mejia discussing this feature in depth:

Designers are scared. They needn't be.

And yeah, everyone kinda lost their minds. In person at the event and across the internet, worry abounded that this meant the end of designers. This, combined with the AI's unfortunate propensity to create clones of existing design work, led to Figma pulling the feature for the time being (ostensibly just to iron out the kinks, not because of the existential dread thing).

So, how should designers think about this radical transformation ahead of us?

Our rigidity is a direct response to our insecurity
— Eric Talbert

What does it mean if robots can be creative? What if they're better at it than us?

In a later talk, Reginé Gilbert, an educator, said "while [AI] can brew the perfect cup, it can't replicate the barista's flair for creativity".

But can't it, though? Isn't that what gen AI is supposed to be good at (and getting better by the day at)? Being creative?

I think there are a few domains people like to believe they hold supremacy in. Creativity is certainly one. Emotion is another. Both are thought to be things that, no matter how much the robots improve, they'll never be able to do. Whether that's true is debatable (for what it's worth, I think nobody has ever gotten ahead by declaring a technology impossible), but more relevant to us today is why we seem to need so strongly to say it.

What does it say about us that we need to defend a stronghold of human-ness? And that it is these subjects that are the hills we choose to die on?

Later, in the Leadership Collective group, I got a chance to chat with her one-on-one about her views on AI. I love Reginé (really—I think she's awesome), but on this subject she falls prey, in my opinion, to two assumptions that consistently recur in conversations about gen AI:

  1. That our self-worth is a function of our ability to be productive, and that if AI is better than us at something we've inherently lost a part of that worth
  2. That AI is fundamentally and insurmountably limited in the kinds of things it can do and be

She mentioned this quote from Eric Talbert (who was in the audience): “Our rigidity is a direct response to our insecurity.” Indeed. When we, as humans, are insecure, we reach for the limitations of technology. Where we can't find them, we deride it as less important, and we compete with it in a zero-sum, capitalist economy in which our ability to do things is the foundation of our self-esteem.

We know AI has challenged us to reimagine how we design the world: what does it mean when supply—the means of production, the ability to create things—is infinitely expanded? We're entering a time of unbounded plenty, one where anybody can create anything. What value do things have when anybody can easily make anything?

But what we're less able to come to terms with is the way AI is challenging what makes us human. We seem to care a lot about holding on to that. Are we even sure it's a good thing? I, for one, welcome our robot overlords. No, seriously. Robots haven't killed a single person yet. In that environment, is being human so great after all?

We've already gone through growing pains like this before

The modern era is littered with seeming invasions of the human by the mechanical. Once upon a time, you had to do math yourself; now computers do it trillions of times faster. To depict something, you had to paint or draw, which is hard; then easy point-and-shoot cameras came along. Garry Kasparov was beaten by Deep Blue, and AlphaGo beat everybody.

While there's been grumbling each time, each of these felt inevitable, understandable, and therefore, acceptable. Of course the "thinking machine" can do math faster than me—it's so logical! When people got worried, it was often the olds who didn't understand how technology works.

But our march toward destiny has finally reached the truly uncanny valley. And here there be monsters. It's different this time. It's somehow personal. And everyone—young and old—is concerned.

Updated continuously — Latest commit on
3.11.25