Teams are underestimating product poetry. Users will not.

AI is forcing teams to optimize designers for speed. The winners will be the ones who keep the poetry.

My posts are data-enforced positive shelters for designers. Follow for daily positivity pills (lines are handcrafted, no AI copy-pasta).

My recent conversations with product leaders and hiring managers alarmed me.

"We want the thinkers."
"The decision-makers."
"We need the ones with taste."

They can't ship every iteration LLMs push out. They need curators.

This was reinforced by the Figma 2026 State of the Designer report: craft.

This comes on top of years of suppressing the very philosophy and skills needed to develop that craft.

I call it "the design eye." And I wrote about it recently:

"Design is a cognitive layer that gets added to your 'think, see, and feel' system. If you practice, it trains your nervous system. It changes the way you see and perceive. It becomes you. It has to."

Digital designers largely became pixel-pushers over the last decade. Taste and philosophy could not have been worth less.

Until now, it seems.

Product teams are in trouble now. They want the curators. But they've killed the curators, and the curation process, through years of prioritization-oppression.

They are still killing it daily.

---> 54% of designers say PMs want AI features without any clear use case.

Citing emotion as evidence for such use cases became "stupid" and "slow."

This is getting everyone confused. Teams value the curation, but not the heuristics supporting it.

The problem is: curation without heuristics is guessing.

So, should we all just... guess?

Designers used to be the last line of defense against catastrophes.

One of the few classes, inside the matrix, that still dares to say:

"I don't know, I need research."

While others hip-shoot answers full time, designers craft good questions.

That's what we are good at: We don't answer our own questions.

We can only see it if we don't know.

LLMs rely largely on text. Per the oft-cited 7% rule, only a sliver of emotional meaning is carried by words alone. We have a problem here.

My recent tests with two different AI-moderated user-interview platforms showed very low to zero capacity to read micro-expressions or emotional cues in voice intonation.

Friends are cheating AI job interviews by mimicking robotic intonations and "force-acting" emotions. People are literally pretending not to be human to get through the AI firewall.

That.

Either these bots are going to become wildly great at reading emotion, real fast, or company cultures and PMFs are about to hit peak FAFO.

This is the "all-out / all-in" phase of the cycle.

To sum up our life today:

* (Onboard) Be a robot
* (DiscoverED) Never question untested use cases
* (Execute) AI -> AI -> AI
* (Refine) Curate UIs for these use cases as if you are a genius

-

The pill?

We are the ONLY true human readers in this game. That's leverage. Use it to carve out space. Weaponize the thinking.