White Paper / Design Infrastructure

How to Train Your Agent

What How to Train Your Dragon taught me about why Claude + Figma MCP is a dead end, and what AI agents actually need to compose, not just assemble.


The Saddle Problem

In How to Train Your Dragon, Vikings spent generations trying to kill dragons. Then one kid tried something different. But here's the part everyone misses: Hiccup didn't succeed by giving Toothless a saddle. He succeeded by building a prosthetic tail fin, an interface designed around how the dragon flies, not how the rider rides.

A saddle is a human abstraction projected onto an animal. A tail fin is an extension of the animal's own anatomy.

That difference is everything.

The Metaphor
Don't strap a saddle on a dragon.
A saddle is a human interface. A prosthetic tail fin is infrastructure designed for the dragon's own anatomy. Design systems are saddles. AI agents need tail fins.

The Beautiful Artifact That Fails Beautifully

Design systems are the greatest achievement of the last decade of interface work. Figma made them gorgeous. Tokens, components, variants, auto-layout. A masterwork of human composability.

And that's precisely the problem.

A design system is a saddle. It was designed for the rider.

When you connect Claude to Figma via MCP, you're strapping a saddle onto a dragon. The dragon can read every pixel, parse every variant, traverse every component tree. It sees everything.

It understands nothing.

Not because it's dumb. Because the system was never designed for how it thinks.


The Quadrant Nobody Talks About

Think of interface tooling on two axes: who the compositor is, and how much freedom they have:

Interface Tooling Landscape

|  | Highly Templatised | Highly Composable |
| --- | --- | --- |
| Human | Page Builders (Wix, Webflow, Squarespace) | Design Systems (Figma, Storybook, Tokens Studio) |
| Agent | AI Builders (v0, Bolt, Lovable) | Expression Infrastructure (LESS Studio) |

The top-right quadrant is a triumph. Design systems give human designers extraordinary compositional power. A skilled designer holds dozens of constraints in mind simultaneously, including brand tension, hierarchy, rhythm, density, motion, and mood, and resolves them intuitively.

The bottom-left quadrant is where everyone is building. Template the output. Narrow the choices. Give the agent a paint-by-numbers kit and ship it.

The bottom-right quadrant was empty.

Because nobody had figured out how to give an agent compositional power. Not template-filling. Actual composition, the kind that requires holding brand, structure, and expression in mind simultaneously and resolving them into something coherent.

Composability for agents requires something design systems were never asked to provide: the decisioning layer underneath.


What a Designer Knows That an Agent Doesn't

When a designer opens a component library and composes a settings page, they aren't just placing components. They're making hundreds of micro-decisions per minute:

The Invisible Decisions
This card needs more breathing room because the content is dense.
The hierarchy here should be flatter. This isn't a marketing page.
These two sections should feel related but distinct.
The tone should be calm and functional, not expressive.

These decisions aren't in the Figma file. They aren't in the design tokens. They aren't in the component API. They live in the designer's accumulated taste, their internalised model of the brand, their sense of what this particular composition is trying to say.

A design system gives the designer the vocabulary. The designer provides the judgment.

An agent reading the same design system gets the vocabulary and nothing else. It's like handing someone a dictionary and asking them to write poetry.


MCP Doesn't Fix This

MCP is a wonderful protocol. We use it ourselves. It does exactly what it says: it gives a model structured access to tools and data.

But structured access to a design system is not the same as understanding a design system. You can give Claude read access to every token, every component spec, every variant in your Figma file. The model will faithfully use them. And the output will be competent, consistent, and completely soulless.

Because the system Claude is reading was authored for a human compositor. The richness is in the human, not the file.
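To make the gap concrete, here is a purely illustrative sketch of what an agent sees when it reads design tokens over a connection like MCP. Every name and value below is invented for illustration; no real Figma or MCP payload looks exactly like this.

```python
# Hypothetical slice of a design system as an agent receives it:
# a flat vocabulary of names and values. All entries are invented.
tokens = {
    "color.surface.card": "#FFFFFF",
    "spacing.md": "16px",
    "spacing.lg": "24px",
    "typography.heading.size": "20px",
}

# The agent can answer vocabulary questions perfectly...
assert tokens["spacing.lg"] == "24px"

# ...but judgment questions have no answer in the data. Nothing here says
# whether a dense settings card wants spacing.md or spacing.lg; that
# decision lived in the designer's head, never in the file.
judgment_call = tokens.get("spacing.for_dense_card")  # → None: not encoded
print(judgment_call)
```

The lookup succeeds; the decision doesn't. That is vocabulary without judgment.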

The Core Insight

The file is the saddle. The human is the rider. And you just handed the file to a dragon.


The Tail Fin

What if, instead of projecting human design systems onto agents, you built expression infrastructure for agents?

Not templates. Not narrowed choices. Not "here are your 12 approved layouts."

The actual infrastructure of visual decisioning:

What Expression Infrastructure Encodes

How does this brand feel? Brand identity captured in forms machines can reason over: not adjectives on a mood board, but something an agent can actually hold in mind while it builds.

What should this page actually look like? Composition knowledge earned from observation, not handed down from a template gallery. Agents build because they know what works, not because someone showed them a screenshot.

Which constraints hold and which flex? Design judgment that works with how models think, not against it. Not rigid rules, but a living negotiation between competing priorities.
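One way to picture a "living negotiation between competing priorities" is soft-constraint resolution rather than rule lookup. The toy sketch below makes that idea concrete; the candidate values, priorities, and weights are all invented for illustration and describe no particular product's implementation.

```python
# Toy sketch: design priorities as weighted soft constraints.
# No single rule wins; the agent scores options against all of them.
def resolve(candidates, constraints):
    """Pick the candidate that best satisfies the weighted constraints."""
    def score(option):
        return sum(weight * fit(option) for fit, weight in constraints)
    return max(candidates, key=score)

# Candidate spacing values (px) an agent might choose between.
candidates = [8, 16, 24, 32]

# Competing priorities: dense content pulls toward tighter spacing,
# while a calm brand tone pulls toward more breathing room.
constraints = [
    (lambda s: 1 - abs(s - 12) / 24, 0.6),  # content density prefers ~12px
    (lambda s: 1 - abs(s - 28) / 24, 0.4),  # brand calmness prefers ~28px
]

print(resolve(candidates, constraints))  # → 16: neither rule wins outright
```

The point of the sketch is the shape of the decision, not the numbers: the answer is a negotiated compromise, which is exactly what a static token file cannot express.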

The Quiet Part

The industry is having the wrong conversation. "How do we get AI to use our design system?" is the wrong question. It assumes the design system is the right interface. It isn't. It's a human interface.

The right question is: What does a design system look like when the compositor is a machine?

That's a much harder question. It requires understanding what agents actually need to make good decisions: not more data, not more tokens, but the right kind of judgment. How to encode taste in a form that isn't human intuition.

Nobody was asking this question because it sounded impossible.

We've been working on it for a while.


The Saddle vs. The Tail Fin

| Design System + MCP (Saddle) | Expression Infrastructure (Tail Fin) |
| --- | --- |
| Gives agents vocabulary | Gives agents judgment |
| Tokens, components, variants | Judgment, taste, intent |
| Human-authored composability | Agent-native composability |
| Competent, consistent, soulless | Expressive, coherent, intentional |
| Agents execute | Agents compose |

Google Stitch Validates the Problem. It Doesn't Solve It.

In March 2026, Google shipped Stitch's biggest update: Vibe Design, design.md, and a Design Agent with MCP Server. These are powerful additions that validate everything this essay argues.

Google's design.md is a markdown file that captures five sections of design rules — colors, typography, spacing, components, layout — in natural language with hex values. It's portable, it's agent-readable, and it's a step forward from copying screenshots into prompts.

But it's still a saddle.

design.md describes a design system in prose. It doesn't encode judgment. It doesn't enforce contracts. It doesn't serve tokens at runtime. An agent reading design.md gets better vocabulary than an agent reading a Figma file — but it still doesn't get the compositional power underneath.
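For concreteness, a file in the shape Google describes (five sections of rules in natural language with hex values) might look something like the fragment below. This is a made-up illustration, not Google's actual output format.

```markdown
## Colors
Primary is a deep teal (#0F6E6B); use it for actions, never large fills.

## Typography
Headings are Inter SemiBold; body copy stays at 16px.

## Spacing
Base unit is 8px; major sections breathe at 48px.

## Components
Buttons are pill-shaped with 16px horizontal padding.

## Layout
Single column under 768px; 12-column grid above.
```

Useful vocabulary, and genuinely portable. But notice what's absent: nothing tells an agent when to break the grid, which rule bends for a dense screen, or why.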

Three MCP Approaches, Same Fundamental Question

| Figma MCP | Stitch MCP | designless MCP |
| --- | --- | --- |
| Reads an existing human design system | Generates screens and exports design.md | Resolves expression infrastructure at runtime |
| Vocabulary without judgment | Better vocabulary, still no judgment | Judgment encoded as enforceable contracts |
| Static access to tokens | Static file (design.md) | Live API with bidirectional resolution |
| Human design system projected onto agents | AI-generated design system as documentation | Agent-native expression infrastructure |

The pattern is the same. Figma MCP gives agents a human design system. Stitch MCP gives agents an AI-generated design system. Both are still saddles. The question remains: what does the tail fin look like?

See the full Google Stitch vs LESS comparison →


The quadrant is no longer empty.

Expression infrastructure for the agentic age.
A design system that speaks machine.

Frequently Asked Questions
Why doesn't Claude + Figma MCP work for AI-driven design?
Figma's MCP server gives Claude structured access to tokens, components, and variants. But design systems encode vocabulary, not the decisioning layer underneath. The compositional judgment, including brand tension, hierarchy, rhythm, and mood, lives in the human designer, not the file. MCP provides access; it can't provide understanding.
What is expression infrastructure?
Expression infrastructure is the missing layer between a design system and an AI agent that actually composes. It gives agents the judgment layer that designers carry in their heads: the taste, the instinct, the "why this and not that." All in a form machines can actually use. Enabling agents to express, not just execute.
What does LESS stand for?
Layered Expression Style Standard. It's a design system that speaks machine. Instead of encoding brand rules for human interpreters, LESS encodes them for computational compositors, giving AI agents the infrastructure to make taste-level decisions at machine speed.
Is this anti-Figma or anti-MCP?
Neither. Design systems are a human triumph. Figma is extraordinary for human designers. MCP is a wonderful protocol; we use it ourselves. The argument is that design systems are the wrong interface for AI agents, not that they're wrong for humans. Different compositors need different foundations.
Can't AI agents learn to use design systems better over time?
Agents can get better at template-filling and pattern-matching within design systems. But the fundamental gap isn't intelligence. It's architecture. Design systems don't encode the judgment, the taste, the "why this and not that." No amount of model improvement fixes a missing infrastructure layer.
Design systems are a human triumph.
They should stay that way.
Agent expression needs its own foundation.
LESS Studio. Expression infrastructure for the agentic age.
