How to Train Your Agent
What How to Train Your Dragon taught me about why Claude + Figma MCP is a dead end, and what AI agents actually need in order to compose, not just assemble.
The Saddle Problem
In How to Train Your Dragon, Vikings spent generations trying to kill dragons. Then one kid tried something different. But here's the part everyone misses: Hiccup didn't succeed by giving Toothless a saddle. He succeeded by building a prosthetic tail fin, an interface designed around how the dragon flies, not how the rider rides.
A saddle is a human abstraction projected onto an animal. A tail fin is an extension of the animal's own anatomy.
That difference is everything.
The Beautiful Artifact That Fails Beautifully
Design systems are the greatest achievement of the last decade of interface work. Figma made them gorgeous. Tokens, components, variants, auto-layout. A masterwork of human composability.
And that's precisely the problem.
A design system is a saddle. It was designed for the rider.
When you connect Claude to Figma via MCP, you're strapping a saddle onto a dragon. The dragon can read every pixel, parse every variant, traverse every component tree. It sees everything.
It understands nothing.
Not because it's dumb. Because the system was never designed for how it thinks.
The Quadrant Nobody Talks About
Think of interface tooling on two axes: who the compositor is (human or agent), and how much compositional freedom they have (narrow or open).
The top-right quadrant is a triumph. Design systems give human designers extraordinary compositional power. A skilled designer holds dozens of constraints in mind simultaneously, including brand tension, hierarchy, rhythm, density, motion, and mood, and resolves them intuitively.
The bottom-left quadrant is where everyone is building. Template the output. Narrow the choices. Give the agent a paint-by-numbers kit and ship it.
The bottom-right quadrant was empty.
Because nobody had figured out how to give an agent compositional power. Not template-filling. Actual composition, the kind that requires holding brand, structure, and expression in mind simultaneously and resolving them into something coherent.
Composability for agents requires something design systems were never asked to provide: the decisioning layer underneath.
What a Designer Knows That an Agent Doesn't
When a designer opens a component library and composes a settings page, they aren't just placing components. They're making hundreds of micro-decisions per minute: when to break the grid, how much whitespace reads as calm rather than empty, which variant carries the right emphasis for this screen.
These decisions aren't in the Figma file. They aren't in the design tokens. They aren't in the component API. They live in the designer's accumulated taste, their internalised model of the brand, their sense of what this particular composition is trying to say.
A design system gives the designer the vocabulary. The designer provides the judgement.
An agent reading the same design system gets the vocabulary and nothing else. It's like handing someone a dictionary and asking them to write poetry.
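The gap can be made concrete with a sketch. Design tokens are the vocabulary a system exports today; a judgment rule, the kind a designer applies without thinking, has to be encoded separately. Everything below is illustrative, with hypothetical names, not any real system's API:

```typescript
// Vocabulary: what a design system typically exports.
const tokens = {
  spacing: { sm: 8, md: 16, lg: 32 },                // px
  color: { accent: "#6C5CE7", muted: "#B2BEC3" },
};

// Judgment: what lives in the designer's head and never ships.
// A made-up rule: dense data views get tight spacing and a muted
// palette; marketing surfaces get room to breathe.
type Intent = "dense-data" | "marketing";

function spacingFor(intent: Intent): number {
  return intent === "dense-data" ? tokens.spacing.sm : tokens.spacing.lg;
}

function accentFor(intent: Intent): string {
  return intent === "dense-data" ? tokens.color.muted : tokens.color.accent;
}

// An agent holding only `tokens` can pick any value.
// Only the rules above let it pick the right one.
console.log(spacingFor("dense-data"), accentFor("marketing"));
```

The point of the sketch: the dictionary (`tokens`) is machine-readable today; the poetry (`spacingFor`, `accentFor`) is not written down anywhere.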
MCP Doesn't Fix This
MCP is a wonderful protocol. We use it ourselves. It does exactly what it says: it gives a model structured access to tools and data.
But structured access to a design system is not the same as understanding a design system. You can give Claude read access to every token, every component spec, every variant in your Figma file. The model will faithfully use them. And the output will be competent, consistent, and completely soulless.
Because the system Claude is reading was authored for a human compositor. The richness is in the human, not the file.
The file is the saddle. The human is the rider. And you just handed the file to a dragon.
The Tail Fin
What if, instead of projecting human design systems onto agents, you built expression infrastructure for agents?
Not templates. Not narrowed choices. Not "here are your 12 approved layouts."
The actual infrastructure of visual decisioning: judgment encoded as machine-checkable rules, contracts enforced at composition time, tokens served at runtime rather than read out of a file.
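One way to picture what "expression infrastructure" could mean: a composition contract the agent checks at compose time, instead of a prose style guide it reads once. This is a speculative sketch with invented names, not a description of any shipping system:

```typescript
// A hypothetical composition contract: a fragment of judgment made
// machine-checkable, so an agent gets feedback before it ships.
interface Composition {
  headingLevel: number;                 // 1 = page title
  spacingPx: number;
  paletteRole: "accent" | "muted";
}

type Rule = (c: Composition) => string | null;  // null = rule passes

const rules: Rule[] = [
  (c) => (c.headingLevel >= 1 && c.headingLevel <= 3)
    ? null : "heading out of hierarchy",
  (c) => [8, 16, 32].includes(c.spacingPx)
    ? null : "spacing off the scale",
  (c) => (c.headingLevel === 1 && c.paletteRole !== "accent")
    ? "page title must use accent" : null,
];

// Collect every violation so the agent can repair, not guess.
function check(c: Composition): string[] {
  return rules.map((r) => r(c)).filter((v): v is string => v !== null);
}

// Prints the violations for a flawed composition.
console.log(check({ headingLevel: 1, spacingPx: 12, paletteRole: "muted" }));
```

The design choice worth noting: the rules return reasons, not booleans, because an agent needs something to act on, not just a rejection.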
The Quiet Part
The industry is having the wrong conversation. "How do we get AI to use our design system?" is the wrong question. It assumes the design system is the right interface. It isn't. It's a human interface.
The right question is: What does a design system look like when the compositor is a machine?
That's a much harder question. It requires understanding what agents actually need to make good decisions: not more data, not more tokens, but the right kind of judgment. How to encode taste, actual taste, in a form that doesn't depend on human intuition.
Nobody was asking this question because it sounded impossible.
We've been working on it for a while.
Google Stitch Validates the Problem. It Doesn't Solve It.
In March 2026, Google shipped Stitch's biggest update: Vibe Design, design.md, and a Design Agent with MCP Server. These are powerful additions that validate everything this essay argues.
Google's design.md is a markdown file that captures five sections of design rules — colors, typography, spacing, components, layout — in natural language with hex values. It's portable, it's agent-readable, and it's a step forward from copying screenshots into prompts.
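Based on the five sections named above, a design.md might look roughly like this. The headings and wording below are guesses for illustration, not Google's published format:

```markdown
# design.md (illustrative sketch, not the official format)

## Colors
Primary is a deep indigo (#4F46E5); backgrounds use near-white (#F9FAFB).

## Typography
Inter for all UI text; headings semibold, body regular at 16px.

## Spacing
Base unit is 8px; major sections are separated by 32px.

## Components
Buttons are pill-shaped with 12px vertical padding.

## Layout
Single column on mobile; 12-column grid above 1024px.
```

Prose like this improves the vocabulary an agent receives. It is still description, not enforcement.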
But it's still a saddle.
design.md describes a design system in prose. It doesn't encode judgment. It doesn't enforce contracts. It doesn't serve tokens at runtime. An agent reading design.md gets better vocabulary than an agent reading a Figma file — but it still doesn't get the compositional power underneath.
The pattern is the same. Figma MCP gives agents a human design system. Stitch MCP gives agents an AI-generated design system. Both are still saddles. The question remains: what does the tail fin look like?
The quadrant is no longer empty.
A design system that speaks machine.
Install MCP Server →