Prototype with AI — Finally Done Right

We have spent decades structuring information for humans, perfecting documentation that rewards intuition and judgment. All of it assumes a receiver who can read between the lines and build understanding over time. That assumption no longer holds: today, the primary "user" of your documentation is increasingly an AI.

A Starting Point

I lead product design at an early-stage startup building AI-powered enterprise tools. My work spans product strategy, design systems, and, increasingly, figuring out how AI fits into design workflows at scale.


Over the past year, I've been deep in the intersection of AI and design tooling. Not just using AI to generate interfaces, but studying how it interprets design decisions, where it breaks down, and what it needs to succeed. A pattern kept emerging.

A New Discipline

Something fundamental is shifting in how we build software.


For the first time, the primary consumer of our documentation, our patterns, our accumulated knowledge, isn't human.


AI tools are now reading our component libraries. They're interpreting our guidelines. They're making decisions based on how we've structured information.


And they're failing. Not because the AI is bad, but because we've never learned how to communicate with it effectively.


This isn't a design systems problem. It's bigger than that.

It's about developing a new discipline: structuring human knowledge in ways AI can reason about.

The Gap We've Ignored

We've spent decades perfecting how humans transfer knowledge to other humans:


  • Documentation conventions

  • Teaching methodologies

  • Visual communication

  • Onboarding processes

All of it assumes a human receiver who can:


  • Read between the lines

  • Build intuition over time

  • Ask clarifying questions

  • Learn from context and culture

AI has none of these capabilities.


When we hand AI our existing documentation, we're speaking a language it doesn't fully understand. It catches the words but misses the meaning.


We've been optimizing for human cognition. We need to start optimizing for machine reasoning.

How AI Processes Information

To design for AI, we need to understand how it thinks.

AI doesn't infer → it matches patterns


When a human reads "use this for important messages" they infer what "important" means from context, experience, and judgment.

AI looks for explicit patterns. If you don't define "important" it will guess. And it will often guess wrong.


Principle: Make implicit knowledge explicit.

AI doesn't build intuition → it follows structure


Humans develop intuition through exposure. After seeing enough examples, they "just know" when something fits.

AI needs structured decision paths. It can follow logic trees. It can apply rules. But it can't develop a feel for things.


Principle: Encode judgment as navigable logic.

AI doesn't ask for clarification → it proceeds with uncertainty


When a human is unsure, they ask. They seek context. They verify.

AI generates its best guess and moves forward. Uncertainty produces inconsistency.


Principle: Eliminate ambiguity through explicit boundaries.
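
To make the third principle concrete, here is how the earlier "use this for important messages" guidance might be rewritten with explicit boundaries. This is an illustrative sketch; the specific conditions are hypothetical, not from any real system.

```markdown
<!-- Ambiguous: assumes a human who can judge "important" -->
Use this for important messages.

<!-- Explicit: conditions an AI can pattern-match against -->
Use this when the message blocks task completion, involves potential
data loss, or requires a decision within the current flow.
For everything else, use `Toast`.
```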

A Framework for AI-First Information Architecture

Based on these principles, I've been developing a framework for structuring knowledge that AI can effectively use.

It's built around three layers:

Layer 1: Decision Architecture

The first layer answers: "Given a goal, what approach should I take?"


The INDEX.md file is the entry point for the AI. It replaces a simple list of components with a series of decision trees. Instead of guessing which component to use, the AI follows a logic path based on the user's need.

Need to communicate with the user?
├── Requires their response before continuing? → Blocking pattern
├── Important but not blocking? → Persistent pattern
├── Temporary acknowledgment? → Transient pattern
└── Contextual to specific element? → Anchored pattern

This externalizes judgment. It transforms "you'll know it when you see it" into "follow this path."
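
The same externalized judgment can be sketched in code. This is a minimal, hypothetical helper (the names and conditions are mine, not from the template) showing the core idea: a lookup over explicit conditions that either resolves to a pattern or fails loudly, never guesses.

```python
# Hypothetical decision architecture mirroring the tree above.
# Each goal maps to ordered (condition, pattern) branches.
DECISION_TREE = {
    "communicate_with_user": [
        ("requires_response_before_continuing", "Blocking pattern"),
        ("important_but_not_blocking", "Persistent pattern"),
        ("temporary_acknowledgment", "Transient pattern"),
        ("contextual_to_element", "Anchored pattern"),
    ],
}

def resolve_pattern(goal: str, conditions: set[str]) -> str:
    """Return the first pattern whose condition holds; never guess."""
    for condition, pattern in DECISION_TREE.get(goal, []):
        if condition in conditions:
            return pattern
    raise LookupError(f"No explicit path for goal {goal!r}; extend the tree")

# A toast-style confirmation resolves to the Transient pattern.
print(resolve_pattern("communicate_with_user", {"temporary_acknowledgment"}))
```

The point of the `LookupError` is the same as the principle above: an unmapped case should surface as a gap in the decision architecture, not be papered over by a plausible guess.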

Layer 2: Contextual Depth

The second layer provides complete context for each decision branch.

For every pattern, component, or approach, the documentation answers:


**Mental Model:** How should I conceptualize this? What's the underlying principle?

**Application Criteria:** When is this the right choice? What conditions indicate it?

**Rejection Criteria:** When is this the wrong choice? What should I use instead?

**Relationships:** How does this connect to adjacent concepts? Where are the boundaries?

**Behavioral Specification:** What are the exact parameters? Timing, transitions, states?

Some examples:

## When NOT to Use

**Don't use Modal when:**
- Content is informational only → use `Toast` or `Banner`
- User might need to reference page content → use `Sheet`
- It's a success/completion message → use `Toast`
- Form has many fields (5+) → use dedicated page
- Content is supplementary → use `Sheet` or `Popover`

## Related Components

| Instead of... | Use... | When... |
|---------------|--------|---------|
| Modal | `Sheet` | Content is supplementary, not blocking |
| Modal | `Toast` | Simple feedback, no decision needed |
| Modal | `Popover` | Contextual UI anchored to trigger |
| Modal | Page route | Complex or multi-step task |
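
Pulled together, the five questions above suggest a skeleton for each component document. This is a sketch of that shape, not the actual `component.template.md` from the repository:

```markdown
# Component: <Name>

## Mental Model
One-sentence principle: what this component fundamentally is.

## When to Use
- Explicit conditions that indicate this component

## When NOT to Use
- Explicit conditions → the component to use instead

## Related Components
| Instead of... | Use... | When... |

## Behavioral Specification
Exact parameters: timing, transitions, states.
```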

Layer 3: Explicit Instruction

The third layer tells AI how to use the system itself.

To ensure the AI actually uses these documents, I use specific rule files like `.cursorrules` and `.rules.md`. These files contain a "Primary Directive" that forces the AI to consult `INDEX.md` and `MAINTENANCE.md`:

Before implementing:
1. Consult the decision architecture
2. Navigate to the relevant context document
3. Read application AND rejection criteria
4. Follow specified patterns exactly

Do not infer. Do not assume. Reference explicitly.

AI doesn't spontaneously check documentation. You have to build that behavior into your instructions.

Download from GitHub


ai-docs-template/
├── INDEX.md                  Decision architecture (AI reads first)
├── MAINTENANCE.md            Guide for keeping indexes updated
├── component.template.md     Template for documenting components
├── .cursorrules              Cursor AI instructions
└── .claude/
    └── rules.md              Claude AI instructions

The structure is simple. The thinking behind it is what matters.

Efficiency by Design

Instead of dumping a 500KB documentation site into a prompt, the AI loads a ~5KB INDEX.md, finds the path, and only pulls the ~3KB of component data it needs. It’s leaner, faster, and cheaper.
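
The arithmetic behind that claim is simple. The sizes are the approximations above; the percentage follows directly:

```python
# Rough context-budget comparison using the article's approximate sizes.
FULL_DOCS_BYTES = 500_000   # dumping the entire documentation site
INDEX_BYTES = 5_000         # INDEX.md decision architecture, read first
COMPONENT_BYTES = 3_000     # the one context document actually needed

layered = INDEX_BYTES + COMPONENT_BYTES
reduction = 1 - layered / FULL_DOCS_BYTES

print(f"Loaded per request: {layered:,} bytes vs {FULL_DOCS_BYTES:,}")
print(f"Reduction: {reduction:.1%}")  # 98.4%
```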

This Is Infrastructure Work

I want to be direct about what this is and isn't.


This isn't a quick hack. It's not a prompt trick. It's infrastructure work: developing new foundations for how humans and AI collaborate.

It requires:


  • Rethinking how we document decisions, not just outputs

  • Maintaining living systems that evolve with your work

  • Treating knowledge architecture as a first-class discipline


The teams that invest here will compound that investment over time. Every decision encoded, every boundary made explicit, and every pattern documented accumulates into an AI collaborator that actually reflects your thinking.

What Comes Next

This is early-stage thinking. The patterns are still emerging.


But I'm convinced the direction is right:


We need to develop fluency in communicating with AI, not through prompts alone but through how we structure the knowledge AI consumes.


We need to treat information architecture as a core competency: not a documentation task, but a strategic capability.


We need to build for a world where AI is a collaborator, which means giving it the context and structure to collaborate effectively.


The teams that figure this out won't just be faster. They'll be able to scale their thinking, their judgment, their standards, and their taste in ways that weren't possible before.


That's the opportunity.

This is an evolving framework. If you're exploring similar territory, I'd like to hear what you're learning.