Designing for Agentic Experience: Why AX is the New UX
By Siavoush Mohammadi

The Lunch That Sparked This Post
A few weeks ago, I was having lunch with an engineering manager and a COO. We were talking about the usual things - team dynamics, technical debt, the food, hiring challenges - when the conversation drifted to how I've been working with AI tools. I started describing my workflow casually: round-table discussions between specialized agents, documentation written specifically for AI consumption, automation of repetitive tasks with multiple agents, retrospectives where I refine prompts and workflows based on what went wrong.
They were intrigued. Really intrigued. The engineering manager put down his fork and started asking follow-up questions. The COO leaned in. "Wait, go back to the round-table thing. How does that actually work?"
As I tried to explain, I realized something uncomfortable: I'd never actually written any of this down. The methodology had evolved through months of trial and error, but it existed entirely in my head.
That conversation stuck with me. Their genuine curiosity made me realize this might be worth sharing. Not because I've discovered something nobody else knows - plenty of people are figuring this out in their own ways. But because writing it down might help others skip some of my trial-and-error.
So this post is my attempt to articulate what I've learned about working with AI agents effectively. What follows includes: why typical AI interactions hit a ceiling, what I mean by "Agentic Experience," a practical five-step methodology for building effective AI workflows, and some philosophy around progressive trust.
The Sticky-Note Problem
Here's what most AI interactions look like in practice:
You have a task. You ask the AI a question. It gives you an answer. You ask another question. It answers again. Back and forth, query by query, until either the task is done or you give up in frustration because the AI keeps missing something obvious.
This is like having a brilliant consultant you only communicate with via sticky notes. You're wasting most of their potential.
You've hired someone with vast knowledge and genuine capability. Instead of sitting them down, explaining the context, walking through your goals, and having a real conversation, you slip notes under their door. "What's the best way to structure this?" Note slipped back. "Okay, now what about validation?" Another note. No shared context. No understanding of your broader goals. No knowledge of your constraints, preferences, or standards. A real person would struggle to be useful under these conditions too.
This pattern emerges because we're treating AI as a search engine with personality. We give it fragments without context, ask questions in isolation, and expect it to guess our standards and preferences correctly. When it doesn't - when the code doesn't match our style guide, when the documentation misses our audience, when the recommendation ignores a constraint we never mentioned - we blame the AI.
But the underlying problem isn't the AI's capability. It's a design problem. Our systems and processes were built for humans. We rely on shared context that never gets made explicit. We assume tribal knowledge. We expect intuition about "how we do things here." An AI has none of that unless we give it explicitly.
The good news: this is solvable. The frustrating news: it requires actual work upfront. You need to design experiences specifically for agents - not just humans.
AX is the New UX
We've spent decades perfecting User Experience. We've learned that good UX isn't about pretty interfaces; it's about understanding how humans think, what they need, and removing friction from their path.
Now we need to do the same for AI agents. We need Agentic Experience - systems designed for AI to understand, navigate, and operate within effectively.
What makes a system "agent-friendly"? Three things stand out:
Clear semantic definitions. Humans can work with ambiguous terminology because we intuit from context. Agents can't. When your documentation says "the service handles user data," an agent doesn't know if that means authentication, profile information, or behavioral tracking.
Explicit rules and boundaries. We often leave rules implicit because humans "just know" them. An agent needs those rules stated directly. What's allowed? What's forbidden? What requires approval?
Documented processes, not tribal knowledge. This one hurts. So much of how we work lives in people's heads. "Oh, you need to run that by Sarah first." "We always format dates that way because of the legacy system." Making this explicit isn't just good for agents - it's good for your organization.
Here's the interesting connection: these qualities that make systems agent-friendly also align with good architectural principles. When you declare what you want rather than prescribing how to do it, both humans and agents can navigate the system more effectively.
The competitive advantage isn't that AI is magic. It's that developers who figure out how to make their systems agent-friendly will extract dramatically more value from AI tools.
So how do you actually build this? Here's the methodology I've developed.
The Five-Step Methodology
Let me walk through the process I've settled on after months of experimentation. I'll ground it in a concrete example: setting up an agent to enforce naming conventions across a codebase. (If you want the full treatment of that particular problem, I wrote about it separately in Stop Being the Naming Convention Police.)
Start by thinking manually. Before you involve AI at all, think through how you would do the task yourself. Not in vague terms - actually walk through it. For naming conventions, that meant asking: What files do I check? What patterns am I looking for? How do I decide something violates the convention versus being an acceptable edge case?
This step seems obvious, but it's where most people skip ahead too quickly. We're eager to see AI do something impressive, so we jump straight to "hey AI, help me with this" without having really thought through what "this" entails. If you can't articulate the task precisely for yourself, you certainly can't articulate it for an agent.
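To make the "think manually" step concrete, here's a minimal sketch of what writing out a naming-convention check might produce. The snake_case rule and the exception list are illustrative assumptions, not the actual conventions from my codebase - the point is that every decision (pattern, edge cases, what counts as a violation) is stated explicitly rather than left in your head:

```python
import re

# Assumed convention for this sketch: Python module files must be snake_case.
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*\.py$")

# Edge cases decided during the manual walk-through: these are allowed
# even though they don't match the pattern's spirit of "our" names.
ALLOWED_EXCEPTIONS = {"__init__.py", "__main__.py", "conftest.py"}

def check_filename(name: str) -> bool:
    """Return True if the filename follows the convention."""
    if name in ALLOWED_EXCEPTIONS:
        return True
    return SNAKE_CASE.fullmatch(name) is not None
```

Writing even a toy version like this forces the questions the walk-through later depends on: is the check case-sensitive, what are the exceptions, and is a miss an error or a suggestion.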
Translate to an explicit process. Once you've thought it through, write it out as actual steps. Then feed this to the AI and have a real conversation about it. "Here's what I'm trying to accomplish. Here are the steps I think are involved. What am I missing?"
This conversation is genuinely valuable. The AI often spots gaps in your thinking - assumptions you made implicitly, edge cases you didn't consider. With naming conventions, the AI immediately asked about case sensitivity, about whether the rules applied to test files differently, about what counted as a "boundary" between word segments.
Walk through one example together. This is critical: go slow. Pick a single concrete example and work through it step by step. The AI explains what it's about to do before doing it. You confirm or correct each step. When it inevitably misunderstands something - and it will - your corrections here shape everything that follows.
I cannot overstate how important this slow walk-through is. The temptation is to jump ahead once things seem to be working, but premature speed creates compounding errors. With the naming convention example, I discovered during the walk-through that the AI had a different mental model of what "violation" meant - it was flagging suggestions, while I wanted hard errors. Catching that misalignment early prevented a lot of wasted effort.
Document for the agent. Once you've completed an end-to-end example successfully, ask the AI to describe the process "in its own words, suitable for another LLM to follow." This becomes your agent's "way of working" document.
The phrasing matters. When you ask the AI to explain for another AI, you get documentation that captures both the explicit steps and the implicit context that emerged during your walk-through.
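As a sketch, the handoff prompt might look something like this. The wording is my paraphrase for illustration, not a quoted prompt - adapt it to your own tooling:

```python
# Hypothetical step-four re-prompt, sent after one successful
# end-to-end example. The exact wording is an assumption.
HANDOFF_PROMPT = (
    "We just completed one full example of this process together. "
    "Describe the process in your own words, suitable for another LLM "
    "to follow, including any conventions, corrections, and edge cases "
    "we agreed on along the way."
)
```

The output of this prompt becomes the "way of working" document you save and reuse in future sessions.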
Run retrospectives and improve. After each real use of the process, hold a brief retrospective. What went wrong? What assumptions turned out to be incorrect? Update the process document. Refine the prompts.
This is where the methodology pays compound interest. Each retrospective makes the process more robust. I keep a simple log: what worked, what didn't, what to change. That log becomes invaluable when you're on your fifth iteration and you can't remember why you added that particular constraint three weeks ago.
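The log doesn't need to be sophisticated. A minimal sketch, assuming an append-only JSON Lines file with the three fields mentioned above (my field names, not a prescribed schema):

```python
import json
from datetime import date

def log_retro(path: str, worked: str, didnt: str, change: str) -> None:
    """Append one retrospective entry as a JSON line."""
    entry = {
        "date": date.today().isoformat(),
        "worked": worked,
        "didnt": didnt,
        "change": change,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

Grepping this file three weeks later answers "why did we add that constraint?" in seconds.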
Eventually, specialize and delegate. Once a process has stabilized through several iterations, you can create a specialized agent with its own focused context. It works more independently, only coming back to you when it hits something outside its defined boundaries.
Notice how each step builds on the previous one. You can't document for the agent until you've walked through an example. You can't walk through an example until you've translated your thinking into explicit steps. The methodology is sequential because each step creates the foundation for the next.
Progressive Trust, Not Blind Delegation
A common mistake I see is trying to hand AI complex, judgment-heavy tasks on day one. "Here's our codebase, refactor it to follow clean architecture principles." That's not a task you'd give a new team member on their first day.
The methodology I've described builds trust progressively. You start with tight oversight and frequent corrections. You graduate to documented processes with spot checks. Eventually you reach a point where specialized agents work independently and you review outcomes rather than steps.
This mirrors how you'd work with any new team member. You wouldn't hand them the keys on day one, but you also wouldn't micromanage them forever.
One technique that's become particularly valuable for complex problems: round-table discussions. When I'm wrestling with a decision that has multiple valid perspectives, I'll spin up specialized agents with different viewpoints - whatever perspectives matter for the problem. I present the problem and let them discuss it while I moderate.
For technical decisions, that might be architecture, performance, and maintainability agents. For business decisions, it could be marketing, growth, and competitive analysis. Different domains, same technique.
This sounds fancier than it is - it's really just structured prompting. But it requires an environment where you can store documentation, reference it across sessions, and spin up specialized agents. AI-enabled IDEs (Cursor, Windsurf), agentic CLI tools (Claude Code, Gemini CLI), or similar. Standard chat interfaces don't support this workflow.
The format surfaces considerations I might have missed. The architect agent raises concerns the performance agent dismissed too quickly. The maintainability agent points out long-term complexity. I'm not outsourcing my judgment, but I'm getting multiple perspectives that help inform it.
The round-table approach works because it externalizes the internal debate you'd have anyway. Instead of holding three perspectives in your head and trying to weigh them simultaneously, you can watch the perspectives interact and see where they agree, where they conflict, and where one reveals a blind spot in another.
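Since the round-table is just structured prompting, the loop itself is small. Here's a minimal sketch, with `ask_agent` standing in for whatever LLM call your tooling exposes (it's a hypothetical placeholder, not a real API), and example personas that you'd swap for whatever perspectives your problem needs:

```python
# Illustrative personas - substitute whatever perspectives matter.
PERSPECTIVES = {
    "architect": "You care about boundaries, coupling, and long-term structure.",
    "performance": "You care about latency, throughput, and resource cost.",
    "maintainer": "You care about readability and operational simplicity.",
}

def round_table(problem: str, ask_agent, rounds: int = 2) -> list[tuple[str, str]]:
    """Let each perspective respond in turn, seeing the discussion so far."""
    transcript: list[tuple[str, str]] = []
    for _ in range(rounds):
        for role, persona in PERSPECTIVES.items():
            context = "\n".join(f"{r}: {t}" for r, t in transcript)
            reply = ask_agent(persona, f"{problem}\n\nDiscussion so far:\n{context}")
            transcript.append((role, reply))
    return transcript
```

You moderate by reading the transcript between rounds, steering with follow-up prompts, and making the final call yourself.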
The goal throughout is not to replace human judgment. It's to free you up to focus on the judgment calls while execution gets delegated. You shouldn't spend hours manually checking naming conventions when an agent can do the mechanical verification and surface only the ambiguous cases for your decision.
Getting Started Tomorrow
If this resonates, here's how I'd suggest starting: pick one repetitive task you do regularly. Something mundane - documentation formatting, boilerplate generation, code review checklist verification. Not your most complex or judgment-heavy work.
Why start mundane? Because the methodology itself is what you're learning. You want to practice the five steps on something where the stakes are low. If your first attempt is "refactor our entire authentication system," you're learning two things simultaneously: the methodology and the domain complexity. Start simple, learn the process, then gradually apply it to more complex tasks.
Even just the first three steps are worth applying: think through the task manually, translate it into explicit steps, and walk through one example with the AI. You don't have to build a production-ready workflow on day one.
What you'll discover is that the methodology forces clarity that has value beyond AI collaboration. The process of making your expectations explicit, documenting your standards, writing down the tribal knowledge - this helps your human team members too.
Once you've experienced one process working smoothly - where the agent consistently does the thing you want, the way you want it - you'll start seeing opportunities everywhere. Tasks you'd accepted as unavoidably manual start looking like candidates for delegation.
That lunch conversation I mentioned at the start? It ended with both of them wanting to try this themselves. Neither walked away thinking AI would transform their work overnight. But both walked away with something concrete to try.
This isn't about AI being magic. It's about doing the work to make AI useful. The methodology exists because without it, AI capabilities don't translate into practical value. With it, you're not just using AI - you're collaborating with it.
Maybe that's worth trying.
