AI Doesn't Understand Taste. You Have to Teach It.
March 2026
Most people think working with AI is about writing better prompts. In my experience, that’s rarely the real problem.
The real challenge is something else.
AI models are incredibly capable, but they rarely understand taste on their own. You can give them a good example, share a strong reference article, or even show them the exact output you want. And still, the result often feels slightly off.
The structure may be correct. The content may be technically accurate. But something about it doesn’t quite work.
For a while I assumed this was simply a limitation of the model. But after spending enough time working with them, I started seeing the problem differently.
The issue is rarely capability. The issue is usually missing structure.
One Good Example Is Rarely Enough
A common assumption with AI is that if you show the model one good example, it should be able to replicate that quality. In practice, that almost never works.
Most real tasks involve multiple layers of judgment. Take something simple like creating a presentation.
A good presentation requires several things to work together: narrative flow, logical structure, slide hierarchy, visual balance, and consistent design.
If you show the model one article about presentation design, it may understand some of these ideas but miss others. Sometimes the narrative works but the slides feel cluttered. Sometimes the slides look clean but the story falls apart.
The model isn’t failing randomly.
It’s simply trying to solve a complex task without enough guidance on how the process should work.
Something I Noticed After Repeating This Many Times
After generating a lot of outputs with AI models, I started noticing a pattern.
When the model makes a mistake and you explicitly point it out, it usually fixes it very quickly. But interestingly, the same mistake often shows up again in the next run.
Not because the model cannot solve it, but because it keeps defaulting to the same familiar approach.
For example, when generating presentations programmatically, the model often tries to write raw XML slide structures by hand. That approach fails quite often. And even after failing once, the model returns to the same XML approach on the next run.
It’s almost like the model has a few default ways of solving problems, and unless you explicitly guide it away from them, it keeps going back.
That observation changed how I started working with these models.
The Shift: From Prompting to System Design
Instead of endlessly tweaking prompts, I started thinking about the problem differently.
What if the model doesn’t just need instructions?
What if it needs a process?
So I started designing small systems around the model. The idea was simple: break the task into steps, define what the model should do at each stage, and add a few operational rules.
Once the ambiguity disappears, the model suddenly performs much better. Not because the model became smarter, but because the path to solving the problem became clearer.
One Example: Generating Presentations
One place where this approach worked well was with presentations.
Most of my presentations start from raw text inputs rather than structured outlines. So instead of treating AI like a slide generator, I built something closer to a presentation generation pipeline.
The workflow has three layers:
- The narrative layer, where the model converts raw text into a coherent story.
- The structure layer, where the story is translated into slide-level hierarchy.
- The design layer, where spacing, alignment, and visual balance are improved.
Separating these layers made the output far more reliable than asking the model to handle everything in a single step.
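The three layers above can be sketched as a small pipeline. This is a minimal illustration, not the actual implementation: `call_model` is a hypothetical stand-in for whatever LLM client you use, and the prompts are placeholders. The point is that each layer gets one focused job instead of a single do-everything request.

```python
def call_model(prompt: str, text: str) -> str:
    # Placeholder: in practice this would call an LLM API.
    # Here it just tags the text so the flow is visible.
    return f"[{prompt}] {text}"

def narrative_layer(raw_text: str) -> str:
    # Layer 1: convert raw text into a coherent story.
    return call_model("Rewrite this raw text as a coherent story", raw_text)

def structure_layer(story: str) -> str:
    # Layer 2: translate the story into a slide-level hierarchy.
    return call_model("Split this story into a slide-level outline", story)

def design_layer(outline: str) -> str:
    # Layer 3: improve spacing, alignment, and visual balance.
    return call_model("Refine spacing, alignment, and visual balance", outline)

def generate_presentation(raw_text: str) -> str:
    # Run the layers in order; each stage only sees the previous stage's output.
    story = narrative_layer(raw_text)
    outline = structure_layer(story)
    return design_layer(outline)
```

Because each layer is a separate function with a narrow prompt, a failure in one stage is easy to spot and fix without touching the others.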
Operational Rules Help More Than You Think
Another improvement came from adding a few simple rules:
- Ask clarifying questions before execution.
- Check if required dependencies are already installed.
- Run a final QA pass to fix anything broken.
These sound like small things, but they make a big difference. Without them, the model often jumps directly into execution and produces avoidable errors.
The More Interesting Insight
What I found interesting is that this pattern is not limited to presentations. It shows up in many AI workflows.
Most people treat AI like a very smart assistant. They give instructions and expect the model to figure out the rest.
But the more reliable approach is to think like a system designer.
Instead of asking:
Why does the model keep getting this wrong?
A more useful question is:
What structure is missing for the model to succeed?
Once you start encoding taste, constraints, and workflow explicitly, the model suddenly becomes much more effective.
Not because it became smarter.
But because you finally taught it how to approach the problem.
Lately I’ve started thinking about it this way:
Good AI outputs rarely come from better prompts. They usually come from better systems built around the model.