
My AI Reflects My Flaws: A Dad's Training Confession

📖 9 min read · 1,704 words · Updated Mar 26, 2026

2026-03-15

The Prompt as a Mirror: How We Train AI to Reflect Our Own Bad Habits

I’ve been trying to teach my five-year-old, Leo, to put his shoes away. Not just near the shoe rack, mind you, but *on* the shoe rack. In the designated cubby. It’s a daily battle, a tiny war of wills waged over footwear. And more often than not, I find myself frustrated, repeating the same instructions, only to find his Avengers sneakers still splayed across the hallway an hour later.

What does this have to do with AI, you ask? Everything, actually. Because as I was patiently (or not so patiently) explaining for the umpteenth time where Iron Man’s shoe belonged, a thought struck me: I’m doing the exact same thing we’re doing with AI. We’re giving it instructions, sometimes vague, sometimes overly specific, and then we’re surprised when the output isn’t quite what we imagined. We blame the AI, or the model, or the ‘black box,’ but rarely do we look at the prompt – that initial seed of intent – as a reflection of our own flawed communication.

For agntzen.com, we often talk about agency, about the locus of control and the nature of intent. When it comes to AI, the prompt is where our agency, our intent, really makes its first mark. And increasingly, I’m seeing that our prompts are less about clear direction and more about a hope that the AI will magically intuit what we mean. It’s like telling Leo, “Put your shoes away nicely,” and expecting them to land perfectly in their cubbies.

The Echo Chamber of Ambiguity

Think about the typical interaction with a large language model. You type in a request: “Write an article about the future of work.” What do you get back? Something generic, certainly well-written, but probably lacking that specific spark you were hoping for. Why? Because “future of work” is an enormous, sprawling concept. It’s a prompt that’s begging for more context, more constraints, more *you*.

We’ve become accustomed to a certain level of conversational shorthand with other humans. We fill in gaps, we infer meaning from tone, from shared context, from non-verbal cues. AI doesn’t have that. It operates on the precise statistical relationships it’s learned from vast datasets. So when we give it a vague prompt, it fills the gaps too – but it fills them with the most statistically probable information, which often translates to the most common, most generic, and therefore least interesting answers.

This isn’t just about getting a ‘better’ answer; it’s about understanding the nature of our interaction. If we approach AI with the same casual imprecision we sometimes use with each other, we’re essentially training it to mirror that imprecision. We’re creating an echo chamber of ambiguity, where our vague inputs lead to equally vague outputs, reinforcing our own bad habits of communication.

When Good Intentions Meet Bad Prompts: A Case Study

A friend of mine, a product manager at a small startup, was recently tasked with generating some initial marketing copy for a new internal communication tool. He'd heard about the power of LLMs and was excited to try them out. His prompt:


"Generate some engaging marketing copy for our new internal communications tool. Make it sound new."

The output was… fine. Full of corporate jargon, buzzwords, and phrases like “streamline workflows” and “foster collaboration.” My friend was disappointed. “It sounds like every other tool out there!” he complained to me over coffee. “I wanted something fresh, unique.”

My first question was, “What does ‘new’ mean *to you* for *this specific tool*?” He paused. “Well, it’s really good at asynchronous communication for distributed teams. And it has this cool feature where it summarizes long threads automatically.”

Aha! There’s the specificity. There’s the unique selling proposition. His initial prompt was asking the AI to guess what “new” meant in his context, given its vast training data. And the AI, being a dutiful statistical engine, gave him the most common statistical interpretation of “new” in marketing copy: generic jargon.

We refined his prompt together:


"Generate marketing copy for an internal communication tool designed for distributed teams. Highlight its asynchronous communication strengths and its AI-powered thread summarization feature. Focus on reducing meeting fatigue and improving information retention for remote workers. Use a tone that is helpful and slightly informal, avoiding corporate buzzwords."

The difference was night and day. The new output was targeted, specific, and actually useful. It wasn’t perfect, but it was a solid foundation, a conversation starter rather than a generic monologue.

The Agency of Specificity

This brings me back to agency. We talk about AI having agency, about its ability to ‘make decisions’ or ‘create.’ But before we get there, we need to acknowledge our own agency in shaping that interaction. The prompt isn’t just an instruction; it’s a declaration of intent. It’s where we define the boundaries, the parameters, the specific universe within which the AI is meant to operate.

Think of it like this: if you ask a chef to “make something good,” you might get a perfectly edible, but uninspired, dish. If you ask them to “make a dish that combines spicy Korean flavors with a comforting Italian pasta, using fresh seafood and a light, citrusy sauce,” you’re giving them a framework, a challenge, a canvas within which to exercise their creativity. The latter prompt doesn’t stifle creativity; it directs it.

Similarly, with AI, our specificity doesn’t limit its capabilities; it focuses them. It allows the model to draw from its immense knowledge base in a way that aligns with *our* specific needs and desires, rather than simply regurgitating the statistical average.

Practical Steps to Sharpen Your Prompting Agency

So, how do we move beyond the echo chamber of ambiguity and start using prompts as powerful tools of intent? Here are a few things I’ve been experimenting with, both in my own work and in coaching others:

  1. Define the Persona and Goal: Who is the AI supposed to be? What is the ultimate goal of this output?
    • Bad: “Write a report on climate change.”
    • Better: “Act as a policy analyst for the UN. Write a concise briefing report for a head of state on the economic impacts of rising sea levels in Southeast Asia over the next decade. The goal is to inform policy decisions for infrastructure investment.”
  2. Specify Constraints and Exclusions: What should the AI *not* do or include? This is often as important as what you want it to do.
    • Bad: “Generate ideas for a new app.”
    • Better: “Brainstorm app ideas for addressing urban loneliness. Exclude any ideas that require significant hardware development or rely on subscription models for core functionality. Focus on community-building and low-barrier-to-entry solutions.”
  3. Provide Examples (Few-Shot Prompting): If you have a particular style, tone, or format in mind, give the AI a few examples. This is incredibly powerful.
    • Bad: “Write a short story about a detective.”
    • Better: “Write a short, gritty detective story in the style of Raymond Chandler. Here’s an example of the kind of opening I like: ‘The rain was a cold, wet blanket over the city, and the only thing colder was the look in her eyes.’”
  4. Iterate and Refine: Your first prompt probably won’t be perfect. Treat the interaction as a conversation. Ask follow-up questions, provide additional context, and refine your instructions based on the AI’s output.
    • Initial Prompt: “Explain quantum entanglement simply.”
    • AI Output: (technical explanation, still a bit complex)
    • Refinement: “That’s helpful, but can you explain it using an analogy a 10-year-old would understand, without using any physics jargon?”
  5. Think in ‘Variables’: If you’re using AI for repetitive tasks, consider how you can template your prompts with variables that you can easily swap out. This forces you to think systematically about what changes and what stays the same.
    • Example for content generation:
    • "Write a [length, e.g., 300-word] blog post about the benefits of [topic, e.g., mindful eating]. The tone should be [tone, e.g., encouraging and informative]. Include a call to action to [action, e.g., try a 7-day mindful eating challenge]."

The Reflection in the Machine

Ultimately, the quality of our AI interactions isn’t solely a function of the models themselves. It’s also a direct reflection of our ability to articulate our thoughts, our desires, and our intent. When Leo finally, triumphantly, puts his shoes in their cubby, it’s not just because he’s learned the rule; it’s because I’ve learned to communicate that rule with enough clarity, repetition, and specific guidance for him to grasp it.

With AI, we have an even greater responsibility, because the stakes are higher than just misplaced sneakers. We’re building systems that will increasingly shape our information, our decisions, and our world. If we train these systems with lazy, ambiguous prompts, we’re not just getting sub-par answers; we’re inadvertently reinforcing a culture of imprecision. We’re teaching the AI to mirror our own bad habits, rather than pushing ourselves to be clearer, more intentional agents in the digital space.

So, next time you’re about to type a quick, vague prompt into your favorite AI tool, take a moment. Pause. Think about what you *really* want. Think about the specific outcome. Because in that moment, you’re not just asking a machine for an answer; you’re looking into a mirror, and what you see reflected back might just be your own agency, or lack thereof.

Actionable Takeaways:

  • Treat prompts as declarations of intent: Be precise about your goals and desired outcomes.
  • Embrace constraints: Define what the AI *shouldn’t* do as much as what it should.
  • Provide context and examples: Don’t make the AI guess your specific meaning or style.
  • Iterate and refine: Use the AI’s output as feedback to improve your next prompt.
  • Be specific about audience and persona: This dramatically improves relevance and tone.


✍️ Written by Jake Chen

AI technology writer and researcher.
