
I See AI Differently Than Most – Here's Why It Matters

📖 9 min read · 1,773 words · Updated Apr 15, 2026

It’s 2026, and I’m still surprised by how many conversations about AI, even among smart people, feel like we’re back in 2018. The language, the framing, the underlying assumptions – it’s often stuck in a loop of “AI will take all our jobs” or “AI will solve all our problems.” Both, frankly, are pretty unhelpful and miss the truly interesting stuff happening right now.

At AgntZen, we’re always pushing on the idea of agency: what it means, how it’s distributed, and how technology reshapes it. And nowhere is that more apparent, and more misunderstood, than in the current state of AI. Specifically, I want to talk about the creeping automation of decision-making – not at the grand, societal level, but in the micro-decisions that shape our day-to-day work, our creative output, and even our personal choices. This isn’t about AI replacing humans; it’s about AI subtly, and sometimes not so subtly, influencing the very act of choosing.

The Quiet Erosion of Micro-Decisions

Think about your last week. How many times did you open a new document and start typing without first checking a “suggested” outline? How often did you accept a code completion from your IDE without truly evaluating if it was the most elegant solution, or just the most immediate? Did you ever find yourself scrolling through an endless feed, feeling a vague sense of obligation to engage with content that an algorithm had explicitly curated to keep you there?

These are micro-decisions. They seem small, almost insignificant. But they add up. My argument today is that these small, seemingly convenient automations are chipping away at our individual agency in ways we’re only just beginning to grasp. And the real danger isn’t that AI will make all our decisions for us; it’s that it will make our decisions through us – by subtly shifting the Overton window of possibility, by pre-biasing our options, and by making the path of least resistance the path of algorithmic suggestion.

A Personal Anecdote: The Case of the “Optimized” Blog Post

Just last month, I was wrestling with a tricky concept for a blog post. I had a good core idea, but the introduction felt clunky. So, I did what many of us do: I tossed a few bullet points into a popular large language model and asked it to “draft an engaging opening.”

What came back was… fine. It was grammatically correct, flowed well, and hit all the expected notes. But it wasn’t me. It lacked the specific quirky phrasing, the slight tangent I’d usually go on, the little rhetorical flourishes that are part of my voice. In the past, I would have iterated, struggled, walked away, and come back to it. That struggle, that wrestling with language, is where new connections form, where unique expressions emerge.

Instead, I found myself editing the AI’s output. I tweaked a word here, rephrased a sentence there. I optimized it. And in that optimization, I realized I hadn’t truly written an introduction; I had merely refined an algorithmic suggestion. The core decision – how to start this piece – had been offloaded. And while the outcome was efficient, I felt a distinct lack of the creative satisfaction I usually get from truly crafting something.

This isn’t to say AI is inherently bad for writing. Far from it. I use it for brainstorming, for summarizing research, for catching typos. But the moment I let it dictate the initial creative spark, I felt a dilution of my own agency in the process.

The Allure of the Default: Why We Succumb

Why do we do this? Why do we so readily accept these automated micro-decisions? A few reasons come to mind:

  • Cognitive Load Reduction: Our brains are lazy, in a good way. They seek efficiency. If an AI can give us a “good enough” answer or path, we’ll often take it to conserve mental energy for perceived higher-stakes decisions.
  • Perceived Objectivity: There’s a subtle belief that an algorithm, being “logical,” might be more objective or “correct” than our own potentially biased human intuition.
  • Fear of Missing Out (FOMO) on Efficiency: In a fast-paced world, not using a tool that promises speed feels like falling behind.
  • The Illusion of Choice: Often, the AI presents a few options, giving us the feeling of choice, even if those options are themselves algorithmically constrained and don’t represent the full spectrum of possibilities.

This isn’t a new phenomenon, of course. For decades, default settings in software have guided our behavior. But with generative AI, the suggestions are dynamic, contextual, and often presented with an authoritative confidence that makes them harder to question.

Practical Strategies for Reclaiming Micro-Decisions

So, what do we do? Do we reject all AI tools? Of course not. That’s neither practical nor desirable. The trick is to be intentional, to cultivate a kind of “algorithmic skepticism” at the micro-level. Here are a few concrete ways I’ve been trying to do this:

1. The “Blank Page” Protocol

Before you even think about prompting an AI for a first draft, try starting with a truly blank page. For me, this often means a simple text editor, no formatting, no fancy features. Just me and the cursor. The goal isn’t to produce a perfect first draft, but to get your own raw, unfiltered ideas down. Only after I’ve wrestled with the initial thoughts for 15-30 minutes do I consider bringing an AI into the process, perhaps to refine, rephrase, or expand on my own core ideas.

This applies to code too. Before asking an AI for a function, try writing out the logic in pseudocode or even just comments. What are the inputs? What’s the desired output? What are the edge cases? This forces you to engage with the problem on your own terms first.
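To make that concrete, here is a minimal sketch of the comment-first habit. The function, its rules, and its numbers are my own illustration, not output from any AI tool: the point is that the contract exists in writing before any completion can be accepted or rejected.

```python
# Step 1: write the contract as comments before a single line of code exists.
# Inputs: a list of order totals (floats), possibly empty.
# Output: the average total, rounded to 2 decimal places.
# Edge cases: empty list -> 0.0; any negative total -> ValueError.

def average_order_total(totals):
    """Average a list of order totals, per the contract sketched above."""
    if any(t < 0 for t in totals):
        raise ValueError("order totals must be non-negative")
    if not totals:  # edge case decided up front, not by an autocomplete
        return 0.0
    return round(sum(totals) / len(totals), 2)
```

Only once the contract is written down do you have your own yardstick for judging whatever an AI suggests, instead of judging the suggestion by how plausible it looks.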

2. The “Prove It” Prompt

When an AI gives you a suggestion, especially for something critical, don’t just accept it. Ask it to justify its reasoning. Not in a confrontational way, but as a genuine inquiry into its decision-making process. For example:


"That's an interesting suggestion for the introduction. Could you explain why you chose to focus on [specific element] over [another element I considered]?"

"You've suggested this particular library for data processing. What are its main advantages compared to [alternative library I know] for this specific task?"

This isn’t just about validating the AI; it’s about forcing yourself to engage critically with its output, to understand the underlying logic, and to compare it against your own knowledge and intuition. Sometimes, the AI’s reasoning will be solid and reveal a blind spot you had. Other times, it will expose the limitations or biases of the model, giving you a chance to course-correct with your own judgment.

3. The “Two-AI Comparison” (with a Twist)

If you’re using AI for generating options (e.g., content ideas, code structures, marketing copy), try prompting two different models with the exact same input. Compare their outputs. Don’t just pick the “best” one. Analyze why they differed. What assumptions did each model make? What stylistic preferences did they exhibit?

The twist: Then, try to generate a third, human-created option that synthesizes the best aspects of both AI outputs, but adds a distinctly human touch – a unique perspective, an unexpected analogy, a subtle emotional resonance that the AIs missed. This forces you to be the ultimate arbiter, the creative synthesizer, rather than a mere editor of AI-generated content.
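The mechanical part of this exercise – same prompt, two models, outputs side by side – can be wrapped in a tiny harness. The stubs below are placeholders of my own invention; in practice you would swap in whichever client functions your two providers expose:

```python
def compare_models(prompt, model_a, model_b):
    """Send the identical prompt to two model callables and pair the outputs.

    model_a and model_b are stand-ins for real API client functions; the
    harness only guarantees both models see exactly the same input.
    """
    return {
        "prompt": prompt,
        "model_a": model_a(prompt),
        "model_b": model_b(prompt),
    }

# Stub "models" for illustration only -- replace with real API calls.
stub_a = lambda p: f"[A] terse answer to: {p}"
stub_b = lambda p: f"[B] verbose answer to: {p}"

result = compare_models("Suggest a web server structure.", stub_a, stub_b)
```

The harness deliberately returns both outputs rather than ranking them: the analysis of *why* they differ is the part you keep for yourself.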

Here’s a simple example in Python, imagining two different AI models for generating a basic web server structure. You get two outputs:

AI Model A’s Suggestion (Flask):


from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/hello', methods=['GET'])
def hello_world():
    name = request.args.get('name', 'World')
    return jsonify({"message": f"Hello, {name}!"})

if __name__ == '__main__':
    app.run(debug=True)

AI Model B’s Suggestion (FastAPI):


from fastapi import FastAPI
import uvicorn

app = FastAPI()

@app.get("/hello")
async def hello_world(name: str = "World"):
    return {"message": f"Hello, {name}!"}

if __name__ == '__main__':
    uvicorn.run(app, host="0.0.0.0", port=8000)

Instead of just picking one, a human might look at these and decide: “Flask is simpler for small projects, but FastAPI offers better performance and async capabilities for future scaling. For this project, I need to start small but anticipate growth, so I’ll go with FastAPI but simplify the boilerplate, and perhaps add a custom logger that neither AI suggested.” You’re not just choosing; you’re actively designing.
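That “custom logger that neither AI suggested” can be framework-agnostic, so the choice of Flask versus FastAPI stays open. Here is a sketch built only on the standard library `logging` module; the function names and log format are my own illustration:

```python
import logging
import time

def make_request_logger(name="webapp"):
    """Build a simple structured logger that neither boilerplate included."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid stacking duplicate handlers on re-import
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")
        )
        logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

def log_request(logger, method, path, started):
    """Record one handled request with its duration in milliseconds."""
    elapsed_ms = (time.perf_counter() - started) * 1000
    logger.info("%s %s took %.1fms", method, path, elapsed_ms)

logger = make_request_logger()
t0 = time.perf_counter()
log_request(logger, "GET", "/hello", t0)
```

Because it depends on nothing but the standard library, this piece survives the framework decision either way – which is exactly the kind of design judgment the two AI suggestions left on the table.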

4. Set Clear Boundaries and Intentions

Before you even open an AI tool for a task, define its role. Is it a research assistant? A brainstorming partner? A grammar checker? A code refactorer? Be explicit. If you’re using it to brainstorm, commit to generating at least three of your own ideas before prompting the AI for more. If it’s for summarization, read the original source first, then compare your summary to the AI’s to spot omissions or misinterpretations.

This is about consciously setting the boundaries of AI’s influence in your workflow, rather than letting it subtly expand into every corner of your decision-making.

The Future of Agency Isn’t About Avoiding AI, But Mastering It

The conversation around AI and agency needs to shift. It’s not about whether AI will take our agency away; it’s about how we, as individuals and as a society, choose to interact with these powerful tools. It’s about recognizing the subtle pressures, the convenient defaults, and the allure of effortless efficiency, and consciously choosing when and where to exert our own human will.

True agency in the age of AI isn’t about shunning the technology. It’s about becoming a more discerning, more intentional, and ultimately, a more powerful agent ourselves. It’s about using AI not to offload our thinking, but to augment it, to challenge it, and to push our own creative and problem-solving capacities further than ever before. The future isn’t about AI making decisions for us; it’s about us making better decisions, with AI as our occasionally brilliant, occasionally flawed, but always instrumental partner.

Actionable Takeaways:

  • Start Raw: Always begin a creative or problem-solving task with your own thoughts on a “blank page” before involving AI.
  • Question Everything: Ask AI to justify its suggestions. Don’t blindly accept its outputs.
  • Compare and Synthesize: Use multiple AI models and then actively synthesize their outputs with your own unique perspective.
  • Define Roles: Set clear, intentional boundaries for when and how you use AI in your workflow to avoid creeping automation of micro-decisions.
  • Embrace the Struggle: Remember that the friction of creative and intellectual struggle is often where true innovation and insight emerge. Don’t always choose the path of least resistance.


✍️
Written by Jake Chen

AI technology writer and researcher.
