It’s 2026, and I’m sitting here, staring at a screen that’s mostly filled with a half-written article about… well, about the future. Specifically, about how we, as agents in this increasingly complex world, are going to deal with the inevitable (and, frankly, already-here) problem of AI making decisions for us. Not just recommendations, mind you, but actual, impactful choices that shape our lives, our work, and maybe even our sense of self.
My coffee, a rather strong Ethiopian blend, is getting cold, which is a good metaphor for how quickly some of our traditional notions of agency are cooling off in the face of these new systems. We talk a lot about AI ethics, and believe me, that’s crucial. But I want to dig into something a bit more immediate, more personal: the erosion of our decision-making muscle, and how we can consciously fight to keep it toned.
The Subtle Art of Outsourcing Our Brains
Think about your morning. Maybe you ask a smart speaker for the weather. It tells you to bring an umbrella. Do you check a second source? Do you look out the window and make your own judgment based on the clouds? Or do you just grab the umbrella? This is a tiny, inconsequential example, but it’s the thin end of a very large wedge.
I recently had a conversation with a friend who runs a small e-commerce business. He was raving about a new AI tool that optimizes his ad spend. “Sam,” he said, “it’s incredible. I just tell it my budget and my target ROAS, and it does everything. I don’t even look at the campaigns anymore.”
On one hand, fantastic. Efficiency is the holy grail for small businesses. On the other hand, a little alarm bell went off in my head. What happens when that system makes a truly terrible call? What happens when the market shifts in a way the model hasn’t seen? Will he even know *why* it’s failing? Will he have the underlying knowledge to course-correct, or will he just be hitting a “reset” button and hoping for the best?
This isn’t about AI being inherently bad. It’s about our willingness to cede our cognitive ground, piece by piece, without fully understanding the implications for our own agency. It’s about the shift from *tool use* to *system reliance*.
When Does “Helpful” Become “Hindering”?
I remember back in the early days of personal finance software. It would categorize your spending, suggest budgets. It was a mirror, reflecting your habits back at you. You still had to decide to save, to cut back. Now, we have AI advisors that can execute trades, rebalance portfolios, even suggest specific investments based on complex algorithms. The temptation to just let it run is enormous.
The problem is, intelligence isn’t just about processing information and making optimal choices. It’s about understanding context, about nuance, about the *why*. It’s about learning from mistakes, not just correcting them computationally. When an AI makes a “mistake” (or an outcome we don’t like), we often don’t get a clear explanation. We just get a new output. This lack of transparency, coupled with our increasing reliance, is a dangerous cocktail.
A few months ago, I was trying to plan a complex travel itinerary involving multiple cities, different currencies, and some very specific cultural events. My usual go-to AI travel assistant was, surprisingly, not cutting it. It kept optimizing for cost or speed, ignoring the qualitative aspects I valued. It would suggest flights that arrived too late for a specific concert, or hotels far from the local markets I wanted to explore.
I found myself getting frustrated, feeling like I was wrestling with an unyielding bureaucracy rather than interacting with a helpful tool. Eventually, I scrapped the AI’s suggestions and went back to good old-fashioned manual research: opening multiple tabs, comparing options, reading reviews, and making my own judgments. And you know what? It felt good. It felt like I was *designing* my trip, not just accepting a pre-packaged one.
That experience brought home a crucial point: the value of friction. Sometimes, a little friction, a little cognitive effort, is precisely what we need to maintain our agency. It forces us to engage, to understand, to own our decisions.
Reclaiming the Cognitive Driver’s Seat
So, how do we push back? How do we stay agents in a world increasingly run by autonomous systems? It’s not about rejecting AI wholesale; that’s neither practical nor desirable. It’s about conscious engagement and strategic disengagement.
1. Develop a “Why” Reflex
Whenever an AI system makes a recommendation or an autonomous decision that impacts you, pause and ask: “Why?” Demand an explanation, even if it’s a simple one. If the system can’t provide one, or if the explanation is opaque, that’s a red flag. This isn’t always easy, as many systems are black boxes. But even the act of *asking* strengthens your own critical thinking.
For example, if you’re using an AI for content generation (like drafting blog post outlines, not writing the whole thing, obviously!), and it suggests a particular angle, don’t just accept it. Ask yourself: “Why this angle? What other angles did I consider? Does this truly align with my goals and audience?”
2. The “Human-in-the-Loop” Isn’t Just for Safety-Critical Systems
We often hear about human-in-the-loop for things like autonomous driving or medical diagnostics. But I think we need to apply this principle more broadly to our everyday AI interactions. It means actively reviewing and overriding AI suggestions, even if they seem “good enough.”
Consider a simple task like managing your email inbox. Many AI tools will categorize, prioritize, and even draft responses. Instead of letting it send that AI-generated response, read it, edit it, make it your own. Better yet, draft a few sentences yourself before using the AI to polish it. This maintains your unique voice and ensures the message truly reflects your intent.
Here’s a practical example if you’re using a tool that offers AI-generated email drafts. Instead of just clicking “send,” consider a process like this:
**AI suggestion (example):**

```text
Subject: Following Up on Project X

Hi [Name],

Just wanted to touch base regarding Project X. The latest progress report indicates we're on track. Let me know if you have any questions.

Best,
[Your Name]
```

**Your human-in-the-loop process:**
1. **Read:** Is the tone right? Is anything missing?
2. **Edit for Tone/Personalization:** "Hi [Name], Hope you're having a good week! Quick check-in on Project X. I saw the latest report and it looks like we're moving along nicely. I was particularly pleased with [specific detail]. Do you have any immediate thoughts or questions on [specific aspect]?"
3. **Add Value:** "I've also been thinking about [related idea] and wonder if we should discuss it next week."
4. **Send (your version).**
This isn’t about rejecting the AI’s convenience; it’s about using it as a starting point, a prompt, rather than a final product. It keeps your brain engaged in the communication process.
3. Cultivate Cognitive Redundancy (Don’t Put All Your Eggs in One Algorithmic Basket)
Just as you wouldn’t rely on a single data backup, don’t rely on a single AI system for critical insights or decisions. If you’re researching a complex topic, use multiple sources, including human-generated content and, yes, even different AI models. Compare their outputs. Look for discrepancies. This forces you to synthesize information and form your own conclusions.
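That cross-checking habit can even be made mechanical. Here's a minimal sketch of the idea in Python; the `source_a`/`source_b`/`source_c` functions are hypothetical stand-ins (in practice they might wrap different AI models, a search result, or your own notes), not a real API:

```python
# A minimal sketch of "cognitive redundancy": collect the same answer from
# several independent sources and flag disagreement, instead of trusting
# any single one. The source functions below are hypothetical stand-ins.

def source_a(question):
    return "Paris"

def source_b(question):
    return "Paris"

def source_c(question):
    return "Lyon"  # a deliberately divergent answer

def cross_check(question, sources):
    """Gather one answer per source and report whether they all agree."""
    answers = {fn.__name__: fn(question) for fn in sources}
    agreed = len(set(answers.values())) == 1
    return answers, agreed

answers, agreed = cross_check("Capital of France?", [source_a, source_b, source_c])
if not agreed:
    # Disagreement is the signal to slow down and form your own judgment.
    print("Sources disagree:", answers)
```

The point isn't the code itself; it's the discipline it encodes: agreement is weak evidence, but *disagreement* is a strong prompt to engage your own judgment.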
Another example: if you’re a developer using an AI code assistant. It can be incredibly helpful for boilerplate code or debugging. But don’t just copy-paste without understanding. Take the time to step through the suggested code, understand its logic, and consider alternative approaches. This is how you learn and grow, not just how you ship code faster.
```python
# AI-suggested Python function for printing Fibonacci numbers
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        print(a, end=" ")
        a, b = b, a + b

# Your cognitive redundancy check:
# 1. Does it handle edge cases (n=0, n=1)? (It prints nothing for n=0 and
#    just 0 for n=1 -- defensible, but undocumented.)
# 2. Is it useful downstream? (It prints rather than returning a list,
#    so callers can't reuse the values.)
# 3. What are alternative implementations? (Recursive, or dynamic
#    programming that returns a list.)
# 4. Do I understand *why* `a, b = b, a + b` works? (Yes: simultaneous
#    assignment, so no temporary variable is needed.)

# Your revised, agency-driven version:
def get_fibonacci_sequence(n):
    """Return the first n Fibonacci numbers as a list."""
    if n <= 0:
        return []
    if n == 1:
        return [0]
    sequence = [0, 1]
    while len(sequence) < n:
        sequence.append(sequence[-1] + sequence[-2])
    return sequence

# This version is more robust, returns a usable list, and demonstrates
# a deeper understanding.
```
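Check #3 in that list asks about alternative implementations, so it's worth actually writing one. Here's one possibility (my own sketch, not part of the original example): a recursive definition made practical with memoization via Python's standard `functools.lru_cache`, so each subproblem is computed only once.

```python
from functools import lru_cache

# An "alternative implementation" from the checklist: the recursive
# definition, with memoization so repeated subproblems are cached.
@lru_cache(maxsize=None)
def fib(n):
    """Return the n-th Fibonacci number (0-indexed: fib(0) == 0)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fib_sequence(n):
    """Return the first n Fibonacci numbers as a list."""
    return [fib(i) for i in range(n)]

print(fib_sequence(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Writing the alternative yourself is the exercise: naive recursion is exponential, memoization makes it linear, and knowing *why* is exactly the understanding that copy-pasting skips.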
The goal isn't to be faster than the AI; it's to be smarter, more adaptable, and ultimately, more *human*.
4. Embrace the Messy, the Intuitive, and the Unquantifiable
AI excels at optimization based on quantifiable metrics. But life, and many of our most important decisions, are full of unquantifiable factors: intuition, gut feelings, ethical considerations that go beyond a simple cost-benefit analysis, aesthetic preferences, emotional resonance. Don't let AI systems systematically devalue these aspects of your decision-making.
When choosing a new apartment, an AI might optimize for commute time and rent. But it won't tell you about the charming coffee shop on the corner, the friendly vibe of the neighborhood, or the way the morning light hits the kitchen. These "soft" factors are often what make a place feel like home. Your agency lies in prioritizing these over purely objective metrics.
The Future of Our Minds
We are at a fascinating, and somewhat precarious, juncture. AI offers incredible power and convenience, but it also presents a subtle challenge to our cognitive sovereignty. The ease of offloading mental tasks is seductive. But like any muscle, if we don't use our decision-making faculties, they will atrophy.
My hope for agntzen.com readers, for all of us, is that we consciously choose to remain active, engaged agents in this future. That we view AI not as a replacement for our intellect, but as a sophisticated tool that demands our conscious direction. That we remember the value of the "why," the importance of the human touch, and the irreplaceable nature of our own nuanced, messy, and wonderfully human judgment.
Because ultimately, the future isn't just about what technology can do; it's about what we, as humans, choose to do with it.
Actionable Takeaways:
- **Question Everything (from AI):** Cultivate a "Why?" reflex. Don't just accept AI outputs; interrogate them.
- **Stay in the Loop:** Actively review, edit, and personalize AI-generated content or decisions, even for mundane tasks. Make it your own.
- **Diversify Your Information Diet:** Don't rely on a single AI source. Compare outputs from different models and human sources.
- **Value the Unquantifiable:** Prioritize intuition, ethics, and subjective preferences in your decision-making, even when AI suggests optimal "objective" solutions.
- **Practice Deliberate Disengagement:** Periodically turn off AI assistance for certain tasks to keep your cognitive muscles toned.