
I'm Reclaiming My Agency From AI's Mirror

📖 10 min read · 1,952 words · Updated Apr 9, 2026

The Algorithmic Mirror: What AI Reveals About Our Own Agency (and How to Get It Back)

It’s 2026, and if you’re reading this, chances are you’ve had a conversation with an AI, or at least watched one generate something eerily human. We’re past the novelty phase, I think. The initial awe has settled into a hum of constant, low-level integration. But for us here at Agntzen.com, the question isn’t just what AI can *do*, but what it *means* for us, as agents in a world increasingly shaped by algorithms.

Lately, I’ve been thinking a lot about mirrors. Not the kind you look into in the morning, but the kind that reflect back something deeper. AI, in its current iteration, is becoming one of those mirrors. It’s reflecting back our biases, our shortcuts, our desires, and sometimes, the very mechanisms of our own decision-making processes. And honestly? Sometimes what I see isn’t pretty. More importantly, it’s making me wonder if we’re letting our own agency slip away, one convenient AI-generated email at a time.

The Illusion of Effortless Creation: When AI Becomes Your Ghostwriter

A few months ago, I was swamped. Deadline after deadline, and my inbox was a disaster. My personal assistant AI (let’s call her ‘Agnes,’ because ‘AI-PA-17’ felt a bit too… sterile) offered to draft some replies for me. “Just give me the gist,” she said, metaphorically speaking. I gave her a few bullet points for a client email, something about project scope adjustments. Within seconds, a perfectly worded, polite, and professional email appeared. It even included a subtle apology for the delay, which I hadn’t explicitly mentioned, but was certainly true.

My first thought? “Genius! Time saved!” My second thought, a few minutes later, was a prickle of unease. Did I *really* want to send that? It wasn’t wrong, but it wasn’t… me. It lacked the specific turn of phrase I might use, the slight informality I cultivate with that particular client. It was optimal, yes, but optimal in a generic, statistically probable way.

This is where the mirror effect comes in. When we outsource our communication, even just a draft, we’re outsourcing a part of our expressive agency. We’re letting an algorithm choose the words, the tone, the structure. And while it’s efficient, it also smooths out the rough edges of our individuality. It makes us more predictable, more palatable to the average. But isn’t the point of being an agent to *not* be average? To have a unique voice, a distinct perspective?

The Trap of the “Good Enough”

The danger here isn’t malicious AI. Agnes isn’t trying to steal my soul. She’s just doing what she was designed to do: optimize for common human communication patterns. The danger is our own willingness to accept “good enough” in place of “authentically me.”

Think about it. How many times have you let an AI summarize an article for you instead of reading it? Or generate a social media caption instead of crafting your own? Each time, we’re delegating a cognitive task, a creative act. And each time, we’re slightly diminishing the muscle memory of our own agency in that domain.

I started an experiment. For two weeks, I forced myself to write every single email, every social media post, every internal memo from scratch. No AI drafts, no grammar suggestions beyond basic spell check. It was slower. Definitely slower. But something interesting happened. My writing felt more vibrant. I remembered specific anecdotes or turns of phrase that Agnes would never have predicted. And most importantly, I felt more connected to the act of communication itself. It wasn’t just output; it was expression.

This isn’t to say AI writing tools are inherently bad. They’re incredible for brainstorming, for overcoming writer’s block, for quickly rephrasing something. The trick is to use them as a tool to *amplify* your agency, not replace it. Think of it like a power drill. You wouldn’t let the drill decide where to put the screws, would you? You use it to make *your* job easier, faster, more precise, but the ultimate decision and design are still yours.

Algorithmic Echo Chambers: When AI Reflects Our Biases Back

Beyond personal expression, there’s a more insidious mirror effect: how AI reflects and amplifies our collective biases. We’ve all heard the stories: AI hiring tools showing gender bias, facial recognition systems struggling with darker skin tones, content recommendation engines pushing increasingly extreme views.

These aren’t AI failures in the sense of a bug in the code. They are reflections of the data they were trained on, which is, inevitably, human-generated data. And human-generated data is full of our historical and societal prejudices. The algorithms simply learn these patterns and, if unchecked, perpetuate them, often at scale.

I recently volunteered for a project with a local non-profit aiming to use AI to match mentors with mentees. The initial idea was brilliant: use natural language processing to analyze profiles and suggest pairings based on shared interests, career goals, and even personality traits inferred from text. Sounds great, right?

We ran a small pilot. The results were… illuminating. The AI, trained on years of historical data from the non-profit (which, like many, had a historical lean towards certain demographics in leadership roles), consistently recommended male mentors for male mentees in STEM fields, even when highly qualified female mentors were available. For arts and humanities, it showed slightly more gender parity, but still leaned towards matching based on implicit assumptions about “suitable” pairings.

This wasn’t intentional bias in the programming. It was a reflection of the historical patterns in the training data. The mirror showed us what our own human-driven system had been doing for years, just amplified and made explicit by the algorithm.
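One thing that helped us was simply measuring the pattern before trying to fix it. Here's a toy sketch of the kind of audit we ran; the dict-based profile format and the `audit_gender_pairing` helper are made up for illustration, not our actual pipeline:

```python
from collections import Counter

def audit_gender_pairing(matches):
    """Count same-gender vs cross-gender pairings in a list of
    (mentee, mentor) profile pairs, each a dict with a 'gender' key."""
    counts = Counter(
        'same' if mentee['gender'] == mentor['gender'] else 'cross'
        for mentee, mentor in matches
    )
    total = sum(counts.values())
    return {kind: counts[kind] / total for kind in ('same', 'cross')}

# Example: three pilot pairings, two of them same-gender
pilot = [
    ({'gender': 'M'}, {'gender': 'M'}),
    ({'gender': 'F'}, {'gender': 'M'}),
    ({'gender': 'M'}, {'gender': 'M'}),
]
print(audit_gender_pairing(pilot))
```

Even a crude ratio like this makes the mirror's reflection explicit: you can compare the pairing distribution against the distribution of qualified mentors and see the skew in numbers rather than anecdotes.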

We had to intervene. Our solution wasn’t to “fix” the AI with a magic wand, but to actively adjust the training parameters and introduce explicit diversity constraints. For instance, we added a rule that for every five recommendations, at least two had to involve a cross-gender or cross-racial pairing, unless specific preferences prohibited it. We also began actively curating our training data to remove historical imbalances.

Here’s a simplified example of how you might add a constraint in a hypothetical Python matching script (this is illustrative; real-world systems are far more complex):


def get_diverse_matches(mentee_profile, available_mentors, num_matches=5):
    # Initial scoring based on interests, skills, etc.
    # (calculate_scores stands in for the non-profit's existing scorer)
    initial_scores = calculate_scores(mentee_profile, available_mentors)

    # Sort mentors by score, best first
    sorted_mentors = sorted(initial_scores.items(),
                            key=lambda item: item[1], reverse=True)

    final_matches = []
    diverse_match_count = 0

    for mentor_id, score in sorted_mentors:
        mentor_data = available_mentors[mentor_id]

        # Check for diversity criteria (e.g., gender, race)
        is_diverse_match = (mentor_data['gender'] != mentee_profile['gender'] or
                            mentor_data['race'] != mentee_profile['race'])

        if diverse_match_count < 2 and is_diverse_match:
            # Haven't met our quota of two diverse matches yet: prioritize it
            final_matches.append(mentor_id)
            diverse_match_count += 1
        elif len(final_matches) - diverse_match_count < num_matches - 2:
            # Fill the remaining non-diverse slots by score
            final_matches.append(mentor_id)

        if len(final_matches) == num_matches:
            break

    return final_matches

This isn't about blaming the AI. It's about recognizing that AI is a powerful lens through which we can see the often-hidden structures and biases in our own systems. And once we see them, we have a clear directive to act, to exercise our agency to build more equitable systems.

Reclaiming Agency: Practical Steps in an Algorithmic World

So, what do we do with these reflections? How do we ensure that AI remains a tool for amplifying human agency, rather than diminishing it? Here are a few practical thoughts I've been experimenting with:

1. Develop Your "AI Filter"

Just like you wouldn't believe everything you read on social media, you shouldn't blindly accept every AI output. Cultivate a critical eye. Ask yourself: "Is this truly reflective of my intent? Is this robust enough? Does this perpetuate any biases?"

For example, when using an AI to summarize a long document, instead of just reading the summary, try this: skim the original document first, noting key points. Then read the AI summary and compare it to your own notes. This active comparison strengthens your comprehension and helps you identify potential gaps or misinterpretations by the AI.
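If you want to make that comparison slightly more systematic, a few lines of Python can flag which of your noted points never made it into the summary. This is a deliberately naive sketch: it uses exact substring matching, and the `missing_points` helper and sample data are invented for illustration.

```python
def missing_points(my_notes, ai_summary):
    """Return the noted key phrases that don't appear (case-insensitively)
    in the AI-generated summary -- candidates for gaps or misreadings."""
    summary_lower = ai_summary.lower()
    return [point for point in my_notes if point.lower() not in summary_lower]

notes = ["budget overrun", "Q3 deadline", "new vendor risk"]
summary = "The report covers the budget overrun and the Q3 deadline."
print(missing_points(notes, summary))  # ['new vendor risk']
```

A real check would need fuzzier matching (synonyms, paraphrase), but even this crude version turns "did the AI miss anything?" from a feeling into a list.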

2. Be Intentional About Delegation

Don't just delegate tasks to AI because you can. Delegate because it frees you up for higher-order, more creative, or more human-centric work. If drafting an email takes you 15 minutes and an AI does it in 15 seconds, but those 15 minutes were crucial for you to carefully consider your relationship with the recipient, maybe don't delegate it. If it's a routine status update, go for it.

A simple rule I try to follow: if a task requires unique human insight, empathy, or a specific creative spark, I keep it. If it’s repetitive, data-driven synthesis, or a task where generic optimization is fine, I consider AI. This isn't a hard and fast rule, but a guiding principle.
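Just to make the principle concrete, here's the rule of thumb expressed as a tiny (and deliberately simplistic) checklist function; the `should_delegate_to_ai` name and the task attributes are made up for illustration:

```python
def should_delegate_to_ai(task):
    """Apply the rough heuristic: keep tasks that need human insight,
    empathy, or a creative spark; consider delegating the routine rest."""
    needs_human = (task.get('requires_empathy')
                   or task.get('requires_unique_insight')
                   or task.get('requires_creative_spark'))
    return not needs_human

print(should_delegate_to_ai({'name': 'weekly status update'}))  # True
print(should_delegate_to_ai({'name': 'client apology',
                             'requires_empathy': True}))        # False
```

Of course, the hard part isn't the code; it's being honest about which of your tasks actually require empathy or insight, rather than labeling them routine because delegation is convenient.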

3. Understand the Data Diet

If you're building or deploying AI systems, pay obsessive attention to your training data. Garbage in, garbage out isn’t just a cliché; it’s the foundational truth of AI ethics. Actively seek out diverse datasets, audit for biases, and implement mechanisms for continuous feedback and correction. Consider techniques like "data augmentation" to create synthetic examples that help balance out underrepresented groups in your real-world data.

Here’s a very basic conceptual example of how you might augment data in a Python script for text classification (e.g., to balance sentiment labels):


import pandas as pd
from textattack.augmentation import WordNetAugmenter

# Assume you have a DataFrame 'df' with 'text' and 'label' columns
# Let's say 'negative' sentiment is underrepresented

# Initialize an augmenter that swaps words for WordNet synonyms,
# producing three augmented versions per input text
augmenter = WordNetAugmenter(transformations_per_example=3)

# Filter for the underrepresented class
underrepresented_samples = df[df['label'] == 'negative']

augmented_data = []
for original_text in underrepresented_samples['text']:
    # augment() returns a list of augmented strings
    for aug_text in augmenter.augment(original_text):
        augmented_data.append({'text': aug_text, 'label': 'negative'})

# Convert the augmented data to a DataFrame and concatenate
augmented_df = pd.DataFrame(augmented_data)
df_balanced = pd.concat([df, augmented_df], ignore_index=True)

# Now df_balanced has more 'negative' samples, helping the model learn better

This kind of proactive data management is an exercise of agency at the systemic level, ensuring that the mirror we build reflects a fairer, more accurate picture.

4. Embrace the "Human Loop"

Never completely remove humans from critical decision-making processes, especially when AI is involved. Design systems with explicit "human-in-the-loop" checkpoints. This means instead of full automation, the AI makes a recommendation, but a human makes the final call. This isn't just about safety; it's about preserving the space for human judgment, intuition, and ethical consideration that algorithms currently lack.
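As a sketch of what such a checkpoint might look like in code: the AI's output is only a proposal until a human reviewer signs off. The function name and the callback-based reviewer interface here are hypothetical, not any particular framework's API.

```python
def review_recommendation(recommendation, confidence, ask_human):
    """Route an AI recommendation through a human checkpoint.

    The AI proposes; a human approves or rejects. ask_human is any
    callable that presents a prompt and returns True (approve) or
    False (reject) -- a CLI prompt, a review queue UI, etc.
    """
    decision = ask_human(
        f"AI recommends: {recommendation!r} "
        f"(confidence {confidence:.0%}). Approve?"
    )
    return recommendation if decision else None

# Example with a stand-in reviewer that approves everything:
approved = review_recommendation("pair mentee #12 with mentor #7", 0.91,
                                 ask_human=lambda prompt: True)
print(approved)  # pair mentee #12 with mentor #7
```

The design point is that the human decision is a required step in the control flow, not an optional override bolted on afterward; full automation requires someone to deliberately remove the checkpoint.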

The Future is Not Predetermined

The algorithmic mirror isn't inherently good or bad. It's just a reflection. What we do with that reflection, how we respond to what it shows us about ourselves and our systems, is where our agency truly lies. We have the power to polish the mirror, to adjust its angle, and to ensure that what it reflects back to us is a future we actively choose, not one we passively accept.

AI is here to stay. Our agency, however, is not a given. It's something we must actively cultivate, protect, and exercise, especially in an increasingly automated world. Let's not outsource our humanity along with our tasks. Let's use these powerful tools to become more intentional, more self-aware, and ultimately, more agentic beings.

✍️
Written by Jake Chen

AI technology writer and researcher.
