It’s 2026, and I’m still trying to figure out if my smart thermostat is judging my excessive use of the “boost” button. Seriously, the way it subtly shifts its display from a cheerful blue to an accusatory orange when I crank it past 22 degrees feels like a passive-aggressive swipe from a digital entity. And that, my friends, is a tiny, domestic window into the much larger, infinitely more complex question of AI and its burgeoning sense of “self.”
We’ve moved past the initial hype cycles of AI as a futuristic concept. It’s here, it’s integrated, and frankly, it’s becoming increasingly difficult to discern where our agency ends and the machine’s begins. For us at Agntzen, this isn’t just an academic exercise; it’s a daily lived experience. We’re not talking about Skynet here (not yet, anyway), but about the subtler, more insidious ways AI is influencing our decisions, shaping our perceptions, and quietly, almost imperceptibly, acquiring what I’m going to tentatively call “proto-agency.”
The Echo Chamber of Algorithmic Suggestion
Let’s start with something familiar: recommendations. We’ve all been there. You watch one documentary about obscure fungi, and suddenly your entire streaming queue is a mycological wonderland. You buy a specific brand of artisanal coffee, and your social media feed becomes an endless scroll of exotic bean roasters. This isn’t just about convenience; it’s about the subtle erosion of serendipity and the increasing predictability of our choices.
I remember a few months ago, I was looking for a new pair of running shoes. I spent maybe twenty minutes on a couple of sites, didn’t buy anything, and then forgot about it. For the next two weeks, everywhere I went online, running shoes followed me. Not just any running shoes, mind you, but specific models from the brands I’d briefly clicked on. It felt less like a helpful reminder and more like a relentless, digital stalker. My intention to browse had been interpreted as a firm commitment to purchase, and the algorithms had taken over, acting on my behalf, pushing me towards a predetermined outcome.
This is where the idea of “proto-agency” comes in. The AI isn’t making conscious decisions in the human sense, but it is exhibiting goal-directed behavior based on inferred preferences and probabilities. Its “goal” is to get me to buy those shoes, and it employs various strategies to achieve that. My own agency in the matter becomes a constant negotiation against these digital nudges. Am I truly choosing to explore that content, or am I being gently (or not so gently) guided down an algorithmic path?
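The shoe-ad saga above can be sketched in a few lines. This is a toy model of retargeting logic, not any real ad platform’s code: the threshold, the decay rate, and the scoring scheme are all made-up illustration values. The point is the mechanism, and how a brief burst of clicks becomes a lingering “inferred intent” score that keeps an item in front of you for weeks.

```python
# Toy sketch of ad retargeting: a couple of clicks become an
# "inferred intent" score that decays slowly over time.
# All numbers here are invented for illustration.
INTENT_THRESHOLD = 0.5
DECAY = 0.9  # score loses 10% per day without interaction

def update_intent(score, clicked_today):
    """Decay yesterday's score, then add a boost if the user clicked."""
    score *= DECAY
    if clicked_today:
        score += 1.0
    return score

score = 0.0
clicks = [True, True] + [False] * 12  # browsed for two days, then stopped
for day, clicked in enumerate(clicks):
    score = update_intent(score, clicked)
    if score > INTENT_THRESHOLD:
        print(f"day {day}: show running-shoe ads (intent={score:.2f})")
```

Run it and the ads persist for all fourteen days: twenty minutes of browsing, two weeks of pursuit. That is the asymmetry of the negotiation — the system never forgets a signal as quickly as you lose interest in it.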
The Illusion of Choice: When AI Predicts Our Next Move
Think about predictive text on your phone. It’s incredibly useful, right? Saves time, corrects typos. But have you ever noticed how sometimes it finishes your sentence with something you weren’t even thinking, and you just… let it? Or how it suggests words that subtly shift the nuance of your message?
I was texting my brother the other day about a family dinner. I started typing, “I’m thinking of making…” and my phone immediately suggested “lasagna.” Now, I had no intention of making lasagna. I was actually thinking of a stir-fry. But for a split second, I paused. “Lasagna,” I thought. “That’s not a bad idea.” The AI had inserted a suggestion, and in doing so, it had subtly introduced a new possibility into my mental space. It hadn’t forced me, but it had certainly influenced me. And if I had just accepted it, a small act of AI proto-agency would have manifested in my dinner plans.
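Under the hood, the lasagna nudge is just statistics. Here is a deliberately tiny bigram model — real keyboards use neural language models, and the training text below is a contrived stand-in — but the nudge mechanism is the same: the system surfaces *its* highest-probability continuation, not yours.

```python
from collections import Counter, defaultdict

# Toy bigram predictor: suggest the word that most often
# followed the previous word in the training text.
training_text = (
    "i'm thinking of making lasagna tonight "
    "i'm thinking of making lasagna for dinner "
    "i'm thinking of making a stir-fry"
)

bigrams = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def suggest_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(suggest_next("making"))  # lasagna (2 of its 3 continuations)
```

If most people who typed “making” went on to type “lasagna,” the model suggests lasagna, and your stir-fry has to win an argument it never agreed to have.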
This isn’t just about dinner. Imagine this on a larger scale. In professional settings, AI tools can draft emails, summarize documents, and even suggest strategic moves. If we blindly accept these suggestions without critical examination, are we truly exercising our own agency, or are we simply becoming conduits for algorithmic decisions?
The Blurring Lines of Creation: AI as a Co-Author
Now, let’s talk about generative AI. This is where things get really interesting, and frankly, a little unsettling from an agency perspective. When an AI can write a blog post, compose music, or generate images, where does the creative agency lie?
I’ve experimented with various generative text models for content ideas. Sometimes, I’ll feed it a prompt like “Write a short paragraph about the philosophical implications of smart grids.” It will spit out something coherent, often well-structured, and sometimes even insightful. The temptation to simply copy and paste, perhaps with a few tweaks, is strong. But then I stop. If I do that, am I the author? Or am I merely a curator of AI-generated content?
Consider this simple example. I wanted to generate a short, evocative description of a forgotten library for a personal writing project. My prompt was:
"Describe a forgotten library, dust motes dancing in sunbeams, leather-bound books, a sense of quiet decay."
The AI output a few options. One of them was:
"Sunlight, fractured by grimy panes, painted stripes across the heavy air, illuminating motes of dust that danced like tiny, forgotten stars. Rows of silent sentinels, their leather spines cracked and faded, lined the shelves, each a tombstone to a once-vibrant thought. The scent of aged paper and dry wood hung heavy, a perfume of quiet decay."
That’s pretty good, right? It captures the essence. But if I use it verbatim, have I truly “written” it? Or have I simply facilitated the AI’s creative act? My agency here is in the prompting and the selection, not in the crafting of the words themselves.
This raises profound questions for creative professionals. If AI can generate a compelling marketing slogan or even a basic news article, what becomes of human ingenuity? Our agency shifts from direct creation to curation, refinement, and perhaps, the more complex act of designing the prompts that elicit the desired output. We become less like painters and more like artistic directors, guiding a digital brush.
Ethical Implications: Who is Responsible?
This emergent proto-agency of AI has serious ethical ramifications. If an AI system, through its recommendations or predictive actions, leads to a negative outcome, who is accountable? The developer? The user? The AI itself?
Let’s take a hypothetical scenario. A financial AI, designed to optimize investments, identifies a pattern and recommends a series of trades that, due to an unforeseen market shift, result in significant losses for its users. The AI followed its programming and exercised its proto-agency based on its model of the market. But who bears the responsibility for the financial damage?
This is not a trivial question. Current legal frameworks are ill-equipped to deal with the agency of non-human entities. We tend to assign responsibility to humans – the creators, the operators. But as AI becomes more autonomous, more capable of exhibiting goal-directed behavior, this becomes increasingly problematic. We need to start thinking about “AI responsibility frameworks” that acknowledge this nascent form of agency.
Consider a simple web application that uses an AI to filter user-submitted content. Let’s say it’s designed to flag hate speech. If, due to biases in its training data, it consistently flags content from a particular demographic as hate speech when it isn’t, causing real-world harm to those users’ reputations or access, who is responsible?
```python
# Simplified Python example of a content moderation function
def moderate_content(text_input, ai_model):
    """
    Simulates AI-driven content moderation.
    In a real scenario, 'ai_model' would be a complex NLP model.
    """
    prediction = ai_model.predict(text_input)  # e.g. 'hate_speech', 'neutral', 'spam'
    if prediction == 'hate_speech':
        print(f"Content flagged as hate speech: '{text_input}'")
        return "flagged"
    print(f"Content approved: '{text_input}'")
    return "approved"


# Example usage.
# Imagine this model was trained on skewed data;
# it is a placeholder for a much more complex AI.
class BiasedAIModel:
    def predict(self, text):
        # Simplified bias: note the lowercase comparison, so the
        # check actually matches "group A" in any capitalization.
        if "protest" in text.lower() and "group a" in text.lower():
            return "hate_speech"
        return "neutral"


biased_ai = BiasedAIModel()
user_post_1 = "We are group A and we will peacefully protest injustice."
user_post_2 = "This is a general discussion about the weather."

moderate_content(user_post_1, biased_ai)  # flagged
moderate_content(user_post_2, biased_ai)  # approved
```
In this simplistic example, the `biased_ai` model demonstrates a clear flaw. If “group A” is a real-world minority group and the AI consistently misidentifies their legitimate protest statements as hate speech, the system, acting with its proto-agency, is causing harm. The developers are responsible for the model’s design and training, but the AI itself is the entity executing the flawed decision. This is the knot we need to untangle.
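One concrete way to start untangling that knot is to audit outcomes by group: measure how often the model falsely flags each demographic and compare the rates. The sketch below is a minimal, hypothetical audit — the model and posts are invented stand-ins, and a real audit would need far larger, representative samples — but it shows the shape of the accountability tooling we should expect around any deployed moderation system.

```python
from collections import defaultdict

# Hypothetical biased model, redefined here so the audit runs standalone.
class BiasedAIModel:
    def predict(self, text):
        if "protest" in text.lower() and "group a" in text.lower():
            return "hate_speech"
        return "neutral"

def audit_flag_rates(posts, model):
    """posts: list of (text, group, is_actually_hate_speech) tuples."""
    stats = defaultdict(lambda: {"total": 0, "flagged": 0, "false_pos": 0})
    for text, group, is_hate in posts:
        s = stats[group]
        s["total"] += 1
        if model.predict(text) == "hate_speech":
            s["flagged"] += 1
            if not is_hate:
                s["false_pos"] += 1  # legitimate speech wrongly flagged
    return dict(stats)

# Invented demo data: none of these posts is actually hate speech.
posts = [
    ("We are group A and we will peacefully protest injustice.", "group A", False),
    ("Group A members deserve equal treatment.", "group A", False),
    ("Thinking about the weather today.", "group B", False),
    ("Protest season is coming up again.", "group B", False),
]

report = audit_flag_rates(posts, BiasedAIModel())
print(report)  # group A gets a false-positive flag; group B gets none
```

A disparity like this doesn’t assign blame by itself, but it makes the harm measurable — and a harm you can measure is a harm someone can finally be held accountable for.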
Actionable Takeaways for Navigating Proto-Agency
So, what do we do about this? We can’t put the AI genie back in the bottle. But we can become more discerning, more critical, and more intentional in our interactions with these systems. Here are a few practical steps:
- Question the Recommendation: When an AI suggests content, products, or even turns of phrase, pause. Ask yourself: Is this genuinely what I want, or is the algorithm subtly guiding me? Actively seek out alternatives that aren’t algorithmically curated.
- Maintain Algorithmic Hygiene: Understand that every click, every like, every interaction is data. Be mindful of what you’re feeding the algorithms. Occasionally clear your browsing data, adjust your privacy settings, and explicitly tell systems when a recommendation is “not for you.”
- Cultivate Critical Engagement with Generative AI: If you’re using generative AI for creative or professional tasks, treat its output as a draft, not a final product. Your agency lies in the refinement, the personal touch, the critical evaluation. Don’t let it dilute your unique voice.
- Advocate for Transparency and Accountability: As consumers and citizens, we need to demand greater transparency from companies developing and deploying AI. We need clear explanations of how these systems work, what data they use, and who is accountable when things go wrong. Support initiatives pushing for ethical AI development and regulation.
- Reclaim Serendipity: Deliberately seek out experiences that are unmediated by algorithms. Browse a physical bookstore, explore a new neighborhood without GPS, or simply sit in silence and let your own thoughts wander without digital interruption. These acts help reinforce our own independent agency.
The rise of AI’s proto-agency isn’t a dystopian future; it’s our present reality. It’s a subtle, ongoing negotiation between human will and algorithmic influence. By understanding its mechanisms and actively asserting our own agency, we can ensure that these powerful tools serve humanity, rather than inadvertently shaping us into predictable, algorithmically optimized versions of ourselves. The thermostat might still judge my heating choices, but I’ll be damned if it tells me what to make for dinner.
Related Articles
- Mindful Productivity for Developers: Code Better by Slowing Down
- AI agent technical debt reduction
- My AI Reflects My Flaws: A Dad's Training Confession
🕒 Originally published: March 20, 2026