
My Agency Philosophy: Navigating AI Anxiety in March 2026

📖 9 min read · 1,792 words · Updated Mar 30, 2026

Hey everyone, Sam here from Agntzen.com. It’s March 2026, and I feel like we’re all living in a constant state of low-grade anxiety about… well, everything. But especially about AI. Everywhere you look, there’s another headline about some new breakthrough, some new capability, or some new existential threat. And honestly, it’s exhausting.

For us here, steeped in agent philosophy, the conversation around AI often feels like it’s missing something fundamental. We talk about the intelligence, the capabilities, the ethical dilemmas, but rarely do we really dig into the agency of it all. Or, more accurately, the lack thereof, and what that truly means for us as agents operating in an increasingly complex, algorithmically-mediated world.

Today, I want to talk about something specific, something I’ve been wrestling with in my own work and even in my personal projects: the subtle but profound shift from AI as a tool to AI as a proxy. It’s a distinction that sounds academic but has very real, very practical implications for how we build, interact with, and even perceive these systems.

Beyond the Tool: When AI Becomes Your Stand-In

We’ve all gotten comfortable with AI as a tool. Spellcheckers, recommendation engines, predictive text – these are all tools. They augment our capabilities, make things easier, faster, more efficient. My smart thermostat is a tool. My photo editor’s generative fill feature? A tool. We direct them; they perform a function. Simple.

But what happens when the AI isn’t just helping you write an email, but *writing* the email, or even *responding* to an email on your behalf? Or when an AI isn’t just suggesting code, but *generating* entire functions or even small applications based on a high-level prompt? This isn’t just a tool anymore. It’s acting *for* you. It’s a proxy.

I first started noticing this shift a few months back when I was trying to automate some of the more tedious aspects of managing my open-source project. I wanted an AI to handle basic bug reports, categorize them, and even suggest boilerplate responses to common issues. My initial thought was, “Great, a tool to save me time.”

But as I dug in, I realized I wasn’t just building a glorified filter. I was building a system that would *represent* me, or at least the project, to other contributors. It needed to understand context, maintain a certain tone, and even infer intent. It needed to act as a stand-in, a proxy for my own agency in those interactions. And that, my friends, is a whole different ballgame.

The Slippery Slope of Delegated Agency

The problem with proxies, especially digital ones, is that they tend to blur the lines of responsibility and intent. When you delegate an action to a proxy, you’re essentially saying, “Act as me in this context.” But does the proxy truly understand *your* intent, *your* values, *your* specific nuance?

Consider a customer service chatbot. In its simplest form, it’s a tool for answering FAQs. But many modern chatbots are designed to handle complex queries, process refunds, and even make judgment calls. They are acting as a proxy for the company’s customer service agent. When the chatbot makes a mistake, who is responsible? The bot? The developer? The company? The customer who interacted with it?

The philosophical implications here are vast. As agents, we understand that our actions have consequences, and we bear responsibility for them. When we delegate those actions to a system that lacks true understanding or consciousness, the chain of responsibility becomes incredibly difficult to trace. It’s like sending a robot to negotiate a peace treaty. The robot can execute the pre-programmed moves, but can it truly negotiate? Can it adapt to unforeseen circumstances with the same moral and ethical reasoning as a human agent?

Practicalities: Recognizing and Managing AI Proxies

So, how do we navigate this? How do we build and interact with AI systems in a way that acknowledges their proxy nature without abdicating our own agency? It comes down to a few core principles.

1. Explicitly Define the AI’s Scope of Agency

When you’re designing or deploying an AI, be crystal clear about what it can and cannot do. What decisions can it make autonomously? What actions can it initiate? Where are the hard boundaries?

For my bug report AI, I initially wanted it to auto-close “invalid” reports. But after some thought, I realized that delegating the judgment of “invalid” to an AI was too much. What if it misinterpreted something? What if a new contributor felt dismissed? Instead, I restricted its agency:

  • Can: Categorize reports, suggest tags, draft common responses.
  • Cannot: Close reports, directly assign severity (only suggest), make definitive statements about future features.
  • Requires Human Review: Any response involving apologies, complex technical explanations, or anything that could be perceived as a commitment.

This might seem like common sense, but it’s often overlooked in the rush to automate everything. The more sensitive the domain, the tighter the leash on the AI’s proxy functions should be.
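To make that scope enforceable rather than just aspirational, you can encode it directly in the layer that dispatches the AI's actions. Here's a minimal sketch of how my bug-report assistant's boundaries might look in code. The action names and the `Decision`/`check_action` structure are illustrative inventions for this post, not from any real framework:

```python
from enum import Enum, auto

class Decision(Enum):
    ALLOW = auto()          # AI may act autonomously
    SUGGEST_ONLY = auto()   # AI may propose; a human confirms
    HUMAN_REVIEW = auto()   # AI drafts; a human must approve before anything is sent
    FORBIDDEN = auto()      # AI must never take this action

# Explicit scope for the bug-report assistant (illustrative action names)
ACTION_POLICY = {
    "categorize_report": Decision.ALLOW,
    "suggest_tags": Decision.ALLOW,
    "draft_common_response": Decision.HUMAN_REVIEW,
    "assign_severity": Decision.SUGGEST_ONLY,
    "close_report": Decision.FORBIDDEN,
    "promise_feature": Decision.FORBIDDEN,
}

def check_action(action: str) -> Decision:
    # Unknown actions default to FORBIDDEN: fail closed, not open
    return ACTION_POLICY.get(action, Decision.FORBIDDEN)
```

The important design choice is the default: anything not explicitly allowed is forbidden, so a new capability requires a deliberate decision rather than slipping into the proxy's repertoire unnoticed.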

2. Implement Clear Human-in-the-Loop Protocols

If an AI is acting as a proxy, there must be a clear pathway for human oversight and intervention. This isn’t just about catching errors; it’s about maintaining agency and accountability.

Think about a sales assistant AI that drafts personalized outreach emails. If it sends them directly, it’s a proxy. If it drafts them and puts them in a human’s inbox for review and approval, it’s still a powerful tool, but the human retains the final agency. The line is subtle, but crucial.

Here’s a simple (and slightly generalized) Python example of how you might structure a human-in-the-loop system for an AI-drafted response:


def draft_response(prompt, context):
    # Imagine this calls an LLM API
    ai_draft = f"AI drafted response to '{prompt}': Based on context '{context}', here's a potential reply..."
    return ai_draft

def send_message_with_review(prompt, context, recipient):
    ai_suggestion = draft_response(prompt, context)

    print(f"\n--- AI Draft for {recipient} ---")
    print(ai_suggestion)
    print("-----------------------------------")

    user_input = input("Review and edit. Press Enter to send, 'e' to edit, or 'c' to cancel: ")

    if user_input.lower() == 'e':
        final_message = input("Enter your edited message: ")
    elif user_input.lower() == 'c':
        print("Message cancelled.")
        return
    else:
        final_message = ai_suggestion  # Send as-is if user just presses Enter

    print(f"\nSending to {recipient}:\n'{final_message}'")
    # Actually send the message here (e.g., via email API)

# Example usage
# send_message_with_review("Bug in login flow", "User reported they can't log in after password reset.", "[email protected]")

This might seem overly simplistic, but the core idea is there: the AI proposes, the human disposes. The human always has the final say, maintaining their agency over the communication.

3. Cultivate Transparency About AI Interaction

If an individual or an organization is interacting with an AI proxy, they should know it. This isn’t just about legal disclosures; it’s about managing expectations and maintaining trust.

Think about my open-source project again. If someone submits a bug report and an AI responds, it’s important that they understand they’re talking to an automated system. A simple “This response was drafted by an automated assistant to help categorize your issue. A human will review it shortly.” can make a world of difference.


def generate_ai_response(issue_description):
    # AI logic to generate a response would go here
    response_text = f"Thank you for your report regarding '{issue_description}'.\n\n"
    response_text += "Based on our automated analysis, this appears to be a [CATEGORY] issue.\n"
    response_text += "We've logged it as [ISSUE_ID] and a human reviewer will look into it soon.\n"
    response_text += "Please note: This initial response was drafted by an automated assistant."
    return response_text

# Example of an automated AI response
# print(generate_ai_response("My widget button is not clickable."))

This transparency is crucial for preserving the integrity of human-to-human relationships, even when mediated by AI. If a user thinks they’re talking to a human and then discovers it was a bot, it can feel like a betrayal, eroding trust.

4. Regularly Audit and Retrain AI Proxies

AI models are not static. They learn, they evolve, and sometimes, they drift. If an AI is acting as your proxy, you need to regularly audit its performance against your intended goals and values. Is it still representing you (or your organization) effectively? Is it making decisions that align with your ethical framework?

This means more than just looking at accuracy metrics. It means qualitative review: reading its generated content, observing its interactions, and soliciting feedback from those who interact with it. Just as you’d review an employee’s performance, you need to review your AI proxy’s performance.
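One cheap, concrete audit signal to pair with that qualitative review is the human override rate: how often reviewers edit or reject what the proxy produced. Here's a rough sketch of computing it from an interaction log. The log format and field names are invented for illustration:

```python
# Each record notes what the AI proposed and what a human ultimately did.
# This log format is invented for illustration.
interaction_log = [
    {"action": "draft_response", "human_decision": "approved"},
    {"action": "draft_response", "human_decision": "edited"},
    {"action": "assign_severity", "human_decision": "rejected"},
    {"action": "draft_response", "human_decision": "approved"},
]

def override_rate(log):
    # Fraction of AI proposals a human changed or rejected outright
    if not log:
        return 0.0
    overridden = sum(1 for r in log if r["human_decision"] in ("edited", "rejected"))
    return overridden / len(log)

print(f"Human override rate: {override_rate(interaction_log):.0%}")
```

A rising override rate over time won't tell you *what* went wrong, but it tells you *when* it's time to sit down and read the proxy's output closely.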

The Future of Agency in an AI-Proxied World

The distinction between AI as a tool and AI as a proxy is going to become increasingly important as these systems become more capable and ubiquitous. If we don’t consciously make this distinction, we risk unconsciously delegating our agency in ways we might later regret.

It’s not about fear-mongering or rejecting AI. Far from it. It’s about being intentional. It’s about understanding the philosophical underpinnings of agency and responsibility and applying them rigorously to the digital agents we create and interact with.

As agents ourselves, we have a responsibility to understand the tools we wield and the proxies we deploy. This isn’t just about efficiency; it’s about preserving our humanity, our autonomy, and our accountability in a world that’s changing faster than most of us can keep up with.

Actionable Takeaways:

  • For Developers & Builders: Before deploying any AI, ask yourself: Is this AI merely a tool, or is it acting as a proxy for a human agent? If it’s a proxy, clearly define its scope of agency, build in robust human-in-the-loop mechanisms, and ensure transparency for end-users.
  • For Users & Consumers: Be aware when you’re interacting with an AI proxy. Don’t assume you’re talking to a human unless explicitly stated. Look for cues and, if in doubt, ask for human intervention. Demand transparency from the systems you interact with.
  • For Organizations: Establish clear policies for AI proxy deployment. Who is ultimately responsible when an AI proxy makes a mistake? How will you audit its behavior and ensure it aligns with your company’s values and ethical guidelines? Don’t let the pursuit of efficiency overshadow the need for accountability.
  • For Everyone: Engage in the conversation. Understand the implications of delegating agency to non-human systems. Our collective future depends on how thoughtfully we approach this evolving relationship.

That’s all for today. Let me know your thoughts in the comments below. Have you encountered AI proxies in your daily life? How did you feel about it? Let’s keep this discussion going.

Written by Jake Chen

AI technology writer and researcher.
