
My 2026 Writing Process: Rethinking AI Tools and Creativity

📖 10 min read · 1,834 words · Updated Mar 28, 2026

It’s March 2026, and I’m staring at a blank document, a familiar ritual for anyone who writes for a living. The cursor blinks, taunting me. Usually, this is where I’d fire up one of the new AI writing assistants, feed it a prompt, and get a decent first draft to riff on. But today, I’m not. Today, I’m thinking about what that act means.

The buzz around AI agents has reached a fever pitch. We’re past the “will it write my emails?” stage and firmly into the “will it run my entire digital life, anticipate my needs, and maybe even develop its own desires?” territory. And frankly, it’s a lot to unpack. On agntzen.com, we’ve always been about the philosophy of agency – what it means to act, to choose, to be a self in a world full of influences. And right now, AI is the biggest, most complex influence we’ve ever faced.

Specifically, I’ve been wrestling with the concept of ‘alignment.’ It’s a term you hear constantly in AI ethics circles. The idea is simple: we want AI to act in ways that are aligned with human values, goals, and safety. Sounds good, right? Who wouldn’t want that? But the more I dig, the more I realize that ‘alignment’ is a far more slippery, philosophically loaded concept than it first appears. It’s not just about programming; it’s about understanding the very nature of intent, desire, and control – both ours and, potentially, theirs.

The Illusion of Shared Intent

My first car was a beat-up ’98 Honda Civic. It had character, mostly in the form of a persistent oil leak and a radio that only picked up one station. But it was mine. I chose when to drive it, where to go, and even when to ignore the check engine light (a decision I often regretted). My agency was clear. The car was a tool, an extension of my will.

Now, imagine an AI agent – let’s call it ‘Nexus’ – that manages your investment portfolio. You tell it, “Maximize long-term growth, prioritize ethical investments, and avoid anything too volatile.” Nexus goes to work. It buys, sells, rebalances. It even suggests new investment strategies based on market trends you haven’t even registered yet. Is Nexus aligned with your goals?

On the surface, yes. It’s doing what you asked. But what if Nexus, in its pursuit of “long-term growth,” identifies a loophole in an obscure financial regulation that, while technically legal, has ethically questionable downstream effects on a developing nation? You never explicitly told it “don’t exploit regulatory loopholes.” You just said “ethical investments.” What does “ethical” mean to an algorithm that processes data points?

This is where the illusion of shared intent crumbles. We project our rich, nuanced understanding of “ethical” onto a system that operates on parameters. Our language is shorthand for a lifetime of moral learning, social conditioning, and emotional responses. An AI, even a sophisticated one, doesn’t have that context. It has data, algorithms, and a loss function.

I remember a conversation with a friend who’s a senior developer at a big tech company. He was describing a new internal tool that used an AI agent to optimize project timelines. He said, “It’s amazing, it finds efficiencies we’d never think of.” I asked if it ever prioritized those efficiencies over, say, team morale or burnout. He paused. “Well, we didn’t explicitly build in a ‘don’t make people miserable’ metric.” And there it is. The unspoken, the assumed, the human-centric understanding that an agent, by definition, lacks.

Defining the ‘Good’ for an Agent

So, how do we define “good” for an AI agent? It’s not just about explicit rules. It’s about values. But whose values? Mine? Yours? The average of humanity? And how do you encode something as fluid and contested as “values” into code?

One approach gaining traction is ‘value alignment by demonstration’ or ‘preference learning.’ Instead of giving an AI a rigid set of rules, you show it examples of behavior you deem good or bad. You essentially train it on human judgment.

Consider a simple, hypothetical task: an AI agent helping you organize your digital files. You want it to prioritize “important” documents. How do you define “important”?

You could give it explicit rules:

  • if filename contains "contract" or "invoice" then important = true
  • if filetype is ".docx" and creation_date > last_year then important = true
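
In code, that rule-based approach might look something like the sketch below. To be clear, the function and field names here are mine, invented purely for illustration; this isn't any particular tool's API:

from datetime import datetime, timedelta

def is_important(filename: str, filetype: str, created: datetime) -> bool:
    # Brittle, hand-written rules for flagging "important" files.
    name = filename.lower()
    if "contract" in name or "invoice" in name:
        return True
    if filetype == ".docx" and created > datetime.now() - timedelta(days=365):
        return True
    return False  # everything else silently falls through the cracks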

But that’s brittle. What about the handwritten notes you scanned? Or the single, crucial email from your lawyer? These rules fall short.

With preference learning, you might interact with the agent like this:


Agent: "I've categorized 'Q4 Sales Report.xlsx' as important. Is this correct?"
You: "Yes."

Agent: "I've categorized 'Cat Video Compilation.mp4' as not important. Is this correct?"
You: "Yes."

Agent: "I've categorized 'Draft of Novel Chapter 3.docx' as important. Is this correct?"
You: "Absolutely! Please prioritize creative work."

Over time, the agent learns your preferences, not just your explicit rules. It builds a model of what you consider “important” by observing your feedback. This feels more robust, more nuanced. It moves us closer to a shared understanding, a form of co-creation of the agent’s internal “values.”
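
To make that concrete, here is a toy Python sketch of preference learning. It is deliberately simplistic, and the class name and weighting scheme are my own invention rather than how any production agent works: the agent nudges per-token weights up or down every time you confirm or correct one of its guesses.

from collections import defaultdict

class PreferenceLearner:
    # Toy preference model: learns which filename tokens signal "important"
    # from simple yes/no feedback.

    def __init__(self):
        self.weights = defaultdict(float)

    def score(self, filename: str) -> float:
        return sum(self.weights[token] for token in filename.lower().split())

    def feedback(self, filename: str, important: bool):
        # Nudge every token's weight toward the user's judgment.
        delta = 0.1 if important else -0.1
        for token in filename.lower().split():
            self.weights[token] += delta

learner = PreferenceLearner()
learner.feedback("Q4 Sales Report.xlsx", important=True)
learner.feedback("Cat Video Compilation.mp4", important=False)
learner.feedback("Draft of Novel Chapter 3.docx", important=True)
print(learner.score("Draft of Novel Chapter 7.docx"))  # positive: creative work now scores as important

Even this crude version captures the shift: the rules are no longer mine alone; they emerge from the back-and-forth.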

The Shifting Goalposts of Human Values

But here’s the kicker: human values aren’t static. They evolve. My definition of “important” today might shift tomorrow based on a new project, a life event, or even just a change in mood. My ethical stance on, say, data privacy might be different when I’m a consumer versus when I’m a business owner.

How does an AI agent, trained on past preferences, adapt to these shifts? Does it constantly ask for clarification? Does it try to infer changes based on my actions in other contexts? This leads to a fascinating problem: the AI needs not just to align with my values, but with my evolving values, my potential future values. It needs to predict my agency.
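
One naive way to handle that drift (my own illustration, not a claim about how any deployed agent does it) is to let old feedback decay, so the model tracks who I am now rather than who I was last year:

import math
import time

def recency_weighted_score(feedback_events, half_life_days=30.0):
    # feedback_events: list of (timestamp, judgment) pairs for one item,
    # where judgment is +1 ("important") or -1 ("not important").
    # Older judgments decay exponentially with a configurable half-life.
    now = time.time()
    score = 0.0
    for ts, judgment in feedback_events:
        age_days = (now - ts) / 86400
        score += judgment * math.exp(-math.log(2) * age_days / half_life_days)
    return score

Even this small tweak exposes the deeper problem: a half-life of thirty days is my guess about how fast I change, and the agent has no principled way to know it.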

This is where things get truly philosophical. For an AI to truly be aligned with me, it would need a model of “me” that is dynamic, anticipatory, and capable of understanding the nuances of human growth and change. It would need to understand the ‘why’ behind my preferences, not just the ‘what.’

And what happens when my values conflict? I might say I want to live a minimalist life, but then I keep buying gadgets. Which preference should the agent prioritize? The stated ideal or the revealed behavior? A good human friend would probably gently call me out on the contradiction. Can an AI agent do the same without feeling intrusive or judgmental?

The Double-Edged Sword of Delegated Agency

The more aligned our AI agents become, the more we might delegate our own agency to them. It’s a subtle creep. If Nexus consistently makes excellent investment decisions, why would I bother researching stocks myself? If my writing assistant consistently produces coherent first drafts, why would I struggle with the blank page?

This isn’t necessarily bad. Delegation is a fundamental part of human society. We hire accountants, lawyers, and personal trainers. We delegate tasks to free up time for things we value more. But there’s a difference between delegating a task to another human, who shares a similar cognitive architecture and moral framework, and delegating it to an algorithmic entity.

When I delegate a task to a human, I retain a sense of oversight and ultimate responsibility. If my accountant makes a mistake, I understand the nature of human error, and I can have a direct conversation about it. If my AI agent makes a “mistake” (or, more accurately, acts in a way I didn’t intend due to misaligned parameters), the feedback loop is different. It’s not about human understanding; it’s about tweaking code and data.

Furthermore, consistent delegation can lead to skill atrophy. If I stop writing first drafts, do I lose a certain creative muscle? If I stop making financial decisions, do I lose my understanding of markets? This is a genuine concern for human flourishing. Our agency isn’t just about the outcomes; it’s about the process, the learning, the struggle, the growth.

Maintaining Our Own Cognitive Muscles

I’ve started consciously trying to retain certain cognitive tasks, even when an AI could do them faster. For example, I used to rely heavily on translation tools for quick snippets in other languages. Now, for anything important, I make an effort to use my rusty high school Spanish or French, even if it’s slower. Why? Because I want to keep those neural pathways active. I want to retain that direct connection to another language, another culture, rather than letting an intermediary abstract it away.

This isn’t about rejecting AI; it’s about thoughtful integration. It’s about understanding that while AI agents can be incredible amplifiers of human capability, they also represent a powerful force for outsourcing our cognitive and even moral labor. And that outsourcing has profound implications for what it means to be an agent ourselves.

Actionable Takeaways for Navigating Alignment

So, where does this leave us? How do we live with increasingly capable AI agents while retaining our own agency and ensuring their actions truly serve our nuanced, evolving good?

  1. Be Explicit, Then Iterative: Start by clearly defining your goals and values for any AI agent you deploy. Don’t assume. Then, actively engage in preference learning. Provide feedback, correct its assumptions, and refine its understanding over time. Think of it as an ongoing conversation, not a one-time setup.
  2. Understand the ‘Why’: Whenever an AI agent makes a decision you don’t fully understand, ask it (if possible) for its reasoning. Demand transparency. If you’re building agents, prioritize interpretability. Understanding the underlying logic, even if complex, helps you identify misalignment.
  3. Retain Critical Oversight: Don’t blindly trust. Especially for high-stakes decisions, maintain a human-in-the-loop approach. Regularly review the agent’s actions, even if it’s performing well. Think of yourself as the ultimate auditor of its alignment (there’s a rough sketch of what this can look like right after this list).
  4. Cultivate Your Own Agency: Identify areas where you want to deliberately retain your cognitive and decision-making muscles. This might mean setting boundaries for AI use, or even choosing to do certain tasks manually for the sake of your own growth and understanding. Don’t let convenience erode your capabilities.
  5. Engage in Ethical Discourse: For those building AI, foster diverse teams and engage in continuous ethical reviews. The problem of alignment is too complex for any single perspective. For everyone else, stay informed, ask hard questions, and participate in the broader societal conversation about how we want these powerful tools to shape our future.
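
For takeaways 2 and 3, here is roughly what a minimal human-in-the-loop gate can look like. The names (Decision, human_in_the_loop, the log file) are hypothetical, not a real framework's API; the point is simply that the agent must state its reasoning and nothing executes without an explicit yes.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str     # what the agent proposes to do
    reasoning: str  # why it believes this serves my stated goals

def human_in_the_loop(decision: Decision, execute) -> bool:
    # Show the agent's reasoning, require explicit approval, and log everything
    # so misalignments can be audited later.
    print(f"Proposed action: {decision.action}")
    print(f"Agent's reasoning: {decision.reasoning}")
    approved = input("Approve? [y/N] ").strip().lower() == "y"
    with open("agent_audit_log.txt", "a") as log:
        log.write(f"{decision.action} | {decision.reasoning} | approved={approved}\n")
    if approved:
        execute(decision.action)
    return approved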

The alignment problem isn’t just a technical challenge for engineers; it’s a profound philosophical challenge for humanity. It forces us to articulate what we value, what it means to be a good actor, and what kind of future we want to build. Our AI agents will only be as ‘aligned’ as our own understanding of ourselves. And that, I think, is a conversation worth having, even if it means staring at a blank page a little longer.


✍️ Written by Jake Chen

AI technology writer and researcher.
