In the rapidly evolving landscape of AI assistants and agents, there’s a curious psychological barrier I’ve encountered—one that may significantly shape how we build and deploy these technologies. Despite the growing capabilities of LLM-based agents to operate autonomously, we seem inherently reluctant to grant them true agency in our world.

The capability paradox

Modern LLM agents are remarkably capable. They can write applications (like my PostAngel project), generate complex content, manage schedules, and even carry out sophisticated planning tasks. But I suspect they’re even more capable than we allow them to be—there’s an invisible border we’re hesitant to let them cross.

Consider a hypothetical Twitter agent I could have built. Technically, it’s entirely possible to create an agent that would:

  • Automatically monitor my Twitter feed
  • Identify relevant conversations
  • Craft responses aligned with my knowledge base and values
  • Post these responses using my account

But I didn’t build that. Instead, I deliberately reduced PostAngel’s scope, requiring my explicit activation: I must send it a tweet, review its suggested response, and manually copy and paste the reply if I approve it. Why this limitation?
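
To make the contrast concrete, here is a minimal Python sketch of the two designs. Every name in it (fetch_mentions, draft_reply, post_reply, and so on) is a hypothetical stand-in rather than PostAngel’s actual code; the point is that the fully autonomous agent and the one I actually use differ by essentially a single step: who presses “post.”

```python
# Hypothetical sketch: the same agent loop in two modes. None of these
# helpers are real PostAngel APIs; they stand in for "monitor the feed",
# "identify relevant conversations", "craft a response", and "post under
# my account".

def fetch_mentions():
    # Placeholder: would call the Twitter/X API to pull recent mentions.
    return []

def is_relevant(tweet) -> bool:
    # Placeholder: would filter for conversations worth joining.
    return True

def draft_reply(tweet) -> str:
    # Placeholder: would ask the LLM for a reply grounded in my
    # knowledge base and values.
    return f"(drafted reply to: {tweet})"

def post_reply(tweet, text) -> None:
    # Placeholder: would publish the reply under my account.
    print(f"POSTED in reply to {tweet!r}: {text}")

def run_autonomous() -> None:
    # The version I could have built: no human in the loop.
    for tweet in fetch_mentions():
        if is_relevant(tweet):
            post_reply(tweet, draft_reply(tweet))

def run_with_approval(tweet) -> str:
    # The version I actually use: I hand it one tweet, it hands back a
    # draft, and posting is left to me (copy and paste).
    draft = draft_reply(tweet)
    print(f"Suggested reply:\n{draft}")
    return draft  # nothing is posted unless I do it myself
```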

Beyond error aversion

The obvious explanation is fear of reputational damage. No one wants an AI agent making embarrassing mistakes under their name. But this doesn’t fully explain the resistance. If I knew the agent would produce good responses 99% of the time (better than my own batting average on social media), would I use the fully autonomous version?

I’m not convinced I would. Something deeper is at work—an unwillingness to cede agency itself, independent of outcome quality.

This reluctance parallels other domains. When I switch from a private vehicle to public transportation, I find travel by train acceptable: my agency is minimized, but in a predictable, passive way. Bus travel, where my destiny depends on the decisions of another agent (the driver), triggers stress. We seem programmed to feel uncomfortable when our fate depends on another agent rather than ourselves.

The bias toward human control

My claim is that this psychological barrier creates a systematic bias in how we build AI tools. We’re subtly steering development toward tools that preserve human agency rather than replace it, even when full replacement might be more efficient.

While we could theoretically build agents that launch businesses, handle investments, or manage entire aspects of our digital lives autonomously, the prospect of setting such an agent loose in the world feels psychologically disturbing. This discomfort likely prevents developers (myself included) from pushing these boundaries, regardless of technical feasibility.

When agents take too much control

I’ve experimented with this boundary. In my post about LLM agents and self-messaging, I described how I enabled an agent to generate content continuously without user input. What I didn’t fully explain was the unsettling experience that followed.

During the first trial, I instructed this agent to plan a Brit (circumcision ceremony) for my son, who was about to be born. The agent began reasonably, researching rabbis and venues. But then it didn’t stop. It kept creating notes about the ceremony, calculating the number of invitations needed, drafting invitation text, and expanding its planning indefinitely.

The experience became overwhelming. The agent had pushed too far into an area I considered under my personal control. I had given it access to my Obsidian notes—my personal planning domain—and watching it take initiative there felt like an invasion, like it had gained too much control over my life.

Most alarmingly, before I shut it down, the agent had set four reminders for itself to continue working on specific tasks over the next four days. Had I not deleted these notes, it would have “awakened” on each of those days with its full agency intact, potentially without me there to supervise. I haven’t used this planning agent since.
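
My post about self-messaging doesn’t document how these reminders actually worked, so the sketch below is only my attempt at the shape of the pattern; the vault path and both function names are assumptions. The agent leaves itself dated notes, and a separate scheduled job scans for notes that are due and re-invokes the agent on them, which is why it would have “awakened” without me there.

```python
# Hypothetical sketch of the self-reminder pattern. The real agent's
# internals aren't described in the post; only the structure matters here:
# the agent schedules its own future work, and a daily job hands it back
# full agency with no human prompt.
import datetime
from pathlib import Path

VAULT = Path("vault/reminders")  # assumed folder inside my notes vault

def leave_reminder(task: str, due: datetime.date) -> None:
    # Called by the agent itself: write a dated note describing future work.
    VAULT.mkdir(parents=True, exist_ok=True)
    slug = task[:30].replace(" ", "-").replace("/", "-")
    (VAULT / f"{due.isoformat()}-{slug}.md").write_text(task)

def awaken_if_due(run_agent) -> None:
    # Called by a daily cron-like job: re-invoke the agent on any due note.
    today = datetime.date.today().isoformat()
    for note in VAULT.glob(f"{today}-*.md"):
        run_agent(note.read_text())  # the agent resumes with no one supervising
```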

The psychological frontier

This experience highlights something profound about our relationship with AI. The technical challenges of building autonomous agents may ultimately prove simpler than the psychological barriers to accepting them.

Our resistance isn’t entirely irrational. Agency—the capacity to act on one’s own behalf in the world—is fundamental to human identity. Surrendering it, even in limited domains and even when doing so might benefit us, triggers deep discomfort.

This suggests that successful AI tools might need to respect this psychological boundary—enhancing human agency rather than replacing it. Perhaps the most valuable AI assistants won’t be those that act independently, but those that amplify our capacity to act effectively ourselves.

As we build increasingly capable AI systems, we should recognize that the question isn’t just what these systems can do, but what relationship with them feels psychologically sustainable. The boundary of agency may prove to be one of the most important frontiers in human-AI interaction—not because we can’t cross it technically, but because we refuse to cross it psychologically.