How AI Agents Are Fundamentally Reshaping Collaboration
When does a tool cross the line to become something more?
The digital workplace has always evolved, but never quite like this. While headlines focus on AI replacing jobs, something more profound is happening beneath the surface: AI agents are transforming how we collaborate at a fundamental level.
This isn't just another productivity boost. It's a complete restructuring of how work flows between individuals, how software is designed, and how organizational hierarchies function. The implications extend far beyond efficiency gains, and they raise critical questions about our relationship with our tools, which will never be the same.
Designed for AI use, not humans
Today's software is built for human interaction; it's no wonder that Anthropic's Model Context Protocol (MCP) has been such a huge hit. It is a bridge from the new to the old, helping AI agents understand and interact with existing tools. But with the proliferation of AI, it's easy to imagine a future where the scales tip: eventually, there will be more AI agents interacting with software tools than humans themselves.
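To make that concrete, here is a minimal sketch of an MCP server built with the official Python SDK's FastMCP helper. The server name, tool, and data below are hypothetical, but the pattern of wrapping an existing capability so an agent can discover and call it is exactly what MCP standardizes:

```python
# pip install mcp
from mcp.server.fastmcp import FastMCP

# A hypothetical MCP server that wraps an existing internal tool
# so any MCP-capable AI agent can discover and call it.
mcp = FastMCP("legacy-crm-bridge")

@mcp.tool()
def search_customers(query: str, limit: int = 10) -> list[dict]:
    """Search the CRM for customers matching a free-text query."""
    # In a real bridge, this would call the legacy system's API.
    fake_db = [{"name": "Ada Lovelace", "plan": "enterprise"}]
    return [row for row in fake_db if query.lower() in row["name"].lower()][:limit]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to a connected agent
```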
In that world, we will design software from the ground up with Large Language Models (LLMs) as the primary users. This is not hard to imagine, as a product's adoption will depend on how effectively AI agents can utilize it. We will have product usability studies run entirely in telemetry logs rather than human observation rooms. We will create product features based on the capacity of AI workloads, not human ones. We will replace app onboarding flows with LLM training data and system prompts. Is the customer always right, even when it’s an algorithm?
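As a hedged illustration of agent-first design (every name and field below is invented), a product might ship not with an onboarding tutorial but with a machine-readable tool definition, in the style of the JSON tool schemas agents already consume today. Notice where the "onboarding" lives: the edge cases and etiquette are written into the description for the model, not the human.

```python
# Hypothetical agent-first "onboarding": instead of tooltips and walkthroughs,
# the product ships a structured description that an LLM reads directly.
CREATE_CONTACT_TOOL = {
    "name": "create_contact",  # hypothetical tool exposed by the product
    "description": (
        "Create a CRM contact. Call this whenever the user mentions a new "
        "lead. Duplicate emails are rejected; ask the user before retrying."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "email": {"type": "string"},
            "company": {"type": "string"},
        },
        "required": ["name", "email"],
    },
}
```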
Human software won't disappear; much of it will simply evolve into extensible tooling for AI agents. Rather than designing software where human collaboration happens directly, we will create software that enables AI to help humans collaborate better.
Why click, when I can chat?
Early ChatGPT adopters experienced a profound shift from traditional 'click and scroll' interfaces to AI chat. Gone are the days when research meant hours spent across Google Search, blogs, PDFs, news articles, and subreddits; we now digest AI-synthesized research in minutes, all through the portal of an AI chat interface. This transformation extends to collaboration tools, with Microsoft's Copilot being a key example of AI integrating directly into familiar work interfaces. As multimodal capabilities advance, it's not a stretch to imagine a world where our first point of contact is always an AI agent. Why check your email inbox, Slack channels, and Asana to-dos when you can have Claude summarize your team's most pressing action items?
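As a minimal sketch of that single point of contact, using Anthropic's Python SDK: the inbox dump below is a hypothetical stand-in, since in practice the messages would be pulled from the email, Slack, and Asana APIs.

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical stand-in for messages pulled from email, Slack, and Asana.
inbox_dump = """
Email: Legal needs the vendor contract reviewed by Friday.
Slack #eng: today's deploy is blocked on a failing integration test.
Asana: Q3 roadmap draft assigned to you, due next week.
"""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute whichever current model you use
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": f"Summarize my most pressing action items:\n{inbox_dump}",
    }],
)
print(response.content[0].text)
```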
There is a not-so-distant future where almost all of our human-to-human collaborations will be supplemented by AI. We won't be clicking our way to answers; we'll simply ask for them.
Agent-in-the-loop
While the concept of 'human-in-the-loop' is well understood as it pertains to directing and controlling AI workflows, we've mostly ignored the opposite angle: how humans are directed by AI when there is an 'agent-in-the-loop' in our workflows. Most obviously, we think of AI hallucinations that send people in the wrong direction based on invented facts. But there are subtler, further-reaching ways that AI will change the very nature of how we collaborate.
Firstly, let's look at the 'Domain Expert': a person whose experience and expertise lie within the field that AI is assisting them in. Think of a developer collaborating with Windsurf's Cascade to code up a customer database, or a designer working with Lovable to prototype a website. In this human-to-agent collaboration, the Domain Expert is the bread in a sandwich: they set the scope, the AI does all the work, and they review it at the end. While this new style of collaboration has proven productive, it has also contributed to skill atrophy en masse. There's a crucial difference between a tool that enhances your skills and a collaborator that takes away your opportunity to grow.
Secondly, let's examine the 'Vibe Worker': a person whose experience and expertise lie in a different field than the one in which the AI is assisting them. Think of an HR manager working with Vercel's v0 to build a team-onboarding app, or a waiter asking Midjourney to design the food menu. In most cases, this unlocks new possibilities for people to get work done without any expertise. In line with the Pareto Principle, AI takes a Vibe Worker from zero to eighty in seconds; they just have to get the remaining twenty percent across the finish line.
However, in many cases, working in a field without expertise exposes a person to serious risk. An HR manager's app could leak sensitive employee information. A waiter's menu could mislabel ingredients and cause a severe allergic reaction. The future likely involves a hybrid model of collaboration among Domain Experts, Vibe Workers, and AI agents, one that maximizes the gains while minimizing the risks.
Agents in your org chart
The ratio of humans to AI agents in organizations is trending toward 1:1 at minimum, with strong indicators pointing toward multiple agents per human employee. On top of this, the benefits of specialized sub-agents, which perform lower-level tasks and filter their results upward, begin to suggest a new paradigm of company hierarchy: we will start to see AI agents structured as teams with managerial oversight.
A whole new class of software will need to be created to observe, optimize, control, and secure teams of AI agents across an organization. Companies will likely manage AI agents using human-like frameworks: performance reviews, benchmarking, training and development programs, security, compliance, and more. If conversing with an AI agent feels strange today, imagine having to terminate one tomorrow. Agent Smith, is that you?
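No such management layer exists yet, so the sketch below is purely hypothetical: a minimal record of how an organization might track an agent the way it tracks an employee, complete with a manager, benchmark scores, and a review that can end in termination.

```python
from dataclasses import dataclass, field

# Purely hypothetical sketch of agent-management software; every class,
# field, and threshold here is invented for illustration.
@dataclass
class AgentRecord:
    name: str
    role: str
    manager: str  # could be a human, or another agent
    benchmark_scores: dict[str, float] = field(default_factory=dict)

    def quarterly_review(self, passing_score: float = 0.8) -> str:
        avg = sum(self.benchmark_scores.values()) / max(len(self.benchmark_scores), 1)
        if avg >= passing_score:
            return f"{self.name}: meets expectations ({avg:.2f})"
        return f"{self.name}: flagged for retraining or termination ({avg:.2f})"

support_bot = AgentRecord(
    name="support-triage-v2",
    role="Tier-1 ticket triage",
    manager="ops-supervisor-agent",  # managerial oversight, agents over agents
    benchmark_scores={"accuracy": 0.91, "escalation_precision": 0.74},
)
print(support_bot.quarterly_review())  # -> support-triage-v2: meets expectations (0.82)
```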
By default, we have carried our relationship with traditional software over to how we treat AI, but this will inevitably change because the two are fundamentally different. Software operates under strict rules and leads to deterministic outcomes; LLMs operate with human-like capabilities and lead to probabilistic ones. We don't mistrust software outputs the way we mistrust AI outputs. We don't expect emergent properties from tools the way we do from agents.
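The contrast is easy to see in code. The toy distribution below is invented, but the mechanics mirror how an LLM samples its next token:

```python
import random

def legacy_software(x: int) -> int:
    # Traditional software: identical inputs always yield identical outputs.
    return x * 2

def llm_next_token(distribution: dict[str, float]) -> str:
    # An LLM samples from a learned probability distribution over tokens,
    # so the same prompt can yield different outputs on different runs.
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(legacy_software(21))                                    # always 42
print(llm_next_token({"yes": 0.6, "no": 0.3, "maybe": 0.1}))  # varies run to run
```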
LLMs are not traditional software, nor are they human; they are something in between.
Once AI agents become effective, reliable, secure, and efficient, it's likely that they will no longer be viewed or treated as tools, but as collaborators. And that's a future we're excited to build towards.