AI agents are coming to your operations. The question is where to draw the line.
79% of companies now use agentic AI. Fewer than one in 20 capture real value. The difference is governance.
The agent opportunity is real
Agentic AI represents a genuine shift. AI can now act on decisions within defined boundaries, rather than just providing information.
A chatbot answers questions. An agent executes workflows. It reads inputs, makes decisions, and produces outputs without waiting for a human at each step.
BCG research estimates that AI agents account for 17% of total AI value today, rising to 29% by 2028. PwC reports that 79% of companies are already using agentic AI. The investment is happening. The question is whether the value follows.
For professional services firms, agents offer something specific: operational leverage without proportional headcount growth. An agent that handles meeting preparation, client research, or proposal drafting removes the work about work that keeps your best people from client-facing activity.
When agents work and when simpler automation is enough
Not every workflow needs an agent. Many need a well-designed automation. The distinction matters because agents introduce complexity. An automation follows a fixed path: if this, then that. An agent makes decisions within boundaries. That flexibility creates value when the workflow requires judgment. It creates risk when the workflow does not.
Agents work well when:
- The workflow involves varied inputs requiring interpretation
- Decisions need to be made within defined boundaries
- The task benefits from context carried across steps
- Quality improves with feedback over time
Simpler automation is enough when:
- The workflow follows a predictable, repeatable path
- Inputs and outputs are structured and consistent
- No interpretation is required
- Speed and reliability matter more than flexibility
Start with the simpler option. Graduate to agents when you have evidence that the workflow needs adaptive behaviour, not when a vendor tells you agents are the future.
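The distinction can be made concrete in a few lines. This is a minimal sketch, not a prescription: the function names are illustrative, and `classify` stands in for whatever judgment step (for example, an LLM call) the agent uses.

```python
def automation_route(invoice: dict) -> str:
    """Fixed-path automation: the same input always takes the same branch."""
    if invoice["amount"] > 10_000:
        return "route_to_manager"
    return "auto_approve"

def agent_route(request: str, classify) -> str:
    """An agent interprets a varied input, then acts within a defined boundary."""
    category = classify(request)            # the judgment step, e.g. an LLM call
    allowed = {"research", "drafting", "scheduling"}
    if category not in allowed:             # the boundary: unfamiliar work escalates
        return "escalate_to_human"
    return f"handle:{category}"
```

The automation is trivially testable and never surprises you; the agent earns its complexity only when inputs genuinely vary.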
Three questions before you deploy
Before any agent goes live in your operations, answer three questions.
1. Where are the handoffs?
Every point where an agent passes work to another agent or to a human is a point where context gets lost. Map these deliberately. Design explicit protocols for what information transfers and what gets verified at each boundary.
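An explicit handoff protocol can be as simple as a shared data contract plus a verification step at the boundary. A sketch, with hypothetical field names; the required keys would come from your own workflow map:

```python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    task: str                     # what the next step is being asked to do
    context: dict                 # the facts the next step depends on
    source: str                   # who produced this work
    verified: list = field(default_factory=list)  # checks passed at the boundary

def verify_handoff(h: Handoff, required_keys: list) -> Handoff:
    """Reject a handoff missing required context, rather than letting
    the gap surface downstream where it is harder to trace."""
    missing = [k for k in required_keys if k not in h.context]
    if missing:
        raise ValueError(f"Handoff from {h.source} missing context: {missing}")
    h.verified.append("context-complete")
    return h
```

The point is not the schema but the habit: every boundary states what must transfer and fails loudly when it does not.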
2. Where are the human checkpoints?
Not everywhere. At decision boundaries where the cost of error is high. Client-facing communications. Pricing decisions. Compliance judgments.
Let the agent prepare the dossier. Keep the decision human.
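That split can be encoded directly in the orchestration layer. A minimal sketch, assuming three illustrative high-cost categories; your own boundaries would differ:

```python
# Assumed decision boundaries where the cost of error is high.
HIGH_COST_BOUNDARIES = {"client_communication", "pricing", "compliance"}

def route_decision(action_type: str, dossier: dict) -> dict:
    """The agent prepares the dossier; high-cost decisions wait for a human."""
    if action_type in HIGH_COST_BOUNDARIES:
        return {"status": "awaiting_human", "dossier": dossier}
    return {"status": "auto_approved", "dossier": dossier}
```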
3. What happens when it is wrong?
Not if. When. Every agent will produce bad output at some point. The question is whether your system catches it before it reaches a client. Design for recovery, not just success.
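Designing for recovery means wrapping every agent step in a validate-retry-escalate loop, so bad output is caught inside the system rather than by the client. A sketch under loose assumptions: `agent_step` and `validate` are placeholders for your own calls.

```python
def run_with_recovery(agent_step, validate, max_retries=2):
    """Run an agent step, check its output, and escalate rather than
    ship a failure. agent_step() produces output; validate(output)
    returns (ok, reason)."""
    last_reason = None
    for _ in range(max_retries + 1):
        output = agent_step()
        ok, last_reason = validate(output)
        if ok:
            return output
    raise RuntimeError(f"Escalating to a human: {last_reason}")
```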
The firms getting this right treat agent orchestration as an access control problem. The agent gets permissions proportional to the reversibility of its actions. Low-impact, reversible tasks run autonomously. Irreversible decisions stay human.
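Treated as access control, the policy reduces to a small lookup: reversibility in, permission level out. The tiers and examples below are assumptions for illustration, not a standard taxonomy:

```python
from enum import Enum

class Reversibility(Enum):
    REVERSIBLE = "reversible"       # e.g. drafting a document
    COSTLY = "costly_to_undo"       # e.g. posting an internal update
    IRREVERSIBLE = "irreversible"   # e.g. client email, payment, filing

def permission_for(rev: Reversibility) -> str:
    """Permissions proportional to reversibility: the harder an action
    is to undo, the more human involvement it requires."""
    return {
        Reversibility.REVERSIBLE: "autonomous",
        Reversibility.COSTLY: "autonomous_with_audit_log",
        Reversibility.IRREVERSIBLE: "human_required",
    }[rev]
```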
The human authority line
The strategic question for operations leaders is not “what can we automate?” It is “where must human judgment remain non-delegable?”
This reframe changes how you deploy AI. Instead of starting with capability and working backwards, you start with governance and work forwards.
Six principles for drawing the line:
1. Codify the human element. Identify the capabilities your organisation cannot delegate: professional judgment, client relationships, ethical decisions.
2. Audit for identity risk. Test whether efficiency narratives erode the professional judgment that makes your firm valuable.
3. Set domain-specific boundaries. Different practice areas, client types, and risk profiles need different lines; a one-size-fits-all automation policy fails.
4. Elevate bridge builders. The people who translate between AI capability and professional practice are your most valuable asset during adoption.
5. Make protection visible. Transparent boundaries around data, model training, and decision authority build client trust.
6. Measure empowerment, not adoption. Track whether people are making better decisions, not whether they are using the tools.
Every automation decision is a governance decision. Governance is about trust.
The architecture beneath your agents
BCG research on AI scaling reveals a pattern. Organisations that capture real value follow the 10-20-70 principle: 10% of effort on algorithms, 20% on data and technology, 70% on people, processes, and cultural change.
Most organisations invert this. They spend on technology and wonder why adoption stalls.
The firms seeing real impact direct more than half of their AI investment to agents deployed end-to-end across workstreams. They are twice as likely as followers to deploy agents across full processes rather than in isolated use cases.
What separates these firms is the operating architecture beneath the technology. Agents need redesigned workflows, clear data governance, defined roles, and explicit incentive structures.
You already have the hard part. Domain expertise. Client relationships. Controlled environments. You are not building an open network. You are adding intelligence to a system you understand.
Start with one workflow
Pick one workflow where an agent could augment your team's capacity. Diagnose it. Define the boundaries. Then build.