AI agents are at the peak of hype. Like most pragmatists, I believe we’re still quite a way from making AI agents a widespread reality. The technology is still largely theoretical, and there are many preconditions to deploying successful Agentic AI. It's hard to see most enterprises deploying them at scale on an operational level. After all, most companies and nonprofits are still struggling to implement plain old prompt-centric generative AI.
In the words of Forrester, “[Agentic AI]’s full potential and limitations aren’t yet fully clear. Many of the core design concepts are only just making the leap from academic papers to real-world implementations.”
In short, most agentic AI promises are not ready for widespread deployment across the business world. I don’t want to drag this into a conversation about technological shortfalls, but I will refer interested parties to Cobus Greyling’s analyses of AI agent needs around tools, platforms, security, and inspectability.
That being said, agentic AI offers a better suite of business use cases for LLM technology than pure content creation or answer engines, simply because it 1) is grounded in enterprise-specific data and 2) executes a process to achieve a desired outcome. AI agents provide greater opportunities to create operational efficiencies, save costs, relieve workers of rote, painful tasks, and empower humans to engage in better, more rewarding work.
The remainder of this article explains what an AI agent is, explores potential agentic AI opportunities, and discusses building the organizational capacity to implement agentic AI technologies.
What Is an Agent? (A Non-Technical Definition)
Salesforce’s Agentforce has led the AI hype wave.
An AI agent performs work tasks independently on behalf of employees. AI companies call them digital assistants that can take actions without constant human supervision. Personally, I think of them more as “If This Then That” decision-tree automations with a bit of randomness mixed into the outcomes. In that vein, some are calling agents Robotic Process Automation (RPA) 2.0.
For example, if a customer asks a chatbot a question, the LLM engine will use an agentic framework to research the customer service manual and offer potential resolutions based on it. Next, the agent will ask for the customer’s permission to assist in resolving the matter. If it receives a positive response, it will guide the customer through the steps.
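To make that flow concrete, here is a minimal Python sketch of the loop just described. Every helper in it (search_manual, ask_llm, ask_customer) is a hypothetical stand-in, not any vendor’s actual API.

```python
# A minimal sketch of the customer-service flow described above.
# search_manual, ask_llm, and ask_customer are hypothetical stand-ins
# for a real retrieval layer, LLM client, and chat channel.

def search_manual(question: str) -> list[str]:
    """Stand-in for retrieval over the customer service manual."""
    return ["Power-cycle the modem", "Check account status", "Escalate to support tier 2"]

def ask_llm(prompt: str) -> str:
    """Stand-in for an LLM call that drafts a customer-facing reply."""
    return f"Based on the manual, here are some options: {prompt}."

def ask_customer(message: str) -> bool:
    """Stand-in for the chat channel; returns True if the customer agrees."""
    print(message)
    return True  # assume a positive response for this sketch

def handle_inquiry(question: str) -> None:
    options = search_manual(question)              # 1. research the manual
    proposal = ask_llm("; ".join(options))         # 2. draft potential resolutions
    if ask_customer(proposal + " May I walk you through one?"):  # 3. ask permission
        for step in options:                       # 4. guide the customer step by step
            print(f"Next step: {step}")

handle_inquiry("My internet connection keeps dropping.")
```

The permission step is the human-in-the-loop checkpoint; the agent does not act on the manual’s guidance until the customer agrees.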
The specific response and level of detail will likely be somewhat random. If the source data – e.g., the customer service manual – is not well curated in a RAG database, the agent may provide all sorts of odd answers, and it will take backward analysis to determine the LLM’s logic in presenting certain options. Even if the data is well chunked, metatagged, and ranked by issue, there may still be an element of randomness that requires reverse mapping to understand the LLM’s reasoning.
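As a toy illustration of why that curation matters, the sketch below stores manual chunks with issue tags and a curated rank so retrieval can filter and order them before anything reaches the LLM. The schema and content are invented for the example, not a real vector-database layout.

```python
# Toy example: manual chunks stored with metadata (issue tag, curated rank)
# so retrieval can filter and order them before anything reaches the LLM.
# The schema and content are invented for illustration.

chunks = [
    {"text": "Power-cycle the modem and wait 60 seconds.", "issue": "connectivity", "rank": 1},
    {"text": "Refunds require a ticket approved by billing.", "issue": "billing", "rank": 1},
    {"text": "Firmware updates ship quarterly.", "issue": "connectivity", "rank": 3},
]

def retrieve(issue: str, top_k: int = 2) -> list[str]:
    """Filter by issue tag, then order by the curated rank."""
    matches = [c for c in chunks if c["issue"] == issue]
    matches.sort(key=lambda c: c["rank"])
    return [c["text"] for c in matches[:top_k]]

print(retrieve("connectivity"))
# Without the issue tag and rank, the agent would have to guess which
# passages matter, which is one source of the "odd answers" noted above.
```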
A narrow, well-trained agent offers value for enterprises and nonprofits, including:
Completing specific tasks autonomously
Executing basic decisions based on their programming, observations of their environment (contextual and training data), and permissions to take initiative
Applying goal-based logic to select the best steps to accomplish tasks
Learning from experience to improve performance
In practical business terms, with proper human oversight, AI agents can handle routine customer service inquiries; monitor your systems and alert you to problems before they become critical; assist with research by gathering and organizing information; and generally automate repetitive tasks in operational workflows.
Document Potential Agent Opportunities
Perhaps “opportunities” is a better way to phrase “use cases,” which remains a murky term for some business folks (yes, I have been fighting the use-case education battle for three years and counting). Regardless of nomenclature, these are not dramatic tasks. Rather, they are small opportunities to address repetitive work.
The above UiPath graphic illustrates their AI agent identification categories in the discovery phase. I like the approach as it addresses several critical categories of potential agents:
Process Mining: Business processes with bottlenecks and optimization opportunities.
Task Mining: User interactions with applications that have opportunities for efficiency improvements.
Communications Mining: Emails, chats, and meetings that reveal patterns, knowledge flows, and collaboration opportunities, which agents can execute upon and fulfill when necessary.
Idea Capture & Management: Facilitate and capture employee ideation, including research, to support continuous improvement and innovation initiatives.
One of the most adopted generative AI tasks to date is recording and transcribing meetings and then providing a summary with major action items. In a well-defined agent scenario, the agent may request notes approval from the meeting organizer. Upon receiving approval (with or without changes), an agent could automatically document and assign the to-dos in a project management board such as Jira, Monday, or Asana.
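Here is a hedged Python sketch of that handoff. The action items, the approval step, and the create_task helper are placeholders; in a real deployment, create_task would call the board’s actual API (Jira, Monday, Asana, or similar) with proper authentication.

```python
# Sketch of the meeting-notes handoff. The action items, approval step,
# and create_task helper are placeholders, not a specific product's API.

action_items = [
    {"owner": "Dana", "task": "Draft the Q3 budget proposal"},
    {"owner": "Luis", "task": "Schedule the vendor security review"},
]

def request_approval(organizer: str, items: list[dict]) -> bool:
    """Stand-in for sending the summarized notes to the meeting organizer."""
    print(f"Sending {len(items)} action items to {organizer} for approval...")
    return True  # assume approval (with or without changes) for this sketch

def create_task(owner: str, task: str) -> None:
    """Stand-in for a project-management API call that creates a board item."""
    print(f"Created '{task}' and assigned it to {owner}")

if request_approval("meeting organizer", action_items):  # human stays in the loop
    for item in action_items:
        create_task(item["owner"], item["task"])
```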
Agents are often chosen for deployment based on ROI, ease of implementation, and repeatability of the agent process. So in addition to documenting the process itself, managers should measure and manage the costs/effectiveness of tasks. Without this thorough documentation, it's hard to evaluate the impacts of potential agents within the larger KPIs of departments and organizational goals.
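For illustration only, a weighted score across ROI, ease of implementation, and repeatability is one simple way to rank documented opportunities. The weights and the 1-to-5 scores below are invented, not benchmarks.

```python
# Illustrative weighted scoring of candidate agent opportunities.
# The weights and 1-to-5 scores are made up for the example.

candidates = [
    {"name": "Meeting notes to tickets", "roi": 3, "ease": 5, "repeatability": 5},
    {"name": "Customer service triage", "roi": 5, "ease": 2, "repeatability": 4},
]
weights = {"roi": 0.5, "ease": 0.3, "repeatability": 0.2}

def score(candidate: dict) -> float:
    return sum(candidate[k] * w for k, w in weights.items())

for c in sorted(candidates, key=score, reverse=True):
    print(f"{c['name']}: {score(c):.1f}")
```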
Project Management Capacity
PMI Project Management Chart by SlideGeek.
Notice I mentioned project management in the introduction. Increasingly, it’s apparent that organizations that successfully operationalize AI have strong project management capabilities. They document processes, insist on executive sponsors for significant projects, and deploy change management, training and education programs as part of a normal project execution process.
For those who think this is an advertisement for the Project Management Institute, it is not. But as a former PMP and scrum master, I can say that the pitfalls that plague organizations are more often than not rooted in an unwillingness to embrace the discipline of strong organizational guardrails. These guardrails are often the hallmarks of project management, Agile, and Lean processes that keep enterprises on task.
Most smaller and medium-sized enterprises lack the discipline or desire to invest in such processes or the people necessary to ensure they stay on point. Instead, they prefer to buy more tech and throw it at the problem. This is a huge waste of time and resources. Honestly, it would be better to wait and invest in building stronger processes or, at a minimum, wait for the organization’s existing platforms to embrace agentic AI workflows.
An Honest Assessment of Data and Tech
In addition to organizational capacity, you’ll also need to honestly assess your existing data as it relates to the desired agentic flows. Remember, agents need to observe their environment for context, and that means they need good, clean, well-organized data to execute well. Arguably, this is the most critical step in building a strong agent. Poor data quality creates multiple opportunities for failure throughout the agentic process.
There are several approaches to provisioning clean data, but they all require a strong tech stack and data governance, including platform capabilities and accessibility. For example, you will need to ensure that API access is clean so the agent can access the information effectively. Unfortunately, because agentic AI as a technology is immature, in many cases, access to platforms will require custom APIs.
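To show the kind of glue work that usually implies, here is a rough Python sketch of a custom wrapper that exposes an internal platform to the agent as one clean tool. The lookup_order helper, its fields, and the pretend platform response are all hypothetical.

```python
# Rough sketch of the "custom API" glue an agent often needs today: a thin
# wrapper that exposes an internal platform as one clean tool, hiding the
# messy authentication and field cleanup. Everything here is hypothetical.

import json

def lookup_order(order_id: str) -> dict:
    """Wrapper around an internal order system.

    In production this would call the platform's API (or a custom one built
    for the agent) and normalize the response into clean, predictable fields.
    """
    raw = {"ORDER_STATUS": "SHIPPED", "SHIP_DT": "2025-01-14"}  # pretend platform response
    return {"status": raw["ORDER_STATUS"].lower(), "shipped_on": raw["SHIP_DT"]}

# The agent only sees a small, well-defined tool catalog.
TOOLS = {"lookup_order": lookup_order}

print(json.dumps(TOOLS["lookup_order"]("A-1234"), indent=2))
```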
Head hurting already? Again, agents are not mature from a technology deployment standpoint. The data matter reaffirms that. But it also illustrates another precursor for AI agents that you must consider.
You’ll Need Development Talent
Know your way around GitHub Copilot? No? You probably need a developer.
While agents are promised as a low-code/no-code technology, I don’t think no-code agents are really here yet. Creating AI agents has gotten easier than in the past, but it still requires an entry-level developer’s knowledge. This is based on my own experiments with Agentforce, Bubble.io, and Dify.ai. In the case of the former, I watched a live Agentforce demo that included building an agent, and I experimented directly with the latter two platforms.
All three are low-code enough for me to get started, but I hit stumbling blocks due to my lack of basic platform and coding skills. This tells me you will need a data scientist, developer, or platform-specific engineer (e.g., Salesforce engineer) to build your agent.
For those like me who lack development skills but want to play with agentic flows spanning multiple tasks, you can still create a custom GPT via OpenAI. In my opinion, this is still the best no-code option. Those interested in managing agents should consider experimenting with workflow building to familiarize themselves with the effort involved.
Conclusion
Implementing effective AI agents requires organizational discipline, strong project management capabilities, clean data architecture, and technical talent. While AI agents promise to automate routine tasks and create operational efficiencies – a tantalizing prospect for any business-minded individual – organizations should take a measured approach. In fact, it might be better to master a few general AI implementations before engaging in AI agent development.
Regardless, agents require building robust processes, improving data quality, and developing the necessary organizational capabilities. Building the foundation better positions your enterprise or organization to leverage AI agents when the technology matures.
The organizations that succeed with AI agents won't be those that adopt them the fastest. Instead, they will implement them with purpose, discipline, and a clear understanding of specific business needs.