Six things to consider when answering this question, and what they mean for your AI strategy, by XpertRule CEO and Founder Akeel Attar

     

    1. The rise of Agentic Gen AI

    Following the initial high expectations of the transformative business power of LLMs / Gen AI, it is becoming increasingly clear that, to deliver real ROI and productivity improvements in the enterprise, Gen AI must move beyond AI assistants and co-pilots towards the Intelligent Automation of work using Agentic AI. 

    Agentic AI promises to substantially transform work across many industries by deploying autonomous programs that can be given business goals and then left to work out how to achieve them by executing end-to-end automation tasks. These agents function autonomously: they understand natural language, comprehend goals and instructions, seek additional information, and then plan and execute multi-step tasks without human oversight. They can also autonomously access knowledge documents / databases and interface with enterprise systems. Leading companies like OpenAI and Microsoft are promoting agentic AI development through large language models, action models, co-pilots and other innovations.  

    But does Agentic AI live up to the hype? 

     

    2. Beneath the hype, there are limitations of Agentic Gen AI

    The ‘received wisdom’ is that Gen AI equips Agentic AI with the following capabilities, which are essential for autonomous agency: 

    • A Natural Language (NL) user interface
    • Interrogating NL documents & communications for question answering
    • Orchestration, reasoning, decision making & planning
    • Continuous learning
    • Generating reports & communications
    • Information retrieval
    • Summarising, translating and reformatting of NL text
    • Interfacing to enterprise applications & data, RPA / APIs, and microservices

    The capabilities most critical to the operation of autonomous Agentic AI are agent orchestration, reasoning, and planning. Unfortunately, these are precisely where the main limitations of Gen AI lie, as summarised below: 

    • Limited orchestration capability - automating work at a high level requires orchestrating workflows & decision-flows to execute tasks that achieve business goals. Current Gen AI agent technology cannot reliably deploy an orchestration agent (or multiple agents) to act autonomously as the orchestrator / glue that coordinates actions across other AI agents.
    • Limited reasoning capability - Gen AI agents cannot reliably execute logical reasoning, problem solving and planning tasks. This is partly because, while LLMs can learn patterns very well, they cannot be trained to make inferences from logical rules. The limitation is amplified by the fact that there is no universal syntax for expressing logical rules in natural language, which increases the unreliability / ambiguity of Gen AI reasoning (the short sketch after this list shows, by contrast, what inference over explicitly expressed rules looks like).
    • Limited continuous learning capability - the orchestration and logical reasoning limitations above apply both to deploying these capabilities in automation and to continuously learning new decisions and workflows from data.
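
    To make the reasoning limitation concrete, here is a minimal, hypothetical sketch (not from the article) of deterministic forward-chaining over explicitly written rules - the kind of inference a symbolic engine performs repeatably, but which an LLM working from ambiguous natural-language statements of the same rules cannot be guaranteed to reproduce. The rule names and facts below are illustrative only.

    # Illustrative only: deterministic rule-based inference over an explicit rule syntax,
    # in contrast to expressing the same logic ambiguously in natural language.
    RULES = [
        ({"order_value_over_10k", "new_customer"}, "requires_manual_approval"),
        ({"requires_manual_approval", "approver_unavailable"}, "escalate_to_finance"),
    ]

    def infer(facts: set) -> set:
        """Forward-chain over RULES until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    # The same facts always yield the same conclusions - no ambiguity, no hallucination.
    print(infer({"order_value_over_10k", "new_customer", "approver_unavailable"}))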

    So, there are clear limitations to Agentic Gen AI, but can it deliver on the requirements of Responsible AI?

     

    3. The need for Responsible AI

    Most early adopters of Gen AI and agentic AI have found it difficult to move from proof-of-concept developments to production systems because of the risks posed by Gen AI: data privacy, security, hallucinations, biased outputs, and compliance, copyright & legal exposure. Responsible AI is an approach to developing and deploying Gen AI that mitigates these risks by ensuring safety and ethical fairness. 

    Responsible AI is not a nice-to-have but a must-have framework for businesses. The National Institute of Standards and Technology (NIST) has released a framework that characterises the trustworthy and responsible use of Gen AI in terms of 7 criteria: 

    1. Accuracy & Dependability: AI systems must provide reliable and precise results. 
    2. User Safety: Prioritizing the safety of users to prevent harm. 
    3. Security and Resilience: AI systems should be resilient against malicious attacks and ensure security. 
    4. Accountability & Transparency: AI systems need to be transparent & accountable for their actions. 
    5. Understandability: The inner workings of AI systems should be comprehensible. 
    6. Privacy Protection: Respecting user privacy and safeguarding personal data. 
    7. Fairness & Bias Management: Ensuring fairness in outcomes and mitigating harmful biases.

    Most businesses are clear that any AI strategy must follow this framework, but how does Agentic AI measure up?

     

    4. Agentic Gen AI fails the Responsible AI tests

    The limited capabilities of Agentic Gen AI in workflow orchestration, reasoning and planning mean that the autonomous actions taken, outcomes generated, and decisions made by Gen AI agents cannot be trusted to be accurate and consistent (due to hallucinations and a lack of common-sense reasoning), thereby failing the Responsible AI tests of Accuracy & Dependability and User Safety. 

    Another important limitation of Agentic Gen AI, and one that cuts across several Responsible AI criteria, is that Gen AI / LLMs are black-box models that cannot explain their decisions or actions. This means Agentic Gen AI also fails the Responsible AI tests of Accountability & Transparency, Understandability, and Fairness & Bias Management. 

    As it stands, Agentic Gen AI does not pass the Responsible AI test, but does that mean businesses should dismiss it?

     

    5. Using Composite AI to deliver Responsible Agentic AI

    To deliver Agentic AI that passes the Responsible AI test, we need to address the limitations of Gen AI. First, we need to recognise that AI does not revolve around Gen AI, which is only one of the many technologies that make up the Composite AI universe. Responsible Agentic AI can be delivered using Agentic Composite AI that combines Gen AI, Symbolic (Logic) AI, Predictive AI (non-Gen-AI ML), and optimisation techniques as follows: 

    • Symbolic & Predictive AI deliver predictable and repeatable outcomes that help agents meet the Accuracy, Dependability, and Safety tests for autonomous decision making & planning.
    • Symbolic AI (no-code logic) and symbolic machine learning (learning of graphical trees & rules from data) both deliver explainable / understandable reasoning models that address the Accountability, Transparency, and Understandability tests.
    • Gen AI can still be used for autonomous orchestration, reasoning & decision making provided that additional Responsible AI guardrails are deployed. These guardrails can take the form of an accuracy and common-sense validation layer using symbolic AI (rule-based logic), a human in the loop augmented by Decision Intelligence, or both (a minimal sketch of such a guardrail layer follows this list).
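
    As a rough, hypothetical illustration of that guardrail pattern (not taken from any specific product), the sketch below wraps a Gen AI agent's proposed decision in an explicit rule-based validation layer and escalates to a human reviewer when any rule is violated. The decision fields, rules and thresholds are illustrative assumptions.

    # Hypothetical sketch of a Responsible AI guardrail: a symbolic (rule-based)
    # validation layer around a Gen AI agent's proposed decision, with a
    # human-in-the-loop fallback. All names, rules and thresholds are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Decision:
        action: str                              # e.g. "approve_refund", proposed by the Gen AI agent
        amount: float                            # monetary value involved
        rationale: str                           # the agent's natural-language justification
        violations: list = field(default_factory=list)

    def validate(decision: Decision) -> Decision:
        """Apply explicit, auditable business rules to the agent's proposal."""
        if decision.action == "approve_refund" and decision.amount > 500:
            decision.violations.append("refunds over 500 require human approval")
        if not decision.rationale.strip():
            decision.violations.append("missing rationale - fails transparency check")
        return decision

    def execute_with_guardrails(proposed: Decision) -> str:
        checked = validate(proposed)
        if checked.violations:
            # Human in the loop: route to a reviewer instead of acting autonomously.
            return f"ESCALATED to human reviewer: {checked.violations}"
        return f"EXECUTED autonomously: {checked.action}"

    # In practice the Decision would be parsed from the Gen AI agent's output.
    print(execute_with_guardrails(Decision("approve_refund", 800.0, "customer reported damaged goods")))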

    In other words, combined with other AI technologies, Agentic Gen AI can indeed deliver on its promise.

     

    6. Why Agentic AI needs a new Orchestration Framework

    Agentic AI has the potential to completely transform the nature of work through agentic automation and human-agent collaboration. The effective orchestration of Agentic AI is key to such a transformation. There are currently two approaches to orchestrating Agentic AI: the first is classical Business Process Orchestration, while the new kid on the block is autonomous Gen AI multi-agent orchestration. 

    The Business Process Orchestration approach is outdated because it treats work as a collection of (routine) processes to automate, rather than as decision-making tasks, reasoning, planning and actions to automate. Agentic (Gen) AI can in theory deliver autonomous multi-agent orchestration, but in practice (as outlined in the previous sections) its limitations in reasoning and workflow orchestration make it unsuitable for autonomous orchestration. 

    An Agentic Orchestration platform needs to orchestrate many agents and multiple types of agent, including LLMs, Decision Logic, Predictive AI, Optimisation, Microservices, Human in the Loop, Document Processing, and bots. 
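
    To make that concrete, here is a minimal, hypothetical sketch (not any particular product's implementation) of an orchestrator that routes a task across heterogeneous agent types behind a common interface; the agent classes, task fields and thresholds are illustrative assumptions.

    # Hypothetical sketch of agentic orchestration across heterogeneous agent types.
    # Each agent exposes the same run() interface so the orchestrator can chain them.
    class DecisionLogicAgent:
        def run(self, task: dict) -> dict:
            # Explicit, auditable rule rather than a generated answer.
            task["priority"] = "high" if task.get("claim_value", 0) > 10_000 else "normal"
            return task

    class LLMAgent:
        def run(self, task: dict) -> dict:
            # Placeholder for a call to an LLM (summarisation, drafting, extraction).
            task["summary"] = f"Draft summary of claim {task.get('claim_id')}"
            return task

    class HumanInTheLoopAgent:
        def run(self, task: dict) -> dict:
            # High-priority work is parked for a person rather than handled autonomously.
            task["status"] = "awaiting human review" if task.get("priority") == "high" else "auto-completed"
            return task

    def orchestrate(task: dict, pipeline: list) -> dict:
        """Run the task through each agent in turn; the orchestrator owns the flow."""
        for agent in pipeline:
            task = agent.run(task)
        return task

    result = orchestrate(
        {"claim_id": "C-123", "claim_value": 15_000},
        [DecisionLogicAgent(), LLMAgent(), HumanInTheLoopAgent()],
    )
    print(result)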

    Our viabl.ai Decision Intelligence Platform uses decision inference for both agent orchestration and agentic reasoning. It is a single, unified low-code platform for the implementation and orchestration of agentic AI, delivering Decision Intelligence, LLMs, document processing, API integration, and advanced hybrid agent-human collaboration. 

    In conclusion, yes, I believe that Agentic AI can pass the Responsible AI test and deliver autonomously on business goals, but only in partnership with other AI tools. 

     

    I’d love to hear what you think.  Please comment underneath the article on LinkedIn or reach out to me directly with your thoughts.