Having travelled the AI-powered automation galaxy for the last 35 years, I wanted to share with my fellow travellers some of the practical navigational experience gained in the course of my journey. I have tried to keep this guide as brief as possible while providing a comprehensive map of the technologies required to deliver the widest range of intelligent automation applications. I have also included a list of essential myths and facts that are often masked by the massive AI hype we continue to witness in the media. If you find this guide useful then please share it with other fellow travellers!

    The map below shows the building blocks of Intelligent Automation in terms of platforms, functionalities and AI technologies:

     

    AI-powered Intelligent Automation: The Myths and the Facts by Akeel Attar

     

    1- AI is not machine learning

    There is a perception that machine learning, and deep learning in particular, now defines AI and supersedes all other AI technologies of the preceding 30 years. The truth is that machine learning is only one aspect of AI that complements other AI technologies, and deep learning is only one machine learning technique, one that has proved particularly effective at processing unstructured data.

     

    2- Robotic Process Automation support for AI

    RPA tools are often perceived to support rules automation for automating human decision-making tasks. In reality, RPA tools provide limited support for conditional logic branching in decision flows, which falls far short of what is needed to automate the Monitor / Assess / Act / Plan skills supported by Decision / Rules Automation tools.

    RPA tools are increasingly offering features for processing unstructured data, and natural language text in particular.

     

    3- IoT & AI

    An IoT ecosystem (MS, Amazon, IBM, ARM etc.) extends the intelligent automation capability to physical assets. However, note that some intelligent automation can take place at the IoT edge, and therefore the data uploaded to the enterprise digital ecosystem will very often consist of time-aggregated measurements plus any automated actions / decisions made at the edge. AI capabilities similar to the ones shown in the diagram above can be available at the edge, perhaps with the exception of advanced processing of unstructured data.
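As a minimal sketch of this edge pattern (the function name, threshold and payload fields below are invented for illustration, not a real edge API), a device can reduce a window of raw readings to summary measurements plus any local decision before uploading:

```python
from statistics import mean

HIGH_TEMP_LIMIT = 80.0  # assumed threshold for an edge-side alert

def aggregate_window(readings):
    """Reduce a window of raw sensor readings to time-aggregated
    measurements plus any automated action taken at the edge."""
    summary = {
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }
    # The edge decision travels upstream with the aggregates, not the raw data.
    summary["action"] = "raise_alert" if summary["max"] > HIGH_TEMP_LIMIT else "none"
    return summary

payload = aggregate_window([71.2, 74.8, 82.5, 79.9])
```

Only `payload` would be uploaded to the enterprise platform, keeping the raw high-frequency stream at the edge.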

     

    4- Processing Unstructured Data

    By far the biggest breakthrough in AI over the last 10 years has been the processing of unstructured data (images, speech and natural language text), and deep learning has been the technology underpinning this success. It has allowed structured data attributes to be extracted from unstructured data, thereby allowing this data to be used by the mature symbolic reasoning / decisioning AI technologies. So once again deep learning supplemented, and did not replace, automated human decisioning. Deep learning has been particularly effective for processing unstructured data for two reasons:

    i- The lack of transparency of DL models is not a problem for image / speech / text processing, where an explanation is not required to justify, for example, a classification made by the DL model.

    ii- Major vendors like Google / MS / IBM provide general DL cognitive services that have been trained on large data sets and are ready to be easily consumed by intelligent automation applications to process unstructured data (translation, speech recognition, image classification, sentiment analysis etc.). No data scientists are needed to consume such cognitive services.

    It must be remembered that DL has no common-sense knowledge and no ‘understanding’ of the results of processing unstructured data. Therefore, before its output is used in automated decisioning, the cost of mis-classification without human involvement has to be estimated and allowed for.
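The pattern above, extracting a structured attribute from text and gating on confidence before any automated decision, can be sketched as follows. The keyword scorer is a deliberately simple stand-in for a vendor DL cognitive service, and the names (`classify_sentiment`, `CONFIDENCE_FLOOR`) are illustrative, not a real API:

```python
# Toy stand-in for a DL sentiment service: score text by keyword overlap.
POSITIVE = {"great", "excellent", "happy", "resolved"}
NEGATIVE = {"broken", "angry", "refund", "complaint"}

CONFIDENCE_FLOOR = 0.75  # below this, route to a human instead of auto-deciding

def classify_sentiment(text):
    """Extract a structured attribute (label + confidence) from raw text."""
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    total = pos + neg
    if total == 0:
        return {"label": "neutral", "confidence": 0.0}
    label = "positive" if pos >= neg else "negative"
    return {"label": label, "confidence": max(pos, neg) / total}

def route(text):
    """Gate automated decisioning on confidence, allowing for the
    cost of mis-classification without human involvement."""
    result = classify_sentiment(text)
    if result["confidence"] < CONFIDENCE_FLOOR:
        return "human_review"
    return f"auto:{result['label']}"
```

The structured output of `classify_sentiment` is what a downstream rules / decisioning engine would consume.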

     

    5- Decisioning & Machine learning

    Decisioning (to monitor, assess, act, plan and collaborate) is at the heart of AI and therefore the engine room of intelligent automation. There is a myth that Deep Learning is suited to automated decisioning, but there are many reasons why this may not be the case.

    i- Forget about ‘big data’: for most decisioning applications there is simply not sufficient data to learn from, meaning data that covers the whole range of practical operating regions, all events (problems), all interactions etc. Human subject matter experts can learn from a small number of occurrences because they can apply common sense and background domain knowledge to the observations, something that DL cannot do, as DL can only use the data presented to it. This is why automating the heuristic expertise of people using rules / decision engines is so effective for automating decision-making.
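As an illustrative sketch of capturing heuristic expertise as rules rather than learning it from scarce data (the conditions, thresholds and action names below are invented, not drawn from a real rule base):

```python
# Ordered (condition, action) rules captured from a subject matter expert.
# Earlier rules take priority, mirroring the expert's own triage order.
RULES = [
    (lambda f: f["vibration"] > 7.0 and f["temperature"] > 90, "shutdown_pump"),
    (lambda f: f["vibration"] > 7.0, "schedule_inspection"),
    (lambda f: f["temperature"] > 90, "increase_cooling"),
]

def assess(facts):
    """Fire the first matching rule against the current facts."""
    for condition, action in RULES:
        if condition(facts):
            return action
    return "no_action"
```

A handful of such rules can encode judgement that would require far more historic data than most operations ever collect to learn reliably.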

    ii- Predictive analytics can be used to enhance automated human decisioning by machine-learning better decisions from historic operations data, for example learning better loan underwriting decisions. We have the choice here of using deep learning (neural nets), statistical learning (regression or Bayesian) or symbolic learning (trees or rules). As a rule, the degree of explainability / understandability of a machine learning model is inversely proportional to its accuracy. For decision-making applications the understandability of the models by the subject matter experts is very important, as it gives the human experts confidence in the model and allows them to supplement it with automated rules. This makes symbolic learning and statistical models far better suited than DL, and this has been my experience in many domains, particularly in process and manufacturing.
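To illustrate what that explainability looks like in a symbolic (tree-style) model, here is a minimal underwriting sketch in which every decision comes back with the path that produced it; the attribute names and thresholds are invented for illustration:

```python
# A hand-readable decision tree for loan underwriting. An expert can
# audit each branch and supplement or override it with further rules.
def underwrite(applicant):
    """Return (decision, explanation) so the decision path is auditable."""
    if applicant["credit_score"] < 580:
        return "decline", "credit_score < 580"
    if applicant["debt_to_income"] > 0.45:
        return "refer", "debt_to_income > 0.45"
    if applicant["years_employed"] < 1:
        return "refer", "years_employed < 1"
    return "approve", "all checks passed"
```

A neural net might score such applicants slightly more accurately, but it could not return the second element of that tuple, which is what earns the expert's trust.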

    iii- Bayesian machine learning is particularly well suited to real-time online learning in decisioning applications and can handle both structured and unstructured data.
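A minimal sketch of what makes the Bayesian approach suit online learning: a Beta-Binomial model updates its estimate one observation at a time, with no retraining pass over historic data (the class and its prior are purely illustrative):

```python
class BetaEstimator:
    """Online Bayesian estimate of a success rate via conjugate updates."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta  # Beta(1, 1) = uniform prior

    def update(self, success):
        # Conjugate update: add one count per streamed observation.
        if success:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def mean(self):
        # Posterior mean of the success rate.
        return self.alpha / (self.alpha + self.beta)

est = BetaEstimator()
for outcome in [True, True, False, True]:
    est.update(outcome)
```

Each `update` is a constant-time operation, which is what makes this practical inside a real-time decisioning loop.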

     

    6- Intelligent chat-bots / virtual Assistants

    Most chat-bot technologies today support shallow conversations (a few interactions) aimed at users who are either seeking help in navigating online / enterprise content for relevant information or requesting simple services like booking a flight, making an appointment, making a payment or ordering a pizza. A chat-bot aimed at providing simple services or information will use NLP to extract the ‘intent’ behind the user’s utterance and then engage in a simple conversation to capture the related attributes required to carry out the simple automated task or to retrieve the relevant information.
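The intent-plus-attributes pattern described above can be sketched as follows. A real system would use an NLP / DL service for intent detection; the keyword matcher and the intent / slot names here are illustrative stand-ins:

```python
# Each intent lists trigger keywords and the slots (attributes) the bot
# must capture through conversation before it can carry out the task.
INTENTS = {
    "book_flight": {"keywords": {"flight", "fly"},
                    "slots": ["origin", "destination", "date"]},
    "order_pizza": {"keywords": {"pizza"},
                    "slots": ["size", "topping"]},
}

def detect_intent(utterance):
    """Stand-in for an NLP intent classifier: match on keywords."""
    words = set(utterance.lower().split())
    for name, spec in INTENTS.items():
        if words & spec["keywords"]:
            return name
    return None

def missing_slots(intent, captured):
    """Slots the bot still needs to ask for before it can act."""
    return [s for s in INTENTS[intent]["slots"] if s not in captured]
```

The conversation simply loops, prompting for each missing slot until `missing_slots` returns an empty list, which is why such dialogues stay shallow.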

    More advanced chat-bots / virtual assistants use knowledge graphs, ontologies and DL to support a deeper conversation, allowing enterprise data / content to be combined with external content / ontologies that can be easily navigated by a user through a conversational interface.

    It is a myth that the use of knowledge graphs with ontologies, DL and NLP can deliver deep conversations covering a detailed, compliant advisory process (such as financial planning or health screening) or problem solving (such as complex trouble-shooting). For such applications, a Decisioning engine is required to drive the conversation via inference and NLP. This can support conversations of unlimited complexity, with the conversation thread, focus and decisioning expertise controlled by the Decisioning engine.

     

    7- Smart workflow for Decisioning

    A decisioning system needs a Smart-Workflow capability that is separate from any workflow functionality available in the RPA or BPM digital platform. Smart-Workflow is required to:

    i- Drive the Decisioning problem-solving strategy, which often involves the orchestration of a number of decisioning sub-tasks and actions

    ii- Manage the flow, the to-and-fro hand-over and the collaboration between the automated decisioning or RPA bots and the chat-bots / virtual assistants that manage the conversation with the human users.
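The two requirements above can be sketched as a small orchestration loop in which control passes back and forth between a decisioning step, a chat-bot and an RPA bot; the step names and the hand-over protocol are invented examples, not a real workflow API:

```python
def smart_workflow(case):
    """Orchestrate decisioning sub-tasks and bot/chat-bot hand-overs,
    returning the trail of steps taken for audit."""
    trail = []
    step = "assess"
    while step != "done":
        trail.append(step)
        if step == "assess":
            # Decisioning engine decides whether a human conversation is needed.
            step = "chatbot" if case.get("needs_more_info") else "rpa_bot"
        elif step == "chatbot":
            case["needs_more_info"] = False  # conversation fills the data gap
            step = "assess"                  # hand control back to decisioning
        elif step == "rpa_bot":
            case["executed"] = True          # RPA bot carries out the action
            step = "done"
    return trail
```

The point of the sketch is the hand-over: neither the RPA bot nor the chat-bot owns the flow, the workflow around the decisioning engine does.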

     

    For more information please visit xpertrule.com