IFS 2021 Predictions: Why explainable AI can speed adoption of intelligent enterprise systems in 2021


Industry executives and experts share their predictions for 2021.  Read them in this 13th annual VMblog.com series exclusive.

Why explainable AI can speed adoption of intelligent enterprise systems in 2021

By Bob De Caux, VP of AI and Robotic Process Automation, IFS

We are becoming accustomed to autonomous systems in our daily lives, from mechanical ones like the electric garage door opener that stops if its electronic eye senses an obstruction, to the automatic braking system that slows or stops our vehicle before a frontal collision.

Understanding all the details behind the choices made by the artificial intelligence (AI) driving autonomous vehicles can be challenging, if not impossible, for a human. If we consider the time scale these systems work at (microseconds to make decisions based on enormous amounts of data), we can see how we would never be able to audit every part of that process.

Still, we are starting to accept those systems and the fact that there is only a limited level of understanding that we can get to; we are learning to trust those systems more and more through empirical evidence. We are also accepting that, in certain cases, we will need to focus on managing exceptions rather than on having complete control.

But what about cases where AI is applied to a process that is intuitively more auditable and where a human is still "in the loop" on the decision? Imagine an inventory planner working in her company's business system when a bot in that system advises her to cut orders for a range of parts by 37 percent for the coming quarter.

The natural question that experienced planner would ask is: why?

The answer to that question may be trickier, and the level of explainability required to make that planner comfortable may be higher. That business system may not only be aggregating data on historical transactions resident in the underlying database, but also applying algorithms to demand signals further up the sales pipeline, to slowing progress on pending sales, or even to external data on weather patterns, leading economic indicators like commodity prices, or metadata across social media big data stores. Ultimately, the effectiveness of machine learning models comes from how they can inform recommendations by finding hidden and unexpected correlations between indicators in aggregated volumes of data that humans could never process manually.

Yet the planner, as a professional with an independent mind, needs to know where the recommendation came from in order to trust it. Intelligent enterprise systems must cultivate this trust. In fact, the first goal of AI applied to the enterprise should be supporting expert human knowledge rather than replacing it. How can an intelligent system provide visibility into how it makes decisions? And why do we need this so badly before we can take guidance from intelligent systems or, ultimately, stand aside and simply monitor increasingly autonomous business models?

Why we ask why

Humans will always be ultimately responsible, not only for the decisions they make based on guidance from AI, but also for the decisions AI makes on their behalf. Yet apart from legal liability or ethical responsibility, we have a more deep-seated, innate need to know the hows and whys behind what is going on around us. And to accept the answers we get requires trust.

Understanding the motivations of human management is hard enough, but artificial systems may be even more opaque. Of course, human dynamics and motivation are just one reason we must know why artificial systems do what they do or make the recommendations they offer up to us in our managerial and executive roles.

There are regulatory, fiduciary and risk management-related reasons for us to know how AI thinks, makes decisions and formulates recommendations. Decisions must be auditable along various dimensions, and a software application must be able to prove compliance with environmental, financial, labor and other types of regulation. In almost any industry where AI is used in 2021, AI must be explainable for liability purposes, primarily civil but potentially criminal. Even automation that touches customer data must be auditable to ensure it respects the General Data Protection Regulation (GDPR). That means both the decision-making framework and the underlying data set must be auditable to ensure decisions or recommendations from AI are not biased or counter to regulation or legislation. They must also be demonstrated to be in the interest of the organization, its stakeholders and the organization's mission.

Enterprise AI visibility

AI deployed in a business or enterprise software environment must be able to solve a multitude of different problems. And sometimes, these problems require complex algorithms that are applied to a significant number of variables to identify an optimal course of action.

Not only do end users and management need to trust the AI, but they must also have enough visibility into its inner workings to debug it, improve its performance and monitor it.

Local explainability

Local explainability is really what users would like, and what we provide where it is feasible. They want a simple, visual audit trail to determine how a decision was made. Why did the model suggest we double our inventory of this specific stock keeping unit, spare part or raw material? Why are we reducing the periodicity of maintenance on a given class of assets in the oil field? As we build AI into our products, we are opting for visually intuitive methods for local explainability; in some cases, inherently explainable models like decision trees can be used to solve a problem, and that choice in itself facilitates explainability.

With a decision tree, an end user can see the decision flow in very linear terms and quickly come to an understanding of how a decision was arrived at. Each attribute is visible, as are the weights that determine which factors are more significant than others.
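
To make that concrete, here is a minimal sketch in Python with scikit-learn, using an invented reorder-recommendation problem rather than any IFS model, of how a decision tree's entire decision flow can be printed as human-readable rules:

    # Illustrative sketch only: a toy reorder-recommendation model whose
    # decision logic can be printed as human-readable rules.
    # Feature names and data are hypothetical, not taken from any product.
    from sklearn.tree import DecisionTreeClassifier, export_text

    features = ["on_hand_qty", "avg_weekly_demand", "supplier_lead_days"]

    # Tiny made-up history: [on-hand, weekly demand, lead time] -> reorder? (1/0)
    X = [
        [120, 10, 5],
        [30, 25, 14],
        [200, 5, 7],
        [15, 40, 21],
        [80, 30, 10],
        [150, 8, 3],
    ]
    y = [0, 1, 0, 1, 1, 0]

    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # The whole decision flow is visible as nested if/else rules a planner can audit.
    print(export_text(model, feature_names=features))

The printed output is a chain of simple threshold comparisons, which is exactly the kind of linear, auditable flow described above.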

In cases where different and less explainable models are required, like neural networks for example, we can use "model interpreters" to surface the reasons behind the model's recommendations and make them understandable to human eyes.
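
The article does not name a specific interpreter; one widely used open-source option is the SHAP library, which treats the model as a black box and estimates how much each input pushed a single prediction up or down. A minimal sketch, again with invented features and data:

    # Hedged sketch: SHAP as a model-agnostic "model interpreter" for a small
    # neural network. Features and data are hypothetical, not vendor-specific.
    import numpy as np
    import shap
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 3))   # e.g. demand signal, lead time, commodity price index
    y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(int)

    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X_train, y_train)

    # KernelExplainer estimates, per feature, how much each input pushed
    # this one recommendation up or down relative to a background sample.
    background = shap.sample(X_train, 50)
    explainer = shap.KernelExplainer(model.predict_proba, background)
    one_case = X_train[:1]
    print(explainer.shap_values(one_case))  # per-feature contributions for this single prediction

Other interpreters such as LIME work in a similar spirit; the point is that the attribution, not the network's internal weights, is what gets shown to the user.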

Global explainability

Sometimes a broader insight into how the model actually works is required, among other things to make sure that the model isn't interpreting data in a biased way. For example, there have been cases of biased models that penalized certain ethnicities or social groups when recommending whether or not to grant loans. Global explainability, in other words, tries to understand the reasoning of a model at a higher level, rather than focusing on the steps that led to a specific decision. A global explainability approach can also help technicians tweak and adjust the model's decision-making process for better performance and quality.
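
One simple, model-agnostic way to get that higher-level view is permutation importance: shuffle one feature at a time and measure how much overall model quality drops. The sketch below uses hypothetical features, including a deliberately suspicious proxy, to show how a dominant sensitive feature would surface:

    # Sketch of a global-explainability check using permutation importance.
    # Feature names are hypothetical; the technique is generic, not vendor-specific.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(1)
    feature_names = ["order_history", "region", "credit_score_proxy"]
    X = rng.normal(size=(300, 3))
    y = (X[:, 0] > 0).astype(int)   # only the first feature truly matters here

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    # If a sensitive attribute or a proxy for one dominates, that is a red flag for bias.
    for name, score in zip(feature_names, result.importances_mean):
        print(f"{name}: {score:.3f}")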

Harnessing the business logic for Intelligent Process Automation

One way to deliver explainable AI that is now coming to market in enterprise systems is intelligent process automation (IPA), which uses the underlying business logic of a business software system, such as enterprise resource planning (ERP) or service management, as an understandable foundation for AI to build on. Business software encompasses processes and procedures based on well-understood best practices, and it is these processes and procedures that can directly feed the development of machine learning (ML) models that can eventually improve and automate them.

But isn't the point of AI and ML to replace the use of business rules? We really need both. We need business rules that define how decisions are made in the business, but we also need those rules to change based on unfolding information. The enlightened approach in the context of a business software solution will do just this, enabling a powerful ML algorithm to evaluate the outcomes of rule-based decisions and revise rules in a predictable, explainable fashion. Ultimately, ML may not only augment existing rules but also suggest new ones. However, this approach will require a data history an ML algorithm can learn from.
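
As a toy illustration of what revising a rule in an explainable fashion can look like (the rule, threshold and history below are invented), an algorithm can score a rule's past decisions against actual outcomes and propose a new, still human-readable threshold:

    # Sketch only: revise a single business rule's threshold from outcome history.
    # The rule: "flag a part for expedited reorder when on-hand stock is below THRESHOLD".
    # All data is made up for illustration.

    # Hypothetical history: (on_hand_stock_when_decided, stockout_actually_occurred)
    history = [(5, 1), (12, 1), (20, 0), (8, 1), (35, 0), (15, 0), (3, 1), (25, 0)]

    def rule_accuracy(threshold):
        # The rule predicts a stockout when stock is below the threshold;
        # score that prediction against what actually happened.
        return sum((stock < threshold) == bool(stockout) for stock, stockout in history) / len(history)

    # "Learn" the revision: scan candidate thresholds and keep the best-performing one,
    # which remains a single explainable number a planner can review.
    best = max(range(1, 41), key=rule_accuracy)
    print(f"suggested threshold: {best} (historical accuracy {rule_accuracy(best):.0%})")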

In the more immediate term, the first priority will be to apply IPA to business processes in transactional systems like ERP or customer relationship management (CRM). As a lead comes in, IPA can apply business rules to qualify it based on standard criteria. That would trigger an automated workflow, from assigning the lead to a salesperson to tracking its progress through the funnel and eventually converting it into a sale. This will pay for itself many times over in administrative overhead, freeing up business development teams to spend more time interacting with customers.

Once there is enough transaction history built up, an ML algorithm can evaluate how well the qualification rules predict which leads will convert to sales, and eventually which rules or lead characteristics result in more profitable customers. An ML application may look at hundreds of business rules and create weighted combinations of those rules in an additive approach to optimizing the business, as sketched below. In similar fashion, IPA can automate accounts payable processes by matching invoices to purchase orders in ERP. IPA is explainable by default because it is based on well-defined process flows and rules contained in a transactional system of record. As transaction history accumulates, ML can suggest or even automatically make improvements to these rules to optimize processes for defined business goals.
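
A rough illustration of that additive, weighted-rules idea follows; the rules, lead fields and conversion history are all invented. Each qualification rule becomes a yes/no feature, and a simple additive model (logistic regression) learns one interpretable weight per rule:

    # Illustrative only: encode each lead-qualification rule as a 0/1 feature, then
    # learn a weight per rule from conversion history. Rules, fields and data are hypothetical.
    from sklearn.linear_model import LogisticRegression

    RULES = {
        "budget_over_50k":  lambda lead: lead["budget"] > 50_000,
        "decision_maker":   lambda lead: lead["contact_role"] == "decision_maker",
        "responded_in_48h": lambda lead: lead["response_hours"] <= 48,
    }

    def rule_vector(lead):
        # Apply every rule; the model only ever sees explainable yes/no outcomes.
        return [int(check(lead)) for check in RULES.values()]

    # Made-up historical leads and whether each one eventually converted.
    history = [
        ({"budget": 80_000, "contact_role": "decision_maker", "response_hours": 12}, 1),
        ({"budget": 20_000, "contact_role": "analyst",        "response_hours": 90}, 0),
        ({"budget": 60_000, "contact_role": "analyst",        "response_hours": 24}, 1),
        ({"budget": 10_000, "contact_role": "decision_maker", "response_hours": 72}, 0),
        ({"budget": 95_000, "contact_role": "decision_maker", "response_hours": 6},  1),
        ({"budget": 15_000, "contact_role": "analyst",        "response_hours": 120}, 0),
    ]

    X = [rule_vector(lead) for lead, _ in history]
    y = [converted for _, converted in history]

    model = LogisticRegression().fit(X, y)

    # One weight per rule: which rules actually predict conversion, and by how much.
    for rule_name, weight in zip(RULES, model.coef_[0]):
        print(f"{rule_name}: {weight:+.2f}")

Because each weight maps back to a named business rule, the resulting model stays auditable in exactly the way the underlying rules are.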

We must know why

People have a hard enough time explaining their own decisions, but ultimately they are held responsible for them. AI can be more rational and objective than humans in its decision making and can consider exponentially more data in doing so. But we must be able to reach an acceptable level of comfort with how those decisions are made, regardless of how complex the underlying process is. People want to understand these decisions before they accept them; they need to be satisfied that the behavior of the model is in line with what they would expect. In some cases, we must be able to assign blame or proportional responsibility when AI goes wrong. We have the approaches to ensure AI used in an enterprise setting is explainable to the fullest degree each use case allows. Watch for and ask about these approaches in the technology you use to run your business.

##

About the Author

Bob De Caux, vice president, artificial intelligence & robotic process automation


Bob De Caux, VP of AI and Robotic Process Automation, combines deep technical expertise (a PhD in AI and complex systems simulation) with in-depth knowledge of how to use those techniques to build products and create customer solutions across a number of sectors. He is leading IFS's journey to incorporate AI and machine learning into its software products around the world to make them more effective.

Published Monday, December 07, 2020 7:35 AM by David Marshall