However, while AI models can achieve greater accuracy than more traditional methods, the complexity that accompanies these sophisticated models raises new challenges.
Engineers are now faced with two questions: Why is my AI model making these decisions, and how can I verify that its results are the ones I expected?
To overcome this lack of transparency resulting from the sophisticated nature of AI models, engineers are turning to explainable AI – a set of tools and techniques to help them understand a model’s decision-making process.
Understanding the how and the why of AI models’ decisions is instrumental to the wider integration of the technology across industries, and in particular those operating under stringent regulatory frameworks.
Explainability holds immense potential for engineers across all verticals, which is why taking full advantage of the tools available will be integral to future innovation.
Figure 1: Evolution of AI models. A simple model may be more transparent, while a more sophisticated model can improve performance
Increasing predictive power
AI models need not always be complex. A model such as a temperature controller is inherently explainable because of a “common sense” understanding of the physical relationships it captures: as the temperature falls below a certain threshold, the heater turns on; as it rises above a higher threshold, the heater turns off. It is easy to verify the system is working as expected based on the temperature in the room. In applications where black box models are unacceptable, inherently explainable models may be accepted if they are sufficiently accurate.
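For illustration, such a rules-based controller fits in a few lines of code. The following is a minimal sketch in Python, with illustrative thresholds and a hysteresis band (an assumption, not taken from the article) to avoid rapid switching; every decision it makes is trivially verifiable.

```python
# A minimal sketch of an inherently explainable rules-based controller.
# The thresholds are illustrative, not drawn from any real system.

LOW_THRESHOLD = 18.0   # heater switches on below this temperature (deg C)
HIGH_THRESHOLD = 22.0  # heater switches off above this temperature (deg C)

def update_heater(temperature: float, heater_on: bool) -> bool:
    """Return the new heater state; the gap between thresholds avoids rapid toggling."""
    if temperature < LOW_THRESHOLD:
        return True
    if temperature > HIGH_THRESHOLD:
        return False
    return heater_on  # between the thresholds, keep the current state

print(update_heater(16.5, heater_on=False))  # True: too cold, switch on
print(update_heater(23.0, heater_on=True))   # False: warm enough, switch off
```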
However, moving to more sophisticated models allows engineers to improve predictive power. These models can take complex data, such as images and streaming signals, and use machine learning and deep learning techniques to extract patterns that a rules-based approach could not. In so doing, AI can improve performance in complex application areas like wireless and radar communications in ways that were previously not possible.
Benefits of explainability
With complexity comes a lack of transparency. AI models are sometimes referred to as “black boxes”: complex systems that provide little visibility into what the model learned during training, or whether it will work as expected in unknown conditions.
Explainable AI asks questions of the model to uncover and then explain its predictions, decisions, and actions, so engineers can maintain confidence that their models will work in all scenarios even as their predictive power increases.
For engineers working on models, explainability can also help analyse incorrect predictions and debug their work. This can include looking into issues within the model itself or within the input data used to train it. Explainability techniques can provide evidence as to why a model arrived at a particular result, giving engineers an opportunity to improve accuracy.
Stakeholders beyond model developers and engineers are also interested in the ability to explain a model, with their individual needs shaped by how they interact with the application. For example, a decision maker will want to understand how a model works without getting into technical detail, while a customer will want to feel confident that the model will work as expected in all scenarios.
As the desire grows to use AI in areas with specific regulatory requirements for robust training, the ability to demonstrate fairness and trustworthiness in a model’s decisions will grow in importance. Ultimately, decision makers want to feel confident that the models they work with are rational and will operate within a tight regulatory framework.
Of particular importance is the identification and removal of bias in all applications. Bias can be introduced when models are trained on data that is unevenly sampled, and it is particularly concerning in applications involving people.
Model developers must understand how bias could implicitly sway results, to ensure AI models provide accurate predictions without favouring particular groups.
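A first, simple check is whether the training data is evenly sampled across the groups it covers. The sketch below is a minimal Python illustration; the group labels are hypothetical stand-ins for a demographic attribute in real data.

```python
from collections import Counter

# Hypothetical group labels attached to training samples; uneven
# sampling like this can implicitly sway a trained model's results.
groups = ["A"] * 80 + ["B"] * 20

counts = Counter(groups)
total = sum(counts.values())
for group, count in sorted(counts.items()):
    print(f"group {group}: {count} samples ({count / total:.0%})")
# group A: 80 samples (80%)
# group B: 20 samples (20%)
```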
Figure 2: The trade-off between explainability and predictive power. In general, more powerful models tend to be less explainable, and engineers will need new approaches to explainability to maintain confidence in their models as predictive power increases
Current explainability methods
To address issues like confidence in models and the introduction of bias, engineers can integrate explainability methods into their AI workflows. Current explainability methods fall into two categories – global and local.
Global methods provide an overview of the most influential variables in the model based on input data and predicted output. For example, feature ranking sorts features by their impact on model predictions, while partial dependence plots chart a specific feature’s impact on predictions across all of its values.
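As a concrete illustration, both global methods are available in common open-source tooling. The sketch below uses scikit-learn on a stock tabular dataset; the random forest and the diabetes data are placeholders for whatever model and data are at hand, and permutation importance stands in as one way of ranking features.

```python
# A minimal sketch of two global methods, assuming a generic tabular
# regression task; the model and dataset are placeholders.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Feature ranking: sort features by how much shuffling each one
# degrades the model's predictions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean),
                 key=lambda pair: pair[1], reverse=True)
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")

# Partial dependence: chart one feature's average effect on the
# predictions across its range of values.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"])
plt.show()
```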
Local methods, such as Local Interpretable Model-agnostic Explanations (LIME), explain a single prediction result. LIME approximates a complex machine learning or deep learning model with a simple, explainable model in the vicinity of a point of interest. By doing so, it provides visibility into which predictors most influenced the model’s decision.
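The sketch below shows a LIME explanation using the open-source lime package (pip install lime) on the same kind of placeholder tabular model as above; the surrogate is only valid in the neighbourhood of the chosen point.

```python
# A minimal sketch of a local LIME explanation; dataset and model
# are placeholders for a real application.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
feature_names = load_diabetes().feature_names
model = RandomForestRegressor(random_state=0).fit(X, y)

# LIME fits a simple linear surrogate around one point of interest
# and reports which predictors most influenced that prediction.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 mode="regression")
explanation = explainer.explain_instance(X[0], model.predict, num_features=5)
for predictor, weight in explanation.as_list():
    print(f"{predictor}: {weight:+.3f}")
```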
Visualisations are another robust tool for assessing model explainability when building models for image processing or computer vision applications. Local methods such as Grad-CAM identify the locations in an image that most strongly influenced the model’s prediction, while the global method t-SNE uses feature groupings to display high-dimensional data in a simple two-dimensional plot.
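Grad-CAM requires a trained convolutional network, but the global t-SNE view can be sketched in a few lines. Below, scikit-learn’s TSNE projects 64-dimensional digit images into two dimensions; in practice the inputs would be a network’s learned features rather than raw pixels.

```python
# A minimal sketch of a global t-SNE visualisation; the digits dataset
# stands in for a network's learned feature vectors.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

# Project the 64-dimensional features onto two dimensions; points that
# cluster together were treated as similar in the feature space.
embedding = TSNE(n_components=2, random_state=0).fit_transform(X)

plt.scatter(embedding[:, 0], embedding[:, 1], c=y, cmap="tab10", s=8)
plt.colorbar(label="class label")
plt.title("t-SNE projection of high-dimensional features")
plt.show()
```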
Figure 3: The difference between global and local methods. Local methods focus on a single prediction, while global methods focus on multiple predictions
Looking beyond explainability
While explainability may overcome stakeholder resistance to black box AI models, it is only one step towards integrating AI into engineered systems. AI used in practice requires models that can be understood and constructed through a rigorous process, and that can operate at the level necessary for safety-critical and sensitive applications. For explainability to truly embed itself in AI development, further research is needed.
This is evident in industries such as aerospace and automotive, which are defining what safety certification of AI looks like for their applications. Where AI replaces or enhances traditional approaches, it must meet the same standards, and it will only be adopted if its outcomes can be proven with interpretable results. Verification and validation research is moving explainability beyond confidence that a model works under certain conditions, towards confirming that models used in safety-critical applications meet minimum standards.
As the development of explainable AI continues, engineers will increasingly recognise that a system’s output must match the expectations of its end users. Transparency in communicating results to the end users who interact with these AI models will therefore become a fundamental part of the design process.
Emphasis on explainability
The future of AI will have a strong emphasis on explainability. As AI is incorporated into safety-critical and everyday applications, scrutiny from both internal stakeholders and external users is likely to increase.
Viewing explainability as essential benefits everyone.
Engineers gain better information with which to debug their models and ensure that output matches their intuition, while also gaining more insight into model behaviour in order to meet compliance standards.
AI is only likely to increase in complexity, and engineers’ ability to focus on increased transparency in these systems will be vital to the continued evolution of AI.
Author details: Johanna Pingel, AI Product Manager, MathWorks