Explainable AI: From the peak of inflated expectations to the pitfalls of interpreting machine learning models – ZDNet
Posted: August 27, 2020 at 3:50 am
Machine learning and artificial intelligence are helping automate an ever-increasing array of tasks, with ever-increasing accuracy. They are supported by the growing volume of data used to feed them and the growing sophistication of algorithms.
The flip side of more complex algorithms, however, is less interpretability. In many cases, the ability to retrace and explain outcomes reached by machine learning (ML) models is crucial, as:
"Trust models based on responsible authorities are being replaced by algorithmic trust models to ensure privacy and security of data, source of assets and identity of individuals and things. Algorithmic trust helps to ensure that organizations will not be exposed to the risk and costs of losing the trust of their customers, employees and partners. Emerging technologies tied to algorithmic trust include secure access service edge, differential privacy, authenticated provenance, bring your own identity, responsible AI and explainable AI."
The above quote is taken from Gartner's newly released 2020 Hype Cycle for Emerging Technologies. In it, explainable AI is placed at the peak of inflated expectations. In other words, we have reached peak hype for explainable AI. To put that into perspective, a recap may be useful.
As experts such as Gary Marcus point out, AI is probably not what you think it is. Many people today conflate AI with machine learning. While machine learning has made strides in recent years, it's not the only type of AI we have. Rule-based, symbolic AI has been around for years, and it has always been explainable.
Incidentally, that kind of AI, in the form of "Ontologies and Graphs," is also included in the same Gartner Hype Cycle, albeit in a different phase -- the trough of disillusionment. Incidentally, again, that's conflating: ontologies are part of AI, while graphs are not necessarily.
That said: If you are interested in getting a better understanding of the state of the art in explainable AI machine learning, Christoph Molnar's book is a good place to start. Molnar, a data scientist and Ph.D. candidate in interpretable machine learning, wrote Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, in which he elaborates on the issue and examines methods for achieving explainability.
Gartner's Hype Cycle for Emerging Technologies, 2020. Explainable AI, meaning interpretable machine learning, is at the peak of inflated expectations. Ontologies, a part of symbolic AI that is explainable, are in the trough of disillusionment.
Recently, Molnar and a group of researchers attempted to address ML practitioners by raising awareness of pitfalls and pointing out solutions for correct model interpretation, as well as ML researchers by discussing open issues for further research. Their work was published as a research paper, titled Pitfalls to Avoid when Interpreting Machine Learning Models, at the ICML 2020 Workshop XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.
Similar to Molnar's book, the paper is thorough. Admittedly, however, it's also more involved. Yet, Molnar has striven to make it more approachable by means of visualization, using what he dubs "poorly drawn comics" to highlight each pitfall. As with Molnar's book on interpretable machine learning, we summarize findings here, while encouraging readers to dive in for themselves.
The paper mainly focuses on the pitfalls of global interpretation techniques when the full functional relationship underlying the data is to be analyzed. Discussion of "local" interpretation methods, where individual predictions are to be explained, is out of scope. For a reference on global vs. local interpretations, you can refer to Molnar's book as previously covered on ZDNet.
Authors note that ML models usually contain non-linear effects and higher-order interactions. As interpretations are based on simplifying assumptions, the associated conclusions are only valid if we have checked that the assumptions underlying our simplifications are not substantially violated.
In classical statistics this process is called "model diagnostics," and the research claims that a similar process is necessary for interpretable ML (IML) based techniques. The research identifies and describes pitfalls to avoid when interpreting ML models, reviews (partial) solutions for practitioners, and discusses open issues that require further research.
Under- or overfitting models will result in misleading interpretations regarding true feature effects and importance scores, as the model does not match the underlying data-generating process well. Models should not be evaluated on their training data, due to the danger of overfitting; we have to resort to out-of-sample validation such as cross-validation procedures.
Formally, IML methods are designed to interpret the model instead of drawing inferences about the data generating process. In practice, however, the latter is the goal of the analysis, not the former. If a model approximates the data generating process well enough, its interpretation should reveal insights into the underlying process. Interpretations can only be as good as their underlying models. It is crucial to properly evaluate models using training and test splits -- ideally using a resampling scheme.
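To make the point concrete, here is a minimal sketch of how training-set evaluation overstates performance compared to cross-validation. It assumes a generic scikit-learn setup; the synthetic dataset and the choice of model are illustrative placeholders, not taken from the paper.

```python
# Training-set R^2 is optimistically biased; cross-validated R^2 is not.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
model = RandomForestRegressor(random_state=0)

train_r2 = model.fit(X, y).score(X, y)                        # evaluated on training data
cv_r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()  # out-of-sample estimate
print(f"train R^2: {train_r2:.3f} | 5-fold CV R^2: {cv_r2:.3f}")
```

Interpretations drawn from the first number would overstate how well the model captures the data-generating process; the second is the one to trust.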
Flexible models should be part of the model selection process so that the true data-generating function is more likely to be discovered. This is important, as the Bayes error for most practical situations is unknown, and we cannot make absolute statements about whether a model already fits the data optimally.
Using opaque, complex ML models when an interpretable model would have been sufficient (i.e., one with similar performance) is a common mistake. The recommendation is to start with simple, interpretable models and gradually increase complexity in a controlled, step-wise manner, carefully measuring and comparing predictive performance at each step.
Measures of model complexity allow us to quantify the trade-off between complexity and performance and to automatically optimize for multiple objectives beyond performance. Some steps toward quantifying model complexity have been made. However, further research is required as there is no single perfect definition of interpretability but rather multiple, depending on the context.
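As an illustration of that workflow, the following sketch compares an interpretable baseline against a more complex model under cross-validation. The dataset and the two candidate models are assumptions for demonstration purposes.

```python
# Start with an interpretable baseline; only escalate if complexity clearly pays off.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

candidates = [("linear (interpretable)", LinearRegression()),
              ("gradient boosting (opaque)", GradientBoostingRegressor(random_state=0))]
for name, model in candidates:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: CV R^2 = {score:.3f}")
# If the interpretable model performs comparably, prefer it.
```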
This pitfall -- ignoring feature dependence -- is further analyzed in three sub-categories: interpretation with extrapolation, confusing correlation with dependence, and misunderstanding conditional interpretation.
Interpretation with extrapolation refers to interpretation methods that perturb features to produce artificial data points, whose model predictions are then aggregated into global interpretations. But if features are dependent, perturbation approaches produce unrealistic data points. In addition, even if features are independent, using an equidistant grid can produce unrealistic values for the feature of interest. Both issues can result in misleading interpretations.
Before applying interpretation methods, practitioners should check for dependencies between features in the data (e.g., via descriptive statistics or measures of dependence). When it is unavoidable to include dependent features in the model, which is usually the case in ML scenarios, additional information regarding the strength and shape of the dependence structure should be provided.
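One simple way to operationalize this check, sketched below under the assumption of a pandas DataFrame of features, is to flag strongly correlated pairs before applying perturbation-based methods; the threshold of 0.8 is an illustrative choice.

```python
# Flag strongly dependent feature pairs before perturbation-based interpretation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
df = pd.DataFrame({"x1": x1,
                   "x2": x1 + rng.normal(scale=0.1, size=1000),  # depends on x1
                   "x3": rng.normal(size=1000)})                 # independent

corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))  # keep upper triangle
pairs = upper.stack()
print(pairs[pairs > 0.8])  # x1/x2 flagged: perturbing them independently extrapolates
```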
Confusing correlation with dependence is a typical error. The Pearson correlation coefficient (PCC) is a measure used to track dependency among ML features. But features with PCC close to zero can still be dependent and cause misleading model interpretations. While independence between two features implies that the PCC is zero, the converse is generally false.
Any type of dependence between features can have a strong impact on the interpretation of the results of IML methods. Thus, knowledge about (possibly non-linear) dependencies between features is crucial. Low-dimensional data can be visualized to detect dependence. For high-dimensional data, several other measures of dependence in addition to PCC can be used.
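A classic worked example, standard in statistics rather than taken from the paper, makes the converse failure concrete: y = x² on symmetric inputs has a Pearson correlation near zero with x, yet the two are perfectly dependent.

```python
# Zero Pearson correlation does not imply independence.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = x ** 2  # y is fully determined by x

print(f"Pearson correlation: {np.corrcoef(x, y)[0, 1]:.4f}")  # ~ 0.00
# Yet knowing x pins down y exactly: independence clearly fails.
```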
Misunderstanding conditional interpretation. Conditional variants to estimate feature effects and importance scores require a different interpretation. While conditional variants for feature effects avoid model extrapolations, these methods answer a different question. Interpretation methods that perturb features independently of others also yield an unconditional interpretation.
Conditional variants do not replace values independently of other features, but in such a way that they conform to the conditional distribution. This changes the interpretation as the effects of all dependent features become entangled. The safest option would be to remove dependent features, but this is usually infeasible in practice.
When features are highly dependent and conditional effects and importance scores are used, the practitioner has to be aware of the distinct interpretation. Currently, no approach allows us to simultaneously avoid model extrapolations and to allow a conditional interpretation of effects and importance scores for dependent features.
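The distinction can be sketched in code. The comparison below contrasts unconditional permutation of a feature with permutation within quantile bins of a correlated feature; the binning scheme is a crude, assumed stand-in for proper conditional sampling, and the dataset is synthetic.

```python
# Unconditional vs. (crudely) conditional permutation importance for a dependent feature.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.3, size=n)   # x2 strongly depends on x1
y = x1 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x1, x2])
model = RandomForestRegressor(random_state=0).fit(X, y)
base = r2_score(y, model.predict(X))

def drop_after_replacing_x2(col_values):
    Xp = X.copy()
    Xp[:, 1] = col_values
    return base - r2_score(y, model.predict(Xp))

# Unconditional: shuffle x2 freely (creates unrealistic x1/x2 combinations).
uncond = drop_after_replacing_x2(rng.permutation(X[:, 1]))

# Conditional: shuffle x2 only within quantile bins of x1 (stays near realistic values).
bins = np.digitize(x1, np.quantile(x1, np.linspace(0, 1, 11)[1:-1]))
cond_col = X[:, 1].copy()
for b in np.unique(bins):
    idx = np.where(bins == b)[0]
    cond_col[idx] = cond_col[rng.permutation(idx)]
cond = drop_after_replacing_x2(cond_col)

# The two numbers answer different questions and should be read differently.
print(f"unconditional drop: {uncond:.3f} | conditional drop: {cond:.3f}")
```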
Global interpretation methods can produce misleading interpretations when features interact. Many interpretation methods cannot separate interactions from main effects. Most methods that identify and visualize interactions are not able to identify higher-order interactions and interactions of dependent features.
There are some methods to deal with this, but further research is still warranted. In particular, existing solutions lack automatic detection and ranking of all interactions of a model, as well as ways to specify the type of modeled interaction.
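A small sketch shows why main-effect plots can miss interactions: with a pure interaction y = x1·x2 and symmetric inputs, the partial dependence of either feature alone is roughly flat even though the model fits well. The dataset is an illustrative assumption.

```python
# A pure interaction is invisible to a single-feature partial dependence plot.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = X[:, 0] * X[:, 1]  # pure interaction, no main effects
model = RandomForestRegressor(random_state=0).fit(X, y)

pd_result = partial_dependence(model, X, features=[0], grid_resolution=20)
# Range of the PD curve for feature 0 is near zero: the interaction is hidden.
print(f"PD range for x1: {np.ptp(pd_result['average']):.4f}")
```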
Due to the variance in the estimation process, interpretations of ML models can become misleading. When sampling techniques are used to approximate expected values, estimates vary, depending on the data used for the estimation. Furthermore, the obtained ML model is also a random variable, as it is generated on randomly sampled data and the inducing algorithm might contain stochastic components as well.
Hence, the model variance has to be taken into account. The true effect of a feature may be flat, but an effect might be detected algorithmically purely by chance, especially on smaller data. This effect could cancel out once averaged over multiple model fits. The researchers note that the uncertainty in feature effect methods has not been studied in detail.
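The phenomenon is easy to reproduce: refit the same model on bootstrap resamples of a small dataset and watch the importance score of a truly irrelevant feature fluctuate. The setup below, including the deliberately small sample, is an illustrative assumption.

```python
# Interpretation variance: importance of a flat feature fluctuates across refits.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X[:, 0] + rng.normal(scale=1.0, size=100)  # only feature 0 matters

importances = []
for seed in range(20):
    idx = rng.integers(0, len(X), size=len(X))       # bootstrap resample
    m = RandomForestRegressor(random_state=seed).fit(X[idx], y[idx])
    importances.append(m.feature_importances_[1])    # a truly irrelevant feature
print(f"importance of a flat feature: mean={np.mean(importances):.3f}, "
      f"sd={np.std(importances):.3f}")  # nonzero spread purely from variance
```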
It's a steep fall from the peak of inflated expectations to the trough of disillusionment. Getting things done in interpretable machine learning takes expertise and concerted effort.
Simultaneously testing the importance of multiple features will result in false-positive interpretations if the multiple comparisons problem (MCP) is ignored. MCP is well known in significance tests for linear models and similarly exists in testing for feature importance in ML.
For example, when simultaneously testing the importance of 50 features, even if all features are unimportant, the probability of observing at least one significantly important feature is 0.923. Multiple comparisons become even more problematic the higher-dimensional a dataset is. Since MCP is well known in statistics, the authors refer practitioners to existing overviews and discussions of alternative adjustment methods.
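The arithmetic behind the 0.923 figure is straightforward, and a Bonferroni-style correction, one standard remedy (the authors themselves defer to existing overviews for alternatives), brings the family-wise error rate back down:

```python
# With 50 independent tests at alpha = 0.05: P(at least one false positive) = 1 - 0.95**50.
alpha, m = 0.05, 50
p_any_false_positive = 1 - (1 - alpha) ** m
print(f"P(at least one spurious 'important' feature): {p_any_false_positive:.3f}")  # ~0.923

alpha_bonferroni = alpha / m  # test each feature at alpha/m instead
p_corrected = 1 - (1 - alpha_bonferroni) ** m
print(f"after Bonferroni correction: {p_corrected:.3f}")  # ~0.049
```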
Practitioners are often interested in causal insights into the underlying data-generating mechanisms, which IML methods, in general, do not provide. Common causal questions include the identification of causes and effects, predicting the effects of interventions, and answering counterfactual questions. In the search for answers, researchers can be tempted to interpret the result of IML methods from a causal perspective.
However, a causal interpretation of predictive models is often not possible. Standard supervised ML models are not designed to model causal relationships but to merely exploit associations. A model may, therefore, rely on the causes and effects of the target variable as well as on variables that help to reconstruct unobserved influences.
Consequently, the question of whether a variable is relevant to a predictive model does not directly indicate whether a variable is a cause, an effect, or does not stand in any causal relation to the target variable.
As the researchers note, the challenge of causal discovery and inference remains an open key issue in the field of machine learning. Careful research is required to make explicit under which assumptions what insight about the underlying data-generating mechanism can be gained by interpreting a machine learning model.
Molnar et al. offer an involved review of the pitfalls of global model-agnostic interpretation techniques for ML. Although, as they note, their list is far from complete, they cover common pitfalls that pose a particularly high risk.
They aim to encourage a more cautious approach when interpreting ML models in practice, to point practitioners to already (partially) available solutions, and to stimulate further research.
Contrasting this highly involved and detailed groundwork to high-level hype and trends on explainable AI may be instructive.