Comparing Strategies on Explainability of Machine Learning Models with Belief-Rule-Based Expert Systems

Citation formats

Standard

Comparing Strategies on Explainability of Machine Learning Models with Belief-Rule-Based Expert Systems: A Case Study on Lending Decisions. / Sachan, Swati; Yang, Jian-Bo; Xu, Dong-Ling.

2019. Paper presented at 10th ANNUAL EUROPEAN DECISION SCIENCES CONFERENCE, Nottingham, United Kingdom.

Research output: Contribution to conference › Paper

Harvard

Sachan, S, Yang, J-B & Xu, D-L 2019, 'Comparing Strategies on Explainability of Machine Learning Models with Belief-Rule-Based Expert Systems: A Case Study on Lending Decisions', Paper presented at 10th ANNUAL EUROPEAN DECISION SCIENCES CONFERENCE, Nottingham, United Kingdom, 2/06/19 - 5/06/19.

Bibtex

@conference{15b0519e92f64344a48abf4c34c0f392,
title = "Comparing Strategies on Explainability of Machine Learning Models with Belief-Rule-Based Expert Systems: A Case Study on Lending Decisions",
abstract = "The idea to explain the decisions of artificial intelligence (AI) model started in the 1970s to test and engender user trust in expert systems. However, specular advances in computation power and improvements in optimization algorithms shifted the focus towards the accuracy, while the ability to explain the decision has taken a back seat. In future, the decision-making process would be partially or completely dependent on machine learning (ML) algorithms which require humans to trust these algorithms in order to accept those decisions. Several explainable methods and strategies are proposed in the quest to explain the output of black-box ML models. This research compares the explainable machine learning method with the expert system based on belief-rule-base (BRB). Unlike traditional expert system, BRB has the competency to learn from the data and can include knowledge of domain-expert. It can explain the single decision and chain of events leading to the decision. The black-box ML models use local interpretability methods to explain a specific decision and global interpretability method to understand entire model behaviour. In this research, the explainability of mortgage loan decision was compared. It was found that model-agnostic method Shapley provided consistent explanation compared to LIME (local interpretable model-agonistic explanation) for high-performance models such as deep-neural-network, random forest and XGBoost. The global interpretation method, feature importance has issue of dividing the importance among two correlated features. Compared to BRB, these methods cannot reveal the true decision-making process and chain of events leading to a decision.",
author = "Swati Sachan and Jian-Bo Yang and Dong-Ling Xu",
year = "2019",
month = "6",
day = "4",
language = "English",
note = "10thANNUAL EUROPEAN DECISION SCIENCES CONFERENCE ; Conference date: 02-06-2019 Through 05-06-2019",
url = "http://www.edsi-conference.org/",

}

RIS

TY - CONF

T1 - Comparing Strategies on Explainability of Machine Learning Models with Belief-Rule-Based Expert Systems

T2 - A Case Study on Lending Decisions

AU - Sachan, Swati

AU - Yang, Jian-Bo

AU - Xu, Dong-Ling

PY - 2019/6/4

Y1 - 2019/6/4

N2 - The idea of explaining the decisions of artificial intelligence (AI) models started in the 1970s as a way to test and engender user trust in expert systems. However, spectacular advances in computational power and improvements in optimization algorithms shifted the focus towards accuracy, while the ability to explain decisions took a back seat. In the future, decision-making processes will be partially or completely dependent on machine learning (ML) algorithms, which requires humans to trust these algorithms in order to accept their decisions. Several explainability methods and strategies have been proposed in the quest to explain the output of black-box ML models. This research compares explainable machine learning methods with an expert system based on a belief-rule-base (BRB). Unlike a traditional expert system, a BRB can learn from data and incorporate domain-expert knowledge. It can explain a single decision and the chain of events leading to that decision. Black-box ML models rely on local interpretability methods to explain a specific decision and on global interpretability methods to understand the entire model's behaviour. In this research, the explainability of mortgage loan decisions was compared. It was found that the model-agnostic Shapley method provided more consistent explanations than LIME (local interpretable model-agnostic explanations) for high-performance models such as deep neural networks, random forest and XGBoost. The global interpretation method, feature importance, has the issue of dividing importance between two correlated features. Compared to BRB, these methods cannot reveal the true decision-making process and the chain of events leading to a decision.

AB - The idea of explaining the decisions of artificial intelligence (AI) models started in the 1970s as a way to test and engender user trust in expert systems. However, spectacular advances in computational power and improvements in optimization algorithms shifted the focus towards accuracy, while the ability to explain decisions took a back seat. In the future, decision-making processes will be partially or completely dependent on machine learning (ML) algorithms, which requires humans to trust these algorithms in order to accept their decisions. Several explainability methods and strategies have been proposed in the quest to explain the output of black-box ML models. This research compares explainable machine learning methods with an expert system based on a belief-rule-base (BRB). Unlike a traditional expert system, a BRB can learn from data and incorporate domain-expert knowledge. It can explain a single decision and the chain of events leading to that decision. Black-box ML models rely on local interpretability methods to explain a specific decision and on global interpretability methods to understand the entire model's behaviour. In this research, the explainability of mortgage loan decisions was compared. It was found that the model-agnostic Shapley method provided more consistent explanations than LIME (local interpretable model-agnostic explanations) for high-performance models such as deep neural networks, random forest and XGBoost. The global interpretation method, feature importance, has the issue of dividing importance between two correlated features. Compared to BRB, these methods cannot reveal the true decision-making process and the chain of events leading to a decision.

M3 - Paper

ER -
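
To make the comparison described in the abstract more concrete, the following minimal sketch contrasts a local SHAP explanation, a local LIME explanation, and global impurity-based feature importance for a single hypothetical loan application scored by a random forest. It is an illustration only, not the authors' experiment: the synthetic data, the feature names, and the use of the shap, lime and scikit-learn packages are assumptions made for this sketch.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical mortgage data: features, labels and model are made up for illustration only.
rng = np.random.default_rng(0)
feature_names = ["income", "loan_amount", "credit_history", "debt_ratio"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic approve/deny label
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Local explanation with SHAP for the first application.
# The structure of the returned values differs across shap versions
# (a list of per-class arrays in older releases, a single array in newer ones).
shap_values = shap.TreeExplainer(model).shap_values(X[:1])
print("SHAP values:", shap_values)

# Local explanation with LIME for the same application.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      class_names=["deny", "approve"],
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print("LIME explanation:", lime_exp.as_list())

# Global view: impurity-based feature importance, which can split credit
# between correlated features, the limitation noted in the abstract.
print("Feature importance:", dict(zip(feature_names, model.feature_importances_)))

A belief-rule-based system, by contrast, would expose the activated rules themselves, which is the kind of chain-of-events explanation the paper argues these post-hoc methods cannot provide.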