Analysing the explainability of credit scoring machine learning models using Shapley Additive Explanations approach
| dc.contributor.advisor | Modipa, T. I. | |
| dc.contributor.author | Thoka, Merriam Ramakgahlele | |
| dc.date.accessioned | 2026-03-12T10:46:28Z | |
| dc.date.available | 2026-03-12T10:46:28Z | |
| dc.date.issued | 2025 | |
| dc.description | Thesis (M. Sc. (eScience)) -- University of Limpopo, 2025 | en_US |
| dc.description.abstract | In recent years, machine learning models have gained popularity in credit scoring applications due to their ability to handle large volumes of data and capture complex patterns. However, the lack of transparency and interpretability in these models raises concerns regarding their trustworthiness and fairness. This study addresses this concern by employing the Shapley Additive Explanations (SHAP) approach to analyse the explainability of credit scoring machine learning models. The Lending Club dataset, a comprehensive collection of loan applications and associated attributes, is utilized for this analysis. The methodology involves training and evaluating various credit scoring models, including Random Forest, XGBoost, and CatBoost, and generating SHAP values to quantify the importance of input features in the prediction process. The results reveal valuable insights into the factors influencing credit scoring decisions and provide a holistic understanding of the models' behaviour. By utilizing SHAP explanations, we gain interpretability and can identify the features that most strongly influence credit scoring outcomes. This knowledge can help stakeholders, including lenders and regulators, make informed decisions and improve the transparency and accountability of credit scoring systems. The findings of this study contribute to the expanding field of explainable artificial intelligence (AI) and its application in the domain of credit risk management. By enhancing the explainability of credit scoring models, we aim to increase trust, fairness, and accountability in the lending process, ultimately shaping a more inclusive and responsible financial ecosystem. | en_US |
| dc.format.extent | xii, 59 leaves | en_US |
| dc.identifier.uri | http://hdl.handle.net/10386/5379 | |
| dc.language.iso | en | en_US |
| dc.relation.requires | | en_US |
| dc.subject | Explainability | en_US |
| dc.subject | Credit scoring | en_US |
| dc.subject | Machine learning models | en_US |
| dc.subject | Shapley additive explanations | en_US |
| dc.subject.lcsh | Machine learning | en_US |
| dc.subject.lcsh | Credit scoring systems | en_US |
| dc.title | Analysing the explainability of credit scoring machine learning models using Shapley Additive Explanations approach | en_US |
| dc.type | Thesis | en_US |
