A method in Explainable AI (XAI) that attributes a machine learning model's output to its input features, based on principles from cooperative game theory. SHAP assigns each feature a share of the prediction by averaging that feature's marginal contribution across all possible combinations (coalitions) of the remaining features.
- S – SHapley: Refers to Shapley values, a concept from game theory that ensures fair distribution of outcomes among participants.
- A – Additive: Indicates that SHAP builds its explanation by summing the contributions of individual features on top of a base value; the exact decomposition is shown in the formula after this list.
- P – exPlanations: Emphasizes SHAP’s goal of providing clear, quantitative explanations for each prediction made by a model.
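Formally, for a prediction $f(x)$ over $M$ features, SHAP decomposes the model's output into a base value $\phi_0$ (the model's average output) plus one attribution $\phi_i$ per feature:

$$
f(x) = \phi_0 + \sum_{i=1}^{M} \phi_i
$$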
This technique is fundamental in domains where model transparency is crucial, such as finance, healthcare, and marketing optimization. It supports both local interpretability (individual predictions) and global model insights (overall feature importance), making it one of the most widely adopted tools in the XAI ecosystem.
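To make the local/global distinction concrete, here is a minimal sketch using the open-source `shap` Python package; the scikit-learn random forest and toy dataset are illustrative assumptions, not part of any prescribed workflow.

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit any model; tree ensembles pair well with shap's fast TreeExplainer.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # array of shape (n_samples, n_features)

# Local interpretability: per-feature attributions for a single prediction.
print(dict(zip(X.columns, shap_values[0].round(2))))

# Global insight: mean |SHAP value| per feature as an overall importance ranking.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```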
By quantifying the impact of each feature on a prediction, SHAP enables business users to understand and trust AI-driven decisions, whether for approving a loan, ranking a sales lead, or allocating ad spend.
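Continuing the illustrative sketch above, that trust rests on SHAP's local accuracy property: the base value plus the per-feature attributions reconstructs the model's output for any row, which is what lets a reviewer audit, say, an individual loan decision.

```python
# Local accuracy check: base value plus attributions reconstructs the output.
base_value = explainer.expected_value             # phi_0, the average prediction
reconstruction = base_value + shap_values[0].sum()
assert np.isclose(reconstruction, model.predict(X.iloc[[0]])[0])
```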