Machine learning models can be excellent tools for diagnosis and hypothesis generation, but they’re hard to analyze and understand. We’ll start with a bird’s-eye view of machine learning, then zoom in on ML model explainability. In particular, we’ll discuss SHAP (SHapley Additive exPlanations) values: how they can turn black-box models into glass-box models, how they’re used in the Ki process, and, to finish up, some concrete examples.