This may not only pose challenges to building trust in these models, but also affect our society as a whole. Kasia will talk about current ways of evaluating such opaque (‘black-box’) models and their caveats. She will then introduce the Local Interpretable Model-Agnostic Explanations (LIME) framework for explaining the predictions of black-box learners – including text- and image-based models – using breast cancer data as a case study. Finally, she’ll discuss why using frameworks such as LIME is important not just from a technical, but also an ethical, point of view.
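To give a flavour of the kind of explanation LIME produces, here is a minimal sketch in Python, assuming scikit-learn’s bundled breast cancer dataset and the open-source `lime` package; the exact data and tooling used in the talk may differ.

```python
# A minimal sketch of a LIME tabular explanation. Assumptions: the
# open-source `lime` package (pip install lime) and scikit-learn's
# breast cancer dataset -- not necessarily the talk's exact setup.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=42)

# An opaque ('black-box') learner: a random forest classifier.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# LIME perturbs a single instance and fits a simple, interpretable
# surrogate model around it, yielding per-feature weights that explain
# that one prediction.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)

# Each pair is (feature condition, local weight for/against the class).
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```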
- Kasia Kulma, Data Science Consultant
Complexity and interpretability of Machine Learning: what’s the trade-off?
In-person
Saturday 21st April 2018