Kasia Kulma


Talk Abstract: There's a trade-off between the complexity and the interpretability of machine learning algorithms. This may pose challenges in building trust in those models, but it also affects our society as a whole. Kasia will talk about current ways of evaluating those opaque ('black-box') models and their caveats. Then she'll introduce the Local Interpretable Model-Agnostic Explanations (LIME) framework for explaining the predictions of black-box learners, including text- and image-based models, using breast cancer data as a specific case scenario. Finally, she'll discuss why using frameworks such as LIME is important not just from a technical, but also an ethical, point of view.
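At its core, LIME explains one prediction by perturbing the instance, querying the black box, weighting the perturbed samples by proximity, and fitting a simple weighted linear model locally. A minimal sketch of that recipe, assuming only NumPy and a made-up black-box function (not the talk's breast cancer example or the official `lime` package):

```python
import numpy as np

# Hypothetical black-box model: a nonlinear function of two features.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(X[:, 0] ** 2 - X[:, 1])))

rng = np.random.default_rng(0)
x0 = np.array([1.0, 0.5])                      # instance to explain

# 1. Perturb the instance with Gaussian noise.
Z = x0 + rng.normal(scale=0.5, size=(500, 2))

# 2. Query the black box on the perturbed samples.
y = black_box(Z)

# 3. Weight samples by proximity to x0 (exponential kernel).
d = np.linalg.norm(Z - x0, axis=1)
w = np.exp(-(d ** 2) / (2 * 0.5 ** 2))

# 4. Fit a weighted linear surrogate: its coefficients are the explanation.
A = np.hstack([Z - x0, np.ones((len(Z), 1))])  # centred features + intercept
W = np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(A * W, y * W.ravel(), rcond=None)

print(coef[:2])                                # local feature attributions
```

The two coefficients approximate how each feature pushes this particular prediction up or down near x0, which is exactly the kind of local, model-agnostic explanation the talk covers.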

Bio: Kasia Kulma holds a PhD in evolutionary biology from Uppsala University and is now a Data Scientist at Aviva. She has experience in building recommender systems, customer segmentation, and web applications, and is currently leading an NLP project. She is the author of the blog R-tastic and a mentor at R-Ladies London. She is an R enthusiast interested in data (science) ethics, evidence-based medicine, and machine learning modelling in general.


Saturday, April 21st, 2018
9:00 am – 5:00 pm
Data Science Festival Mainstage (ballot ticket only)
CodeNode, 10 South Pl, London EC2M 7EB