Data scientists are deeply invested in producing trustworthy insights, but as AI becomes more powerful and autonomous, we’re confronted with the question: how do we know our insights are truly reliable?

While explainable AI techniques have helped make black-box models more interpretable, interpretability alone doesn’t guarantee trustworthiness. In this talk, we’ll explore why trustworthy data science requires not just explanations, but an understanding of causality — the ability to reason about interventions and make decisions based on how the world actually works, not just how it correlates.
Through real-world examples, from churn prediction to replacing A/B testing with simulation, we’ll show how causal inference equips data scientists to ask better questions. You’ll leave with a clearer understanding of when to use explainability tools, when to reach for causal methods, and how to integrate both into your everyday workflow.

Key Takeaways:
– Understand the difference between explainability and causality — and why both matter for trustworthy AI
– Learn when and how to use explainable AI techniques, and when to avoid them
– Discover how to build and reason with causal diagrams (DAGs) to clarify assumptions and avoid hidden confounders
– Gain a practical intro to counterfactual reasoning and how it strengthens decision-making beyond correlation
– Explore hands-on tools like DoWhy to implement causal analysis in real-world workflows
– Recognise common pitfalls in causal thinking, including bias and bad graph assumptions
– See real-world examples of causal inference in action — from churn modelling to feature releases
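The gap between correlation and intervention in the takeaways above can be sketched with a tiny simulation. This is a hypothetical churn-style example using only the standard library; the variable names and effect sizes are invented for illustration. A confounder Z drives both the treatment T and the outcome Y, so the naive correlational estimate of T's effect is inflated, while stratifying on Z (the backdoor adjustment) recovers the true effect:

```python
import random

random.seed(0)

def simulate(n=100_000, true_effect=0.10):
    """Hypothetical data: Z (e.g. customer tenure) raises both the chance
    of getting a retention offer T and the chance of staying Y."""
    rows = []
    for _ in range(n):
        z = random.random() < 0.5            # confounder
        t = random.random() < (0.8 if z else 0.2)  # Z makes T more likely
        y = random.random() < 0.3 + 0.4 * z + true_effect * t
        rows.append((z, t, y))
    return rows

def mean_y(rows, t_val, z_val=None):
    sel = [y for z, t, y in rows
           if t == t_val and (z_val is None or z == z_val)]
    return sum(sel) / len(sel)

rows = simulate()

# Naive (confounded) estimate: raw difference in Y between treated/untreated
naive = mean_y(rows, True) - mean_y(rows, False)

# Backdoor-adjusted estimate: compare within each Z stratum, average over P(Z)
p_z = sum(z for z, _, _ in rows) / len(rows)
adjusted = sum(
    (mean_y(rows, True, zv) - mean_y(rows, False, zv))
    * (p_z if zv else 1 - p_z)
    for zv in (True, False)
)

print(f"naive estimate:    {naive:.3f}")     # far above the true 0.10
print(f"adjusted estimate: {adjusted:.3f}")  # close to the true 0.10
```

Libraries such as DoWhy automate exactly this identify-then-estimate step once you encode the DAG explicitly.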

Technical Level of Session: Technical practitioner
