We all love linear regression for its interpretability: increase the living area by one square meter, and the predicted rent goes up by 8 euros. A human can easily understand why this model made a certain prediction.
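To make that concrete, here is a minimal sketch (with made-up toy numbers, not data from the talk) of reading that coefficient off a fitted scikit-learn model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up toy data: living area in square meters vs. monthly rent in euros
area = np.array([[30], [45], [60], [75], [90]])
rent = np.array([540, 660, 780, 900, 1020])

model = LinearRegression().fit(area, rent)

# The coefficient is the whole explanation: euros of rent per extra square meter
print(f"Rent increase per extra square meter: {model.coef_[0]:.0f} euros")
print(f"Base rent (intercept): {model.intercept_:.0f} euros")
```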

Complex machine learning models like tree ensembles or neural networks usually make better predictions, but this comes at a price: these models are hard to understand.

In this talk, we'll look at a few methods to pry open these models and gain some insights.

Specifically, the topics covered are:

  • What makes a model interpretable?
    • Linear models, trees, decision rules
  • The SIPA framework (Sampling, Intervention, Prediction, Aggregation) for making models interpretable again
  • Model-agnostic methods for interpretability (sketched in code after this list)
    • Partial dependence plots (PDPs)
    • Individual conditional expectations (ICEs)
  • Example-based explanations
  • The future of machine learning
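As a rough illustration of how the sampling, intervention, prediction, and aggregation steps produce ICE curves and a PDP, here is a minimal sketch; it assumes only a fitted model exposing a `.predict` method and is not code from the talk:

```python
import numpy as np

def ice_and_pdp(model, X, feature_idx, grid):
    """Compute ICE curves and their average, the PDP, for a single feature.

    model       : any fitted model exposing .predict(X) (assumed interface)
    X           : 2-D numpy array of observations (the sample)
    feature_idx : column index of the feature to vary
    grid        : 1-D array of values to substitute for that feature
    """
    ice = np.empty((X.shape[0], len(grid)))
    for j, value in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature_idx] = value        # intervention: force the feature to this value
        ice[:, j] = model.predict(X_mod)     # prediction: re-predict every observation
    pdp = ice.mean(axis=0)                   # aggregation: the PDP is the mean ICE curve
    return ice, pdp
```

Each row of `ice` is one observation's ICE curve over the grid; averaging them pointwise gives the partial dependence curve.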

Alexander Engelhardt

Affiliation: Engelhardt Data Science GmbH

Statistician turned freelance data scientist, based in Munich.

Caught the entrepreneurial bug; now experimenting with product-based businesses and productized services.

Visit the speaker at: Twitter | Github | Homepage