We talk a lot about explainable and interpretable AI in the office. Interpretability is important for building trust (and is very useful when debugging your model as well!), but it is also very domain specific.
This article discusses the problems with post-hoc explanations. It proposes turning to inherently interpretable models once you have verified that machine learning can produce reasonable results. This suggests that ML is still very much a process that requires human input!
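To make that workflow concrete, here is a minimal sketch in Python using scikit-learn and its built-in breast-cancer dataset (both my choices, not the paper's): first confirm that a black-box model can learn the problem at all, then fit an inherently interpretable model whose reasoning can be read directly instead of explained after the fact. The shallow decision tree is just a stand-in for the more sophisticated interpretable models the paper points to.

```python
# Sketch of the "verify first, then go interpretable" workflow.
# Assumes scikit-learn; dataset and model choices are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=0, stratify=y
)

# Step 1: a black-box baseline tells us the problem is learnable at all.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(f"black-box accuracy:     {black_box.score(X_test, y_test):.3f}")

# Step 2: an inherently interpretable model (a shallow decision tree here),
# whose reasoning can be inspected directly -- no post-hoc explainer needed.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    X_train, y_train
)
print(f"interpretable accuracy: {interpretable.score(X_test, y_test):.3f}")

# The model *is* the explanation: print the learned decision rules.
print(export_text(interpretable,
                  feature_names=list(load_breast_cancer().feature_names)))
```

If the interpretable model's accuracy is close to the black box's, the paper argues there is little reason to accept the black box plus an approximate explanation of it.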
(Also posted on my LinkedIn feed)
blog.acolyer.org
It’s pretty clear from the title alone what Cynthia Rudin would like us to do! The paper is a mix of technical and philosophical arguments and comes with two main takeaways for me: firstly, a sharpening of my understanding of the difference between explainability and interpretability, and why the former may be problematic; and secondly some great pointers to techniques for creating truly interpretable models.