JP van Oosten

Post-hoc explanations and human effort

Jan 9, 2020

We talk a lot about explainable and interpretable AI in the office. Interpretability is important for gaining trust (and is very useful when debugging your model as well!), but it is also very domain-specific.

This article discusses the problems with post-hoc explanations. It proposes looking into inherently interpretable models once you have verified that machine learning can produce reasonable results. This suggests that ML is still very much a process that requires human input!
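To make the contrast concrete, here is a minimal sketch (assuming scikit-learn and the iris toy dataset, neither of which comes from the article) of an inherently interpretable model: a shallow decision tree whose learned rules can be read directly, rather than a black box explained after the fact.

```python
# Minimal sketch: an inherently interpretable model (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Keeping the tree shallow is what keeps it readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# The model *is* the explanation: print the learned decision rules.
print(export_text(tree, feature_names=data.feature_names))
```

The human effort comes in here: someone with domain knowledge still has to judge whether rules like these actually make sense for the problem at hand.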

(Also posted on my LinkedIn feed)