Explainability and Interpretability

Explainability of decisions produced by machines is one of the hottest topics these days (see XAI). Explainable AI typically keeps a complicated black-box model for making decisions and adds a second, post-hoc model built to explain what the first model is doing. Interpretable AI instead concentrates on models that can themselves be directly inspected and understood by human experts. The recent paper “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead” (Rudin, 2019) draws this distinction between explainability and interpretability and argues that the former can be problematic in high-stakes settings.
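To make the post-hoc idea concrete, here is a minimal sketch in plain Python. Everything in it is hypothetical: `black_box` stands in for an opaque deployed model, and `fit_stump_surrogate` fits the simplest possible surrogate (a single-feature threshold rule) to mimic the black box's decisions on sampled inputs. The surrogate's rule is the "explanation"; its fidelity score shows how well it actually tracks the black box.

```python
import random

random.seed(0)

# Hypothetical "black box" classifier: we treat its internals as opaque.
def black_box(income, debt):
    # Illustrative hidden decision rule, not any real deployed model.
    return 1 if income - 1.5 * debt > 2.0 else 0

# Post-hoc surrogate: search for the single-feature threshold rule that
# best mimics the black box's decisions on the sampled points.
def fit_stump_surrogate(points, labels):
    best = None  # (fidelity, feature_index, threshold)
    for f in range(2):
        for t in sorted({p[f] for p in points}):
            acc = sum((p[f] > t) == bool(y)
                      for p, y in zip(points, labels)) / len(points)
            best = max(best or (0.0, f, t), (acc, f, t))
    return best

# Sample inputs and query the black box for its decisions.
points = [(random.uniform(0, 10), random.uniform(0, 5)) for _ in range(200)]
labels = [black_box(x, d) for x, d in points]

acc, feat, thr = fit_stump_surrogate(points, labels)
name = ["income", "debt"][feat]
print(f"surrogate: predict 1 when {name} > {thr:.2f} (fidelity {acc:.0%})")
```

Note that the surrogate's fidelity is typically below 100%: the simple rule only approximates the black box, which is exactly the kind of gap Rudin's paper warns about. The interpretable-model alternative would be to train a model this simple directly on the data and deploy it, so that the model *is* its own explanation.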

This entry was posted in Artificial Intelligence, Explanations.
