For anyone who has ever wondered how AI makes decisions and improves upon them, a recent TechTarget post offers fantastic insight. The piece examines the difference between AI interpretability and explainability, and how people without scientific backgrounds can begin to understand how AI works.

The author draws a distinction between interpretability and explainability based on the kind of algorithm being examined and how its behavior is surfaced to users: “AI interpretability is applied to the process of examining rules-based algorithms, while explainability is applied to the process of examining black box deep learning algorithms.”

Chris Butler, Chief Product Architect at IPsoft, uses explainability to describe interactions with experts and reserves interpretability for interactions with other users. “Explainability is a measure of how much an expert who understands all parts of a system would make decisions about that system,” Butler told TechTarget.

As TechTarget explains: “The goal of explainability is an understanding so deep that the expert can build, maintain and even debug it, as well as use it. Interpretability, on the other hand, gauges how well any user — especially a non-expert — would understand why and what the system is doing either overall or for a particular case.”
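To make that distinction concrete, here is a minimal sketch, not taken from the article: the loan-approval scenario, feature names, thresholds, and functions below are all illustrative assumptions. It contrasts a rules-based decision whose logic any user can read directly (where interpretability applies) with an opaque scoring function standing in for a black-box model, where an explanation has to be produced after the fact, for instance by probing how the output reacts to small changes in each input.

```python
import math


def rules_based_approve(income, debt_ratio):
    """Interpretable: the decision logic is explicit and can be read as written."""
    # Rule 1: reject high debt ratios outright.
    if debt_ratio > 0.45:
        return False
    # Rule 2: otherwise approve if income clears a fixed threshold.
    return income >= 40_000


def black_box_score(income, debt_ratio):
    """Stands in for an opaque model (e.g. a deep network): only the inputs
    and the output score are visible to the user, not the reasoning."""
    return 1 / (1 + math.exp(-(0.00005 * income - 3.0 * debt_ratio)))


def explain_by_perturbation(score_fn, income, debt_ratio, eps=1e-3):
    """Post-hoc explainability: nudge each input slightly and report how much
    the score moves -- a crude sensitivity measure layered on top of the box."""
    base = score_fn(income, debt_ratio)
    return {
        "income": score_fn(income * (1 + eps), debt_ratio) - base,
        "debt_ratio": score_fn(income, debt_ratio * (1 + eps)) - base,
    }


if __name__ == "__main__":
    # The rules-based decision can be traced by any user just by reading the code.
    print("rules-based decision:", rules_based_approve(52_000, 0.30))
    # The black box needs an explanation step added on after the fact.
    print("black-box score:", round(black_box_score(52_000, 0.30), 3))
    print("sensitivities:", explain_by_perturbation(black_box_score, 52_000, 0.30))
```

The perturbation probe is only a stand-in for the richer explanation techniques an expert would use; the point is simply that the black box requires an extra explanatory step that the readable, rules-based logic does not.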

“Humans can’t possibly know everything about the way an organization works, and they don’t have to with interpretability,” Butler said.

To learn more about what experts have to say on this subject, be sure to read the TechTarget article.
