Interpretability techniques are normally studied in isolation. We explore the powerful interfaces that arise when you combine them, and the rich structure of this combinatorial space. With the growing success of neural networks, there is a corresponding need to be able to explain their decisions, including building confidence about how they will behave in the real world and detecting model bias.
![The Building Blocks of Interpretability](https://distill.pub/2018/building-blocks/thumbnail.jpg)