Deep learning has become increasingly popular in the past few years. While deep learning models may show promising results, their lack of interpretability means that when a modern deep network fails, practitioners often cannot determine why the model made a wrong prediction. Consequently, stakeholders may quickly lose trust in such systems.
To overcome this potential barrier to the mass adoption of modern Artificial Intelligence systems, various techniques have been developed to interpret the uninterpretable.
In this article, we introduce one such method, Gradient-weighted Class Activation Mapping (Grad-CAM), which explains the decisions of modern Convolutional Neural Networks (CNNs) by highlighting the regions of an input image that contribute most to a prediction.
Code snippets are provided for you to follow along, and they are available in a Google Colab notebook.
Feel free to reach out to me at hongnan@aisingapore.org for clarifications.