
Article: How To Use Grad-CAM To Interpret Your Convolutional Neural Network?

Gao Hongnan
(@gao-hongnan)
Eminent Member Moderator
Joined: 1 year ago
Posts: 28
Topic starter  

Deep learning has become increasingly popular in the past few years. While deep learning models may show promising results, their lack of interpretability means that when a modern deep network fails, practitioners cannot determine why the model made a wrong prediction. Consequently, stakeholders may quickly lose trust in such systems.

To overcome this potential barrier to the mass adoption of modern Artificial Intelligence (AI) systems, various techniques have been developed to interpret the uninterpretable.

In this article, we introduce one such method, Gradient-weighted Class Activation Mapping (Grad-CAM), which is used to explain how modern Convolutional Neural Networks (CNNs) make their decisions.

There are code snippets for you to follow along, stored in a Google Colab notebook; a minimal sketch of the idea is shown below.
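
To give a flavour of what the notebook covers, here is a minimal Grad-CAM sketch in PyTorch. It is only an illustration under assumed choices (a pretrained torchvision ResNet-50, its `layer4` block as the target layer, and an ImageNet-preprocessed input tensor); the article's notebook may implement it differently.

```python
# Minimal Grad-CAM sketch (illustrative; model and layer choice are assumptions).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(pretrained=True).eval()

store = {}

def forward_hook(module, inputs, output):
    # Keep the feature maps of the last convolutional block and register a
    # tensor hook to catch the gradient flowing back through them.
    store["activations"] = output.detach()
    output.register_hook(lambda grad: store.update(gradients=grad.detach()))

model.layer4.register_forward_hook(forward_hook)

def grad_cam(image, class_idx=None):
    """Return an (H, W) heatmap in [0, 1] for the chosen class."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # alpha_k: global-average-pool the gradients over the spatial dimensions.
    weights = store["gradients"].mean(dim=(2, 3), keepdim=True)  # (1, K, 1, 1)
    # Weighted sum of the activation maps, followed by ReLU.
    cam = F.relu((weights * store["activations"]).sum(dim=1, keepdim=True))
    # Upsample to the input resolution and rescale to [0, 1].
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze()

# Hypothetical usage: heatmap = grad_cam(preprocessed_image)
```

The resulting heatmap can be overlaid on the input photo to produce visualisations like the cat-and-dog example below.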

Feel free to reach out to me at hongnan@aisingapore.org for clarifications.

[Image: Grad-CAM heatmaps for a cat-and-dog example]
Gao Hongnan
(@gao-hongnan)
Eminent Member Moderator
Joined: 1 year ago
Posts: 28
Topic starter  

The article link: Click Me


   