Improving Classification By Visualising Activations

Subhaditya Mukherjee
6 min read · Aug 19, 2022

Note: This is a slightly advanced article. If you are not comfortable with training neural networks, this is probably not for you yet. Start here instead.

· Intro
· The Objective and Data
· Code
· Training
· Hooks
· Plotting Activations
· What’s next?
· Fin

So you want to train a neural network to classify images. Woah. That’s awesome! How well did it do? Did you get a good score? Oh? You want to do better? I hear you. What if you could see what the network sees when it makes its choice? That would help you understand how to make it perform better, right? Read on!

A few years ago, a paper titled “Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization” by Selvaraju et al. showed how to visualise the activation maps of a trained CNN by looking at the gradients flowing into its final convolutional layer. This post will show you how to use that technique for your own needs.

Note: We will be using PyTorch and the fast.ai library. The concepts stay the same, though, so you should be able to adapt the code to any other library.

The Objective and Data

Before we get to the code, what exactly are we trying to achieve? In short, we first want to train a network such as the “resnet34” architecture on any kind of data. In…

