Optimize ResNet Learning Process

Context

A key component of most image classification models is batch normalization. However, its behavior depends on the batch size, and it introduces interactions between the images within a batch. Alternatives proposed in recent years have shown accuracy limitations. Nonetheless, one technique used by Normalizer-Free (NF) networks seems promising: Adaptive Gradient Clipping (AGC). In this work, we compare the performance of a ResNet with and without a normalizer, as well as a ResNet using AGC.
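The core idea of AGC is to clip each gradient unit-wise, based on the ratio of its norm to the norm of the corresponding weight, rather than using a fixed global threshold. A minimal NumPy sketch of this rule follows; the function name and default values are illustrative (the defaults follow the published description of AGC) and are not taken from the project code:

```python
import numpy as np

def adaptive_gradient_clip(grad, weight, clip_factor=0.01, eps=1e-3):
    """Unit-wise Adaptive Gradient Clipping (AGC).

    Rescales each gradient so its unit-wise norm never exceeds
    clip_factor times the norm of the corresponding weight unit.
    """
    # Unit-wise norms: one norm per output unit (row), kept broadcastable.
    w_norm = np.maximum(np.linalg.norm(weight, axis=-1, keepdims=True), eps)
    g_norm = np.linalg.norm(grad, axis=-1, keepdims=True)
    max_norm = clip_factor * w_norm
    # Rescale only where the gradient norm exceeds the allowed maximum.
    scale = np.where(g_norm > max_norm,
                     max_norm / np.maximum(g_norm, 1e-12),
                     1.0)
    return grad * scale

# Example: a large gradient is scaled down; a small one passes through.
w = np.ones((4, 8))
g_big = np.full((4, 8), 10.0)
g_small = np.full((4, 8), 1e-4)
clipped_big = adaptive_gradient_clip(g_big, w)
clipped_small = adaptive_gradient_clip(g_small, w)
```

In a training loop, this rule would be applied to each layer's gradients between backpropagation and the optimizer update, which is what allows the network to train stably without batch normalization.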

The project

The idea of this project was to perform an ablation study of Adaptive Gradient Clipping applied to ResNet architectures. We compared ResNets using AGC, ResNets using batch normalization, and ResNets without any normalizer.

For more details, the final term paper of this study is available below.

Stack

Python: TensorFlow, Keras
