We investigate the impact of aliasing on generalization in Deep Convolutional Networks and show that data augmentation schemes alone are unable to prevent it due to structural limitations in widely used architectures. Drawing insights from frequency analysis theory, we take a closer look at ResNet and EfficientNet architectures and review the trade-off between aliasing and information loss in each of their major components. We show how to mitigate aliasing by inserting non-trainable low-pass filters at key locations, particularly where networks lack the capacity to learn them. These simple architectural changes lead to substantial improvements in generalization under i.i.d. conditions and, even more so, under out-of-distribution conditions, such as image classification under natural corruptions on ImageNet-C and few-shot learning on Meta-Dataset. State-of-the-art results are achieved on both datasets without introducing additional trainable parameters and using the default hyper-parameters of open source codebases.
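To make the core idea concrete, here is a minimal NumPy sketch of anti-aliased downsampling: a fixed (non-trainable) low-pass filter applied per channel before stride-2 subsampling. The function name `blur_pool`, the 3x3 binomial kernel, and the edge padding are illustrative assumptions, not the exact design used in the talk's architectures.

```python
import numpy as np

def blur_pool(x, stride=2):
    """Anti-aliased downsampling (illustrative sketch).

    Applies a fixed 3x3 binomial low-pass filter to each channel,
    then subsamples with the given stride. The filter has no
    trainable parameters.
    x: array of shape (H, W, C).
    """
    # Separable binomial kernel [1, 2, 1] x [1, 2, 1], normalized
    # so the filter preserves mean intensity.
    k1 = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k1, k1)
    kernel /= kernel.sum()

    h, w, _ = x.shape
    # Edge padding keeps the filtered output the same spatial size.
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i:i + h, j:j + w, :]
    # Subsample only after low-pass filtering, which suppresses the
    # high frequencies that strided subsampling would otherwise alias.
    return out[::stride, ::stride, :]
```

In this sketch, the filter replaces a plain strided operation: without the blur, frequencies above the post-subsampling Nyquist limit fold back into the signal as aliasing artifacts.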
*Text provided by the author.
Register in advance for this seminar:
After registering, you will receive a confirmation email with information about joining the seminar.
Cristina Nader Vasconcelos is a Research Software Engineer at Google Brain, Montreal. She obtained her PhD in Computer Graphics from PUC-Rio/Brazil, and she is interested in Machine Learning, Parallel Processing, Computer Graphics/Vision, and Speech Recognition. Before joining Google, she worked as a Senior Software Engineer at SoundHound, where she developed Deep Learning-based acoustic models, and as an Associate Professor at UFF/Brazil from 2010 to 2018.