10 September 2018: To improve generalization, data augmentation outperforms explicit regularization
New publication:
Hernández-García A and König P (2018). Data augmentation instead of explicit regularization. arXiv:1806.03852v2 [cs.CV] (preprint).
Modern artificial neural networks achieve impressive results with models of very large capacity. To improve generalization, however, explicit regularization techniques such as weight decay and dropout are typically applied, which reduce the model's effective capacity.
Here, Hernández-García and König systematically analyze the role of data augmentation in deep neural networks for object recognition, investigating whether explicit regularization techniques can be replaced by this alternative, which does not sacrifice model capacity.
The findings challenge the usefulness of weight decay and dropout, despite their near-ubiquity in deep learning: data augmentation alone provides the same generalization benefits without wasting capacity, and it has several other desirable properties.
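To make the contrast concrete, here is a minimal sketch (not the authors' code) of the two training setups in PyTorch/torchvision. The model architecture, hyperparameter values, and dataset choice are illustrative assumptions, not details taken from the publication.

```python
# Illustrative sketch: explicit regularization vs. data augmentation.
# Model, hyperparameters, and dataset are assumptions for demonstration.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# Setup A: explicit regularization -- dropout inside the model plus
# weight decay in the optimizer, no input transformations.
model_a = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),                       # explicit regularizer
    nn.Linear(512, 10),
)
opt_a = torch.optim.SGD(model_a.parameters(), lr=0.01,
                        momentum=0.9, weight_decay=5e-4)  # explicit regularizer

# Setup B: no explicit regularization -- the full model capacity is kept,
# and generalization is driven instead by label-preserving augmentation
# of the training images.
augment = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=augment)
model_b = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 512),
    nn.ReLU(),                               # no dropout
    nn.Linear(512, 10),
)
opt_b = torch.optim.SGD(model_b.parameters(), lr=0.01,
                        momentum=0.9)        # no weight decay
```

The key design difference: setup A shrinks the effective capacity of the network itself, whereas setup B leaves the network untouched and enlarges the training distribution instead.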