Compression of DNNs using magnitude pruning and nonlinear information bottleneck training

Morten Østergaard Nielsen, Jan Østergaard, Jesper Jensen, Zheng-Hua Tan

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review

1 Citation (Scopus)

Abstract

As Deep Neural Networks (DNNs) have achieved state-of-the-art performance in various scientific fields and applications, the memory and computational complexity of DNNs have increased concurrently. The increased complexity required by DNNs prohibits them from running on platforms with limited computational resources. This has sparked a renewed interest in parameter pruning. We propose to replace the standard cross-entropy objective – typically used in classification problems – with the Nonlinear Information Bottleneck (NIB) objective to improve the accuracy of a pruned network. We demonstrate that our proposal outperforms cross-entropy combined with global magnitude pruning for high compression rates on VGG-nets trained on CIFAR10. With approximately 97% of the parameters pruned, we obtain accuracies of 87.63% and 88.22% for VGG-16 and VGG-19, respectively, where the baseline accuracy is 91.5% for the unpruned networks. We observe that the majority of biases are pruned completely, and that pruning parameters globally outperforms layer-wise pruning.
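The paper's code is not reproduced here, but the first ingredient named in the abstract can be sketched. Below is a minimal PyTorch illustration of global magnitude pruning, where a single threshold is computed over all parameters of the network; the function name, the hyperparameter values, and the use of torchvision's VGG-16 are assumptions for the example, not the authors' implementation.

    import torch
    import torch.nn as nn

    def global_magnitude_prune(model: nn.Module, sparsity: float) -> None:
        """Zero the smallest-magnitude parameters with one network-wide threshold.

        Global pruning ranks all parameters (weights and biases) together;
        layer-wise pruning would instead use a separate threshold per layer.
        """
        # Collect the magnitudes of every parameter in the network.
        magnitudes = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
        # The k-th smallest magnitude becomes the single global threshold.
        k = max(1, int(sparsity * magnitudes.numel()))
        threshold = magnitudes.kthvalue(k).values
        # Zero every parameter whose magnitude falls below the threshold.
        with torch.no_grad():
            for p in model.parameters():
                p.mul_((p.abs() > threshold).float())

    if __name__ == "__main__":
        from torchvision.models import vgg16
        model = vgg16(num_classes=10)                 # CIFAR10 setting from the abstract
        global_magnitude_prune(model, sparsity=0.97)  # ~97% of parameters pruned

For the training objective, nonlinear-IB-style methods pair a term that rewards label information with a penalty on I(X; Z). The sketch below uses cross-entropy as a stand-in for the label term and a kernel-based pairwise upper bound on I(X; Z) computed over the batch of bottleneck activations; beta and sigma are assumed hyperparameters, and this illustrates the general approach rather than the exact objective used in the paper.

    import math

    import torch
    import torch.nn.functional as F

    def nib_style_loss(logits, labels, z, beta=0.01, sigma=1.0):
        """IB-style objective: fit the labels while penalizing I(X; Z).

        Cross-entropy stands in for the -I(Y; Z) term; the penalty is a
        kernel-based pairwise upper bound on I(X; Z), assuming Gaussian
        kernels of width sigma on the bottleneck activations z.
        """
        n = z.shape[0]
        sq_dists = torch.cdist(z, z).pow(2)  # ||z_i - z_j||^2 for the batch
        # I(X; Z) <= mean_i [ log n - logsumexp_j( -d_ij / (2 sigma^2) ) ]
        mi_bound = (math.log(n)
                    - torch.logsumexp(-sq_dists / (2 * sigma ** 2), dim=1)).mean()
        return F.cross_entropy(logits, labels) + beta * mi_bound

In a pruning pipeline, one would typically alternate training with such an objective and applying the pruning step, then fine-tune; note that the abstract's observation that most biases are pruned away is consistent with ranking biases and weights in the same global pool.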
Original language: English
Title of host publication: 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)
Number of pages: 6
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Publication date: Oct 2021
Pages: 1-6
Article number: 9596128
ISBN (Print): 978-1-6654-1184-4
ISBN (Electronic): 978-1-7281-6338-3
DOIs
Publication status: Published - Oct 2021
Event: 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP) - Gold Coast, Australia
Duration: 25 Oct 2021 – 28 Oct 2021

Conference

Conference: 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)
Country/Territory: Australia
City: Gold Coast
Period: 25/10/2021 – 28/10/2021
Series: IEEE Workshop on Machine Learning for Signal Processing
ISSN: 1551-2541

Keywords

  • deep learning
  • mutual information
  • parameter pruning
  • variational bottleneck
