Memory- and time-efficient dense network for single-image super-resolution

Nasrin Imanpour, Ahmad Reza Naghsh-Nilchi, Amirhassan Monadjemi, Hossein Karshenas, Kamal Nasrollahi, Thomas B. Moeslund

Publication: Contribution to journal › Journal article › Research › peer review

4 Citations (Scopus)
120 Downloads (Pure)

Abstract

Dense connections in convolutional neural networks, which connect each layer to every other layer, can avoid mid/high-frequency information loss and further enhance high-frequency signals. Single-image super-resolution (SISR) can benefit from this in restoring rich details. However, propagating a large number of feature maps per layer, a quantity known as the growth rate, consumes substantial memory, especially in deeper networks. To address this problem, an efficient two-step concatenated feature map learning is proposed in this paper. The idea is to enrich the concatenated feature maps using a convolutional layer with more filters before the concatenation layers, instead of increasing the growth rate. Afterwards, representative concatenated feature maps are extracted using a smaller growth rate. This significantly reduces memory usage without loss of information. The proposed dense block improves the results by 0.24 dB in comparison to SISR with the basic dense block, while using 24% less memory and 6% less test time. Furthermore, the proposed method decreases the growth rate by at least a factor of 2 while producing competitive results, reducing memory and time consumption by up to 40% and 12%, respectively. These results suggest that the proposed approach is a more practical method for SISR.
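
To illustrate the two-step idea described in the abstract, the following is a minimal PyTorch sketch of a dense layer that first enriches the incoming concatenated features with a wider convolution and then projects them down to a small growth rate before concatenation. The module names, filter counts (e.g. expand_channels=64, growth_rate=16), kernel sizes, and activation are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class TwoStepDenseLayer(nn.Module):
    """Sketch of two-step concatenated feature map learning.

    Step 1 enriches the incoming concatenated features with a wider
    convolution (more filters); step 2 extracts a compact set of
    representative feature maps with a small growth rate, so only a few
    channels are added to the dense concatenation per layer.
    Hypothetical configuration chosen for illustration only.
    """

    def __init__(self, in_channels, expand_channels=64, growth_rate=16):
        super().__init__()
        # Step 1: enrich features before concatenation instead of
        # enlarging the growth rate itself.
        self.enrich = nn.Conv2d(in_channels, expand_channels, kernel_size=3, padding=1)
        # Step 2: extract representative feature maps with a small growth rate.
        self.extract = nn.Conv2d(expand_channels, growth_rate, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        enriched = self.act(self.enrich(x))
        new_features = self.act(self.extract(enriched))
        # Dense connectivity: previous features plus the new (small) set.
        return torch.cat([x, new_features], dim=1)


class MemoryEfficientDenseBlock(nn.Module):
    """Stack of two-step layers; the concatenated tensor grows by only
    `growth_rate` channels per layer, keeping memory usage low."""

    def __init__(self, in_channels=64, num_layers=4, expand_channels=64, growth_rate=16):
        super().__init__()
        layers, channels = [], in_channels
        for _ in range(num_layers):
            layers.append(TwoStepDenseLayer(channels, expand_channels, growth_rate))
            channels += growth_rate
        self.block = nn.Sequential(*layers)
        self.out_channels = channels

    def forward(self, x):
        return self.block(x)


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)      # a batch of low-resolution feature maps
    block = MemoryEfficientDenseBlock()
    print(block(x).shape)               # torch.Size([1, 128, 32, 32])
```

The key design point in this sketch is that the wider convolution is applied before concatenation, so its output is not carried forward; only the small growth-rate projection accumulates along the dense connections, which is what keeps the concatenated feature maps, and hence memory, small.
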
Original language: English
Journal: IET Signal Processing
Volume: 15
Issue number: 2
Pages (from-to): 141-152
ISSN: 1751-9675
DOI
Status: Published - Apr. 2021
