Representative dense feature learning for memory- and time-efficient single image super-resolution

Nasrin Imanpour, Ahmad Reza Naghsh-Nilchi, Amirhassan Monadjemi, Hossein Karshenas, Kamal Nasrollahi, Thomas B. Moeslund

Publication: Contribution to journal › Journal article › Research › peer review


Abstract

Dense connections in convolutional neural networks, which connect each layer to every other layer, can avoid mid/high-frequency information loss and further enhance high-frequency signals. Single image super-resolution (SISR) can benefit from this in restoring rich details. However, a large number of propagated feature maps, termed the growth rate, consumes considerable memory, especially at greater depths. To address this problem, an efficient two-step concatenated feature map learning is proposed in this paper. The idea is to enrich the concatenated feature maps using a convolutional layer with more filters before the concatenation layers, instead of increasing the growth rate. Afterward, representative concatenated feature maps are extracted using a smaller growth rate. This significantly reduces memory usage without loss of information. The proposed dense block improves the results by 0.24 dB in comparison to SISR with the basic dense block, while using 24% less memory and 6% less test time. Furthermore, the proposed method decreases the growth rate by at least a factor of 2 while producing competitive results, reducing memory and time consumption by up to 40% and 12%, respectively. These results suggest that the proposed approach is a more practical method for SISR.
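The memory argument in the abstract follows from standard dense-block arithmetic: each layer's input is the concatenation of the block input and all previous layers' outputs, so the channel count grows linearly with the growth rate. The sketch below (an illustration of that arithmetic, not the authors' implementation; the function name and the specific channel values of 64 input channels, growth rates 32 and 16, and 8 layers are assumed for the example) shows why halving the growth rate, as the proposed two-step scheme allows, shrinks the concatenated feature maps that dominate memory:

```python
def dense_block_channels(c0, growth_rate, num_layers):
    # In a dense block, layer l receives the concatenation of the block
    # input and all previous layers' outputs: c0 + l * growth_rate channels.
    return [c0 + l * growth_rate for l in range(num_layers + 1)]

# Baseline growth rate vs. the halved growth rate that the two-step
# concatenated feature learning makes feasible (values are illustrative).
baseline = dense_block_channels(64, 32, 8)
reduced = dense_block_channels(64, 16, 8)
print(baseline[-1], reduced[-1])  # 320 vs. 192 channels at the block output
```

Since activation memory scales with the number of concatenated channels, the smaller growth rate cuts the per-layer footprint; the wider pre-concatenation convolution restores representational capacity before the cheap concatenation path.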
Original language: English
Journal: IET Signal Processing
ISSN: 1751-9675
DOI
Status: Published - 10 Mar. 2021

