TY - JOUR
T1 - Single image super-resolution based on directional variance attention network
AU - Behjati, Parichehr
AU - Rodríguez, Pau
AU - Fernández, Carles
AU - Hupont, Isabelle
AU - Mehri, Armin
AU - Gonzàlez, Jordi
N1 - This work was supported by the Spanish Ministry of Economy and Competitiveness (MINECO) and the European Regional Development Fund (ERDF) under Grants PID2020-120311RB-I00 funded by MCIN/AEI/10.13039/501100011033, and TIN2015-65464-R. Isabelle Hupont’s work is supported by the HUMAINT project of the European Commission’s Joint Research Centre.
PY - 2023/1
Y1 - 2023/1
N2 - Recent advances in single image super-resolution (SISR) explore the power of deep convolutional neural networks (CNNs) to achieve better performance. However, most of the progress has been made by scaling CNN architectures, which usually raises computational demands and memory consumption, making modern architectures less applicable in practice. In addition, most CNN-based SR methods do not fully utilize the informative hierarchical features that are helpful for final image recovery. To address these issues, we propose the directional variance attention network (DiVANet), a computationally efficient yet accurate network for SISR. Specifically, we introduce a novel directional variance attention (DiVA) mechanism to capture long-range spatial dependencies and exploit inter-channel dependencies simultaneously for more discriminative representations. Furthermore, we propose a residual attention feature group (RAFG) for parallelizing attention and residual block computation. The output of each residual block is linearly fused at the RAFG output to provide access to the whole feature hierarchy. In parallel, DiVA extracts the most relevant features from the network, improving the final output and preventing information loss along the successive operations inside the network. Experimental results demonstrate the superiority of DiVANet over the state of the art on several datasets, while maintaining a relatively low computation and memory footprint.
KW - Attention mechanism
KW - Efficient network
KW - Single image super-resolution
UR - http://www.scopus.com/inward/record.url?scp=85138457088&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/928ed317-294d-361e-9cf2-1b386378bbb5/
U2 - 10.1016/j.patcog.2022.108997
DO - 10.1016/j.patcog.2022.108997
M3 - Article
VL - 133
M1 - 108997
ER -