Fusion-based variational image dehazing

Adrian Galdran*, Javier Vazquez-Corral, David Pardo, Marcelo Bertalmio

*Corresponding author for this work

Research output: Contribution to journal › Article › Research › peer-review

117 Citations (Scopus)

Abstract

We propose a novel image-dehazing technique based on the minimization of two energy functionals and a fusion scheme to combine the output of both optimizations. The proposed fusion-based variational image-dehazing (FVID) method is a spatially varying image enhancement process that first minimizes a previously proposed variational formulation that maximizes contrast and saturation on the hazy input. The iterates produced by this minimization are kept, and a second energy, which shrinks the intensity values of well-contrasted regions more quickly, is then minimized; observing the shrinking rate allows a set of difference-of-saturation (DiffSat) maps to be generated. The iterates produced by the first minimization are then fused with these DiffSat maps to produce a haze-free version of the degraded input. The FVID method neither relies on a physical model from which to estimate a depth map, nor does it require a training stage on a database of human-labeled examples. Experimental results on a wide set of hazy images demonstrate that FVID better preserves the image structure in nearby regions that are less affected by fog, and that it compares favorably with other current methods at removing haze degradation from faraway regions.
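To make the pipeline described in the abstract more concrete, below is a minimal, schematic sketch of the fusion idea: run one iterative minimization and keep its iterates, run a second shrinking minimization to derive per-iteration weight maps from the observed shrinking rate, and fuse the stored iterates pixel-wise. The two descent steps (`contrast_saturation_step`, `shrinkage_step`) are illustrative placeholders, not the actual energy functionals of the FVID paper; the real formulations are given in the article and its references.

```python
# Schematic sketch of a fusion-of-iterates pipeline (NOT the FVID energies).
import numpy as np

def contrast_saturation_step(img, mean, strength=0.1):
    # Placeholder descent step: push pixels away from the image mean,
    # crudely mimicking a contrast/saturation-maximizing energy.
    return np.clip(img + strength * (img - mean), 0.0, 1.0)

def shrinkage_step(img, strength=0.1):
    # Placeholder descent step for the second energy: intensities shrink,
    # with darker (better-contrasted, less hazy) pixels shrinking faster.
    return np.clip(img - strength * (1.0 - img), 0.0, 1.0)

def fvid_like_dehaze(hazy, n_iters=10):
    """Toy fusion scheme: keep the iterates of the first minimization,
    build per-iteration weight maps from the shrinking rate observed in the
    second minimization, and fuse the iterates pixel-wise."""
    mean = hazy.mean()
    u = hazy.copy()          # iterates of the contrast/saturation energy
    v = hazy.copy()          # iterates of the shrinking energy
    iterates, weights = [], []
    for _ in range(n_iters):
        u = contrast_saturation_step(u, mean)
        v_next = shrinkage_step(v)
        # Shrinking rate per pixel; fast-shrinking regions get lower weight.
        rate = np.abs(v - v_next).mean(axis=-1, keepdims=True)
        weights.append(1.0 - rate / (rate.max() + 1e-8))
        iterates.append(u.copy())
        v = v_next
    # Pixel-wise weighted fusion of the stored iterates.
    w = np.stack(weights)                            # (n_iters, H, W, 1)
    w = w / (w.sum(axis=0, keepdims=True) + 1e-8)
    return np.clip((w * np.stack(iterates)).sum(axis=0), 0.0, 1.0)

# Example usage on a random "hazy" RGB image with values in [0, 1]:
hazy = np.random.rand(64, 64, 3).astype(np.float32)
result = fvid_like_dehaze(hazy)
```

The point of the sketch is the structure, namely two coupled iterative processes and a weight-map-driven fusion of intermediate results, rather than any particular choice of energy.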

Original language: English
Article number: 7792620
Pages (from-to): 151-155
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 24
Issue number: 2
DOIs
Publication status: Published - Feb 2017

Keywords

  • Color correction
  • contrast enhancement
  • image dehazing
  • image fusion
  • variational image processing
