Enhancing Underwater Images and Videos by Fusion
Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012)
Abstract
This paper describes a novel strategy to enhance underwater videos and images. Built on fusion principles, our strategy derives the inputs and the weight measures only from the degraded version of the image. To overcome the limitations of the underwater medium, we define two inputs that represent color corrected and contrast enhanced versions of the original underwater image/frame, as well as four weight maps that aim to increase the visibility of distant objects degraded by scattering and absorption in the medium. Our strategy is a single-image approach that does not require specialized hardware or knowledge about the underwater conditions or scene structure. Our fusion framework also supports temporal coherence between adjacent frames through an effective edge-preserving noise reduction strategy. The enhanced images and videos are characterized by reduced noise, better exposedness of the dark regions, and improved global contrast, while the finest details and edges are significantly enhanced. In addition, the utility of our enhancing technique is demonstrated for several challenging applications.
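To make the fusion idea concrete, below is a minimal Python sketch of a single-image fusion pipeline in this spirit. It assumes OpenCV and NumPy; the gray-world white balance, CLAHE contrast enhancement, single Laplacian-contrast weight map, naive per-pixel blend, and file names are illustrative assumptions, not the authors' exact formulation (the paper uses four weight maps and a multi-scale pyramid fusion).

```python
import cv2
import numpy as np

def gray_world_white_balance(img):
    # Simple gray-world color correction (assumed stand-in for the
    # paper's color corrected input).
    img = img.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)
    img *= means.mean() / (means + 1e-6)
    return np.clip(img, 0, 255).astype(np.uint8)

def contrast_enhanced(img):
    # CLAHE on the luminance channel (assumed stand-in for the
    # paper's contrast enhanced input).
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

def weight_map(img):
    # Illustrative weight: local (Laplacian) contrast of the luminance.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    return np.abs(cv2.Laplacian(gray, cv2.CV_32F)) + 1e-3

def fuse(inputs):
    # Naive per-pixel weighted blend of the derived inputs; the paper
    # instead performs a multi-scale (pyramid) fusion to avoid halos.
    weights = [weight_map(i) for i in inputs]
    wsum = np.sum(weights, axis=0)
    out = np.zeros_like(inputs[0], dtype=np.float32)
    for img, w in zip(inputs, weights):
        out += img.astype(np.float32) * (w / wsum)[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    frame = cv2.imread("underwater.jpg")  # hypothetical input path
    inputs = [gray_world_white_balance(frame), contrast_enhanced(frame)]
    cv2.imwrite("enhanced.jpg", fuse(inputs))
```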
Bibtex
@inproceedings{Ancuti_CVPR2012,
author = {Ancuti, Codruta O. and Ancuti, Cosmin and Haber, Tom and Bekaert, Philippe},
title = {Enhancing Underwater Images and Videos by Fusion},
booktitle = {Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012)},
series = {CVPR'12},
year = {2012},
}