Concurrent Video Denoising and Deblurring for Dynamic Scenes

Dynamic scene video deblurring is a challenging task due to the spatially variant blur inflicted by independently moving objects and camera shakes. Recent deep learning works bypass the ill-posedness of explicitly deriving the blur kernel by learning pixel-to-pixel mappings, which is commonly enhanc...

Full description

Saved in:
Bibliographic Details
Main Authors: Efklidis Katsaros, Piotr K. Ostrowski, Daniel Wesierski, Anna Jezierska
Format: article
Language: EN
Published: IEEE 2021
Subjects:
Online Access: https://doaj.org/article/1af63d3770014729bf1a49f163047378
id oai:doaj.org-article:1af63d3770014729bf1a49f163047378
record_format dspace
spelling oai:doaj.org-article:1af63d3770014729bf1a49f163047378
datestamp 2021-12-03T00:01:18Z
title Concurrent Video Denoising and Deblurring for Dynamic Scenes
issn 2169-3536
doi 10.1109/ACCESS.2021.3129602
url https://doaj.org/article/1af63d3770014729bf1a49f163047378
date 2021-01-01T00:00:00Z
url https://ieeexplore.ieee.org/document/9622250/
url https://doaj.org/toc/2169-3536
authors Efklidis Katsaros, Piotr K. Ostrowski, Daniel Wesierski, Anna Jezierska
publisher IEEE
format article
topics Deblurring; denoising; multi-task learning; video enhancement; Electrical engineering. Electronics. Nuclear engineering; TK1-9971
language EN
source IEEE Access, Vol 9, Pp 157437-157446 (2021)
institution DOAJ
collection DOAJ
language EN
topic Deblurring
denoising
multi-task learning
video enhancement
Electrical engineering. Electronics. Nuclear engineering
TK1-9971
spellingShingle Deblurring
denoising
multi-task learning
video enhancement
Electrical engineering. Electronics. Nuclear engineering
TK1-9971
Efklidis Katsaros
Piotr K. Ostrowski
Daniel Wesierski
Anna Jezierska
Concurrent Video Denoising and Deblurring for Dynamic Scenes
description Dynamic scene video deblurring is a challenging task due to the spatially variant blur inflicted by independently moving objects and camera shakes. Recent deep learning works bypass the ill-posedness of explicitly deriving the blur kernel by learning pixel-to-pixel mappings, which is commonly enhanced by larger region awareness. This is a difficult yet simplified scenario, since it neglects noise, which is omnipresent in a wide spectrum of video processing applications. Despite its relevance, the problem of concurrent noise and dynamic blur has not yet been addressed in the deep learning literature. To this end, we analyze existing state-of-the-art deblurring methods and identify their limitations in handling non-uniform blur under strong noise. We then propose what is, to date, the first work to address blur- and noise-free frame recovery by casting the restoration problem into a multi-task learning framework. Our contribution is threefold: a) We propose R2-D4, a multi-scale encoder architecture attached to two cascaded decoders that perform the restoration task in two steps. b) We design multi-scale residual dense modules, bolstered by our modulated efficient channel attention and augmented with deformable convolutions, to enhance the encoder representations and capture the longer-range, object-specific context that assists blur kernel estimation under strong noise. c) We perform extensive experiments and evaluate state-of-the-art approaches on a publicly available dataset under different noise levels. The proposed method performs favorably under all noise levels while retaining a reasonably low computational and memory footprint. (A minimal code sketch of the two-stage, multi-task design is given after the record fields below.)
format article
author Efklidis Katsaros
Piotr K. Ostrowski
Daniel Wesierski
Anna Jezierska
author_facet Efklidis Katsaros
Piotr K. Ostrowski
Daniel Wesierski
Anna Jezierska
author_sort Efklidis Katsaros
title Concurrent Video Denoising and Deblurring for Dynamic Scenes
title_short Concurrent Video Denoising and Deblurring for Dynamic Scenes
title_full Concurrent Video Denoising and Deblurring for Dynamic Scenes
title_fullStr Concurrent Video Denoising and Deblurring for Dynamic Scenes
title_full_unstemmed Concurrent Video Denoising and Deblurring for Dynamic Scenes
title_sort concurrent video denoising and deblurring for dynamic scenes
publisher IEEE
publishDate 2021
url https://doaj.org/article/1af63d3770014729bf1a49f163047378
work_keys_str_mv AT efklidiskatsaros concurrentvideodenoisinganddeblurringfordynamicscenes
AT piotrkostrowski concurrentvideodenoisinganddeblurringfordynamicscenes
AT danielwesierski concurrentvideodenoisinganddeblurringfordynamicscenes
AT annajezierska concurrentvideodenoisinganddeblurringfordynamicscenes
_version_ 1718373975732518912
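The abstract above describes R2-D4 only at a high level: a shared multi-scale encoder feeding two cascaded decoders that restore the frame in two steps within a multi-task learning framework. Below is a minimal, hypothetical PyTorch sketch of that two-stage idea. The record does not specify the actual layer configuration, the internals of the multi-scale residual dense modules, the modulated efficient channel attention, the deformable convolutions, or the loss weighting, so every module name, channel width, and the assumed denoise-then-deblur ordering are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only. Module names, channel widths, and the
# denoise-then-deblur split are assumptions; the record does not give
# R2-D4's actual architecture.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Plain conv-ReLU block standing in for the paper's multi-scale
    residual dense modules (internals not specified in this record)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class TwoStageRestorer(nn.Module):
    """Shared two-scale encoder feeding two cascaded decoder heads:
    head 1 predicts an intermediate (here: denoised) frame, head 2
    refines it into the final blur- and noise-free frame."""

    def __init__(self, in_frames=3, base_ch=32):
        super().__init__()
        # Encoder over a short stack of neighbouring RGB frames,
        # concatenated along the channel dimension.
        self.enc1 = ConvBlock(3 * in_frames, base_ch)
        self.down = nn.Conv2d(base_ch, 2 * base_ch, 3, stride=2, padding=1)
        self.enc2 = ConvBlock(2 * base_ch, 2 * base_ch)
        self.up = nn.ConvTranspose2d(2 * base_ch, base_ch, 4, stride=2, padding=1)
        # First decoder head: intermediate restoration step.
        self.dec1 = nn.Sequential(ConvBlock(2 * base_ch, base_ch),
                                  nn.Conv2d(base_ch, 3, 3, padding=1))
        # Second, cascaded decoder head: final restoration step, conditioned
        # on the shared encoder features plus the intermediate estimate.
        self.dec2 = nn.Sequential(ConvBlock(2 * base_ch + 3, base_ch),
                                  nn.Conv2d(base_ch, 3, 3, padding=1))

    def forward(self, frames):
        # frames: (B, in_frames * 3, H, W)
        f1 = self.enc1(frames)
        f2 = self.enc2(self.down(f1))
        feats = torch.cat([f1, self.up(f2)], dim=1)
        intermediate = self.dec1(feats)                               # task 1
        final = self.dec2(torch.cat([feats, intermediate], dim=1))    # task 2
        return intermediate, final


# Multi-task training step; the 1:1 loss weighting is an assumption.
model = TwoStageRestorer()
frames = torch.randn(1, 9, 64, 64)        # 3 RGB frames, channel-concatenated
gt_denoised = torch.randn(1, 3, 64, 64)   # dummy target for the first stage
gt_sharp = torch.randn(1, 3, 64, 64)      # dummy target for the final stage
inter, final = model(frames)
loss = (nn.functional.l1_loss(inter, gt_denoised)
        + nn.functional.l1_loss(final, gt_sharp))
loss.backward()
```

The point the sketch tries to capture is the cascade: the first decoder's intermediate estimate is fed, together with the shared encoder features, into the second decoder, so both restoration tasks share one representation while the final output is conditioned on the intermediate result.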