Hybrid Deblur Net: Deep Non-Uniform Deblurring With Event Camera

Bibliographic Details
Main Authors: Limeng Zhang, Hongguang Zhang, Jihua Chen, Lei Wang
Format: Article
Language: English
Published: IEEE, 2020
Subjects:
Online Access: https://doaj.org/article/a23fb01e75674ee2b0894170b13cb055
Description
Summary: Although CNN-based deblur models have shown their superiority in solving motion blur, restoring a photorealistic image from severe motion blur remains an ill-posed problem due to the loss of temporal information and textures. Event cameras such as the Dynamic and Active-pixel Vision Sensor (DAVIS) simultaneously produce gray-scale Active Pixel Sensor (APS) frames and events; the events capture fast motions at very high temporal resolution, i.e., 1 μs, and can provide extra information for blurry APS frames. Because events are naturally noisy and sparse, we employ a recurrent encoder-decoder architecture to generate dense recurrent event representations, which encode the overall historical information. We concatenate the original blurry image with the event representation as our hybrid input, from which the network learns to restore the sharp output. We conduct extensive experiments on the GoPro dataset and a real blurry event dataset captured by a DAVIS240C. Our experimental results on both synthetic and real images demonstrate state-of-the-art performance for 1280×720 images at 30 fps.
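
For readers who want the described data flow in concrete terms, a minimal PyTorch sketch follows. The module names (RecurrentEventEncoder, HybridDeblurNet), the event voxel-grid input format, the channel counts, and the simple convolutional recurrent update are illustrative assumptions, not the paper's actual architecture; the sketch only mirrors the abstract's two steps of recurrently densifying sparse events and concatenating the result with the blurry APS frame.

import torch
import torch.nn as nn

class RecurrentEventEncoder(nn.Module):
    # Folds a sequence of sparse event voxel grids into one dense
    # representation that encodes the overall event history.
    def __init__(self, event_bins=5, hidden=32):
        super().__init__()
        self.hidden = hidden
        self.update = nn.Sequential(
            nn.Conv2d(event_bins + hidden, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, event_slices):
        # event_slices: (B, T, event_bins, H, W), one voxel grid per step
        b, t, _, h, w = event_slices.shape
        state = event_slices.new_zeros(b, self.hidden, h, w)
        for i in range(t):
            state = self.update(torch.cat([event_slices[:, i], state], dim=1))
        return state  # dense event representation: (B, hidden, H, W)

class HybridDeblurNet(nn.Module):
    # Concatenates the blurry gray-scale APS frame with the dense event
    # representation (the "hybrid input") and regresses a sharp residual.
    def __init__(self, hidden=32):
        super().__init__()
        self.event_encoder = RecurrentEventEncoder(hidden=hidden)
        self.deblur = nn.Sequential(  # stand-in for the full encoder-decoder
            nn.Conv2d(1 + hidden, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, blurry_aps, event_slices):
        event_repr = self.event_encoder(event_slices)
        hybrid = torch.cat([blurry_aps, event_repr], dim=1)
        return blurry_aps + self.deblur(hybrid)  # predicted sharp frame

# Toy shapes: one gray-scale frame plus 8 event slices of 5 time bins each.
net = HybridDeblurNet()
sharp = net(torch.rand(1, 1, 128, 128), torch.rand(1, 8, 5, 128, 128))
print(sharp.shape)  # torch.Size([1, 1, 128, 128])

The residual connection (blurry input plus a learned correction) is a common deblurring design choice assumed here for stability; the paper's actual network and loss may differ.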