Hybrid Deblur Net: Deep Non-Uniform Deblurring With Event Camera
Although CNN-based deblur models have shown their superiority in solving motion blur, restoring a photorealistic image from severe motion blur remains an ill-posed problem due to the loss of temporal information and texture. Event cameras such as the Dynamic and Active-pixel Vision Sensor (DAVIS) can...
Main Authors: Limeng Zhang, Hongguang Zhang, Jihua Chen, Lei Wang
Format: article
Language: EN
Published: IEEE, 2020
Subjects: Event-based vision; high speed; image deblurring; real-time; Electrical engineering. Electronics. Nuclear engineering (TK1-9971)
Online Access: https://doaj.org/article/a23fb01e75674ee2b0894170b13cb055
id
oai:doaj.org-article:a23fb01e75674ee2b0894170b13cb055
record_format
dspace
issn
2169-3536
doi
10.1109/ACCESS.2020.3015759
fulltext_url
https://ieeexplore.ieee.org/document/9165110/
source
IEEE Access, Vol 8, Pp 148075-148083 (2020)
institution
DOAJ
collection
DOAJ
language
EN
topic
Event-based vision
high speed
image deblurring
real-time
Electrical engineering. Electronics. Nuclear engineering
TK1-9971
description
Although CNN-based deblur models have shown their superiority in solving motion blur, restoring a photorealistic image from severe motion blur remains an ill-posed problem due to the loss of temporal information and texture. Event cameras such as the Dynamic and Active-pixel Vision Sensor (DAVIS) simultaneously produce gray-scale Active Pixel Sensor (APS) frames and events; the events capture fast motion at very high temporal resolution (i.e., 1 μs) and can therefore provide extra information for the blurry APS frames. Because events are naturally noisy and sparse, we employ a recurrent encoder-decoder architecture to generate dense recurrent event representations, which encode the overall historical information. We concatenate the original blurry image with the event representation as our hybrid input, from which the network learns to restore the sharp output. We conduct extensive experiments on the GoPro dataset and a real blurry event dataset captured by a DAVIS240C. Our experimental results on both synthetic and real images demonstrate state-of-the-art performance for 1280×720 images at 30 fps.
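The fusion step described above lends itself to a short illustration. Below is a minimal PyTorch-style sketch, not the authors' released code, of how a blurry APS frame and a dense event representation can be concatenated along the channel axis and passed through a recurrent encoder-decoder; all names (ConvGRUCell, HybridDeblurNet, event_bins) and the layer sizes are hypothetical placeholders chosen for illustration.

```python
# Hypothetical sketch of the hybrid-input idea: a recurrent encoder-decoder
# consumes the blurry APS frame concatenated with a dense event representation.
# Layer sizes and names are illustrative, not the paper's actual architecture.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """A simple convolutional GRU cell that carries historical event state."""
    def __init__(self, channels):
        super().__init__()
        self.gates = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)
        self.cand = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x, h):
        zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=1)))
        z, r = zr.chunk(2, dim=1)                      # update and reset gates
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

class HybridDeblurNet(nn.Module):
    """Hypothetical recurrent encoder-decoder for event-assisted deblurring."""
    def __init__(self, event_bins=5, base=32):
        super().__init__()
        # Hybrid input: 1 gray-scale APS channel + event_bins event channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(1 + event_bins, base, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(base, 2 * base, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.recurrent = ConvGRUCell(2 * base)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * base, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, 1, 4, stride=2, padding=1),
        )

    def forward(self, blurry, events, h=None):
        # blurry: (B, 1, H, W); events: (B, event_bins, H, W) dense representation.
        x = self.encoder(torch.cat([blurry, events], dim=1))
        h = torch.zeros_like(x) if h is None else h
        h = self.recurrent(x, h)
        # Residual connection: predict a correction to the blurry frame.
        return blurry + self.decoder(h), h

# Usage on dummy data at the paper's 1280x720 APS resolution.
net = HybridDeblurNet()
blurry = torch.rand(1, 1, 720, 1280)
events = torch.rand(1, 5, 720, 1280)
sharp, state = net(blurry, events)  # state carries history to the next frame
```

The returned hidden state can be fed back in for the next frame, which is one plausible reading of how a recurrent representation could "encode the overall historical information" across a video stream.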
format
article
author
Limeng Zhang
Hongguang Zhang
Jihua Chen
Lei Wang
author_sort
Limeng Zhang
title
Hybrid Deblur Net: Deep Non-Uniform Deblurring With Event Camera
publisher
IEEE
publishDate
2020
url
https://doaj.org/article/a23fb01e75674ee2b0894170b13cb055