GenRadar: Self-Supervised Probabilistic Camera Synthesis Based on Radar Frequencies
Autonomous systems require continuous and dependable environment perception for navigation and decision-making, which is best achieved by combining different sensor types. Radar continues to function robustly in compromised circumstances in which cameras become impaired, guaranteeing a steady inflow of information. Yet, camera images provide a more intuitive and readily applicable impression of the world. This work combines the complementary strengths of both sensor types in a unique self-learning fusion approach for a probabilistic scene reconstruction in adverse surrounding conditions. After reducing the memory requirements of both high-dimensional measurements through a decoupled stochastic self-supervised compression technique, the proposed algorithm exploits similarities and establishes correspondences between both domains at different feature levels during training. Then, at inference time, relying exclusively on radio frequencies, the model successively predicts camera constituents in an autoregressive and self-contained process. These discrete tokens are finally transformed back into an instructive view of the respective surroundings, allowing potential dangers to be visually perceived for important downstream tasks.
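The abstract outlines a two-stage pipeline: discrete self-supervised compression of radar and camera measurements into tokens, followed by autoregressive prediction of camera tokens from radar tokens alone at inference time. The following is a minimal, hypothetical sketch of that idea in PyTorch; the vector-quantized encoders, the Transformer-based prior, and every module name and size used here (VQEncoder, CameraDecoder, RadarToCameraPrior, codebook_size, the toy 32x32 inputs) are illustrative assumptions, not the authors' actual GenRadar implementation.

```python
# Hypothetical sketch of a token-based radar-to-camera pipeline as described in
# the abstract. All architecture choices and names are assumptions for
# illustration; training machinery (straight-through VQ, losses) is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VQEncoder(nn.Module):
    """Compress a 2-D measurement into a grid of discrete codebook indices."""

    def __init__(self, in_ch, codebook_size=512, dim=64):
        super().__init__()
        self.conv = nn.Sequential(                      # 4x spatial downsampling
            nn.Conv2d(in_ch, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, stride=2, padding=1),
        )
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, x):
        z = self.conv(x)                                # (B, dim, H/4, W/4)
        b, d, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, d)     # one vector per cell
        dist = torch.cdist(flat, self.codebook.weight)  # distance to all codes
        return dist.argmin(dim=1).view(b, h * w)        # discrete token ids


class CameraDecoder(nn.Module):
    """Map a sequence of camera tokens back to an image-shaped tensor."""

    def __init__(self, codebook_size=512, dim=64, grid=8):
        super().__init__()
        self.grid = grid
        self.embed = nn.Embedding(codebook_size, dim)
        self.deconv = nn.Sequential(                    # invert the 4x compression
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, tokens):
        b = tokens.shape[0]
        z = self.embed(tokens).permute(0, 2, 1)         # (B, dim, grid*grid)
        z = z.view(b, -1, self.grid, self.grid)
        return self.deconv(z)


class RadarToCameraPrior(nn.Module):
    """Autoregressive prior: predict camera tokens conditioned on radar tokens."""

    def __init__(self, codebook_size=512, dim=64, cam_len=64):
        super().__init__()
        self.cam_len = cam_len
        self.tok = nn.Embedding(codebook_size + 1, dim)  # +1 for a BOS token
        self.bos = codebook_size
        layer = nn.TransformerDecoderLayer(dim, nhead=4, batch_first=True)
        self.ar = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(dim, codebook_size)

    @torch.no_grad()
    def generate(self, radar_tokens):
        """Successively sample camera tokens, relying only on radar tokens."""
        memory = self.tok(radar_tokens)                  # conditioning sequence
        b = radar_tokens.shape[0]
        seq = torch.full((b, 1), self.bos, dtype=torch.long)
        for _ in range(self.cam_len):
            n = seq.shape[1]                             # causal (triangular) mask
            mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
            h = self.ar(self.tok(seq), memory, tgt_mask=mask)
            probs = F.softmax(self.head(h[:, -1]), dim=-1)
            nxt = torch.multinomial(probs, 1)            # sample the next token
            seq = torch.cat([seq, nxt], dim=1)
        return seq[:, 1:]                                # drop the BOS token


if __name__ == "__main__":
    radar = torch.randn(1, 1, 32, 32)                    # toy radar spectrum
    radar_tokens = VQEncoder(in_ch=1)(radar)             # 8x8 = 64 radar tokens
    prior = RadarToCameraPrior().eval()
    cam_tokens = prior.generate(radar_tokens)            # camera tokens from radar
    image = CameraDecoder()(cam_tokens)                  # (1, 3, 32, 32) scene view
    print(image.shape)
```

In this reading, the two autoencoders and the prior would be fitted separately (the "decoupled" compression mentioned above), while inference only runs the radar encoder, the prior's token-by-token sampling loop, and the camera decoder.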
Saved in:
Main Authors: Carsten Ditzel, Klaus Dietmayer
Format: article
Language: EN
Published: IEEE, 2021
Subjects: Radar signal processing; computer vision; sensor fusion; deep learning; machine learning; variational autoencoder; Electrical engineering. Electronics. Nuclear engineering; TK1-9971
Online Access: https://doaj.org/article/dfdceb8ac3b54d1bbc8b5e82b6e69b33
id |
oai:doaj.org-article:dfdceb8ac3b54d1bbc8b5e82b6e69b33 |
record_format |
dspace |
spelling |
oai:doaj.org-article:dfdceb8ac3b54d1bbc8b5e82b6e69b33 (2021-11-18T00:08:07Z). GenRadar: Self-Supervised Probabilistic Camera Synthesis Based on Radar Frequencies. ISSN 2169-3536. DOI 10.1109/ACCESS.2021.3120202. https://doaj.org/article/dfdceb8ac3b54d1bbc8b5e82b6e69b33. Published 2021-01-01. https://ieeexplore.ieee.org/document/9570339/. https://doaj.org/toc/2169-3536. (Abstract as above.) Carsten Ditzel; Klaus Dietmayer. IEEE. Subjects: Radar signal processing; computer vision; sensor fusion; deep learning; machine learning; variational autoencoder; Electrical engineering. Electronics. Nuclear engineering; TK1-9971. EN. IEEE Access, Vol 9, Pp 148994-149042 (2021). |
institution |
DOAJ |
collection |
DOAJ |
language |
EN |
topic |
Radar signal processing; computer vision; sensor fusion; deep learning; machine learning; variational autoencoder; Electrical engineering. Electronics. Nuclear engineering; TK1-9971 |
spellingShingle |
Radar signal processing; computer vision; sensor fusion; deep learning; machine learning; variational autoencoder; Electrical engineering. Electronics. Nuclear engineering; TK1-9971; Carsten Ditzel; Klaus Dietmayer; GenRadar: Self-Supervised Probabilistic Camera Synthesis Based on Radar Frequencies |
description |
Autonomous systems require continuous and dependable environment perception for navigation and decision-making, which is best achieved by combining different sensor types. Radar continues to function robustly in compromised circumstances in which cameras become impaired, guaranteeing a steady inflow of information. Yet, camera images provide a more intuitive and readily applicable impression of the world. This work combines the complementary strengths of both sensor types in a unique self-learning fusion approach for a probabilistic scene reconstruction in adverse surrounding conditions. After reducing the memory requirements of both high-dimensional measurements through a decoupled stochastic self-supervised compression technique, the proposed algorithm exploits similarities and establishes correspondences between both domains at different feature levels during training. Then, at inference time, relying exclusively on radio frequencies, the model successively predicts camera constituents in an autoregressive and self-contained process. These discrete tokens are finally transformed back into an instructive view of the respective surroundings, allowing potential dangers to be visually perceived for important downstream tasks. |
format |
article |
author |
Carsten Ditzel; Klaus Dietmayer |
author_facet |
Carsten Ditzel; Klaus Dietmayer |
author_sort |
Carsten Ditzel |
title |
GenRadar: Self-Supervised Probabilistic Camera Synthesis Based on Radar Frequencies |
title_short |
GenRadar: Self-Supervised Probabilistic Camera Synthesis Based on Radar Frequencies |
title_full |
GenRadar: Self-Supervised Probabilistic Camera Synthesis Based on Radar Frequencies |
title_fullStr |
GenRadar: Self-Supervised Probabilistic Camera Synthesis Based on Radar Frequencies |
title_full_unstemmed |
GenRadar: Self-Supervised Probabilistic Camera Synthesis Based on Radar Frequencies |
title_sort |
genradar: self-supervised probabilistic camera synthesis based on radar frequencies |
publisher |
IEEE |
publishDate |
2021 |
url |
https://doaj.org/article/dfdceb8ac3b54d1bbc8b5e82b6e69b33 |
work_keys_str_mv |
AT carstenditzel genradarselfsupervisedprobabilisticcamerasynthesisbasedonradarfrequencies AT klausdietmayer genradarselfsupervisedprobabilisticcamerasynthesisbasedonradarfrequencies |
_version_ |
1718425255452606464 |