Logo Detection With No Priors
In recent years, leading object detection methods such as R-CNN have implemented this task as a combination of region proposal generation and supervised classification of the proposed bounding boxes. Although this pipeline has achieved state-of-the-art results on multiple datasets, it has inherent limitations that make object detection computationally complex and inefficient. Instead of following this standard strategy, in this paper we enhance Detection Transformers (DETR), which tackle object detection as a set-prediction problem in an end-to-end, fully differentiable pipeline without requiring priors. In particular, we incorporate Feature Pyramids (FP) into the DETR architecture and demonstrate the effectiveness of the resulting DETR-FP approach in improving logo detection results, thanks to the improved detection of small logos. Thus, without requiring any domain-specific prior to be fed to the model, DETR-FP obtains competitive results on the OpenLogo and MS-COCO datasets, offering a relative improvement of up to 30% when compared to a Faster R-CNN baseline, which strongly depends on hand-designed priors.
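The set-prediction formulation described in the abstract can be sketched as a bipartite matching step: each of a fixed set of predictions is assigned one-to-one to a ground-truth box with the Hungarian algorithm, so no anchor or proposal priors are needed. The sketch below is illustrative only: the function name and the L1-only cost are assumptions for brevity, whereas DETR's actual matching cost also combines classification and generalized-IoU terms.

```python
# Minimal sketch of DETR-style set prediction: predictions are matched
# one-to-one to ground-truth boxes via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_predictions(pred_boxes, gt_boxes):
    """Return (pred_idx, gt_idx) pairs minimising the total L1 box cost."""
    # Cost matrix: L1 distance between every prediction and every target.
    cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return pred_idx, gt_idx

# Three fixed "object queries" predicting (cx, cy, w, h); two true logos.
preds = np.array([[0.5, 0.5, 0.2, 0.1],
                  [0.1, 0.1, 0.05, 0.05],
                  [0.9, 0.9, 0.3, 0.3]])
targets = np.array([[0.12, 0.1, 0.05, 0.06],
                    [0.48, 0.52, 0.2, 0.1]])
pi, gi = match_predictions(preds, targets)
# Each ground-truth logo is matched to exactly one prediction; the
# unmatched query would be supervised as "no object" in DETR's loss.
print(list(zip(pi, gi)))
```

Because the matching is a global one-to-one assignment over the whole prediction set, duplicate detections are penalised directly and no non-maximum suppression is required.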
Saved in:
Main Authors: | Diego A. Velazquez; Josep M. Gonfaus; Pau Rodriguez; F. Xavier Roca; Seiichi Ozawa; Jordi Gonzalez |
---|---|
Format: | article |
Language: | EN |
Published: | IEEE, 2021 |
Subjects: | Object detection; transformers; logo detection; deep learning; attention |
Online Access: | https://doaj.org/article/06053aaa1432435eaf1b7fc745bef20f |
id |
oai:doaj.org-article:06053aaa1432435eaf1b7fc745bef20f |
---|---|
record_format |
dspace |
issn |
2169-3536 |
doi |
10.1109/ACCESS.2021.3101297 |
url (IEEE Xplore) |
https://ieeexplore.ieee.org/document/9502074/ |
citation |
IEEE Access, Vol 9, Pp 106998-107011 (2021) |
institution |
DOAJ |
collection |
DOAJ |
language |
EN |
topic |
Object detection; transformers; logo detection; deep learning; attention; Electrical engineering. Electronics. Nuclear engineering; TK1-9971 |
description |
In recent years, top referred methods on object detection like R-CNN have implemented this task as a combination of proposal region generation and supervised classification on the proposed bounding boxes. Although this pipeline has achieved state-of-the-art results in multiple datasets, it has inherent limitations that make object detection a very complex and inefficient task in computational terms. Instead of considering this standard strategy, in this paper we enhance Detection Transformers (DETR) which tackles object detection as a set-prediction problem directly in an end-to-end fully differentiable pipeline without requiring priors. In particular, we incorporate Feature Pyramids (FP) to the DETR architecture and demonstrate the effectiveness of the resulting DETR-FP approach on improving logo detection results thanks to the improved detection of small logos. So, without requiring any domain specific prior to be fed to the model, DETR-FP obtains competitive results on the OpenLogo and MS-COCO datasets offering a relative improvement of up to 30%, when compared to a Faster R-CNN baseline which strongly depends on hand-designed priors. |
format |
article |
author |
Diego A. Velazquez; Josep M. Gonfaus; Pau Rodriguez; F. Xavier Roca; Seiichi Ozawa; Jordi Gonzalez |
author_sort |
Diego A. Velazquez |
title |
Logo Detection With No Priors |
publisher |
IEEE |
publishDate |
2021 |
url |
https://doaj.org/article/06053aaa1432435eaf1b7fc745bef20f |