TDCMR: Triplet-Based Deep Cross-Modal Retrieval for Geo-Multimedia Data
Massive amounts of multimedia data with geographical information (geo-multimedia) are collected and stored on the Internet due to the wide adoption of location-based services (LBS). Finding high-level semantic relationships between geo-multimedia data and constructing an efficient index are crucial for large-scale geo-multimedia retrieval. To address this challenge, this paper proposes a deep cross-modal hashing framework for geo-multimedia retrieval, termed Triplet-based Deep Cross-Modal Retrieval (TDCMR), which uses a deep neural network and an enhanced triplet constraint to capture high-level semantics. In addition, a novel hybrid index, called TH-Quadtree, is developed by combining cross-modal binary hash codes with a quadtree to support high-performance search. Extensive experiments on three commonly used benchmarks show the superior performance of the proposed method.
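This record gives no implementation details. As a minimal sketch of the kind of triplet constraint the abstract refers to (the function name, margin value, and toy NumPy embeddings are assumptions, not taken from TDCMR), the snippet below computes a standard triplet margin loss that pulls a matching image-text embedding pair together and pushes a non-matching pair apart:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.5):
    """Hinge-style triplet loss: the matching (positive) embedding should end
    up at least `margin` closer to the anchor than the non-matching (negative)
    one. All three arguments are 1-D embedding vectors of equal length."""
    d_pos = np.linalg.norm(anchor - positive)  # anchor-to-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-to-negative distance
    return max(0.0, d_pos - d_neg + margin)    # zero once the gap exceeds the margin

# Toy usage: random vectors stand in for deep-network image/text embeddings.
rng = np.random.default_rng(0)
img, txt_match, txt_other = rng.normal(size=(3, 64))
print(triplet_margin_loss(img, txt_match, txt_other))
```

How TDCMR enhances this basic triplet form is described in the full article linked below.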
Main Authors: Jiagang Song, Yunwu Lin, Jiayu Song, Weiren Yu, Leyuan Zhang
Format: article
Language: English (EN)
Published: MDPI AG, 2021-11-01 (Applied Sciences, Vol. 11, Iss. 22, Article 10803; ISSN 2076-3417)
DOI: 10.3390/app112210803
Subjects: geo-multimedia; nearest neighbor query; cross-modal hashing; triplet loss; TH-Quadtree
LCC Classes: Technology (T); Engineering (General). Civil engineering (General) (TA1-2040); Biology (General) (QH301-705.5); Physics (QC1-999); Chemistry (QD1-999)
Online Access: https://doaj.org/article/ca3604f6b3c54fdfb7dee462a3b5ae05 | https://www.mdpi.com/2076-3417/11/22/10803
Record ID: oai:doaj.org-article:ca3604f6b3c54fdfb7dee462a3b5ae05
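The TH-Quadtree index is only named in this record. Purely as an illustration of the general idea of pairing a spatial filter with Hamming-distance ranking over binary hash codes (all names below are hypothetical, and a brute-force bounding-box scan stands in for the quadtree traversal), a hybrid geo-multimedia query might look like this:

```python
from typing import List, Tuple

# A stored geo-multimedia object: (object id, (x, y) location, hash code packed into an int).
Item = Tuple[str, Tuple[float, float], int]

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary hash codes packed into ints."""
    return bin(a ^ b).count("1")

def range_then_rank(items: List[Item],
                    query_code: int,
                    bbox: Tuple[float, float, float, float],
                    k: int = 5) -> List[Item]:
    """Spatial-first hybrid search: keep only items inside the query bounding
    box (xmin, ymin, xmax, ymax), then rank the survivors by Hamming distance
    between their hash codes and the query's cross-modal hash code. Here the
    spatial step is a brute-force scan; in a hybrid index such as TH-Quadtree
    that scan is replaced by a quadtree traversal."""
    xmin, ymin, xmax, ymax = bbox
    candidates = [it for it in items
                  if xmin <= it[1][0] <= xmax and ymin <= it[1][1] <= ymax]
    candidates.sort(key=lambda it: hamming(it[2], query_code))
    return candidates[:k]

# Toy usage: three geo-tagged items with 8-bit hash codes.
db = [("a", (1.0, 1.0), 0b10110010),
      ("b", (2.0, 2.0), 0b10110011),
      ("c", (9.0, 9.0), 0b01001100)]
print(range_then_rank(db, query_code=0b10110000, bbox=(0.0, 0.0, 5.0, 5.0)))
```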