Synthetic Source Universal Domain Adaptation through Contrastive Learning

Universal domain adaptation (UDA) is a crucial research topic for efficiently training deep learning models on data from various imaging sensors. However, its development is hampered by the target data being unlabeled. Moreover, the absence of prior knowledge about the source and target domains makes it more challenging for UDA to train models. I hypothesize that the degradation of trained models in the target domain is caused by the lack of a direct training loss that improves the discriminative power of the target domain data. As a result, target data adapted to the source representations are biased toward the source domain. I found that the degradation was more pronounced when I used synthetic data for the source domain and real data for the target domain. In this paper, I propose a UDA method with target domain contrastive learning. The proposed method enables models to leverage synthetic data for the source domain and to train the discriminativeness of target features in an unsupervised manner. In addition, the target domain feature extraction network is shared with the source domain classification task, preventing unnecessary computational growth. Extensive experimental results on VisDA-2017 and MNIST to SVHN demonstrated that the proposed method significantly outperforms the baseline by 2.7% and 5.1%, respectively.
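The abstract describes training the discriminativeness of unlabeled target features with a contrastive objective. The paper's own loss is not reproduced in this record; as a rough illustration of the general idea (pulling together two augmented views of the same unlabeled target sample while pushing apart all other samples), a minimal NT-Xent-style contrastive loss can be sketched as follows. The function name and the NumPy implementation are assumptions for illustration, not the paper's code.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Minimal NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) feature batches from two augmented views of the same
    N unlabeled target images; matching rows form the positive pairs.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize features
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # the positive for row i is row i+N (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # row-wise log-softmax, then pick the log-probability of each positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), pos].mean())
```

With near-identical views the positive pairs dominate each row's softmax and the loss is small; with unrelated views the loss is larger, which is the gradient signal that sharpens target-feature discriminativeness without labels.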

Saved in:
Bibliographic Details
Main Author: Jungchan Cho
Format: article
Language: EN
Published: MDPI AG, 2021
Subjects: universal domain adaptation; contrastive learning; classification; deep learning; Chemical technology (TP1-1185)
Online Access: https://doaj.org/article/7a576453521840dfb764b12bf8684110
id oai:doaj.org-article:7a576453521840dfb764b12bf8684110
record_format dspace
doi 10.3390/s21227539
issn 1424-8220
publishDate 2021-11-01
url https://www.mdpi.com/1424-8220/21/22/7539
url https://doaj.org/toc/1424-8220
source Sensors, Vol 21, Iss 22, p 7539 (2021)
institution DOAJ
collection DOAJ
language EN
topic universal domain adaptation
contrastive learning
classification
deep learning
Chemical technology
TP1-1185
description Universal domain adaptation (UDA) is a crucial research topic for efficiently training deep learning models on data from various imaging sensors. However, its development is hampered by the target data being unlabeled. Moreover, the absence of prior knowledge about the source and target domains makes it more challenging for UDA to train models. I hypothesize that the degradation of trained models in the target domain is caused by the lack of a direct training loss that improves the discriminative power of the target domain data. As a result, target data adapted to the source representations are biased toward the source domain. I found that the degradation was more pronounced when I used synthetic data for the source domain and real data for the target domain. In this paper, I propose a UDA method with target domain contrastive learning. The proposed method enables models to leverage synthetic data for the source domain and to train the discriminativeness of target features in an unsupervised manner. In addition, the target domain feature extraction network is shared with the source domain classification task, preventing unnecessary computational growth. Extensive experimental results on VisDA-2017 and MNIST to SVHN demonstrated that the proposed method significantly outperforms the baseline by 2.7% and 5.1%, respectively.
format article
author Jungchan Cho
title Synthetic Source Universal Domain Adaptation through Contrastive Learning
publisher MDPI AG
publishDate 2021
url https://doaj.org/article/7a576453521840dfb764b12bf8684110