Domain Adaption Based on Symmetric Matrices Space Bi-Subspace Learning and Source Linear Discriminant Analysis Regularization


Saved in:
Bibliographic Details
Main Authors: Qian Li, Zhengming Ma, Shuyu Liu, Yanli Pei
Format: Article
Language: English
Published: IEEE 2021
Subjects:
Online Access: https://doaj.org/article/4dfa2953c2e244aa9f43b97d25fac455
Description
Summary: At present, Symmetric Positive Definite (SPD) matrices are the most common non-Euclidean data in machine learning. Because SPD matrices do not form a linear space, most machine learning algorithms cannot be applied to them directly. The first purpose of this paper is to propose a new framework for machine learning on SPD data, in which the SPD matrices are transformed into the tangent spaces of a Riemannian manifold rather than into a Reproducing Kernel Hilbert Space (RKHS) as is usual. The second purpose is to apply the proposed framework to domain adaptation learning (DAL), adopting a bi-subspace learning architecture. Compared with the commonly used single-subspace architecture, the proposed architecture provides a broader optimization space in which to satisfy the domain adaptation criterion. Finally, to further improve classification accuracy, Linear Discriminant Analysis (LDA) regularization of the source-domain data is added. Experimental results on five real-world datasets show that the proposed algorithm outperforms five related state-of-the-art algorithms.
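
The abstract gives no implementation details, so the Python sketch below only illustrates the two standard building blocks it alludes to: mapping SPD matrices into the tangent space at a reference point via the matrix logarithm, so that ordinary linear subspace methods can operate on the resulting vectors, and the scatter matrices from which a source-domain LDA regularization term is typically built. The function names, the affine-invariant log map, and the choice of reference point are assumptions made for illustration; this is not the paper's bi-subspace algorithm.

import numpy as np
from scipy.linalg import eigh

def spd_log(M):
    # Matrix logarithm of an SPD matrix via eigendecomposition.
    w, V = eigh(M)
    return (V * np.log(w)) @ V.T

def spd_power(M, p):
    # Fractional power of an SPD matrix via eigendecomposition.
    w, V = eigh(M)
    return (V * (w ** p)) @ V.T

def log_map(S, ref):
    # Tangent-space image of the SPD matrix S at the reference point `ref`
    # (affine-invariant log map); the result is a symmetric matrix.
    r = spd_power(ref, -0.5)
    return spd_log(r @ S @ r)

def tangent_vectors(spd_list, ref):
    # Flatten tangent-space matrices into Euclidean vectors
    # (upper triangle, off-diagonal entries scaled by sqrt(2)).
    d = ref.shape[0]
    iu = np.triu_indices(d)
    scale = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return np.stack([log_map(S, ref)[iu] * scale for S in spd_list])

def lda_scatter(X, y):
    # Within-class (Sw) and between-class (Sb) scatter matrices,
    # the ingredients of a standard LDA-style regularization term.
    mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb

# Toy usage with random SPD matrices standing in for source-domain features.
rng = np.random.default_rng(0)
def random_spd(d):
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

source = [random_spd(4) for _ in range(20)]
labels = rng.integers(0, 2, size=20)
ref = np.mean(source, axis=0)        # arithmetic mean as a simple reference point
X = tangent_vectors(source, ref)     # Euclidean features for subspace learning
Sw, Sb = lda_scatter(X, labels)      # scatter matrices for an LDA regularizer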