A Linearly Involved Generalized Moreau Enhancement of ℓ2,1-Norm with Application to Weighted Group Sparse Classification

Bibliographic Details
Main Authors: Yang Chen, Masao Yamagishi, Isao Yamada
Format: Article
Language: EN
Published: MDPI AG 2021
Subjects:
Online Access: https://doaj.org/article/64f9e46ca8f4408a91b5946d0094eaf3
Description
Summary: This paper proposes a new group-sparsity-inducing regularizer to approximate the ℓ2,0 pseudo-norm. The regularizer is nonconvex and can be seen as a linearly involved generalized Moreau enhancement of the ℓ2,1-norm. Moreover, the overall convexity of the corresponding group-sparsity-regularized least squares problem can be achieved. The model can handle general group configurations, such as weighted group sparse problems, and can be solved with a proximal splitting algorithm. Among the applications, considering that the bias of convex regularizers may lead to incorrect classification results, especially for unbalanced training sets, we apply the proposed model to the (weighted) group sparse classification problem. The proposed classifier can exploit the label, similarity, and locality information of samples, and it suppresses the bias of classifiers based on convex regularizers. Experimental results demonstrate that the proposed classifier improves on convex ℓ2,1-regularizer-based methods, especially when the training data set is unbalanced. This paper enhances the potential applicability and effectiveness of nonconvex regularizers within the framework of convex optimization.
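To make the abstract's central idea concrete, the following is a minimal LaTeX sketch of how a linearly involved generalized Moreau enhancement of the ℓ2,1-norm is typically written in the general LiGME framework; the symbols A, L, B, μ and the unweighted grouping are illustrative assumptions, not details taken from this record, and the paper's exact weighted formulation may differ.

% Illustrative sketch only (LiGME-style enhancement of the l_{2,1}-norm);
% the grouping, weights, and matrices below are assumptions for exposition.
\[
  \|x\|_{2,1} \;=\; \sum_{g}\|x_g\|_2
  \qquad\text{($x_g$: subvector of $x$ on group $g$),}
\]
\[
  \bigl(\|\cdot\|_{2,1}\bigr)_B(z)
  \;=\; \|z\|_{2,1}
  \;-\; \min_{v}\Bigl(\|v\|_{2,1} + \tfrac{1}{2}\|B(z-v)\|_2^2\Bigr),
\]
\[
  \min_{x}\;\tfrac{1}{2}\|y - Ax\|_2^2 \;+\; \mu\,\bigl(\|\cdot\|_{2,1}\bigr)_B(Lx),
  \qquad
  \text{overall convexity if } A^{\top}A - \mu\, L^{\top}B^{\top}B\,L \succeq O.
\]

Roughly speaking, B = O removes the enhancement and recovers the ordinary convex ℓ2,1 penalty, while a stronger B pushes the penalty toward ℓ2,0-like behavior at the price of a more restrictive overall-convexity condition.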