Full body virtual try‐on with semi‐self‐supervised learning

Bibliographic Details
Main Authors: Hyug‐Jae Lee, Byumhyuk Koo, Ha‐Eun Ahn, Minseok Kang, Rokkyu Lee, Gunhan Park
Format: Article
Language: English
Published: Wiley, 2021
Online Access: https://doaj.org/article/1e9d778467f44d22a26f9f3182cabfcc
Description
Summary: This paper proposes a full body virtual try‐on system that handles both top and bottom garments and generates realistic try‐on images. For full body virtual try‐on, the paper addresses the lack of suitable training data for aligning and fitting top and bottom garments naturally. The proposed system consists of three modules: a Clothing Guide Module (CGM), a Geometric Matching Module (GMM), and a Try‐On Module (TOM). CGM is introduced to generate a clothing guide map (CGMap) that describes the shape of a garment on a model. Unlike the single‐garment virtual try‐on setting, it is impractical to collect meaningful data at a large scale for the multi‐garment setting. To address this problem, two novel training strategies are proposed to leverage the existing training data. First, a pseudo model‐top‐bottom triplet is generated from an existing model‐top or model‐bottom pair. Second, the CGM network is exposed to both top and bottom garments during training. The subsequent GMM networks then warp and align the top and bottom garments. Finally, TOM synthesizes a realistic try‐on image from the aligned garments and the CGMap. Experimental results demonstrate the strong performance of the proposed method on full body virtual try‐on.
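
The abstract describes a three‐stage pipeline in which CGM predicts a CGMap, GMM warps the top and bottom garments toward it, and TOM fuses everything into the final image. The following is a minimal PyTorch sketch of how such a pipeline could be wired together; all module interfaces, layer sizes, channel counts, and the dense flow‐field warping used here in place of the paper's geometric matching are assumptions made for illustration, not the authors' implementation.

# Illustrative sketch of the CGM -> GMM -> TOM pipeline described in the abstract.
# Every architectural detail below is a hypothetical stand-in, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClothingGuideModule(nn.Module):
    """Predicts a clothing guide map (CGMap) describing garment shape on the model.
    Assumed inputs: an RGB model representation plus flat top and bottom garment images."""
    def __init__(self, in_ch=9, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, model_repr, top, bottom):
        return self.net(torch.cat([model_repr, top, bottom], dim=1))


class GeometricMatchingModule(nn.Module):
    """Warps a flat garment toward the CGMap. A dense flow field plus grid_sample
    stands in here for the paper's geometric matching."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.flow = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),
        )

    def forward(self, garment, cgmap):
        b, _, h, w = garment.shape
        flow = self.flow(torch.cat([garment, cgmap], dim=1))  # (b, 2, h, w)
        # Identity sampling grid in [-1, 1], offset by the predicted flow.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=garment.device),
            torch.linspace(-1, 1, w, device=garment.device),
            indexing="ij",
        )
        grid = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
        return F.grid_sample(
            garment, grid + flow.permute(0, 2, 3, 1), align_corners=True)


class TryOnModule(nn.Module):
    """Synthesizes the final try-on image from the warped garments and the CGMap."""
    def __init__(self, in_ch=10, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, model_repr, warped_top, warped_bottom, cgmap):
        return self.net(
            torch.cat([model_repr, warped_top, warped_bottom, cgmap], dim=1))

A single forward pass would then chain the modules: the CGMap from CGM conditions both garment warps, and TOM combines the warped top, warped bottom, and CGMap into the try‐on image, e.g. cgmap = cgm(model_repr, top, bottom); warped_top = gmm(top, cgmap); warped_bottom = gmm(bottom, cgmap); result = tom(model_repr, warped_top, warped_bottom, cgmap).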