Textured Mesh Generation Using Multi-View and Multi-Source Supervision and Generative Adversarial Networks
This study focuses on reconstructing accurate meshes with high-resolution textures from single images. The reconstruction process involves two networks: a mesh-reconstruction network and a texture-reconstruction network. The mesh-reconstruction network estimates a deformation map, which is used to deform a template mesh to the shape of the target object in the input image, together with a low-resolution texture. We propose reconstructing a mesh with a high-resolution texture by enhancing the low-resolution texture using a super-resolution method. The texture-reconstruction network follows the architecture of a generative adversarial network, comprising a generator and a discriminator. During training of the texture-reconstruction network, the discriminator must focus on learning high-quality texture prediction and ignore the difference between the generated mesh and the actual mesh. To achieve this, we use meshes reconstructed by the mesh-reconstruction network and textures obtained through inverse rendering to generate pseudo-ground-truth images. Experiments on the 3D-Future dataset show that the proposed approach generates better three-dimensional (3D) textured meshes than existing methods, both quantitatively and qualitatively, and significantly improves the texture of the output image.
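To make the pipeline in the abstract concrete, below is a minimal PyTorch sketch of the two-stage design: a mesh-reconstruction network that predicts a deformation map and a low-resolution texture from the input image, a super-resolution generator that upsamples the texture, and a discriminator of the kind used for adversarial training. Everything here is an illustrative assumption (class names, layer widths, a 64x64 UV texture upsampled 4x), not the authors' implementation; the template mesh, differentiable rendering, inverse rendering, and loss functions are omitted.

```python
import torch
import torch.nn as nn


class MeshReconstructionNet(nn.Module):
    """Predicts a deformation map (template-vertex offsets in UV space) and a
    low-resolution texture from a single input image. Illustrative only."""

    def __init__(self, tex_res=64):
        super().__init__()
        self.tex_res = tex_res
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # One (x, y, z) offset per texel of the template's UV grid.
        self.deform_head = nn.Conv2d(128, 3, 3, padding=1)
        # RGB low-resolution texture in the same UV parameterization.
        self.texture_head = nn.Conv2d(128, 3, 3, padding=1)

    def forward(self, image):
        feat = self.encoder(image)
        feat = nn.functional.interpolate(
            feat, size=(self.tex_res, self.tex_res),
            mode="bilinear", align_corners=False)
        deformation_map = self.deform_head(feat)
        low_res_texture = torch.sigmoid(self.texture_head(feat))
        return deformation_map, low_res_texture


class TextureSRGenerator(nn.Module):
    """Super-resolves the low-resolution texture (4x upsampling here)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, low_res_texture):
        return self.net(low_res_texture)


class PatchDiscriminator(nn.Module):
    """Scores images patch-wise; in the described setup it would compare
    renderings of the textured mesh against pseudo-ground-truth renderings."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),
        )

    def forward(self, image):
        return self.net(image)


if __name__ == "__main__":
    image = torch.randn(1, 3, 256, 256)
    deformation_map, low_res_tex = MeshReconstructionNet()(image)
    high_res_tex = TextureSRGenerator()(low_res_tex)
    # The discriminator call only demonstrates tensor shapes here; training
    # would feed it rendered images, not the raw texture map.
    scores = PatchDiscriminator()(high_res_tex)
    print(deformation_map.shape, low_res_tex.shape, high_res_tex.shape, scores.shape)
    # torch.Size([1, 3, 64, 64]) torch.Size([1, 3, 64, 64])
    # torch.Size([1, 3, 256, 256]) torch.Size([1, 1, 63, 63])
```

The key design point from the abstract is that the discriminator is trained against pseudo-ground-truth images, i.e. renderings of the reconstructed mesh textured via inverse rendering, so that geometry errors of the mesh-reconstruction stage do not dominate the adversarial signal and the discriminator can concentrate on texture quality.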
Saved in:
Main Authors: | Mingyun Wen, Jisun Park, Kyungeun Cho
---|---
Format: | article
Language: | EN
Published: | MDPI AG, 2021
Subjects: | single image textured mesh reconstruction; convolutional neural networks; generative adversarial network; super-resolution
Online Access: | https://doaj.org/article/b08f0012ca384ebdb65dcb0499bc9160
id | oai:doaj.org-article:b08f0012ca384ebdb65dcb0499bc9160
---|---
record_format | dspace
spelling | DOI: 10.3390/rs13214254; ISSN: 2072-4292; published 2021-10-01; full text: https://www.mdpi.com/2072-4292/13/21/4254; journal TOC: https://doaj.org/toc/2072-4292; source: Remote Sensing, Vol 13, Iss 4254, p 4254 (2021); last indexed 2021-11-11T18:51:37Z (this field also repeats the title, abstract, authors, publisher, and keywords given elsewhere in this record)
institution | DOAJ
collection | DOAJ
language | EN
topic | single image textured mesh reconstruction; convolutional neural networks; generative adversarial network; super-resolution; Science; Q
format | article
author | Mingyun Wen; Jisun Park; Kyungeun Cho
author_sort | Mingyun Wen
title | Textured Mesh Generation Using Multi-View and Multi-Source Supervision and Generative Adversarial Networks
publisher | MDPI AG
publishDate | 2021
url | https://doaj.org/article/b08f0012ca384ebdb65dcb0499bc9160
_version_ | 1718431720297988096