Generating Scenery Images with Larger Variety According to User Descriptions
In this paper, a framework based on generative adversarial networks is proposed to perform nature-scenery generation according to descriptions from the users. The desired place, time and seasons of the generated scenes can be specified with the help of text-to-image generation techniques. The framework improves and modifies the architecture of a generative adversarial network with attention models by adding the imagination models. The proposed attentional and imaginative generative network uses the hidden layer information to initialize the memory cell of the recurrent neural network to produce the desired photos. A data set containing different categories of scenery images is established to train the proposed system. The experiments validate that the proposed method is able to increase the quality and diversity of the generated images compared to the existing method. A possible application of road image generation for data augmentation is also demonstrated in the experimental results.
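The abstract describes conditioning the generator's recurrent network on the text description by using hidden-layer information to initialize the memory cell. The snippet below is a minimal, hypothetical sketch of that general idea only; it is not the authors' implementation, and all module names, layer sizes, and the toy 64×64 output are assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's code): a text encoder's final
# hidden state initializes the memory cell of an image-generating LSTM,
# so the generated output is conditioned on the user's description.
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=256, hidden_dim=512, z_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Project the text encoder's hidden-layer output into the initial
        # cell state (memory) of the generation LSTM.
        self.init_cell = nn.Linear(hidden_dim, hidden_dim)
        self.init_hidden = nn.Linear(z_dim, hidden_dim)
        self.gen_rnn = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.to_feature = nn.Linear(hidden_dim, 64 * 64 * 3)  # toy image size

    def forward(self, tokens, z, steps=4):
        emb = self.embed(tokens)                       # (B, T, embed_dim)
        _, (h_text, _) = self.text_encoder(emb)        # h_text: (1, B, hidden_dim)
        c0 = torch.tanh(self.init_cell(h_text))        # memory cell from text features
        h0 = torch.tanh(self.init_hidden(z)).unsqueeze(0)  # hidden state from noise
        # Feed the text feature as a constant input over a few refinement steps.
        inp = h_text.transpose(0, 1).repeat(1, steps, 1)
        out, _ = self.gen_rnn(inp, (h0, c0))
        return self.to_feature(out[:, -1]).view(-1, 3, 64, 64)

gen = TextConditionedGenerator()
tokens = torch.randint(0, 5000, (2, 8))   # batch of 2 hypothetical token sequences
z = torch.randn(2, 100)                   # noise vectors
fake_images = gen(tokens, z)              # (2, 3, 64, 64)
```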
Saved in:
Main Authors: | Hsu-Yung Cheng, Chih-Chang Yu |
---|---|
Format: | article |
Language: | EN |
Published: | MDPI AG, 2021 |
Subjects: | text to image, image generation, generative adversarial networks |
Online Access: | https://doaj.org/article/d4917a6f9c914dad8f72d117519c86a5 |
id | oai:doaj.org-article:d4917a6f9c914dad8f72d117519c86a5
---|---|
record_format | dspace
spelling | oai:doaj.org-article:d4917a6f9c914dad8f72d117519c86a5 (2021-11-11T15:16:22Z); DOI 10.3390/app112110224; ISSN 2076-3417; published 2021-11-01; full text: https://www.mdpi.com/2076-3417/11/21/10224; journal TOC: https://doaj.org/toc/2076-3417; source: Applied Sciences, Vol 11, Iss 10224, p 10224 (2021)
institution | DOAJ
collection | DOAJ
language | EN
topic | text to image; image generation; generative adversarial networks; Technology (T); Engineering (General). Civil engineering (General) (TA1-2040); Biology (General) (QH301-705.5); Physics (QC1-999); Chemistry (QD1-999)
description | In this paper, a framework based on generative adversarial networks is proposed to perform nature-scenery generation according to descriptions from the users. The desired place, time and seasons of the generated scenes can be specified with the help of text-to-image generation techniques. The framework improves and modifies the architecture of a generative adversarial network with attention models by adding the imagination models. The proposed attentional and imaginative generative network uses the hidden layer information to initialize the memory cell of the recurrent neural network to produce the desired photos. A data set containing different categories of scenery images is established to train the proposed system. The experiments validate that the proposed method is able to increase the quality and diversity of the generated images compared to the existing method. A possible application of road image generation for data augmentation is also demonstrated in the experimental results.
format | article
author | Hsu-Yung Cheng; Chih-Chang Yu
author_sort | Hsu-Yung Cheng
title | Generating Scenery Images with Larger Variety According to User Descriptions
publisher | MDPI AG
publishDate | 2021
url | https://doaj.org/article/d4917a6f9c914dad8f72d117519c86a5
_version_ | 1718435782739361792