A Manifold Learning Perspective on Representation Learning: Learning Decoder and Representations without an Encoder
Autoencoders are commonly used in representation learning. They consist of an encoder and a decoder, which provide a straightforward method to map n-dimensional data in input space to a lower m-dimensional representation space and back. The decoder itself define...
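The encoder/decoder mapping described in the abstract can be sketched minimally as a pair of linear maps between input space and representation space. This is an illustrative sketch only, not the method proposed in the paper (which learns the decoder and representations without an encoder); the dimensions n=8, m=2 and the random weights are arbitrary assumptions for demonstration.

```python
import numpy as np

# Minimal linear autoencoder sketch (illustrative only; n, m chosen arbitrarily).
rng = np.random.default_rng(0)
n, m = 8, 2                               # input dim and representation dim
W_enc = 0.1 * rng.normal(size=(n, m))     # encoder weights: maps n -> m
W_dec = 0.1 * rng.normal(size=(m, n))     # decoder weights: maps m -> n

def encode(x):
    """Map n-dimensional inputs to m-dimensional representations."""
    return x @ W_enc

def decode(z):
    """Map m-dimensional representations back to input space."""
    return z @ W_dec

x = rng.normal(size=(4, n))               # a batch of 4 inputs
z = encode(x)                             # representations, shape (4, 2)
x_hat = decode(z)                         # reconstructions, shape (4, 8)
print(z.shape, x_hat.shape)               # (4, 2) (4, 8)
```

A trained autoencoder would fit `W_enc` and `W_dec` by minimizing reconstruction error between `x` and `x_hat`; here the weights are random purely to show the dimensionality of the two maps.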
Main Authors: Viktoria Schuster, Anders Krogh
Format: article
Language: EN
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/d20c216f39bc4836996f2afcc9ba9edc
Similar Items
- Optimizing Few-Shot Learning Based on Variational Autoencoders
  by: Ruoqi Wei, et al.
  Published: (2021)
- Language Representation Models: An Overview
  by: Thorben Schomacker, et al.
  Published: (2021)
- How choosing random-walk model and network representation matters for flow-based community detection in hypergraphs
  by: Anton Eriksson, et al.
  Published: (2021)
- Quantum compiling by deep reinforcement learning
  by: Lorenzo Moro, et al.
  Published: (2021)
- Learning the best nanoscale heat engines through evolving network topology
  by: Yuto Ashida, et al.
  Published: (2021)