Language Representation Models: An Overview
Over the last few decades, text mining has been used to extract knowledge from free text. Applying neural networks and deep learning to natural language processing (NLP) tasks has led to many accomplishments on real-world language problems. The developments of the last five years have...
Saved in:
Main Authors: Thorben Schomacker, Marina Tropmann-Frick
Format: article
Language: EN
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/a2b1f93d252a4263ad8616a20bf6b939
Similar Items
- Personal Interest Attention Graph Neural Networks for Session-Based Recommendation
  by: Xiangde Zhang, et al.
  Published: (2021)
- A Manifold Learning Perspective on Representation Learning: Learning Decoder and Representations without an Encoder
  by: Viktoria Schuster, et al.
  Published: (2021)
- How choosing random-walk model and network representation matters for flow-based community detection in hypergraphs
  by: Anton Eriksson, et al.
  Published: (2021)
- Conditional Deep Gaussian Processes: Multi-Fidelity Kernel Learning
  by: Chi-Ken Lu, et al.
  Published: (2021)
- Bert-Enhanced Text Graph Neural Network for Classification
  by: Yiping Yang, et al.
  Published: (2021)