Optimizing Small BERTs Trained for German NER
Currently, the most widespread neural network architecture for training language models is the so-called BERT, which led to improvements in various Natural Language Processing (NLP) tasks. In general, the larger the number of parameters in a BERT model, the better the results obtained in these NLP t...
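The abstract refers to fine-tuning BERT models for German named entity recognition (NER). As a rough illustration of that kind of setup (not code from the article), the sketch below loads a German BERT checkpoint for token classification with the Hugging Face transformers library; the checkpoint name and label set are assumptions chosen for the example.

```python
# Minimal sketch, not the authors' implementation: load an (assumed) German BERT
# checkpoint for token classification and tag one sentence. Before fine-tuning,
# the classification head is randomly initialized, so the predictions are not
# meaningful yet.
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]  # assumed tag set
checkpoint = "bert-base-german-cased"  # assumed German BERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(
    checkpoint, num_labels=len(labels)
)

# Tokenize, run the model, and map the argmax logits back to label strings.
# Predictions are per subword token and include the [CLS]/[SEP] special tokens.
encoding = tokenizer("Angela Merkel besuchte Berlin.", return_tensors="pt")
logits = model(**encoding).logits
predicted_ids = logits.argmax(dim=-1)[0].tolist()
print([labels[i] for i in predicted_ids])
```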
Main authors: Jochen Zöllner, Konrad Sperfeld, Christoph Wick, Roger Labahn
Format: Article
Language: English
Published: MDPI AG, 2021
Online access: https://doaj.org/article/c269d264e4014373a18286098ba93af0
Similar Items
- RDFsim: Similarity-Based Browsing over DBpedia Using Embeddings
  by: Manos Chatzakis, et al.
  Published: (2021)
- THE SYSTEM RECOGNIZES SURFACE DEFECTS OF MARBLE SLABS BASED ON SEGMENTATION METHODS
  by: E. Sipko, et al.
  Published: (2020)
- THE EXPERT SYSTEM OF CONTROL AND KNOWLEDGE ASSESSMENT
  by: V. Golovachyova, et al.
  Published: (2020)
- Pattern Recognition of Human Face With Photos Using KNN Algorithm
  by: Dedy Kurniadi, et al.
  Published: (2021)
- Optimization and improvement of fake news detection using deep learning approaches for societal benefit
  by: Tavishee Chauhan, M.E, et al.
  Published: (2021)