Compressing deep graph convolution network with multi-staged knowledge distillation.

Given a trained deep graph convolutional network (GCN), how can we effectively compress it into a compact network without a significant loss of accuracy? Compressing a trained deep GCN into a compact GCN is of great importance for deploying the model in environments such as mobile or embedded systems...


Bibliographic Details
Main Authors: Junghun Kim, Jinhong Jung, U Kang
Format: Article
Language: EN
Published: Public Library of Science (PLoS), 2021
Subjects: R; Q
Online Access: https://doaj.org/article/ca3c820cee7544318b24cd0850d32610