Compressing deep graph convolution network with multi-staged knowledge distillation.

Given a trained deep graph convolution network (GCN), how can we effectively compress it into a compact network without significant loss of accuracy? Compressing a trained deep GCN into a compact GCN is of great importance for deploying the model in environments with limited computing resources, such as mobile or embedded systems. However, previous works on compressing deep GCNs do not consider multi-hop aggregation, even though it is the main purpose of stacking multiple GCN layers. In this work, we propose MustaD (Multi-staged knowledge Distillation), a novel approach for compressing deep GCNs into single-layered GCNs through multi-staged knowledge distillation (KD). MustaD distills the knowledge of 1) the aggregation from multiple GCN layers and 2) the task prediction, while preserving the multi-hop feature aggregation of deep GCNs in a single effective layer. Extensive experiments on four real-world datasets show that MustaD provides state-of-the-art performance among KD-based methods. Specifically, MustaD achieves up to 4.21 percentage points higher accuracy than the second-best KD models.

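The abstract describes MustaD only at a high level. As a reading aid, the following is a minimal, hypothetical PyTorch sketch of how such multi-staged distillation could be set up: a frozen deep GCN teacher supplies 1) its final aggregated node representation and 2) its softened predictions, and a single-layer student trained on K-hop-propagated features imitates both. The names (`SingleLayerStudent`, `precompute_khop`, `distill_step`), the loss weights, and the assumption that the teacher exposes `teacher_hidden` and `teacher_logits` are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of multi-staged KD for GCN compression; not the authors' code.
import torch.nn as nn
import torch.nn.functional as F


class SingleLayerStudent(nn.Module):
    """Single 'effective' GCN layer: K-hop propagation is done once up front,
    so one hidden transformation stands in for K stacked GCN layers."""

    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid_dim)      # hid_dim assumed equal to the teacher's hidden width
        self.classify = nn.Linear(hid_dim, out_dim)

    def forward(self, x_khop):
        h = F.relu(self.embed(x_khop))               # stage 1 target: aggregation / feature KD
        return h, self.classify(h)                   # stage 2 target: prediction KD


def precompute_khop(adj_norm, x, k):
    """Apply the (dense) normalized adjacency k times so the single layer
    still sees a k-hop neighborhood, mimicking deep GCN aggregation."""
    for _ in range(k):
        x = adj_norm @ x
    return x


def distill_step(student, opt, x_khop, labels, train_mask,
                 teacher_hidden, teacher_logits,
                 tau=2.0, alpha=1.0, beta=1.0):
    """One training step: supervised loss plus two distillation stages."""
    student.train()
    opt.zero_grad()
    h, logits = student(x_khop)

    # Stage 1: imitate the teacher's final aggregated node representation.
    loss_feat = F.mse_loss(h, teacher_hidden)

    # Stage 2: imitate the teacher's softened class predictions.
    loss_pred = F.kl_div(
        F.log_softmax(logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau * tau

    # Ordinary cross-entropy on the labeled training nodes.
    loss_ce = F.cross_entropy(logits[train_mask], labels[train_mask])

    loss = loss_ce + alpha * loss_feat + beta * loss_pred
    loss.backward()
    opt.step()
    return loss.item()
```

Precomputing the K-hop propagation is one way to retain a K-hop receptive field with a single weight matrix (in the spirit of simplified GCNs); whether MustaD's "single effective layer" is built this way is not stated in this record.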

Bibliographic Details
Main Authors: Junghun Kim, Jinhong Jung, U Kang
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2021
Published in: PLoS ONE, Vol 16, Iss 8, p e0256187 (2021)
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0256187
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/ca3c820cee7544318b24cd0850d32610