Compressing deep graph convolution network with multi-staged knowledge distillation.
Given a trained deep graph convolution network (GCN), how can we effectively compress it into a compact network without a significant loss of accuracy? Compressing a trained deep GCN into a compact one is of great importance for deploying the model in environments such as mobile or embedded systems...
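The abstract names knowledge distillation as the compression technique. As background, the following is a minimal sketch of a *generic* teacher-student distillation loss (temperature-softened KL divergence, as in standard knowledge distillation), not the paper's multi-staged scheme; the function names `softmax`, `kd_loss`, and the temperature `T` are illustrative choices, not from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution.
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened outputs,
    # scaled by T^2 as is conventional in knowledge distillation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * np.log(p / q)).sum(axis=-1).mean() * T * T)

# A student that exactly matches the teacher incurs zero distillation loss.
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.0, 1.0, -0.5]])
print(kd_loss(teacher, teacher))  # ~0.0
print(kd_loss(student, teacher) > 0)  # True
```

In practice this term is combined with the ordinary cross-entropy loss on ground-truth labels; the paper's contribution (staging the distillation) is not reproduced here.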
Saved in:
Main authors: | , , |
Format: | article |
Language: | EN |
Published: | Public Library of Science (PLoS), 2021 |
Subjects: | |
Online access: | https://doaj.org/article/ca3c820cee7544318b24cd0850d32610 |