Adversarial Attention-Based Variational Graph Autoencoder

Autoencoders have been successfully used for graph embedding, and many variants have been proven to effectively express graph data and conduct graph analysis in low-dimensional space. However, previous methods either ignore the structure and properties of the reconstructed graph or do not consider the latent data distribution of the graph, which typically leads to unsatisfactory graph embedding performance. In this paper, we propose the adversarial attention variational graph autoencoder (AAVGA), a novel framework that incorporates attention networks into the encoder and applies an adversarial mechanism during embedding training. The encoder involves node neighbors in the representation of each node by stacking attention layers, which can further improve the graph embedding performance of the encoder. At the same time, owing to the adversarial mechanism, the distribution of the latent features generated by the encoder is closer to the actual distribution of the original graph data, so the decoder generates a graph that is closer to the original graph. Experimental results show that AAVGA performs competitively with popular state-of-the-art graph encoders on three citation datasets.
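
The abstract describes the architecture only at a high level. The sketch below is not the authors' released code; it is a minimal PyTorch illustration of the general idea under stated assumptions: a two-layer graph-attention encoder produces Gaussian latent codes, an inner-product decoder reconstructs the adjacency matrix, and a small discriminator is trained to tell encoder outputs from prior samples, pushing the latent distribution toward the prior. The dense-adjacency attention layer, layer sizes, and training details are illustrative assumptions.

# Minimal sketch (assumption: PyTorch; NOT the authors' implementation of AAVGA).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGATLayer(nn.Module):
    """One graph-attention layer over a dense adjacency matrix (self-loops assumed)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a_src = nn.Parameter(torch.empty(out_dim))
        self.a_dst = nn.Parameter(torch.empty(out_dim))
        nn.init.xavier_uniform_(self.W.weight)
        nn.init.normal_(self.a_src, std=0.1)
        nn.init.normal_(self.a_dst, std=0.1)

    def forward(self, x, adj):
        h = self.W(x)                                                  # (N, out_dim)
        # e_ij = LeakyReLU(a_src . h_i + a_dst . h_j), masked to graph edges.
        e = F.leaky_relu((h @ self.a_src).unsqueeze(1) + (h @ self.a_dst).unsqueeze(0), 0.2)
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=1)                                # attention over neighbors
        return alpha @ h


class AttentionVGAE(nn.Module):
    """Attention-based encoder -> Gaussian latent codes -> inner-product decoder."""
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.gat1 = DenseGATLayer(in_dim, hid_dim)
        self.gat_mu = DenseGATLayer(hid_dim, lat_dim)
        self.gat_logvar = DenseGATLayer(hid_dim, lat_dim)

    def forward(self, x, adj):
        h = F.elu(self.gat1(x, adj))                                   # stacked attention layers
        mu, logvar = self.gat_mu(h, adj), self.gat_logvar(h, adj)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)        # reparameterization trick
        adj_rec = torch.sigmoid(z @ z.t())                             # inner-product decoder
        return adj_rec, z, mu, logvar


# Discriminator for adversarial regularization of the latent space: trained to
# separate prior samples ("real") from encoder outputs ("fake"), while the
# encoder is trained to fool it.
def make_discriminator(lat_dim):
    return nn.Sequential(nn.Linear(lat_dim, 32), nn.ReLU(), nn.Linear(32, 1))


# Toy usage on a random graph (shapes only; real training would combine the
# reconstruction, KL, and adversarial losses and alternate optimizer steps).
N, F_IN, LAT = 8, 5, 16
x = torch.randn(N, F_IN)
adj = ((torch.rand(N, N) > 0.5).float() + torch.eye(N)).clamp(max=1.0)
model, disc = AttentionVGAE(F_IN, 32, LAT), make_discriminator(LAT)
adj_rec, z, mu, logvar = model(x, adj)
real_logit = disc(torch.randn_like(z))    # samples from the Gaussian prior
fake_logit = disc(z.detach())             # latent codes from the encoder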


Bibliographic Details
Main Authors: Ziqiang Weng, Weiyu Zhang, Wei Dou
Format: article
Language: EN
Published: IEEE 2020
Subjects: Attention layers; adversarial mechanism; variational graph autoencoder; Electrical engineering. Electronics. Nuclear engineering; TK1-9971
Online Access: https://doaj.org/article/36c14220733f4a9a88d0654312455cd5
id oai:doaj.org-article:36c14220733f4a9a88d0654312455cd5
record_format dspace
spelling oai:doaj.org-article:36c14220733f4a9a88d0654312455cd5 2021-11-19T00:05:55Z
issn 2169-3536
doi 10.1109/ACCESS.2020.3018033
url https://ieeexplore.ieee.org/document/9171337/
url https://doaj.org/toc/2169-3536
publishDate 2020-01-01T00:00:00Z
source IEEE Access, Vol 8, Pp 152637-152645 (2020)
institution DOAJ
collection DOAJ
language EN
topic Attention layers
adversarial mechanism
variational graph autoencoder
Electrical engineering. Electronics. Nuclear engineering
TK1-9971
description Autoencoders have been successfully used for graph embedding, and many variants have been proven to effectively express graph data and conduct graph analysis in low-dimensional space. However, previous methods either ignore the structure and properties of the reconstructed graph or do not consider the latent data distribution of the graph, which typically leads to unsatisfactory graph embedding performance. In this paper, we propose the adversarial attention variational graph autoencoder (AAVGA), a novel framework that incorporates attention networks into the encoder and applies an adversarial mechanism during embedding training. The encoder involves node neighbors in the representation of each node by stacking attention layers, which can further improve the graph embedding performance of the encoder. At the same time, owing to the adversarial mechanism, the distribution of the latent features generated by the encoder is closer to the actual distribution of the original graph data, so the decoder generates a graph that is closer to the original graph. Experimental results show that AAVGA performs competitively with popular state-of-the-art graph encoders on three citation datasets.
format article
author Ziqiang Weng
Weiyu Zhang
Wei Dou
author_sort Ziqiang Weng
title Adversarial Attention-Based Variational Graph Autoencoder
title_sort adversarial attention-based variational graph autoencoder
publisher IEEE
publishDate 2020
url https://doaj.org/article/36c14220733f4a9a88d0654312455cd5
work_keys_str_mv AT ziqiangweng adversarialattentionbasedvariationalgraphautoencoder
AT weiyuzhang adversarialattentionbasedvariationalgraphautoencoder
AT weidou adversarialattentionbasedvariationalgraphautoencoder
_version_ 1718420662276587520