Benchmarking graph neural networks for materials chemistry

Abstract Graph neural networks (GNNs) have received intense interest as a rapidly expanding class of machine learning models remarkably well-suited for materials applications. To date, a number of successful GNNs have been proposed and demonstrated for systems ranging from crystal stability to electronic property prediction and to surface chemistry and heterogeneous catalysis. However, a consistent benchmark of these models remains lacking, hindering the development and consistent evaluation of new models in the materials field. Here, we present a workflow and testing platform, MatDeepLearn, for quickly and reproducibly assessing and comparing GNNs and other machine learning models. We use this platform to optimize and evaluate a selection of top-performing GNNs on several representative datasets in computational materials chemistry. From our investigations we note the importance of hyperparameter selection and find roughly similar performances for the top models once optimized. We identify several strengths of GNNs over conventional models on compositionally diverse datasets, as well as their overall flexibility with respect to inputs, owing to learned rather than predefined representations. Several weaknesses of GNNs are also observed, including high data requirements, and suggestions for further improvement for applications in materials chemistry are discussed.
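The abstract highlights that GNNs take learned graph representations of atomic structures as input, rather than hand-defined descriptors. As a hedged illustration only (not taken from the paper, and MatDeepLearn's actual preprocessing may differ), a minimal sketch of the typical first step, building a radius graph from atomic positions:

```python
# Minimal sketch: build a radius graph (edge list with distances) from atomic
# positions, a common input representation for materials GNNs.
# The coordinates and cutoff below are hypothetical example values, not data
# from the paper or from MatDeepLearn.
from itertools import combinations
import math

def radius_graph(positions, cutoff):
    """Return undirected edges (i, j, distance) for atom pairs within cutoff."""
    edges = []
    for i, j in combinations(range(len(positions)), 2):
        d = math.dist(positions[i], positions[j])  # Euclidean distance
        if d <= cutoff:
            edges.append((i, j, d))
    return edges

# Toy square fragment of atoms (angstroms); the cutoff links nearest neighbours
# only, so the two diagonal pairs are excluded.
pos = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0), (2.0, 2.0, 0.0)]
edges = radius_graph(pos, cutoff=2.5)
```

A GNN would then attach per-atom features to the nodes and per-pair features (e.g. distances) to these edges and learn its representation by message passing; production codes typically also handle periodic boundary conditions, which this sketch omits.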

Bibliographic Details
Main Authors: Victor Fung, Jiaxin Zhang, Eric Juarez, Bobby G. Sumpter
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Materials of engineering and construction. Mechanics of materials (TA401-492); Computer software (QA76.75-76.765)
Online Access: https://doaj.org/article/86b6e1b70bf24df7bb74a248da2c8e25
DOI: https://doi.org/10.1038/s41524-021-00554-0
ISSN: 2057-3960
Published in: npj Computational Materials, Vol 7, Iss 1, Pp 1-8 (2021)