Benchmarking the performance of Bayesian optimization across multiple experimental materials science domains

Abstract: Bayesian optimization (BO) has been leveraged for guiding autonomous and high-throughput experiments in materials science. However, few have evaluated the efficiency of BO across a broad range of experimental materials domains. In this work, we quantify the performance of BO with a collection of surrogate model and acquisition function pairs across five diverse experimental materials systems. By defining acceleration and enhancement metrics for materials optimization objectives, we find that surrogate models such as Gaussian Process (GP) with anisotropic kernels and Random Forest (RF) have comparable performance in BO, and both outperform the commonly used GP with isotropic kernels. GP with anisotropic kernels has demonstrated the most robustness, yet RF is a close alternative and warrants more consideration because it is free from distribution assumptions, has smaller time complexity, and requires less effort in initial hyperparameter selection. We also raise awareness about the benefits of using GP with anisotropic kernels in future materials optimization campaigns.
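
The workflow benchmarked in the paper pairs a surrogate model (a GP with an anisotropic kernel, or a Random Forest) with an acquisition function that picks the next experiment from a pool of candidate process conditions. The sketch below is not the authors' code: the placeholder objective, the candidate pool, the Expected Improvement acquisition, and the use of scikit-learn surrogates are illustrative assumptions intended only to show the loop structure.

import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(x):
    # Placeholder "materials" objective to be maximized (not from the paper).
    return -np.sum((x - 0.3) ** 2, axis=-1)

# Discrete pool of candidate process conditions (2 parameters in [0, 1]).
pool = rng.random((500, 2))

def expected_improvement(mu, sigma, best, xi=0.01):
    # Standard EI for maximization on a discrete candidate set.
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def run_bo(make_surrogate, n_init=5, n_iter=20):
    idx = list(rng.choice(len(pool), n_init, replace=False))
    X, y = pool[idx], objective(pool[idx])
    for _ in range(n_iter):
        model = make_surrogate()
        model.fit(X, y)
        if isinstance(model, GaussianProcessRegressor):
            mu, sigma = model.predict(pool, return_std=True)
        else:
            # RF has no native predictive variance; tree-to-tree spread is a rough proxy.
            preds = np.stack([tree.predict(pool) for tree in model.estimators_])
            mu, sigma = preds.mean(axis=0), preds.std(axis=0)
        ei = expected_improvement(mu, sigma, y.max())
        ei[idx] = -np.inf                      # do not re-measure sampled points
        nxt = int(np.argmax(ei))
        idx.append(nxt)
        X = np.vstack([X, pool[nxt]])
        y = np.append(y, objective(pool[nxt]))
    return y.max()

# Anisotropic (ARD) kernel: one learned length scale per input dimension.
gp_aniso = lambda: GaussianProcessRegressor(
    Matern(length_scale=[1.0, 1.0], nu=2.5), normalize_y=True)
rf = lambda: RandomForestRegressor(n_estimators=100, random_state=0)

print("GP (anisotropic):", run_bo(gp_aniso))
print("Random Forest:   ", run_bo(rf))

The list-valued length_scale is what makes the GP kernel anisotropic, letting the surrogate learn a separate sensitivity for each process variable; the Random Forest variant relies on the ensemble spread as an uncertainty stand-in, which is an assumption of this sketch rather than the paper's exact implementation.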

Bibliographic Details
Main Authors: Qiaohao Liang, Aldair E. Gongora, Zekun Ren, Armi Tiihonen, Zhe Liu, Shijing Sun, James R. Deneault, Daniil Bash, Flore Mekki-Berrada, Saif A. Khan, Kedar Hippalgaonkar, Benji Maruyama, Keith A. Brown, John Fisher III, Tonio Buonassisi
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Materials of engineering and construction. Mechanics of materials (TA401-492); Computer software (QA76.75-76.765)
Online Access: https://doaj.org/article/bfc6d1e9f14b4b8bbc093f32c3360b8c
DOI: 10.1038/s41524-021-00656-9 (https://doi.org/10.1038/s41524-021-00656-9)
ISSN: 2057-3960
Publication Date: 2021-11-01
Source: npj Computational Materials, Vol 7, Iss 1, Pp 1-10 (2021)