Extrapolative Bayesian Optimization with Gaussian Process and Neural Network Ensemble Surrogate Models
Bayesian optimization (BO) has emerged as the algorithm of choice for guiding the selection of experimental parameters in automated, active-learning-driven, high-throughput experiments in materials science and chemistry. Previous studies suggest that the optimization performance of the typical surrogate model in the BO algorithm, Gaussian processes (GPs), may be limited by their inability to handle complex datasets. Herein, various surrogate models for BO, including GPs and neural network ensembles (NNEs), are investigated. Two materials datasets of differing complexity and properties are used to compare the performance of GPs and NNEs: the first is the compressive strength of concrete (8 inputs, 1 target), and the second is a simulated high-dimensional dataset of thermoelectric properties of inorganic materials (22 inputs, 1 target). While NNEs converge faster toward optimum values, GPs with optimized kernels ultimately achieve the best evaluated values after 100 iterations, even for the most complex dataset, a result contrary to expectations. These findings shed new light on surrogate models for BO and can help accelerate the inverse design of new materials with better structural and functional performance.
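To make the comparison concrete, the sketch below shows the kind of pool-based BO loop the abstract describes, with a GP surrogate and an expected-improvement acquisition function. It is a minimal illustration assuming scikit-learn and SciPy; the synthetic 8-feature dataset, Matérn kernel, and acquisition settings are assumptions for the example, not the authors' published setup.

```python
# Minimal sketch of pool-based Bayesian optimization with a GP surrogate.
# The dataset, kernel, and acquisition settings are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical stand-in for a materials dataset: 8 input features
# (e.g., concrete mix proportions) and one target (compressive strength).
X_pool = rng.random((500, 8))
y_pool = (X_pool ** 2).sum(axis=1) + 0.1 * rng.standard_normal(500)

# Seed the optimization with a few labelled points.
idx = list(rng.choice(len(X_pool), size=10, replace=False))

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI acquisition for maximization; larger means more promising."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

for _ in range(100):  # 100 iterations, as in the study
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_pool[idx], y_pool[idx])
    mu, sigma = gp.predict(X_pool, return_std=True)
    ei = expected_improvement(mu, sigma, best=y_pool[idx].max())
    ei[idx] = -np.inf                   # do not re-select measured points
    idx.append(int(np.argmax(ei)))      # "measure" the most promising candidate

print("best value found:", y_pool[idx].max())
```

The loop accommodates any surrogate that returns a predictive mean and an uncertainty estimate, which is what makes the GP-versus-NNE comparison in the paper possible.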
Saved in:
Main authors: Yee-Fun Lim, Chee Koon Ng, U.S. Vaitesswar, Kedar Hippalgaonkar
Format: article
Language: EN
Published: Wiley, 2021 (Advanced Intelligent Systems, Vol 3, Iss 11)
Subjects: automated experiments; Bayesian optimization; extrapolative algorithms; machine learning; neural network ensembles
Online access: https://doaj.org/article/7b396f6d97e843f0b445409a937ba3a1
id: oai:doaj.org-article:7b396f6d97e843f0b445409a937ba3a1
record_format: dspace
doi: 10.1002/aisy.202100101
issn: 2640-4567
source: Advanced Intelligent Systems, Vol 3, Iss 11, Pp n/a-n/a (2021)
institution: DOAJ
collection: DOAJ
language: EN
topic: automated experiments; Bayesian optimization; extrapolative algorithms; machine learning; neural network ensembles; Computer engineering. Computer hardware (TK7885-7895); Control engineering systems. Automatic machinery (General) (TJ212-225)
description: Bayesian optimization (BO) has emerged as the algorithm of choice for guiding the selection of experimental parameters in automated, active-learning-driven, high-throughput experiments in materials science and chemistry. Previous studies suggest that the optimization performance of the typical surrogate model in the BO algorithm, Gaussian processes (GPs), may be limited by their inability to handle complex datasets. Herein, various surrogate models for BO, including GPs and neural network ensembles (NNEs), are investigated. Two materials datasets of differing complexity and properties are used to compare the performance of GPs and NNEs: the first is the compressive strength of concrete (8 inputs, 1 target), and the second is a simulated high-dimensional dataset of thermoelectric properties of inorganic materials (22 inputs, 1 target). While NNEs converge faster toward optimum values, GPs with optimized kernels ultimately achieve the best evaluated values after 100 iterations, even for the most complex dataset, a result contrary to expectations. These findings shed new light on surrogate models for BO and can help accelerate the inverse design of new materials with better structural and functional performance.
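As a companion to the GP sketch above, the following shows one common way to build the neural network ensemble surrogate the description mentions: several small MLPs trained from different random initializations, with the across-member mean and spread standing in for the GP's predictive mean and standard deviation. The architecture, ensemble size, and use of scikit-learn's MLPRegressor are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a neural network ensemble (NNE) surrogate: the ensemble mean is
# the prediction, and the spread across members serves as the uncertainty
# estimate consumed by the acquisition function. Architecture and ensemble
# size here are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def nne_predict(X_train, y_train, X_query, n_members=5):
    preds = []
    for seed in range(n_members):
        mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=seed)
        mlp.fit(X_train, y_train)
        preds.append(mlp.predict(X_query))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # mu, sigma for acquisition
```

Plugging the mu and sigma returned by nne_predict into the expected-improvement loop above, in place of gp.predict, yields the NNE variant of the optimizer.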
format: article
author: Yee-Fun Lim; Chee Koon Ng; U.S. Vaitesswar; Kedar Hippalgaonkar
title: Extrapolative Bayesian Optimization with Gaussian Process and Neural Network Ensemble Surrogate Models
publisher: Wiley
publishDate: 2021
url: https://doaj.org/article/7b396f6d97e843f0b445409a937ba3a1