Increasing Superstructure Optimization Capacity Through Self-Learning Surrogate Models
Simulation-based optimization models are widely applied to find optimal operating conditions of processes. Computational challenges often arise from model complexity, making the generation of reliable design solutions difficult. We propose an algorithm that replaces non-linear process simulation models, integrated in the multi-level optimization of a process and energy system superstructure, with surrogate models, applying an active learning strategy to continuously enrich the database on which the surrogates are trained and evaluated. Surrogate models are generated and trained on an initial data set, each able to quantify the uncertainty of its predictions. Until a defined prediction quality is met, new data points are continuously labeled and added to the training set; they are selected from a pool of unlabeled points based on predicted uncertainty, ensuring rapid improvement of surrogate quality. When applied in the superstructure optimization, a surrogate is used only when its prediction quality for the given data point reaches a specified threshold; otherwise, the original simulation model is called to evaluate the process performance, and the newly obtained data points are used to improve the surrogates. The method is tested on three simulation models of varying size and complexity. The proposed approach yields mean squared errors of the test predictions below 2% in all cases, and the active learning approach leads to better predictions than random sampling for the same database size.

When integrated in the optimization framework, simpler surrogates are favored in over 60% of cases, while the more complex ones are enabled by using simulation results generated during optimization to improve the surrogates after their initial generation. Significant time savings are recorded when using complex process simulations, though the advantage gained for simpler processes is marginal. Overall, we show that the proposed method saves time and adds flexibility to complex superstructure optimization problems that involve optimizing process operating conditions. Computational time can be greatly reduced without penalizing result quality, while the continuous improvement of surrogates whenever the simulation is used during optimization leads to a natural refinement of the model.
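The abstract describes two mechanisms: uncertainty-driven selection of new training points, and threshold-gated use of the surrogate inside the optimization with a fallback to the original simulation. A minimal sketch of both is given below. Everything here is illustrative, not the paper's actual setup: `simulate` is a toy stand-in for the expensive process simulation, the bootstrap ensemble of linear fits is only a proxy for surrogates that quantify uncertainty natively, and the `THRESHOLD` value is an assumed placeholder.

```python
import random
import statistics

# Toy stand-in for the expensive process simulation (hypothetical; the paper
# uses non-linear flowsheet simulation models).
def simulate(x):
    return x ** 3 - 2.0 * x

# Tiny bootstrap ensemble of linear fits; the spread of the member predictions
# serves as the prediction uncertainty.
def fit_member(points):
    sample = [random.choice(points) for _ in points]
    sx = [p[0] for p in sample]
    sy = [p[1] for p in sample]
    mx, my = statistics.mean(sx), statistics.mean(sy)
    var = sum((v - mx) ** 2 for v in sx) or 1e-9  # guard degenerate resamples
    slope = sum((a - mx) * (b - my) for a, b in zip(sx, sy)) / var
    return lambda x, s=slope, i=my - slope * mx: s * x + i

def predict(ensemble, x):
    preds = [m(x) for m in ensemble]
    return statistics.mean(preds), statistics.pstdev(preds)

random.seed(0)
labeled = [(x, simulate(x)) for x in (-2.0, 0.0, 2.0)]   # initial data set
pool = [x / 2.0 for x in range(-6, 7)]                    # unlabeled candidates

# Phase 1, active learning: label the pool point with the highest predicted
# uncertainty, so the surrogate improves where it is least reliable.
for _ in range(5):
    ensemble = [fit_member(labeled) for _ in range(8)]
    x_next = max(pool, key=lambda x: predict(ensemble, x)[1])
    pool.remove(x_next)
    labeled.append((x_next, simulate(x_next)))

# Phase 2, gated use in optimization: trust the surrogate only when its
# uncertainty is below a threshold; otherwise fall back to the simulation and
# recycle the new point to keep improving the surrogate.
THRESHOLD = 0.5  # assumed value for illustration

def evaluate(x):
    mean, std = predict(ensemble, x)
    if std < THRESHOLD:
        return mean              # cheap surrogate prediction
    y_sim = simulate(x)          # expensive fallback
    labeled.append((x, y_sim))   # surrogate keeps learning
    return y_sim

y = evaluate(1.25)
print(len(labeled))
```

The key design point mirrored from the abstract is that labeling is driven by predicted uncertainty rather than random sampling, and that every forced simulation call during optimization is recycled into the training set.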
| Main Authors: | Julia Granacher, Ivan Daniel Kantor, François Maréchal |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | Frontiers Media S.A., 2021 |
| Subjects: | superstructure optimization; surrogate models; active learning; energy systems; process design; mathematical programming |
| Online Access: | https://doaj.org/article/fc8e64d0cca34a4e968769333a745d00 |
id |
oai:doaj.org-article:fc8e64d0cca34a4e968769333a745d00 |
record_format |
dspace |
spelling |
oai:doaj.org-article:fc8e64d0cca34a4e968769333a745d00 2021-11-04T04:41:21Z ISSN 2673-2718 doi:10.3389/fceng.2021.778876 https://doaj.org/article/fc8e64d0cca34a4e968769333a745d00 https://www.frontiersin.org/articles/10.3389/fceng.2021.778876/full https://doaj.org/toc/2673-2718 Frontiers in Chemical Engineering, Vol 3 (2021) |
institution |
DOAJ |
collection |
DOAJ |
language |
EN |
topic |
superstructure optimization surrogate models active learning energy systems process design mathematical programming Technology T Chemical technology TP1-1185 |
description |
Simulation-based optimization models are widely applied to find optimal operating conditions of processes. Often, computational challenges arise from model complexity, making the generation of reliable design solutions difficult. We propose an algorithm for replacing non-linear process simulation models integrated in multi-level optimization of a process and energy system superstructure with surrogate models, applying an active learning strategy to continuously enrich the database on which the surrogate models are trained and evaluated. Surrogate models are generated and trained on an initial data set, each featuring the ability to quantify the uncertainty with which a prediction is made. Until a defined prediction quality is met, new data points are continuously labeled and added to the training set. They are selected from a pool of unlabeled data points based on the predicted uncertainty, ensuring a rapid improvement of surrogate quality. When applied in the optimization superstructure, the surrogates can only be used when the prediction quality for the given data point reaches a specified threshold, otherwise the original simulation model is called for evaluating the process performance and the newly obtained data points are used to improve the surrogates. The method is tested on three simulation models, ranging in size and complexity. The proposed approach yields mean squared errors of the test prediction below 2% for all cases. Applying the active learning approach leads to better predictions compared to random sampling for the same size of database. When integrated in the optimization framework, simpler surrogates are favored in over 60% of cases, while the more complex ones are enabled by using simulation results generated during optimization for improving the surrogates after the initial generation. Significant time savings are recorded when using complex process simulations, though the advantage gained for simpler processes is marginal. 
Overall, we show that the proposed method saves time and adds flexibility to complex superstructure optimization problems that involve optimizing process operating conditions. Computational time can be greatly reduced without penalizing result quality, while the continuous improvement of surrogates when simulation is used in the optimization leads to a natural refinement of the model. |
format |
article |
author |
Julia Granacher Ivan Daniel Kantor Ivan Daniel Kantor François Maréchal |
author_sort |
Julia Granacher |
title |
Increasing Superstructure Optimization Capacity Through Self-Learning Surrogate Models |
publisher |
Frontiers Media S.A. |
publishDate |
2021 |
url |
https://doaj.org/article/fc8e64d0cca34a4e968769333a745d00 |