An Approach to the Automatic Comparison of Reference Point-Based Interactive Methods for Multiobjective Optimization

Bibliographic Details
Main Authors: Dmitry Podkopaev, Kaisa Miettinen, Vesa Ojalehto
Format: Article
Language: English
Published: IEEE, 2021
Online Access: https://doaj.org/article/ae9a657db71d4dc18f9c2f616b67f2b3
Description
Summary: Solving multiobjective optimization problems means finding the best balance among multiple conflicting objectives. This requires preference information from a decision maker who is a domain expert. In interactive methods, the decision maker takes part in an iterative process, learning about the interdependencies among the objectives and adjusting preferences accordingly. We address the need to compare different interactive multiobjective optimization methods, which is essential when selecting the method best suited to a particular problem. We concentrate on a class of interactive methods in which the decision maker expresses preference information as reference points, i.e., desirable objective function values. Comparing interactive methods with human decision makers is not straightforward because of cost and reliability issues, and the lack of suitable behavioral models hampers the creation of artificial decision makers for automatic experiments. A few approaches to automating testing have been proposed in the literature, but none is widely used. As a result, empirical performance studies are scarce for this class of methods despite its popularity among researchers and practitioners. We have developed a new approach that replaces the decision maker in order to compare interactive methods based on reference points, or similar preference information, automatically. Keeping in mind the lack of suitable human behavioral models, we concentrate on evaluating general performance characteristics. Such an evaluation partly compensates for the absence of any tests and is appropriate for screening methods before more rigorous testing. We have implemented our approach as a ready-to-use Python module and illustrate it with computational examples.
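To make the idea concrete, the sketch below shows one way an artificial decision maker could drive a reference point-based method automatically. It is a minimal illustration under stated assumptions, not the authors' actual module or its API: all names here (pareto_front, asf_solve, ArtificialDM, run_method) are hypothetical, the biobjective problem is a toy one, and the artificial decision maker simply holds a hidden most-preferred point and, at each iteration, issues a reference point that steers the current solution toward it.

```python
import numpy as np

# Illustrative sketch only. The problem, the steering rule, and every
# name below are hypothetical stand-ins, not the authors' Python module.

def pareto_front(x):
    """Toy biobjective problem (both objectives minimized):
    f1(x) = x, f2(x) = 1 - sqrt(x), for x in [0, 1]."""
    return np.array([x, 1.0 - np.sqrt(x)])

def asf_solve(ref_point, rho=1e-6, grid=10001):
    """One iteration of a reference point-based method: minimize an
    augmented achievement scalarizing function over a decision grid."""
    xs = np.linspace(0.0, 1.0, grid)
    fs = np.stack([pareto_front(x) for x in xs])  # shape (grid, 2)
    diff = fs - ref_point
    asf = np.max(diff, axis=1) + rho * np.sum(diff, axis=1)
    return fs[np.argmin(asf)]

class ArtificialDM:
    """Replaces the human decision maker: holds a hidden most-preferred
    point and nudges each reference point from the current solution
    toward that target."""
    def __init__(self, target, step=0.5):
        self.target = np.asarray(target, dtype=float)
        self.step = step

    def next_reference_point(self, current_solution):
        return current_solution + self.step * (self.target - current_solution)

def run_method(adm, start_ref, iterations=5):
    """Run a fixed number of interactions and report a general
    performance characteristic: distance to the hidden target."""
    ref = np.asarray(start_ref, dtype=float)
    solution = asf_solve(ref)
    for _ in range(iterations):
        ref = adm.next_reference_point(solution)
        solution = asf_solve(ref)
    return np.linalg.norm(solution - adm.target)

if __name__ == "__main__":
    adm = ArtificialDM(target=[0.25, 0.5])  # hidden most-preferred point
    for step in (0.3, 0.7):  # two "methods" differing only in steering
        adm.step = step
        dist = run_method(adm, start_ref=[0.0, 0.0])
        print(f"step={step}: final distance to target {dist:.4f}")
```

Because the artificial decision maker is deterministic, the same experiment can be repeated against any number of reference point-based methods, and the final distances compared directly; this mirrors the screening role described in the abstract, before any more rigorous testing with human decision makers.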