Are Evolutionary Algorithms Safe Optimizers?

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


We consider a type of constrained optimization problem in which the violation of a constraint leads to an irrevocable loss, such as the breakage of a valuable experimental resource or platform, or the loss of human life. Such problems are referred to as safe optimization problems (SafeOPs). While SafeOPs have received attention in the machine learning community in recent years, there has been little interest in the evolutionary computation (EC) community despite some early attempts between 2009 and 2011. Moreover, there is a lack of accepted guidelines on how to benchmark different algorithms for SafeOPs, an area in which the EC community has significant experience. Driven by the need for more efficient algorithms and benchmarking guidelines for SafeOPs, the objective of this paper is to reignite the interest of the EC community in this problem class.
To achieve this we (i) provide a formal definition of SafeOPs and contrast it to other types of optimization problems that the EC community is familiar with, (ii) investigate the impact of key SafeOP parameters on the performance of selected safe optimization algorithms, (iii) benchmark EC against state-of-the-art safe optimization algorithms from the machine learning community, and (iv) provide
an open-source Python framework to replicate and extend our work.
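To make the notion of a SafeOP concrete, here is a minimal, hypothetical sketch (not taken from the paper's framework): an objective f is optimized subject to a safety constraint g(x) ≥ 0, and any query violating g is an irrevocable failure rather than a merely penalized solution. All function names and the toy problem are illustrative assumptions.

```python
# Hypothetical toy SafeOP: maximize f(x) subject to g(x) >= 0.
# Unlike ordinary constrained optimization, evaluating ANY x with
# g(x) < 0 is an irrevocable failure (e.g. hardware breakage),
# so the optimizer must never query an unsafe point.

def f(x):
    # Illustrative objective to maximize (peak at x = 2).
    return -(x - 2.0) ** 2

def g(x):
    # Illustrative safety constraint: x must stay within [0, 3].
    return min(x, 3.0 - x)

class SafetyViolation(Exception):
    """Raised when an unsafe point is queried; the run is over."""

def safe_evaluate(x):
    """Evaluate f(x) only if x is safe; a violation ends the run."""
    if g(x) < 0:
        raise SafetyViolation(f"unsafe query x={x}")
    return f(x)

# A safe optimizer may only query points it believes satisfy g;
# here we simply probe a handful of known-safe candidates.
candidates = [0.5, 1.0, 1.5, 2.0, 2.5]
best = max(candidates, key=safe_evaluate)
```

The key design point is that `safe_evaluate` refuses unsafe queries outright: a safe optimization algorithm is judged not only on the quality of `best` but on never triggering `SafetyViolation` during the search.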

CCS Concepts: • Mathematics of computing → Continuous optimization; • Theory of computation → Evolutionary algorithms; • Computing methodologies → Gaussian processes.

Bibliographical metadata

Original language: English
Title of host publication: GECCO '22: Proceedings of the Genetic and Evolutionary Computation Conference
Publication status: Accepted/In press - 25 Mar 2022