14th Metaheuristics International Conference

11-14 July 2022, Ortigia-Syracuse, Italy

Plenary Speakers


Christian Blum


Spanish National Research Council
Spain

Recent Hybrid Techniques for Solving Large-Scale
Combinatorial Optimization Problems


In this talk, I will present two successful examples of our recent work on developing efficient algorithms for solving large-scale combinatorial optimization problems. Both algorithms belong to the class of hybrid metaheuristics, as they make use of exact techniques for solving reduced problem instances at each iteration. The first example concerns an extension of the ant colony optimization metaheuristic with a negative learning mechanism. The second example is an award-winning algorithm known as Construct, Merge, Solve & Adapt (CMSA). Example applications of both algorithms will be presented. Finally, I will briefly present a new tool for visualizing algorithm behaviour based on search trajectory networks (STNs). This tool can help in understanding algorithm behaviour and can be downloaded by anyone interested in using it.
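The CMSA loop can be illustrated with a minimal, self-contained sketch (not the speaker's implementation): a small hypothetical 0/1 knapsack instance, with brute-force enumeration standing in for the exact (e.g. integer programming) solver applied to the reduced sub-instance at each iteration. All parameter values are invented for the example.

```python
import itertools
import random

# Hypothetical toy instance: a 0/1 knapsack with 12 items.
random.seed(0)
values = [random.randint(5, 30) for _ in range(12)]
weights = [random.randint(3, 15) for _ in range(12)]
capacity = 40

def construct():
    """CONSTRUCT: probabilistically build a feasible solution
    (add items in random order while they still fit)."""
    sol, load = set(), 0
    for i in random.sample(range(len(values)), len(values)):
        if load + weights[i] <= capacity:
            sol.add(i)
            load += weights[i]
    return sol

def solve_exact(items):
    """SOLVE: exact solver on the reduced instance
    (brute force stands in for the exact solver used in practice)."""
    best, best_val = set(), 0
    for r in range(len(items) + 1):
        for combo in itertools.combinations(items, r):
            if sum(weights[i] for i in combo) <= capacity:
                v = sum(values[i] for i in combo)
                if v > best_val:
                    best, best_val = set(combo), v
    return best, best_val

def cmsa(iterations=20, n_constructions=3, max_age=3):
    age = {}  # solution components (items) in the sub-instance, with ages
    best, best_val = set(), 0
    for _ in range(iterations):
        for _ in range(n_constructions):
            for i in construct():
                age.setdefault(i, 0)          # MERGE new components in
        sol, val = solve_exact(list(age))     # SOLVE the reduced instance
        if val > best_val:
            best, best_val = sol, val
        for i in list(age):                   # ADAPT: age unused components
            age[i] = 0 if i in sol else age[i] + 1
            if age[i] > max_age:
                del age[i]
    return best, best_val

sol, val = cmsa()
print(val, sorted(sol))
```

The key point of the scheme is that the exact solver only ever sees the merged sub-instance, which stays small enough to solve to optimality even when the full instance does not.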

Making Metaheuristics More Powerful Using Human
and Machine-Learnt Knowledge


Metaheuristics-based search and optimization algorithms have clearly demonstrated their worth in solving complex practical problems, thanks to their flexible framework, population approach, and implicit parallel processing of promising search regions. With an increasing need to solve more practical and more challenging problems, metaheuristics researchers are seeking help from contemporary computing fields and from human experts. In this lecture, we shall highlight a few such efforts involving machine-learning-based and human-driven metaheuristics for tackling large-scale and challenging problems with multiple conflicting objectives, non-linear constraints, and interlinked variables. Case studies from practical search and optimization problems will be presented to demonstrate the effective merging of machine learning, metaheuristics, and human expertise.

Kalyanmoy Deb


Michigan State University
USA

Fred Glover


Entanglement, Inc.
USA

New Advances for Quantum-Inspired Optimization - part A


In recent years we have discovered that a mathematical formulation known as QUBO, an acronym for a Quadratic Unconstrained Binary Optimization problem, can embrace an exceptional variety of important optimization problems found in industry, science, and government. The QUBO model has emerged as an underpinning of the quantum computing areas known as quantum annealing and digital annealing and has become a subject of study in neuromorphic computing. Through these connections, QUBO models lie at the heart of experimentation carried out with quantum computers developed by D-Wave Systems and neuromorphic computers developed by IBM. New discoveries linking QUBO models to quantum computing are being explored in initiatives by organizations such as IBM, Google, Amazon, Microsoft, D-Wave and Lockheed Martin in the commercial realm and Los Alamos National Laboratory, Oak Ridge National Laboratory, Lawrence Livermore National Laboratory and NASA’s Ames Research Center in the public sector. Computational experience is being amassed by both the classical and the quantum computing communities that highlights not only the potential of the QUBO model but also its effectiveness as an alternative to traditional modeling and solution methodologies.
The significance of the ability of the QUBO model to encompass many models in combinatorial optimization is enhanced by the fact that the QUBO model can be shown to be equivalent to the Ising model that plays a prominent role in physics. Consequently, the broad range of optimization problems solved effectively by state-of-the-art QUBO solution methods is joined by an important domain of problems arising in physics applications. We illustrate the process of reformulating important optimization problems as QUBO models through a series of explicit examples. We disclose the unexpected advantages of modeling a wide range of problems in a form that differs from the linear models classically adopted in the optimization community. We then go further by describing important QUBO-Plus and PUBO models (where “P” stands for “Polynomial”) that go beyond QUBO models to embrace a wide range of additional important applications. Each step of generating such models is illustrated in detail by simple numerical examples, to highlight the convenience of using these models in numerous settings. Beyond the modeling component, an extremely significant dimension lies in the development of powerful algorithms and efficient computer implementations. We describe recent algorithmic innovations that offer a fertile avenue for integrating classical and quantum computing and for applying these models. These innovations, embodied in software made available through Entanglement, Inc., have produced an ability to solve dramatically larger problems and to obtain significantly better solutions than software offered by D-Wave, IBM, Microsoft, Fujitsu, and other groups pursuing this area. Some of the major applications addressed with these innovations include those in: (1) Classical Combinatorial Optimization; (2) Financial Services; (3) Transportation; (4) Manufacturing; (5) Pharmaceuticals and Related; (6) Network and Energy; and (7) Machine Learning.
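As an illustration of this reformulation process (a minimal sketch, not an example from the talk), the code below encodes a small hypothetical Max-Cut instance as a QUBO, minimize x'Qx over binary x, and solves it by exhaustive search, which stands in here for an annealer or metaheuristic QUBO solver.

```python
import itertools

# Hypothetical 5-node graph for Max-Cut; edges are unweighted.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
n = 5

# QUBO matrix for Max-Cut: with Q[i][i] = -degree(i) and
# Q[i][j] = Q[j][i] = 1 for each edge (i, j), we get
#   x'Qx = -(size of the cut induced by the 0/1 assignment x),
# so minimizing the QUBO maximizes the cut.
Q = [[0.0] * n for _ in range(n)]
for i, j in edges:
    Q[i][i] -= 1
    Q[j][j] -= 1
    Q[i][j] += 1
    Q[j][i] += 1

def qubo_value(x):
    """Evaluate the quadratic form x'Qx for a binary vector x."""
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Exhaustive search over all 2^n binary vectors stands in for a QUBO solver.
best_x = min(itertools.product([0, 1], repeat=n), key=qubo_value)
cut_size = -qubo_value(best_x)
print(best_x, cut_size)
```

The same matrix Q can be mapped to an Ising model by the substitution x = (1 + s) / 2 with spins s in {-1, +1}, which is the equivalence mentioned above.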

Interactive Evolutionary Multiobjective Optimization


In multiobjective optimization there is, in general, no single solution that is preferred to all others with respect to every objective function. Indeed, improving performance on one objective typically requires accepting a deterioration in one or more other objectives. In this perspective, very often a set of Pareto optimal solutions is computed and presented to the Decision Maker (DM). However, this is not completely satisfactory, because the DM still has to select the single most preferred solution. Based on this consideration, interactive evolutionary multiobjective optimization methods have been proposed, with the aim of directing the search toward the most interesting part of the Pareto front by taking into account preferences provided by the DM during the decision support procedure. In this talk we shall present two interactive evolutionary multiobjective optimization methods: NEMO-II-Ch and XIMEA-DRSA. Both take the DM’s preferences into account to construct a decision model, expressed in terms of value functions and “if…, then…” decision rules, respectively.
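The notion of Pareto optimality underlying this talk can be made concrete with a minimal sketch (toy objective vectors invented for the example, both objectives minimized; not taken from the methods mentioned above):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse than b in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical bi-objective evaluations of five candidate solutions.
pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 2)]
front = pareto_front(pts)
print(front)
```

Here (3, 4) and (5, 2) are dominated, so the DM would still face three mutually non-dominated candidates; interactive methods narrow this set by eliciting preferences during the search.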

Salvatore Greco


University of Catania
Italy

Holger H. Hoos


Leiden University
The Netherlands

Cooperative Competition: A New Way of Solving Computationally
Challenging Problems in AI and Beyond


Progress in solving challenging problems in artificial intelligence, computer science at large, and beyond is driven, to a significant extent, by competition - regular algorithm competitions as well as comparative performance evaluation against state-of-the-art methods from the literature. A prominent example of this is the satisfiability problem in propositional logic (SAT), an NP-hard problem that not only lies at the foundations of computer science, but also plays a key role in many real-world applications, notably in ensuring the correctness of hardware and software. In this presentation, I will argue that it is time to rethink the way we assess the state of the art in solving problems such as SAT, and the incentives for improving it. I will demonstrate how automated algorithm selection and configuration techniques based on sophisticated machine learning and optimisation methods have fundamentally changed the state of the art in solving SAT and many other NP-hard problems, and how they also provide a natural basis for cooperative competition - a new approach for achieving and assessing progress not merely in solving these problems, but also in the way we approach them as a scientific community.
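Per-instance algorithm selection of the kind mentioned above can be caricatured in a few lines. The sketch below is a deliberately simplified illustration with made-up instance features and runtimes, using a 1-nearest-neighbour rule in place of the sophisticated machine-learning models used in real selectors such as those built for SAT:

```python
# Training data: for each past instance, one numeric feature and the
# observed runtimes (in seconds) of two solvers, A and B.
# All values are invented for illustration.
training = [
    # (instance feature, runtime of solver A, runtime of solver B)
    (0.1, 2.0, 9.0),
    (0.3, 3.0, 7.0),
    (0.7, 8.0, 2.5),
    (0.9, 9.5, 1.0),
]

def select_solver(feature):
    """Pick the solver that was fastest on the most similar
    training instance (1-nearest-neighbour selection)."""
    nearest = min(training, key=lambda row: abs(row[0] - feature))
    _, time_a, time_b = nearest
    return "A" if time_a <= time_b else "B"

print(select_solver(0.2), select_solver(0.8))
```

Even this toy portfolio beats always running a single solver whenever the two solvers' strengths are complementary across the instance space, which is the basic observation behind automated algorithm selection.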

New Advances for Quantum-Inspired Optimization - part B


In recent years we have discovered that a mathematical formulation known as QUBO, an acronym for a Quadratic Unconstrained Binary Optimization problem, can embrace an exceptional variety of important optimization problems found in industry, science, and government. The QUBO model has emerged as an underpinning of the quantum computing areas known as quantum annealing and digital annealing and has become a subject of study in neuromorphic computing. Through these connections, QUBO models lie at the heart of experimentation carried out with quantum computers developed by D-Wave Systems and neuromorphic computers developed by IBM. New discoveries linking QUBO models to quantum computing are being explored in initiatives by organizations such as IBM, Google, Amazon, Microsoft, D-Wave and Lockheed Martin in the commercial realm and Los Alamos National Laboratory, Oak Ridge National Laboratory, Lawrence Livermore National Laboratory and NASA’s Ames Research Center in the public sector. Computational experience is being amassed by both the classical and the quantum computing communities that highlights not only the potential of the QUBO model but also its effectiveness as an alternative to traditional modeling and solution methodologies.
The significance of the ability of the QUBO model to encompass many models in combinatorial optimization is enhanced by the fact that the QUBO model can be shown to be equivalent to the Ising model that plays a prominent role in physics. Consequently, the broad range of optimization problems solved effectively by state-of-the-art QUBO solution methods is joined by an important domain of problems arising in physics applications. We illustrate the process of reformulating important optimization problems as QUBO models through a series of explicit examples. We disclose the unexpected advantages of modeling a wide range of problems in a form that differs from the linear models classically adopted in the optimization community. We then go further by describing important QUBO-Plus and PUBO models (where “P” stands for “Polynomial”) that go beyond QUBO models to embrace a wide range of additional important applications. Each step of generating such models is illustrated in detail by simple numerical examples, to highlight the convenience of using these models in numerous settings. Beyond the modeling component, an extremely significant dimension lies in the development of powerful algorithms and efficient computer implementations. We describe recent algorithmic innovations that offer a fertile avenue for integrating classical and quantum computing and for applying these models. These innovations, embodied in software made available through Entanglement, Inc., have produced an ability to solve dramatically larger problems and to obtain significantly better solutions than software offered by D-Wave, IBM, Microsoft, Fujitsu, and other groups pursuing this area. Some of the major applications addressed with these innovations include those in: (1) Classical Combinatorial Optimization; (2) Financial Services; (3) Transportation; (4) Manufacturing; (5) Pharmaceuticals and Related; (6) Network and Energy; and (7) Machine Learning.

Gary Kochenberger


Entanglement, Inc.
USA


El-Ghazali Talbi


University of Lille 1, CNRS
INRIA, France

How Can Machine Learning Help Metaheuristics?


In recent years, research on applying machine learning (ML) to the design of efficient, effective, and robust metaheuristics has become increasingly popular. Many of these ML-supported metaheuristics have produced high-quality results and represent state-of-the-art optimization algorithms. Although various approaches have been proposed, the literature still lacks a comprehensive survey and taxonomy of this research topic. In this talk we will investigate different opportunities for using ML within metaheuristics. We uniformly define the various ways in which such synergies might be achieved. A detailed taxonomy is proposed according to the search component concerned: the target optimization problem, and the low-level and high-level components of metaheuristics. Our goal is also to motivate researchers in optimization to incorporate ideas from ML into metaheuristics. We identify open research issues in this topic that need further in-depth investigation.
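One classic instance of using ML inside a low-level metaheuristic component is adaptive operator selection. The sketch below is a toy illustration, not a method from the talk: an epsilon-greedy bandit learns online which of two mutation operators yields more improvement in a hill climber on the OneMax problem. All parameter values are invented for the example.

```python
import random

random.seed(1)

# OneMax: maximize the number of 1-bits in a length-30 bitstring.
n = 30

def fitness(x):
    return sum(x)

def flip_one(x):
    """Mutation operator 1: flip a single random bit."""
    y = x[:]
    y[random.randrange(n)] ^= 1
    return y

def flip_three(x):
    """Mutation operator 2: flip three distinct random bits."""
    y = x[:]
    for i in random.sample(range(n), 3):
        y[i] ^= 1
    return y

ops = [flip_one, flip_three]
reward_sum = [0.0, 0.0]  # cumulative improvement credited to each operator
uses = [1, 1]            # usage counts (start at 1 to avoid division by zero)

x = [0] * n
for _ in range(500):
    if random.random() < 0.1:   # explore: try a random operator
        k = random.randrange(len(ops))
    else:                       # exploit: operator with best average reward
        k = max(range(len(ops)), key=lambda j: reward_sum[j] / uses[j])
    y = ops[k](x)
    reward_sum[k] += max(0, fitness(y) - fitness(x))  # credit assignment
    uses[k] += 1
    if fitness(y) >= fitness(x):  # hill-climbing acceptance
        x = y

print(fitness(x))
```

In the taxonomy sketched above, this is ML applied to a low-level component: the learned statistics steer which search operator the metaheuristic applies at each step.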