Optimization is at the core of Machine Learning (ML), since almost every aspect of the learning framework involves explicitly or implicitly minimizing or maximizing a given objective metric or function. Most ML tasks involve optimizing multiple conflicting metrics. Traditionally, ML approaches employ optimization algorithms that require the objective to be a single-valued function, so the objective metrics must either be aggregated somehow or reduced to a single one treated as the objective function. In contrast, recent research has proposed using Multi-Objective Optimization (MOO) algorithms to find the best solutions for ML tasks that require optimizing multiple goals. This work presents a systematic review of this literature, which we call Multi-Objective Machine Learning (MOML). We offer a brief introduction to MOO aimed at ML researchers and practitioners. We then survey current MOML research, categorizing existing work according to the specific ML task it aims to solve, such as Model Configuration, Multi-Task Learning, and Trustworthy ML. Lastly, we discuss current limitations and future research directions.
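The contrast the abstract draws, between aggregating conflicting metrics into a single objective and treating them with MOO, can be sketched minimally as follows. The candidate models, their metric values, and the weights are all hypothetical illustrations, not taken from the talk:

```python
# Hypothetical candidate "models" scored on two metrics to be minimized
# (say, error rate and latency in ms); values are made up for illustration.
candidates = {"a": (0.10, 120.0), "b": (0.15, 40.0), "c": (0.30, 35.0), "d": (0.12, 200.0)}

def weighted_sum(obj, w=(0.5, 0.5)):
    """Single-objective view: collapse the metrics with fixed (arbitrary) weights."""
    return sum(wi * oi for wi, oi in zip(w, obj))

def dominates(u, v):
    """u dominates v if it is no worse on every metric and strictly better on one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(objs):
    """Multi-objective view: keep every non-dominated candidate (the Pareto set)."""
    return {k: o for k, o in objs.items()
            if not any(dominates(other, o) for kk, other in objs.items() if kk != k)}

# Scalarization commits to one trade-off and returns one "best" model...
best_scalar = min(candidates, key=lambda k: weighted_sum(candidates[k]))
# ...while the MOO view returns the whole set of optimal trade-offs.
front = pareto_front(candidates)
```

Note how the weighted sum hides the trade-off inside the choice of weights, whereas the Pareto front exposes all optimal compromises and leaves the final choice to the practitioner, which is the motivation for MOML surveyed in the talk.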
Via Zoom
Link: https://fgv-br.zoom.us/j/99681532797?pwd=SG5WQjRGN3MzNXVmQWlLUlpaL2pwdz09
When
June 13, 2023, at 4 p.m.