Machine learning (ML) techniques are rapidly gaining prominence across various domains. Central to the successful and reliable application of these methods are guarantees of "correctness" for learned models, ideally expressed as rigorous theorems. This thesis presents a series of works that push the boundaries of such guarantees and their underlying methodologies. Our investigation begins by establishing learning guarantees for ML models that infer causal structures. We approach this challenge through the formalism of generalization bounds, providing a solid theoretical foundation for the reliability of these models. Next, we turn our attention to post-hoc calibration approaches, with a particular focus on Conformal Prediction methods. We introduce an extension of Split Conformal Prediction to strategic settings, where the data distribution evolves in response to the model as agents adapt their strategies to optimize their outcomes. Finally, we demonstrate the practical application of these techniques for uncertainty quantification in image super-resolution powered by diffusion models, showcasing the real-world potential of our theoretical contributions.
Towards Machine Learning with Guarantees
Author
Date
Location
Examination Committee
Advisor: Claudio José Struchiner - FGV EMAp
Co-advisor: Guilherme Tegoni Goedert - FGV EMAp
Internal Member: Yuri Fahham Saporito - FGV EMAp
External Member: Flávio Bambirra Gonçalves - UFMG
External Member: Luiz Carlos Pacheco Rodrigues Velho - IMPA
External Member: André Ponce de Leon - USP