About the Event
In this talk, we will address two important issues affecting machine learning explainability methods: stability and disagreement. We will present T-Explainer, a new explanation technique derived from the Taylor expansion that is fully deterministic and more stable than well-known explanation methods such as LIME and SHAP. We will also present a visualization-assisted method designed to explore the disagreement between explanation methods and its possible causes.
Text provided by the author.
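To give a flavor of the idea behind Taylor-expansion-based attributions, here is a minimal, hypothetical sketch — not the authors' actual T-Explainer implementation. It estimates a model's partial derivatives with central finite differences and weights each feature by its value, producing the first-order Taylor term of the prediction. Because no random sampling is involved (unlike LIME's perturbations or SHAP's sampled coalitions), repeated runs yield identical attributions.

```python
import numpy as np

def taylor_attributions(f, x, eps=1e-4):
    """Hypothetical first-order Taylor attribution sketch (not T-Explainer
    itself): estimate the gradient of f at x via central finite differences,
    then attribute grad_i * x_i to each feature. Fully deterministic."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        # central difference estimate of the i-th partial derivative
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    # first-order Taylor term: f(x) ~ f(0) + grad . x
    return grad * x

# Toy model (stand-in for any black-box predictor)
model = lambda v: 2.0 * v[0] + v[1] ** 2
phi = taylor_attributions(model, np.array([1.0, 3.0]))
```

Running the sketch twice on the same input gives byte-identical attributions, which is the stability property the talk contrasts with sampling-based explainers.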
* Seminar attendees may not enter FGV premises wearing shorts, flip-flops, crop tops, miniskirts, or tank tops. Masks are optional, but presenting proof of vaccination (physical or digital) is mandatory.
Supporters / Partners / Sponsors
Luis Gustavo Nonato
Luis Gustavo Nonato received his PhD in applied mathematics from the Pontifícia Universidade Católica do Rio de Janeiro, Brazil, in 1998. He is a professor at the Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, São Carlos, Brazil. He was a visiting professor at the Center for Data Science, New York University, from 2016 to 2018, and a visiting scholar at the SCI Institute, University of Utah, from 2008 to 2010. Nonato has served on several program committees, including IEEE SciVis, IEEE InfoVis, and EuroVis; he was an associate editor of the Computer Graphics Forum and IEEE Transactions on Visualization and Computer Graphics journals, general chair of the IEEE Visualization conference in 2021, and editor-in-chief of the SBMAC SpringerBriefs in Applied Mathematics and Computational Sciences. His main research interests include geometric computing, data science, machine learning, and visualization.