November 2021
Event Details
Abstract:
One of the top challenges in AI/ML is that black-box models cannot be trusted in high-risk areas due to their lack of explainability. Generally speaking, explainability in ML is twofold: causal explainability (also known as interpretability) and counterfactual explainability. While the former addresses 'why', the latter addresses 'how': how do small, plausible perturbations of the input modify the output? The author's focus is on counterfactual explainability through the optimization lens.
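As a rough illustration of that 'how' question, below is a minimal sketch that searches for a counterfactual on a toy logistic classifier. The model, the greedy perturbation search, and every name in it are illustrative assumptions, not the speaker's method.

import numpy as np

def predict(w, b, x):
    # Toy logistic classifier: P(y = 1 | x).
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def counterfactual(w, b, x, step=0.01, max_iter=1000):
    # Greedily nudge x against the class score until the prediction flips.
    x_cf = x.astype(float).copy()
    original = predict(w, b, x) >= 0.5
    for _ in range(max_iter):
        if (predict(w, b, x_cf) >= 0.5) != original:
            break  # prediction flipped: x_cf is a counterfactual for x
        direction = -w if original else w  # direction of steepest score change
        x_cf += step * direction / np.linalg.norm(w)
    return x_cf

w, b = np.array([1.5, -2.0]), 0.3  # illustrative "learned" model
x = np.array([1.0, 0.2])           # input to explain
x_cf = counterfactual(w, b, x)
print("input:", x, "counterfactual:", x_cf,
      "perturbation size:", np.linalg.norm(x_cf - x))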
In ML, the learning phase is in fact a constrained optimization problem in which a given objective (a.k.a. loss) function must be optimized subject to some constraints, e.g., regularization terms such as lasso or dropout. Thus, through the constrained-optimization lens, explainability in fact refers to the practice of 'sensitivity analysis' or 'post-optimality' analysis.
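To make the constrained-optimization view concrete, here is a minimal sketch of learning as a regularized (penalty-form) optimization problem: ridge-penalized linear regression fitted by plain gradient descent on synthetic data. The data, hyperparameters, and names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # synthetic features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

lam = 0.1   # penalty strength: the "constraint" in penalty form
lr = 0.01
w = np.zeros(3)
for _ in range(2000):
    # Objective: (1/n) * ||Xw - y||^2 + lam * ||w||^2
    grad = (2.0 / len(y)) * X.T @ (X @ w - y) + 2.0 * lam * w
    w -= lr * grad
print("learned (regularized) coefficients:", w)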
Using post-optimality analysis, we should focus on the learning coefficients that have a narrow range of optimality, as well as coefficients near the endpoints of their range. However, the key assumption in post-optimality is that the optimization (learning) algorithm guarantees a 'global' or near-global optimum. Indeed, the majority of optimization algorithms in ML cannot guarantee the global optimum, due to uneven (non-convex) loss surfaces or the stochastic nature of the method. In fact, non-convexity and stochasticity are two sides of the same complexity coin.
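A minimal sketch of this post-optimality practice, assuming a simple squared-error loss and coefficients treated as already learned: perturb each coefficient and measure how much the loss degrades. A large loss increase for a small perturbation signals a narrow range of optimality. All data and values here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_star = np.array([2.0, -1.0, 0.5])  # pretend: the learned optimum
y = X @ w_star + rng.normal(scale=0.1, size=200)

def loss(w):
    # Squared-error loss around which we probe the optimum.
    return np.mean((X @ w - y) ** 2)

base, eps = loss(w_star), 0.05
for i in range(len(w_star)):
    w_pert = w_star.copy()
    w_pert[i] += eps
    # Large delta-loss for a small eps means a narrow range of optimality.
    print(f"coefficient {i}: delta-loss = {loss(w_pert) - base:.4f}")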
In this talk, the author argues that there is a trade-off between model explainability and accuracy. The lack of a global-optimum guarantee is the key reason why highly accurate (and mostly black-box) models are not explainable. This trade-off raises a critical question during the model selection phase: is a more explainable but less accurate model better than a less explainable but more accurate one?
What You’ll Learn:
There is a trade-off between model explainability and accuracy
Nima has a Ph.D. in systems and industrial engineering with a background in applied mathematics. He held a postdoctoral position at the C-MORE Lab (Center for Maintenance Optimization & Reliability Engineering), University of Toronto, Canada, working on machine learning and operations research (ML/OR) projects in collaboration with various industry and service sectors. He was with the Department of Maintenance Support and Planning, Bombardier Aerospace, with a focus on ML/OR methods for reliability/survival analysis, maintenance, and airline operations optimization. Nima is currently with the Data Science & Analytics (DSA) lab, Scotiabank, Toronto, Canada, as a senior data scientist. He has more than 40 peer-reviewed articles and book chapters published in top-tier journals, as well as one published patent. He has also been invited to present his findings at top ML conferences such as GRAPH+AI 2020, NVIDIA GTC 2020/2021, and ICML 2021.
Time
(Wednesday) 10:55 AM - 11:40 AM