FDS Talk: Abhishek Shetty (UC Berkeley), “Optimal PAC Bounds without Uniform Convergence”


NOTE: This talk will be held virtually, via Zoom only.
Join at: https://yale.zoom.us/j/94884435356

Abstract: Though modern machine learning has been highly successful, moving towards more critical applications raises many challenges for building trustworthy systems, such as mitigating tail risk and ensuring robustness, privacy, and fairness. Ad hoc and empirical approaches have often led to unintended consequences for these objectives, necessitating a principled approach. Traditional solutions often require redesigning entire pipelines or come with a significant loss in quality. In this talk, we will look at principles that incorporate important desiderata into existing pipelines without significant computational or statistical overhead. In particular, we will focus on incorporating high probability tail bounds into learning algorithms that satisfy milder correctness guarantees (e.g., stability or expected-risk bounds).

We will discuss a simple technique that converts any algorithm satisfying stability and permutation invariance into one that satisfies high probability risk bounds. Through this reduction, we will present optimal high probability risk bounds that surpass the limitations of traditional uniform convergence arguments. In addition to binary classification, we will see applications in three settings where uniform convergence is provably sub-optimal. For multiclass classification, we prove an optimal risk bound scaling with the one-inclusion hypergraph density of the class, addressing the suboptimality of the analysis by Daniely and Shalev-Shwartz. In partial concept classification, we determine the optimal sample complexity bound, resolving a question posed by Alon, Hanneke, Holzman, and Moran. In the context of realizable bounded regression with absolute loss, we derive an optimal risk bound that relies on a modified version of the scale-sensitive dimension, refining the results of Bartlett and Long. Our rates surpass standard uniform convergence-based results due to the smaller complexity measure in our risk bound.
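For context on why such a reduction is interesting, recall that an in-expectation guarantee does not directly give a sharp high probability guarantee. A minimal background sketch (illustrative notation, not taken from the talk): if a learner $A$ trained on an i.i.d. sample $S$ of size $n$ satisfies $\mathbb{E}_S[R(A(S))] \le \varepsilon_n$, then Markov's inequality only yields

\[
  \Pr_S\!\left[ R(A(S)) \ge \frac{\varepsilon_n}{\delta} \right] \le \delta,
\]

a multiplicative $1/\delta$ degradation of the risk bound, whereas optimal high probability (PAC) bounds typically take the additive form

\[
  \Pr_S\!\left[ R(A(S)) \ge C\left( \varepsilon_n + \frac{\log(1/\delta)}{n} \right) \right] \le \delta,
\]

with only a $\log(1/\delta)/n$ dependence on the confidence level.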

This is based on joint work with Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, and Nikita Zhivotovskiy.



Abhishek Shetty (UC Berkeley)

Speaker Bio: I am currently a PhD student in the Department of Computer Science at the University of California, Berkeley (previously at Cornell), advised by Nika Haghtalab. I have also been fortunate to work with Lester Mackey at MSR New England during the summer of 2021, Akshay Krishnamurthy and Cyril Zhang at MSR NYC during the summer of 2022, and Parikshit Gopalan at Apple during the summer of 2023. I am honored to be among the 2023 cohort of the Apple AI/ML fellowship.

Prior to starting grad school, I was a Research Fellow at Microsoft Research India working with Navin Goyal. Before that, I was an undergraduate majoring in Mathematics at the Indian Institute of Science and worked with Bhavana Kanukurthi on cryptography.

My research focuses on designing mathematical frameworks that bring together the theory and practice of machine learning, and on using them to develop simple algorithms with provable guarantees on real-world data. Importantly, these frameworks come with design principles that can be used to incorporate important desiderata such as robustness, privacy, and fairness into existing pipelines without significant computational or statistical overhead. Some focus areas of my research have been:

  • Statistical and Computational Aspects of Smoothed Online Learning
  • Algorithmic Generalization and High Probability Risk Bounds
  • Discrepancy Theory and Applications

Website: https://ashettyv.github.io/