Speaker: Song Mei (Berkeley)
Title: "Revisiting neural network approximation theory in the age of generative AI"
Wednesday, October 2, 2024
11:30 am: Lunch (Kitchen)
12:00 pm: Talk (Seminar Room #1327)
Optional Zoom link: https://yale.zoom.us/j/97222935172
Abstract: Textbooks on deep learning theory primarily present neural networks as universal function approximators. While this classical viewpoint is fundamental, it inadequately explains the impressive capabilities of modern generative AI models such as language models and diffusion models. This talk puts forth a refined perspective: neural networks often serve as algorithm approximators, going beyond mere function approximation. I will explain how this refined perspective offers deeper insight into the success of modern generative AI models.
Bio: Song Mei is an assistant professor of Statistics and EECS at UC Berkeley. He received his Ph.D. from Stanford in June 2020. His research lies at the intersection of statistics and machine learning, with a recent focus on the theory of deep learning and generative AI models. Song has received an NSF CAREER Award, an Amazon Research Award, and a Google Research Scholar Award.