Natural Language Processing
-
Dissertation Defense: Anay Mehrotra, “Learning Theory in the Wild: Foundations of Missing Data and Language Generation”
Abstract: What can be learned from data? This fundamental question in machine learning takes on new complexity in modern pipelines where classical assumptions fail—both […]
-
FDS Colloquium: Tom McCoy (Yale), “Understanding AI systems by understanding their training data: Memorization, generalization, and points in between”
Abstract: Large language models (LLMs) can perform a wide range of tasks impressively well. To what extent are these abilities driven by shallow heuristics vs. […]
-
FDS Special Seminar: Peiran Jin (Microsoft Research), “Nature Language Model: Deciphering the Language of Nature for Scientific Discovery”
Paper: https://arxiv.org/abs/2502.07527 Abstract: Foundation models have revolutionized natural language processing and artificial intelligence, significantly […]
-
FDS Special Seminar: Yi R. (May) Fung (HKUST), “Scaling Human-Centric Trustworthy Foundation Model Reasoning”
Abstract: In recent years, language models have made significant advancements, achieving remarkable performance on a large variety of tasks, as well as promising zero-shot/few-shot […]
-
FDS Colloquium: Song Mei (Berkeley), “Revisiting neural network approximation theory in the age of generative AI”
Optional Zoom link: https://yale.zoom.us/j/97222935172 Abstract: Textbooks on deep learning theory primarily perceive neural networks as universal function approximators. While this classical viewpoint is fundamental, it […]
