Robert Frank, PhD
Professor of Linguistics; Professor of Computer Science; Professor of Cognitive Science; Member of the Wu Tsai Institute.
Bob Frank studies the representation, acquisition and processing of natural language using computational and mathematical models. His most recent work explores the representational limits and inductive biases of large language models and their viability as models of human language.
What do you do with Data Science?
At present, my main line of work concerns computational models of statistical learning, and more specifically the kinds of neural network models that underlie contemporary AI. I study the expressive capacity of these models using tools from complexity theory, and I explore their inductive biases both theoretically and empirically. I am interested in understanding the kinds of abstract knowledge these models can acquire and the kinds of data they require to do so; my students, colleagues and I have been exploring such questions in the domain of human language.

Some relevant publications:

Zhenghao Zhou, Robert Frank and R. Thomas McCoy. 2025. Is In-Context Learning a Type of Error-Driven Learning? Evidence from the Inverse Frequency Effect in Structural Priming. Proceedings of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL).

Michael Wilson, Jackson Petty and Robert Frank. 2023. How Abstract Is Linguistic Generalization in Large Language Models? Experiments with Argument Structure. Transactions of the Association for Computational Linguistics.

Yiding Hao, Dana Angluin and Robert Frank. 2022. Formal Language Recognition by Hard Attention Transformers: Perspectives from Circuit Complexity. Transactions of the Association for Computational Linguistics.
