Special Seminar: Meet the FDS Postdocs

Yale Institute for Foundations of Data Science, Kline Tower 13th Floor, Room 1327, New Haven, CT 06511

Please join the Institute for Foundations of Data Science as we proudly introduce
our inaugural group of Postdoctoral Fellows to the Yale community.

Coffee at 3:30pm in 1307

Postdoc talks at 4:00pm in 1327

Reception to follow at 5:00pm

Insu Han
FDS Postdoctoral Fellow
Appointed in Department of Electrical Engineering
insu.han@yale.edu | https://insuhan.github.io/

“Accelerating Large Language Models via Approximation Algorithms”

Abstract: In this talk, I will discuss several approximation techniques for accelerating computations in large language models (LLMs). Given their remarkable success across broad applications, LLMs present numerous computational challenges. I will identify specific operations that serve as bottlenecks and present an acceleration scheme based on kernel density estimation to address the issue of long-range context.
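For a flavor of the idea, here is a minimal Python sketch on synthetic data (illustrative only, not the scheme from the talk): the softmax normalizer in attention is a sum of exponentiated query-key inner products, which is exactly a kernel density estimate over the keys, so estimating it from a subsample of keys avoids a full pass over a long context. The uniform sampler below is purely for illustration; practical KDE-based schemes use sharper importance sampling.

```python
import numpy as np

def normalizer_exact(q, K):
    # Exact softmax normalizer sum_j exp(<q, k_j>): O(n d) work.
    return np.exp(K @ q).sum()

def normalizer_sampled(q, K, m, rng):
    # Uniform-subsampling estimate: O(m d) work. Real KDE-based methods
    # replace this with smarter (e.g., hashing-based) importance sampling
    # to control the variance.
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    return (n / m) * np.exp(K[idx] @ q).sum()

rng = np.random.default_rng(0)
n, d, m = 100_000, 64, 1_000
K = rng.normal(scale=1 / np.sqrt(d), size=(n, d))  # synthetic keys
q = rng.normal(scale=1 / np.sqrt(d), size=d)       # synthetic query
exact = normalizer_exact(q, K)
approx = normalizer_sampled(q, K, m, rng)
print(f"exact={exact:.1f}  sampled={approx:.1f}  "
      f"relative error={abs(exact - approx) / exact:.4f}")
```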

Bio: I obtained my Ph.D. in the School of Electrical Engineering at the Korea Advanced Institute of Science and Technology (KAIST), where I was advised by Jinwoo Shin. I received an M.S. in Electrical Engineering and a B.S. in Electrical Engineering with a minor in Mathematics, both from KAIST.

My research focuses on the design and analysis of approximation algorithms for large-scale machine learning and its applications. In 2019, I was fortunate to receive the Microsoft Research Asia Fellowship.

Alkis Kalavasis
FDS Postdoctoral Fellow
Appointed in Department of Computer Science
alkis.kalavasis@yale.edu | https://alkisk.github.io/

“Biased Data and Optimization Barriers in Machine Learning”

Abstract: In this talk, we will discuss the computational challenges behind various high-dimensional Machine Learning problems arising in (i) statistical analysis with biased data and (ii) single- and multi-agent optimization tasks. First, we focus on the design of computationally efficient algorithms for problems arising in (a) truncated, (b) censored, and (c) robust statistics. Second, we will discuss negative results in high-dimensional optimization, such as lower bounds for optimizing nonsmooth objectives and for computing sparse equilibria in games.
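For a taste of the truncated-statistics setting in (a), here is a minimal one-dimensional Python sketch (illustrative only, not the algorithms from the talk): when only Gaussian samples above a threshold are observed, the empirical mean is biased, while maximizing the truncated likelihood recovers the true mean. The computational difficulty the talk addresses arises in the high-dimensional analogues.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
mu_true, sigma, thresh = 0.0, 1.0, 0.5

# Truncation: we only get to see Gaussian draws that exceed the threshold.
x = rng.normal(mu_true, sigma, 100_000)
x = x[x > thresh]

def neg_log_lik(mu):
    # Log-density of N(mu, sigma^2) conditioned on exceeding thresh:
    # log phi((x - mu) / sigma) - log P(X > thresh).
    return -(norm.logpdf(x, mu, sigma) - norm.logsf(thresh, mu, sigma)).sum()

naive = x.mean()  # biased upward by the truncation
mle = minimize(neg_log_lik, x0=np.array([naive]), method="Nelder-Mead").x[0]
print(f"true mu={mu_true:.3f}  naive mean={naive:.3f}  truncated MLE={mle:.3f}")
```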

Bio: Before coming to Yale, I was a PhD student in the Computer Science Department of the National Technical University of Athens (NTUA), working with Dimitris Fotakis and Christos Tzamos. I completed my undergraduate studies in the School of Electrical and Computer Engineering at NTUA, where I was advised by Dimitris Fotakis.

My research focuses on the theoretical foundations of Machine Learning and their interplay with Statistics & High-Dimensional Probability, Optimization, and Computational Complexity.

Aditi Laddha
FDS Postdoctoral Fellow
Appointed in Department of Computer Science
aditiladdha5@gmail.com | https://sites.cc.gatech.edu/~aladdha6/

“Discrepancy, Sampling, and High-Dimensional Integration”

Abstract: In this talk, I will provide an overview of algorithms for high-dimensional sampling and integration that come with theoretical guarantees. I will also discuss problems related to discrepancy minimization and their applications, along with ongoing research on sampling and discrepancy.
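As a taste of high-dimensional sampling with guarantees, the Python sketch below implements the classical hit-and-run walk (a textbook baseline, not the new results from the talk) to draw approximately uniform samples from a bounded polytope given only its constraint description, and uses them for a simple integration task.

```python
import numpy as np

def hit_and_run(A, b, x0, steps, rng):
    """Hit-and-run walk targeting the uniform distribution on the bounded
    polytope {x : Ax <= b}. Each step picks a uniformly random direction,
    then a uniform point on the chord through the current point."""
    x = x0.copy()
    for _ in range(steps):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        Ad, slack = A @ d, b - A @ x
        # Chord: all t with A(x + t d) <= b, i.e. t * Ad <= slack.
        # Boundedness guarantees constraints of both signs exist.
        t_hi = np.min(slack[Ad > 0] / Ad[Ad > 0])
        t_lo = np.max(slack[Ad < 0] / Ad[Ad < 0])
        x = x + rng.uniform(t_lo, t_hi) * d
    return x

# Estimate the mean of ||x||^2 over the cube [-1, 1]^10 (exact: dim / 3)
# from independent hit-and-run chains started at the center.
rng = np.random.default_rng(2)
dim = 10
A = np.vstack([np.eye(dim), -np.eye(dim)])  # encodes |x_i| <= 1
b = np.ones(2 * dim)
samples = [hit_and_run(A, b, np.zeros(dim), 200, rng) for _ in range(300)]
print("estimate:", np.mean([s @ s for s in samples]), " exact:", dim / 3)
```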

Bio: My research interests include high-dimensional sampling and its applications, convex and combinatorial optimization, machine learning and robust estimation, and Markov chain Monte Carlo algorithms.

I completed my Ph.D. in the Algorithms, Combinatorics, and Optimization program at the Georgia Institute of Technology, where I was fortunate to be advised by Santosh Vempala. I received my B.Tech. in Computer Science and Engineering from IIT Bombay.

Gaurav Mahajan
FDS Postdoctoral Fellow
Appointed in Department of Computer Science
gaurav.mahajan@yale.edu | https://gomahajan.github.io/

“Computational RL, Classical Shadows and Distribution Rank”

Abstract: In this talk, I will give an overview of my research interests. I will start with a computational problem in reinforcement learning. Even though the empirical benefits of linear function approximation were observed as early as 1963, only recently did a series of works find sample-efficient algorithms for this setting. These works posed an important open problem: can we design polynomial-time algorithms for this setting? I will discuss recent progress on this problem: a surprising computational-statistical gap in reinforcement learning.

Then, I will give a quick overview of problems I am currently working on in the areas of quantum information theory, distribution learning and discrepancy theory.
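For context on the linear function approximation setting, the Python sketch below shows its simplest instance: policy evaluation with linear features via classical LSTD (a standard textbook method, not an algorithm from the talk). The hardness results concern the far more demanding learning and control versions of this setting.

```python
import numpy as np

# Toy policy evaluation with linear features: approximate the value
# function of a fixed policy as V(s) ~= phi(s) @ w via LSTD, i.e. solve
# the projected Bellman equation Phi^T (Phi - gamma P Phi) w = Phi^T r.
rng = np.random.default_rng(3)
n_states, n_feats, gamma = 50, 5, 0.9
P = rng.dirichlet(np.ones(n_states), size=n_states)  # transitions under the policy
r = rng.uniform(size=n_states)                       # expected one-step rewards
Phi = rng.normal(size=(n_states, n_feats))           # feature map, one row per state

v_exact = np.linalg.solve(np.eye(n_states) - gamma * P, r)        # true values
w = np.linalg.solve(Phi.T @ (Phi - gamma * P @ Phi), Phi.T @ r)   # LSTD weights
print("worst-case approximation error:", np.abs(v_exact - Phi @ w).max())
```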

Bio: I am broadly interested in theoretical computer science and machine learning theory.

Previously, I completed my PhD in the theory group at UCSD, advised by Sanjoy Dasgupta and Shachar Lovett. I have spent some fun summers at Microsoft Research, the Institute for Advanced Study, and the Simons Institute.

Refreshments will be served