BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
X-WR-TIMEZONE:America/New_York
BEGIN:VEVENT
UID:602@fds.yale.edu
DTSTART;TZID=America/New_York:20221109T160000
DTEND;TZID=America/New_York:20221109T180000
DTSTAMP:20250916T142119Z
URL:https://fds.yale.edu/events/fds-seminar-priya-panda-exploring-robustne
 ss-and-energy-efficiency-in-neural-systems-with-spike-based-machine-intell
 igence/
SUMMARY:FDS Seminar: Priya Panda (Department of Electrical Engineering) "Ex
 ploring Robustness and Energy-Efficiency in Neural Systems with Spike-base
 d Machine Intelligence"
DESCRIPTION:Abstract: Spiking Neural Networks (SNNs) have recently emerged 
 as an alternative to deep learning due to their huge energy efficiency ben
 efits on neuromorphic hardware. In this presentation\, I will talk about i
 mportant techniques for training SNNs which bring a huge benefit in terms 
 of latency\, accuracy\, interpretability\, and robustness. We will first d
 elve into how training is performed in SNNs. Training SNNs with surrogate 
 gradients presents computational benefits due to short latency. However\, 
 due to the non-differentiable nature of spiking neurons\, the training bec
 omes problematic and surrogate methods have thus been limited to shallow n
 etworks. To address this training issue with surrogate gradients\, we will
  go over a recently proposed method Batch Normalization Through Time (BNTT
 ) that allows us to train SNNs from scratch with very low latency and enab
 les us to target interesting applications like video segmentation and beyo
 nd traditional learning scenarios\, like federated training. Another criti
 cal limitation of SNNs is the lack of interpretability. While a considerab
 le amount of attention has been given to optimizing SNNs\, the development
  of explainability is still in its infancy. I will talk about our recent w
 ork on a bio-plausible visualization tool for SNNs\, called Spike Activati
 on Map (SAM) compatible with BNTT training. The proposed SAM highlights sp
 ikes with short inter-spike intervals\, which carry discriminative informa
 tion for classification. Finally\, with proposed BNTT and SAM\, I will hig
 hlight the robustness aspect of SNNs with respect to adversarial attacks. 
 In the end\, I will talk about interesting prospects of SNNs for non-conve
 ntional learning scenarios such as privacy-preserving distributed learning
  as well as unraveling the temporal correlation in SNNs with feedback conn
 ections. Finally\, time permitting\, I will talk about the prospects of SN
 Ns for novel and emerging compute-in-memory hardware that can potentially 
 yield an order of magnitude lower power consumption than conventional CPUs/GP
 Us.\n\n\n\nBio: Priya's research interests lie in Neuromorphic Computing: 
 spanning energy-efficient design methodologies for deep learning networks\
 , novel supervised/unsupervised learning algorithms for spiking neural net
 works and developing neural architectures for new computing scenarios (suc
 h as lifelong learning\, generative models\, stochastic networks\, adversa
 rial attacks etc.).\n\n\n\nHer goal is to empower energy-aware and energy-
 efficient machine intelligence through algorithm-hardware co-design while 
 being secure to adversarial scenarios and catering to the resource constra
 ints of Internet of Things (IoT) devices.\n\n\n\nWebsite: https://seas.yal
 e.edu/faculty-research/faculty-directory/priya-panda\n\n\n\n\nWatch (Acces
 s to Yale network required)\n\n
CATEGORIES:FDS Events,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:603@fds.yale.edu
DTSTART;TZID=America/New_York:20221122T160000
DTEND;TZID=America/New_York:20221122T180000
DTSTAMP:20250916T142119Z
URL:https://fds.yale.edu/events/fds-seminar-michael-lopez-sr-nfl-analyzing
 -the-national-football-league-is-challenging-but-player-tracking-data-is-h
 ere-to-help/
SUMMARY:FDS Seminar: Michael Lopez Sr. (NFL) "Analyzing the National Footba
 ll League is challenging\, but player tracking data is here to help"
DESCRIPTION:Abstract: Most historical National Football League (NFL) analys
 is\, both mainstream and academic\, has relied on play-by-play data to gen
 erate team and player-level trends. Given the number of outside variables 
 that impact on-field results\, such as play call and game situation\, find
 ings are often no more than interesting anecdotes. With the release of pla
 yer tracking data\, however\, analysts can appropriately ask and answer qu
 estions that better isolate player skill and coaching strategy. In this ta
 lk\, we highlight the limitations of traditional analyses\, and use a deca
 des-old punching bag for analysts – fourth-down strategy – as a microc
 osm for why tracking data is needed.\n\n\n\n\nView Webinar\n\n
CATEGORIES:FDS Events,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:601@fds.yale.edu
DTSTART;TZID=America/New_York:20221216T130000
DTEND;TZID=America/New_York:20221216T140000
DTSTAMP:20250916T142119Z
URL:https://fds.yale.edu/events/fds-seminar-yuchen-wu-stanford-university/
SUMMARY:FDS Seminar: Yuchen Wu (Stanford University)
DESCRIPTION:Fundamental Limits of Low-Rank Matrix Estimation: Information-T
 heoretic and Computational Perspectives\n\n\nAbstract: Many statistical es
 timation problems can be reduced to the reconstruction of a low-rank n×d 
 matrix when observed through a noisy channel. While tremendous positive re
 sults have been established\, relatively few works focus on understanding 
 the fundamental limitations of the proposed models and algorithms. Underst
 anding such limitations not only provides practitioners with guidance on a
 lgorithm selection\, but also spurs the development of cutting-edge method
 ologies. In this talk\, I will present some recent progress in this direct
 ion from two perspectives in the context of low-rank matrix estimation. Fr
 om an information-theoretic perspective\, I will give an exact characteriz
 ation of the limiting minimum estimation error. Our results apply to the h
 igh-dimensional regime n\,d→∞ and d/n→∞ (or d/n→0) and generaliz
 e earlier works that focus on the proportional asymptotics n\,d→∞\, d/
 n→δ∈(0\,∞). From an algorithmic perspective\, large-dimensional mat
 rices are often processed by iterative algorithms like power iteration and
  gradient descent\, thus encouraging the pursuit of understanding the fund
 amental limits of these approaches. We introduce a class of general first 
 order methods (GFOM)\, which is broad enough to include the aforementioned
  algorithms and many others. I will describe the asymptotic behavior of an
 y GFOM\, and provide a sharp characterization of the optimal error achieve
 d by the GFOM class. This is based on joint works with Michael Celentano an
 d Andrea Montanari.\n\n\n\nThis seminar was held virtually over zoom and a
  recording is not available.\n
CATEGORIES:FDS Events,Postdoctoral Applicants,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:597@fds.yale.edu
DTSTART;TZID=America/New_York:20221216T150000
DTEND;TZID=America/New_York:20221216T160000
DTSTAMP:20250916T142119Z
URL:https://fds.yale.edu/events/fds-seminar-aditi-laddha-ga-tech/
SUMMARY:FDS Seminar: Aditi Laddha (GA Tech)
DESCRIPTION:"High-Dimensional Markov Chains and Applications"\n\n\nAbstract
 : A Markov chain is a random process in which the next state is chosen acc
 ording to some probability distribution that depends only on the current s
 tate. In a high-dimensional setting\, Markov chains are essential tools fo
 r understanding the geometry of the space and form the backbone of many ef
 ficient randomized algorithms for tasks like optimization\, integration\, 
 linear programming\, approximate counting\, etc. In this talk\, I will pro
 vide an overview of my research on “High-Dimensional Markov Chains\,” 
 with a focus on the geometric aspects of the chains. I will describe two r
 esults that illustrate the importance of Markov chains for designing effic
 ient algorithms. First\, I will discuss my work on a barrier-based random 
 walk for bounding the discrepancy of set systems. I will then present a ge
 neral framework for bounding discrepancy in various settings. Second\, I w
 ill describe two Markov chains\, the Weighted Dikin Walk and Coordinate Hi
 t-and-Run for sampling convex bodies\, and discuss new techniques for boun
 ding their convergence rates.\n\n\n\nThis seminar was held virtually over 
 zoom and no recording is available.\n
CATEGORIES:FDS Events,Postdoctoral Applicants,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:596@fds.yale.edu
DTSTART;TZID=America/New_York:20221219T123000
DTEND;TZID=America/New_York:20221219T133000
DTSTAMP:20250916T142120Z
URL:https://fds.yale.edu/events/fds-seminar-alkis-kalavasis-national-techn
 ical-university-of-athens/
SUMMARY:FDS Seminar: Alkis Kalavasis (National Technical University of Athe
 ns)
DESCRIPTION:"Efficient Algorithms and Computational Barriers in Reliable Ma
 chine Learning"\nSpeaker: Alkis Kalavasis\, National Technical University o
 f Athens\nAbstract: In this talk\, we will discuss the computational challenge
 s arising in various problems in Reliable Machine Learning. Reliable ML ai
 ms at the design of computationally efficient algorithms that provide guar
 antees such as robustness to biased data\, reproducibility and privacy. We
  firstly focus on the design of algorithms robust to biased and corrupted 
 observations. We begin with the problem of learning from coarse data. The 
 motivation behind this problem is that in many learning tasks one may not 
 have access to fine-grained label information\; e.g.\, an image can be lab
 eled as husky\, dog\, or even animal depending on the expertise of the ann
 otator. We formalize these settings from the viewpoint of computational le
 arning theory and provide efficient algorithms and computational hardness 
 results. We then continue with the task of learning noisy linear label ran
 kings. Label ranking is the supervised task of learning a sorting function
  that maps feature vectors to rankings over a finite set of labels. We pro
 vide the first efficient algorithms for learning linear sorting functions 
 in the presence of bounded noise (an extension of the Massart noise condit
 ion to label rankings) under Gaussian marginals. Next\, we consider questi
 ons regarding responsibility aspects of ML systems. We study the important
  problem of reproducibility as an algorithmic property in decision-making 
 settings. We introduce the notion of reproducible policies in the context 
 of stochastic bandits\, one of the canonical problems in interactive learn
 ing. A policy in the bandit environment is called reproducible if it pulls
 \, with high probability\, the exact same sequence of arms in two differen
 t and independent executions (under independent reward realizations and sh
 ared internal randomness). We show that not only do reproducible policies 
 exist\, but also they achieve almost the same optimal (non-reproducible) r
 egret bounds in terms of the time horizon. At the end of the talk\, we wil
 l briefly discuss some ongoing work on the complexity of min-max optimiza
 tion\, a fundamental problem in the area of equilibrium computation in mul
 ti-agent environments.
CATEGORIES:FDS Events,Postdoctoral Applicants
END:VEVENT
BEGIN:VEVENT
UID:600@fds.yale.edu
DTSTART;TZID=America/New_York:20230112T150000
DTEND;TZID=America/New_York:20230112T160000
DTSTAMP:20250916T142120Z
URL:https://fds.yale.edu/events/fds-seminar-arnab-auddy-columbia/
SUMMARY:FDS Seminar: Arnab Auddy (Columbia)
DESCRIPTION:"Statistical Benefits and Computational Challenges of Tensor Sp
 ectral Learning"\n\n\nTalk Abstract: Given multivariate observations from a
  statistical model\, tensors are a natural way of recording higher order i
 nteractions among variables. Tensor spectral learning is a collection of m
 ethods wherein we aim to decompose a tensor into its components\, each of 
 which correspond to interpretable features of the model. This approach has
  recently received a lot of attention for its application to latent variab
 le models. In this talk\, I will focus on orthogonally decomposable tensor
 s\, which arise naturally in many problems. These tensors have a decomposi
 tion that can be interpreted very similarly to matrix SVD\, but automatica
 lly provides much better identifiability properties than their matrix coun
 terparts. I will show that in such a tensor decomposition\, a small pertur
 bation affects each singular vector in isolation\, and their estimabilit
 y does not depend on the gap between consecutive singular values. In contr
 ast to these attractive statistical properties\, in general\, tensor metho
 ds present us with intriguing computational considerations. I will illustr
 ate these phenomena in the particular application to a spiked tensor PCA p
 roblem and in Independent Component Analysis (ICA). Interestingly\, there is
  a gap between the information-theoretic and computationally tractable limi
 ts of both problems. Above the computational threshold\, we provide noise 
 robust algorithms based on spectral truncation\, which provide rate optima
 l estimators. Our estimators are also asymptotically normal thus allowing 
 confidence interval construction. Finally I will present some examples dem
 onstrating our theoretical findings.\n\n\n\nThis talk was held virtually o
 n January 12\, 2023 @ 3:00 pm\n
CATEGORIES:FDS Events,Postdoctoral Applicants
END:VEVENT
BEGIN:VEVENT
UID:598@fds.yale.edu
DTSTART;TZID=America/New_York:20230113T130000
DTEND;TZID=America/New_York:20230113T140000
DTSTAMP:20250916T142120Z
URL:https://fds.yale.edu/events/fds-seminar-gaurav-mahajan-ucsd/
SUMMARY:FDS Seminar: Gaurav Mahajan (UCSD)
DESCRIPTION:“Computational-Statistical Gaps in Reinforcement Learning”\
 n\n\nSpeaker: Gaurav Mahajan (UCSD)\n\n\n\nAbstract: A fundamental assumpt
 ion in the theory of reinforcement learning is "RL with linear function approx
 imation". Under this assumption\, the optimal value function (either Q*\, 
 or V*\, or both) can be obtained as the linear combination of finitely man
 y known basis functions. Even though it was observed as early as 1963 that
  there are empirical benefits of using linear function approximation\, onl
 y recently a series of work designed sample efficient algorithms for this 
 setting. These works posed an important open problem: Can we design polyno
 mial time algorithms for this setting? In this talk\, I will go over re
 cent progress on this open problem: a surprising computational-statistical
  gap in reinforcement learning. Even though we have polynomial sample comp
 lexity algorithms\, under standard hardness assumption (NP != RP) there ar
 e no polynomial time algorithms for this setting. I will start by going ov
 er a few algorithmic ideas for designing sample efficient algorithms in RL
  and then move on to show how to build hard MDPs which satisfy linear func
 tion approximation assumption from hard 3-SAT instances. I will end the ta
 lk by discussing a few open problems in RL and sequence modelling.\n\n\n\n
 Remote presentation only.\n\n\n\nJoin from PC\, Mac\, Linux\, iOS or Andro
 id: https://yale.zoom.us/j/94359913798\nOr Telephone: 203-432-9666 (2-ZOOM
  if on-campus) or 646 568 7788\nOne Tap Mobile: +12034329666\,\,9435991379
 8# US (Bridgeport)\n\n\n\nMeeting ID: 943 5991 3798\nInternational numbers
  available: https://yale.zoom.us/u/ac1Gq3KLWp\n\n\n\nWebcast\n\n\n\n\n
CATEGORIES:FDS Events,Postdoctoral Applicants
END:VEVENT
BEGIN:VEVENT
UID:599@fds.yale.edu
DTSTART;TZID=America/New_York:20230113T150000
DTEND;TZID=America/New_York:20230113T160000
DTSTAMP:20250916T142120Z
URL:https://fds.yale.edu/events/fds-seminar-ming-yin-ucsb/
SUMMARY:FDS Seminar: Ming Yin (UCSB)
DESCRIPTION:\n\n“Instance-Adaptive and Optimal Offline Reinforcement Lear
 ning” \nSpeaker: Ming Yin (UCSB) \nAbstract: Reinforcement Learning is b
 ecoming the mainstay of sequential decision-making problems. In particular
 \, offline reinforcement learning is considered the central framework for 
 real-life applications when online interactions are not permitted. This ta
 lk will expose the main challenges for offline RL (including distribution 
 shift\, the curse of the horizon\, and the suboptimal data) and offer our 
 solutions on how to bypass them. I will discuss how to improve the sample 
 efficiency using various techniques and show how they adapt to the hardnes
 s of individual problems. I will also briefly discuss the connection betwe
 en these methodologies and their extensions to more general settings.\nRem
 ote presentation only.\nJoin from PC\, Mac\, Linux\, iOS or Android: https
 ://yale.zoom.us/j/95770019076\nOr Telephone: 203-432-9666 (2-ZOOM if on-ca
 mpus) or 646 568 7788\nOne Tap Mobile: +12034329666\,\,95770019076# US (Br
 idgeport)\nMeeting ID: 957 7001 9076\nInternational numbers available: htt
 ps://yale.zoom.us/u/adTjb3rkTu
CATEGORIES:FDS Events,Postdoctoral Applicants
END:VEVENT
BEGIN:VEVENT
UID:595@fds.yale.edu
DTSTART;TZID=America/New_York:20230119T110000
DTEND;TZID=America/New_York:20230119T120000
DTSTAMP:20250916T142120Z
URL:https://fds.yale.edu/events/sds-seminar-edward-de-brouwer-ku-leuven/
SUMMARY:S&DS Seminar: Edward De Brouwer (KU Leuven)
DESCRIPTION:"Predicting the impact of treatments over time with uncertainty
  aware neural differential equations"\n\n\nSpeaker: Edward De Brouwer (KU 
 Leuven) \n\n\n\nTalk Abstract: Predicting the impact of interventions in t
 he real world from observational data alone represents a major statistical
  challenge. Indeed\, treatment assignments are usually correlated with the
  predictors of the response\, resulting in a lack of data support for coun
 terfactual predictions and therefore in poor quality estimates. Developmen
 ts in causal inference have led to methods addressing this confounding by
  requiring a minimum level of overlap. However\, overlap is difficult to a
 ssess and usually not satisfied in practice. In this work\, we propose to 
 circumvent the overlap assumption by predicting the impact of treatments c
 ontinuously over time using neural ordinary differential equations equippe
 d with uncertainty estimates.\n\n\n\nThis presentation was held virtually 
 on January 19\, 2023 @ 11:00 AM\n
CATEGORIES:FDS Events,Statistics & Data Science Seminar,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:591@fds.yale.edu
DTSTART;TZID=America/New_York:20230208T120000
DTEND;TZID=America/New_York:20230208T130000
DTSTAMP:20250916T142120Z
URL:https://fds.yale.edu/events/data-science-lit-search-nightmares-and-how
 -to-avoid-them/
SUMMARY:Data Science Lit Search Nightmares (and how to avoid them)
DESCRIPTION:Have you ever heard horror stories about an embarrassing meetin
 g when someone learned they'd missed half of the seminal research papers r
 elated to their thesis? Do you have nightmares that someone else just publ
 ished another version of your “groundbreaking” research? Let’s face 
 it\, we need to do so much research on who is doing what research that the
 re’s no time left to do the research!\n\n\nJoin us for the first annual 
 FDS/Marx Library collaboration lunch\, where you'll hear from librarians a
 bout how to optimize your lit review workflows and set yourself up for suc
 cess while freeing up time in the process. We will cover file management s
 trategies\, optimizing Zotero for LaTeX\, and how to pull PDF information 
 from the web into Zotero. \n\n\n\nLunch included! Bring a friend! Meet a 
 librarian!\n\n\n\n17 Hillhouse Ave\, 3rd floor\n
CATEGORIES:FDS Events,Training
END:VEVENT
BEGIN:VEVENT
UID:575@fds.yale.edu
DTSTART;TZID=America/New_York:20230327T160000
DTEND;TZID=America/New_York:20230327T170000
DTSTAMP:20250916T142121Z
URL:https://fds.yale.edu/events/sds-colloquium-nadav-cohen-tel-aviv-univer
 sity-what-makes-data-suitable-for-deep-learning/
SUMMARY:S&DS Colloquium: Nadav Cohen (Tel Aviv University) "What Makes
  Data Suitable for Deep Learning?"
DESCRIPTION:Deep learning is delivering unprecedented performance when appl
 ied to various data modalities\, yet there are data distributions over whi
 ch it utterly fails. The question of what makes a data distribution suitab
 le for deep learning is a fundamental open problem in the field.  In this
  talk I will present a recent theory aiming to address the problem via too
 ls from quantum physics.  The theory establishes that certain neural netw
 orks are capable of accurate prediction over a data distribution if and on
 ly if the data distribution admits low quantum entanglement under certain 
 partitions of features.  This brings forth practical methods for adaptati
 on of data to neural networks\, and vice versa.  Experiments with widespr
 ead models over various datasets will demonstrate the findings.  An under
 lying theme of the talk will be the potential of physics to advance our un
 derstanding of the relation between deep learning and real-world data.\n\n
 \nWorks covered in the talk were in collaboration with my graduate student
 s Noam Razin\, Yotam Alexander\, Nimrod De La Vega and Tom Verbin.\n\n\n\n
 Bio: Nadav Cohen is an Asst. Prof. of Computer Science at Tel Aviv Univers
 ity.  His research focuses on the theoretical and algorithmic foundations
  of deep learning.  He earned a BSc in electrical engineering and a BSc i
 n mathematics (both summa cum laude) at the Technion Excellence Program fo
 r Distinguished Undergraduates\, followed by a PhD (direct track) in compu
 ter science at the Hebrew University of Jerusalem.  Subsequently\, he was
  a postdoctoral research scholar at the Institute for Advanced Study in Pr
 inceton.  For his contributions to deep learning\, Nadav received a numbe
 r of awards\, including the Google Doctoral Fellowship in Machine Learning
 \, the Rothschild Postdoctoral Fellowship\, the Zuckerman Postdoctoral Fel
 lowship\, and the Google Research Scholar Award.\n\n\n\nIn-Person seminars
  will be held at Mason Lab 211\, 9 Hillhouse Avenue with the option of vir
 tual participation: https://yale.hosted.panopto.com/Panopto/Pages/Viewer.
 aspx?id=53e6ca36-44bf-4760-83fc-af93011fd562\n\n\n\n3:30pm - Pre-talk m
 eet and greet teatime - Dana House\, 24 Hillhouse Avenue\n
CATEGORIES:FDS Events,Statistics & Data Science
 Seminar,Colloquium,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:576@fds.yale.edu
DTSTART;TZID=America/New_York:20230329T160000
DTEND;TZID=America/New_York:20230329T170000
DTSTAMP:20250916T142121Z
URL:https://fds.yale.edu/events/fds-colloquium-nathan-srebro-ttic-interpol
 ation-learning-and-overfitting-with-linear-predictors-and-short-programs/
SUMMARY:FDS Colloquium: Nathan Srebro (TTIC) “Interpolation Learning and
  Overfitting with Linear Predictors and Short Programs”
DESCRIPTION:"Interpolation Learning and Overfitting with Linear Predictors
  and Short Programs"\n\n\nLocation: Mason 211 or remote access: https://ya
 le.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=7e9e0891-7848-44ad-91e7
 -af93011fd580 \n\n\n\n\n\n\n\nSpeaker: Nathan Srebro\nProfessor\, Toyota Tec
 hnological Institute at Chicago\n\n\n\nAbstract: Classical theory\, conven
 tional wisdom\, and all textbooks\, tell us to avoid reaching zero trainin
 g error and overfitting the noise\, and instead balance model fit and comp
 lexity.  Yet\, recent empirical and theoretical results suggest that in m
 any cases overfitting is benign\, and even interpolating the training data
  can lead to good generalization.  Can we characterize and understand whe
 n overfitting is indeed benign\, and when it is catastrophic as classic t
 heory suggests?  And can existing theoretical approaches be used to stud
 y and explain benign overfitting and the "double descent" curve?  I will 
 discuss interpolation learning in linear (and kernel) methods\, as well as
  using the universal "minimum description length" or "shortest program" le
 arning rule.\n\n\n\n\n\n\n\nBio: Nati (Nathan) Srebro is a professor at t
 he Toyota Technological Institute at Chicago\, with cross-appointments at 
 the University of Chicago's Department of Computer Science\, and Committee
  on Computational and Applied Mathematics. He obtained his PhD from the Ma
 ssachusetts Institute of Technology in 2004\, and previously was a postdoc
 toral fellow at the University of Toronto\, a visiting scientist at IBM\, 
 and an associate professor at the Technion.  \n\n\n\nDr. Srebro’s rese
 arch encompasses methodological\, statistical and computational aspects of
  machine learning\, as well as related problems in optimization. Some of S
 rebro’s significant contributions include work on learning “wider” M
 arkov networks\, introducing the use of the nuclear norm for machine learn
 ing and matrix reconstruction\, work on fast optimization techniques for m
 achine learning\, and on the relationship between learning and optimizatio
 n. His current interests include understanding deep learning through a det
 ailed understanding of optimization\, distributed and federated learning\,
  algorithmic fairness and practical adaptive data analysis.\n
CATEGORIES:FDS Events,Colloquium,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:574@fds.yale.edu
DTSTART;TZID=America/New_York:20230331T120000
DTEND;TZID=America/New_York:20230331T130000
DTSTAMP:20250916T142121Z
URL:https://fds.yale.edu/events/fds-colloquium-tara-javidi-ucsd-a-conseque
 ntial-view-of-information-for-statistical-learning-and-optimization/
SUMMARY:FDS Colloquium: Tara Javidi (UCSD) "A (Con)Sequential View of Infor
 mation for Statistical Learning and Optimization"
DESCRIPTION:A (Con)Sequential View of Information for Statistical Learning 
 and Optimization\n\n\nSpeaker: Tara Javidi\nJacobs Family Scholar and Prof
 essor\nElectrical and Computer Engineering\nUCSD\n\n\n\nAbstract: In most commun
 ication systems\, adapting transmission strategies to the (unpredictable) 
 realization of channel output at the receiver requires an (unrealistic) as
 sumption about the availability of a reliable “feedback” channel. This
 unfortunate fact\, combined with the historical linkage between teaching in
 formation theory and digital communication curriculum has kept “feedback
  information theory” less taught\, discussed\, appreciated and understoo
 d compared to other topics in our field.\n\n\n\nThis talk\, in contrast\, 
 highlights important and challenging problems in machine learning\, optimi
 zation\, statistics\, and control theory\, where the problem of acquiring 
 information in an adaptive manner arises very naturally. Thus\, I will arg
 ue that an increased emphasis on (teaching) feedback information theory ca
 n provide vast and exciting research opportunities at the intersection of 
 information theory and these fields. In particular\, I will revisit simple
 -to-teach results in feedback information theory including sequential hypo
 thesis testing\, arithmetic coding\, successive refinement\, noisy binary 
 search\, and posterior matching. Drawing on my own research\, I will also 
 highlight the successful application of these sequential techniques in a v
 ariety of problem instances such as black-box optimization\, distribution 
 estimation\, and active machine learning with imperfect labels.\n\n\n\nSpe
 aker bio: Tara Javidi received her BS in electrical engineering at Sharif
  University of Technology\, Tehran\, Iran. She received her MS degrees in
  electrical engineering (systems) and in applied mathematics (stochastic 
 analysis) from the University of Michigan\, Ann Arbor as well as her Ph.D
 . in electrical engineering and computer science in 2002. She is currentl
 y a Jacobs Family Scholar and Professor of Electrical and Computer Engine
 ering and a founding co-director of the Center for Machine-Intelligence\,
  Computing and Security (MICS) at UCSD.\n\n\n\nTara Javidi’s research i
 nterests are in theory of active learning\, information acquisition and s
 tatistical inference\, information theory with feedback\,  stochastic co
 ntrol theory\, and wireless networks. \n\n\n\nLocation: In-person at YINS
 \, 17 Hillhouse Ave\, 3rd floor. Yale-only livestream: https://yale.hosted
 .panopto.com/Panopto/Pages/Viewer.aspx?id=accec6b8-cece-4306-869b-afce0158
 dceb \n\n\n\nLunch will be served.\n
CATEGORIES:FDS Events,Colloquium,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:573@fds.yale.edu
DTSTART;TZID=America/New_York:20230331T150000
DTEND;TZID=America/New_York:20230331T160000
DTSTAMP:20250916T142121Z
URL:https://fds.yale.edu/events/fds-faculty-showcase/
SUMMARY:FDS Faculty Showcase
DESCRIPTION:Location: YINS\, 17 Hillhouse Avenue\, 3rd floor. Streaming ava
 ilable to the Yale community only: https://yale.hosted.panopto.com/Panopto
 /Pages/Viewer.aspx?id=81b6f8fe-5edd-4813-8f04-afcf00d6f70d \n\n\nWe invite
  you to join us at the Yale Institute for Foundations of Data Science (FDS
 ) Faculty Showcase on March 31st at 3:00 PM. Eleven distinguished Yale facult
 y members will present their research and insights\, including Andre Wibis
 ono\, Rex Ying\, Brian Macdonald\, Ethan Meyers\, Leying Guan\, Jason Shaw
 \, Yihong Wu\, Lucila Ohno-Machado\, Zhuoran Yang\, Casey King and Madiha 
 Tahir. Each speaker will have just five minutes to tantalize the community
  and stimulate future conversation and collaboration. Refreshments will be
  provided. This is a wonderful opportunity to learn about these esteemed f
 aculty members.\n\n\n\n\n\n\n\nSpeakers:\n\n\n\nAndre Wibisono\n\n\n\nRex 
 Ying\n\n\n\nBrian Macdonald\n\n\n\nEthan Meyers\n\n\n\nLeying Guan\n\n\n\n
 Jason Shaw\n\n\n\nYihong Wu\n\n\n\nLucila Ohno-Machado\n\n\n\nZhuoran Yang
 \n\n\n\nCasey King\n\n\n\nMadiha Tahir\n\n\n\n\n\n\n\n\n
CATEGORIES:FDS Events,Special Seminar
END:VEVENT
BEGIN:VEVENT
UID:570@fds.yale.edu
DTSTART;TZID=America/New_York:20230403T160000
DTEND;TZID=America/New_York:20230403T170000
DTSTAMP:20250916T142121Z
URL:https://fds.yale.edu/events/sds-seminar-sebastian-pokutta-tu-berlin-co
 nditional-gradients-in-machine-learning/
SUMMARY:S&DS Seminar: Sebastian Pokutta (TU Berlin)\, "Conditional Gra
 dients in Machine Learning"
DESCRIPTION:"Conditional Gradients in Machine Learning" \n\n\nSpeaker: Seba
 stian Pokutta (TU Berlin)\n\n\n\nMonday\, April 03\, 2023\, 4:00PM to 5:00
 PM \n\n\n\n3:30pm - Pre-talk meet and greet teatime - Dana House\, 24 Hill
 house Avenue\n\n\n\nLocation: Mason Lab\, Rm. 211\, 9 Hillhouse Avenue New
  Haven\, CT 06511 or via Panopto\n\n\n\nAbstract: Conditional Gradient met
 hods are an important class of methods to minimize (non-)smooth convex fun
 ctions over (combinatorial) polytopes. Recently these methods received a l
 ot of attention as they allow for structured optimization and hence learni
 ng\, incorporating the underlying polyhedral structure into solutions. In 
 this talk I will give a broad overview of these methods\, their applicatio
 ns\, as well as present some recent results both in traditional optimizati
 on and learning as well as in deep learning. \n\n\n\nSpeaker Bio: Sebastia
 n Pokutta is the Vice President of the Zuse Institute Berlin (ZIB) and a P
 rofessor of Mathematics at TU Berlin with a research focus on Artificial I
 ntelligence and Optimization. Having received both his diploma and Ph.D. i
 n mathematics from the University of Duisburg-Essen in Germany\, Pokutta w
 as a postdoctoral researcher and visiting lecturer at MIT\, worked for IBM
  ILOG\, and Krall Demmel Baumgarten. Prior to joining ZIB and TU Berlin\, 
 he was the David M. McKenney Family Associate Professor in the School of I
 ndustrial and Systems Engineering and an Associate Director of the Machine
  Learning @ GT Center at the Georgia Institute of Technology as well as a 
 Professor at the University of Erlangen-Nürnberg. Sebastian received the 
 David M. McKenney Family Early Career Professorship in 2016\, an NSF CAREE
 R Award in 2015\, the Coca-Cola Early Career Professorship in 2014\, the o
 utstanding thesis award of the University of Duisburg-Essen in 2006\, as w
 ell as various Best Paper awards. \n\n\n\nPokutta’s research is situated
  at the intersection of Artificial Intelligence and Optimization\, combini
 ng Machine Learning with Discrete Optimization techniques as well as the T
 heory of Extended Formulations\, exploring the limits of computation in al
 ternative models of complexity. A particular focus is on so-called Frank-Wol
 fe methods and conditional gradient methods due to their versatility in th
 e context of constrained optimization and structured learning. Pokutta has
  also worked on applications of Optimization and Machine Learning\, levera
 ging data in the context of pressing industrial and financial challenges. 
 These areas include Supply Chain Management\, Manufacturing\, Cyber-Physic
 al Systems (incl. Industrial Internet\, Industry 4.0\, Internet of Things)
 \, and Finance. Examples of Pokutta’s applied work include stowage optim
 ization problems for inland vessels\, oil production problems\, clearing o
 f electricity markets\, order fulfillment problems\, warehouse location pr
 oblems\, simulation of autonomous vehicle fleets\, portfolio optimization 
 problems\, optimal liquidity management strategies\, and predictive pregna
 ncy diagnostics. \n\n\n\n3:30pm - Pre-talk meet and greet teatime - Dana H
 ouse\, 24 Hillhouse Avenue\n
CATEGORIES:FDS Events,Statistics & Data Science Seminar,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:571@fds.yale.edu
DTSTART;TZID=America/New_York:20230404T120000
DTEND;TZID=America/New_York:20230404T130000
DTSTAMP:20250916T142121Z
URL:https://fds.yale.edu/events/critical-visualizations-rethinking-represe
 ntations-of-data/
SUMMARY:Critical Visualizations: Rethinking representations of data
DESCRIPTION:Speaker: Peter A. Hall\nReader in Graphic Design at CCW\, Univ
 ersity of the Arts London\, UK\n\n\nLocation: 17 Hillhouse Avenue\, 3rd floor
 \n\n\n\nAbstract: Information may be beautiful\, but our decisions about t
 he data we choose to represent and how we represent it are never neutral. 
 This insightful history traces how data visualization accompanied modern t
 echnologies of war\, colonialism and the management of social issues of po
 verty\, health and crime. Discussion is based around examples of visualiza
 tion\, from the ancient Andean information technology of the quipu to co
 ntemporary projects that show the fate of our rubbish and take a participa
 tory approach to visualizing cities. This analysis places visualization in
  its theoretical and cultural contexts\, and provides a critical framework
  for understanding the history of information design with new directions f
 or contemporary practice.\n\n\n\nSpeaker bio: Peter A. Hall is Reader in G
 raphic Design at CCW\, University of the Arts London\, UK. His publication
 s include Critical Visualization: Rethinking the Representation of Data\, 
 co-authored with Patricio Dávila (Bloomsbury\, 2022)\, Sagmeister: Made Y
 ou Look (2009)\, Else/Where: Mapping - New Cartographies of Networks and T
 erritories\, co-edited with Janet Abrams (2005) and Tibor Kalman: Perverse
  Optimist (2002).\n\n\n\nFor more information about the book please visit
  here\n
CATEGORIES:FDS Events,Special Seminar,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:562@fds.yale.edu
DTSTART;TZID=America/New_York:20230417T160000
DTEND;TZID=America/New_York:20230417T170000
DTSTAMP:20250916T142122Z
URL:https://fds.yale.edu/events/fds-colloquium-dan-yamins-a-fruitful-recip
 rocity-the-neuroscience-ai-connection/
SUMMARY:FDS Colloquium: Dan Yamins "A Fruitful Reciprocity: The Neuroscienc
 e–AI Connection"
DESCRIPTION:Speaker: Dan Yamins\nAssistant Professor of Psychology and Com
 puter Science\nStanford University\n\n\nHosted by: John Lafferty\n\n\n\nIn-pers
 on event with remote access option via Panopto\n\n\n\nA Fruitful Reciproci
 ty: The Neuroscience-AI Connection\n\n\n\nAbstract: The emerging field of 
 NeuroAI has leveraged techniques from artificial intelligence to analyze l
 arge-scale brain data. In this talk\, I will show that the connection betw
 een neuroscience and AI can be fruitful in both directions. Towards “AI 
 driving neuroscience”\, I will discuss recent advances in self-supervise
 d learning with deep recurrent networks that yield a developmentally-plaus
 ible model of the primate visual system. In the direction of “neuroscien
 ce guiding AI”\, I will present a novel cognitively-grounded computation
 al theory of perception that generates powerful new learning algorithms fo
 r real-world scene understanding. Taken together\, these ideas illustrate 
 how neural networks optimized to solve cognitively-informed tasks provide 
 a unified framework for both understanding the brain and improving AI.\n\n
 \n\nBio: Dan Yamins is a computational neuroscientist at Stanford Universi
 ty\, where he's an assistant professor of Psychology and Computer Science\
 , and a faculty scholar at the Wu Tsai Neurosciences Institute. Dan works 
 on science and technology challenges at the intersection of neuroscience\,
  artificial intelligence\, psychology and large-scale data analysis.\n\n\n
 \nThe brain is the embodiment of the most beautiful algorithms ever writte
 n. His research group\, the Stanford NeuroAILab\, seeks to "reverse engine
 er" these algorithms\, both to learn about how our minds work and to buil
 d more effective artificial intelligence systems. Website: http://stanfor
 d.edu/~yamins/\n\n\n\nMonday\, April 17\, 2023 \n\n\n\n3:30pm – Pre-talk
  meet and greet teatime – Dana House\, 24 Hillhouse Avenue\n\n\n\n4:00 -
  5:00 pm - Talk - In-Person seminars will be held at Mason Lab 211 with vi
 rtual participation (on campus only):(https://yale.hosted.panopto.com/Pano
 pto/Pages/Sessions/List.aspx?folderID=f8b73c34-a27b-42a7-a073-af2d00f90ffa
 )\n
CATEGORIES:FDS Events,Colloquium,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:563@fds.yale.edu
DTSTART;TZID=America/New_York:20230420T160000
DTEND;TZID=America/New_York:20230420T170000
DTSTAMP:20250916T142122Z
URL:https://fds.yale.edu/events/fds-colloquium-philippe-rigollet-mit-stati
 stical-applications-of-wasserstein-gradient-flows/
SUMMARY:FDS Colloquium: Philippe Rigollet (MIT) "Statistical applications o
 f Wasserstein gradient flows"
DESCRIPTION:Speaker: Philippe Rigollet\, PhD\nProfessor of Mathematics\nMa
 ssachusetts Institute of Technology\n\n\nHosted by Yihong Wu\n\n\n\nIn-person
  event with remote access option via Panopto\n\n\n\nStatistical applicati
 ons of Wasserstein gradient flows\n\n\n\nAbstract: Otto calculus is a fund
 amental toolbox in mathematical optimal transport\, imparting the Wasserst
 ein space of probability measures with a Riemannian structure. In particul
 ar\, one can compute the Riemannian gradient of a functional over this spa
 ce and\, in turn\, optimize it using Wasserstein gradient flows. The neces
 sary background to define and compute Wasserstein gradient flows will be p
 resented in the first part of the talk before moving to several statistica
 l applications ranging from variational inference to maximum likelihood es
 timation in Gaussian mixture models. Emphasis will be placed on conceptual
  ideas in order for the talk to be accessible to a broad audience.\n\n\n\n
 Bio: Philippe Rigollet works at the intersection of statistics\, machine l
 earning\, and optimization\, focusing primarily on the design and analysis
  of statistical methods for high-dimensional problems. His recent research
  focuses on statistical optimal transport and its applications to geometri
 c data analysis and sampling. Website: www-math.mit.edu/~rigollet\n\n\n\nT
 hursday\, April 20\, 2023\n\n\n\n3:30pm – Pre-talk meet and greet teatim
 e – Dana House\, 24 Hillhouse Avenue\n\n\n\n4:00 – 5:00pm – Talk –
  This in-person seminar will be held at 17 Hillhouse\, 3rd Floor Common Ar
 ea with virtual participation https://yale.hosted.panopto.com/Panopto/Page
 s/Viewer.aspx?id=7219ac1f-3d1b-458c-86d7-afe9010e4e65\n
CATEGORIES:FDS Events,Colloquium,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:578@fds.yale.edu
DTSTART;TZID=America/New_York:20230424T160000
DTEND;TZID=America/New_York:20230424T170000
DTSTAMP:20250916T142122Z
URL:https://fds.yale.edu/events/fds-colloquium-robert-schapire-microsoft-r
 esearch-convex-analysis-at-infinity-an-introduction-to-astral-space/
SUMMARY:FDS Colloquium: Robert Schapire (Microsoft Research) "Convex Analys
 is at Infinity: An Introduction to Astral Space"
DESCRIPTION:Speaker: Robert Schapire\nComputer Scientist\, Microsoft Resea
 rch (NYC Lab)\n\n\nHosted by: Dan Spielman\n\n\n\nIn person event with remote 
 access via Panopto.\n\n\n\nAbstract: Not all convex functions have finite 
 minimizers\; some can only be minimized by a sequence as it heads to infin
 ity.  In this work\, we aim to develop a theory for understanding such mi
 nimizers at infinity.  We study astral space\, a compact extension of Eu
 clidean space to which such points at infinity have been added.  Astral s
 pace is constructed to be as small as possible while still ensuring that a
 ll linear functions can be continuously extended to the new space.  Altho
 ugh not a vector space\, nor even a metric space\, astral space is neverth
 eless so well-structured as to allow useful and meaningful extensions of s
 uch concepts as convexity\, conjugacy\, and subdifferentials.  We develop
  these concepts and analyze various properties of convex functions on astr
 al space\, including the detailed structure of their minimizers\, exact ch
 aracterizations of continuity\, and convergence of descent algorithms.\n\n
 \n\nThis is joint work with Miroslav Dudík and Matus Telgarsky.\n\n\n\nBi
 o: Robert Schapire is a Partner Researcher at Microsoft Research in New Yo
 rk City. He received his PhD from MIT in 1991. After a short postdoc at Ha
 rvard\, he joined the technical staff at AT&T Labs (formerly AT&T Bell Lab
 oratories) in 1991. In 2002\, he became a Professor of Computer Science at
  Princeton University. He joined Microsoft Research in 2014. His awards in
 clude the 1991 ACM Doctoral Dissertation Award\, the 2003 Gödel Prize\, a
 nd the 2004 Kanellakis Theory and Practice Award (the last two shared wit
 h Yoav Freund). He is a fellow of the AAAI\, and a member of both the Nati
 onal Academy of Engineering and the National Academy of Sciences. His main
  research interest is in theoretical and applied machine learning. Website
 : http://rob.schapire.net/\n\n\n\nMonday\, April 24\, 2023\n\n\n\n3:30pm 
 – Pre-talk meet and greet teatime – Dana House\, 24 Hillhouse Avenue\n
 \n\n\n4:00pm – 5:00 pm – Talk – Mason Lab 211 with the option of vir
 tual participation https://yale.hosted.panopto.com/Panopto/Pages/Viewer.as
 px?id=e1a54b37-2829-4b7a-841e-af93011fd666\n
CATEGORIES:FDS Events,Colloquium,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:561@fds.yale.edu
DTSTART;TZID=America/New_York:20230427T160000
DTEND;TZID=America/New_York:20230427T170000
DTSTAMP:20250916T142122Z
URL:https://fds.yale.edu/events/fds-seminar-abhinav-bhardwaj-yale-math-ent
 ry-wise-dissipation-for-singular-vector-perturbation-bounds/
SUMMARY:FDS Seminar: Abhinav Bhardwaj (Yale Math)\, "Entry–wise dissipati
 on for singular vector perturbation bounds"
DESCRIPTION:Speaker: Abhinav Bhardwaj (Yale Math)\n\n\nAbstract: Consider a
  random perturbation of a low rank matrix. In this talk\, we discuss entry
 -wise bounds on the perturbation of the singular vectors (i.e.\, a Davis-Ka
 han type bound in the infinity norm). Among others\, our result shows that
 \, under common incoherence assumptions\, the entry-wise error is evenly d
 issipated. This improves a number of previous results and has algorithmic 
 applications for many well known clustering problems\, including the hidde
 n clique\, planted coloring\, and planted bipartition.\n\n\n\nLocation: 24
  Hillhouse\, room 107\n
CATEGORIES:FDS Events,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:564@fds.yale.edu
DTSTART;TZID=America/New_York:20230503T160000
DTEND;TZID=America/New_York:20230503T170000
DTSTAMP:20250916T142122Z
URL:https://fds.yale.edu/events/fds-seminar-wei-ji-ma-process-models-of-co
 mplex-mental-computation/
SUMMARY:FDS Seminar: Wei Ji Ma\, "Process models of complex mental computat
 ion"
DESCRIPTION:"Process models of complex mental computation"\n\n\nSpeaker: We
 i Ji Ma\nProfessor of Neural Science and Psychology\nCenter for Neural Sci
 ence\nNew York University\n\n\n\nLocation: 211 Mason or remotely via Panopto: ht
 tps://yale.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=77784297-c06b-4
 41e-b2d8-af93011fd6e8\n\n\n\nAbstract: Computational cognitive models comm
 it to a sequence of steps in which an observer/agent mentally processes in
 formation leading up to a behavioral response. Typically\, both the model 
 parameters and the model structure have to be inferred solely from stimulu
 s-response pairs. For more complex mental computations\, these inferences 
 tend to be more challenging\, yet potentially yield greater insights. I wi
 ll illustrate this using two examples from disparate domains. In the first
  study\, we test whether people perform unconscious Bayesian inference in 
 visual search\, specifically\, whether they marginalize over nuisance vari
 ables. In the second study\, we model human planning in a two-player board
  game using a “humanized” variant of best-first search. I will describ
 e the methodological challenges associated with unbiased estimation of log
  likelihoods and with parameter fitting\, and our proposed solutions. \n\
 n\n\nSpeaker bio: Wei Ji Ma is Professor of Neural Science and Psychology 
 at NYU. His lab studies decision-making in planning\, social cognition\, w
 orking memory\, perception\, and attention\, using a combination of human 
 behavioral experiments\, computational modeling\, and - through collaborat
 ions - electrophysiology and neuroimaging. Wei Ji grew up in the Netherlan
 ds and received his Ph.D. in Physics from the University of Groningen. He 
 continued as a postdoc in computational neuroscience\, first with Christof
  Koch at Caltech and then with Alexandre Pouget at the University of Roche
 ster. He was Assistant Professor of Neuroscience at Baylor College of Medi
 cine from 2008 to 2013. He has been at NYU since 2013. He has affiliate ap
 pointments in the Neuroscience Institute\, the Institute for the Study of 
 Decision Making\, the Center for Data Science\, and the Center for Experim
 ental Social Science\, and is Collaborating Faculty of the NYU-ECNU Instit
 ute of Brain and Cognitive Science at NYU Shanghai. With Xiao-Jing Wang\, 
 Wei Ji is Program Director of the NIH-funded Training Program in Computati
 onal Neuroscience at NYU. Moreover\, Wei Ji is active in mentorship\, comm
 unity-building\, and outreach. He is a founding member of the Scientist Ac
 tion and Advocacy Network and of NeuWrite NYU. Wei Ji co-founded and leads
  the Growing up in Science seminar series\, in which scientists tell their
  "unofficial stories". Read or listen to Wei Ji's own unofficial story. Be
 sides his academic work\, Wei Ji is the co-founder of the Rural China Educ
 ation Foundation.\n\n\n\nHosted by John Lafferty\n
CATEGORIES:FDS Events,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:560@fds.yale.edu
DTSTART;TZID=America/New_York:20230524T120000
DTEND;TZID=America/New_York:20230524T130000
DTSTAMP:20250916T142122Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-xifan-yu-algor
 ithmic-lower-bounds-for-expansion-profile-of-regular-graphs/
SUMMARY:Yale Theory Student Seminar: Xifan Yu\, "Algorithmic Lower Bounds f
 or Expansion Profile of Regular Graphs"
DESCRIPTION:"Algorithmic Lower Bounds for Expansion Profile of Regular Grap
 hs"\n\n\nSpeaker: Xifan Yu\n\n\n\nThis is a place for theory-minded studen
 ts and postdocs to gather for a weekly lunch seminar. We meet on Wednesday
 s at 12 (for lunch) and the talk starts at 12:15. The presentations are on
  papers\, results\, conjectures\, or anything theory-oriented. In order to
  keep things more casual and interactive\, presentations are on the board.
  We meet at the YINS common room\, on the 3rd floor of 17 Hillhouse.\n\n\n
 \nhttps://yaletheorystudents.github.io/\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer
 Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:558@fds.yale.edu
DTSTART;TZID=America/New_York:20230531T120000
DTEND;TZID=America/New_York:20230531T131500
DTSTAMP:20250916T142123Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-asaf-etgar-on-
 the-connectivity-and-diameter-of-geodetic-graphs/
SUMMARY:Yale Theory Student Seminar: Asaf Etgar\, "On the Connectivity and 
 Diameter of Geodetic Graphs"
DESCRIPTION:Abstract: Geodetic Graphs are graphs in which any two vertices 
 are connected by a unique shortest path. In 1962\, Ore asked to characteri
 ze this fundamental family of graphs. Despite many attempts\, such charact
 erization seems beyond reach. In this talk we present some history of geod
 etic graphs\, some constructions - and a result that\, under reasonable as
 sumptions\, limits the structure of geodetic graphs - taking another step 
 towards characterization.\n\n\nhttps://yaletheorystudents.github.io/\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer
 Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:545@fds.yale.edu
DTSTART;TZID=America/New_York:20230607T120000
DTEND;TZID=America/New_York:20230607T130000
DTSTAMP:20250916T142123Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-series-jane-le
 e-statistics-without-iid-samples-learning-from-truncated-data/
SUMMARY:Yale Theory Student Seminar Series: Jane Lee\, "Statistics Without 
 iid Samples: Learning From Truncated Data"
DESCRIPTION:Website: https://yaletheorystudents.github.io/
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer
 Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:496@fds.yale.edu
DTSTART;TZID=America/New_York:20230614T120000
DTEND;TZID=America/New_York:20230614T130000
DTSTAMP:20250916T142123Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-jinzhao-wu-on-
 the-optimal-fixed-price-mechanism-in-bilateral-trade/
SUMMARY:Yale Theory Student Seminar: Jinzhao Wu\, "On the Optimal Fixed–P
 rice Mechanism in Bilateral Trade"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer
 Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:495@fds.yale.edu
DTSTART;TZID=America/New_York:20230705T120000
DTEND;TZID=America/New_York:20230705T130000
DTSTAMP:20250916T142123Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-siddharth-mitr
 a-on-single-cell-trajectory-inference/
SUMMARY:Yale Theory Student Seminar: Siddharth Mitra\, "On Single–cell Tr
 ajectory Inference"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer
 Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:556@fds.yale.edu
DTSTART;TZID=America/New_York:20230712T120000
DTEND;TZID=America/New_York:20230712T130000
DTSTAMP:20250916T142123Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-asaf-etgar-on-
 graphs-and-geometry/
SUMMARY:Yale Theory Student Seminar: Asaf Etgar\, "On Graphs and Geometry"
DESCRIPTION:https://yaletheorystudents.github.io/
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer
 Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:510@fds.yale.edu
DTSTART;TZID=America/New_York:20230719T120000
DTEND;TZID=America/New_York:20230719T130000
DTSTAMP:20250916T142124Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-marco-pirazzin
 i-on-the-small-set-expansion-hypothesis/
SUMMARY:Yale Theory Student Seminar: Marco Pirazzini\, "On the Small Set Ex
 pansion Hypothesis"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer
 Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:494@fds.yale.edu
DTSTART;TZID=America/New_York:20230726T120000
DTEND;TZID=America/New_York:20230726T130000
DTSTAMP:20250916T142124Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-khashayar-gatm
 iry-mit-sampling-with-barriers-faster-mixing-via-lewis-weights/
SUMMARY:Yale Theory Student Seminar: Khashayar Gatmiry (MIT)\, "Sampling wi
 th Barriers: Faster Mixing via Lewis Weights"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer
 Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:493@fds.yale.edu
DTSTART;TZID=America/New_York:20230816T120000
DTEND;TZID=America/New_York:20230816T130000
DTSTAMP:20250916T142124Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-john-lazarsfel
 d-decentralized-learning-dynamics-in-the-gossip-model/
SUMMARY:Yale Theory Student Seminar: John Lazarsfeld\, "Decentralized Learn
 ing Dynamics in the Gossip Model"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer
 Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:492@fds.yale.edu
DTSTART;TZID=America/New_York:20230823T120000
DTEND;TZID=America/New_York:20230823T130000
DTSTAMP:20250916T142124Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-aditi-laddha-d
 eterminant-maximization-via-local-search/
SUMMARY:Yale Theory Student Seminar: Aditi Laddha\, "Determinant Maximizati
 on via Local Search"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer
 Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:555@fds.yale.edu
DTSTART;TZID=America/New_York:20230829T140000
DTEND;TZID=America/New_York:20230829T150000
DTSTAMP:20250916T142124Z
URL:https://fds.yale.edu/events/data-science-project-match-2/
SUMMARY:Data Science Project Match
DESCRIPTION:An opportunity for students to match with data science research
  opportunities presented by Yale faculty.\n\n\nOpening Remarks & Introduct
 ion \n\n\n\nby Daniel Spielman\nSterling Professor of Computer Science\; P
 rofessor of Statistics & Data Science\, and of Mathematics\nJames A. Attwo
 od Director of the Institute for Foundations of Data Science at Yale (FDS)\n\n
 \n\nProject Presentations\n\n\n\nRohan Khera\, MD\, MS\nDirector\, Cardio
 vascular Data Science (CarDS) Lab\nAssistant Professor\, Cardiovascular Me
 dicine\, Yale School of Medicine\nrohan.khera@yale.edu | CarDS-Lab.org\n\n\n\n"In
 novating Cardiovascular Care with Multimodality Data Science"\nThe Cardiovas
 cular Data Science (CarDS) Lab at Yale leverages advances in deep learning
  and AI to enhance and automate care. The work uses numerous data streams 
 in the electronic health record and focuses on natural language processing
 \, federated learning\, signal processing\, and computer vision for enhanc
 ed inference\, and develops and deploys novel convolutional neural network
 s and transformer models to address care challenges. The experience is ide
 al for students interested in health tech and/or medicine and looking to g
 ain from a longitudinal research experience.\n\n\n\nJennifer Marlon\nSeni
 or Research Scientist\, School of the Environment\nDirector of Data Scien
 ce\, Yale Program on Climate Change Communication\nLecturer\, Department o
 f Molecular\, Cellular and Developmental Biology\njennifer.marlon@yale.edu
  | https://environment.yale.edu/profile/jennifer-marlon\n\n\n\n“Using pa
 leofire records and global fire simulations to understand wildfire respons
 es to climate change and human activities”\nJennifer Marlon\, Nicholas O'
 Mara\, Carla Staver\nOver the last several years\, unusually large and sev
 ere wildfires h
 ave devastated communities and wildlife and transformed ecosystems around 
 the globe. This project reconstructs and analyzes long-term fire and veget
 ation records from ice and lake sediment cores for comparison with dynamic
  global fire model simulations. We seek a data analyst/database engineer t
 o help develop the paleofire records and the SQL database that will house 
 them. The research assistant (RA) will use R and SQL to generate composite
  records of regional to global wildfire activity spanning thousands of yea
 rs of Earth’s history. The RA will have the opportunity to participate i
 n bi-weekly project meetings\, to present scientific results to a team of 
 international\, interdisciplinary collaborators\, and to co-author peer-re
 viewed publications.\n\n\n\nIlias Zadik\nAssistant Professor\, Department
  of Statistics and Data Science\nIlias.zadik@yale.edu | https://iliaszadik
 .github.io/\n\n\n\n"MCMC methods for pooled testing"\nIn pooled or group testin
 g\, which was of high importance over the recent COVID-19 pandemic\, one t
 ests subsets of a population of individuals with the goal of detecting th
 e subset of infected ones using as small a total number of tests as possible. One 
 of the simplest\, yet information-theoretically optimal (in terms of the
  total number of tests used)\, such testing procedures is to choose the 
 individuals participating in each test independently at random. This is a 
 simple implication of the so-called probabilistic method. Yet\, besides th
 e simplicity of its procedure\, multiple natural computationally efficien
 t procedures have been mathematically proven to require a larger number o
 f tests. Interestingly\, MCMC methods have never been mathematically an
 alyzed for this setting and have shown intriguing success in (small scale)
 simulations. This project\, as part of a general goal of building tools
  to analyze MCMC methods for statistical tasks\, aims to understand (emp
 irically at large scale\, and ideally to establish mathematically) the performance of 
 natural MCMC methods for this important group testing scheme.\n\n\n\nSohei
 l Ghili\nAssistant Professor of Marketing\, School of Management\nsoheil.
 ghili@yale.edu | https://sites.google.com/view/soheil-ghili/\n\n\n\n“Tra
 ining Large Language Models for Price Negotiation”\nPrice negotiation in academ
 ia is mostly examined within the field of economics and in environments in
  which each party to the negotiation has a simple set of moves available: 
 accept/reject the offer made\, or counter-offer a price. In this study\, w
 e aim to take a step further and train models for negotiation in environm
 ents in which each party’s moves entail generating a text that not only co
 ntains an offer\, but also supports it with information and reasoning. An 
 important aspect of our objectives in training LLMs for this task is that 
 they learn the game theoretical aspects. To illustrate\, a seller LLM that
  has info indicating its product is of high value is expected to share tha
 t info as part of its offer\, while a seller that knows its product has lo
 wer quality is expected to remain silent about the quality aspect. In the 
 initial stages of the project\, we will try to train LLMs for simpler task
 s\; and we will build toward the ultimate goal of price negotiation over t
 ime.\n\n\n\nAlfred P. Kaye\, MD\, PhD\nAssistant Professor\, Department o
 f Psychiatry\, Yale University School of Medicine\nalfred.kaye@yale.edu | 
 https://www.kayelab.com/\n\n\n\n"Neural representation of threat"\nIn this project
 \, we have recorded from large numbers of neurons in the mouse prefrontal 
 cortex as a mouse navigates through the environment. These optical recordi
 ngs of neurons can be used to infer the animal's level of threat perceptio
 n in virtual environments with differing levels of safety. The neural repr
 esentation can then be used to predict behavior\, while accounting for oth
 er variables such as arousal\, locomotion\, and other task-related measure
 s. Thus\, a student interested in working on this project can apply nonlin
 ear dimensionality reduction and ML approaches to understand how neurons e
 ncode information about emotionally related variables in the world.\n\n\n\
 nLu Lu\nAssistant Professor of Statistics and Data Science\nLu.lu@yale.ed
 u | https://lu.seas.upenn.edu\n\n\n\n“Physics-informed neural operators 
 for fast prediction of multiscale systems”\nHigh-fidelity simulations like dire
 ct numerical simulation (DNS) of turbulence and molecular dynamics (MD) of
  atomistic systems are computationally very expensive and data-intensive. 
 Furthermore\, for multiscale problems\, the microscale component is so exp
 ensive that it has stalled progress in simulating time-dependent atomistic
 -continuum systems. These open issues\, in turn\, have delayed progress in
  forecasting of real-time dynamics in critical applications such as autono
 my\, extreme weather patterns\, and designing efficiently new functional m
 aterials. Scientific machine learning (SciML) has the potential to totally
  reverse this rather inefficient paradigm and significantly accelerate sci
 entific discovery with direct impact on technology in the next few decades
 . We propose to develop a new generation of neural operators\, universal a
 pproximators for operators\, that can learn explicit and implicit operator
 s from data only. To this end\, we need to extend the predictability of ne
 ural operators for unseen out-of-distribution inputs and to speed-up the t
 raining process via high performance and multi-GPU computing. We will endo
 w neural operators with physics\, multifidelity data\, and equivariant pri
 nciples (e.g.\, geometric equivariance and conservation laws) for continuu
 m systems and with seamless coupling for hybrid continuum-molecular system
 s\, where neural operators will replace the expensive molecular component.
 \n\n\n\nSteven Kleinstein\nAnthony N. Brady Professor of Pathology\, Depa
 rtment of Pathology\, Yale School of Medicine\; Department of Immunobiolog
 y\nsteven.kleinstein@yale.edu\nProject presented by Gisela Gabernet\, Asso
 ciate Research Scientist at the Kleinstein Lab\ngisela.gabernet@yale.edu |
  https://medicine.yale.edu/lab/kleinstein/\n\n\n\n“Identifying convergen
 t antibody responses across infections and auto-immune diseases”\nThe development of 
 antibodies that target and neutralize pathogens is an important facet of t
 he adaptive immune response to foreign pathogens. Antibodies are generated
  through the recombination of Variable\, Diversity and Joining gene segmen
 ts at the DNA level\, with additional targeted mutations that generate a t
 heoretical antibody diversity of 10^14 unique sequences. Despite this hig
 h diversity\, a bias in the usage of these gene segments or even antibodi
 es with overall high sequence similarity – termed convergent antibodies
  – have been observed across cohorts of patients after an immune challen
 ge such as vaccination\, infection\, or auto-immune diseases. Convergent
  antibodies have been described to target conserved epitopes across mutag
 enic
  pathogens such as HIV and influenza\, showing a potential towards the dev
 elopment of broadly protective vaccines. They have also been observed in a
 uto-immune diseases\, potentially serving as diagnostics and monitoring ma
 rkers. In our lab\, we have developed a high-throughput analysis pipeline 
 that enables the efficient processing of antibody repertoires of individua
 l cohorts (https://nf-co.re/airrflow). This project will aim at benchmarki
 ng and improving current convergent antibody detection methods as well as 
 visualizations. One potential approach will involve modelling the antibody
  sequences as a network of sequence similarity and identifying regions in 
 the network shared across multiple subjects.\n\n\n\nHemant Tagare\nProfes
 sor of Radiology and Biomedical Imaging and of Biomedical Engineering\nhe
 mant.tagare@yale.edu | https://medicine.yale.edu/profile/hemant-tagare/\n
 \n\n\n
 “Predict the progression of Parkinson’s Disease”\n\nParkinson’s Diseas
 e (PD) is the fastest growing neurodegenerative disease in the world. PD i
 s also heterogeneous – different patients progress at different rates al
 ong different trajectories. Predicting the patient-specific progress of PD
  is critical in treating the disease and in shortening the length of clini
 cal trials for new PD therapies. Currently\, there are no reliable methods
  to predict PD progression. The goal of this research is to use a large da
 taset of PD patients to predict PD progress from baseline data. The datase
 t has images\, clinical scores\, wearables data\, lab reports\, and geneti
 c information. The challenge is to use this heterogeneous data to create a
 n accurate prediction model. All methods (frequentist\, Bayesian\, deep le
 arning) are welcome.\n\n\n\nDavid van Dijk\, Ph.D.\nAssistant Professor o
 f Medicine\, Yale School of Medicine\nAssistant Professor of Computer Sci
 ence\ndavid.vandijk@yale.edu | vandijklab.org\n\n\n\n"Using Machine Learn
 ing to un
 derstand the language of biology"\n\nRecent advances in large language mod
 els provide new opportunities for decoding biology. Single-cell omics dat
 a enc
 odes complex cellular behaviors and processes into high-dimensional molecu
 lar profiles. By treating these data as textual representations\, we can a
 pply and fine-tune neural language models to uncover the underlying gramma
 tical rules governing biological systems. We have demonstrated that these 
 models can learn to translate between species\, matching cell types and ge
 ne expression programs between mice and humans in a completely unsupervise
 d fashion. This cross-species translation highlights how fundamental aspec
 ts of biology form a universal language translatable across organisms. Mor
 e broadly\, interpreting single cell data as “biological text” enables
  leveraging powerful natural language processing approaches to find patter
 ns\, generate hypotheses\, and gain conceptual understanding of biology.\n
 \n\n\nZhuoran Yang\nAssistant Professor\, Department of Statistics & Data
  Science\nzhuoran.yang@yale.edu | https://statistics.yale.edu/people/zhuo
 ran-yang\n\n\n\n"What and How does In-Context Learning Learn? Bayesian Mo
 del Ave
 raging\, Parameterization\, and Generalization"\n\nLarge language models
  demon
 strate an in-context learning (ICL) ability\, i.e.\, they can learn from a
  few examples provided in the prompt without updating their parameters. In
  this project\, we conduct a comprehensive study of ICL\, addressing seve
 ral open questions:\n\n(a) What type of ICL estimator is learned within l
 anguage models?\n\n(b) What are the suitable performance metrics to evalu
 ate ICL accurately\, and what are their associated error rates?\n\n(c) Ho
 w does the transformer architecture facilitate ICL?\n\nTo address (a)\, w
 e adopt a Bayesian pe
 rspective and demonstrate that ICL implicitly implements the Bayesian mode
 l averaging algorithm. This Bayesian model averaging algorithm is shown to
  be approximated by the attention mechanism. For (b)\, we analyze ICL perf
 ormance from an online learning standpoint and establish a sublinear regre
 t bound. This shows that the error diminishes as the number of examples in
  the prompt increases. Regarding (c)\, beyond the encoded Bayesian model a
 veraging algorithm in the attention mechanism\, we reveal that during pret
 raining\, the total variation distance between the learned model and the n
 ominal model is bounded by the sum of an approximation error and a general
 ization error.\n\nOur findings aim to offer a unified understanding of th
 e tra
 nsformer and its ICL capability\, with bounds on ICL regret\, approximatio
 n\, and generalization. This deepens our comprehension of these crucial fa
 cets of modern language models and illuminates advanced prompt methodologi
 es for tackling more complex reasoning tasks.\n\n\n\nTong WangAssistant Pr
 ofessor of Marketing\, School of Management\, Yale Universitytong.wang.tw6
 87@yale.edu | https://tongwang-ai.github.io/\n\n\n\n"Exploring Post Hoc In
 terpretation of Representations for Unstructured Data"In recent years\, de
 ep learning has emerged as the prevailing solution for tackling decision-m
 aking tasks involving unstructured data\, such as images and texts. The ef
 ficacy of any predictive undertaking related to unstructured data hinges u
 pon the caliber of their representation in the latent space—often referr
 ed to as embeddings. In essence\, the pivotal question revolves around whe
 ther an insightful portrayal of unstructured data can be attained\, one th
 at encapsulates pertinent information for downstream tasks. Our objective 
 is to delve into the realm of post hoc interpretation concerning these rep
 resentations\, contextualizing our exploration within various domains\, in
 cluding business and medical data. Through an analytical lens\, we seek to
  unveil the concealed insights nestled within latent representations\, the
 reby discerning the origins of the informational cues present in the train
 ing data. Part of this work is sponsored by the NSF and is carried out in
  close collaboration with the Mayo Clinic.\n\n\n\n\n\n\n\nRefreshments wi
 ll be served.\n
CATEGORIES:FDS Events,Project Match,Training
END:VEVENT
BEGIN:VEVENT
UID:506@fds.yale.edu
DTSTART;TZID=America/New_York:20230830T120000
DTEND;TZID=America/New_York:20230830T130000
DTSTAMP:20250916T142125Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-gaurav-mahajan
 -some-open-problems-in-tcs/
SUMMARY:Yale Theory Student Seminar: Gaurav Mahajan\, "Some Open Problems i
 n TCS"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:557@fds.yale.edu
DTSTART;TZID=America/New_York:20230906T120000
DTEND;TZID=America/New_York:20230906T131500
DTSTAMP:20250916T142125Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-alkis-kalavasi
 s-some-open-problems-in-tcs/
SUMMARY:Yale Theory Student Seminar: Alkis Kalavasis\, "Some Open Problems 
 in TCS"
DESCRIPTION:Abstract: \n\n\n"Overview of the things I am interested in (Mac
 hine Learning & Optimization)"\n\n\n\nQuestion 1 (TCS): Query Complexity o
 f MaxCut and beyond.\n\n\n\nQuestion 2 (Computational Learning Theory): In
 troduction to Quantum learning theory and open questions.\n\n\n\nWebsite: 
 https://yaletheorystudents.github.io/\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer Seminar,Tr
 aining
END:VEVENT
BEGIN:VEVENT
UID:559@fds.yale.edu
DTSTART;TZID=America/New_York:20230913T120000
DTEND;TZID=America/New_York:20230913T130000
DTSTAMP:20250916T142125Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-xifan-yu-from-
 an-interview-probability-question-to-expansion-properties-of-some-0-1-poly
 topes/
SUMMARY:Yale Theory Student Seminar: Xifan Yu\, "From an Interview Probabil
 ity Question to Expansion Properties of Some 0/1 Polytopes"
DESCRIPTION:https://yaletheorystudents.github.io/
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Summer Seminar,Tr
 aining
END:VEVENT
BEGIN:VEVENT
UID:551@fds.yale.edu
DTSTART;TZID=America/New_York:20230921T140000
DTEND;TZID=America/New_York:20230921T150000
DTSTAMP:20250916T142125Z
URL:https://fds.yale.edu/events/getting-the-most-out-of-your-representatio
 ns/
SUMMARY:Getting the Most Out of Your Representations
DESCRIPTION:Speaker: Karen Ullrich\nResearch Scientist\, The Fundamental
  AI Research (FAIR) team at Meta AI\n\n\nThursday\, September 21\, 2023\n
 Time: 2:00 pm - 3:00 pm\nLocation: AKW Room 200\n\n\n\nZoom link: https:
 //yale.zoom.
 us/j/91494159820\n\n\n\nAbstract: The goal of source compression is to map
  any outcome of a discrete random variable $x ∼ p_d(x)$ in a finite symb
 ol space $x ∈ S$ to its shortest possible binary representation. Given a
  tractable model probability mass function (PMF) $p(x)$ that approximates 
 $p_d(x)$\, entropy coders provide such an optimal mapping. As a result\, t
 he task of source compression is simplified to identifying a good model PM
 F for the data at hand. Even though the setup as described is the most com
 monly used one\, there are restrictions to it. Entropy coders can only pro
 cess one-dimensional variables and process them sequentially. Hence the st
 ructure of the entropy coder implies a sequential structure of the data. T
 his is a problem when compressing sets instead of sequences. In the first 
 part of the talk\, I present an optimal codec for sets [1]. The problem we
  encounter for sets can be generalized for many other structural priors in
  data. In the second part of the talk\, I thus investigate this more gene
 ral problem. We g
 eneralize rate distortion theory for structural data priors and develop a 
 strategy to learn codecs for this data [2].\n\n[1] Improving Lossless Com
 press
 ion Rates via Monte Carlo Bits-Back Coding\; Yangjun Ruan\, Karen Ullrich\
 , Daniel Severo\, James Townsend\, Ashish Khisti\, Arnaud Doucet\, Alireza
  Makhzani\, Chris J. Maddison\; Oral @ ICML.\n\n[2] Lossy Compression for
  Lossless Prediction\; Yann Dubois\, Benjamin Bloem-Reddy\, Karen Ullrich
 \, Chris J. Maddison\; Spotlight @ NeurIPS.\n\n\n\nSpeaker bio: I am a re
 search scien
 tist (s/h) at FAIR NY and am actively collaborating with researchers from 
 the Vector Institute and the UoAmsterdam. My main research focus lies in t
 he intersection of information theory and probabilistic machine learning /
  deep learning. I previously completed a PhD under the supervision of Prof
 . Max Welling. Prior to that\, I worked at the Austrian Research Institute
 for AI\, Intelligent Music Processing and Machine Learning Group led by P
 rof. Gerhard Widmer. I studied Physics and Numerical Simulations in Leipz
 ig and Amsterdam.\n\n\n\nSpeaker Website: https://karenullrich.info/#\n\n\
 n\nHosted by: Smita Krishnaswamy\n
CATEGORIES:FDS Events,Special Seminar,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:503@fds.yale.edu
DTSTART;TZID=America/New_York:20230927T120000
DTEND;TZID=America/New_York:20230927T130000
DTSTAMP:20250916T142125Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-anay-mehrotra-
 selection-with-implicit-bias-evaluating-the-efficacy-of-interventions/
SUMMARY:Yale Theory Student Seminar: Anay Mehrotra\, "Selection with Implic
 it Bias: Evaluating the Efficacy of Interventions"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:505@fds.yale.edu
DTSTART;TZID=America/New_York:20231004T120000
DTEND;TZID=America/New_York:20231004T130000
DTSTAMP:20250916T142126Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-grigoris-veleg
 kas-statistical-indistinguishability-of-learning-algorithms/
SUMMARY:Yale Theory Student Seminar: Grigoris Velegkas\, "Statistical Indis
 tinguishability of Learning Algorithms"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:546@fds.yale.edu
DTSTART;TZID=America/New_York:20231004T160000
DTEND;TZID=America/New_York:20231004T170000
DTSTAMP:20250916T142126Z
URL:https://fds.yale.edu/events/fds-x-astronomy-colloquium-priyamvada-nata
 rajan/
SUMMARY:FDS x Astronomy Colloquium: Priyamvada Natarajan
DESCRIPTION:Speaker: Priyamvada Natarajan\nChair\, Department of Astronom
 y\nJoseph S. and Sophia S. Fruton Professor of Astronomy & Professor of
  Physics\nDirector\, The Franke Program in Science & the Humanities\nYal
 e University\n
 \n\n"Machine Learning for Fundamental Physics: New Insights for Black Hole
  Physics"\n\n\n\nAbstract: Machine Learning has typically been used in th
 e service of organizing and classifying information from complex multi-di
 mensional datasets and extracting higher-level correlations. Here we demo
 nstrate a new use-case\, an inversion\, where ML can be used powerfully t
 o garner new insights into fundamental physics - in this instance\, to st
 ress-test our understanding of how black holes grow and evolve in the Uni
 verse.\n
 \n\n\nSpeaker Bio: https://campuspress.yale.edu/priya/ \n
CATEGORIES:FDS Events,Colloquium,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:504@fds.yale.edu
DTSTART;TZID=America/New_York:20231011T120000
DTEND;TZID=America/New_York:20231011T130000
DTSTAMP:20250916T142126Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-felix-zhou-rep
 licable-clustering/
SUMMARY:Yale Theory Student Seminar: Felix Zhou\, "Replicable Clustering"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:500@fds.yale.edu
DTSTART;TZID=America/New_York:20231018T120000
DTEND;TZID=America/New_York:20231018T130000
DTSTAMP:20250916T142126Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-binghui-peng-c
 olumbia-memory-query-tradeoffs-for-randomized-convex-optimization/
SUMMARY:Yale Theory Student Seminar: Binghui Peng (Columbia)\, "Memory-Que
 ry Tradeoffs for Randomized Convex Optimization"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:542@fds.yale.edu
DTSTART;TZID=America/New_York:20231023T160000
DTEND;TZID=America/New_York:20231023T170000
DTSTAMP:20250916T142126Z
URL:https://fds.yale.edu/events/sds-seminar-cynthia-rush-columbia-universi
 ty/
SUMMARY:S&DS Seminar: Cynthia Rush (Columbia University)
DESCRIPTION:
CATEGORIES:FDS Events,Statistics & Data Science Seminar,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:541@fds.yale.edu
DTSTART;TZID=America/New_York:20231030T160000
DTEND;TZID=America/New_York:20231030T170000
DTSTAMP:20250916T142126Z
URL:https://fds.yale.edu/events/sds-seminar-tim-g-j-rudner-nyu/
SUMMARY:S&DS Seminar: Tim G.J. Rudner (NYU)
DESCRIPTION:
CATEGORIES:FDS Events,Statistics & Data Science Seminar,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:501@fds.yale.edu
DTSTART;TZID=America/New_York:20231101T120000
DTEND;TZID=America/New_York:20231101T130000
DTSTAMP:20250916T142127Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-zihan-tan-dima
 cs-on-1-eps-approximate-flow-sparsifiers/
SUMMARY:Yale Theory Student Seminar: Zihan Tan (DIMACS)\, "On (1 + eps)-Ap
 proximate Flow Sparsifiers"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:540@fds.yale.edu
DTSTART;TZID=America/New_York:20231106T160000
DTEND;TZID=America/New_York:20231106T170000
DTSTAMP:20250916T142127Z
URL:https://fds.yale.edu/events/sds-seminar-devavrat-shah-mit/
SUMMARY:S&DS Seminar: Devavrat Shah (MIT)
DESCRIPTION:
CATEGORIES:FDS Events,Statistics & Data Science Seminar,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:502@fds.yale.edu
DTSTART;TZID=America/New_York:20231108T120000
DTEND;TZID=America/New_York:20231108T130000
DTSTAMP:20250916T142127Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-peiyuan-zhang-
 the-minimax-theorem-and-algorithms-in-geodesic-metric-space/
SUMMARY:Yale Theory Student Seminar: Peiyuan Zhang\, "The Minimax Theorem a
 nd Algorithms in Geodesic Metric Space"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:539@fds.yale.edu
DTSTART;TZID=America/New_York:20231113T160000
DTEND;TZID=America/New_York:20231113T170000
DTSTAMP:20250916T142127Z
URL:https://fds.yale.edu/events/sds-seminar-yuejie-chi-carnegie-mellon-uni
 versity/
SUMMARY:S&DS Seminar: Yuejie Chi (Carnegie Mellon University)
DESCRIPTION:
CATEGORIES:FDS Events,Statistics & Data Science Seminar,Seminar Series
END:VEVENT
BEGIN:VEVENT
UID:499@fds.yale.edu
DTSTART;TZID=America/New_York:20231115T120000
DTEND;TZID=America/New_York:20231115T130000
DTSTAMP:20250916T142127Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-alkis-kalavasi
 s-optimizing-solution-samplers-for-combinatorial-problems-the-landscape-of
 -policy-gradient-methods/
SUMMARY:Yale Theory Student Seminar: Alkis Kalavasis\, "Optimizing Solutio
 n-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient M
 ethods"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:498@fds.yale.edu
DTSTART;TZID=America/New_York:20231122T120000
DTEND;TZID=America/New_York:20231122T130000
DTSTAMP:20250916T142128Z
URL:https://fds.yale.edu/events/yale-theory-student-seminar-siddharth-mitr
 a-on-system-identification-in-linear-dynamical-system/
SUMMARY:Yale Theory Student Seminar: Siddharth Mitra\, "On System Identific
 ation in Linear Dynamical System"
DESCRIPTION:If you are interested in joining the mailing list\, please reac
 h out to Marco Pirazzini (marco.pirazzini@yale.edu) or Siddharth Mitra (si
 ddharth.mitra@yale.edu).\n\n\nYale Theory Student Seminar Website\n
CATEGORIES:FDS Events,Seminar Series,Student Led Seminar,Training
END:VEVENT
BEGIN:VEVENT
UID:538@fds.yale.edu
DTSTART;TZID=America/New_York:20231127T160000
DTEND;TZID=America/New_York:20231127T170000
DTSTAMP:20250916T142128Z
URL:https://fds.yale.edu/events/sds-seminar-daniel-j-hsu-columbia-universi
 ty/
SUMMARY:S&DS Seminar: Daniel J. Hsu (Columbia University)
DESCRIPTION:
CATEGORIES:FDS Events,Statistics & Data Science Seminar,Seminar Series
END:VEVENT
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20221106T010000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20230312T030000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20231105T010000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR