
Causal AI for Transferable, Interpretable, and Controllable Machine Learning

Speaker: Lingjing Kong (CMU)

PhD Candidate

Carnegie Mellon University

Monday, February 16, 2026

1:00PM - 2:00PM

Via Webcast: https://yale.zoom.us/j/97413733616?pwd=6msa1514SaqlKDOwpd9SKXBJFTqbxZ.1

Zoom Password: 123

Abstract: Foundation models are rapidly becoming capable assistants for knowledge work, but their deployment in real settings is limited by three gaps: they do not transfer reliably across environments, their internal reasoning is opaque, and their behavior is hard to control precisely. In this talk, I argue that these limitations are not merely a matter of model scale; they are fundamentally about whether learning captures and leverages the underlying structure of the data-generating process. I use causal thinking as a practical lens to model what is invariant, what changes, and what can be intervened on, and I show how this perspective leads to learning principles that improve trustworthiness.

I will first present methods for learning unifying mechanisms from heterogeneous data, across domains and modalities, to enable reliable transfer and controllable generation. Next, I will show how structured concepts can be recovered even from seemingly unstructured data, by analyzing and improving self-supervised objectives (such as masking and diffusion) through hierarchical latent-variable models. These concept structures can then be used to interpret generative models and support targeted, multi-level edits. Finally, I connect these two threads to generalization beyond the training distribution. I will discuss natural conditions for extrapolation and a compositional generation framework that improves prompt following for novel concept combinations. I will conclude with a brief outlook on self-improving world models and AI-assisted scientific discovery.

Speaker Bio: Lingjing Kong is a Ph.D. candidate in the Computer Science Department at Carnegie Mellon University. His research focuses on Causal AI for transferable, interpretable, and controllable systems, with an emphasis on understanding and exploiting the structure of real-world data to make foundation models actionable and more reliable. He develops identification principles and scalable algorithms for learning unified models from heterogeneous data, uncovering hierarchical concept structures in unstructured data (e.g., images and text), and generalizing beyond training support through compositionality and extrapolation. His work has appeared in top ML venues, including ICML, NeurIPS, CVPR, ICLR, and EMNLP, and has been prototyped and applied in industry through research internships.

