BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
X-WR-TIMEZONE:America/New_York
BEGIN:VEVENT
UID:805@fds.yale.edu
DTSTART;TZID=America/New_York:20241211T113000
DTEND;TZID=America/New_York:20241211T130000
DTSTAMP:20250916T142145Z
URL:https://fds.yale.edu/events/fds-colloquium-guang-lin-purdue-towards-in
 terpretable-robust-trustworthy-machine-learning-for-diverse-applications-i
 n-science-and-engineering/
SUMMARY:FDS Colloquium: Guang Lin (Purdue)\, "Towards Interpretable\, Rob
 ust\, Trustworthy Machine Learning for Diverse Applications in Science
  and Engineering"
DESCRIPTION:Abstract: This talk aims to create new technologies that can
  be translated into interpretable\, robust\, trustworthy AI systems that
  can be deployed for real-time prediction of complex dynamical systems\,
  with applications that improve the stability and efficiency of those sy
 stems. In the first part of the talk\, I will use COVID-19 pandemic pred
 iction and personalized prediction of Alzheimer's disease to illustrate
  how to build an interpretable\, trustworthy data-driven model. In the s
 econd part\, I will introduce scalable algorithms for Bayesian deep lear
 ning via replica exchange stochastic gradient Monte Carlo. Replica excha
 nge Monte Carlo (reMC)\, also known as parallel tempering\, is an import
 ant technique for accelerating the convergence of conventional Markov ch
 ain Monte Carlo (MCMC) algorithms. However\, it requires evaluating the
  energy function on the full dataset and therefore does not scale to big
  data. A naïve mini-batch implementation of reMC introduces large biases
 \, so the method cannot be directly extended to stochastic gradient MCMC
  (SGMCMC)\, the standard sampling method for deep neural networks (DNNs)
 . In this talk\, I will present an adaptive replica exchange SGMCMC (reS
 GMCMC) to automatically correct the bias and study its properties. The a
 nalysis implies an acceleration-accuracy trade-off in the numerical disc
 retization of a Markov jump process in a stochastic environment. Empiric
 ally\, we test the algorithm through extensive experiments on various se
 tups and obtain state-of-the-art results on CIFAR10 and CIFAR100 in both
  supervised and semi-supervised learning tasks.\n\nSpeaker bio: Prof. Gu
 ang Lin is the Associate Dean for Research and Innovation and the Direct
 or of the Data Science Consulting Service\, which performs cutting-edge
  research on data science and provides hands-on consulting support for d
 ata analysis and business analytics. He is also the Chair of the Initiat
 ive for Data Science and Engineering Applications at the College of Engi
 neering\, and a Full Professor in the School of Mechanical Engineering a
 nd the Department of Mathematics at Purdue University.\n\nLin received h
 is Ph.D. from Brown University in 2007 and worked as a Research Scientis
 t at DOE Pacific Northwest National Laboratory before joining Purdue in
  2014. Prof. Lin has received various awards\, including the NSF CAREER
  Award\, the Mid-Career Sigma Xi Award\, University Faculty Scholar\, th
 e College of Science Research Award\, the Mathematical Biosciences Insti
 tute Early Career Award\, and the Ronald L. Brodzinski Award for Early C
 areer Exceptional Achievement.\n
CATEGORIES:FDS Events,Colloquium
LOCATION:Yale Institute for Foundations of Data Science\, Kline Tower 13th 
 Floor\, Room 1327\, New Haven\, CT\, 06511\, United States
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=Kline Tower 13th Floor\, Ro
 om 1327\, New Haven\, CT\, 06511\, United States;X-APPLE-RADIUS=100;X-TITL
 E=Yale Institute for Foundations of Data Science:geo:0,0
END:VEVENT
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20241103T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR