BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
X-WR-TIMEZONE:America/New_York
BEGIN:VEVENT
UID:602@fds.yale.edu
DTSTART;TZID=America/New_York:20221109T160000
DTEND;TZID=America/New_York:20221109T180000
DTSTAMP:20250916T142119Z
URL:https://fds.yale.edu/events/fds-seminar-priya-panda-exploring-robustne
 ss-and-energy-efficiency-in-neural-systems-with-spike-based-machine-intell
 igence/
SUMMARY:FDS Seminar: Priya Panda (Department of Electrical Engineering) "E
 xploring Robustness and Energy-Efficiency in Neural Systems with Spike-b
 ased Machine Intelligence"
DESCRIPTION:Abstract: Spiking Neural Networks (SNNs) have recently emerge
 d as an alternative to deep learning due to their large energy-efficienc
 y benefits on neuromorphic hardware. In this presentation\, I will discu
 ss techniques for training SNNs that bring substantial benefits in laten
 cy\, accuracy\, interpretability\, and robustness. We will first delve i
 nto how training is performed in SNNs. Training SNNs with surrogate grad
 ients offers computational benefits due to short latency. However\, beca
 use spiking neurons are non-differentiable\, training becomes problemati
 c\, and surrogate methods have thus been limited to shallow networks. T
 o address this issue\, we will go over a recently proposed method\, Bat
 ch Normalization Through Time (BNTT)\, that allows us to train SNNs fro
 m scratch with very low latency and enables us to target applications l
 ike video segmentation as well as non-traditional learning scenarios su
 ch as federated training. Another critical limitation of SNNs is the la
 ck of interpretability. While considerable attention has been given to o
 ptimizing SNNs\, the development of explainability is still in its infa
 ncy. I will talk about our recent work on a bio-plausible visualizatio
 n tool for SNNs\, called Spike Activation Map (SAM)\, which is compatib
 le with BNTT training. SAM highlights spikes with short inter-spike int
 ervals\, which carry discriminative information for classification. Wit
 h BNTT and SAM\, I will then highlight the robustness of SNNs with resp
 ect to adversarial attacks. I will also discuss prospects of SNNs for n
 on-conventional learning scenarios such as privacy-preserving distribut
 ed learning\, as well as unraveling the temporal correlations in SNNs w
 ith feedback connections. Finally\, time permitting\, I will talk abou
 t the prospects of SNNs for novel and emerging compute-in-memory hardwa
 re that can potentially yield an order of magnitude lower power consump
 tion than conventional CPUs/GPUs.\n\nBio: Priya's research interests li
 e in neuromorphic computing\, spanning energy-efficient design methodol
 ogies for deep learning networks\, novel supervised/unsupervised learni
 ng algorithms for spiking neural networks\, and neural architectures fo
 r new computing scenarios (such as lifelong learning\, generative model
 s\, stochastic networks\, and adversarial attacks).\n\nHer goal is to e
 mpower energy-aware and energy-efficient machine intelligence through a
 lgorithm-hardware co-design while remaining secure against adversarial s
 cenarios and catering to the resource constraints of Internet of Thing
 s (IoT) devices.\n\nWebsite: https://seas.yale.edu/faculty-research/fac
 ulty-directory/priya-panda\n\nWatch (Access to Yale network required)\n
CATEGORIES:FDS Events,Seminar Series
END:VEVENT
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20221106T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR