Gasser Elbanna (جاسر البنا)
3rd-year PhD Student - Harvard University

I study how humans perceive, understand, and produce speech. To investigate these phenomena, I build computational models that can reproduce human behavior and generate testable predictions about the perceptual and neural basis of human spoken communication.

I am currently a third-year PhD student in the Speech and Hearing Bioscience and Technology (SHBT) Program at Harvard University. I conduct my research in the Laboratory for Computational Audition at MIT, where I am advised by Josh McDermott.

I earned my B.Sc. in Systems and Biomedical Engineering from Cairo University (Egypt). During my undergraduate studies, I worked on developing computational models of motor neurons to study the early stages of ALS (Thesis).

I later completed an M.Sc. in Neuroscience and Neuroengineering at EPFL (Switzerland), where I was introduced to speech machine learning and cognitive neuroscience. As a Bertarelli fellow, I completed my master's thesis at HMS and MIT in the Senseable Intelligence Group with Satrajit Ghosh, studying how humans and artificial neural network models recognize and represent voices (Thesis).


Education
  • Harvard University
    Ph.D. in Speech and Hearing Bioscience and Technology
    Sep. 2023 - present
  • Harvard University
    A.M. (with 4.0 GPA) in Speech and Hearing Sciences
    Sep. 2023 - Nov. 2025
  • EPFL
    M.Sc. (with mention d'excellence) in Neuroscience and Neuro-engineering
    Sep. 2020 - Apr. 2023
  • Cairo University
    B.Sc. (with honors) in Systems and Biomedical Engineering
    Sep. 2015 - Aug. 2020
Experience
  • IDIAP Research Institute
    Speech Machine Learning Intern
    Apr. 2023 - Aug. 2023
  • Logitech
    Voice AI Intern
    Sep. 2021 - Feb. 2022
  • Machine Learning and Optimization Laboratory
    ML & Data Visualization Research Assistant
    Mar. 2021 - Oct. 2021
Synopsis of Research Interests
Develop computational models of human spoken communication
I build neural network models that map acoustic speech signals to linguistic representations, with the goal of explaining how humans recognize speech across variability in talkers, context, and noise.
speech perception · artificial neural networks · speech production · speech understanding
Measure humans' spoken communication abilities at scale
I design behavioral experiments to measure spoken communication abilities in humans and models.
psychophysics · human-model comparison · benchmarks
Understand the role of contextual integration in shaping human speech perception
I use computational models to reveal the role of context in improving and/or biasing speech perception in humans.
talker normalization · adaptation effects · coarticulation
News
2026
Gave a talk in the ANCOR seminar series at Brown University
Apr 17
Gave a lecture in the SHBT 205 graduate class on "Speech perception and its neural basis"
Apr 13
Received the Albert J. Ryan Fellowship from Harvard Medical School
Mar 11
Gave a Poster Blitz talk at the ARO conference in Puerto Rico
Feb 07
Became a Peer Tutor at the Academic Resource Center at Harvard
Jan 26
2025
Presented a poster at the UniReps workshop at NeurIPS in San Diego
Dec 06
Co-organized the Muslims in Machine Learning workshop at NeurIPS
Dec 02
Received an en-route master's degree in Speech and Hearing Sciences from Harvard
Nov 30
Presented a poster at APAN in San Diego
Nov 14
Gave a Phonology Circle talk in the Linguistics Department at MIT
Nov 03
Became a resident tutor at Leverett House. Go bunnies!!
Sep 09
Attended the CBMM summer school
Aug 03
Gave a talk at the VCCA conference
Jun 27
Presented a poster at the Kempner Symposium for NeuroAI at Harvard
Jun 05
Gave a talk at the Fedorenko Lab, BCS Department, MIT
May 06