Our Team

The primary institution for this project is Gallaudet University, the world’s premier university for DHH students, delivering a bilingual (English and ASL) education.

Associate Professor of Educational Neuroscience

Dr. Quandt’s research includes using signing avatars to teach ASL in virtual reality. She has spent over eight years developing fluent signing avatars powered by learning-based ASL recognition, and her work has demonstrated empirical support for a positive relationship between ASL use and visual-spatial STEM skills. To learn more about Dr. Quandt’s ongoing projects, visit: Action & Brain Lab.

Associate Professor of Biology

Dr. Wooten has extensive experience creating ASL STEM signs and videos that convey STEM content through ASL, and she leads an NSF conference grant fostering collaboration to evaluate the use of STEM signs in educational settings. She also founded Atomic Hands, a non-profit organization committed to increasing access to STEM through ASL.

PhD Student, Educational Neuroscience

Deanna is a research assistant in the Action & Brain Lab. Her research interests include how using a visual-manual language like American Sign Language (ASL) shapes cognition, particularly its influence on executive functions such as cognitive flexibility, working memory, and divergent thinking. She aims to examine the unique ways in which signed language users process language, generate creative ideas, and engage with spatial and linguistic tasks.

Postdoctoral Research Associate

Dr. Kezar is a recent graduate of the University of Southern California, where they developed datasets and language modeling tools for foundational ASL-oriented tasks, including recognition, comprehension, and production. Their related work explores how sign language models can accommodate rare and novel signs, such as those introduced in a classroom, by connecting automatically recognized phonemes to lexical semantics.

Consulting Research Scientist

Dr. Willis is a research scientist with expertise in how people, especially deaf signers, perceive, understand, and interact with complex social movements such as signed languages, whether in everyday environments or in emerging technologies. Her projects include EEG studies on how language deprivation leads to intellectual and developmental disability, as well as collaborations on HCI studies involving signed language technologies.

Consulting Research Scientist

Lloyd is a late-stage PhD candidate in Music Technology at Stanford University. Lloyd’s work sits at the intersection of media innovations, Disability studies, and human-computer interaction, with projects designing novel formats for customizable closed captions and developing new interfaces for creating haptic/vibration-based art.

Assistant Professor of Computer Science

Dr. Alikhani’s research interests center on using representations of communicative structure, machine learning, and cognitive science to design equitable natural language processing systems for applications such as education, health, and social justice. She has designed AI systems for sign understanding and sign generation, and she led the sign language translation challenge hosted at EMNLP 2023. Her work has received best paper awards at ACL 2021, UAI 2022, INLG 2021, UMAP2, and EMNLP 2023, and has been supported by DARPA, NIH, CDC, Google, and Amazon. To learn more about Dr. Alikhani’s ongoing projects, visit: Contextual AI Lab.

PhD Candidate, Khoury School of Computer Science

Mert’s research focuses on the intersection of multimodality, dialogue, and cognitive science. Notably, he has worked on signed languages, uncertainty detection, and discourse analysis through eye gaze and visuo-linguistic data. He publishes at *CL venues and has served on the organizing committees of multiple workshops, including WMT-SLT'23 and SpLU-RoboNLP'23-24 at EMNLP and *SEM 2023 at ACL, with his work supported by DARPA, NSF, Amazon, and Apple.

PhD Student, Khoury School of Computer Science

Saki's research focuses on natural language processing, with an emphasis on sign language processing, multimodal systems, and accessibility technologies. She is interested in developing inclusive AI systems that support equitable access to education and communication for underrepresented communities, including DHH individuals.

Associate Professor, School of Computing and Information

Dr. Walker’s research integrates interdisciplinary methods to improve the design and implementation of educational technology, and then to understand when and why it is effective. Her current focus is on how artificial intelligence techniques can be applied to support social human-human and human-agent learning interactions. To learn more about Dr. Walker’s ongoing projects, visit: FACET Lab.

Associate Professor, School of Computing and Information

Dr. Biehl’s research leverages human sensing and tracking and novel interface technologies to improve work practices and processes. He has active research in the deployment of augmented reality technologies in surgical environments, including the use of time-of-flight sensing to track surgical tool use and new approaches to neurological endoscopy that combine live anatomical video with navigation and diagnostic aids. He has significant experience building, deploying, and evaluating technologies in the context of authentic work tasks. To learn more about Dr. Biehl’s ongoing projects, visit: Surreality Lab.


Broader Impacts Project Coordinator

Dr. Kelley has over fifteen years of experience in K-12 and teacher education. She earned her doctorate in Learning Technologies from Pepperdine University and is passionate about exploring new tools that can improve teaching and learning. She currently works in the FACET Lab under the leadership of Dr. Walker to support research dissemination and related broader impacts initiatives.

PhD Student, School of Computing and Information

Griffin is an NSF GRFP fellow and PhD student in computer science at the University of Pittsburgh. He works in the Surreality Lab under the leadership of Dr. Jacob Biehl and Dr. Edward Andrews, researching spatial computing (AR/VR) in the medical domain, with a focus on the intersection of spatial computing and human-computer interaction.