Research in Progress
We are using motion capture (MoCap) technology to record high-precision body and hand movement data from native signers. Since our project focuses on generating ASL STEM signs, participants will sign both conceptually aligned ASL terms and the varied signs commonly used by DHH signers for these core STEM concepts. MoCap will capture individual key terms as well as short narrative explanations of each term (see examples of this process below).
This data is visualized as an animated avatar and serves as a key input for training our AI-based sign language recognition system. By capturing detailed motion information, we will ensure the system learns from accurate and nuanced representations of real sign language use.
Motion Capture Data Collection
Dr. Wooten and a Gallaudet student perform pre-identified key STEM signs with narrative explanations while outfitted in full-body motion capture suits and Faceware facial capture devices.
Facial features of each participant are captured via Faceware devices as they sign to communicate with one another.
The full-body motion suits use sensors to track and record participants’ movements as they sign; these movements are then mapped onto a computer screen in real time as a virtual skeleton.
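To make the real-time skeleton mapping concrete, the sketch below shows one minimal way streamed joint positions could be turned into the line segments of an on-screen skeleton. The joint names, bone list, and frame layout are illustrative assumptions; the actual suit SDK and data format used in the project are not described here.

```python
# Minimal sketch of mapping streamed joint data to a virtual skeleton.
# The joint names, bone list, and frame format are illustrative assumptions;
# the actual capture pipeline (suit SDK, file format) is not specified here.
from dataclasses import dataclass

# Hypothetical subset of joints reported by a full-body suit.
JOINTS = ["hips", "spine", "neck", "head",
          "l_shoulder", "l_elbow", "l_wrist",
          "r_shoulder", "r_elbow", "r_wrist"]

# Bones drawn between pairs of joints to form the on-screen skeleton.
BONES = [("hips", "spine"), ("spine", "neck"), ("neck", "head"),
         ("spine", "l_shoulder"), ("l_shoulder", "l_elbow"), ("l_elbow", "l_wrist"),
         ("spine", "r_shoulder"), ("r_shoulder", "r_elbow"), ("r_elbow", "r_wrist")]

@dataclass
class Frame:
    timestamp: float                                    # seconds since recording start
    positions: dict[str, tuple[float, float, float]]    # joint -> (x, y, z) in meters

def frame_to_segments(frame: Frame):
    """Convert one frame of joint positions into line segments for rendering."""
    segments = []
    for parent, child in BONES:
        if parent in frame.positions and child in frame.positions:
            segments.append((frame.positions[parent], frame.positions[child]))
    return segments

# Example: a single (made-up) frame printed as segment endpoints.
if __name__ == "__main__":
    frame = Frame(0.0, {name: (i * 0.1, 1.0, 0.0) for i, name in enumerate(JOINTS)})
    for a, b in frame_to_segments(frame):
        print(a, "->", b)
```

In a live setting, the same per-frame conversion would feed a renderer so the virtual skeleton updates as the participant signs.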
The ASL Experiment is designed to help our team build sign recognition and generation models that work for STEM vocabulary. For instance, during collaborative learning, DHH participants might identify the need for a new sign name for a term that lacks an established, meaning-aligned sign (e.g., when discussing a highly specific idea like “photosynthesis,” a new conceptually motivated sign may be preferred over time-consuming fingerspelling). In such cases, students and teachers typically strive to reach a consensus on the sign to use, an interactive and context-dependent process. Our goal is for BRIDGE to recognize and generate these newly agreed-upon signs, in addition to predetermined key signs, by utilizing existing text-only LLMs to build compatible representations of signs and gestures. This study addresses the research question: “How can we learn recognition models that are flexible, work with limited data, and take into account the differences between signs produced by students from diverse backgrounds?”
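As a rough illustration of the limited-data setting in that research question, the sketch below classifies a new sign clip by comparing its embedding to per-sign prototypes averaged from only a few reference recordings. The embedding function here is a deliberately simple placeholder; aligning such motion embeddings with text-only LLM representations is an assumption about one possible design, not the project’s confirmed approach.

```python
# Minimal sketch of a few-shot sign recognizer: classify a new clip by
# comparing its embedding to per-sign "prototype" embeddings averaged from
# a handful of reference recordings. The embedding is a simple placeholder.
import numpy as np

def embed_clip(keypoints: np.ndarray) -> np.ndarray:
    """Placeholder embedding: mean and std of keypoint trajectories.
    keypoints has shape (frames, joints, 3)."""
    flat = keypoints.reshape(keypoints.shape[0], -1)
    vec = np.concatenate([flat.mean(axis=0), flat.std(axis=0)])
    return vec / (np.linalg.norm(vec) + 1e-8)

def build_prototypes(examples: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    """Average a few example embeddings per sign label (few-shot setting)."""
    return {label: np.stack([embed_clip(c) for c in clips]).mean(axis=0)
            for label, clips in examples.items()}

def classify(clip: np.ndarray, protos: dict[str, np.ndarray]) -> str:
    """Return the label whose prototype has the highest cosine similarity."""
    q = embed_clip(clip)
    return max(protos, key=lambda lbl: float(q @ protos[lbl]) /
               (np.linalg.norm(protos[lbl]) + 1e-8))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical labels and random stand-in keypoint data (60 frames, 20 joints).
    examples = {"PHOTOSYNTHESIS": [rng.normal(size=(60, 20, 3)) for _ in range(3)],
                "MOLECULE": [rng.normal(size=(60, 20, 3)) for _ in range(3)]}
    protos = build_prototypes(examples)
    print(classify(examples["MOLECULE"][0] + 0.01, protos))
```

A prototype-based scheme like this can incorporate a newly agreed-upon classroom sign by adding one more prototype, which is one way to read the flexibility requirement above.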
Modeling Meaning: Recognizing and Generating STEM Signs in Collaborative ASL Contexts
Fingerspelling a STEM concept such as Photosynthesis (letter E shown)
Producing a conceptually relevant ASL sign for the same concept (sign for SUN and sign for EXCHANGE pictured)
In this phase of the project, we are working to identify design principles for intelligent lexical support for signing learners in a collaborative learning context, including the platform and modality of the support as well as its timing and content. The aim is for BRIDGE to display an avatar (a virtual human) signing relevant STEM signs to promote a shared understanding of key concepts, providing both just-in-time support to students attempting to produce a STEM sign and a dashboard visualizing a group’s STEM sign use. The representations of the STEM vocabulary will include signing avatars and English captions (see example image below) delivered via augmented reality (AR). Once our initial prototype is developed, we will conduct user-experience (UX) testing and further co-design sessions with DHH college students to ensure that BRIDGE is responsive to their actual needs. This study addresses the research question: “Which interventions, deployed at what times, support the alignment of STEM sign use within a classroom during collaboration?”
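As one illustration of what the dashboard component might aggregate, the sketch below tallies recognized STEM sign events per group. The event fields and labels are hypothetical placeholders, not a specification of BRIDGE’s actual data model.

```python
# Minimal sketch of aggregating recognized sign events for a group dashboard.
# The SignEvent fields (student, group, gloss, timestamp) are hypothetical.
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class SignEvent:
    student: str      # anonymized student identifier
    group: str        # collaborative group identifier
    gloss: str        # recognized STEM sign label, e.g. "PHOTOSYNTHESIS"
    timestamp: float  # seconds into the session

def summarize(events: list[SignEvent]) -> dict[str, Counter]:
    """Count how often each STEM sign gloss is used within each group."""
    by_group: dict[str, Counter] = defaultdict(Counter)
    for ev in events:
        by_group[ev.group][ev.gloss] += 1
    return dict(by_group)

if __name__ == "__main__":
    events = [SignEvent("s1", "groupA", "PHOTOSYNTHESIS", 12.0),
              SignEvent("s2", "groupA", "PHOTOSYNTHESIS", 15.5),
              SignEvent("s1", "groupA", "MOLECULE", 20.1)]
    for group, counts in summarize(events).items():
        print(group, dict(counts))
```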
Augmenting Understanding: Co-Designing Just-in-Time AR Support for STEM Sign Learning in Collaborative Settings
Depiction of possible AR overlay design