Files
File List
automaticNonmanuals.pdf (9.02 MB)
A Computational Study of American Sign Language Nonmanuals
Benitez-Quiroz, Carlos Fabian
Permalink:
http://rave.ohiolink.edu/etdc/view?acc_num=osu1436909704
Year and Degree
2015, Doctor of Philosophy, Ohio State University, Electrical and Computer Engineering.
Abstract
American Sign Language (ASL) is a multichannel communication system that involves manual components, such as hand shape and movement, and nonmanual components, such as body posture, head motion, and facial expressions. While significant progress has been made in understanding the features that define ASL manuals, after years of research much still needs to be done to understand its nonmanual components. ASL nonmanual linguistic research has typically been addressed by manually annotating facial events (e.g., brow raising and mouth opening, among others) and comparing the frequencies of such events to find grammatical clues about a given event in a sentence or, as linguists call it, a construction. This tedious process is difficult to scale, especially as the number of facial events and the number of samples grow. Another major obstacle is the difficulty of finding correlations between facial features and linguistic features, especially since these correlations may be temporally defined. For example, a facial feature (e.g., the head moving down) occurring at the end of the movement of another facial feature (e.g., the brows moving up) may specify a hypothetical conditional, but only if this time relationship is maintained. For many grammatical constructions, however, the facial features that define these dynamic facial expressions of grammar remain unknown. In this work, we introduce a computational approach to carry out the analysis of nonmanuals efficiently. First, a computational linguistic model of the face is defined to characterize the basic components used in ASL facial and head nonmanuals. Our results verify several components of the standard model of ASL nonmanuals and, most importantly, identify several previously unreported features and their temporal relationships. Notably, our results uncovered a complex interaction between head position and mouth shape.
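The time relationship described above (one facial event beginning as another ends) can be sketched as a simple interval test over annotated events. The event names, frame intervals, and tolerance below are illustrative assumptions, not the dissertation's actual annotation scheme:

```python
# Sketch: testing a temporal relation between two annotated facial events.
# Each event is a (start_frame, end_frame) interval.

def occurs_at_end_of(event_a, event_b, tolerance=3):
    """True if event_b begins within `tolerance` frames of event_a's end."""
    _, end_a = event_a
    start_b, _ = event_b
    return abs(start_b - end_a) <= tolerance

brow_raise = (10, 42)   # frames during which the brows are raised
head_down = (41, 60)    # frames during which the head moves down

occurs_at_end_of(brow_raise, head_down)  # True: candidate conditional marker
```

Scaling this kind of check over many event pairs and many signed sentences is exactly what is hard to do by hand, which motivates the computational approach.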
These findings define temporal structures of ASL nonmanuals not previously identified by other approaches. Second, we study the hypothesis that facial expressions of negative moral judgment (i.e., contempt, anger, and disgust) have evolved into a facial expression of negation regularly used as a grammatical marker in human language; that this nonverbal signal is used as a co-articulator in speech; and that, in American Sign Language, it has been grammaticalized as a nonmanual marker. Our results show commonalities in muscle activation across three spoken languages (i.e., English, Chinese, and Spanish) and across modalities (i.e., ASL). Third, we create computational tools that extract facial events by detecting the facial components (e.g., eyebrows, eyelids, mouth, nose, and jawline) over potentially many video sequences of signed sentences. Our approach achieves state-of-the-art precision in detecting facial landmarks and can be easily extended to other domains such as medical images. Furthermore, we model the movement of the facial components as continuous functions that are classified using a maximum-margin functional approach. Our new algorithm can be applied to a variety of data and applications, such as dynamic facial expressions and body gestures. Although the proposed methodologies are tailored to ASL, they can be applied to other areas such as facial expression of emotion, face shape detection, object detection, and data mining, among many others.
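The maximum-margin functional approach can be sketched as follows: each movement curve is reduced to a finite functional representation (here, a small polynomial basis) and a linear maximum-margin boundary is fit on the coefficients with hinge-loss subgradient steps. The basis choice, optimizer, and toy data are illustrative assumptions, not the dissertation's actual algorithm:

```python
# Minimal sketch of max-margin classification of continuous trajectories.
import numpy as np

def basis_coeffs(curve, degree=3):
    """Project a sampled curve onto a polynomial basis over [0, 1]."""
    t = np.linspace(0.0, 1.0, len(curve))
    return np.polyfit(t, curve, degree)

def train_max_margin(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM-style classifier via hinge-loss subgradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        # Subgradient: regularizer minus mean of y_i * x_i over violators.
        grad = lam * w - (X.T @ (y * (margins < 1))) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
# Two toy classes of movement curves: rising vs. falling trajectories.
rising = [t + 0.05 * rng.standard_normal(50) for _ in range(20)]
falling = [1 - t + 0.05 * rng.standard_normal(50) for _ in range(20)]
X = np.array([basis_coeffs(c) for c in rising + falling])
y = np.array([1] * 20 + [-1] * 20)

w = train_max_margin(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
```

Working with basis coefficients rather than raw frames keeps the classifier small and makes curves of different lengths comparable, which is the practical appeal of a functional representation.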
Committee
Aleix Martinez (Advisor)
Kevin Passino (Committee Member)
Yuan Zheng (Committee Member)
Pages
118 p.
Subject Headings
Computer Science; Electrical Engineering; Linguistics
Keywords
Face perception, American Sign Language, ASL, computational models, functional analysis, shape detection
Recommended Citations
APA Style (7th edition)
Benitez-Quiroz, C. F. (2015). A Computational Study of American Sign Language Nonmanuals [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436909704
MLA Style (8th edition)
Benitez-Quiroz, Carlos. A Computational Study of American Sign Language Nonmanuals. 2015. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1436909704.
Chicago Manual of Style (17th edition)
Benitez-Quiroz, Carlos. "A Computational Study of American Sign Language Nonmanuals." Doctoral dissertation, Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436909704
Document number:
osu1436909704
Download Count:
693
Copyright Info
© 2015, some rights reserved.
A Computational Study of American Sign Language Nonmanuals by Carlos Fabian Benitez-Quiroz is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. Based on a work at etd.ohiolink.edu.
This open access ETD is published by The Ohio State University and OhioLINK.