Search Results

(Total results 22)
  • 1. Ayyalasomayajula, Meghana Image Emotion Analysis: Facial Expressions vs. Perceived Expressions

    Master of Computer Science (M.C.S.), University of Dayton, 2022, Computer Science

    A picture is worth a thousand words. A single image has the power to influence individuals and change their behaviour, whereas a single word does not. Even a barely visible image, displayed on a screen for only a few milliseconds, appears capable of changing one's behaviour. In this thesis, we experimentally investigated the relationship between facial expressions and perceived emotions. To this end, we built two datasets: an image dataset for image emotion analysis and a face dataset for expression recognition. During annotation of the image dataset, both facial expressions and perceived emotions were recorded via a mobile application. We then used a classifier trained on the face dataset to recognize each user's expression and compared it with the perceived emotion.

    Committee: Tam Nguyen (Advisor) Subjects: Computer Science
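
    The comparison step in entry 1 above reduces to matching two label arrays: the classifier's recognized expression against the annotator's reported perceived emotion. A minimal sketch of such an agreement measure; the function name and labels are illustrative, not from the thesis:

        import numpy as np

        def agreement_rate(predicted_expressions, perceived_emotions):
            # Fraction of annotations where the recognized facial expression
            # matches the emotion the annotator reported perceiving.
            p = np.asarray(predicted_expressions)
            q = np.asarray(perceived_emotions)
            return float(np.mean(p == q))

        # Hypothetical labels for five annotated images.
        print(agreement_rate(["joy", "sad", "joy", "fear", "neutral"],
                             ["joy", "neutral", "joy", "fear", "sad"]))  # 0.6
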
  • 2. Abualula, Yosra Emotion Perception and Culture

    Master of Arts in Psychology, Cleveland State University, 2023, College of Sciences and Health Professions

    The process of perceiving and expressing emotion is multifaceted and governed by a plethora of variables. Culture and group membership have been shown to influence how emotions are displayed and interpreted. Individuals perceive the emotions of out-group members less accurately. Furthermore, perceiving emotions depends on contextual cues, preconceived biases, and familiarity. Cultural cues have an embedded meaning that guides emotional inferences. For the present study, a sample of 40 Muslim female participants was shown pictures of veiled female faces. The type of veil was manipulated using either an Islamic niqab or simply a scarf and a winter cap. Participants were asked to identify the emotion displayed (happiness, sadness, anger, fear, or neutral) in the veiled faces, of which only the eye region was visible. Overall, participants identified happy and angry faces more accurately than neutral, sad, and fearful faces. Participants showed no differentiation in perceiving the covered faces between the two head-covering conditions. This suggests that cultural familiarity with face processing in the presence of head coverings may account for this absence of distinction.

    Committee: Eric Allard (Committee Chair); Kenneth Vail (Committee Member); Shereen Naser (Committee Member) Subjects: Psychology
  • 3. Koveleskie, Michaela Patient Psychological Factors Related to Cosmetic Surgery Satisfaction

    Doctor of Psychology (Psy.D.), Xavier University, 2022, Psychology

    Studies investigating patients who undergo elective facial cosmetic procedures for purely aesthetic reasons are abundant. However, no study has combined all variables documented in prior research to identify those most relevant to post-surgical satisfaction. Additionally, no study has attempted to utilize a novel tool to assess for the presence of Body Dysmorphic Disorder (BDD) in this population, which is important as face-valid screening measures are insufficient. The present study, drawing from the limited research on neuropsychological profiles of individuals with BDD, sought to examine the utility of a facial emotion recognition measure, the Adult “Reading the Mind in the Eyes Test” Revised Version (RMET; Baron-Cohen et al., 2001), as a proxy measure for BDD. A total of 95 adults from 12 countries completed all study measures. Most underwent between one and three surgical procedures, and 69.3% reported having had rhinoplasty. Using a dichotomous BDD screener (BDDQ; Phillips et al., 1995), 31.6% endorsed criteria indicating the presence of BDD. Total scores were also calculated based on positive endorsements of each item in the screener. Using total scores, 59% were in a possible positive range. The BDD screen and generated total scores were highly correlated (r = .67, p < .001). BDD mediated the relationship between the RMET total and the following areas: satisfaction with facial appearance, decision to undergo surgery, and global satisfaction. Overall, our results suggest that a facial emotion recognition measure (RMET) could potentially be utilized as a proxy measure for BDD in patients seeking facial cosmetic procedures.

    Committee: Jennifer Phillips Ph.D. (Advisor); Karl Stukenberg Ph.D., ABPP (Committee Member); Kathleen Hart Ph.D., ABPP (Committee Member) Subjects: Behavioral Psychology; Behavioral Sciences; Clinical Psychology; Cognitive Psychology; Health Care; Medicine; Mental Health; Psychological Tests; Psychology
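
    The mediation result in entry 3 above (BDD mediating the RMET-satisfaction relationship) rests on the standard product-of-coefficients estimate of an indirect effect. A minimal sketch of that computation, assuming ordinary least squares and illustrative variable names, not the study's actual data or software:

        import numpy as np

        def indirect_effect(x, m, y):
            # a: effect of the predictor x on the mediator m.
            X1 = np.column_stack([np.ones(len(x)), x])
            a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
            # b: effect of m on the outcome y, controlling for x.
            X2 = np.column_stack([np.ones(len(x)), x, m])
            b = np.linalg.lstsq(X2, y, rcond=None)[0][2]
            # The indirect (mediated) effect is the product a * b.
            return a * b
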
  • 4. Dawson, Glen The Role of Dispositional Mindfulness in the Development of Emotion Recognition Ability and Inhibitory Control from Late Adolescence to Early Adulthood

    Doctor of Philosophy, Case Western Reserve University, 2020, Psychology

    Emotion recognition, or the accurate identification of affect as expressed by another individual, is integral to healthy social functioning. Research into emotion recognition ability has largely ignored the potential moderating effects of inhibitory control. Further, dispositional mindfulness, or non-judgmental present-moment awareness, has been shown to affect emotion recognition both directly and indirectly through its influence on inhibitory control. Unfortunately, research into the developmental relationship between emotion recognition ability, inhibitory control, and mindfulness, particularly in late adolescence, is lacking. The present study sought to address this gap in the literature through a novel investigation of emotion recognition ability, inhibitory control, and dispositional mindfulness in a sample of late adolescents ages 16-17 and emerging adults ages 18-19. Participants completed an emotional go/no-go task with happy, sad, and neutral facial stimuli while event-related potential (ERP) brain responses were recorded via electroencephalography (EEG). This investigation focused on the N2 and error-related negativity (ERN) ERP components. Results from performance measures indicated stronger emotion recognition ability in the emerging adult group versus the late adolescent group across valences, as measured by perceptual sensitivity (d'). ERP results indicated stronger frontal N2 amplitudes toward emotions in the emerging adult group, with no difference between groups in ERN amplitude. Mindfulness was associated with longer reaction times on the emotional go/no-go task but had no relationship with perceptual sensitivity or ERP amplitudes. Implications for the development of emotion recognition ability in late adolescence are discussed.

    Committee: Arin Connell Ph.D. (Committee Chair); Sandra Russ Ph.D. (Committee Member); Julie Exline Ph.D. (Committee Member); Melissa Armstrong-Brine Ph.D. (Committee Member) Subjects: Psychology
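
    The perceptual sensitivity measure in entry 4 above, d', is computed from go/no-go hit and false-alarm rates. A minimal sketch, assuming the common log-linear correction for rates of 0 or 1; the trial counts below are invented for illustration:

        import numpy as np
        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # d' = z(hit rate) - z(false-alarm rate), with a log-linear
            # correction so extreme rates do not yield infinite z-scores.
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # e.g., 45 hits / 5 misses on go trials, 8 false alarms / 42 correct
        # rejections on no-go trials.
        print(round(d_prime(45, 5, 8, 42), 2))
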
  • 5. Liu, Xiao AUTOMATED FACIAL EMOTION RECOGNITION: DEVELOPMENT AND APPLICATION TO HUMAN-ROBOT INTERACTION

    Master of Sciences, Case Western Reserve University, EMC - Mechanical Engineering

    This thesis presents two image processing algorithms for facial emotion recognition (FER). The first method uses two pre-processing filters (pre-filters), i.e., a brightness and contrast filter and an edge extraction filter, combined with a Convolutional Neural Network (CNN) and a Support Vector Machine (SVM). By using optimal pre-filter parameters in the pre-processing of the training images, FER classification could reach 98.19% accuracy using a CNN with 3,500 epochs on 3,589 face images from the FER2013 dataset. The second approach introduces two geometrical facial features based on action units: landmark curvatures and vectorized landmarks. This method first detects facial landmarks and extracts action unit (AU) features. The extracted facial segments based on the action units are classified into five groups and input to an SVM. The presented method shows how individual parameters, including detected landmarks, AU group selection, and parameters used in the SVM, can be examined and systematically selected for optimal FER performance. After parameter optimization, the results showed 98.38% test accuracy when training on 1,479 labeled frames of the Cohn-Kanade (CK+) database, and 98.11% test accuracy when training on 1,710 labeled frames of the Multimedia Understanding Group (MUG) database for 6-emotion classification. The technique also achieves real-time processing at 6.67 frames per second (fps) for images with a 640x480 resolution. The novelty of the first approach lies in combining image processing filters with a CNN to enhance CNN performance; the second approach systematically analyzes the effectiveness of the proposed geometric features and implements FER in real time. The demonstrated algorithms have been applied to a human-robot interaction (HRI) platform, the social robot "Woody", for testing. The presented algorithms have been made publicly available.

    Committee: Kiju Lee (Committee Chair); Kathryn Daltorio (Committee Member); Frank Merat (Committee Member) Subjects: Computer Science; Mechanical Engineering; Robotics
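
    The first method in entry 5 above hinges on two pre-filters applied before CNN training. A minimal OpenCV sketch of that pre-processing stage; the linear brightness/contrast mapping and the Canny edge detector are plausible stand-ins rather than the thesis's exact filters, and the parameter values are placeholders, not its optimized settings:

        import cv2

        def prefilter(gray, alpha=1.3, beta=10, canny_lo=50, canny_hi=150):
            # Brightness/contrast filter: out = alpha * pixel + beta,
            # saturated to [0, 255].
            adjusted = cv2.convertScaleAbs(gray, alpha=alpha, beta=beta)
            # Edge-extraction filter (Canny assumed here).
            return cv2.Canny(adjusted, canny_lo, canny_hi)

        # A 48x48 FER2013-style grayscale crop would then be fed to the CNN.
        img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
        filtered = prefilter(img)
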
  • 6. Madison, Annelise Social Anxiety Symptoms, Heart Rate Variability, and Vocal Emotion Recognition: Evidence of a Normative Vagally-Mediated Positivity Bias in Women

    Master of Arts, The Ohio State University, 2019, Psychology

    Prior research has revealed differences in social cognition between socially anxious and non-anxious individuals. In particular, perceptual biases (i.e., greater attention to negative social stimuli, discounting of positive social cues) are common features of social anxiety. Reduced parasympathetic (vagal) activity, especially in social contexts, may underlie these biases. This study examined the association between social anxiety symptoms and vocal emotion recognition accuracy as well as biases. Heart rate variability (HRV), an index of vagal activity, was tested as a potential mediator of this relationship. Female undergraduate students (N=125) completed the Social Anxiety Disorder Dimensional Scale (SAD-D), a measure of social anxiety symptomatology, and a computerized vocal emotion recognition task using stimuli from the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) stimulus set. Participants provided heart rate data throughout study participation, allowing for calculation of the root mean square of successive differences (RMSSD), a time-domain measure of HRV. Social anxiety symptoms were positively related to overall emotion recognition accuracy and inversely associated with intensity ratings of positive stimuli. Additionally, social anxiety was negatively related to HRV during the emotion recognition task. Through task HRV, there was an indirect, negative relationship between social anxiety and positivity bias and an indirect, positive relationship between social anxiety and recognition accuracy. A vagally-mediated positivity bias may be an indicator or facilitator of normal social functioning in women. Moreover, greater emotion recognition accuracy may not always be “better”, especially if it is driven by lower positivity bias.

    Committee: Janice Kiecolt-Glaser PhD (Advisor); Charles Emery PhD (Committee Member); Michael Vasey PhD (Committee Member) Subjects: Psychology
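
    RMSSD, the HRV index in entry 6 above, has a simple closed form: the root mean square of successive differences between consecutive RR intervals. A minimal sketch; the interval values are invented:

        import numpy as np

        def rmssd(rr_intervals_ms):
            # Root mean square of successive differences between
            # consecutive RR intervals (in milliseconds).
            diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
            return float(np.sqrt(np.mean(diffs ** 2)))

        print(round(rmssd([812, 790, 845, 830, 805, 820]), 1))
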
  • 7. Siddiqui, Mohammad Faridul Haque A Multi-modal Emotion Recognition Framework Through The Fusion Of Speech With Visible And Infrared Images

    Doctor of Philosophy, University of Toledo, 2019, Engineering (Computer Science)

    Multi-modal interaction is a form of Human-Computer Interaction (HCI) that combines multiple sensory and representational modalities. In human-to-human interaction, participants typically use all available modalities and media, including speech, gestures, facial expressions, eye movements, and documents. These modalities can be captured with different types of sensors, such as a microphone for voice, a camera or live video for gesture recognition, and a touchscreen for touch. The redundancy this introduces is one of the ways humans ensure a message is understood, and multimodal interaction plays a similar role in resolving ambiguity: by exchanging less ambiguous information between the two collaborators, multimodal systems become more reliable, more efficient, less error prone, and better able to handle complex and varied situations and tasks. Emotion recognition is an area of HCI that adopts this multimodal approach to achieve more accurate and more natural results. Widespread use of affect identification in e-learning, marketing, security, the health sciences, and other fields has increased the demand for high-precision emotion recognition systems, and machine learning is increasingly applied to improve the process by refining architectures or by exploiting high-quality databases. This dissertation presents an overview of work in multi-modal HCI and its use for emotion recognition. Fusion, a key component in the architecture of multi-modal HCI, forms the cornerstone of this research; its implementation in various forms is discussed and realized. Phase I of the research begins with a proposal for fusing two modalities at the grammar level while preserving the semantics of the modalities: grammars are inferred and then combined using genetic algorithm (GA) operators. The related results, assumptions (open full item for complete abstract)

    Committee: Ahmad Y. Javaid (Committee Chair); Mansoor Alam (Committee Member); Devinder Kaur (Committee Member); Xioli Yang (Committee Member); Weiqing Sun (Committee Member) Subjects: Artificial Intelligence; Computer Engineering; Computer Science
  • 8. Copps, Emily Interpersonal Functions of Non-Suicidal Self-Injury and Their Relationship to Facial Emotion Recognition and Social Problem-Solving

    Doctor of Psychology (Psy.D.), Xavier University, 2019, Psychology

    Non-suicidal self-injury (NSSI) is a growing area of concern in both clinical and non-clinical populations. Understanding the motivations for engaging in this behavior, as well as the characteristics of individuals who engage in NSSI, is crucial for developing maximally effective interventions. Previous research has indicated that while nearly all self-injurers report doing so as a way of regulating emotions, a slightly smaller proportion (approximately 85%) of individuals who engage in NSSI report doing so for interpersonal reasons, for example, as a way of communicating with others (Turner, Chapman, & Layden, 2012). The current study sought to examine characteristics of individuals who endorse interpersonal functions of self-injury in comparison to self-injurers who do not endorse such functions and to non-self-injuring control participants. It was hypothesized that individuals who endorsed interpersonal NSSI would have greater deficits in social problem-solving and facial emotion recognition compared to self-injurers who do not endorse interpersonal NSSI and to control participants. There were no significant differences between the three groups on facial emotion recognition abilities. A one-way MANOVA indicated that both groups of self-injuring participants had poorer social problem-solving abilities compared to control participants. It may be that individuals with NSSI utilize self-injury as a coping mechanism to the detriment of more effective social problem-solving strategies.

    Committee: Nicholas Salsman Ph.D. (Advisor) Subjects: Clinical Psychology; Mental Health; Psychology
  • 9. Kuebel, Laura Effectiveness of a Social Skills Curriculum on Preschool Prosocial Behavior and Emotion Recognition

    Specialist in Education (Ed.S.), University of Dayton, 2017, School Psychology

    Preschool children in public school programs are expelled at three times the rate of their K-12 peers. Research demonstrates a decreased emphasis on social-emotional skill development in preschool, despite high incidences of problem behaviors. The present study investigated the effectiveness of a commercially available social skills curriculum on preschoolers' social-emotional development, specifically their prosocial behaviors and emotion recognition. Students who participated in the social skills curriculum showed gains in prosocial skills and in the ability to visually recognize emotions in others, although these gains were not statistically significant. Anecdotal reports from participating teachers nevertheless indicated that the intervention was highly beneficial to participating students. Further, the curriculum had a high level of treatment acceptability among participants' teachers. Implications regarding social-emotional curricula and preschool students' prosocial skill and emotion recognition development are provided.

    Committee: Elana Bernstein (Committee Chair); Susan Davies (Committee Member); Joni Baldwin (Committee Member) Subjects: Curricula; Preschool Education
  • 10. Serrano, Verenea Exploring Social Information Processing of Emotion Content and its Relationship with Social Outcomes in Children at-risk for Attention-Deficit/Hyperactivity Disorder

    Doctor of Philosophy (PhD), Ohio University, 2017, Clinical Psychology (Arts and Sciences)

    Children with Attention-Deficit/Hyperactivity Disorder (ADHD) often experience social and emotional impairments. However, there has been limited success in reducing these impairments and increasing social status through behavioral and pharmacological interventions. A fruitful avenue may be identifying the atypical social and emotional information processes that contribute to the impairments and subsequently developing interventions that target impaired, yet malleable processes. Using social information processing (SIP) theory as a guide, indicators of social and emotional processing and the relationship between these processes and social outcomes were examined. Specifically, cue encoding, cue interpretation, and latency to emotion recognition in children with or at risk for ADHD and children without ADHD were investigated. Participants were 72 children (aged 8-14; 59.7% male; 61.1% Non-Hispanic White), 24 in the ADHD group and 48 in the control group. The SIP tasks included cue encoding, measured via emotion recognition during a face morphing task, and cue encoding and interpretation during a television episode. Significant differences in performance between the ADHD and control groups were not found on any of the SIP tasks. Further, performance on SIP tasks was related to measures of social skill, but not to measures of social impairment. Implications for future research with children with ADHD are discussed.

    Committee: Julie Owens PhD (Advisor); Steven Evans PhD (Committee Member); Kimberly Rios PhD (Committee Member); Dianne Gut PhD (Committee Member); Amori Mikami PhD (Committee Member) Subjects: Clinical Psychology
  • 11. Walker, Bethany Evaluating the Effectiveness of a Combined Emotion Recognition and Emotion Regulation Intervention for Preschool Children with Autism Spectrum Disorder

    Doctor of Philosophy, Miami University, 2017, Psychology

    Children with Autism Spectrum Disorder (ASD) often have difficulties in emotion recognition and emotion regulation, and these deficits have been implicated in the high rates of anxiety and behavior disorders in this population. Although an early intervention approach is warranted in order to achieve optimal emotion regulation development, few studies have evaluated the effectiveness of emotion regulation interventions with young children with ASD. A combined emotion recognition and emotion regulation intervention was implemented and evaluated using a multiple baseline design in a pilot study with two preschool children with ASD and a main study with three preschool children with ASD. Children participated in discrete trial training focused on identifying happiness, sadness, anger, and fear from three types of isolated cues: facial expressions, situational contexts, and tone of voice. They were successful in identifying emotions from facial expressions and situational context, but not tone of voice. In addition, children learned to use three adaptive emotion regulation strategies (squeezing a stress ball, blowing a pinwheel, and using a handheld fan) via video modeling. Adults also provided calm down coaching, which aimed to help children use these strategies during emotional episodes. Children showed increases in their adaptive emotion regulation behavior when calm down coaching was provided compared to when it was not provided. These findings support the use of an early intervention approach to facilitate the development of emotion recognition and emotion regulation in young children with ASD.

    Committee: Vaishali Raval PhD (Advisor); Jennifer Green PhD (Committee Member); Julie Rubin PhD (Committee Member); Amity Noltemeyer PhD (Committee Member); Stephanie Weber PsyD (Committee Member) Subjects: Clinical Psychology
  • 12. Mehling, Margaret Differential Impact of Drama-Based versus Traditional Social Skills Intervention on the Brain-Basis and Behavioral Expression of Social Communication Skills in Children with Autism Spectrum Disorder

    Doctor of Philosophy, The Ohio State University, 2017, Psychology

    This study examines the differential impact of a traditional social skills curriculum, SkillStreaming, and of a novel, drama-based social skills intervention, the Hunter Heartbeat Method (HHM), on the core social skills deficits associated with autism spectrum disorder (ASD) and the brain basis of these deficits. Forty children aged 8-14 years with ASD were recruited to participate in a 12-week social skills intervention. Participants were randomly assigned to receive the drama-based or the traditional social skills intervention once weekly. Previous research on SkillStreaming and HHM in children with ASD had reported improvement in behavioral measures of social functioning from pre- to post-intervention. Despite evidence of treatment response for both interventions, clear differences between these two types of social skills intervention exist. SkillStreaming is a highly structured, curriculum-based intervention in which specific social skills, such as making a friend or having a conversation, are explicitly taught via didactic instruction, modeling, and rehearsal. Conversely, in the HHM, although core elements of modeling and repeated practice with feedback are used, no “skills” are taught; rather, children learn drama games that implicitly target core deficits associated with ASD (e.g., eye contact, facial emotion recognition, integration of speech and gesture). Research on drama-based interventions for children with ASD is an emerging literature; previous research has indicated that this treatment is well liked by children and, like traditional treatments, is associated with measurable improvement from pre- to post-intervention. Little is known, however, about how differences in treatment modality (didactic versus experiential) and skills taught (higher-level versus foundational) impact skill acquisition and generalization. It is possible that these key differences impact the neurological substrate of social learning, which may have downstream consequences for skill (open full item for complete abstract)

    Committee: Marc Tassé PhD (Advisor); Zhong-Lin Lu PhD (Committee Member); Luc Lecavalier PhD (Committee Member) Subjects: Psychology
  • 13. Sedall, Stephanie Aging and Emotion Recognition: An Examination of Stimulus and Attentional Mechanisms

    Master of Arts in Psychology, Cleveland State University, 2016, College of Sciences and Health Professions

    Emotion recognition is essential for interpersonal communication. However, previous research has suggested that older adults are not as accurate as younger adults in recognizing certain emotions, particularly negative facial expressions of anger, fear, and sadness. Including additional contextual information (e.g., manipulation of certain facial features) might help us better understand these age differences. The present study investigated how potential age differences in emotion recognition are influenced by stimulus factors (target eye gaze direction) as well as facial viewing patterns, cognitive functioning, and physiological processes. A sample of younger and older adults viewed static facial expressions depicting anger, fear, sadness, happiness, and disgust while their eyes were tracked. The eye tracking analyses focused on the proportion of time fixated on the eye versus mouth regions of the face, on account of previous research suggesting that certain expressions are best discriminated through the eye region (i.e., anger, fear, and sadness) or the mouth region (i.e., happiness and disgust). Overall, participants were more adept at recognizing happy expressions relative to all the negative expression categories, with anger being the least recognized. Surprisingly, older adults had higher recognition accuracy for fear faces with an averted gaze. In terms of fixation patterns, a significantly greater fixation preference for eye relative to mouth regions was observed for sad, angry, and fearful expressions relative to happy and disgusted ones, but this was mainly driven by the younger adults. However, fixation patterns were not predictive of age effects regarding averted fear, or of the age equivalence observed with the other facial categories. These effects could also not be easily accounted for by age differences in cognitive and physiological metrics. Rather, certain components of the task design (i.e., stimulus and response timing) might have impacted the (open full item for complete abstract)

    Committee: Eric Allard PhD (Advisor); Ilya Yaroslavsky PhD (Committee Member); Connor McLennan PhD (Committee Member); Kenneth Vail PhD (Committee Member) Subjects: Aging; Experimental Psychology
  • 14. Hughes-Scalise, Abigail Exploring the Roles of Adolescent Emotion Regulation, Recognition, and Socialization in Severe Illness: A Comparison Between Anorexia Nervosa and Chronic Pain

    Doctor of Philosophy, Case Western Reserve University, 2014, Psychology

    Haynos and Fruzetti (2011) provide a transactional model for conceptualizing anorexia nervosa (AN) in which the combination of an invalidating environment and individual emotional vulnerability increases the likelihood of an individual developing pervasive emotion dysregulation, and subsequent use of eating disordered behaviors to regulate emotion. The aim of the current study was to provide initial support for this model in adolescent AN, through examining relationships between specific emotion regulation deficits (e.g., maladaptive attentional deployment and poor emotion recognition), invalidating parental response to adolescent emotion, and adolescent AN status. Adolescents with chronic pain were used as a comparison group, as both conditions are chronic and can significantly impair psychosocial functioning, but have relatively non-overlapping primary symptom profiles. Fifty adolescent girls (25 with AN and 25 with chronic pain) between the ages of 11 and 17 completed a dot-probe attention bias task, the Reading the Mind in the Eyes task (a measure of emotion recognition), and several self-report measures on psychopathology symptoms and perceived difficulties with emotion regulation. Both parents and adolescents filled out the Emotions as a Child Scale to assess parental response to adolescent emotion. Results showed mixed evidence for increased self-reported difficulties with emotion regulation in adolescents with AN compared to adolescents with chronic pain. In regard to maladaptive parental reaction to emotion, adolescents with AN endorsed more maternal neglect of fear, punishment of anger, and magnification of anger. Finally, adolescent attentional bias and emotion recognition served to moderate the relationship between parental response to sadness and adolescent AN status: for adolescents with high attention bias towards angry faces, as well as for adolescents with superior emotion recognition abilities, maladaptive parental response to sadness predicted inc (open full item for complete abstract)

    Committee: Arin Connell PhD (Advisor); Nora Feeny PhD (Committee Member); Sandra Russ PhD (Committee Member); Carolyn Landis PhD (Committee Member) Subjects: Clinical Psychology; Families and Family Life; Individual and Family Studies; Psychology
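
    The dot-probe task in entry 14 above yields an attention bias score computed from reaction times. A minimal sketch of the conventional index; the RT values are invented, and the study's exact scoring may differ:

        import numpy as np

        def attention_bias(rt_probe_at_neutral_ms, rt_probe_at_emotional_ms):
            # Mean RT when the probe replaces the neutral face minus mean RT
            # when it replaces the emotional (e.g., angry) face; positive
            # values indicate vigilance toward the emotional face.
            return float(np.mean(rt_probe_at_neutral_ms)
                         - np.mean(rt_probe_at_emotional_ms))

        print(attention_bias([540, 525, 560], [505, 498, 520]))
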
  • 15. Merchak, Rachel Recognition of Facial Expressions of Emotion: The Effects of Anxiety, Depression, and Fear of Negative Evaluation

    Bachelor of Science, Wittenberg University, 2013, Psychology

    Anxiety is a debilitating disorder that can cause social dysfunction in those who suffer from it. This research focuses on how anxiety is associated with recognition of emotion in faces, as that may be a contributing factor to the social difficulties of those suffering from anxiety, both general and social. However, depression and fear of negative evaluation may also be associated with difficulty in recognizing emotions. In this study, 48 college students were presented with 60 facial expressions of emotion for either 500 ms or 2 s and asked to identify the emotion portrayed by choosing from a list of six possible choices: anger, disgust, fear, happiness, neutral, and sadness. Participants then completed measures of depressive and anxious (general and social) symptoms and fear of negative evaluation. Partial correlations were used to analyze the data. When depression and sex were controlled for, higher fear of negative evaluation and higher social anxiety scores were correlated with better accuracy in identifying happy facial expressions. Additionally, higher general anxiety scores were marginally correlated with lower accuracy in identifying facial expressions of disgust. The correlations between general and social anxiety and recognition of expressions of disgust and happiness approached marginal significance or were marginally significant, respectively, when depression, fear of negative evaluation, and sex were controlled.

    Committee: Stephanie Little Dr. (Advisor); Michael Anes Dr. (Committee Member); Jeff Ankrom Dr. (Committee Member) Subjects: Psychology
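
    The partial correlations in entry 15 above (e.g., anxiety versus recognition accuracy, controlling for depression and sex) can be computed as the Pearson correlation of regression residuals. A minimal sketch with illustrative variable names:

        import numpy as np

        def partial_corr(x, y, covariates):
            # Regress x and y on the covariates (with an intercept),
            # then correlate the residuals.
            Z = np.column_stack([np.ones(len(x)), covariates])
            rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
            ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
            return float(np.corrcoef(rx, ry)[0, 1])
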
  • 16. Serrano, Verenea The Relationship Between Visual Attention and Emotion Knowledge in Children with Attention-Deficit Hyperactivity Disorder

    Master of Science (MS), Ohio University, 2014, Clinical Psychology (Arts and Sciences)

    In the current study, eye-tracking technology was used in conjunction with emotion knowledge (EK) tasks to examine the relationship between visual attention and EK accuracy in children with and without ADHD. Participants were 45 children (60% male) between the ages of 8 and 12; 19 of whom met DSM-IV criteria for Attention-Deficit/Hyperactivity Disorder (ADHD) and 26 of whom did not. EK was assessed via performance on emotion recognition tasks using images of facial expressions and images of situations where the child was required to infer emotion from the context. Visual attention was measured via an eye-tracking system that recorded visual fixations while the children viewed the images. Contrary to the hypotheses, there were no significant differences between groups on EK accuracy or visual attention across the two image sets. However, small to medium effect sizes were observed (Cohen's d = -0.73 to 0.35), suggesting that, in some cases, children with ADHD are less accurate in identifying emotions and spend less time viewing relevant areas of images compared to children without ADHD. Regression analyses were conducted to examine whether parent and teacher ratings of inattention or visual attention better predict EK and social competence. These measures of inattention/attention did not predict EK accuracy; however, teacher-rated inattention predicted teacher-rated social competence. Additional research examining EK and visual attention across stimuli types and settings is needed to help understand the relationship between these constructs in children with ADHD. Implications of the findings are discussed in the context of previous research and the current study's sample characteristics.

    Committee: Julie Owens Ph.D. (Advisor) Subjects: Clinical Psychology; Psychology
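
    The effect sizes in entry 16 above are Cohen's d values for two independent groups. A minimal sketch of the pooled-standard-deviation formula, which is one standard way to compute them (the study's exact variant is not stated):

        import numpy as np

        def cohens_d(group1, group2):
            # (mean1 - mean2) / pooled standard deviation.
            g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
            n1, n2 = len(g1), len(g2)
            pooled_var = ((n1 - 1) * g1.var(ddof=1)
                          + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
            return float((g1.mean() - g2.mean()) / np.sqrt(pooled_var))
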
  • 17. Getz, Glen FACIAL AFFECT RECOGNITION DEFICITS IN BIPOLAR DISORDER

    MA, University of Cincinnati, 2001, Arts and Sciences : Psychology

    Patients diagnosed with bipolar disorder (BPD), by definition, have problems with emotional regulation. However, it remains uncertain whether these patients are also deficient at processing other people's emotions, particularly while in the manic state. The present study examined the ability of 25 manic patients and 25 healthy participants on tasks of facial recognition and facial affect recognition at three different presentation durations: 500 ms, 750 ms, and 1000 ms. The groups did not differ in terms of age, education, sex, race, or estimated IQ. In terms of facial recognition, the groups did not differ significantly on either a novel computerized facial recognition task or the Benton Facial Recognition task. In contrast, the BPD group performed significantly more poorly than the comparison group on a novel facial affect discrimination task and a novel facial affect labeling task at the 500 ms presentation duration. Facial affect processing was not impaired at longer presentation durations. Further, the patient group was slower on all three computerized tasks. This study indicates that patients with BPD may need more time to examine facial affect, but exhibit a normal ability to recognize faces.

    Committee: Stephen Strakowski (Advisor) Subjects: Psychology, General
  • 18. Linardatos, Eftihia FACIAL EMOTION RECOGNITION IN GENERALIZED ANXIETY DISORDER AND DEPRESSION: ASSESSING FOR UNIQUE AND COMMON RESPONSES TO EMOTIONS AND NEUTRALITY

    PHD, Kent State University, 2011, College of Arts and Sciences / Department of Psychological Sciences

    Facial emotion recognition has a central role in human communication. Generalized anxiety disorder (GAD) and major depressive disorder (MDD) have been associated with deficits in social and interpersonal functioning, raising the question as to whether these conditions are also associated with deficits in facial emotion recognition. In addition to being associated with interpersonal difficulties, GAD and MDD overlap substantially at the genotypic and phenotypic level. However, these mental health conditions differ at the cognitive level in that GAD is associated with thoughts revolving around threatening information, whereas thoughts in depression are related to loss, failure, and sadness. These unique cognitive mechanisms may also play a role in the process of facial emotion recognition, resulting in differential patterns of responses to facial expressions of emotions for GAD and depression. Although facial emotion recognition has been investigated in MDD, no studies to date have examined this process in GAD. The goals of the present study were threefold: 1) examine the overall accuracy of facial emotion recognition as well as that for specific emotions in GAD, MDD, and comorbid MDD+GAD; 2) examine misattributions in facial expression recognition in response to anger, sadness, and neutral expressions in GAD, MDD, and comorbid MDD+GAD; and 3) investigate the relationship of facial emotion recognition and interpersonal functioning in the context of GAD and MDD. A sample of 90 participants with GAD, MDD, comorbid MDD+GAD, and healthy controls completed a facial emotion recognition task and a battery of self-report measures. The findings did not support a general or specific deficit in facial emotion recognition in MDD and GAD. Further, individuals with MDD and GAD did not differ in their responses to neutral facial expressions or to other basic emotions. The findings are discussed in the context of future clinical and research directions.

    Committee: David Fresco PhD (Advisor); John Gunstad PhD (Committee Member); John Updegraff PhD (Committee Member); David Hussey PhD (Committee Member); William Kalkhoff PhD (Committee Member)
  • 19. St-Hilaire, Annie Are paranoid schizophrenia patients really more accurate than other people at recognizing spontaneous expressions of negative emotion? A study of the putative association between emotion recognition and thinking errors in paranoia

    PHD, Kent State University, 2008, College of Arts and Sciences / Department of Psychological Sciences

    Impairments in facial affect recognition have been linked to schizophrenia. Recent studies suggest that the degree of impairment varies as a function of clinical subtype. Paranoid patients have been found to be more accurate than nonparanoid patients at recognizing posed and spontaneous facial expressions of negative emotion, and more accurate even than nonpsychiatric controls at identifying spontaneous expressions of negative emotions. This is noteworthy given that spontaneous expressions are generally less intense and more ambiguous than posed expressions of emotion. No studies, however, have attempted to explicate this finding. The main objectives of the present investigation, therefore, were to replicate this finding and test the hypothesis that cognitive biases associated with psychosis cause paranoid patients to interpret ambiguous expressions of emotion as more negative than others. To do so, 24 paranoid schizophrenia patients, 26 nonparanoid schizophrenia patients, and 29 control participants completed an emotion recognition task as well as measures of attentional and referential biases consisting of a probabilistic reasoning task, an attribution style questionnaire, a theory of mind task, and an emotional Stroop task. Contrary to expectations, impairments in the recognition of posed and spontaneous expressions of emotion were found in both the paranoid and nonparanoid schizophrenia groups. Furthermore, although paranoid patients' performance on some of the cognitive measures was suggestive of biased processing of ambiguous information, thinking errors did not predict accurate recognition of spontaneous expressions of negative emotions. IQ was the only significant predictor of performance on the recognition of spontaneous expressions of negative emotion. Results therefore suggest that, regardless of subtype, stable schizophrenia outpatients have more difficulty recognizing facial expressions of negative emotion in others than nonpsychiatric controls. Emotion (open full item for complete abstract)

    Committee: Nancy Docherty PhD (Committee Chair); Deborah Barnbaum PhD (Committee Member); Steven Brown PhD (Committee Member); William Merriman PhD (Committee Member); John Updegraff PhD (Committee Member) Subjects: Psychology
  • 20. Aspiras, Theus Emotion Recognition using Spatiotemporal Analysis of Electroencephalographic Signals

    Master of Science (M.S.), University of Dayton, 2012, Electrical Engineering

    Emotion recognition using electroencephalographic (EEG) recordings is a new area of research which focuses on recognition of emotional states of mind rather than impulsive responses. EEG recordings are useful for detecting emotions by monitoring the characteristic spatiotemporal variations of activations inside the brain. To distinguish between different emotions using EEG data, we need specific spectral descriptors as features to quantify these spatiotemporal variations. We propose several new features, namely Normalized Root Mean Square (NRMS), Absolute Logarithm Normalized Root Mean Square (ALRMS), Logarithmic Power (LP), Normalized Logarithmic Power (NLP), and Absolute Logarithm Normalized Logarithmic Power (ALNLP), for the classification of emotions. A protocol has been established to elicit five distinct emotions (joy, sadness, disgust, fear, and surprise) plus a neutral state. EEG signals are collected using a 256-channel system, preprocessed using band-pass filters and a Laplacian montage, and decomposed into five frequency bands using the Discrete Wavelet Transform. The decomposed signals are transformed into the different spectral descriptors and classified using a two-layer Multilayer Perceptron (MLP) neural network. The Logarithmic Power descriptor produces the highest recognition rates, 91.82% and 94.27% in two different experiments, more than 2% higher than the other features.

    Committee: Vijayan Asari PhD (Committee Chair); Tarek Taha PhD (Committee Member); Eric Balster PhD (Committee Member) Subjects: Computer Engineering; Electrical Engineering; Engineering; Neurosciences; Psychology
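
    The Logarithmic Power descriptor in entry 20 above is built on a wavelet decomposition of each EEG channel. A minimal sketch using PyWavelets; the db4 wavelet and the exact log-power formula here are assumptions, not necessarily the thesis's choices:

        import numpy as np
        import pywt

        def log_band_power(signal, wavelet="db4", levels=5):
            # Decompose one channel into an approximation band plus
            # `levels` detail bands, then take the log of each band's power.
            coeffs = pywt.wavedec(signal, wavelet, level=levels)
            return [float(np.log(np.sum(c ** 2) + 1e-12)) for c in coeffs]

        eeg = np.random.randn(2048)   # stand-in for one preprocessed channel
        features = log_band_power(eeg)  # input vector for the MLP classifier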