Search Results

(Total results 14)

  • 1. Cox, Bethany Spoken Word Recognition as a Function of Musicianship and Age

    Master of Arts in Psychology, Cleveland State University, 2024, College of Liberal Arts and Social Sciences

    There is a balance of gains and losses across the lifespan. One example of a gain is vocabulary, while an example of a loss is one's reduced ability to understand speech in the presence of background noise. Investigations into the spoken word recognition environment can shed light on the differences in cognitive and auditory processing that occur throughout the lifespan. One component of the spoken word recognition environment is the listener. Patel's OPERA hypothesis (2011) suggests that the benefits of musical training on listeners' neural encoding of speech are driven by adaptive plasticity in speech-processing networks. In the current research study, I investigated relationships between age, musicianship, and spoken word recognition. Participants heard a male talker say either a word or a nonword and responded by pressing a designated key corresponding to word or nonword on their keyboard. Participants then completed the Goldsmiths Musical Sophistication Index, which was used to categorize each participant as either a musician or a nonmusician. Two repeated measures ANCOVAs were used to analyze the data. The covariate was years of musical experience. The results indicate that both younger and older adult musicians had more efficient responses (more accurate and faster) than nonmusicians. Additionally, participants were more efficient at responding to the easy words compared to the hard words. Interestingly, older adults had significantly more accurate responses than younger adults. The current study furthers our understanding of the connections between musicianship and spoken word recognition in younger and older adults.

    Committee: Conor McLennan (Advisor); Eric Allard (Committee Member); Katherine Judge (Committee Member); Philip Allen (Committee Member) Subjects: Aging; Cognitive Psychology; Music; Psychology
  • 2. Cox, Bethany The Effects of Musical Instrument Gender on Spoken Word Recognition

    Master of Arts in Psychology, Cleveland State University, 2021, College of Sciences and Health Professions

    One area of auditory processing research involves investigations into how spoken words are processed. When hearing spoken words, in addition to the word itself, listeners process other information, such as the gender of the talker. Both spoken words and other sounds humans encounter in their environment are managed in the auditory system. A common example of an environmental sound is music. Prior research has demonstrated that participants associate musical instruments with genders. In the current research study, I examined the effects that musical instrument gender and talker gender have on spoken word recognition. Female participants heard a moment of silence or a male, female, or neutral instrument play a song clip, followed by either a male or female talker saying a word or nonword. The participant responded by indicating whether the talker said a word or a nonword by pressing the appropriate button on the keyboard. Two repeated measures ANOVAs were used to analyze the data, one on accuracy and one on reaction time to correct responses. A main effect of condition was found in the reaction time analysis, with the silent condition producing the fastest responses. In addition, participants' responses were faster following the female instruments than the male and neutral instruments. In the future, researchers could compare these results utilizing a male sample. The current research aids in the understanding of how humans process auditory stimuli and contributes to a body of research revealing connections between environmental sound processing, including (perhaps especially) music, and spoken word recognition.

    Committee: Conor McLennan (Committee Chair); Eric Allard (Committee Member); Andrew Slifkin (Committee Member); Olivia Pethtel (Committee Member) Subjects: Cognitive Psychology
  • 3. Farrell, Megan Examining the electrophysiology of long-term priming: Repetition and talker specificity effects on spoken word recognition

    Master of Arts in Psychology, Cleveland State University, 2020, College of Sciences and Health Professions

    Our knowledge of how spoken words are represented in the brain is currently limited. In this study, we aimed to probe the representation of spoken words to determine if details related to an episode of exposure to a spoken word are included in those representations. We hypothesized that episodic details of a spoken word are included in mental representations of spoken words, but that these details are not accessed until a relatively late stage of processing. Participants were presented with disyllabic high and low frequency real words in American English, as well as nonwords. Participants were initially exposed to stimuli in block 1, completed a distractor math test, and then were re-exposed to the same stimuli in block 2 (long-term repetition priming), in the context of a lexical decision task. Half of the re-exposed stimuli were spoken by the same talkers in blocks 1 and 2, while half were spoken by different talkers. Block 2 also included a control condition with new, unprimed words. Reaction times, accuracy, and event-related potentials were measured during block 2. The results were as follows: There was no evidence for repetition effects (advantages for words repeated by the same talker, compared to unprimed words) or talker effects (advantages for words repeated by the same talker compared to words repeated by different talkers) in accuracy for either high or low frequency words. Significant repetition effects in RT were found for both high and low frequency words, such that participants were quicker to respond to words repeated by the same talker compared to unprimed words. A trend toward talker effects was observed in high frequency words, but not in low frequency words, as we had predicted.

    Committee: Robert Hurley (Committee Chair); Conor McLennan (Committee Member); Eric Allard (Committee Member); Andrew Slifkin (Committee Member) Subjects: Psychology
  • 4. Bell, Erin Investigating the Electrophysiology of Long-Term Priming in Spoken Word Recognition

    Master of Arts in Psychology, Cleveland State University, 2018, College of Sciences and Health Professions

    When participants are listening to the same words spoken by different talkers, two types of priming are possible: repetition priming and talker-specific priming. Repetition priming refers to the exposure of a stimulus improving responses to a subsequent exposure. Talker-specific priming refers to the exposure of words spoken by same talkers improving responses relative to those same words spoken by different talkers. There are conflicting theories regarding whether talker-specific priming should be observed. Abstract representational theories suggest that episodic details (e.g., talker identity) are not stored in the mental lexicon, while episodic theories of the lexicon posit that lexical representations include episodic details. According to the time-course hypothesis, the mental lexicon includes both types of representations, and abstract representations are accessed earlier than episodic representations. In the present experiment, long-term priming in spoken word recognition was tested using a technique that is particularly well-suited for answering questions about timing: event-related potentials (ERPs). Participants heard words spoken by two different talkers in each of two separate blocks. Stimuli in the second block consisted of three different priming conditions, which are described in relation to what participants heard in the first block: new, unprimed, words (control), repeated words spoken by the same talker (match), and repeated words spoken by different talkers (mismatch). Evidence for long-term repetition priming was obtained in reaction times and accuracy. Electrophysiological evidence of repetition priming was obtained in low frequency words. Talker-specific priming effects were observed in accuracy, with more accurate responses in the match condition than in the mismatch condition, consistent with episodic representational theories. 
However, there was no evidence of talker-specific priming in the ERP data, which, when considered alone, (open full item for complete abstract)

    Committee: Robert Hurley (Advisor); Conor McLennan (Committee Member); Ilya Yaroslavsky (Committee Member) Subjects: Cognitive Psychology; Experimental Psychology; Psychology
  • 5. Zhang, Yu Processing Speaker Variability in Spoken Word Recognition: Evidence from Mandarin Chinese

    Doctor of Philosophy (PhD), Ohio University, 2017, Speech-Language Science (Health Sciences and Professions)

    Processing speech involves recognition of words in the face of acoustic variability. This dissertation examines the role of processing speaker variability in Mandarin word recognition. Tone language users contrast meaning using fundamental frequency patterns. Given the role of fundamental frequency in both voice and words, this dissertation addresses how speaker variability is resolved by tone language users. Two short-term priming experiments were designed to investigate the effect of speaker variability on processing phonological form and meaning in Mandarin word recognition. Prime-target pairs that are identical (e.g., 画面image–画面image) or semantically related (e.g., 声音sound–画面image) were presented to 48 native listeners in a lexical decision task. It was predicted that the magnitude of priming would be reduced when different speakers produced prime and target pairs. Results from reaction time analyses showed that speaker variability between prime and target reduced the magnitude of phonological priming. The magnitude of semantic priming was unaffected by speaker variability. These results suggest that speaker voice information is processed at a relatively shallow level in Mandarin word recognition. Speaker variability may affect phonological form processing instead of the semantic network in tone languages. These results were comparable to English data, where similar effects of speaker variability were observed.

    Committee: Chao-Yang Lee (Advisor) Subjects: Acoustics; Cognitive Psychology; Health Sciences; Linguistics
  • 6. Wiener, Seth The Representation, Organization and Access of Lexical Tone by Native and Non-Native Mandarin Speakers

    Doctor of Philosophy, The Ohio State University, 2015, East Asian Languages and Literatures

    This dissertation explores how lexical tone in Mandarin Chinese is learned and used during spoken word recognition by native and non-native speakers. This research begins with the hypothesis that lexical tone is not just one of an arbitrary set of unpredictable speech cues, but rather, like any other component of language, its frequency of occurrence and the predictability with which it co-occurs with other aspects of spoken language can be tracked and stored as statistical knowledge. This hypothesis challenges established theories of word recognition by predicting a dynamic contribution of tone; speakers learn to listen for and store tone information on the basis of the frequency and probability of syllables and tones co-occurring over time in speech. To explore the statistical learning of lexical tone and how speaker variability influences this learning, an artificial tonal language was tested over the course of four days. In this four-day training and testing paradigm, three groups of participants — 40 native Mandarin speakers, 40 native English speakers learning Mandarin as a second language, and 40 monolingual native English speakers — learned 130 CV+tone nonce words, each paired with a black and white nonce symbol. Unique CV syllables were each combined with four different tonal contours (directly comparable to those in Mandarin). CV syllable frequency (high/low) was crossed with syllable-specific tonal probability (a tone contour was most probable or least probable to occur with a specific CV) to produce four conditions. Participants were trained and tested with either a single voice or with four voices. Each day, participants' word identification was assessed using a naming task and a word recognition task, which recorded mouse clicks and real-time eye movement responses to speech stimuli. All participants showed rapid, daily improvements across the four sessions. 
Results from the naming task indicate that all participants, regardless (open full item for complete abstract)

    Committee: Marjorie Chan (Advisor); Shari Speer (Advisor); Mineharu Nakayama (Committee Member); Kiwako Ito (Committee Member); Chao-Yang Lee (Committee Member) Subjects: Asian Studies; Atoms and Subatomic Particles; Experimental Psychology; Foreign Language; Language; Linguistics; Modern Language
  • 7. Weatherholtz, Kodi Perceptual learning of systemic cross-category vowel variation

    Doctor of Philosophy, The Ohio State University, 2015, Linguistics

    Phonological processes such as vowel chain shifting result in complex systems of cross-category vowel variation across spoken varieties of a language (Labov, 1994). The experiments comprising this dissertation aimed to understand how listeners cope with such systemic pronunciation variation to recognize spoken words.

    Committee: Cynthia Clopper (Advisor); Shari Speer (Committee Member); Mark Pitt (Committee Member) Subjects: Linguistics
  • 8. Shin, Jeonghwa Prosodic Effects on Spoken Word Recognition in Second Language: Processing of Lexical Stress by Korean-speaking Learners of English

    Doctor of Philosophy, The Ohio State University, 2012, Linguistics

    Prosody is known to influence how listeners interpret the sequence of sounds, syllables, and higher order organizational units and thus how lexical access proceeds during spoken word recognition. The present study explores how language-specific variation in prosodic structure affects L2 learners' processing of prosodic categories during spoken word recognition. Specifically, the study examines how Korean-speaking learners of English use English lexical stress during spoken word recognition in two eyetracking experiments and a gating experiment.

    Committee: Shari Speer PhD (Committee Chair); Mary Beckman PhD (Committee Member); Cynthia Clopper PhD (Committee Member) Subjects: Linguistics
  • 9. Szostak, Christine Identifying the *eel on the Table: An Examination of Processes that Aid Spoken Word Ambiguity Resolution

    Master of Arts, The Ohio State University, 2009, Psychology

    Because ambiguous words frequently occur in running speech, the perceptual system must somehow resolve the ambiguities. If disambiguating context is present, it can aid resolution. When the disambiguation follows the ambiguity, the perceptual system delays resolution. Four experiments investigated this delay. In Experiment 1, participants heard sentences containing a target word with a phoneme replaced by or intermixed with noise, followed by disambiguating context close to or far from the target word (e.g., "The *ing had feathers." or "The *ing had an exquisite set of feathers."). In Experiments 2a and 2b, the duration between target word offset and disambiguation varied while syllable number remained constant. In Experiment 3, syllable number within this region varied while duration remained constant. Increasing syllable number caused early commitment. When the duration was also increased, early commitment was not observed. The findings suggest that sufficient processing time is necessary for the perceptual system to delay ambiguity resolution.

    Committee: Mark Pitt PhD (Advisor); Simon Dennis PhD (Committee Member); Eric Healy PhD (Committee Member) Subjects: Acoustics; Behaviorial Sciences; Language; Linguistics; Psychology
  • 10. Tracy, Erik Phonological mismatches: how does the position and degree of the mismatch affect spoken word recognition?

    Doctor of Philosophy, The Ohio State University, 2006, Psychology

    The word recognition system is a remarkably robust system. Given this robustness, how tolerant is the system of noise within the speech signal, such as phonological mismatches? A phonological mismatch occurs when a phoneme is substituted in a word to create a nonsense word. For example, "bemocrat" differs from "democrat" in terms of the initial phoneme. Phonological mismatches vary along two dimensions: position (initial or medial) and distance. With regard to distance, a phoneme can be altered by either one distinctive feature (near change), or two or three distinctive features (far change). To investigate the issue of tolerance, simulations were first performed on TRACE, an influential model of word recognition, because it can generate concrete, testable predictions. The simulations demonstrated that the model is more tolerant of medial rather than initial mismatches, but the results were mixed concerning the distance of the mismatch. Next, three experiments were conducted. The first experiment, which utilized the phoneme monitoring paradigm, produced suspect results. The second experiment, which utilized the form priming paradigm, revealed that the recognition system is more tolerant of medial rather than initial mismatches, which confirms the results of the TRACE simulations. As with the simulations, the experimental results were mixed with regard to the distance of the mismatch.

    Committee: Mark Pitt (Advisor) Subjects: Cognitive Psychology
  • 11. Newell, Jessica Examining Whether Social Factors Affect Listeners' Sensitivity to Talker-Specific Information During Their Online Perception of Spoken Words

    Master of Arts in Psychology, Cleveland State University, 2011, College of Sciences and Health Professions

    McLennan and Luce (2005) found no significant cost associated with changing which talker produced a particular word from the first block of trials to the second (no talker effects) when participants responded relatively quickly (easy lexical decision), and found that talker effects emerged when participants responded relatively slowly (hard lexical decision). In a lexical decision task, participants hear words and nonwords, and reaction times to correct responses are measured. In the current study, we examined whether social factors would lead to talker effects in an easy lexical decision task. In Experiment 1, participants were told that they had a chance to be part of a desirable, high-achieving group if they performed with high accuracy. Based on previous time-course findings, we predicted that talker effects would emerge in the current experiment, given that participants' attention to accuracy was expected to slow processing. Contrary to this prediction, participants sped up. We successfully demonstrated that group belonging is a sufficiently strong prime to alter the way participants perform in this task. In Experiment 2, participants (all males) were told that they would have the opportunity to meet, at the end of the experiment, the two talkers (one male and one female) they would hear during the experiment. Moreover, participants were given some (fabricated) background information about the talkers, including mention that the female was attractive and the male was unattractive. Based on previous findings in social psychology, we predicted that the male participants would attend more to the female's voice than to the male's voice. We demonstrated that the female serves as a more effective prime for words later spoken by both the same female talker, and also by the male talker. 
Examining the relationship between social factors and talker effects should lead to improved models of spoken word recognition, and provide important new insights into how listeners perceive spoken words in various social contexts (open full item for complete abstract)

    Committee: Conor McLennan PhD (Committee Chair); Ernest Park PhD (Committee Member); Naohide Yamamoto PhD (Committee Member) Subjects: Cognitive Psychology; Social Psychology
  • 12. Wilson, Maura Examining the effects of variation in emotional tone of voice on spoken word recognition

    Master of Arts in Psychology, Cleveland State University, 2011, College of Sciences and Health Professions

    Despite the importance of emotional tone of voice for optimal verbal communication, how emotional speech is processed and its effects on spoken word recognition have yet to be fully understood. The current study addressed these gaps in the literature by examining the effects of intra-talker variability in emotional tone of voice on listeners' ability to recognize spoken words. Two lexical decision experiments, varying in task difficulty, were implemented to analyze participants' percent correct (PC) and reaction times (RTs). Previous research on spoken word recognition using this paradigm has found performance costs resulting from stimuli that mismatch on specific information (e.g., the identity of the talker) contained in the speech signal. Such specificity effects occurred only when processing was relatively slow, not when processing was relatively fast. In the current study, when processing was fast (Experiment 1), no specificity effects of emotional tone of voice emerged. When processing was slow (Experiment 2), specificity effects of emotional tone of voice emerged, but only for target words spoken in a sad emotional tone of voice and not for target words spoken in a frightened emotional tone of voice. RTs to sad target words mismatched in emotional tone of voice from prime to target blocks were longer than those that matched, but RTs to frightened target words were the same regardless of the emotional tone of voice of the word in the prime block. Separate analyses were conducted on the top and bottom performers on a Musical Listening Test (MLT). For those who scored in the top 25%, for sad target words only (not frightened), specificity effects of emotional tone of voice emerged. For those who scored in the bottom 75% on the MLT, no specificity effects emerged, regardless of emotional tone of voice. The results of the current study have important implications for theoretical models of spoken word recognition and emotional tone of voice.

    Committee: Conor T. McLennan PhD (Committee Chair); Katherine S. Judge PhD (Committee Member); Ernest S. Park PhD (Committee Member) Subjects: Cognitive Psychology; Experimental Psychology; Music; Psychology
  • 13. Tuft, Samantha The effects of talker variability and talkers' gender on the perception of spoken taboo words

    Master of Arts in Psychology, Cleveland State University, 2013, College of Sciences and Health Professions

    In the current experiment, I examined the effects of inter-talker variability and talkers' gender on listeners' perception of spoken taboo words. Previous spoken word recognition research using the long-term repetition-priming paradigm, in which listeners respond to two separate blocks of spoken words, found performance costs for stimuli mismatching in talker identity. That is, when words were repeated across the two blocks and the identity of the talker remained the same (e.g., male to male), reaction times (RTs) were faster relative to when the repeated words were spoken by two different talkers (e.g., male to female). Such performance costs, or talker effects, followed a time course, occurring only when processing was relatively slow. More recent research has found that explicit and implicit attention towards the talker led to talker effects (even during relatively fast processing). The purpose of the current study was to examine how word meaning could affect the pattern of talker effects. Participants completed an easy lexical decision task, and participants' mean accuracy rates and RTs were analyzed. I hypothesized that hearing taboo words would surprise the listeners and grab their attention, such that talker effects would be obtained even when processing is relatively fast. The results are consistent with the attention-based hypothesis that talker effects emerge when participants hear both spoken taboo and neutral words. However, talker effects emerged regardless of the talkers' gender. In addition, taboo words were responded to faster than neutral words, suggesting that spoken word recognition can be affected by word meaning. The results of the current study have important implications for theoretical models of spoken word recognition and how attention plays a role.

    Committee: Conor T. McLennan PhD (Committee Chair); Naohide Yamamoto PhD (Committee Member); Katherine S. Judge PhD (Committee Member) Subjects: Cognitive Psychology; Experimental Psychology; Psychology
  • 14. Szostak, Christine Individual Differences in Working Memory Capacity Influence Spoken Word Recognition

    Doctor of Philosophy, The Ohio State University, 2013, Psychology

    Prior work has shown that when speech is unclear, listeners show a greater dependence upon semantic than on acoustic information to aid word identification when distracting stimuli (e.g., other talkers) are present. The current project extended this work to explore whether individual differences in working memory capacity (WMC) would influence the likelihood that listeners will depend on the biasing information when distracted. In five experiments, participants heard sentences that contained an early target word with or without noise at its onset and a subsequent word that was semantically biased in favor of the target word or one of its lexical competitors (e.g., The wing had an exquisite set of feathers or The wing had an exquisite set of diamonds, where diamonds would be semantically associated with ring). The sentences were presented in the presence of distracters ranging in their degree of signal-similarity to that of the sentence (e.g., another speaker vs. an everyday nonspeech sound). Participants made target word identification and sentence sensibility judgments for each sentence they heard. The findings showed that those with lower WMC were more likely to depend upon biasing than on acoustic signal information, but only when the signal was masked by noise. In contrast, those with higher WMC showed less dependence upon the biasing information than those with lower WMC, even when the signal was masked by noise. Although performance across distracter similarity was not influenced by WMC, the likelihood of being able to anticipate what distraction would be heard was shown to influence performance as a function of WMC. A discussion of the role of WMC in spoken word recognition, especially during distraction, is provided and the potential mechanisms involved in this process are considered.

    Committee: Mark Pitt Ph.D. (Advisor); Per Sederberg Ph.D. (Committee Member); Simon Dennis Ph.D. (Committee Member); Eric Healy Ph.D. (Committee Member) Subjects: Acoustics; Behaviorial Sciences; Cognitive Psychology; Experimental Psychology; Language; Linguistics; Psychology