Files
Masood_Delfarah_Dissertation.pdf (5.56 MB)
Deep learning methods for speaker separation in reverberant conditions
Author Info
Delfarah, Masood
ORCID® Identifier
http://orcid.org/0000-0002-8354-0832
Permalink:
http://rave.ohiolink.edu/etdc/view?acc_num=osu1563225381590785
Year and Degree
2019, Doctor of Philosophy, Ohio State University, Computer Science and Engineering.
Abstract
Speech separation refers to the problem of separating target speech from acoustic interference such as background noise, room reverberation, and other speakers. An effective solution to this problem can improve the speech intelligibility of human listeners and the performance of speech processing systems. Speaker separation is a kind of speech separation in which the interfering source is also human speech. This dissertation addresses the speaker separation problem in reverberant environments, with the goal of increasing speech intelligibility for hearing-impaired and normal-hearing listeners in those conditions.

Speaker separation has traditionally been approached with model-based methods such as Gaussian mixture models (GMMs) and hidden Markov models (HMMs), which are unable to generalize to challenging cases with unseen speakers or nonstationary noise. We instead employ supervised learning, inspired by studies that introduced deep neural networks (DNNs) to speech-nonspeech separation. In this approach, training data are used to learn a mapping from noisy speech features to an ideal time-frequency (T-F) mask. We begin by investigating an extensive set of acoustic features extracted in adverse conditions, using DNNs as the learning machine and evaluating separation performance with standard objective speech intelligibility metrics. Performance is systematically evaluated with both nonspeech and speech interference across a variety of signal-to-noise ratios (SNRs), reverberation times, and direct-to-reverberant energy ratios. We construct feature combinations using a sequential floating forward selection algorithm, and the combined features outperform individual ones.

Next, we address the separation of two-talker mixtures in reverberant conditions. We employ recurrent neural networks (RNNs) with bidirectional long short-term memory (BLSTM) to separate and dereverberate the target speech signal, and we propose two-stage networks to deal effectively with both speaker separation and speech dereverberation: the first stage separates and dereverberates the two-talker mixture, and the second stage further enhances the separated target signal. Extensive evaluation of the two-stage architecture shows large improvements over unprocessed mixtures and a clear performance gain over single-stage networks.

We then evaluate the proposed speaker separation systems in listening experiments under anechoic and reverberant conditions. Substantial intelligibility improvements were found for human listeners across a wide range of reverberation times and target-to-interferer ratios; in some cases, hearing-impaired listeners receiving algorithm benefit matched or surpassed normal-hearing listeners presented with unprocessed two-talker mixtures. Finally, we address talker-independent speaker separation, in which the test speakers are unseen during training. Talker-independent solutions have mostly been limited to anechoic conditions; we extend them to reverberant conditions by proposing two-stage algorithms that build on the recently proposed deep CASA system for talker-independent speaker separation.
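The supervised formulation above centers on learning a mapping from noisy speech features to an ideal T-F mask. As a hedged illustration (not code from the dissertation), the sketch below computes one common training target in this literature, the ideal ratio mask (IRM), from parallel clean and interference signals, and shows how an estimated mask would be applied to a mixture. The sampling rate, STFT parameters, and function names are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

FS = 16000  # sampling rate in Hz (assumed)

def ideal_ratio_mask(speech, interference, nperseg=512, noverlap=256):
    """IRM training target from parallel clean speech and interference.

    Values lie in [0, 1]: near 1 where speech dominates a T-F unit,
    near 0 where the interference dominates.
    """
    _, _, S = stft(speech, fs=FS, nperseg=nperseg, noverlap=noverlap)
    _, _, N = stft(interference, fs=FS, nperseg=nperseg, noverlap=noverlap)
    ps, pn = np.abs(S) ** 2, np.abs(N) ** 2
    return np.sqrt(ps / (ps + pn + 1e-12))

def apply_mask(mixture, mask, nperseg=512, noverlap=256):
    """Mask the mixture STFT and resynthesize a time-domain estimate.

    At test time `mask` would be a network's estimate, since the ideal
    mask requires the unobserved clean sources.
    """
    _, _, M = stft(mixture, fs=FS, nperseg=nperseg, noverlap=noverlap)
    _, estimate = istft(M * mask, fs=FS, nperseg=nperseg, noverlap=noverlap)
    return estimate
```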
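The feature-combination study relies on a sequential floating forward selection (SFFS) algorithm. The following is a generic SFFS sketch, not the dissertation's implementation: the `score` callback, which maps a candidate feature subset to separation performance (for example, mean objective intelligibility on a development set), is an assumed interface.

```python
def sffs(features, score, k):
    """Generic sequential floating forward selection (sketch).

    Greedily adds the feature whose inclusion most improves `score`,
    then "floats": drops a feature whenever the reduced subset beats
    the best score previously recorded at that smaller size.
    Assumes k <= len(features).
    """
    selected = []
    best_at_size = {}  # subset size -> best score seen at that size

    def record(subset):
        s = score(subset)
        if s > best_at_size.get(len(subset), float("-inf")):
            best_at_size[len(subset)] = s

    while len(selected) < k:
        # Forward step: add the remaining feature that helps most.
        add = max((f for f in features if f not in selected),
                  key=lambda f: score(selected + [f]))
        selected.append(add)
        record(selected)
        # Floating step: remove the least useful feature while doing so
        # improves on the best subset already seen at that size.
        while len(selected) > 2:
            drop = max(selected,
                       key=lambda f: score([g for g in selected if g != f]))
            reduced = [g for g in selected if g != drop]
            if score(reduced) > best_at_size.get(len(reduced), float("-inf")):
                selected = reduced
                record(selected)
            else:
                break
    return selected
```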
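For the two-stage BLSTM networks, a minimal architectural sketch in Keras follows, assuming magnitude-spectrogram features and mask-based stage outputs. Layer sizes, the feature dimension, and the way the two stages are chained are illustrative assumptions rather than the dissertation's specification: stage 1 stands in for joint separation and dereverberation of the two-talker mixture, stage 2 for enhancement of the separated target.

```python
from tensorflow.keras import layers, Model

N_FEAT = 257  # T-F units per frame (assumed: 512-point STFT magnitudes)

def blstm_stage(name):
    """One mask-estimation stage: stacked BLSTMs -> sigmoid T-F mask."""
    inp = layers.Input(shape=(None, N_FEAT))  # (frames, features), any length
    x = layers.Bidirectional(layers.LSTM(512, return_sequences=True))(inp)
    x = layers.Bidirectional(layers.LSTM(512, return_sequences=True))(x)
    mask = layers.TimeDistributed(layers.Dense(N_FEAT, activation="sigmoid"))(x)
    return Model(inp, mask, name=name)

# Stage 1 separates and dereverberates the mixture; stage 2 refines the
# stage-1 estimate. Chaining masked outputs is one plausible interface.
stage1 = blstm_stage("separate_dereverb")
stage2 = blstm_stage("enhance")

mixture = layers.Input(shape=(None, N_FEAT))
est1 = layers.Multiply()([mixture, stage1(mixture)])  # stage-1 target estimate
est2 = layers.Multiply()([est1, stage2(est1)])        # stage-2 enhancement
two_stage = Model(mixture, est2, name="two_stage_blstm")
two_stage.compile(optimizer="adam", loss="mse")  # trained against clean targets
```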
Committee
DeLiang Wang (Advisor)
Eric Fosler-Lussier (Committee Member)
Eric Healy (Committee Member)
Pages
118 p.
Subject Headings
Artificial Intelligence; Computer Engineering; Computer Science
Keywords
Speaker separation; dereverberation; deep learning; feature selection; speech intelligibility
Recommended Citations
APA Style (7th edition)
Delfarah, M. (2019). Deep learning methods for speaker separation in reverberant conditions [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1563225381590785

MLA Style (8th edition)
Delfarah, Masood. Deep learning methods for speaker separation in reverberant conditions. 2019. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1563225381590785.

Chicago Manual of Style (17th edition)
Delfarah, Masood. "Deep learning methods for speaker separation in reverberant conditions." Doctoral dissertation, Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1563225381590785
Document number:
osu1563225381590785
Download Count:
506
Copyright Info
© 2019, all rights reserved.
This open access ETD is published by The Ohio State University and OhioLINK.