Search Results

(Total results 35)

  • 1. Marapakala, Shiva Machine Learning Based Average Pressure Coefficient Prediction for ISOLATED High-Rise Buildings

    Master of Science in Mechanical Engineering, Cleveland State University, 2023, Washkewicz College of Engineering

    In structural design, the distribution of wind-induced pressure exerted on structures is crucial. The pressure distribution for a particular building is often determined by scale-model tests in boundary layer wind tunnels (BLWTs). Experiments with BLWTs must be conducted for every combination of building shape and wind factors of interest. Because this procedure can be time- and resource-intensive, resource or physical testing restrictions may limit the acquisition of needed data, so a trustworthy method to cyber-enhance data-collection operations in BLWTs is sought. This research analyzes how machine learning approaches may improve traditional BLWT modeling to increase the information obtained from tests while proportionally lowering the work needed to complete them. The more general question centers on how a machine-learning-enhanced method ultimately leads to approaches that learn as data are collected and subsequently optimize the execution of experiments to shorten the time needed to complete user-specified objectives. Three machine learning models, namely support vector regressors, gradient boosting regressors, and feed-forward neural networks, were used to predict the surface-averaged mean pressure coefficients (cp) on isolated high-rise buildings. The models were trained to predict average cp for missing angles and also for varying dimensions. Both global and local approaches to training the models were used and compared. The Tokyo Polytechnic University's Aerodynamic Database for Isolated High-rise Buildings was used to train all the models in this study. Local and global prediction approaches were used for the DNN and GBRT models, and no considerable difference was found between them. The DNN model showed the best accuracy (R² > 99%, MSE < 1.5%) among the models used, for both missing angles and missing dimensions; the other two models also showed high accuracy (R² > 97%, MSE < 4%).

    Committee: Navid Goudarzi (Committee Chair); Prabaha Sikder (Committee Member); Mustafa Usta (Committee Member) Subjects: Artificial Intelligence; Design; Engineering; Urban Planning
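The interpolation task described in the abstract above can be sketched with scikit-learn: regressors are trained on pressure coefficients sampled at some wind angles and then asked to predict held-out ("missing") angles. The cosine-shaped cp curve below is synthetic stand-in data, not the Tokyo Polytechnic University database, and the model settings are illustrative only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
angles = np.arange(0.0, 360.0, 5.0)                  # wind angles in degrees
cp = np.cos(np.radians(angles)) + rng.normal(0, 0.02, angles.size)

# Hold out every fourth angle to play the role of "missing" measurements.
missing = np.arange(angles.size) % 4 == 0
X_train, y_train = angles[~missing].reshape(-1, 1), cp[~missing]
X_test, y_test = angles[missing].reshape(-1, 1), cp[missing]

scores = {}
for model in (SVR(C=10.0), GradientBoostingRegressor(random_state=0)):
    pred = model.fit(X_train, y_train).predict(X_test)
    scores[type(model).__name__] = r2_score(y_test, pred)
print({name: round(r2, 3) for name, r2 in scores.items()})
```

The same train/predict loop extends directly to a third, neural-network regressor when comparing models as the thesis does.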
  • 2. Zerai, Finhas Mineral Prospectivity Mapping Using Integrated Remote Sensing and GIS in Kerkasha - Southwest Eritrea

    Master of Science (MS), Bowling Green State University, 2023, Geology

    This study evaluates the potential for mineral prospectivity mapping (MPM) within the Kerkasha area, southwestern Eritrea, using remote sensing and geochemical data analysis. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) remote sensing data were used for mapping zones of hydrothermal alteration, while assessment of geologic structures is based on automated extraction of lineaments from a digital elevation model. Integration of these alteration and structural datasets with surface geochemical data was used to identify pathfinder elements associated with Au-Cu-Zn mineralization and to evaluate and delineate anomalous mineralization regions in this relatively underexplored region of the Arabian-Nubian Shield (ANS). Specifically, the modeling approach for the extraction and interpretation of mineralization-related spectral footprints uses selective principal component analysis (SPCA), while the lineament features, extracted from different digital terrain models, were integrated with the soil geochemical data and modeled by principal component analysis (PCA). The results reveal a northeast-southwest trend of lineaments, delineate zones of hydrothermal alteration that indicate the presence of multi-deposit-type mineralization, and identify pathfinder elements. In addition, Au-Cu-Zn anomalous zones are extracted by a one-class support vector machine (OCSVM), and the performance of this classification is validated by Kruskal-Wallis and Pearson's chi-square tests. The results show significant differences between the anomalous and non-anomalous zones and a relationship between known mineral deposits and predicted anomalies. The proposed MPM shows promising results for robust automated delineation and understanding of mineralization processes.

    Committee: Peter Gorsevski Ph. D. (Committee Chair); Kurt Panter Ph. D. (Committee Member); John Farver Ph. D. (Committee Member) Subjects: Geochemistry; Geographic Information Science; Geology
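The PCA step described in the abstract above can be sketched as follows: principal components of a multi-element soil-geochemistry table reveal which elements load together, pointing at pathfinder elements. The element concentrations below are synthetic stand-ins, with a hidden mineralization signal shared by Au, Cu, and Zn but not by Fe or Mg.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
mineralized = rng.normal(0, 1, n)            # hidden mineralization signal

def noise():
    return rng.normal(0, 0.3, n)

# Au, Cu, Zn follow the mineralization signal; Fe and Mg are unrelated.
X = np.column_stack([
    mineralized + noise(),   # Au
    mineralized + noise(),   # Cu
    mineralized + noise(),   # Zn
    noise() * 3,             # Fe
    noise() * 3,             # Mg
])
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
loadings = pca.components_[0]
print("PC1 loadings (Au, Cu, Zn, Fe, Mg):", np.round(loadings, 2))
```

On real data the first component's dominant loadings play the role of candidate pathfinder elements; SPCA differs in selecting specific band/element subsets before the decomposition.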
  • 3. Demus, Justin Prognostic Health Management Systems for More Electric Aircraft Applications

    Master of Science, Miami University, 2021, Computational Science and Engineering

    As power electronics permeate critical infrastructure in modern society, more precise and effective diagnostic methods are required to improve system reliability as well as reduce maintenance costs and unexpected failures. Prognostic and Health Management (PHM) systems are real-time analysis hardware that estimate device health by monitoring underlying failure mechanisms. While several variants of PHM methods have been explored, the use of electromagnetic interference (EMI) as a condition monitoring tool, referred to as E-PHM, has received limited attention despite its utility as a sensitive and non-invasive prognostic tool. This research demonstrates the feasibility of E-PHM techniques to measure, in real time, the junction temperature of power devices using machine learning algorithms (MLAs). This is accomplished in situ, without interruption of device operation and without altering the system's performance. Semiconductor operating parameters are sensitive to changes in temperature, altering device behavior. These changes in behavior are reflected in the electromagnetic spectrum of the circuit. Preliminary research has classified changes in EMI via a support vector machine (SVM) algorithm to predict device junction temperature. The proposed approach will shift from classification-based models, such as the SVM, to regression-based models to improve accuracy and precision in junction temperature prediction.

    Committee: Mark Scott (Advisor); Miao Wang (Committee Member); Chi-Hao Cheng (Committee Member) Subjects: Electrical Engineering; Engineering
  • 4. Bradley, Rebecca Spectroscopy and Machine Learning: Development of Methods for Cancer Detection Using Mid-Infrared Wavelengths

    Doctor of Philosophy, The Ohio State University, 2021, Chemical Physics

    Cancer is a disease that affects millions of people each year, and cancer detection is currently done using costly and inefficient methods. The purpose of this research has been to develop methods that use infrared spectroscopy and machine learning to accurately and efficiently detect cancer. The vibrational information from the molecules of tissue can be accessed through infrared spectroscopy and various spectral metrics, including those from spectral peak ratios, calibrant spectra, and principal component analysis of spectral libraries. This information is coupled with machine learning methods for separation and feature selection. Using these methods, two imaging experiments were conducted: one on SKH-1 mice with skin cancer and one on human colorectal cancer metastatic to the liver. Support vector machine learning methods were able to separate the tumor spectra from other spectra, including nontumor, with high accuracy. Support vector machines were also used to determine optimum peak ratios for separation to reduce the number of wavelengths needed. Support vector machine methods were also compared with metrics from currently used tissue staining techniques – hematoxylin and eosin – which showed that infrared spectra are more effective at separating cancer under the present conditions. These known optical techniques were also joined with infrared spectroscopy for a combined approach. Using the support vector machine decision equation, images of tissue were created to aid in the diagnosis of cancer. The methods developed were used as a basis for the design of a fast infrared probe that can detect skin cancer with high levels of accuracy in a clinical trial. The fast infrared probe was also able to distinguish between two types of skin cancer – basal and squamous cell carcinomas. This prototype probe could be modified with an etalon filter to increase its efficiency when used in a clinical setting.
This research develops and tests methods that show that i (open full item for complete abstract)

    Committee: James Coe Ph.D. (Advisor); Heather Allen Ph.D. (Committee Member); Sherwin Singer Ph.D. (Committee Member); Dongping Zhong Ph.D. (Committee Member) Subjects: Chemistry; Physics
  • 5. Bard, Ari Modeling and Predicting Heat Transfer Coefficients for Flow Boiling in Microchannels

    Master of Sciences, Case Western Reserve University, 2021, EMC - Mechanical Engineering

    Flow boiling has become a reliable mode of adapting to larger power densities and expanded functionality because it utilizes both the latent and sensible heat contained within a specified coolant. There are currently few tools proven reliable for predicting heat transfer coefficients during flow boiling in microchannels. The most popular methods rely on semi-empirical correlations derived from experimental data but can only be applied to a narrow subset of testing conditions. This study uses multiple data science methods to accurately predict the heat transfer coefficient during flow boiling in microchannels on a database consisting of 16,953 observations collected across 50 experiments using 12 working fluids. The support vector machine model performed best, with a mean absolute percentage error (MAPE) of 11.3%. The heat flux, vapor-only Froude number, and quality proved to be especially significant variables across 90% of over 110 different models.

    Committee: Chirag Kharangate PHD (Advisor); Brian Maxwell PHD (Committee Member); Roger French PHD (Committee Member) Subjects: Mechanical Engineering
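The evaluation described above can be sketched in a few lines: a support vector regressor scored by MAPE on held-out data. The three features (standing in for heat flux, vapor-only Froude number, and quality) and the target values below are synthetic, not the 16,953-point flow-boiling database, and the regularization setting is illustrative.

```python
import numpy as np
from sklearn.svm import SVR

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

rng = np.random.default_rng(2)
# Stand-in features, e.g. heat flux, vapor-only Froude number, quality.
X = rng.uniform(0.1, 1.0, (300, 3))
y = 20 + 50 * X[:, 0] + 20 * X[:, 1] ** 2 + 10 * X[:, 2] + rng.normal(0, 1, 300)

model = SVR(C=100.0).fit(X[:200], y[:200])
test_mape = mape(y[200:], model.predict(X[200:]))
print(f"held-out MAPE: {test_mape:.1f}%")
```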
  • 6. Park, Samuel A Comparison of Machine Learning Techniques to Predict University Rates

    Master of Science, University of Toledo, 2019, Mathematics

    In recent years the use of machine learning techniques in data analysis has grown immensely in popularity. While such techniques have been helpful for those interested in data analytics, it is important to understand their underlying structure in order to implement them more effectively. In this thesis we discuss the motivation behind decision trees, random forests, support vector machines, and neural networks, alongside the more traditional logistic regression and the Generalized Additive Partially Linear Model (GAPLM) estimator developed by Liu et al. We also discuss cross-validation as well as ROC curves and AUC as ways to compare the effectiveness of these models. We conclude the thesis with an application of these methods: predicting whether an undergraduate student who is enrolled in the fall semester will enroll in the following spring semester. We also include Linear Discriminant Analysis and Quadratic Discriminant Analysis in the data analysis portion of this thesis. We find that the GAPLM method performs the best out of all the methods used.

    Committee: Rong Liu (Committee Chair); Geoffrey Martin (Committee Member); Qin Shao (Committee Member) Subjects: Mathematics; Statistics
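The comparison protocol described in this abstract (cross-validation scored by AUC) can be sketched with scikit-learn. The dataset below is a synthetic stand-in for the enrollment records, and the two classifiers are illustrative representatives of the model families compared in the thesis.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic binary "will the student re-enroll" dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

results = {}
for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(random_state=0)):
    # 5-fold cross-validated area under the ROC curve.
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    results[type(clf).__name__] = auc
    print(type(clf).__name__, round(auc, 3))
```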
  • 7. Elkin, Colin Development of Adaptive Computational Algorithms for Manned and Unmanned Flight Safety

    Doctor of Philosophy, University of Toledo, 2018, Engineering (Computer Science)

    A strong emphasis on safety in commercial and military aviation is as old and as significant as the field of aviation itself. With the growing role of autonomy in aviation, the future of flight comprises two general directions: manned and unmanned. Manned aircraft is the more established area, in which a human flight crew serves as the main driving force in ensuring an aircraft's safety and success. Within this time-tested concept, the most significant safety bottleneck lies in a crew managing tasks of high mental workload. In recent years, autonomy has aided in easing cognitive workload. From there, the challenge lies in applying a seamless blend of human and autonomous control based on the needs of one's mental load. Meanwhile, the field of unmanned aerial vehicles (UAVs) poses its own unique challenges of integrating into a shared airspace and transitioning from remote human-centric control to fully autonomous control. In such a case, minimizing discrepancies between predicted UAV behavior and actual outcomes is an ongoing task to ensure a safe and reliable flight. While manned and unmanned flight safety may seem distinctly different in these regards, this dissertation proposes an overarching common theme: the ability to effectively model inputs and outputs through machine learning to predict potential safety hazards and thereby improve the overall flight experience. This process is conducted by 1) evaluating different machine learning techniques for assessing cognitive workload, 2) predicting trajectories for autonomous UAVs, and 3) developing adaptive systems that dynamically select appropriate algorithms to ensure optimal prediction accuracy at any given time. The first phase of the research involves the manned side of flight safety and does so by examining the effects of different machine learning techniques used for assessing cognitive workload.
This begins by comparing the different algorithms on four different datasets i (open full item for complete abstract)

    Committee: Vijay Devabhaktuni PhD (Committee Chair); Mansoor Alam PhD (Committee Member); Ahmad Javaid PhD (Committee Member); Devinder Kaur PhD (Committee Member); Weiqing Sun PhD (Committee Member); Lawrence Thomas PhD (Committee Member) Subjects: Computer Engineering; Computer Science
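The adaptive-selection idea in item 3) above, picking whichever algorithm currently predicts best, can be sketched as a validation-score tournament. The dataset and candidate models below are illustrative stand-ins; in the dissertation the inputs would be, e.g., cognitive-workload or trajectory features.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=3)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=3)

# Score every candidate on held-out data and select the current best performer.
candidates = [SVC(), KNeighborsClassifier(), DecisionTreeClassifier(random_state=3)]
scored = [(m.fit(X_tr, y_tr).score(X_val, y_val), type(m).__name__)
          for m in candidates]
best_score, best_name = max(scored)
print("selected:", best_name, "validation accuracy:", round(best_score, 3))
```

An online variant would re-run this selection as new data arrives, which is the "at any given time" aspect of the adaptive system.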
  • 8. Pavy, Anne SV-Means: A Fast One-Class Support Vector Machine-Based Level Set Estimator

    Doctor of Philosophy (PhD), Wright State University, 2017, Electrical Engineering

    In this dissertation, a novel algorithm, SV-Means, is developed, motivated by the many functions needed to perform radar waveform classification in an evolving, contested environment. Important functions include the ability to reject classes not in the library, provide confidence in the classification decision, adapt the decision boundary on the fly, discover new classes, and quickly add new classes to the library. The SV-Means approach addresses these functions by providing a fast algorithm that can be used for anomaly detection, density estimation, open set classification, and clustering within a Bayesian generative framework. The SV-Means algorithm extends the quantile one-class support vector machine (q-OCSVM) density estimation algorithm into a classification formulation, with inspiration from k-means and stochastic gradient descent principles. In addition, the algorithm can be trained at least an order of magnitude faster than the q-OCSVM and other OCSVM algorithms. SV-Means has been thoroughly tested with a phase-modulated radar waveform data set and several data sets from the University of California Irvine (UCI) machine learning repository, in each application area except clustering. For clustering, a novel algorithm, SV-Means Level Set Clustering, was formulated using the SV-Means algorithm as a first step to determine the number of clusters per level set and distinguish overlapping clusters. Finally, an end-to-end demonstration, from training to testing to clustering to adding a new class to the library, was performed using the SV-Means algorithm.

    Committee: Brian Rigling Ph.D. (Advisor); Fred Garber Ph.D. (Committee Member); Kefu Xue Ph.D. (Committee Member); Michael Bryant Ph.D. (Committee Member); Randolph Moses Ph.D. (Committee Member) Subjects: Electrical Engineering
  • 9. Dalvi, Aditi Performance of One-class Support Vector Machine (SVM) in Detection of Anomalies in the Bridge Data

    MS, University of Cincinnati, 2017, Engineering and Applied Science: Electrical Engineering

    In a time when Structural Health Monitoring (SHM) is of vital importance for the safety and maintenance of critical structures such as bridges, detecting damage or anomalies, as well as analyzing the normal behavior of structures, has gained significance. Data models have been increasingly used in recent years for tracking the normal behavior of structures and hence detecting and classifying anomalies. A large number of machine learning algorithms have been proposed to model operational and functional changes in structures; however, only a limited number of studies were applied to actual measurement data, owing to limited access to long-term measurements of structures. Structural health monitoring of civil infrastructure such as highway bridges, during construction or in-service use, is performed at the University of Cincinnati Infrastructure Institute (UCII), giving access to actual bridge measurement data. The essence of this SHM system lies in the processing of data, where it is able to detect anomalies. The current system uses a linear regression method to detect outliers in the bridge data. This study introduces a novel anomaly detection method employing one-class Support Vector Machines (SVMs) and compares their performance with the traditional regression model. The method is implemented on measurement data from the Ironton-Russell Bridge, monitored by UCII while in service, and its results are compared with linear regression as a case study. The method is further applied to the Ironton-Russell Replacement Bridge, which UCII has been monitoring since the construction stage; the actual construction events of the Replacement Bridge are used as validation for the comparison. The aim is to show the advantages of SVMs, owing to their ability to classify damage even with minimal training data.
The results show that using SVMs will improve the detectability and also the (open full item for complete abstract)

    Committee: Arthur Helmicki Ph.D. (Committee Chair); Victor Hunt Ph.D. (Committee Member); Ali Minai Ph.D. (Committee Member) Subjects: Electrical Engineering
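The one-class SVM approach described above can be sketched in a few lines: the model is trained only on "normal" sensor readings and then flags departures as anomalies (the role played by construction events in the thesis). The two-feature readings below are synthetic stand-ins, not the Ironton-Russell monitoring data, and the `nu` setting is illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)
normal = rng.normal(0.0, 1.0, (300, 2))          # typical operating behavior
anomalies = rng.normal(6.0, 1.0, (20, 2))        # e.g. an unusual event

# nu bounds the fraction of training points treated as outliers.
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(normal)
pred = ocsvm.predict(anomalies)                   # -1 = anomaly, +1 = normal
print("flagged as anomalous:", int((pred == -1).sum()), "of", len(anomalies))
```

Unlike a regression-residual outlier test, the one-class SVM needs no model of the input-output relationship, only examples of normal behavior.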
  • 10. Yu, Andrew NBA ON-BALL SCREENS: AUTOMATIC IDENTIFICATION AND ANALYSIS OF BASKETBALL PLAYS

    Master of Computer and Information Science, Cleveland State University, 2017, Washkewicz College of Engineering

    The on-ball screen is a fundamental offensive play in basketball; it is often used to trigger a chain reaction of player and ball movement to obtain an effective shot. All teams in the National Basketball Association (NBA) employ the on-ball screen on offense. On the other hand, a defense can mitigate its effectiveness by anticipating the on-ball screen and its goals. In the past, it was difficult to measure a defender's ability to disrupt the on-ball screen, and it was often described using abstract words like instincts, experience, and communication. In recent years, player motion-tracking data in NBA games has become available through the development of sophisticated data collection tools. This thesis presents methods to construct a framework which can extract, transform, and analyze the motion-tracking data to automatically identify the presence of on-ball screens. The framework also provides assistance for NBA players and coaches to adjust their game plans regarding the on-ball screen using trends from past games. With the help of support vector machines, the framework identifies on-ball screens with an accuracy of 85%, which shows considerable improvement from the current published results in existing literature.

    Committee: Sunnie Chung Ph.D. (Committee Chair); Yongjian Fu Ph.D. (Committee Member); Nigamanth Sridhar Ph.D. (Committee Member) Subjects: Artificial Intelligence; Computer Science
  • 11. Jiao, Weiwei Predictive Analysis for Trauma Patient Readmission Database

    Master of Science, The Ohio State University, 2017, Public Health

    Introduction: Identifying the key elements associated with hospital readmission is critical in terms of controlling the cost for hospitals and improving the care quality for patients. Our goal is to compare three different statistical models of predicting readmission rate in pediatric trauma patients and identify important risk factors. Methods: Logistic regression, random forest and support vector machine are popular statistical models for predicting binary outcomes. We apply these three methods to the Healthcare Cost and Utilization Project (HCUP) National Readmissions Database (NRD) 2013-2014 to compare their predictive performance for readmission. Results: The Support Vector Machine method with linear function has the greatest mean AUC (0.6724) across 10-fold cross validation in the training set. The logistic regression model has the greatest AUC value (0.6862) in the validation set. Support Vector Machine with linear function (AUC=0.6842) has the lowest misclassification rate and highest sensitivity in the validation set. Conclusions: Pediatric trauma patients have a low readmission risk. The key factors of readmission are CCS diagnosis, age and mechanism of trauma.

    Committee: Bo Lu (Advisor); Chi Song (Committee Member) Subjects: Biostatistics
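The three-model comparison described above can be sketched as follows: logistic regression, a random forest, and a linear-kernel SVM scored by AUC on a held-out validation set. The data is a synthetic imbalanced stand-in for the HCUP readmission records (readmission as the rare class), not the actual database.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# ~10% positive class, standing in for the low readmission rate.
X, y = make_classification(n_samples=600, n_features=10, weights=[0.9],
                           random_state=5)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=5)

aucs = {}
for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(random_state=5),
            SVC(kernel="linear")):
    clf.fit(X_tr, y_tr)
    # AUC needs a continuous score: decision values or class probabilities.
    score = (clf.decision_function(X_val) if hasattr(clf, "decision_function")
             else clf.predict_proba(X_val)[:, 1])
    aucs[type(clf).__name__] = roc_auc_score(y_val, score)
print({k: round(v, 3) for k, v in aucs.items()})
```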
  • 12. Albanwan, Hessah Remote Sensing Image Enhancement through Spatiotemporal Filtering

    Master of Science, The Ohio State University, 2017, Civil Engineering

    The analysis of time-sequence satellite images is a powerful tool in remote sensing; it is used to explore the statics and dynamics of the surface of the Earth. The quality of multitemporal images is usually influenced by meteorological conditions, high surface reflectance, illumination, and satellite sensor conditions. These negative influences may produce noise and differing radiances and appearances between the images, which can affect the applications that process them. Thus, a spatiotemporal bilateral filter has been adopted in this research to enhance the quality of an image before using it in any application. The filter takes advantage of the temporal information provided by multitemporal images and attempts to reduce the differences between them to improve transfer learning used in classification. The classification method used here is the support vector machine (SVM). Three experiments were conducted in this research: two on Landsat 8 images with low-to-medium resolution, and the third on high-resolution images from the Planet satellite. The newly developed filter was shown to enhance the accuracy of classification using transfer learning by about 5%, 15%, and 2% for the three experiments, respectively.

    Committee: Rongjun Qin (Advisor); Alper Yilmaz (Committee Member); Charles Toth (Committee Member) Subjects: Civil Engineering
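A rough sketch of the temporal half of such a filter: each pixel is averaged across the image time series, with frames that differ strongly from the reference frame down-weighted by a bilateral range kernel. This is a simplified stand-in for the thesis's spatiotemporal filter (the spatial kernel is omitted), and the 3-frame "satellite" stack is synthetic.

```python
import numpy as np

def temporal_bilateral(stack, ref_idx, sigma_r=0.1):
    """stack: (T, H, W) image time series; returns a filtered reference frame."""
    ref = stack[ref_idx]
    diff = stack - ref                               # per-frame radiance change
    w = np.exp(-(diff ** 2) / (2 * sigma_r ** 2))    # range (similarity) weights
    return (w * stack).sum(axis=0) / w.sum(axis=0)

rng = np.random.default_rng(6)
scene = rng.uniform(0, 1, (16, 16))                  # true surface reflectance
stack = np.stack([scene + rng.normal(0, 0.05, scene.shape) for _ in range(3)])
filtered = temporal_bilateral(stack, ref_idx=0)
print("noise std before:", round(float((stack[0] - scene).std()), 4),
      "after:", round(float((filtered - scene).std()), 4))
```

Pixels that genuinely changed between acquisitions receive small weights and are left close to the reference value, which is what lets the filter suppress noise without blending real surface change.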
  • 13. Roos, Jason Probabilistic SVM for Open Set Automatic Target Recognition on High Range Resolution Radar Data

    Master of Science in Electrical Engineering (MSEE), Wright State University, 2016, Electrical Engineering

    The application of Automatic Target Recognition (ATR) to High Range Resolution (HRR) radar data in scenarios containing unknown targets is of great interest for military and civilian applications. HRR radar data provide greater resolution of a target as well as the ability to perform ATR on a moving target, which gives them an advantage over other imaging systems. With the added resolution of HRR comes the disadvantage that a change in aspect angle or orientation produces greater changes in the collected data, making classical ATR more difficult. Closed-set ATR on HRR radar data assumes that all potential targets are part of the training target database. Closed-set ATR has achieved higher rates of correct classification through the selection of proper feature extraction algorithms; however, only a few methods for performing open-set ATR have been developed. Open-set ATR is the ability to identify and discard a target that is not one of the trained targets. By identifying these untrained targets, the number of misclassified targets is reduced, thereby increasing the probability of a correct classification in a realistic setting. While open-set ATR is the more realistic approach, classical closed-set ATR remains the standard method. One of the more popular classification algorithms in use today is the Support Vector Machine (SVM). The SVM by nature only works on a binary closed-set problem; however, by extracting probabilities from an SVM as proposed by Platt [1], this classification algorithm can be applied to the open-set problem. In this thesis, the feature extraction methods established for closed-set ATR are modified to facilitate the application of the Probabilistic Open Set Support Vector Machine (POS-SVM).
Utilizing the Eigen Template (ET) and Mean Template (MT) feature extraction methods developed for closed-set ATR, in combination with centroid alignment, an open set ATR Probability of correct classification (PCC (open full item for complete abstract)

    Committee: Arnab Shaw Ph.D. (Advisor); Brian Rigling Ph.D. (Committee Member); Michael Saville Ph.D. (Committee Member) Subjects: Electrical Engineering
  • 14. Shalev, Ronny AUTOMATED MACHINE LEARNING BASED ANALYSIS OF INTRAVASCULAR OPTICAL COHERENCE TOMOGRAPHY IMAGES

    Doctor of Philosophy, Case Western Reserve University, 2016, EECS - Electrical Engineering

    Coronary artery disease (CAD) is the leading cause of death in the world. Most acute coronary events (e.g., heart attacks) are due to the rupture of atherosclerotic plaques inside the arteries; however, calcified lesions are the most widely treatable, typically by stent implantation via percutaneous coronary intervention (PCI). Intravascular Optical Coherence Tomography (IVOCT) imaging has the resolution, contrast, and penetration depth to characterize coronary artery plaques. Conventional manual evaluation of IVOCT images, based on qualitative interpretation of image features, is tedious and time consuming. The aim of this PhD dissertation was to develop advanced algorithms to fully automate the task of plaque characterization, thereby significantly reducing image analysis time, enabling intervention planning, and increasing IVOCT data usability. We based our algorithms on machine learning combined with advanced image processing techniques. We developed a processing pipeline on a 3D local region of support for estimation of optical properties of atherosclerotic plaques from coronary artery IVOCT pullbacks. Performance was assessed in comparison with observer-defined standards using clinical pullback data. Values (calcium 3.58±1.74 mm⁻¹, lipid 9.93±2.44 mm⁻¹, and fibrous 1.96±1.11 mm⁻¹) were consistent with previous measurements. We then created a method to automatically classify plaque tissues as fibrous, calcified, or lipid-rich. For this multi-class problem, we used one-versus-rest SVM classifiers for each of the three plaque types, rules to exclude many voxels called “other,” and both physics-inspired and local texture features to classify voxels. Experiments on the clinical training data yielded 5-fold, voxel-wise accuracies of 87.7±8.6%, 96.7±4.9%, and 97.3±2.4% for calcified, lipid-rich, and fibrotic tissues, respectively. 
Experiments on the independent validation data (ex-vivo image data accurately labeled using registered 3D microscopic cryo-imaging and was used as (open full item for complete abstract)

    Committee: David Wilson PhD (Advisor); Soumya Ray PhD (Committee Member); Hiram Bezerra PhD, MD (Committee Member); Murat Cavusoglu PhD (Committee Chair); Francis Merat PhD (Committee Member) Subjects: Artificial Intelligence; Computer Science; Medical Imaging
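The one-versus-rest scheme described above, one binary SVM per plaque class, can be sketched with scikit-learn's `OneVsRestClassifier`. The three well-separated synthetic clusters below stand in for the fibrous, calcified, and lipid-rich feature distributions; the real features would be the physics-inspired and texture features from the pullbacks.

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Three synthetic "tissue" classes standing in for fibrous/calcified/lipid-rich.
X, y = make_blobs(n_samples=450, centers=[[0, 0], [5, 5], [0, 6]],
                  cluster_std=1.0, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

# One binary RBF-kernel SVM per class, combined one-versus-rest.
ovr = OneVsRestClassifier(SVC(kernel="rbf")).fit(X_tr, y_tr)
acc = ovr.score(X_te, y_te)
print("held-out voxel-wise accuracy:", round(acc, 3))
```

In the dissertation's pipeline, rule-based exclusion of "other" voxels would run before this per-class vote.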
  • 15. Scherreik, Matthew A Probabilistic Technique For Open Set Recognition Using Support Vector Machines

    Master of Science in Engineering (MSEgr), Wright State University, 2014, Electrical Engineering

    Classification algorithms trained using finite sets of target and confuser data are limited by the training set. These algorithms are trained under closed set assumptions and do not account for the infinite universe of confusers found in practice. In contrast, classification algorithms developed under open set assumptions label inputs not present in the training data as unknown instead of assigning the most likely class. We present an approach to open set recognition, the probabilistic open set SVM (POS-SVM), that utilizes class posterior estimates to determine probability thresholds for classification. This is accomplished by first training an SVM in a 1-vs-all configuration on a training dataset containing only target classes. A validation set containing only classes present in the training data is used to iteratively determine an appropriate posterior probability threshold for each target class. The testing dataset, which contains targets present in the training data as well as several confuser classes, is first classified by the 1-vs-all SVM. If the estimated posterior for an input falls below the threshold, the input is labeled as unknown; otherwise, it is labeled with the class resulting from the SVM decision. We apply our method to the classification of synthetic ladar range images of civilian vehicles and measured infrared images of military vehicles. We show that the POS-SVM offers improved performance over other open set algorithms by allowing the use of nonlinear kernels, incorporating intuitive free parameters, and empirically determining good thresholds.

    Committee: Brian Rigling Ph.D. (Advisor); Fred Garber Ph.D. (Committee Member); Arnab Shaw Ph.D. (Committee Member) Subjects: Electrical Engineering
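The thresholding idea common to this entry and entry 13 can be sketched with scikit-learn: an SVM with Platt-scaled posteriors (`probability=True`) whose low-confidence predictions are relabeled "unknown". The two Gaussian target classes, the never-trained confuser class, and the 0.9 threshold below are all illustrative stand-ins; the thesis determines thresholds per class from a validation set rather than fixing one by hand.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(8)
targets_a = rng.normal([0, 0], 0.5, (100, 2))
targets_b = rng.normal([4, 4], 0.5, (100, 2))
confusers = rng.normal([4, 0], 0.5, (50, 2))     # class absent from training

X = np.vstack([targets_a, targets_b])
y = np.array([0] * 100 + [1] * 100)
svm = SVC(probability=True, random_state=8).fit(X, y)  # Platt-scaled posteriors

def classify(samples, threshold=0.9):
    """Label with the argmax class, or "unknown" if max posterior < threshold."""
    post = svm.predict_proba(samples)
    labels = post.argmax(axis=1).astype(object)
    labels[post.max(axis=1) < threshold] = "unknown"
    return labels

n_unknown = int((classify(confusers) == "unknown").sum())
print("confusers labeled unknown:", n_unknown, "of", len(confusers))
```

Confusers that sit between the trained classes receive near-uniform posteriors and fall below the threshold, which is the rejection mechanism both theses exploit.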
  • 16. Whitney, G. Adam Characterization of the Frictional-Shear Damage Properties of Scaffold-Free Engineered Cartilage and Reduction of Damage Susceptibility by Upregulation of Collagen Content

    Doctor of Philosophy, Case Western Reserve University, 2015, Biomedical Engineering

    Cartilage tissue engineers have made great inroads into understanding the factors controlling chondrogenesis; however, the biomechanical properties of tissue engineered cartilage (TEC) remain chronically inferior to those of native cartilage. The focus of this dissertation was to determine the ability of scaffold-free TEC to withstand frictional-shear stress and, if needed, to improve that ability to a physiologically relevant level. Frictional-shear testing performed at a sub-physiological normal stress of 0.55 MPa demonstrated that constructs exhibited lubrication patterns characteristic of native cartilage, but severe damage also occurred. Low absolute collagen content and a low collagen-to-glycosaminoglycan (GAG) ratio were also found in the same constructs. Reduction in damage was attempted by increasing the collagen content of the ECM. Scaffold-free TEC treated with T4 at 25 ng/ml exhibited a statistically significant increase in collagen concentration, and the average collagen-to-GAG ratio was also increased, although statistical significance was not achieved. Western blotting showed that type II collagen was increased, while type X collagen was not detected. COL2A1 and biglycan gene expression were also found to have increased; no statistically significant difference was found for COLX gene expression. When compared to control constructs, T4-treated constructs exhibited a large and statistically significant decrease in the extent of damage incurred by frictional-shear testing. At the 2.8 MPa normal stress, total damage was reduced by 60% in the 2-month constructs. Correlation coefficients calculated between compositional properties and the amount of damage showed that at the 2.8 MPa normal stress, collagen concentration and the collagen-to-GAG ratio exhibited the greatest correlation to damage (correlation coefficients of approximately -0.7, with 95% confidence intervals of approximately -0.87 to -0.38 for both). 
In conclu (open full item for complete abstract)
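The composition-damage correlations reported above (r ≈ -0.7 with a 95% confidence interval of roughly -0.87 to -0.38) have the form of a Pearson coefficient with a Fisher z-transform interval. The sketch below is a generic illustration of that computation, not the dissertation's analysis code; the function names are illustrative:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for r via the Fisher z-transform:
    z = atanh(r) is roughly normal with standard error 1/sqrt(n - 3)."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)
```

With small samples the interval is wide, which is consistent with the fairly broad interval quoted in the abstract.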

    Committee: James Dennis Ph.D. (Advisor); Joseph Mansour Ph.D. (Advisor); Horst von Recum Ph.D. (Committee Chair); Eben Alsberg Ph.D. (Committee Member) Subjects: Biomechanics; Biomedical Engineering; Biomedical Research; Engineering; Materials Science
  • 17. Han, Kun Supervised Speech Separation And Processing

    Doctor of Philosophy, The Ohio State University, 2014, Computer Science and Engineering

    In real-world environments, speech often occurs simultaneously with acoustic interference, such as background noise or reverberation. The interference usually has adverse effects on speech perception and degrades performance in many speech applications, including automatic speech recognition and speaker identification. Monaural speech separation and processing aim to separate or analyze speech from interference based on only one recording. Although significant progress has been made on this problem, it remains a widely recognized challenge. Unlike traditional signal processing, this dissertation addresses the speech separation and processing problems using machine learning techniques. We first propose a classification approach to estimate the ideal binary mask (IBM), which is considered a main goal of sound separation in computational auditory scene analysis (CASA). We employ support vector machines (SVMs) to classify time-frequency (T-F) units as either target-dominant or interference-dominant. A rethresholding method is incorporated to improve classification results and maximize the hit minus false-alarm rate. Systematic evaluations show that the proposed approach produces accurate estimated IBMs. In a supervised learning framework, generalization to conditions different from those seen in training is very important. We then present methods that require only a small training corpus and can generalize to unseen conditions. The system utilizes SVMs to learn classification cues and then employs a rethresholding technique to estimate the IBM. A distribution-fitting method is introduced to generalize to unseen signal-to-noise ratio conditions, and adaptation based on voice activity detection is used to generalize to unseen noise conditions. In addition, we propose a novel metric learning method to learn invariant speech features in the kernel space. 
The learned features encode speech-related information and can generalize to unseen noise (open full item for complete abstract)
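The IBM and the hit minus false-alarm score described above follow standard CASA definitions: a T-F unit is target-dominant when its local SNR exceeds a criterion (often 0 dB). The minimal sketch below illustrates those definitions generically; it is not the dissertation's implementation, and the function names are illustrative:

```python
import math

def ideal_binary_mask(target_energy, noise_energy, lc_db=0.0):
    """Label each time-frequency unit target-dominant (1) or
    interference-dominant (0) by comparing its local SNR in dB
    to a local criterion (LC), conventionally 0 dB."""
    mask = []
    for s, n in zip(target_energy, noise_energy):
        snr_db = 10.0 * math.log10(s / n)
        mask.append(1 if snr_db > lc_db else 0)
    return mask

def hit_minus_fa(estimated, ideal):
    """HIT - FA: fraction of target-dominant units correctly labeled 1,
    minus fraction of interference-dominant units wrongly labeled 1."""
    hits = sum(1 for e, i in zip(estimated, ideal) if i == 1 and e == 1)
    fas = sum(1 for e, i in zip(estimated, ideal) if i == 0 and e == 1)
    n_target = sum(ideal)
    n_noise = len(ideal) - n_target
    return hits / n_target - fas / n_noise
```

Rethresholding a classifier's decision scores amounts to choosing the operating point that maximizes this HIT - FA quantity rather than raw accuracy.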

    Committee: DeLiang Wang (Advisor); Eric Fosler-Lussier (Committee Member); Mikhail Belkin (Committee Member) Subjects: Computer Science
  • 18. Plis, Kevin The Effects of Novel Feature Vectors on Metagenomic Classification

    Master of Science (MS), Ohio University, 2014, Computer Science (Engineering and Technology)

    Metagenomics plays a crucial role in our understanding of the world around us. Machine learning and bioinformatics methods have struggled to accurately identify the organisms present in metagenomic samples. By using improved feature vectors, higher classification accuracy can be achieved when using the machine learning classification approach to identify the organisms present in a metagenomic sample. This research is a pilot study that explores novel feature vectors and their effect on metagenomic classification. A synthetic data set was created using the genomes of 32 organisms from the Archaea and Bacteria domains, with 450 fragments of varying length per organism used to train the classification models. By using a novel feature vector one-tenth the size of currently used feature vectors, improvements of 6.34%, 21.91%, and 15.07% in species-level accuracy were found on 100, 300, and 500 bp fragments, respectively, for this data set. The results of this study also show that using more features does not always translate to higher classification accuracy, and that higher classification accuracy can be achieved through feature selection.
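The abstract does not specify the feature representation. A common baseline for classifying metagenomic fragments is a normalized k-mer frequency vector, sketched below purely as an assumption about what such a feature vector might look like; the novel, smaller vectors studied in the thesis are not reproduced here:

```python
from collections import Counter

def kmer_features(sequence, k=3):
    """Frequency vector of length-k substrings (k-mers) of a DNA fragment,
    normalized by the total k-mer count so that fragments of different
    lengths (e.g. 100 bp vs. 500 bp) yield comparable features."""
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}
```

A full k-mer vector has up to 4^k dimensions, which is why reducing the vector size (as this study does) matters for training cost.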

    Committee: Lonnie Welch PhD (Advisor) Subjects: Artificial Intelligence; Bioinformatics; Computer Science
  • 19. Wehmann, Adam A Spatial-Temporal Contextual Kernel Method for Generating High-Quality Land-Cover Time Series

    Master of Arts, The Ohio State University, 2014, Geography

    In order to understand the variability, drivers, and effects of the currently unprecedented rate, extent, and intensity of land-cover change, land change science requires remote sensing products that are both highly accurate and spatial-temporally consistent. This need for accuracy is exacerbated by the discipline's shift from detecting change between two points in time to analyzing trajectories of change over time. As the length of the temporal record increases, the problem becomes more severe. This follows because the accuracy of change detection is bounded below by the product of the accuracies of the source maps. Without exceedingly high accuracy at individual dates, the accuracy of change detection will be low, as map errors vastly outweigh the occurrence of real change. Land-cover classifiers that can better utilize spatial and temporal information offer the chance to increase the accuracy of change detection and the consistency of classification results. By increasing the spatial and temporal dependence of errors between classification maps, the overall area among maps subject to error may be minimized, producing higher-quality land-cover products. Such products enable more accurate and consistent detection, monitoring, and quantification of land-cover change, and can therefore have wide-reaching impacts on downstream environmental, ecological, and social research. To address these problems, fundamental to the creation of land-cover products, this thesis seeks to develop a novel contextual classifier for multi-temporal land-cover mapping that fully utilizes spatial-temporal information to increase the accuracy of change detection, while remaining robust to future advances in the spatial and spectral characteristics of remote sensor technology. 
By combining the complementary strengths of two leading techniques for the classification of land cover – the Support Vector Machine and the Markov Random Field – through a novel s (open full item for complete abstract)
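The bound invoked above — change-detection accuracy limited by the product of the per-date map accuracies — can be illustrated with simple arithmetic, assuming independent map errors (the function name is illustrative, not from the thesis):

```python
def change_detection_accuracy_bound(map_accuracies):
    """Joint accuracy across dates under independent errors: a pixel's
    change label is reliable only if every source map classified it
    correctly, which happens with probability equal to the product of
    the per-map accuracies. Correlated errors can only raise this, so
    the product acts as a baseline (lower) bound."""
    bound = 1.0
    for a in map_accuracies:
        bound *= a
    return bound
```

For example, two maps that are each 85% accurate agree correctly on only about 72% of pixels under independent errors, leaving roughly 28% of the area prone to spurious change — typically far more than the real change present, which is the motivation for classifiers that increase the spatial-temporal dependence of errors.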

    Committee: Desheng Liu (Advisor); Ningchuan Xiao (Committee Member); Brian Kulis (Committee Member) Subjects: Computer Science; Geographic Information Science; Geography; Remote Sensing
  • 20. Kothiyal, Prachi Detection and Classification of Sequence Variants for Diagnostic Evaluation of Genetic Disorders

    PhD, University of Cincinnati, 2010, Engineering : Biomedical Engineering

    Identifying and cataloguing individual and population-level DNA sequence variations is a critical step towards understanding the genetic basis of disease and clinically significant human variation. Recent advances in molecular microarray technology have made it feasible to rapidly screen DNA samples for possible genetic mutations. This dissertation focuses on evaluating the efficacy of resequencing arrays as a tool for variant detection and proposes mechanistic bases and computational algorithms that can be employed to improve performance. We present results from hearing loss arrays developed in two different research facilities and highlight some of the approaches we adopted to enhance the applicability of the arrays in a clinical setting. We leveraged sequence and intensity-pattern features responsible for diminished coverage and accuracy and developed a novel algorithm, sPROFILER, which resolved >80% of no-calls from Affymetrix™ GSEQ and allowed 99.6% (range: 99.2-99.8%) of sequence to be called, while maintaining overall accuracy at >99.8% based on dideoxy sequencing comparison. We implemented a bioinformatics pipeline incorporating sPROFILER to support clinical genetic testing of hearing loss patients at the Cincinnati Children's Hospital Medical Center. The utility of any molecular diagnostic tool in determining the genetic basis of a disease is fully realized only when an effective variant detection method is complemented by a rigorous framework for evaluating the potential clinical significance of the detected variants. We evaluated the contribution of various properties related to amino acid substitution in determining whether a residue change is damaging. We developed a machine-learning-based framework to assess the functional impact of missense variants, using childhood sensorineural hearing loss and hypertrophic/dilated cardiomyopathy as specific instances of application of the methodology. 
We compared our method with some of the repres (open full item for complete abstract)

    Committee: Bruce Aronow PhD (Committee Chair); Marepalli Rao PhD (Committee Member); John Greinwald Jr., MD (Committee Member) Subjects: Bioinformatics