Search Results

(Total results 32)


  • 1. Seltzer, Gregory Measured Phase History Data for Target Recognition Studies

    Master of Science in Electrical Engineering (MSEE), Wright State University, 2024, Electrical Engineering

    Performing automatic target recognition (ATR) on full-size aircraft targets using inverse synthetic aperture radar (ISAR) data is challenging and expensive. Using scale models of such large targets, measured with a correspondingly scaled radar, saves time and reduces facility requirements. This study examines the feasibility of performing ATR on 1:144 scale model airplanes at Ka-band; the scale model and Ka-band radar simulate the collection of full-scale targets at VHF-band. The phase history measurement collections were completed in the Sensors and Signals Exploitation Laboratory (SSEL) at Wright State University. To ensure sufficient data for training and testing, the phase history data was augmented through mathematical translation and rotation of the scene. These augmented images were processed using the polar format algorithm and subsequently classified using support vector machines and convolutional neural networks. The resulting ATR models achieved a classification accuracy of over 82 percent for all aircraft types, except for the very similar B747-8 and B747-8F, which exhibited misclassification rates consistent with expectations for such similar targets.

    Committee: Michael A. Saville Ph.D. (Advisor); Cheryl B. Schrader Ph.D. (Committee Member); Michael L. Raymer Ph.D. (Committee Member); Josh Ash Ph.D. (Committee Member) Subjects: Electrical Engineering
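The sub-scale measurement idea above rests on a simple relation: electrical size is preserved when frequency scales inversely with geometric scale. A minimal sketch of that arithmetic, with an assumed Ka-band center frequency (the thesis does not state the exact value here):

```python
# Scale-model frequency relation (illustrative assumption, not thesis values):
# measuring a 1:144 model at Ka-band emulates a full-size target at a
# 144x lower frequency, because electrical size (target size in wavelengths)
# is preserved when frequency scales inversely with geometric scale.
SCALE = 144                        # 1:144 scale model
f_model_hz = 35e9                  # assumed Ka-band center frequency (26.5-40 GHz)

f_full_hz = f_model_hz / SCALE     # simulated full-scale frequency
print(f"simulated full-scale frequency: {f_full_hz / 1e6:.0f} MHz")
```

At 35 GHz this lands near 243 MHz, squarely in the VHF band (30-300 MHz), consistent with the abstract's Ka-band-to-VHF correspondence.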
  • 2. Singh, Harshdeep AI-Enabled Hardware Security Approach for Aging Classification and Manufacturer Identification of SRAM PUFs

    Master of Science (MS), Wright State University, 2024, Computer Science

    Semiconductor microelectronics integrated circuits (ICs) are increasingly integrated into modern life-critical applications, from intelligent infrastructure and consumer electronics to the Internet of Things (IoT) and advanced military and medical systems. Unfortunately, these applications are vulnerable to new hardware security attacks, including microelectronics counterfeits and hardware modification attacks. Physical Unclonable Functions (PUFs) are state-of-the-art hardware security solutions that utilize process variations of integrated circuits for device authentication, secret key generation, and microelectronics counterfeit detection. The negative impact of aging on Static Random Access Memory Physical Unclonable Functions (SRAM PUFs) has significant consequences for microelectronics authentication, security, and reliability. This research thoroughly examines the effect of aging on the reliability of SRAM PUFs used for secure and trusted microelectronics integrated circuit applications. It first provides an overview of SRAM PUFs, highlighting their significance and essential features while addressing encountered challenges. The study then covers existing mitigation techniques, including multi-modal PUFs, that boost the resilience of SRAM PUFs against aging, highlighting their advantages and the research gap addressed here. This work proposes a novel AI-enabled security approach for reliable SRAM PUFs. The proposed approach studies and counters the impact of aging on SRAM PUFs by analyzing data samples covering Bias Temperature Instability (BTI), bit flips, accelerated aging, and Hot Carrier Injection (HCI), and their effects on SRAM PUF cell properties and output. Accelerated aging is a direct result of a change in environmental temperature and voltage for a few hours. We aim to mitigate the impact of accelerated aging on the reliability of authentication and encryption keys of (open full item for complete abstract)

    Committee: Fathi Amsaad Ph.D. (Advisor); Wen Zhang Ph.D. (Committee Member); John Emmert Ph.D. (Committee Member) Subjects: Computer Engineering; Computer Science; Electrical Engineering; Information Technology; Technology
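A common way to quantify the aging degradation described above is the intra-device bit error rate: the fraction of response bits that flip between the enrolled "golden" power-up response and a later, aged readout. A hedged sketch with synthetic data (not the thesis code; the 6% flip rate is an illustrative assumption):

```python
import numpy as np

# Synthetic SRAM PUF aging check: compare an enrolled power-up response
# against an aged readout and report the intra-device bit error rate (BER).
rng = np.random.default_rng(0)

golden = rng.integers(0, 2, size=1024)       # enrolled "golden" response bits
flips = rng.random(1024) < 0.06              # assumed ~6% aging-induced bit flips
aged = np.where(flips, 1 - golden, golden)   # aged power-up response

ber = np.mean(golden != aged)                # intra-device bit error rate
reliability = 1.0 - ber
print(f"BER = {ber:.3f}, reliability = {reliability:.3f}")
```

A rising BER across aged readouts is exactly the degradation an AI-based countermeasure would learn to detect or compensate.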
  • 3. Marapakala, Shiva Machine Learning Based Average Pressure Coefficient Prediction for ISOLATED High-Rise Buildings

    Master of Science in Mechanical Engineering, Cleveland State University, 2023, Washkewicz College of Engineering

    In structural design, the distribution of wind-induced pressure exerted on structures is crucial. The pressure distribution for a particular building is often determined by scale-model tests in boundary layer wind tunnels (BLWTs). Experiments with BLWTs must be done for all combinations of interesting building shapes and wind factors. Because this procedure can be time- and resource-intensive, resource or physical testing restrictions may limit the acquisition of needed data. A trustworthy method to cyber-enhance data-collecting operations in BLWTs is therefore sought. This research analyzes how machine learning approaches may improve traditional BLWT modeling to increase the information obtained from tests while proportionally lowering the work needed to complete them. The more general question centers on how a machine learning-enhanced method ultimately leads to approaches that learn as data are collected and subsequently optimize the execution of experiments to shorten the time needed to complete user-specified objectives. Three different machine learning models, namely support vector regressors, gradient boosting regression trees (GBRT), and feed-forward deep neural networks (DNN), were used to predict the surface-averaged mean pressure coefficients (Cp) on isolated high-rise buildings. The models were trained to predict average Cp for missing wind angles and for varying building dimensions. Both global and local approaches to training the models were used and compared. The Tokyo Polytechnic University's Aerodynamic Database for Isolated High-rise Buildings was used to train all the models in this study. Local and global prediction approaches were used for the DNN and GBRT models, and no considerable difference was found between them. The DNN model showed the best accuracy (R2 > 99%, MSE < 1.5%) among the models used, for both missing angles and missing dimensions; the other two models also showed high accuracy (R2 > 97%, MSE < 4%).

    Committee: Navid Goudarzi (Committee Chair); Prabaha Sikder (Committee Member); Mustafa Usta (Committee Member) Subjects: Artificial Intelligence; Design; Engineering; Urban Planning
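The regression setup above can be sketched with one of the three model families. This is a toy illustration on synthetic data (not the TPU aerodynamic database): a gradient-boosting regressor mapping wind angle and an assumed aspect-ratio feature to a surface-averaged Cp, evaluated with R2 on held-out samples:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in for the wind-tunnel database: features are wind angle
# and a building aspect ratio; the target is a smooth toy Cp surface.
rng = np.random.default_rng(1)
angle = rng.uniform(0, 90, 500)                  # wind angle in degrees
aspect = rng.uniform(1, 5, 500)                  # height/width ratio (assumed feature)
X = np.column_stack([angle, aspect])
cp = -0.5 + 0.3 * np.cos(np.radians(angle)) - 0.05 * aspect  # toy ground truth

# Train on 400 cases, predict the 100 held-out "missing" configurations.
model = GradientBoostingRegressor(random_state=0).fit(X[:400], cp[:400])
pred = model.predict(X[400:])
print("held-out R^2:", r2_score(cp[400:], pred))
```

Swapping in an SVR or a small feed-forward network against the same split is how the three families would be compared.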
  • 4. Ogundeji, Seyi Object-Based Classification of Unmanned Aerial Vehicles (UAVs)/Drone Images to monitor H2Ohio Wetlands

    Master of Science, University of Toledo, 2022, Geology

    Wetlands are referred to as the kidneys of the catchment due to their ability to reduce nutrient loads to adjoining water bodies, hence mitigating eutrophication. The Ohio government has employed this beneficial mechanism as part of the H2Ohio program to abate the immediate release of nutrients into water bodies by reconstructing wetlands at a number of locations in the Maumee watershed. We are using a combination of Unmanned Aerial Vehicles (UAVs), machine learning, and field mapping to generate maps of wetland vegetation communities, which assist in establishing the effectiveness of these restored sites in nutrient removal. The vegetation in most of these wetlands can take in and recycle most of the nutrients (especially nitrogen and phosphorus) from the incoming runoff, thereby reducing the nutrient loads into the water bodies. Of 82 wetland restoration projects in northwest Ohio, this research focuses on two (Forder Bridge and Oakwoods Nature Preserve). High-resolution near-infrared (NIR) and visible images acquired with a UAV were corrected geometrically and radiometrically in Agisoft Metashape to generate an orthomosaic and a digital elevation model (DEM). These products are combined and then segmented into homogeneous units (objects) based on similarities in shape, scale, color, smoothness, texture, etc. Object-based classifiers, including Support Vector Machine (SVM) and Random Forest (RF), were then used to classify the land cover into desired classes based on field sampling. Each class represents a vegetation type or other land cover (such as water or bare ground) present in the wetland. The accuracy of the classification model was tested with held-back field validation data using a confusion matrix. The entire process was compared across varying flight altitudes (100 ft, 200 ft, and 300 ft), two object-based image analysis (OBIA) software packages (eCognition and Orfeo Toolbox (OTB)), and two wetland sites (Forder Bridge and Oakwoods).
At Forder Bridge, the kappa coe (open full item for complete abstract)

    Committee: Richard Becker (Committee Chair); Kennedy Doro (Committee Member); Kevin Czajkowski (Committee Member) Subjects: Environmental Science; Remote Sensing
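The accuracy-assessment step above (confusion matrix from held-back field validation, summarized by the kappa coefficient) can be sketched directly. The matrix below is an illustrative toy, not the thesis data:

```python
import numpy as np

# Cohen's kappa: chance-corrected agreement between reference (field) labels
# and predicted land-cover classes, computed from a confusion matrix.
def cohens_kappa(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2    # agreement expected by chance
    return (po - pe) / (1 - pe)

# Toy 3-class confusion matrix: rows = reference, cols = predicted
# (classes could be e.g. emergent vegetation / open water / bare ground).
cm = [[50, 2, 3],
      [4, 40, 1],
      [2, 3, 45]]
print(f"kappa = {cohens_kappa(cm):.3f}")
```

Kappa near 1 means agreement far above chance; comparing kappa across flight altitudes and OBIA packages is how the configurations would be ranked.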
  • 5. Abhishek, Abhinav Cyberbullying Detection Using Weakly Supervised and Fully Supervised Learning

    Master of Computer and Information Science, Cleveland State University, 2022, Washkewicz College of Engineering

    Machine learning is a very useful tool for solving issues in multiple domains such as sentiment analysis, fake news detection, facial recognition, and cyberbullying. In this work, we have leveraged its ability to understand the nuances of natural language to detect cyberbullying. We have further utilized it to detect the subject of cyberbullying, such as age, gender, ethnicity, and religion, and built another layer to detect cases of misogyny. In one of our experiments, we created a three-layered architecture to detect cyberbullying, then to detect whether it is gender-based, and finally whether it is a case of misogyny or not. In each experiment we trained models with support vector machines, RNN-LSTM, BERT, and DistilBERT, and evaluated them using multiple performance measures (accuracy, bias, mean square error, recall, precision, and F1 score) to assess each model in terms of bias and fairness. In addition to fully supervised learning, we also used weakly supervised learning techniques to detect cyberbullying and its subject. Finally, we compared the performance of models trained using fully supervised and weakly supervised learning algorithms; this comparison further demonstrated that with weak supervision we can develop models to handle complex use cases such as cyberbullying. The thesis concludes by describing lessons learned, future work recommendations, and concluding remarks.

    Committee: Sathish Kumar, Ph.D. (Committee Chair); Hongkai Yu, Ph.D. (Committee Member); Chansu Yu, Ph.D. (Committee Member) Subjects: Computer Science
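The fully supervised first layer described above can be sketched as a standard text-classification pipeline. This is a toy on invented sentences (not the thesis corpus or its exact models), using TF-IDF features and a linear SVM:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy cyberbullying detector: TF-IDF features feeding a linear SVM.
texts = ["you are worthless and everyone hates you",
         "great job on the presentation today",
         "nobody wants you here, just leave",
         "thanks for helping me with the homework",
         "you are so stupid it hurts",
         "looking forward to the game this weekend"]
labels = [1, 0, 1, 0, 1, 0]     # 1 = cyberbullying, 0 = benign (toy labels)

clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, labels)
print(clf.predict(["you are stupid and worthless"]))
```

The second and third layers (gender-based, then misogyny) would be further classifiers of the same shape, applied only to texts the previous layer flags.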
  • 6. Li, Jiasen Prediction of Electricity Price Quotation Data of Prioritized Clean Energy Power Generation of Power Plants in The Buyer's Market

    MS, University of Cincinnati, 2021, Engineering and Applied Science: Computer Science

    Electric power is one of the most important energy sources in the world. A stable supply of electric power plays an important role in production, work, and daily life; people in all walks of life can do nothing without electricity. If the power system is unstable and a power failure occurs, the affected area is essentially paralyzed: communication is blocked, production cannot proceed, hospitals cannot treat patients, and people's lives cannot be guaranteed. It is therefore very important to ensure a continuous, reliable, and stable supply of electricity. In recent years, with the continuous development of global electricity market reform, electricity has become a freely traded commodity whose price changes in real time, so electricity price has become a central concern. Generally, load and power generation are affected by meteorological factors such as temperature, wind speed, and precipitation, as well as by the intensity of business and daily activities (for example, weekends and the hour of the day). Electricity prices therefore show seasonal and highly complex volatility on different time scales (daily, weekly, and annual), and there are often sudden, short-term price spikes. The fluctuation of electricity prices makes it more difficult to predict the behavior of participants in the power market and increases the risk of imbalance between supply and demand, which affects the stability of power grid operation. In the market-oriented environment of power trade, accurate price forecasting is of great significance to all stakeholders in the power market. Buyers in the electricity trading market are more likely to obtain higher profits by acquiring accurate price information in advance.
From the perspective of power consumers, such as some factories with large power consumption, electricity price occupies a key part of their pr (open full item for complete abstract)

    Committee: Yizong Cheng Ph.D. (Committee Chair); Ali Minai Ph.D. (Committee Member); Boyang Wang (Committee Member) Subjects: Computer Science
  • 7. Sysoeva, Viktoriia Hidden Markov Model-Supported Machine Learning for Condition Monitoring of DC-Link Capacitors

    Master of Science, Miami University, 2020, Computational Science and Engineering

    Power electronics are critical components in society's modern infrastructure. In electrified vehicles and aircraft, losing power jeopardizes personal safety and incurs financial penalties. Because of these concerns, many researchers explore condition monitoring (CM) methods that provide real-time information about a system's health. This thesis develops a CM method that determines the health of a DC-link capacitor in a three-phase inverter. The approach uses measurements from a current transducer in two Machine Learning (ML) algorithms, a Support Vector Machine (SVM) and an Artificial Neural Network (ANN), that classify the data into groups corresponding to the capacitor's health. This research evaluates six sets of data: time-domain, frequency-domain, and frequency-domain data subjected to four smoothing filters: the moving average with a rectangular window (MARF) and with a Hanning window, locally weighted linear regression, and the Savitzky-Golay filter. The results show that both ML algorithms estimate DC-link capacitor health well, with the highest accuracy being 91.8% for the SVM and 90.7% for the ANN. The MARF-smoothed data is an optimal input data type for the ML classifiers due to its low computational cost and high accuracy. Additionally, a Hidden Markov Model increases the classification accuracy up to 98% when utilized with the ANN.

    Committee: Mark Scott Dr. (Advisor); Chi-Hao Cheng Dr. (Committee Member); Peter Jamieson Dr. (Committee Member) Subjects: Computer Engineering; Electrical Engineering
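The HMM step above can be sketched as post-processing: per-frame classifier votes are treated as noisy observations of a slowly changing health state, and Viterbi decoding recovers the most likely state sequence. The matrices below are illustrative assumptions, not the thesis values:

```python
import numpy as np

# Assumed HMM: health changes slowly (strong self-transitions) and the
# per-frame classifier is ~90% accurate (emission matrix).
A = np.array([[0.95, 0.05],      # transitions between healthy/degraded
              [0.05, 0.95]])
B = np.array([[0.9, 0.1],        # P(observed label | true state)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])        # initial state distribution

def viterbi(obs):
    """Most likely state sequence (0 = healthy, 1 = degraded) for obs labels."""
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = []
    for o in obs[1:]:
        trans = logd[:, None] + np.log(A)      # [prev, next] log scores
        back.append(trans.argmax(0))           # best predecessor per next state
        logd = trans.max(0) + np.log(B[:, o])
    path = [int(logd.argmax())]
    for bp in reversed(back):                  # backtrack
        path.append(int(bp[path[-1]]))
    return path[::-1]

noisy = [0, 0, 1, 0, 0, 1, 1, 1, 0, 1]   # raw classifier labels with glitches
print(viterbi(noisy))
```

Isolated glitches get smoothed away because a single emission miss is cheaper than two state transitions, which is the mechanism behind the accuracy gain reported with the ANN.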
  • 8. TU, SHANSHAN Case Influence and Model Complexity in Regression and Classification

    Doctor of Philosophy, The Ohio State University, 2019, Statistics

    Case influence and model complexity play very important roles in model diagnostics and model comparison. They have been extensively studied in linear regression and generalized linear models (GLMs). In this dissertation, we focus on how to assess case influence and estimate model complexity for penalized M-estimators with non-smooth loss functions in regression and classification. Cook's distance is commonly used for case influence assessment in least squares regression. It measures the overall change in the fitted model when one case is deleted from the data. Unlike least squares regression, however, the relation between the full-data solution and the leave-one-out (LOO) solution is not explicit for general penalized M-estimators, which makes the computation challenging. We propose a new algorithm that relates the full-data solution to the LOO solution through a case-weight adjusted solution path. We take penalized quantile regression and the support vector machine (SVM) as examples in regression and classification, respectively. Resorting to the homotopy technique in optimization, we introduce a case weight for each individual data point as a continuous embedding parameter and decrease the weight gradually from one to zero to link the estimators based on the full data and those with a case deleted. We show that the case-weight adjusted solution path is piecewise linear in the weight parameter. This allows us to compute all LOO estimators efficiently. Moreover, we can use the solution path to generate case influence graphs and perform LOO cross-validation for model selection. Case influence measures for classification methods are understudied in the literature. We propose a variety of overall case influence measures for large margin classifiers and empirically find that some loss functions are quite effective in assessing case influence.
Moreover, we demonstrate using real-world datasets that the proposed method is able to detect outliers in the feature spac (open full item for complete abstract)

    Committee: Yoonkyung Lee (Advisor); Yunzhang Zhu (Advisor); Steve MacEachern (Committee Member); Mario Peruggia (Committee Member) Subjects: Statistics
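The least-squares baseline that this dissertation generalizes can be shown concretely: in ordinary least squares, Cook's distance has a closed form from residuals and leverages, so no refitting is needed. For non-smooth penalized M-estimators that shortcut is unavailable, which motivates the case-weight path. A sketch on synthetic data with one planted outlier:

```python
import numpy as np

# Closed-form Cook's distance in least squares (no LOO refits required):
# D_i = e_i^2 / (p * s^2) * h_i / (1 - h_i)^2, with h_i the leverage.
rng = np.random.default_rng(2)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(scale=0.5, size=n)
y[0] += 5.0                                    # plant one influential outlier

H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix
h = np.diag(H)                                 # leverages
resid = y - H @ y                              # residuals
s2 = resid @ resid / (n - p)                   # error variance estimate
cooks = resid**2 / (p * s2) * h / (1 - h)**2   # all n distances at once

print("most influential case:", int(np.argmax(cooks)))
```

The case-weight adjusted solution path plays the role of this closed form when the loss is, e.g., the quantile or hinge loss.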
  • 9. Srinivasan, Ramprakash Computational Models of the Production and Perception of Facial Expressions

    Doctor of Philosophy, The Ohio State University, 2018, Electrical and Computer Engineering

    By combining different facial muscle actions, called Action Units (AUs), humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science have long hypothesized the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional Magnetic Resonance Imaging (fMRI), we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the pattern analysis decoder. Research in face perception and emotion theory requires very large annotated databases of images of facial expressions of emotion. Useful annotations include AUs and their intensities, as well as emotion category. This process cannot be practically achieved manually. Herein, we present a novel computer vision algorithm to annotate a large database of a million images of facial expressions of emotion from the wild (i.e., face images downloaded from the Internet). We further use WordNet to download 1,000,000 images of facial expressions with associated emotion keywords from the Internet. The downloaded images are then automatically annotated with AUs, AU intensities and emotion categories by our algorithm. The result is a highly useful database that can be readily queried using semantic descriptions for applications in computer vision, affective computing, social and cognitive psychology. Color is a fundamental image feature of facial expressions. 
For example, when we furrow our eyebrows in anger, blood rushes in and a reddish color (open full item for complete abstract)

    Committee: Aleix Martinez (Advisor); Julie Golomb (Committee Member); Yuan Zheng (Committee Member) Subjects: Cognitive Psychology; Computer Engineering; Computer Science; Social Psychology
  • 10. Shrestha, Ujjwal Automatic Liver and Tumor Segmentation from CT Scan Images using Gabor Feature and Machine Learning Algorithms

    Master of Science, University of Toledo, 2018, Engineering (Computer Science)

    Automatic segmentation of the liver is a difficult task, and segmenting tumors from the liver adds an extra dimension of difficulty. It is impractical to segment the liver and tumors from abdominal Computed Tomography (CT) images based on gray levels or shape alone because of the overlap in intensity and the variability in position and shape of soft tissues. To deal with these issues, this thesis proposes a more efficient method of liver and tumor segmentation from CT images using Gabor Features (GF) and three different machine learning algorithms: Random Forest (RF), Support Vector Machine (SVM), and Deep Neural Network (DNN). The texture information provided by GF is expected to be uniform and consistent across multiple slices of the same organ. In this thesis, first, an array of Gabor filters is used to extract pixel-level features. Second, liver segmentation is performed to extract the liver from the abdominal CT image using three different classifiers, RF, SVM, and DNN, trained on GF. Finally, tumor segmentation is done on the segmented liver image using GF and the same set of classifiers. The Gabor filter mimics perception in the human visual system (HVS), and all of the mentioned classification algorithms are robust and accurate ML techniques that have been applied to pixel-wise segmentation. Thirty-one CT image slices from 3D-IRCADb (3D Image Reconstruction for Comparison of Algorithm Database) were used to validate the proposed method. The classification accuracy for liver segmentation was 99.55%, 97.88%, and 98.13% for RF, SVM, and DNN, respectively, while the accuracy for tumor segmentation on the extracted liver segment was 99.52%, 98.07%, and 98.45% for RF, SVM, and DNN, respectively.
The Dice Similarity Coefficient (DSC) for liver segmentation was 99.03%, 96.79%, and 97.11% for RF, SVM, and DNN, respectively, while the DSC for tumor segmentation on the extracted liver segment was 99.43%, 96.18%, and (open full item for complete abstract)

    Committee: Ezzatollah Salari (Committee Chair); Junghwan Kim (Committee Member); Jackson Carvalho (Committee Member) Subjects: Computer Science
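The pixel-level Gabor feature step above can be sketched with a single kernel; a full filter bank would sweep the orientation and wavelength parameters. This is an illustrative sketch (not the thesis code), filtering a toy image via FFT convolution:

```python
import numpy as np

# One 2D Gabor kernel: a Gaussian envelope modulating a cosine carrier.
def gabor_kernel(size=21, theta=0.0, lam=8.0, sigma=4.0, gamma=0.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
           * np.cos(2 * np.pi * xr / lam)

img = np.zeros((64, 64))
img[:, 28:36] = 1.0                    # toy "tissue": one vertical bright band

k = gabor_kernel(theta=0.0)            # theta=0: responds to vertical structure
pad = np.zeros_like(img)
pad[:21, :21] = k                      # embed kernel for FFT (circular) convolution
resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

feat = np.array([resp.mean(), resp.std()])   # simple per-region Gabor features
print(feat)
```

In the pixel-wise setting, each pixel's feature vector stacks the responses of every (theta, lam) pair in the bank, and that vector is what RF/SVM/DNN classify.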
  • 11. Dalvi, Aditi Performance of One-class Support Vector Machine (SVM) in Detection of Anomalies in the Bridge Data

    MS, University of Cincinnati, 2017, Engineering and Applied Science: Electrical Engineering

    At a time when Structural Health Monitoring (SHM) is of vital importance for the safety and maintenance of critical structures such as bridges, detecting damage or anomalies, as well as analyzing the normal behavior of structures, has gained significance. Data models have been increasingly used to track the normal behavior of structures and hence detect and classify anomalies. Many machine learning algorithms have been proposed by various researchers to model operational and functional changes in structures; however, only a limited number of studies were applied to actual measurement data, owing to limited access to long-term measurements of structures. Structural Health Monitoring of civil infrastructure such as highway bridges, during construction or in-service use, is carried out at the University of Cincinnati Infrastructure Institute (UCII), giving access to actual bridge measurement data. The essence of this SHM system lies in the processing of data, where it is able to detect anomalies. The current system uses linear regression to detect outliers in the bridge data. This study introduces a novel anomaly detection method employing one-class Support Vector Machines (SVMs) and compares its performance with the traditional regression model. The method is applied to measurement data from the in-service Ironton-Russell Bridge monitored by UCII, with its results compared against linear regression as a case study. The method is further applied to the Ironton-Russell Replacement Bridge, which UCII has been monitoring since the construction stage; the actual construction events of the Replacement Bridge are used as validation for the comparison. The aim is to show the advantages of employing SVMs, owing to their ability to classify damage even with minimal training data.
The results show that using SVMs will improve the detectability and also the (open full item for complete abstract)

    Committee: Arthur Helmicki Ph.D. (Committee Chair); Victor Hunt Ph.D. (Committee Member); Ali Minai Ph.D. (Committee Member) Subjects: Electrical Engineering
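The one-class SVM idea above can be sketched on synthetic sensor-like data (not the UCII bridge measurements): train on normal readings only, then flag points that fall outside the learned boundary as anomalies.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Synthetic "normal behavior" training set: 2D sensor readings around a
# stable operating point. nu bounds the fraction of training points
# allowed outside the boundary.
rng = np.random.default_rng(3)
normal = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(200, 2))
test = np.vstack([rng.normal(scale=0.1, size=(5, 2)),    # 5 normal-like points
                  [[1.0, 1.0], [-1.2, 0.9]]])            # 2 clear outliers

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal)
pred = clf.predict(test)      # +1 = normal, -1 = anomaly
print(pred)
```

This mirrors the appeal noted in the abstract: the model is trained on normal data alone, so no labeled damage examples are needed.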
  • 12. Shubbar, Safa Ultrasound Medical Imaging Systems Using Telemedicine and Blockchain for Remote Monitoring of Responses to Neoadjuvant Chemotherapy in Women's Breast Cancer: Concept and Implementation

    MS, Kent State University, 2017, College of Arts and Sciences / Department of Computer Science

    Malignant tumors are a worldwide concern. Breast cancer is the most common cause of death among women and is ranked as the second most serious malignant tumor in women, after lung cancer. Consequently, different techniques and technologies have been studied, researched, and developed to detect breast cancer at an early stage. Early diagnosis contributes to the preservation of lives in both developed and developing countries: the survival rate increases dramatically when cancerous tumors are discovered via a screening process before the appearance of symptoms. Monitoring the responses of breast cancer patients and detecting the presence of new lesions are therefore the main intended outcomes of this research. We use a breast ultrasound imaging technique to monitor the response of breast cancer patients who receive neoadjuvant chemotherapy (systemic therapy of breast cancer before surgical therapy), as well as to detect new tumors that may arise during treatment. In this technique, the Support Vector Machine (SVM) algorithm is used for image classification, and the regionprops function in MATLAB is used for calculating tumor size. SVM is a supervised learning method used for classification and regression problems; in this work, it serves as a binary classifier by which abnormalities in the breast tissue can be distinguished and then determined to be cancerous or not. To establish remote healthcare for monitoring tumor treatment, telecommunication infrastructure, primarily teleradiology, together with blockchain technology and smart contracts, will be used. Blockchain technology is one of the main components of the Bitcoin cryptocurrency, and a smart contract is a collection of code that governs something important or valuable on the blockchain.
This remote healthcare will be achieved through specialized medical centers as well as technologi (open full item for complete abstract)

    Committee: Austin Melton (Advisor); Angela Guercio (Committee Member); Gokarna Sharma (Committee Member) Subjects: Computer Science; Medical Imaging
  • 13. Albanwan, Hessah Remote Sensing Image Enhancement through Spatiotemporal Filtering

    Master of Science, The Ohio State University, 2017, Civil Engineering

    The analysis of time-sequence satellite images is a powerful tool in remote sensing; it is used to explore the statics and dynamics of the surface of the earth. The quality of multitemporal images is usually influenced by meteorological conditions, high surface reflectance, illumination, and satellite sensor conditions. These negative influences may produce noise and differences in radiance and appearance between images, which can affect the applications that process them. Thus, a spatiotemporal bilateral filter has been adopted in this research to enhance the quality of an image before it is used in any application. The filter takes advantage of the temporal information provided by multi-temporal images and attempts to reduce the differences between them to improve transfer learning used in classification. The classification method used here is the support vector machine (SVM). Three experiments were conducted: two on low-to-medium resolution Landsat 8 images, and the third on high-resolution images from the Planet satellite. The newly developed filter improved the accuracy of classification using transfer learning by about 5%, 15%, and 2% for the three experiments, respectively.

    Committee: Rongjun Qin (Advisor); Alper Yilmaz (Committee Member); Charles Toth (Committee Member) Subjects: Civil Engineering
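The temporal side of a spatiotemporal bilateral filter can be sketched as follows. This is an illustrative simplification (not the thesis implementation): each pixel is averaged across dates, but a date contributes strongly only if its intensity is close to the reference date's, so genuine land-cover change is kept out of the average while per-date noise is suppressed.

```python
import numpy as np

def temporal_bilateral(stack, ref_idx, sigma_r=0.1):
    """Average a (dates, H, W) stack toward the reference date, weighting
    each date per-pixel by its radiometric similarity to the reference."""
    ref = stack[ref_idx]
    w = np.exp(-((stack - ref) ** 2) / (2 * sigma_r**2))   # range weights
    return (w * stack).sum(axis=0) / w.sum(axis=0)

rng = np.random.default_rng(4)
scene = np.full((3, 32, 32), 0.4)          # 3 acquisition dates, stable scene
scene[2, :16, :] = 0.9                     # real land-cover change on date 2
stack = scene + rng.normal(scale=0.02, size=scene.shape)   # per-date noise

filtered = temporal_bilateral(stack, ref_idx=0)
print(filtered.mean())
```

In the changed region the dissimilar date gets near-zero weight, so the reference image is denoised without the change bleeding in; a full spatiotemporal version would add the usual spatial-domain and range kernels as well.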
  • 14. Kandakatla, Rajeshwari Identifying Offensive Videos on YouTube

    Master of Science (MS), Wright State University, 2016, Computer Science

    Harassment on social media has become a critical problem, and social media content depicting harassment is becoming commonplace. Video-sharing websites such as YouTube contain content that may be offensive to certain communities, insulting to a religion or race, or mocking of disabilities. These videos can also provoke and promote altercations, leading to online harassment of individuals and groups. In this thesis, we present a system that identifies offensive videos on YouTube. Our goal is to determine features that can be used to detect offensive videos efficiently and reliably. We conducted experiments using the content and metadata available for each YouTube video, such as comments, title, description, and number of views, to develop Naive Bayes and Support Vector Machine classifiers. Using a training dataset of 300 videos and a test dataset of 86 videos, we obtained a classification F-score of 0.86. It was surprising to note that the sentiment and content of the comments were less effective in detecting offensive videos than the unigrams and bigrams in the video title, and that other feature combinations did not improve performance appreciably. The simplicity of these features contributes to the efficiency of computation and implies that uploaders provide informative titles.

    Committee: Krishnaprasad Thirunarayan Ph.D. (Advisor); Amit Sheth Ph.D. (Committee Member); Valerie Shalin Ph.D. (Committee Member) Subjects: Computer Science
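The title-based classifier described above can be sketched with toy titles (not the thesis dataset): unigram and bigram counts from video titles feeding a Naive Bayes model, mirroring the finding that title n-grams alone were strong features.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy offensive-video detector on titles only, with unigrams + bigrams.
titles = ["brutal fight compilation gone wrong",
          "relaxing piano music for studying",
          "humiliating prank on homeless man",
          "how to bake sourdough bread at home",
          "shocking street fight knockout",
          "beginner yoga routine for mornings"]
labels = [1, 0, 1, 0, 1, 0]            # 1 = offensive, 0 = benign (toy labels)

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(titles, labels)
print(clf.predict(["street fight gone wrong"]))
```

`ngram_range=(1, 2)` gives exactly the unigram-plus-bigram feature space the abstract credits; an SVM drop-in replacement for `MultinomialNB` covers the second model family.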
  • 15. Shalev, Ronny AUTOMATED MACHINE LEARNING BASED ANALYSIS OF INTRAVASCULAR OPTICAL COHERENCE TOMOGRAPHY IMAGES

    Doctor of Philosophy, Case Western Reserve University, 2016, EECS - Electrical Engineering

    Coronary artery disease (CAD) is the leading cause of death in the world. Most acute coronary events (e.g., heart attacks) are due to the rupture of atherosclerotic plaques inside the arteries; calcified lesions, however, are the most widely treatable, typically by stent implantation via percutaneous coronary intervention (PCI). Intravascular Optical Coherence Tomography (IVOCT) imaging has the resolution, contrast, and penetration depth to characterize coronary artery plaques. Conventional manual evaluation of IVOCT images, based on qualitative interpretation of image features, is tedious and time consuming. The aim of this PhD dissertation was to develop advanced algorithms to fully automate the task of plaque characterization, thereby significantly reducing image analysis time, enabling intervention planning, and increasing IVOCT data usability. We based our algorithms on machine learning combined with advanced image processing techniques. We developed a processing pipeline on a 3D local region of support for estimating optical properties of atherosclerotic plaques from coronary artery IVOCT pullbacks. Performance was assessed against observer-defined standards using clinical pullback data. Values (calcium 3.58±1.74 mm⁻¹, lipid 9.93±2.44 mm⁻¹, and fibrous 1.96±1.11 mm⁻¹) were consistent with previous measurements. We then created a method to automatically classify plaque tissues as fibrous, calcified, or lipid-rich. For this multi-class problem, we used one-versus-rest SVM classifiers for each of the three plaque types, rules to exclude many voxels labeled “other,” and both physics-inspired and local texture features to classify voxels. Experiments on the clinical training data yielded 5-fold, voxel-wise accuracies of 87.7±8.6%, 96.7±4.9%, and 97.3±2.4% for calcified, lipid-rich, and fibrotic tissues, respectively.
Experiments on the independent validation data (ex-vivo image data accurately labeled using registered 3D microscopic cryo-imaging and was used as (open full item for complete abstract)
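    The optical-property estimates quoted above are attenuation coefficients in mm⁻¹. As a rough illustration of how such a coefficient can be recovered from a single IVOCT A-line, here is a sketch under a simple single-scattering model (the exponential model, sampling interval, and function name are illustrative assumptions, not the dissertation's 3D region-of-support pipeline):

```python
import numpy as np

def attenuation_coefficient(intensity, dz_mm):
    """Estimate mu (mm^-1) from one A-line, assuming I(z) ~ I0 * exp(-2*mu*z).

    Uses a log-linear least-squares fit: log I = log I0 - 2*mu*z.
    """
    z = np.arange(len(intensity)) * dz_mm
    slope, _ = np.polyfit(z, np.log(intensity), 1)
    return -slope / 2.0

# Synthetic noiseless A-line with mu = 3.5 mm^-1 (in the reported calcium range).
mu_true, dz = 3.5, 0.005
z = np.arange(200) * dz
intensity = np.exp(-2 * mu_true * z)
print(round(attenuation_coefficient(intensity, dz), 2))  # → 3.5
```

    On this noiseless synthetic A-line the fit recovers the simulated coefficient exactly; real pullback data would additionally require noise handling, depth windowing, and the local-region-of-support averaging the abstract describes.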

    Committee: David Wilson PhD (Advisor); Soumya Ray PhD (Committee Member); Hiram Bezerra PhD, MD (Committee Member); Murat Cavusoglu PhD (Committee Chair); Francis Merat PhD (Committee Member) Subjects: Artificial Intelligence; Computer Science; Medical Imaging
  • 16. Mora, Omar Morphology-Based Identification of Surface Features to Support Landslide Hazard Detection Using Airborne LiDAR Data

    Doctor of Philosophy, The Ohio State University, 2015, Civil Engineering

    Landslides are natural disasters that cause environmental and infrastructure damage worldwide. In order to reduce the future risk they pose, effective detection and monitoring methods are needed. Landslide susceptibility and hazard mapping is a method for identifying areas susceptible to landslide activity. This task is typically performed in a manual, semi-automatic, or automatic form, or a combination of these, and can be accomplished using different sensors and techniques. As landslide hazards continue to impact our environment and impede the lives of many, it is imperative to improve the tools and methods for effective and reliable detection of such events. Recent developments in remote sensing have significantly improved topographic mapping capabilities, resulting in higher spatial resolution and more accurate surface representations. Dense 3D point clouds can be directly obtained by airborne Light Detection and Ranging (LiDAR) or created photogrammetrically, allowing for better exploitation of surface morphology. The potential of extracting spatial features typical of landslides, especially small-scale failures, provides a unique opportunity to advance the landslide detection, modeling, and prediction process. This dissertation topic was motivated by three primary reasons. First, 3D data structures based on LiDAR data, including data representation, surface morphology, feature extraction, spatial indexing, and classification (in particular, shape-based grouping), offer a unique opportunity for many 3D modeling applications. Second, massive 3D data, such as point clouds or surfaces obtained by state-of-the-art remote sensing technologies, have not been fully exploited for landslide detection and monitoring. 
Third, unprecedented advances in LiDAR technology and availability to the broader mapping community should be explored at the appropriate level to assess the current and future advantages and limitations of LiDAR-based detection and modeling of land (open full item for complete abstract)

    Committee: Dorota Grejner-Brzezinska (Advisor); Charles Toth (Advisor); Tien Wu (Committee Member) Subjects: Civil Engineering
  • 17. Scherreik, Matthew A Probabilistic Technique For Open Set Recognition Using Support Vector Machines

    Master of Science in Engineering (MSEgr), Wright State University, 2014, Electrical Engineering

    Classification algorithms trained using finite sets of target and confuser data are limited by the training set. These algorithms are trained under closed set assumptions and do not account for the infinite universe of confusers found in practice. In contrast, classification algorithms developed under open set assumptions label inputs not present in the training data as unknown instead of assigning the most likely class. We present an approach to open set recognition, the probabilistic open set SVM (POS-SVM), that utilizes class posterior estimates to determine probability thresholds for classification. This is accomplished by first training an SVM in a 1-vs-all configuration on a training dataset containing only target classes. A validation set containing only data from classes present in the training set is used to iteratively determine an appropriate posterior probability threshold for each target class. The testing dataset, which contains targets present in the training data as well as several confuser classes, is first classified by the 1-vs-all SVM. If the estimated posterior for an input falls below the threshold, the input is labeled as unknown. Otherwise, it is labeled with the class resulting from the SVM decision. We apply our method to classification of synthetic ladar range images of civilian vehicles and measured infrared images of military vehicles. We show that the POS-SVM offers improved performance over other open set algorithms by allowing the use of nonlinear kernels, incorporating intuitive free parameters, and empirically determining good thresholds.
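    The train/validate/threshold procedure described above can be sketched with scikit-learn (the toy Gaussian data, the 5th-percentile threshold rule, and all names here are illustrative assumptions, not the thesis's implementation or datasets):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy two-class "target" training data: two well-separated Gaussian clusters.
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

# 1-vs-all SVM with Platt-scaled posterior estimates.
svm = SVC(kernel="rbf", probability=True, decision_function_shape="ovr",
          random_state=0)
svm.fit(X_train, y_train)

# Pick a per-class threshold from a validation set of known targets,
# e.g. a low percentile of each class's own posterior scores.
X_val = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])
y_val = np.array([0] * 30 + [1] * 30)
post = svm.predict_proba(X_val)
thresholds = np.array([np.percentile(post[y_val == c, c], 5)
                       for c in svm.classes_])

def classify_open_set(x):
    """Return the SVM label, or -1 ("unknown") if its posterior is below threshold."""
    p = svm.predict_proba(x.reshape(1, -1))[0]
    c = int(np.argmax(p))
    return c if p[c] >= thresholds[c] else -1

# A point far from both training clusters should be rejected as unknown.
print(classify_open_set(np.array([20.0, -20.0])))
```

    Setting `probability=True` gives Platt-scaled posterior estimates, which stand in for the class posteriors the abstract describes; far from all support vectors the RBF posteriors collapse toward an uninformative value, so confusers tend to fall below the per-class thresholds.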

    Committee: Brian Rigling Ph.D. (Advisor); Fred Garber Ph.D. (Committee Member); Arnab Shaw Ph.D. (Committee Member) Subjects: Electrical Engineering
  • 18. Zhao, Haitao Analyzing TCGA Genomic and Expression Data Using SVM with Embedded Parameter Tuning

    Master of Science, University of Akron, 2014, Computer Science

    High-throughput next-generation sequencing has revolutionized genomic sequencing techniques. It allows the study of thousands of genes, and even the entire exome of a given organism, simultaneously. This, along with other high-throughput technologies such as DNA microarrays, has broadened the applications of genomic sequencing and changed biomedical research in a profound way. Compared with microarrays, the big data generated by next-generation sequencing are considerably more reliable. As such, the technique has rapidly emerged as a major tool for obtaining gene mutation and expression profiles of human cancers. The availability of these big genomic data presents unique scientific challenges and opportunities. One such challenge is to understand and characterize the patterns of genomic mutation and gene expression in the different cancer types represented in the datasets. Many data mining approaches have already been developed to analyze the large datasets for feature selection and sample classification. Since mutation and gene expression profiles are noisy due to both biological and technical variations in the data, the effectiveness and robustness of a machine learning based classification system depend significantly on the nature of the input data. In this study, we explore DNA mutation and gene expression patterns in lung cancer using support vector machines with embedded parameter tuning. The two datasets used are derived from somatic mutation data and RNA-seq gene expression profiles presented in TCGA (The Cancer Genome Atlas). The embedded parameter tuning is based on data mining the training dataset using validation techniques and a committee voting approach. We show that support vector machines with tuning significantly improve robustness and classification accuracy compared to regular support vector machines. 
The approach was applied to the two datasets to explore the mutation patterns in lung adenocarcin (open full item for complete abstract)
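    The embedded tuning described above — choosing SVM hyperparameters by validating on splits of the training data only — can be sketched with scikit-learn's cross-validated grid search (the synthetic data stands in for a noisy mutation/expression matrix; the parameter grid is an illustrative assumption, not the thesis's committee-voting scheme):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a noisy samples-by-features genomic matrix.
X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Tune C and gamma on the training data only, via internal 5-fold validation.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]},
    cv=5,
)
grid.fit(X_train, y_train)

print(grid.best_params_)
print(round(grid.score(X_test, y_test), 3))
```

    The held-out test split plays the role of unseen tumor samples; the key point, as in the abstract, is that tuning touches only the training data, so the reported test accuracy remains honest.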

    Committee: Zhong-Hui Duan Dr. (Advisor); Yingcai Xiao Dr. (Committee Member); En Cheng Dr. (Committee Member) Subjects: Bioinformatics; Computer Science
  • 19. Plis, Kevin The Effects of Novel Feature Vectors on Metagenomic Classification

    Master of Science (MS), Ohio University, 2014, Computer Science (Engineering and Technology)

    Metagenomics plays a crucial role in our understanding of the world around us. Machine learning and bioinformatics methods have struggled to accurately identify the organisms present in metagenomic samples. Improved feature vectors can yield higher classification accuracy when machine learning is used to identify the organisms present in a metagenomic sample. This research is a pilot study that explores novel feature vectors and their effect on metagenomic classification. A synthetic data set was created using the genomes of 32 organisms from the Archaea and Bacteria domains, with 450 fragments of varying length per organism used to train the classification models. Using a novel feature vector one-tenth the size of currently used feature vectors, improvements of 6.34%, 21.91%, and 15.07% in species-level accuracy were found on 100, 300, and 500 bp fragments, respectively, for this data set. The results of this study also show that using more features does not always translate to higher classification accuracy, and that higher accuracy can be achieved through feature selection.
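    The abstract does not specify the feature vectors beyond their size, but k-mer frequency counts are the standard representation for classifying sequence fragments; a minimal sketch of building such a vector (the fragment and the choice of k are illustrative):

```python
from collections import Counter
from itertools import product

def kmer_features(seq, k=3):
    """Normalized k-mer frequency vector over a fixed ACGT alphabet ordering."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(len(seq) - k + 1, 1)
    return [counts[km] / total for km in kmers]

vec = kmer_features("ACGTACGTAC", k=3)
print(len(vec))           # → 64 (4**3 possible 3-mers)
print(round(sum(vec), 6)) # → 1.0 (frequencies are normalized)
```

    The vector length grows as 4^k, which is why shrinking the feature vector (here, e.g., a smaller k or a selected subset of k-mers) matters for fragment classification.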

    Committee: Lonnie Welch PhD (Advisor) Subjects: Artificial Intelligence; Bioinformatics; Computer Science
  • 20. Wehmann, Adam A Spatial-Temporal Contextual Kernel Method for Generating High-Quality Land-Cover Time Series

    Master of Arts, The Ohio State University, 2014, Geography

    In order to understand the variability, drivers, and effects of the currently unprecedented rate, extent, and intensity of land-cover change, land change science requires remote sensing products that are both highly accurate and spatial-temporally consistent. This need for accuracy is exacerbated by the discipline's shift from detecting change between two points in time to analyzing trajectories of change over time. As the length of the temporal record increases, the problem becomes more severe. This follows because the accuracy of change detection is bounded below by the product of the accuracies of the source maps. Without exceedingly high accuracy at individual dates, the accuracy of change detection will be low, as map errors simply and vastly outweigh the occurrence of real change. Land-cover classifiers that can better utilize spatial and temporal information offer the chance to increase the accuracy of change detection and the consistency of classification results. By increasing the spatial and temporal dependence of errors between classification maps, the overall area among maps subject to error may be minimized, producing higher quality land-cover products. Such products enable more accurate and consistent detection, monitoring, and quantification of land-cover change and therefore can have wide-reaching impacts on downstream environmental, ecological, and social research. To address these problems fundamental to the creation of land-cover products, this thesis seeks to develop a novel contextual classifier for multi-temporal land-cover mapping that fully utilizes spatial-temporal information to increase the accuracy of change detection, while remaining robust to future advances in the spatial and spectral characteristics of remote sensor technology. 
By combining the complementary strengths of two leading techniques for the classification of land cover – the Support Vector Machine and the Markov Random Field – through a novel s (open full item for complete abstract)
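    One common way to combine a per-pixel classifier with a Markov Random Field spatial prior is iterated conditional modes (ICM) over the classifier's class scores plus a Potts smoothness term; a toy sketch of that combination (this is a generic SVM+MRF illustration, not the thesis's spatial-temporal kernel method):

```python
import numpy as np

def icm_smooth(log_prob, beta=1.0, n_iter=5):
    """ICM relabeling of a 2D grid.

    log_prob: (H, W, K) per-pixel class log-likelihoods (e.g. from an SVM).
    beta: weight of the Potts prior rewarding agreement with 4-neighbors.
    """
    labels = log_prob.argmax(axis=2)
    H, W, K = log_prob.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                # Count neighbor agreement for each candidate class.
                neigh = [labels[x, y] for x, y in
                         ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < H and 0 <= y < W]
                score = log_prob[i, j] + beta * np.array(
                    [sum(n == k for n in neigh) for k in range(K)])
                labels[i, j] = score.argmax()
    return labels

# A noisy two-class map: every pixel prefers class 0...
lp = np.zeros((5, 5, 2)); lp[..., 1] = -2.0
lp[2, 2] = [-0.5, 0.0]   # ...except one weakly class-1 "salt" pixel.
print(icm_smooth(lp, beta=1.0)[2, 2])  # → 0 (isolated error smoothed away)
```

    This is the mechanism behind the abstract's point about error dependence: the spatial prior suppresses isolated label errors, so residual errors cluster and the spurious "change" between maps from different dates shrinks.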

    Committee: Desheng Liu (Advisor); Ningchuan Xiao (Committee Member); Brian Kulis (Committee Member) Subjects: Computer Science; Geographic Information Science; Geography; Remote Sensing