Search Results


(Total results 26)


  • 1. Couture Del Valle, Christopher Optimization of Convolutional Neural Networks for Enhanced Compression Techniques and Computer Vision Applications

    Master of Science in Computer Engineering, University of Dayton, 2022, Electrical and Computer Engineering

    Image compression algorithms are the basis of media transmission and compression in the field of image processing. Decades after their inception, algorithms such as the JPEG image codec continue to be the industry standard. A notable research topic gathering momentum in the field of compression is deep learning (DL). This paper explores the optimization of DL models for ideal image compression and object detection (OD) applications. The DL model to be optimized is based upon an existing compression framework known as the CONNECT model. This framework wraps the traditional JPEG image codec within two convolutional neural networks (CNNs). The first network, ComCNN, focuses on compressing an input image into a compact representation to be fed into the image codec. The second network, RecCNN, focuses on reconstructing the output image from the codec as similarly as possible to the original image. To enhance the performance of the CONNECT model, an optimization software called Optuna wraps the framework. Hyperparameters are selected from each CNN to be evaluated and optimized by Optuna. Once the CONNECT model produces ideal results, the output images are applied to the YOLOv5 OD network. This paper explores the impact of DL hyperparameters on image quality and compression metrics. In addition, a detection network will provide context to the effect of image compression on computer vision applications.

    Committee: Bradley Ratliff (Committee Chair); Eric Balster (Committee Member); Barath Narayanan (Committee Member) Subjects: Computer Engineering
  • 2. Tan, Arie The Integration of Fuzzy Fault Trees and Artificial Neural Networks to Enhance Satellite Imagery for Detection and Assessment of Harmful Algal Blooms

    Doctor of Philosophy, The Ohio State University, 2019, Geodetic Science

    The study of harmful algal blooms is a well-established field with several conventional approaches such as processing samples, field observations, and examining weather and environmental effects that could change how the algae develop. However, with the frequency and severity of these blooms increasing worldwide, it has become progressively tougher to properly and efficiently assess their development and spread with the limited resources available. Therefore, assessment of the severity of algal blooms through remote sensing is helpful, especially since data from satellite imagery is already widely available for public use. Subjective judgments of existing data could yield additional human expertise that could be used in conjunction with the existing biology of bloom development to provide a more efficient framework on which to base future studies. What is introduced in this dissertation is the development of a deductive technique based on a combination of fuzzy fault tree analysis and artificial neural network image recognition, utilizing readily available data from already well-established remote sensing equipment. The main advantage of this approach is that it could provide results that are just as accurate while accepting far more flexible inputs, less dependent on strict boundaries regarding lake conditions, to compensate for limitations in gathering a priori knowledge. At the same time, the process is highly customizable and user-oriented, so that an assessor utilizing the interface need not necessarily have a comprehensive understanding of the underlying logic to interpret the results accordingly. 
Testing the interface and procedure on actual data arising from in situ sampling and satellite imagery through the fuzzy fault tree and convolutional neural network proved to give rise to accurate and logically consistent results, with the assessment of conditions arising from the fault tree consistent with the initial factors; the neural network also achieved a high d (open full item for complete abstract)

    Committee: Tarunjit Butalia (Committee Chair); Michael Durand (Advisor); Steven Lower (Committee Member) Subjects: Earth; Geographic Information Science; Logic; Public Health
  • 3. Elavarthi, Pradyumna Semantic Segmentation of RGB images for feature extraction in Real Time

    MS, University of Cincinnati, 2019, Engineering and Applied Science: Mechanical Engineering

    Deep learning networks for semantic segmentation are the core of modern computer vision applications involving target identification and scene extraction. Tremendous research in the area, combined with increased computational power and an abundance of labelled datasets, has allowed deep neural networks to provide solutions to long-standing problems. However, for real-time applications, the large parameter counts and computational efficiency of these algorithms have taken on significance. The novel method presented here tackles the above-mentioned problems to achieve effective real-time segmentation and color extraction for identifying a missing person.

    Committee: Janet Jiaxiang Dong Ph.D. (Committee Chair); Daniel Humpert M.S. (Committee Member); Anca Ralescu Ph.D. (Committee Member) Subjects: Computer Science
  • 4. DiMascio, Michelle Convolutional Neural Network Optimization for Homography Estimation

    Master of Science (M.S.), University of Dayton, 2018, Electrical Engineering

    This thesis proposes an optimized convolutional neural network architecture to improve homography estimation applications. The parameters and structure of the CNN, including the number of convolutional filters, stride lengths, kernel size, learning parameters, etc., are altered from previous implementations. Multiple modifications of the network are trained and evaluated until a final network yields a corner pixel error of 4.7, which is lower than that of a network proposed in previous literature.

    Committee: Eric Balster (Advisor); Yakov Diskin (Committee Member); Tarek Taha (Committee Member) Subjects: Computer Engineering; Electrical Engineering
  • 5. Pech, Thomas A Deep-Learning Approach to Evaluating the Navigability of Off-Road Terrain from 3-D Imaging

    Master of Sciences (Engineering), Case Western Reserve University, 2017, EECS - Computer and Information Sciences

    This work investigates a strategy for evaluating the navigability of terrain from 3-D imaging. Labeled training data was automatically generated by running a simulation of a mobile robot naïvely exploring a virtual world. During this exploration, sections of terrain were perceived through simulated depth imaging and saved with labels of safe or unsafe, depending on the outcome of the robot's experience driving through the perceived regions. This labeled data was used to train a deep convolutional neural network. Once trained, the network was able to evaluate the safety of perceived regions. The trained network was shown to be effective in achieving safe, autonomous driving through novel, challenging, unmapped terrain.

    Committee: Wyatt Newman (Advisor); Cenk Cavusoglu (Committee Member); Michael Lewicki (Committee Member) Subjects: Computer Science; Robotics; Robots
  • 6. Bettaieb, Luc A Deep Learning Approach To Coarse Robot Localization

    Master of Sciences (Engineering), Case Western Reserve University, 2017, EECS - Electrical Engineering

    This thesis explores the use of deep learning for robot localization with applications in re-localizing a mislocalized robot. Seed values for a localization algorithm are assigned based on the interpretation of images. A deep neural network was trained on images acquired in and associated with named regions. In application, the neural net was used to recognize a region based on camera input. By recognizing regions from the camera, the robot can be localized grossly, and subsequently refined with existing techniques. Explorations into different deep neural network topologies and solver types are discussed. A process for gathering training data, training the classifier, and deployment through a robot operating system (ROS) package is provided.

    Committee: Wyatt Newman (Advisor); Murat Cavusoglu (Committee Member); Gregory Lee (Committee Member) Subjects: Computer Science; Electrical Engineering; Robotics
  • 7. Siddiqui, Nimra Dr. Lego: AI-Driven Assessment Instrument for Analyzing Block-Based Codes

    Master of Computing and Information Systems, Youngstown State University, 2024, Department of Computer Science and Information Systems

    The field of coding education is rapidly evolving, with emerging technologies playing a pivotal role in transforming traditional learning methodologies. This thesis introduces Dr. Lego, an innovative framework designed to revolutionize the assessment and understanding of block-based coding through the integration of sophisticated deep learning models. Dr. Lego combines cutting-edge technologies such as MobileNetV3 (Howard, 2019) for visual recognition, and BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019) for natural language processing, to offer a comprehensive approach to evaluating coding proficiency. The research methodology involves the meticulous curation of a diverse dataset comprising projects from the LEGO SPIKE app (LEGO Education, 2022), ensuring that the models are subjected to a broad range of coding scenarios. Leveraging the dynamic educational environment provided by the LEGO SPIKE app (LEGO Education, 2022), Dr. Lego empowers users to design and implement various coding projects, fostering hands-on learning experiences. This thesis delves into methodologies aimed at enhancing coding education by exploring model integration, data generation, and fine-tuning of pre-trained models. Dr. Lego not only evaluates coding proficiency but also provides cohesive and insightful feedback, enhancing the learning experience for users. The adaptability of the framework highlights its potential to shape the future of coding education, paving the way for a new era of interactive and engaging learning experiences.

    Committee: Abdu Arslanyilmaz PhD (Advisor); Feng Yu PhD (Committee Member); Carrie Jackson EdD, BCBA (Committee Member) Subjects: Computer Science; Engineering; Information Systems; Robotics; Teaching
  • 8. Tomczak, Nathaniel Convolutional Neural Networks for Indexing Transmission Electron Microscopy Patterns: a Proof of Concept

    Master of Engineering, Case Western Reserve University, 2023, Materials Science and Engineering

    Understanding phase transitions and their behaviors is critical in nuclear fuel applications. The transmission electron microscopy (TEM) diffraction field is used to further the understanding of plutonium-zirconium alloys in metallic fuels. Indexing patterns to gain this understanding is time-intensive for researchers. Convolutional neural networks (CNNs) have exhibited exceptional performance on classification tasks in other fields, and applying them to selected area electron diffraction (SAED) patterns has yielded better-than-random results. A CNN is capable of classifying between three phase groups, at 650 patterns per second, with a total accuracy of up to 83.74%. CNNs can also classify between two phase groups at 82.31% accuracy while discerning a difference between patterns of identical symmetry but distinct structure.

    Committee: Jennifer Carter (Advisor); Sunniva Collins (Committee Member); Laura Bruckman (Committee Member) Subjects: Computer Science; Materials Science
  • 9. Casukhela, Rohan Designing Robust Decision-Making Systems for Accelerated Materials Development

    Master of Science, The Ohio State University, 2022, Materials Science and Engineering

    Recent increases in computational power have led to growing enthusiasm about the volume of data that can be collected and analyzed for many applications. However, the amount of data some physical/virtual systems generate is so great that an increased reliance on mathematical, statistical, and algorithmic approaches to analyze and make decisions from the data is required. Application of these computational tools can lead to sharper decision-making and the discovery of vast amounts of knowledge. The abstraction of the scientific decision-making process has led many researchers to consider observing systems with more tunable experimental parameters. This makes traditional experimentation, which is based on human researchers conducting the experiment and using their intuition to drive the next set of experiments, intractable for these applications. Autonomous experimentation (AE) systems, which are also a byproduct of the computational explosion, are able to address this issue and have found use across the fields of biology, chemistry, and materials science. AE systems are typically capable of conducting certain types of experiments with lower and more reliable turnaround times as opposed to their human counterparts. The automated execution of experiments naturally leads one to think about how those experiments can be parallelized and otherwise completed faster due to the lack of human presence in the experimentation environment. Therefore, AE systems are considered when designing many high-throughput experimentation (HTE) efforts. This thesis presents an overview of the current state-of-the-art for AE systems in Chapter 1, a framework developed to increase the independence of AE systems from human assistance in Chapter 2, and a machine-learning (ML) data processing pipeline that automates the image post-processing phase of the analysis of backscattered-electron scanning electron microscope images in Chapter 3.

    Committee: Stephen Niezgoda (Advisor); Joerg Jinschek (Advisor); Sriram Vijayan (Other); Gopal Viswanathan (Committee Member); Oksana Chkrebtii (Committee Member) Subjects: Business Administration; Computer Science; Engineering; Experiments; Industrial Engineering; Information Science; Information Systems; Information Technology; Metallurgy; Operations Research; Robotics; Statistics
  • 10. Kumar, Navneet Predicting High Stress Regions in a Microstructure using Convolutional Neural Networks

    Doctor of Philosophy, The Ohio State University, 2022, Industrial and Systems Engineering

    Origins of failure are often driven by localizations in material response due to the applied stress/strain state. These stress “hot spots” intuitively represent regions that accumulate higher damage than their surroundings, serving as prime locations for crack nucleation. There are a number of microstructural factors that contribute to whether a given neighborhood is prone to forming a hot spot, including the morphology of local features, their relative crystallographic misorientations and elastic anisotropy, and their preferred orientations with respect to the load state. Given the large number of features that may influence the formation of stress hot spots, it is advantageous to develop techniques that attempt to predict their formation based solely on an image of the underlying microstructure. We describe such a method based on a convolutional neural network (CNN). The CNN is trained by cutting local patches out of a synthetic microstructure, created in DREAM.3D, that had its elastic response modeled using a spectral technique based on fast Fourier transforms. The resulting trained CNN is able to predict which regions of a microstructure are susceptible to forming hot spots, based only on an image description and features.

    Committee: Michael Groeber (Advisor); Stephen Niezgoda (Advisor) Subjects: Artificial Intelligence; Computer Science; Industrial Engineering; Materials Science
  • 11. Abunajm, Saleh Predicting Lung Cancer using Deep Learning to Analyze Computed Tomography Images

    MS, University of Cincinnati, 2022, Education, Criminal Justice, and Human Services: Information Technology-Distance Learning

    Among cancers worldwide, lung cancer is the leading cause of death. The lives we lose every year to lung cancer exceed those lost to pancreatic, breast, and prostate cancer combined. Yet lung cancer receives the least amount of research funding per life lost to cancer each year: lung cancer receives $3,580 per life lost, pancreatic cancer $4,796, prostate cancer $8,116, and breast cancer $19,050. The survival rate for lung cancer patients is very low compared to other cancer patients. If doctors diagnose a patient with stage I lung cancer, the survival rate is 55%, meaning the patient will most likely survive the cancer for five or more years. However, the survival rate drops to 5% if the patient is diagnosed with stage IV lung cancer. Diagnosing cancer at an early stage gives doctors more time for their treatment plan, increasing the survival rate or even the chance of becoming cancer-free. In this thesis, we aim to develop a deep learning model that will help doctors predict and diagnose lung cancer early to save more lives. This thesis proposes a 2D CNN architecture, using the IQ-OTH/NCCD Lung Cancer Dataset on Kaggle. The dataset consists of 1097 CT scan images spanning three classes: normal, malignant, and benign cases. The experiment shows that the model achieved high performance with 99.45% accuracy and 1.75% loss. The weighted average is 99%, as is the macro average. The proposed model can be a particularly useful tool to support radiologists' decisions in predicting and classifying lung cancer.

    Committee: Nelly Elsayed Ph.D. (Committee Member); M. Murat Ozer Ph.D. (Committee Member); Zaghloul Elsayed Ph.D. (Committee Member) Subjects: Information Technology
  • 12. Karim, Rashid Saadman A Novel Ensemble Method using Signed and Unsigned Graph Convolutional Networks for Predicting Mechanisms of Action of Small Molecules from Gene Expression Data

    PhD, University of Cincinnati, 2022, Engineering and Applied Science: Computer Science and Engineering

    Identification of the mechanism of action (MoA) of a small molecule which causes pharmacological effects on cellular networks governing gene expression levels is an important field of study for the purpose of drug development and repurposing. While gene expression can be used for the prediction of small molecule MoA using traditional machine learning algorithms, these algorithms do not consider the underlying complexity of cellular level biological networks driving gene expression. In particular, capturing predictive features from the polarity of interaction in cell signaling networks where nodes in the network either activate or inhibit other nodes is still a challenging problem for the prediction of drug MoA. We propose an ensemble deep learning meta-algorithm for predicting small molecule MoA from gene expression data using unsigned and signed graph convolutional networks (GCN). We developed a GCN algorithm to extract features from signed networks and combined predictive probabilities with that of an unsigned GCN using stacking. Our ensemble methodology improves the overall predictive capabilities significantly when compared to unsigned or signed GCN.

    Committee: Mario Medvedovic Ph.D. (Committee Member); Gowtham Atluri Ph.D. (Committee Member); Ali Minai Ph.D. (Committee Member); Jaroslaw Meller Ph.D. (Committee Member); Raj Bhatnagar Ph.D. (Committee Member) Subjects: Bioinformatics
  • 13. Tan, Ke Convolutional and recurrent neural networks for real-time speech separation in the complex domain

    Doctor of Philosophy, The Ohio State University, 2021, Computer Science and Engineering

    Speech signals are usually distorted by acoustic interference in daily listening environments. Such distortions severely degrade speech intelligibility and quality for human listeners, and make many speech-related tasks, such as automatic speech recognition and speaker identification, very difficult. The use of deep learning has led to tremendous advances in speech enhancement over the last decade. It has been increasingly important to develop deep learning based real-time speech enhancement systems due to the prevalence of many modern smart devices that require real-time processing. The objective of this dissertation is to develop real-time speech enhancement algorithms to improve intelligibility and quality of noisy speech. Our study starts by developing a strong convolutional neural network (CNN) for monaural speech enhancement. The key idea is to systematically aggregate temporal contexts through dilated convolutions, which significantly expand receptive fields. Our experimental results suggest that the proposed model consistently outperforms a feedforward deep neural network (DNN), a unidirectional long short-term memory (LSTM) model and a bidirectional LSTM model in terms of objective speech intelligibility and quality metrics. Although significant progress has been made on deep learning based speech enhancement, most existing studies only exploit magnitude-domain information and enhance the magnitude spectra. We propose to perform complex spectral mapping with a gated convolutional recurrent network (GCRN). Such an approach simultaneously enhances magnitude and phase of speech. Evaluation results show that the proposed GCRN substantially outperforms an existing CNN for complex spectral mapping. Moreover, the proposed approach yields significantly better results than magnitude spectral mapping and complex ratio masking. 
To achieve strong enhancement performance typically requires a large DNN, making it difficult to deploy such speech enhancement syst (open full item for complete abstract)

    Committee: DeLiang Wang (Advisor); Eric Fosler-Lussier (Committee Member); Eric Healy (Committee Member) Subjects: Computer Science; Engineering
  • 14. Gnacek, Matthew Convolutional Neural Networks for Enhanced Compression Techniques

    Master of Science in Electrical Engineering, University of Dayton, 2021, Electrical and Computer Engineering

    Image compression is a foundational topic in the world of image processing. Reducing an image's size allows the image to be stored in less memory and speeds up processing and storage. In addition, deep learning (DL) has been a featured topic. This paper seeks to find a model that uses DL for optimal image compression. Several image codecs are already in use for image compression. The framework designed in this paper does not focus on eliminating these codecs; rather, it uses a method that incorporates standard codecs. The image codec is wrapped with two convolutional neural networks (CNNs). The first network, ComCNN, has the goal of compressing an image into an optimal compact representation that can be passed into an image codec for maximum compression. The second network, RecCNN, has the goal of reconstructing the decoded compact representation of the image into an output that is as similar to the original image as possible. By continuing to use traditional image codecs such as JPEG and JPEG2000, the process is standardized while still producing optimal results. The paper gives an overview of image compression, machine learning, and the different quality and compression metrics that determine the success of the network. In addition, the model is described in great detail, and results with different parameters and data types are presented.

    Committee: Bradley Ratliff Ph.D. (Committee Chair); Eric Balster Ph.D. (Committee Member); Frank Scarpino Ph.D. (Committee Member) Subjects: Electrical Engineering
  • 15. Rehman Faridi, Shah Mohammad Hamoodur Artificial Intelligence Based Real-Time Processing of Sterile Preparations Compounding

    Master of Science, University of Toledo, 2020, Engineering (Computer Science)

    The objective of this research is to develop a fully functional semi-automated monitoring and verification system to improve the quality standards in compounding sterile preparations (CSP). To avoid the errors made in the CSP process, a material selection graphical user interface (MSGUI) is integrated with a video processing system (VPS) that provides in-process feedback to the pharmacist preparing a medication order (MO) on the work surface of a laminar airflow workbench (LAFW). A hand gesture-based monitoring and verification (HGMV) system, built on deep learning technology, supports both monitoring and verification of the process using different types of hand gestures. A barcode-enabled product verification (BEPV) technique is also developed and integrated with a compounding database, which helps in selecting the correct products to be used in CSP. The complete model also includes other important verification and monitoring features, such as a video recording process (VRP) used to track all the steps performed in completing a MO, image capturing during the process, and electronic documentation of all the products used in the process as well as important events that occurred while doing CSP. The developed system was tested for different scenarios that a pharmacist can face in CSP, and the final version of the model was found to be of the highest accuracy. The BEPV and HGMV were modified based on the results from the initial phase of testing, and the final version was highly robust and efficient. Mistakes were made deliberately in the testing phase, and the results matched the expected output. The compounding sterile preparations monitoring and verification system (CSPTVS) provides a cost-effective solution that is capable of improving the quality standards in the field of pharmacy by completely monitoring the process and providing real-time in-process feedback to the pharmacist while reducing wastage of wrongly-selected products.

    Committee: Vijay Devabhaktuni (Committee Chair); Jerry Nesamony (Committee Co-Chair); Ahmad Javaid (Committee Member); Weiqing Sun (Committee Member) Subjects: Artificial Intelligence; Computer Engineering; Computer Science; Pharmaceuticals
  • 16. Robertson, Curtis Deep Learning-Based Speed Sign Detection and Recognition

    MS, University of Cincinnati, 2020, Engineering and Applied Science: Electrical Engineering

    In this thesis we apply the newly developing field of deep learning to the task of speed sign detection and recognition. Previous work has mostly used traditional image processing techniques or shallow neural networks and has focused on simple images that contain only the speed sign or are taken from ideal distances. We have chosen to apply our novel speed sign detection and recognition method to complex images that contain a large variety of other objects, variable illumination or brightness levels, potential partial obstructions of the signs, and variable distance of the sign from the camera. While related work classified speed signs as one class, our work classifies speed signs into different classes based on their numerical speed value. The existing state-of-the-art YOLO (You Only Look Once) framework is applied to complete the task of object detection. YOLO is a more desirable choice than other object detection methods for speed signs because of its main advantages: it is extremely fast, makes fewer background errors than other methods, and is highly generalizable, capable of handling both natural images and artwork. This new method of speed sign detection is fast enough to detect and recognize speed signs in real time. Future autonomous vehicles will benefit from this method because it allows the numerical speed value of a speed sign to be determined in real time from the complex images that occur on the road.

    Committee: Xuefu Zhou Ph.D. (Committee Chair); Chia Han Ph.D. (Committee Member); Heng Wei Ph.D. (Committee Member) Subjects: Electrical Engineering
  • 17. Singh, Vineeta Understanding convolutional networks and semantic similarity

    PhD, University of Cincinnati, 2020, Engineering and Applied Science: Electrical Engineering

    Humans can easily assess the similarity of two images. By contrast, in computer vision, this is a challenging problem, and algorithms do not always succeed in matching pictures that are semantically similar to each other. This is mainly due to the well-known semantic gap problem, which refers to the difference or gap between low-level image pixels and high-level semantic concepts. Estimating the semantic similarity of two images is a fundamental task that seeks to fill this semantic gap by accurately evaluating how alike two or more pictures are [1]. Assessing similarity in images can be used in a myriad of operations such as image registration, image retrieval, and image matching, among others. We aim to design and implement an algorithm that can bridge the semantic gap in images. We do so by extracting high-level concepts from images and ranking these concepts based on their contribution to image similarity. These chosen high-information concepts are then used to find semantic similarity between images. Convolutional networks have been shown to capture image content and have proved very powerful for matching images. At the same time, however, they are known for their black-box behavior: users are unable to understand the inner workings of the CNN or explain why the model reached a particular decision. We design an experiment aimed at obtaining a better understanding of the inner workings of convolutional neural networks designed for classification.

    Committee: Anca Ralescu Ph.D. (Committee Chair); Kenneth Berman Ph.D. (Committee Member); Rashmi Jha Ph.D. (Committee Member); Wen-Ben Jone Ph.D. (Committee Member); Dan Ralescu Ph.D. (Committee Member) Subjects: Computer Science
  • 18. Kaster, Joshua Training Convolutional Neural Network Classifiers Using Simultaneous Scaled Supercomputing

    Master of Science (M.S.), University of Dayton, 2020, Electrical Engineering

    Convolutional neural networks (CNN) are revolutionizing and improving today's technological landscape at a remarkable rate. Yet even in their success, creating optimally trained networks depends on expensive empirical processing to generate the best results. They require powerful processors, expansive datasets, days of training time, and hundreds of training instances across a range of hyperparameters to identify optimal results. These requirements can be difficult to access for the typical CNN technologist and ultimately wasteful of resources, since only the most optimal model will be utilized. To overcome these challenges and create a foundation for the next generation of CNN technologists, a three-stage solution is proposed: (1) to cultivate a new dataset containing millions of domain-specific (aerial) annotated images; (2) to design a flexible experiment generator framework which is easy to use, can operate on the fastest supercomputers in the world, and can simultaneously train hundreds of unique CNN networks; and (3) to establish benchmarks of accuracies and optimal training hyperparameters. An aerial imagery database is presented which contains 260 new cultivated datasets, features tens of millions of annotated image chips, and provides several distinct vehicular classes. Accompanying the database, a CNN-training framework is presented which can generate hundreds of CNN experiments with extensively customizable input parameters. It operates across 11 cutting-edge CNN architectures and any Keras-formatted database, and is supported on 3 unique Linux operating systems, including two supercomputers ranked in the top 70 worldwide. Training can be easily performed by simply inputting desirable parameter ranges in a pre-formatted spreadsheet. The framework creates unique training experiments for every combination of dataset, hyperparameter, data augmentation, and supercomputer requested. 
The resulting hundreds of trained networks provides the performance to perform (open full item for complete abstract)

    Committee: Eric Balster (Committee Chair); Patrick Hytla (Committee Member); Vijayan Asari (Committee Member) Subjects: Artificial Intelligence; Computer Engineering; Computer Science; Electrical Engineering; Engineering
  • 19. Plummer, Dylan Facilitating the Study of Chromatin Organization with Deep Learning

    Master of Sciences (Engineering), Case Western Reserve University, 2020, EECS - Computer and Information Sciences

    Learning-based computational methods offer significant benefits to the field of genomics since studying the genome often requires gathering large datasets for which classical statistical methods of analysis can fall short. In this work we propose a machine learning pipeline for denoising and upsampling high resolution Hi-C data: an experiment which measures the genome-wide 3D spatial interactions (loops) of chromatin inside the cell nucleus. Our models are able to recover the true chromatin loops with high confidence from only ~1/40th of the usual billions of sequencing reads required. This level of precision will allow for re-analyses of existing Hi-C datasets and future experiments at a fraction of the cost. Limiting barriers to entry in studying chromatin organization can help increase the speed of progress in understanding gene regulation and how it influences development and disease.

    Committee: Jing Li PhD (Advisor); Fulai Jin PhD (Committee Member); Michael Lewicki PhD (Committee Member) Subjects: Bioinformatics; Computer Science
  • 20. Radhakrishnan, Aswathnarayan A Study on Applying Learning Techniques to Remote Sensing Data

    Master of Science, The Ohio State University, 2020, Computer Science and Engineering

    A major issue with data-hungry deep learning algorithms is the lack of annotated ground truth for specific applications. In this thesis, we explore the challenges of applying artificial intelligence (AI) techniques to remote sensing data, which lacks the large annotated training datasets available for regular imagery data. We first tackle the problem of improving object tracking in Wide Area Motion Imagery data by using a semantic segmentation model to predict a building mask that filters false tracker points on buildings. The combination of AI image understanding techniques with tracker output significantly improved tracking quality. However, manual annotation of training data limited the performance of the semantic segmentation model. We propose a solution to this problem by developing a framework for automated annotation of remote sensing data. We pick satellite imagery as our data source, since the high volume of Earth Observation Satellites available today, coupled with crowd-sourced map data, can enable a new means of automated annotation of remote sensing data. In the second part of this thesis, we present a modular pipeline for collecting and labeling satellite imagery using geo-databases and OpenStreetMap data to facilitate training custom deep learning models. We demonstrate this approach by automatically collecting labeled imagery of solar power plants and building a classifier to detect the presence of such structures. The framework developed in this thesis can collect labeled satellite imagery of any object mapped by spatial databases, creating use cases for geospatial and defense applications previously constrained by the lack of annotated datasets.

    Committee: James Davis (Advisor); Roger Crawfis (Committee Member); Roman Ilin (Committee Member) Subjects: Artificial Intelligence; Computer Science; Geographic Information Science; Remote Sensing