Search Results


(Total results 44)

Search Report

  • 1. Bhagvat, Sitha Designing and enhancing the sockets direct protocol (SDP) over iWARP and InfiniBand /

    Master of Science, The Ohio State University, 2006, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 2. Chereddy, Sathvik SketchGNN: Generating CAD Sketches as Graphs

    Master of Science in Computer Science, Miami University, 2025, Computer Science and Software Engineering

    Computer-aided design (CAD) is widely used for 3D modeling in many technical fields, yet the creation of 2D sketches remains a manual step in typical CAD modeling workflows. Automatically generating 2D sketches can help users in CAD modeling by reducing their workload and by streamlining the design process. While sketches inherently possess a graph structure, with geometric primitives as nodes and constraints as edges, the application of graph neural networks (GNNs) to this domain remains relatively unexplored. To address this gap, we introduce SketchGNN, a graph diffusion model designed to generate CAD sketches using a joint continuous-discrete diffusion process. Our approach includes a novel discrete diffusion technique, wherein Gaussian-perturbed logits are projected onto the probability simplex via a softmax transformation. This enables our model to express uncertainty in the discrete diffusion process, unlike traditional methods. We demonstrate that SketchGNN achieves state-of-the-art performance, reducing the Fréchet Inception Distance (FID) from 16.04 to 7.80 and the negative log-likelihood (NLL) from 84.8 to 81.33.

    Committee: John Femiani (Advisor); Khodakhast Bibak (Committee Member); Karen Davis (Committee Member) Subjects: Artificial Intelligence; Computer Science; Information Science
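The discrete diffusion step described in entry 2, where Gaussian-perturbed logits are projected onto the probability simplex via a softmax, can be sketched in a few lines. This is an illustrative reconstruction from the abstract only; the function name, noise scale, and array shapes are assumptions, not SketchGNN's actual API.

```python
import numpy as np

def perturb_and_project(logits, noise_scale, rng=None):
    """Add Gaussian noise to class logits, then map the result onto the
    probability simplex with a softmax. Illustrative sketch only: the
    name and the flat noise scale are assumptions, not SketchGNN's code."""
    rng = np.random.default_rng(rng)
    noisy = logits + noise_scale * rng.standard_normal(logits.shape)
    shifted = noisy - noisy.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# A 3-class discrete label: however large the noise, the output is a
# valid probability vector, which is what lets the model express
# uncertainty over discrete choices.
probs = perturb_and_project(np.array([2.0, 0.5, -1.0]), noise_scale=1.0, rng=0)
```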
  • 3. Ehret, Elizabeth Learning in neural networks : a comparison of networks of Hodgkin-Huxley neurons and networks of map model neurons each endowed with spike-time-dependent plasticity /

    Master of Science, The Ohio State University, 2007, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 4. Casukhela, Rohan Designing Robust Decision-Making Systems for Accelerated Materials Development

    Master of Science, The Ohio State University, 2022, Materials Science and Engineering

    Recent increases in computational power have led to growing enthusiasm about the volume of data that can be collected and analyzed for many applications. However, the amount of data some physical/virtual systems generate is so great that an increased reliance on mathematical, statistical, and algorithm-based approaches to analyze and make decisions from the data is required. Application of these computational tools can lead to sharper decision making and the discovery of vast amounts of knowledge. The abstraction of the scientific decision-making process has led many researchers to consider observing systems with more tunable experimental parameters. This makes traditional experimentation, which is based on human researchers conducting the experiment and using their intuition to drive the next set of experiments, intractable for these applications. Autonomous experimentation (AE) systems, which are also a byproduct of the computational explosion, are able to address this issue and have found use across the fields of biology, chemistry, and materials science. AE systems are typically capable of conducting certain types of experiments with lower and more reliable turnaround times than their human counterparts. The automated execution of experiments naturally leads one to think about how those experiments can be parallelized and otherwise completed faster due to the lack of human presence in the experimentation environment. Therefore, AE systems are considered when designing many high-throughput experimentation (HTE) efforts. This thesis presents an overview of the current state-of-the-art for AE systems in Chapter 1, a framework developed to increase the independence of AE systems from human assistance in Chapter 2, and a machine-learning (ML) data processing pipeline that automates the image post-processing phase of the analysis of backscattered-electron scanning electron microscope images in Chapter 3.

    Committee: Stephen Niezgoda (Advisor); Joerg Jinschek (Advisor); Sriram Vijayan (Other); Gopal Viswanathan (Committee Member); Oksana Chkrebtii (Committee Member) Subjects: Business Administration; Computer Science; Engineering; Experiments; Industrial Engineering; Information Science; Information Systems; Information Technology; Metallurgy; Operations Research; Robotics; Statistics
  • 5. Ghosh, Saibal Building more performant large scale networks for the Internet of Things

    PhD, University of Cincinnati, 2022, Engineering and Applied Science: Computer Science and Engineering

    Inspired by the rise of smaller computing devices and ubiquitous network connectivity, this dissertation focuses on the dynamics of extremely large scale computer networks and mechanisms to improve their performance. Traditional networks were not designed to support the billions (and possibly more than a trillion) of devices that are expected to be part of the Internet of Things (IoT) and eventually the Internet of Everything (IoE). We look at mechanisms to efficiently allocate resources in large networks and show how they can scale with the network size. We tackle the hard problem of power in energy limited networks and show how setting priorities can improve latencies even in huge networks. We devise ways to enhance the mobile computing experience through collaboration and moving the computation away to the network edge. Finally, we devise a mechanism to improve caching at the network edge and show that improved caching at the edge can support a vast number of users without sacrificing Quality of Service. We propose to extend this work by incorporating mobility for the vast number of smaller devices connecting to the network. Mobility modeling on extremely large scales is computationally very expensive. We propose to use heuristics and machine learning models to improve computation times. Current networks take heavy advantage of the centralized nature of the cloud to compute network traffic flows and make routing decisions. As networks scale, this centralized approach would become infeasible. We propose to take a hybrid approach wherein the cloud only acts as an exchange for these decisions and network management is performed by devices in the network itself. Furthermore, since decisions need not be coordinated centrally, we propose to show that this approach can lead to optimum network performance on a massive scale.

    Committee: Marc Cahay Ph.D. (Committee Chair); Fred Annexstein Ph.D. (Committee Member); Emily Kang Ph.D. (Committee Member); Yizong Cheng Ph.D. (Committee Member); Rui Dai Ph.D. (Committee Member); Raj Bhatnagar Ph.D. (Committee Member) Subjects: Computer Science
  • 6. Alamari, Mohammed Barakat Neural Network Emulation for Computer Model with High Dimensional Outputs using Feature Engineering and Data Augmentation

    PhD, University of Cincinnati, 2022, Arts and Sciences: Mathematical Sciences

    Expensive computer models (simulators) are frequently used to simulate the behavior of a complex system in many scientific fields because an explicit experiment is very expensive or dangerous to conduct. Usually, only a limited number of computer runs are available due to limited resources. Therefore, one desires to use the available runs to construct an inexpensive statistical model, an emulator. The constructed statistical model can then be used as a surrogate for the computer model. Building an emulator for high dimensional outputs with the existing standard method, the Gaussian process model, can be computationally infeasible because it has a cubic computational complexity that scales with the total number of observations. Also, it is common to impose restrictions on the covariance matrix of the Gaussian process model to keep computations tractable. This work constructs a flexible emulator based on a deep neural network (DNN) with feedforward multilayer perceptrons (MLP). High dimensional outputs and limited runs can pose considerable challenges to a DNN in learning a complex computer model's behavior. To overcome this challenge, we take advantage of the computer model's spatial structure to engineer features at each spatial location and then make the training of the DNN feasible. Also, to improve the predictive performance and avoid overfitting, we adopt a data augmentation technique into our method. Finally, we apply our approach using data from the UVic ESCM model and the PSU3D-ICE model to demonstrate good predictive performance and compare it with an existing state-of-the-art emulation method.

    Committee: Won Chang Ph.D. (Committee Member); Xia Wang Ph.D. (Committee Member); Emily Kang Ph.D. (Committee Member) Subjects: Statistics
  • 7. Shiflett, Kyle Photonic Deep Neural Network Accelerators for Scaling to the Next Generation of High-Performance Processing

    Doctor of Philosophy (PhD), Ohio University, 2022, Electrical Engineering & Computer Science (Engineering and Technology)

    Improvements from electronic processor and interconnect performance scaling are narrowing due to fundamental challenges faced at the device level. Compounding the issue, increasing demand for large, accurate deep neural network models has placed significant pressure on the current generation of processors. The slowing of Moore's law and the breakdown of Dennard scaling leave no room for innovative solutions in traditional digital architectures to meet this demand. To address these scaling issues, architectures have moved away from general-purpose computation towards fixed-function hardware accelerators to handle demanding computation. Although electronic accelerators alleviate some of the pressure of deep neural network workloads, they are still burdened by electronic device and interconnect scaling problems. There is potential to further scale computer architectures by utilizing emerging technology, such as photonics. The low-loss interconnects and energy-efficient modulators provided by photonics could help drive future performance scaling. This could enable the next generation of high-bandwidth, bandwidth-dense interconnects and high-speed, energy-efficient processors by taking advantage of the inherent parallelism of light. This dissertation investigates photonic architectures for communication and computation acceleration to meet the machine learning processing requirements of future systems. The benefits of photonics are explored for bit-level parallelism, data-level parallelism, and in-network computation. The research performed in this dissertation shows that photonics has the potential to enable the next generation of deep neural network application performance by improving energy-efficiency and reducing compute latency.
The evaluations in this dissertation conclude that photonic accelerators can: (1) Reduce energy-delay product by 73.9% at the bit-level on convolutional neural network workloads; (2) Improve throughput by 110× (open full item for complete abstract)

    Committee: Avinash Karanth (Advisor) Subjects: Computer Engineering; Computer Science; Electrical Engineering
  • 8. Arabyarmohammadi, Sara NOVEL IMAGE BIOMARKERS FROM MULTIMODAL MICROSCOPY FOR PREDICTING POST-TREATMENT OUTCOME IN CARDIAC AND CANCER PATIENTS

    Doctor of Philosophy, Case Western Reserve University, 2022, EECS - Computer and Information Sciences

    Decision support tools and image biomarkers are required to provide patients with improved prognosis prediction and response prediction post-treatment and prevent them from being subjected to extensive testing, over- and under-treatment. This dissertation focuses on developing new, robust multimodal image biomarkers. In addition, the dissertation evaluates these image biomarkers in three use cases. We developed one set of cytological image biomarkers to characterize myeloblasts' chromatin patterns. We built a second set of image biomarkers to characterize immune cell architecture across histopathology slides. In the third set, we went beyond morphological quantification of individual pathological primitives such as single cell type characterization and introduced a graph-based algorithm for extracting a set of features that quantify the differences in the spatial interplay of multiple cell families. This algorithm is not restricted to an image modality and can be employed on different pathological images, including but not limited to H&E, Wright-Giemsa, and immunofluorescence images. Our graph interplay method is not disease-specific or particular to a tissue type either and can be applied to address any digital pathology problem. In fact, our algorithm can be utilized whenever quantification of the spatial interplay between multiple graphs is of interest. We applied our cytological image biomarkers on N=92 aspirate images of acute myeloid leukemia and found that our image biomarkers were predictive of relapse and could prognosticate relapse-free survival after bone marrow transplantation. In the second use case, we applied our histopathological image biomarkers on N=2358 endomyocardial biopsy images to predict cardiac rejection grades after heart transplantation. Our results illustrated the ability of the model to predict rejection grades with high sensitivity and a low false negative rate. 
    Furthermore, we demonstrated on N=2881 slides that our image biomark (open full item for complete abstract)

    Committee: Anant Madabhushi (Advisor); Vipin Chaudhary (Committee Chair); Satish Viswanath (Committee Member); Shuo Li (Committee Member); Mehmet Koyuturk (Committee Member) Subjects: Artificial Intelligence; Bioinformatics; Biomedical Engineering; Biomedical Research; Biostatistics; Computer Engineering; Computer Science; Information Science; Information Systems; Information Technology; Medical Imaging; Pathology; Scientific Imaging
  • 9. Couture Del Valle, Christopher Optimization of Convolutional Neural Networks for Enhanced Compression Techniques and Computer Vision Applications

    Master of Science in Computer Engineering, University of Dayton, 2022, Electrical and Computer Engineering

    Image compression algorithms are the basis of media transmission and compression in the field of image processing. Decades after their inception, algorithms such as the JPEG image codec continue to be the industry standard. A notable research topic gathering momentum in the field of compression is deep learning (DL). This paper explores the optimization of DL models for ideal image compression and object detection (OD) applications. The DL model to be optimized is based upon an existing compression framework known as the CONNECT model. This framework wraps the traditional JPEG image codec within two convolutional neural networks (CNNs). The first network, ComCNN, focuses on compressing an input image into a compact representation to be fed into the image codec. The second network, RecCNN, focuses on reconstructing the output image from the codec as similarly as possible to the original image. To enhance the performance of the CONNECT model, an optimization software called Optuna wraps the framework. Hyperparameters are selected from each CNN to be evaluated and optimized by Optuna. Once the CONNECT model produces ideal results, the output images are applied to the YOLOv5 OD network. This paper explores the impact of DL hyperparameters on image quality and compression metrics. In addition, a detection network will provide context to the effect of image compression on computer vision applications.

    Committee: Bradley Ratliff (Committee Chair); Eric Balster (Committee Member); Barath Narayanan (Committee Member) Subjects: Computer Engineering
  • 10. Dozier, Robbie Navigating the Metric Zoo: Towards a More Coherent Model For Quantitative Evaluation of Generative ML Models

    Master of Sciences, Case Western Reserve University, 2022, EECS - Computer and Information Sciences

    This thesis studies a family of high-dimensional generative procedures modeled by Deep Generative Models (DGMs). These models can sample from complex manifolds to create realistic images, video, audio, and more. In prior work, generative models were evaluated using likelihood criteria. However, likelihood has been shown to suffer from the Curse of Dimensionality, and some generative architectures such as Generative Adversarial Networks (GANs) do not admit a likelihood measure. While some other metrics for GANs have been proposed in the literature, there has not been a systematic study and comparison between them. In this thesis I conduct the first comprehensive empirical analysis of these generative metrics, comparing them across several axes including sample quality, diversity, and computational efficiency. Second, I propose a new metric which employs the concept of typicality from information theory and compare it to existing metrics. My work can be used to answer questions about which kind of metric to use when training DGMs.

    Committee: Soumya Ray (Advisor); Michael Lewicki (Committee Member); Harold Connamacher (Committee Member) Subjects: Artificial Intelligence; Computer Science
  • 11. Synakowski, Stuart Novel Instances and Applications of Shared Knowledge in Computer Vision and Machine Learning Systems

    Doctor of Philosophy, The Ohio State University, 2021, Electrical and Computer Engineering

    The fields of computer vision and machine learning have made enormous strides in developing models which solve tasks only humans have been capable of solving. However, the models constructed to solve these tasks came at an enormous price in terms of computational resources and data collection. Motivated by the sustainability of continually developing models from scratch to tackle every additional task humans can solve, researchers are interested in efficiently constructing new models for developing solutions to new tasks. The sub-fields of machine learning devoted to this line of research go by many names. Such names include multi-task learning, transfer learning, and few-shot learning. All of these frameworks use the same assumption that knowledge should be shared across models to solve a set of tasks. We define knowledge as the set of conditions used to construct a model that solves a given task. By shared knowledge, we are referring to conditions that are consistently used to construct a set of models which solve a set of tasks. In this work, we address two sets of tasks posed in the fields of computer vision and machine learning. While solving each of these sets of tasks, we show how each of our methods exhibits a novel implementation of shared knowledge leading to many implications for future work in developing systems that further emulate the abilities of human beings. The first set of tasks fall within the sub-field of action analysis, specifically the recognition of intent. Instead of a data-driven approach, we construct a hand-crafted model to infer between intentional/non-intentional movement using common knowledge concepts known by humans. These knowledge concepts are ultimately used to construct an unsupervised method to infer between intentional and non-intentional movement across levels of abstraction. 
By layers of abstraction we mean that the model needed to solve the most abstract instances of intent recognition, is useful in developing models whi (open full item for complete abstract)

    Committee: Aleix Martinez (Advisor); Abhishek Gupta (Committee Member); Yingbin Liang (Committee Member) Subjects: Artificial Intelligence; Computer Engineering; Computer Science
  • 12. Weborg, Brooke Reservoir Computing: Empirical Investigation into Sensitivity of Configuring Echo State Networks for Representative Benchmark Problem Domains

    Master of Science, University of Toledo, 2021, Engineering (Computer Science)

    This research examines the performance of the Echo State Network, a reservoir computer, on four different benchmark problems, then proposes heuristics, or rules of thumb, for configuring the architecture and for selecting parameters and their values. These heuristics are applicable to problems within the same domain and help fill the 'experience gap' faced by those entering this field of study. The influence of parameter selections and value adjustments, as well as architectural changes made to an Echo State Network, a powerful recurrent neural network configured as a reservoir computer, can be difficult to understand without experience in the field, and even some hyperparameter optimization algorithms may have difficulty adjusting parameter values without proper manual selections made first. It is therefore imperative to understand the effects of parameters and their value selection on Echo State Network performance for a successful build. Thus, to address the requirement for an extensive background in Echo State Network architecture, and to examine how performance is affected by variations in architecture, design, and parameter selection and values, a series of benchmark tasks representing different problem domains, including time series prediction, pattern generation, chaotic system prediction, and time series classification, were modeled and experimented on to show their impact on Echo State Network performance.

    Committee: Gursel Serpen (Advisor); Kevin Xu (Committee Member); Joshua Stuckner (Committee Member); Lawrence Thomas (Committee Member) Subjects: Computer Engineering; Computer Science
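Entry 12's subject, the Echo State Network, is compact enough to sketch: a fixed random reservoir rescaled to a chosen spectral radius, with only a linear readout trained by ridge regression. All hyperparameter names and values below are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

def esn_forecast(series, n_res=200, rho=0.9, washout=50, ridge=1e-6, rng=0):
    """Minimal echo state network for one-step-ahead prediction.
    The reservoir weights stay fixed; only W_out is fitted."""
    rng = np.random.default_rng(rng)
    W_in = rng.uniform(-0.5, 0.5, n_res)            # input weights
    W = rng.standard_normal((n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W))) # set spectral radius
    x = np.zeros(n_res)
    states = []
    for u in series[:-1]:
        x = np.tanh(W_in * u + W @ x)               # reservoir update
        states.append(x.copy())
    S = np.array(states[washout:])                  # drop transient states
    y = series[1 + washout:]                        # one-step-ahead targets
    W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
    return S @ W_out, y

# Time series prediction benchmark in miniature: forecast a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
pred, target = esn_forecast(np.sin(t))
```

The key design point the abstract alludes to is that performance hinges on a handful of such choices (reservoir size, spectral radius, input scaling, washout, regularization) rather than on backpropagation through the network.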
  • 13. Goodman, Garrett Design of a Novel Wearable Ultrasound Vest for Autonomous Monitoring of the Heart Using Machine Learning

    Doctor of Philosophy (PhD), Wright State University, 2020, Computer Science and Engineering PhD

    As the population of older individuals increases worldwide, the number of people with cardiovascular issues and diseases is also increasing. The rate at which individuals in the United States and worldwide succumb to Cardiovascular Disease (CVD) is rising as well. Approximately 2,303 Americans die from some form of CVD per day, according to the American Heart Association. Furthermore, the Centers for Disease Control and Prevention state that 647,000 Americans die yearly from some form of CVD, which equates to one person every 37 seconds. Finally, the World Health Organization reports that the number one cause of death globally is CVD, in the form of either myocardial infarctions or strokes. The primary ways of assisting individuals affected by CVD are improved treatments, monitoring research, or primary and secondary prevention measures. For cardiovascular structural monitoring, there are multiple ways of viewing the human heart: Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Computed Tomography (CT), and Ultrasonography are the four fundamental imaging techniques. Continuous monitoring with these imaging techniques, however, is far from currently possible. Large financial cost and size (MRI), radiation exposure (PET and CT), or necessary physician assistance (Ultrasonography) are the current primary problems. Of the four methodologies, though, Ultrasonography allows for multiple configurations, is the least expensive, and has no detrimental side effects for the patient. Therefore, in an effort to improve continuous monitoring capabilities for cardiovascular health, we design a novel wearable ultrasound vest to create a near 3D model of the heart in real-time. Specifically, we provide a structural modeling approach specific to this system's design via a Stereo Vision 3D modeling algorithm.
Similarly, we introduce multiple Stochastic Petri Net (SPN) models of the heart for future functiona (open full item for complete abstract)

    Committee: Nikolaos G. Bourbakis Ph.D. (Advisor); Soon M. Chung Ph.D. (Committee Member); Yong Pei Ph.D. (Committee Member); Iosif Papadakis Ktistakis Ph.D. (Committee Member); Konstantina Nikita Ph.D. (Committee Member); Anthony Pothoulakis M.D. (Other) Subjects: Biomedical Engineering; Biomedical Research; Computer Science; Medical Imaging
  • 14. Rajan, Rachel Semi Supervised Learning for Accurate Segmentation of Roughly Labeled Data

    Master of Science (M.S.), University of Dayton, 2020, Electrical and Computer Engineering

    Recent advancements in neural networks have earned them immense popularity in computer vision applications including image classification, semantic segmentation, object detection, and many more. Studies show that semantic segmentation has always been a challenging task in computer vision: it requires a significant number of pixel-level annotations to assign a label to each image pixel. For supervised deep learning techniques, however, the unavailability of labeled data has limited applications for accurate semantic segmentation. Hence, an enhanced adversarial learning approach to semi-supervised segmentation is proposed for incremental training of the deep-learning-based model, utilizing unlabeled data to achieve better learning performance. Studies reveal that unlabeled data combined with a small amount of labeled data can improve overall performance considerably. Since most existing methods use weakly labeled images, the proposed technique utilizes unlabeled instances to improve the segmentation model. A Generative Adversarial Network (GAN) based semi-supervised framework is implemented here. It consists mainly of a generator and a discriminator: the generator provides extra training examples to the classifier, while the discriminator either assigns labels to samples from the possible classes or marks them with a pseudo label. The main motivation of this implementation is that adding many pseudo labels draws the real samples closer together in the feature space, hence improving pixel-level classification. Experiments on publicly available datasets, such as PASCAL VOC 2012 and the PODOCYTE benchmark dataset released by the University at Buffalo, demonstrate the effectiveness of the proposed method.

    Committee: Vijayan Asari Dr. (Advisor) Subjects: Computer Engineering
  • 15. Sommer, Nathan A Machine Learning Approach to Controlling Musical Synthesizer Parameters in Real-Time Live Performance

    PhD, University of Cincinnati, 2020, Engineering and Applied Science: Computer Science and Engineering

    Musicians who perform with electronic synthesizers often adjust synthesis parameters during live performance to achieve more expressive sounds. Enabling the performer to teach a computer to make these adjustments automatically during performance frees the performer from this responsibility, while maintaining an expressive sound in line with the performer's musical vision. We have created a machine learning system called Larasynth that can be trained by a musician to make these parameter adjustments in real-time during live performances. Larasynth is trained using examples in the form of MIDI files created by the user. Learning is achieved using Long Short-Term Memory (LSTM) recurrent neural networks. To accomplish this, we have devised a set of features which capture the state of the synthesizer controller at regular intervals and are used to make regular predictions of parameter values using an LSTM network. To achieve sufficient generalization during training, transformations are applied to the training data set before each training epoch to simulate variations that may occur during performance. We have also created a new lightweight LSTM library suitable for small networks under real-time constraints. In this thesis we present details behind Larasynth's implementation and use, and experiments that were performed to demonstrate Larasynth's ability to learn behaviors based on different musical situations.

    Committee: Anca Ralescu Ph.D. (Committee Chair); Yizong Cheng Ph.D. (Committee Member); Chia Han Ph.D. (Committee Member); Mara Helmuth D.M.A. (Committee Member); Ali Minai Ph.D. (Committee Member) Subjects: Computer Science
  • 16. Srinivasan, Ramprakash Computational Models of the Production and Perception of Facial Expressions

    Doctor of Philosophy, The Ohio State University, 2018, Electrical and Computer Engineering

    By combining different facial muscle actions, called Action Units (AUs), humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science have long hypothesized the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional Magnetic Resonance Imaging (fMRI), we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the pattern analysis decoder. Research in face perception and emotion theory requires very large annotated databases of images of facial expressions of emotion. Useful annotations include AUs and their intensities, as well as emotion category. This process cannot be practically achieved manually. Herein, we present a novel computer vision algorithm to annotate a large database of a million images of facial expressions of emotion from the wild (i.e., face images downloaded from the Internet). We further use WordNet to download 1,000,000 images of facial expressions with associated emotion keywords from the Internet. The downloaded images are then automatically annotated with AUs, AU intensities and emotion categories by our algorithm. The result is a highly useful database that can be readily queried using semantic descriptions for applications in computer vision, affective computing, social and cognitive psychology. Color is a fundamental image feature of facial expressions. 
For example, when we furrow our eyebrows in anger, blood rushes in and a reddish color (open full item for complete abstract)

    Committee: Aleix Martinez (Advisor); Julie Golomb (Committee Member); Yuan Zheng (Committee Member) Subjects: Cognitive Psychology; Computer Engineering; Computer Science; Social Psychology
  • 17. Ghayoumi, Mehdi FACIAL EXPRESSION ANALYSIS USING DEEP LEARNING WITH PARTIAL INTEGRATION TO OTHER MODALITIES TO DETECT EMOTION

    PHD, Kent State University, 2017, College of Arts and Sciences / Department of Computer Science

    Analysis of human emotion is very important in the field of social robotics, where a new generation of humanoids and other smart devices will interact with humans. Emotional expression is a universal language for interaction with humans, and understanding human emotions is a necessary and important step for human-computer interaction. Human emotion is expressed as a complex combination of facial expressions, speech (including silence), gestures, postures, various limb motions, gaze, and blinking. Multiple research models have been developed for limited facial expression analysis, speech-based emotion analysis, limited gesture analysis, and their limited integration. However, such analysis is limited by single-frame analysis, time-efficiency, limited handling of occlusion, the lack of a notion of color in facial expression analysis, lack of exploitation of symmetry, lack of dynamic weighting between the modalities based upon environmental requirements, and restriction to six basic emotions. This research develops a convolutional neural network based deep learning model that recognizes human facial expressions by exploiting a combination of symmetrical representation to handle occlusion; a unified model based upon transforming facial muscle motion to geometric feature points; fusion of multiple modalities; and fast hashing techniques for real-time emotion recognition. It also proposes a new model for recognition of mixed emotion in real-time.

    Committee: Arvind K. Bansal (Advisor); Javed I. Khan (Committee Member); Cheng Chang Lu (Committee Member); Stephen B. Fountain (Committee Member); William E. Merriman (Committee Member) Subjects: Artificial Intelligence; Computer Science; Robotics; Robots
  • 18. Shakeel, Amlaan Service robot for the visually impaired: Providing navigational assistance using Deep Learning

    Master of Science, Miami University, 2017, Computational Science and Engineering

    Assistive technology helps improve the day-to-day activities of people with disabilities. One of the methods utilized by assistive technologists employs robots, called service robots. This thesis explores the idea of a service robot for the visually impaired to assist with navigation and is inspired by the use of guide dogs. The focus of this thesis is to develop a robot that achieves autonomous indoor navigation using computer vision to identify image-based goals in an unfamiliar environment. The method presented in this thesis utilizes a deep learning framework, called Faster R-CNN, to train a computer to classify and localize exit signs in real time. A proof of concept is presented using an NVIDIA Jetson and TurtleBot, a robot kit, which runs the Robot Operating System (ROS) software development framework. The model is trained successfully using Faster R-CNN and is validated. The model is used for real-time object classification on the prototype robot.

    Committee: Yamuna Rajasekhar (Advisor); John Femiani (Committee Member); Donald Ucci (Committee Member) Subjects: Computer Science; Electrical Engineering; Robotics
  • 19. Narasimhan, Sridhar Topological design of networks for data communication systems /

    Doctor of Philosophy, The Ohio State University, 1987, Graduate School

    Committee: Not Provided (Other) Subjects: Business Administration
  • 20. Lu, Ching-sung Automated validation of communication protocols /

    Doctor of Philosophy, The Ohio State University, 1986, Graduate School

    Committee: Not Provided (Other) Subjects: Computer Science