Search Results (1 - 25 of 116 Results)

Imbulgoda Liyangahawatte, Gihan Janith Mendis. Hardware Implementation and Applications of Deep Belief Networks
Master of Science in Engineering, University of Akron, 2016, Electrical Engineering
Deep learning is a subset of machine learning that contributes widely to the contemporary success of artificial intelligence. The essential idea of deep learning is to process complex data by abstracting hierarchical features via a deep neural network structure. As one type of deep learning technique, the deep belief network (DBN) has been widely used in various application fields. This thesis proposes an approximation-based hardware realization of DBNs that requires low hardware complexity. It also explores a set of novel applications of the DBN-based classifier that will benefit from a fast implementation of DBNs. In my work, I have explored the application of DBNs to automatic modulation classification for cognitive radio, Doppler radar sensing for detection and classification of micro unmanned aerial systems, cyber security applications that detect false data injection (FDI) attacks and localize flooding attacks, and prediction of link properties in social networks. The work in this thesis paves the way for further investigation and realization of deep learning techniques to address critical issues in various novel application fields.
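
For readers unfamiliar with the multiplierless idea referenced in the keywords, the sketch below illustrates one common low-complexity approximation: rounding weights to signed powers of two so that each multiply in a DBN layer reduces to a bit shift and an add. This is a generic illustration only, not the architecture developed in the thesis; the layer size, sigmoid activation, and the `quantize_pow2` helper are assumptions.

```python
import numpy as np

def quantize_pow2(w):
    """Round each weight to a nearby signed power of two (or zero), so that
    a multiply-by-weight can be realized as a bit shift in hardware."""
    sign = np.sign(w)
    mag = np.where(np.abs(w) > 0, np.abs(w), 1.0)   # avoid log2(0)
    exp = np.round(np.log2(mag))
    return np.where(np.abs(w) > 0, sign * 2.0 ** exp, 0.0)

def dbn_layer_forward(v, W, b):
    """One DBN (RBM-style) layer: h = sigmoid(W_q v + b) with quantized weights."""
    W_q = quantize_pow2(W)
    pre = W_q @ v + b            # in hardware this is shifts and adds only
    return 1.0 / (1.0 + np.exp(-pre))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(16, 32))
h = dbn_layer_forward(rng.random(32), W, np.zeros(16))
```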

Committee:

Jin Wei (Advisor); Arjuna Madanayaka (Committee Co-Chair); Subramaniya Hariharan (Committee Member)

Subjects:

Artificial Intelligence; Computer Engineering; Electrical Engineering; Engineering; Experiments; Information Technology

Keywords:

deep belief networks; multiplierless digital architecture; Xilinx FPGA implementations; low-complexity; applications of deep belief networks; spectral correlation function; modulation classification; drone detection; doppler radar; cyber security

Moharreri, Kayhan. Augmenting Collective Expert Networks to Improve Service Level Compliance
Doctor of Philosophy, The Ohio State University, 2017, Computer Science and Engineering
This research introduces and develops the new subfield of large-scale collective expert networks (CEN) concerned with time-constrained triaging, which has become critical to the delivery of increasingly complex enterprise services. The main research contribution augments existing human-intensive interactions in the CEN with models that use ticket content and transfer sequence histories to generate assistive recommendations. This is achieved with a recommendation framework that improves the performance of the CEN by: (1) resolving incidents to meet customer time constraints and satisfaction, (2) conforming to previous transfer sequences that have already achieved their Service Levels, and (3) addressing trust to encourage adoption of recommendations. A novel basis of this research is the exploration and discovery of resolution process patterns, and leveraging them towards the construction of an assistive resolution recommendation framework. Additional interesting new discoveries regarding CENs include the existence of resolution workflows and their frequent use to carry out service-level-effective resolution on regular content. In addition, the ticket-specific expertise of the problem solvers and their dynamic ticket load were found to be factors in the time taken to resolve an incoming ticket. Also, transfers were found to reflect the experts' local problem-solving intent with respect to the source and target nodes. The network performs well if certain transfer intents (such as resolution and collective) are exhibited more often than others (such as mediation and exploratory). The assistive resolution recommendation framework incorporates appropriate strategies for addressing the entire spectrum of incidents. This framework consists of a two-level classifier with the following parts: (1) a content tagger for routine/non-routine classification, (2) a sequence classifier for resolution workflow recommendation, (3) response time estimation based on learned dynamics of the CEN (i.e., expertise and ticket load), and (4) transfer intent identification. Our solution makes reliable proactive recommendations only in the case of adequate historical evidence, thus helping to maintain a high level of trust with the interacting users in the CEN. By separating well-established resolution workflows from incidents that depend on experts' experiential and `tribal' knowledge for resolution, this research shows a 34% performance improvement over the existing content-aware greedy transfer model; it is also estimated that there will be a 10% reduction in the volume of service-level-breached tickets. The contributions are shown to benefit enterprise support and delivery services by providing (1) lower decision and resolution latency, (2) lower likelihood of service-level violations, and (3) higher workforce availability and effectiveness. More generally, the contributions of this research are applicable to a broad class of problems where time-constrained, content-driven problem-solving by human experts is a necessity.

Committee:

Jayashree Ramanathan (Advisor); Rajiv Ramnath (Committee Member); Srinivasan Parthasarathy (Committee Member); Gagan Agrawal (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Information Science; Information Technology

Keywords:

IT Service Management, Collective Expert Networks, Process Discovery, Ticket Routing Recommendations, Resolution Time Estimation, Event Mining, IT Service Support, Service Level Compliance, Human-in-the-loop, Learning from Enterprise Event Data

Robinson, Jace D. A Model for Seasonal Dynamic Networks
Master of Science (MS), Wright State University, 2018, Computer Science
Sociotechnological and geospatial processes exhibit time-varying structure that makes insight discovery challenging. This paper presents a statistical model of systems with seasonal dynamics, modeled as a dynamic network, to address this challenge. It assumes that the probability of edge formation depends on a type assigned to the incident nodes and on the current time, with time dependencies modeled by unique seasonal processes. The model is studied on several synthetic and real datasets. It is shown to fit seasonal datasets with higher fidelity than existing network models while remaining equally accurate for networks with randomly changing structure. The model is found to be twice as accurate at predicting future edge counts as competing models on New York City taxi trips, U.S. airline flights, and email communication within the Enron company. An anomaly detection use case for the model is shown for NYC traffic dynamics and email communications between Enron employees.
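
As a rough sketch of the modeling idea described above (edge formation probabilities driven by the incident node types and a seasonal time effect), the snippet below samples snapshots of a small two-type dynamic network. The sinusoidal seasonal factor, the Poisson-to-Bernoulli conversion, and all parameter values are illustrative assumptions, not the model fitted in the thesis.

```python
import numpy as np

def seasonal_rate(base_rate, t, period=24, amplitude=0.5):
    """Edge rate at time t: a type-pair base rate modulated by a seasonal
    cycle (a sinusoid is assumed here purely for illustration)."""
    return base_rate * (1.0 + amplitude * np.sin(2 * np.pi * t / period))

def sample_snapshot(node_types, block_rates, t, rng):
    """Sample one snapshot: edge (i, j) forms with a probability that depends
    on the incident node types and the current time."""
    n = len(node_types)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            rate = seasonal_rate(block_rates[node_types[i], node_types[j]], t)
            p = 1.0 - np.exp(-rate)          # Poisson rate -> Bernoulli prob
            adj[i, j] = adj[j, i] = rng.random() < p
    return adj

rng = np.random.default_rng(1)
types = rng.integers(0, 2, size=20)           # two node types
rates = np.array([[0.8, 0.2], [0.2, 0.5]])    # type-pair base rates
snapshots = [sample_snapshot(types, rates, t, rng) for t in range(48)]
```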

Committee:

Derek Doran, Ph.D. (Committee Chair); Tanvi Banerjee, Ph.D. (Committee Member); Fred Garber, Ph.D. (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Information Science

Keywords:

Stochastic Blockmodel; Dynamic Networks; Seasonal Time Series; Kalman Filter

Howard, Shaun Michael. Deep Learning for Sensor Fusion
Master of Sciences (Engineering), Case Western Reserve University, 2017, EECS - Computer and Information Sciences
The use of multiple sensors in modern day vehicular applications is necessary to provide a complete outlook of surroundings for advanced driver assistance systems (ADAS) and automated driving. The fusion of these sensors provides increased certainty in the recognition, localization and prediction of surroundings. A deep learning-based sensor fusion system is proposed to fuse two independent, multi-modal sensor sources. This system is shown to successfully learn the complex capabilities of an existing state-of-the-art sensor fusion system and generalize well to new sensor fusion datasets. It has high precision and recall with minimal confusion after training on several million examples of labeled multi-modal sensor data. It is robust, has a sustainable training time, and has real-time response capabilities on a deep learning PC with a single NVIDIA GeForce GTX 980Ti graphical processing unit (GPU).
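
As a minimal, assumption-laden sketch of the multi-stream idea behind such a fusion network (the keywords mention multi-stream feedforward and recurrent variants), the snippet below encodes two sensor streams separately and fuses their features in a shared classification head. The layer sizes, the ReLU/softmax choices, and the `two_stream_fusion` name are placeholders, not the thesis's architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def two_stream_fusion(radar_x, camera_x, params):
    """Forward pass of a toy multi-stream fusion net: each sensor stream is
    encoded separately, the encodings are concatenated, and a shared head
    produces class probabilities."""
    h_r = relu(params["Wr"] @ radar_x + params["br"])
    h_c = relu(params["Wc"] @ camera_x + params["bc"])
    fused = np.concatenate([h_r, h_c])
    logits = params["Wo"] @ fused + params["bo"]
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
params = {"Wr": rng.normal(size=(8, 4)),  "br": np.zeros(8),
          "Wc": rng.normal(size=(8, 16)), "bc": np.zeros(8),
          "Wo": rng.normal(size=(3, 16)), "bo": np.zeros(3)}
probs = two_stream_fusion(rng.random(4), rng.random(16), params)
```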

Committee:

Wyatt Newman, Dr (Committee Chair); M. Cenk Cavusoglu, Dr (Committee Member); Michael Lewicki, Dr (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

deep learning; sensor fusion; deep neural networks; advanced driver assistance systems; automated driving; multi-stream neural networks; feedforward; multilayer perceptron; recurrent; gated recurrent unit; long-short term memory; camera; radar;

Khazanova, Yekaterina. Experiments with Neural Network Libraries
MS, University of Cincinnati, 2013, Engineering and Applied Science: Computer Science
When solving problems using neural networks, the user/developer is usually limited by the number of neural networks he or she has already studied or descriptions of which are readily available. If there were a library of many neural networks, one easy to access and intuitive to use, it would greatly assist developers in finding the optimum method for their needs, as well as in determining on average which neural networks are better suited to which problems. This project was part of a larger work to compile a library of neural networks. During the course of the research we explored the Zak Model, the Torus Attractor Model, the Hopfield Model, and the Kaneko Model - their usefulness in solving pattern recognition problems and their stability.
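
Of the models listed, the Hopfield model is the easiest to illustrate compactly. The sketch below is a generic Hopfield associative memory with Hebbian storage and asynchronous recall, included only as background; it is not code from the thesis's library, and the pattern data are made up.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: W = (1/n) * sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10, rng=None):
    """Asynchronous updates: flip one unit at a time toward its local field."""
    if rng is None:
        rng = np.random.default_rng(0)
    state = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, -1, -1])   # corrupted copy of pattern 0
print(recall(W, noisy))
```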

Committee:

Anca Ralescu, Ph.D. (Committee Chair); Elena N. Benderskaya, Ph.D. (Committee Member); Chia Han, Ph.D. (Committee Member)

Subjects:

Artificial Intelligence

Angeline, Peter John. Evolutionary algorithms and emergent intelligence
Doctor of Philosophy, The Ohio State University, 1993, Graduate School

Committee:

Not Provided (Other)

Subjects:

Artificial Intelligence

Tanner, Michael Clay. Explaining knowledge systems: justifying diagnostic conclusions
Doctor of Philosophy, The Ohio State University, 1989, Graduate School

Committee:

Not Provided (Other)

Subjects:

Artificial Intelligence

Keuneke, Anne Marie. Machine understanding of devices: causal explanation of diagnostic conclusions
Doctor of Philosophy, The Ohio State University, 1989, Graduate School

Committee:

Not Provided (Other)

Subjects:

Artificial Intelligence

Lipkin, Ilya. Testing Software Development Project Productivity Model
Doctor of Philosophy in Manufacturing and Technology Management, University of Toledo, 2011, Manufacturing and Technology Management

Software development is an increasingly influential factor in today’s business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted.

There is no accurate model or measure available to guide an organization's software development estimation, and existing estimation models often underestimate software development effort by as much as 500 to 600 percent. To address this issue, existing models are usually calibrated using local data with a small sample size, with the resulting estimates not offering improved cost analysis.

This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and theoretical analysis based on Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD.

Practical implications of this study allow practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers.

Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control, and Simulation. This research validates findings from previous work concerning software project productivity and leverages those results in this study. The hypothesized project productivity model provides statistical support and validation of expert opinions used by practitioners in the field of software project estimation.

Committee:

Jeen Su Lim (Committee Chair); James Pope (Committee Member); Michael Mallin (Committee Member); Michael Jakobson (Committee Member); Wilson Rosa (Advisor)

Subjects:

Aerospace Engineering; Armed Forces; Artificial Intelligence; Business Administration; Business Costs; Computer Engineering; Computer Science; Economic Theory; Economics; Electrical Engineering; Engineering; Industrial Engineering; Information Science; Information Systems; Information Technology; Management; Marketing; Mathematics

Keywords:

"Software Estimation"; "Software Cost Model"; "Department of Defense Data"; COCOMO; "Software Project Productivity Model"

Al-Olimat, Hussein S. Optimizing Cloudlet Scheduling and Wireless Sensor Localization using Computational Intelligence Techniques
Master of Science, University of Toledo, 2014, Engineering (Computer Science)
Optimization algorithms are complex procedures that consider many elements when optimizing a specific problem. Cloud computing (CCom) and wireless sensor networks (WSNs) are full of optimization problems that need to be solved. One of the main problems of using clouds is underutilization of the reserved resources, which causes longer makespans and higher usage costs. The optimization of sensor nodes' power consumption in WSNs is also critical, because sensor nodes are small in size and have constrained resources in terms of power/energy, connectivity, and computational power. This thesis addresses how CCom systems and WSNs can take advantage of computational intelligence techniques using single- or multi-objective particle swarm optimization (SOPSO or MOPSO), with the overall aim of concurrently minimizing makespans, localization time, and energy consumption during localization, and maximizing the number of nodes fully localized. The cloudlet scheduling method is implemented inside CloudSim, advancing the work of the broker, which was able to maximize resource utilization and minimize the makespan, demonstrating improvements of 58% in some cases. Additionally, the localization method optimized the power consumption during a trilateration-based localization (TBL) procedure through the adjustment of sensor nodes' output power levels. Finally, a parameter study of the applied PSO variants for WSN localization is performed, leading to results that show algorithmic improvements of up to 32% over the baseline results in the evaluated objectives.
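
The abstract relies on single- and multi-objective particle swarm optimization; as a hedged illustration, the snippet below is a textbook single-objective PSO loop applied to a toy cloudlet-to-VM makespan objective. The inertia and acceleration constants, the assignment encoding, and the `toy_makespan` function are assumptions for demonstration, not the CloudSim-based method evaluated in the thesis.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, bounds=(0.0, 1.0),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic single-objective PSO: particles track personal bests and the
    swarm tracks a global best (not the thesis's exact SOPSO/MOPSO)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()

def toy_makespan(x):
    """Toy objective: spread 20 cloudlets over 4 VMs; the position vector
    encodes which VM each cloudlet is assigned to."""
    lengths = np.linspace(1.0, 3.0, 20)
    vm = (x * 4).astype(int).clip(0, 3)
    return max(lengths[vm == k].sum() for k in range(4))

best, best_cost = pso_minimize(toy_makespan, dim=20)
```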

Committee:

Mansoor Alam (Committee Chair); Robert Green, II (Committee Co-Chair); Weiqing Sun (Committee Member); Vijay Devabhaktuni (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Engineering

Keywords:

Cloud Computing; Particle Swarm Optimization; Random Inertia Weight; Cloudlet Scheduling; Makespan; Utilization; CloudSim; Wireless Sensor Network; Trilateration; Localization; Multi-objective; ZigBee; RSSI; Genetic Algorithm; Simulated Annealing

Shao, Yuanlong. Learning Sparse Recurrent Neural Networks in Language Modeling
Master of Science, The Ohio State University, 2014, Computer Science and Engineering
In the context of statistical language modeling, we explored the task of learning an Elman network with sparse weight matrices, as a pilot study towards learning a sparsely connected fully recurrent neural network, which would be potentially useful in many cases. We also explored how efficient and scalable it can be in practice. In particular, we explored these tasks: (1) We adapted the Iterative Hard Thresholding (IHT) algorithm into BackPropagation Through Time (BPTT) learning. (2) To accelerate convergence of the IHT algorithm, we designed a scheme for expanding the network by replicating the existing hidden neurons, so that training can start from a small and dense network which is already learned. (3) We implemented this algorithm on GPU. Under small minibatch sizes and large network sizes (e.g., 2000 hidden neurons) it achieves a 160 times speedup compared to the RNNLM toolkit on CPU. With larger minibatch sizes there could be another 10 times speedup, though the convergence rate becomes an issue in such cases and further effort is needed to address this problem. (4) Without a theoretical convergence guarantee of the IHT algorithm in our problem setting, we did an empirical study showing that learning a sparse network does give competitive perplexity in language modeling. In particular, we showed that a sparse network learned in this way can outperform a dense network when the number of effective parameters is kept the same. (5) We gathered performance metrics comparing the computational efficiency of the matrix operations of interest in both sparse and dense settings. The results suggest that for network sizes which we can train in reasonable time at this moment, it is hard for sparse matrices to run faster, unless we are allowed to have very sparse networks. Thus for research purposes we may want to focus on using dense matrices, while for engineering purposes a more flexible matrix design leveraging the power of dense and sparse matrices might be necessary.
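
To make the iterative hard thresholding (IHT) step concrete, the sketch below applies a dense gradient update and then keeps only the k largest-magnitude weights, which is the core of IHT. Interleaving this with full BPTT on an Elman network, as the thesis does, is not reproduced here, and the matrix sizes, learning rate, and sparsity level are arbitrary.

```python
import numpy as np

def hard_threshold(W, k):
    """Keep only the k largest-magnitude weights; zero the rest (IHT step)."""
    if k >= W.size:
        return W
    thresh = np.sort(np.abs(W), axis=None)[-k]
    return np.where(np.abs(W) >= thresh, W, 0.0)

def sparse_sgd_step(W, grad, lr, k):
    """Schematic training step: dense gradient update followed by hard
    thresholding, in the spirit of interleaving IHT with BPTT updates."""
    W = W - lr * grad
    return hard_threshold(W, k)

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))
grad = rng.normal(size=(100, 100))
W = sparse_sgd_step(W, grad, lr=0.01, k=2000)   # roughly 20% of weights survive
print(np.count_nonzero(W))
```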

Committee:

Eric Fosler-Lussier, Dr. (Advisor); Mikhail Belkin, Dr. (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

language modeling; recurrent neural networks; sparse recurrent neural networks

Baumgartner, Dustin. Global-Local Hybrid Classification Ensembles: Robust Performance with a Reduced Complexity
Master of Science in Engineering, University of Toledo, 2009, Computer Science

The current trend in machine learning ensemble classifier research is to improve performance, at times marginally, beyond what existing methods can deliver. This tendency has consequently complicated ensemble designs to a level that is possibly not justified for many domains. This thesis proposes a new design for classification ensembles, Global-Local Hybrid Ensemble (GLHE), which offers robust performance with a less complex design than comparably performing ensembles. GLHE exploits two sources of diversity in its base-classifiers, heterogeneous (hybrid) and homogeneous. Heterogeneity is achieved with two learning algorithms – one global and one local – that are assumed to have an intrinsic difference in learning to ensure high levels of diversity. Homogeneity is implemented through the use of multiple parameterizations of the same learning algorithm to allow both global and local learners to explore their respective region of the hypothesis space while also creating additional, albeit small, diversity among the base-classifiers.

A comprehensive simulation study is conducted to profile the performance capabilities of the proposed design, considering three types of classification performance measures, three types of diversity measures, and training/testing execution time as features of analysis. GLHE is implemented with decision tree (global) and nearest-neighbor (local) learners, and its performance on 46 benchmark datasets is compared to that of more than 70 ensembles from the literature and in-house simulations. Specific hypotheses are tested and evaluated with nonparametric statistical significance calculations. First, it is shown that GLHE performs comparably to hybrid ensembles with more learning algorithms (more complexity) and better than data manipulation ensembles. Second, the importance of the co-presence of global-local learners and heterogeneous/homogeneous diversity in the GLHE design is validated, along with our assumption that the global and local learners produce high levels of diversity. Finally, we create another implementation of GLHE with neural networks, which shows that the design is generic and allows for trade-offs between performance robustness and execution speed. Another experiment compares the performance of GLHE against those achieved by contestants in a data mining competition. Although the contestants likely fine-tuned their algorithms to optimize performance, the standard GLHE implementation still scores no worse than half of them.

The results of the simulation study indicate that GLHE is indeed robust, even in comparison to more complex ensembles. Major contributions of this work are 1) global and local learners can effectively create high levels of diversity, 2) the GLHE design may offer a compromise between the robustness of traditional hybrid ensembles and the simplicity of data manipulation ensembles – an area not satisfied by other ensemble designs, and 3) the GLHE design is a suitable technique for applying to new problems if robust performance is needed but users do not have resources for complex designs or in-depth empirical analysis.
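
A minimal way to reproduce the spirit of the GLHE design (one global learner family, one local learner family, several parameterizations of each, combined by voting) with off-the-shelf tools is sketched below using scikit-learn. The dataset, parameter grids, and hard-voting combiner are placeholders and do not reflect the thesis's exact configuration or its 46-dataset evaluation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Global learners: decision trees at several depths (multiple parameterizations
# of one algorithm). Local learners: k-NN with several neighborhood sizes.
estimators = (
    [(f"tree_d{d}", DecisionTreeClassifier(max_depth=d, random_state=0))
     for d in (3, 5, None)] +
    [(f"knn_k{k}", KNeighborsClassifier(n_neighbors=k)) for k in (1, 5, 15)]
)
glhe = VotingClassifier(estimators=estimators, voting="hard")

X, y = load_breast_cancer(return_X_y=True)   # placeholder benchmark dataset
print(cross_val_score(glhe, X, y, cv=5).mean())
```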

Committee:

Gursel Serpen (Advisor); Henry Ledgard (Committee Member); Han Yu (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

ensemble; robust classification; global-local learning; heterogeneous-homogeneous diversity

Weinkam, Matthew J. Very Normal Things
Master of Arts, Miami University, 2011, English
This thesis is a collection of very normal things called stories. Stories concerning the usual trials, tribulations, triumphs, and daily goings-on of your basic, run-of-the-mill characters. It’s about what you’d expect. Plots, settings, conflicts, narratives made up of sentences and paragraphs and what have you. There are a lot of words in here. Meant to communicate some such thing or other. Nothing out of the ordinary. Standard stuff really. There’s nothing to see here.

Committee:

Margaret Luongo (Committee Chair); Joseph Bates (Committee Member); Tim Melley (Committee Member)

Subjects:

Artificial Intelligence; Cartography; Dance; Morphology; Virology; Wood

Keywords:

slacks; death; stale Apple Jacks; questions; robins; more death; regret

Marvel, Jeremy Alan. Autonomous Learning for Robotic Assembly Applications
Doctor of Philosophy, Case Western Reserve University, 2010, EECS - Computer Engineering
Robotic manipulators have been used to perform a myriad of repetitive industrial tasks with varying degrees of success and precision over the past several decades. Their use in mechanical assembly tasks, however, has been relatively minor due to both their limitations in classical position-control paradigms and the difficulty in algorithmically describing the process of assembly. Though the technology for sensing and compliantly adapting to physical contact has improved, robotic assembly solutions are still largely relegated to simple responsibilities such as peg-in-hole and rigidly fixtured configurations. This dissertation represents the progressive development and assessment of self-guided learning for model-assisted robotic assembly applications. Utilizing industrial manipulators outfitted with six-degree of freedom (DoF) force/torque sensors for compliant motion control, a method for self-optimization of assembly search parameters is developed that allows the robot to determine when its performance has improved using simple metrics of success. Based on prior experiences, the robot then generates internal representations—or models—of the assembly process in order to attempt to predict when certain parameter sequences are likely to result in superior assembly performances. This method is further augmented to algorithmically determine the quality and anticipated effectiveness of the models based on their profiles in the parameter-performance mapping space. Analysis of simulations with arbitrarily-large N-dimensional parameter spaces suggest that even relatively simple models are capable of abstracting useful information of assemblies, even in the presence of noise. These results were then corroborated by running physical trials with and without assistive models on a variety of automobile part assemblies.

Committee:

Wyatt Newman, PhD (Committee Chair); Francis Merat, PhD (Committee Member); Roger Quinn, PhD (Committee Member); Kiju Lee, PhD (Committee Member)

Subjects:

Artificial Intelligence; Engineering; Robots

Keywords:

Assembly; Machine Learning; Robot Learning

Young, William Albert. A Team-Compatibility Decision Support System to Model the NFL Knapsack Problem: An Introduction to HEART
Doctor of Philosophy (PhD), Ohio University, 2010, Industrial and Systems Engineering (Engineering and Technology)

Many tangible and intangible factors are considered when making a hiring decision in the National Football League (NFL). One difficult decision that executives must make is whom they will select in the NFL Draft or which NFL Free Agent they will sign in the offseason. Mathematical models can be developed to aid humans in their decision-making process because they are able to find non-obvious relationships within numeric data. HEART, or Heuristic Evaluation of Artificially Replaced Teammates, is a mathematical model that utilizes machine learning and statistical methodologies to aid managers with their hiring decisions. HEART is not intended to be a ‘decision tool,' or a tool that explicitly states whom a team should hire. A ‘decision tool' would need to encompass not only the tangible information available to hiring managers but also intangible aspects that are difficult or impossible for a mathematical model to capture accurately. HEART is a ‘decision support tool' that provides additional information for hiring managers to use in conjunction with other available resources.

The goal of HEART is to determine an Expected and Theoretical Contribution Value for a potential hiring candidate, which represents a player's ability to increase or decrease the estimated number of games won by a particular team in an upcoming season. This value is significant because it represents a player's level of compatibility with potential teammates and considers the effect that aging has on players' physiological ability to play football. HEART is also designed to allow direct comparisons of players from any playing position as well as players from either college or professional leagues.

From a quantitative standpoint, the results of the HEART methodology were statistically validated using both parametric and nonparametric testing procedures. This validation procedure analyzed the results collected from a convenient sample of experts who participated in a survey instrument. The validation results show that the HEART methodology provided at least ‘Useful' results, and at times ‘Very Useful' results, using a five-point Likert scale for a case study involving the 2007 NFL Draft Class and Free Agent Players.

Committee:

Gary Weckman, PhD (Advisor); Masel Dale, PhD (Committee Member); Kaya Savas, PhD (Committee Member); Snow Andrew, PhD (Committee Member); Genaidy Ashraf, PhD (Committee Member)

Subjects:

Artificial Intelligence; Engineering; Industrial Engineering; Management

Keywords:

Team Compatibility; Decision Support System; NFL Draft; NFL Free Agency; Performance Aging Curves; Combine; Theoretical and Expected Contribution Values; League-Wide, Team-Specific; Quantitative Analysis in Sports

Sam, Monica. Adapting the Search Space while Limiting Damage during Learning in a Simulated Flapping Wing Micro Air Vehicle
Doctor of Philosophy (PhD), Wright State University, 2017, Computer Engineering
Cyber-Physical Systems (CPS) are characterized by closely coupled physical and software components that operate simultaneously on different spatial and temporal scales; exhibit multiple and distinct behavioral modalities; and interact with one another in ways not entirely predictable at the time of design. A commonly appearing type of CPS is a system that contains one or more smart components that adapt locally in response to global measurements of whole-system performance. An example of a smart-component robotic CPS is a Flapping Wing Micro Air Vehicle (FW-MAV) that contains wing motion oscillators that control their wing flapping patterns to enable the whole system to fly precisely after the wings are damaged in unpredictable ways. Localized learning of wing flapping patterns using meta-heuristic search optimizing flight precision has been shown effective in recovering flight precision after wing damage. However, such methods provide no insight into the nature of the damage that necessitated the learning. Additionally, if the learning is done while the FW-MAV is in service, it is possible for the search algorithm to actually damage the wings even more due to overly aggressive testing of candidate solutions. In previous work, a method was developed to extract estimates of wing damage as a side effect of the corrective learning of wing motion patterns. Although effective, that method was lacking in two important respects. First, it did not settle on wing gait solutions quickly enough for the damage estimates to be created in a time acceptable to a user. Second, there were no protections against testing excessively aggressive wing motions that could potentially damage the system even further during the attempted behavior-level repair. This work addresses both of those issues by making modifications to the representation and search space of wing motion patterns potentially visited by the online metaheuristic search. The overarching goals were to lessen the time required to achieve effective repair and damage estimates and to avoid further damage to the wings by limiting the search's access to overly aggressive wing motions. The key challenge was understanding how to modify the representations and search space to provide the desired benefits without destroying the method's ability to find solutions at all. With the recent emergence of functional insect-sized and bird-sized FW-MAVs and an expected need to modify wing behavior in service, this study, believed to be the first of its kind, is of contemporary relevance.

Committee:

John Gallagher, Ph.D. (Advisor); Michael Raymer, Ph.D. (Committee Member); Mateen Rizki, Ph.D. (Committee Member); Joseph Slater, Ph.D. (Committee Member)

Subjects:

Artificial Intelligence; Computer Engineering

Keywords:

artificial intelligence; computer engineering

Cui, Chen. Convolutional Polynomial Neural Network for Improved Face Recognition
Doctor of Philosophy (Ph.D.), University of Dayton, 2017, Electrical and Computer Engineering
Deep learning is the state-of-the-art technology in pattern recognition, especially in face recognition. The robustness of a deep network leads to better performance as the training set becomes larger and larger. The Convolutional Neural Network (CNN) is one of the most popular deep learning technologies in the modern world. It helps obtain various features from multiple filters in the convolutional layer and performs well in handwritten digit classification. Unlike the unique structure of each handwritten digit, face features are more complex, and many difficulties exist for face recognition in the current research field, such as variations in lighting conditions, poses, ages, etc. So the limitation of the nonlinear feature fitting of the regular CNN appears in the face recognition application. In order to create a better fitting curve for face features, we introduce a polynomial structure to the regular CNN to increase the non-linearity of the obtained features. The modified architecture is named the Convolutional Polynomial Neural Network (CPNN). CPNN creates a polynomial input for each convolutional layer and captures the nonlinear features for better classification. We first prove the proposed concept with the MNIST handwritten digit database and compare the proposed CPNN with the regular CNN. Then, different parameters in CPNN are tested on the CMU AMP face recognition database. After that, the performance of the proposed CPNN is evaluated on three different face databases: CMU AMP, Yale and JAFFE, as well as on images captured in a real-world environment. The proposed CPNN obtains the best recognition rates (CMU AMP: 99.95%, Yale: 90.89%, JAFFE: 98.33%, Real World: 97.22%) when compared to other machine learning technologies. We plan to apply state-of-the-art structures, such as inception and residual connections, to the current CPNN to increase its depth and stability as future research work.
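
The key structural change described above, giving each convolutional layer a polynomial input, can be illustrated by stacking element-wise powers of the input as extra channels before convolving. The sketch below is a naive NumPy illustration under that reading of the abstract; the polynomial degree, kernel shapes, and helper names are assumptions, not the CPNN architecture itself.

```python
import numpy as np

def polynomial_channels(x, degree=2):
    """Stack element-wise powers of the input as extra channels, so the next
    convolution sees [x, x^2, ..., x^degree] (the 'polynomial input' idea)."""
    return np.concatenate([x ** d for d in range(1, degree + 1)], axis=0)

def conv2d_single(x, kernel):
    """Naive valid 2-D cross-correlation over stacked channels (illustration only)."""
    c, h, w = x.shape
    kc, kh, kw = kernel.shape          # kc must equal c
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + kh, j:j + kw] * kernel)
    return out

img = np.random.default_rng(0).random((1, 8, 8))       # one-channel input
poly = polynomial_channels(img, degree=3)               # now 3 channels
kern = np.random.default_rng(1).normal(size=(3, 3, 3))  # channels x kh x kw
feat = conv2d_single(poly, kern)
```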

Committee:

Vijayan Asari (Advisor)

Subjects:

Artificial Intelligence; Bioinformatics; Computer Engineering; Electrical Engineering

Keywords:

Deep Learning, Convolutional Polynomial Neural Network, Face Recognition, Computer Vision, Image Processing

Mallios, Stavros. Virtual Doctor: An Intelligent Human-Computer Dialogue System for Quick Response to People in Need
Doctor of Philosophy (PhD), Wright State University, 2018, Computer Science and Engineering PhD
One of the challenges of our society is the existence of chronic-related conditions and diseases among the elderly and people at risk. Apart from the welfare of people, a significant impact of this phenomenon is the accumulation of high financial costs for both individuals and health care systems. In order to address this issue and to reduce its effects, many efforts have been made towards preventing, identifying in early stages and, generally, managing chronic-related medical conditions and diseases. As a result, there has been a keen research and market interest in health monitoring devices during the past few decades. Nevertheless, despite the progress in the field of health monitoring, these devices are still unable to measure certain symptoms with sensors. A feasible solution to the aforementioned problem comes from the area of human-machine interaction. However, although human-machine interaction devices have advanced recently, they are still far from replacing the human from the interaction loop. Their major drawback is that they cannot reliably and efficiently respond to human requests, since they mainly behave as “answering machines”. Moreover, the majority of these systems fail to take certain human factors, such as one’s emotional condition, into consideration. In response to this need, we propose a Virtual Doctor system that is able to measure a patient’s pathological data and also competently extract their non-measurable symptoms by incorporating an intelligent human-computer dialogue system that is modeled with Stochastic Petri Nets. In addition to this, the dialogue system will also be able to take human communication factors, such as the patient’s emotional condition, as well as other resources, such as their medical history, into account. Therefore, the ultimate goal of the general system is health monitoring, quick and reliable prognosis of a human’s health condition, real-time response to critical situations and, generally, the life improvement for certain categories of people in need.

Committee:

Nikolaos Bourbakis, Ph.D. (Advisor); Soon Chung, Ph.D. (Committee Member); Yong Pei, Ph.D. (Committee Member); Larry Lawhorne, M.D. (Committee Member)

Subjects:

Artificial Intelligence; Computer Engineering; Computer Science

Keywords:

Virtual Doctor; Human-Computer

Pathak, Amit. Forecasting Models to Predict EQ-5D Model Indicators for Population Health Improvement
Master of Science (MS), Ohio University, 2016, Industrial and Systems Engineering (Engineering and Technology)
The healthcare sector faces major issues that need to be addressed in a number of nations, including the United States. Problems within and affecting the healthcare arena are complex, as they are interdependent on several factors. To cope with this situation and find solutions, the best possible predictions, backed by data, are required for effective decision making. Even when predictions are made, extreme caution is needed before making claims for policy action. The EuroQol five Dimension (EQ-5D) questionnaire developed by the EuroQol group is one of the most widely used tools for assessing the generic health status of a population using 5 dimensions, namely mobility, self-care, usual activities, pain/discomfort and anxiety/depression. This thesis develops a methodology to create forecasting models that predict these EQ-5D model indicators using 65 chosen indicators, capable of describing population health, from the World Bank, World Health Organization and United Nations Development Programme databases. The thesis provides the capability to gauge well-being at the individual level of a population by maneuvering macroscopic factors. The analysis involves data from 12 countries, namely Argentina, Belgium, Denmark, Finland, France, Germany, Italy, Netherlands, Slovenia, Spain and United States, for both sexes with ages ranging from 18 to 75+. The models are created using Artificial Neural Networks (ANN) and are contrasted with statistical models. It is observed that the ANN model with all 65 indicators performed the best, and the 75+ age group was found to be the most correlated with the EQ-5D dimensions. Conclusively, the research also identifies the countries and indicators that need the most attention to improve the corresponding EQ-5D parameter. This thesis aims at fostering better policy making to increase the well-being of populations by understanding the impact of the predominant factors affecting population health.
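
As a sketch of the modeling setup described above (macro-level indicators as inputs, the five EQ-5D dimensions as multi-output targets, an artificial neural network contrasted with statistical baselines), the snippet below fits a small multi-output MLP on placeholder data. The data, network size, and preprocessing are assumptions and bear no relation to the thesis's actual indicators or results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for 65 World Bank / WHO / UNDP indicators
# (features) and the five EQ-5D dimension scores (targets).
rng = np.random.default_rng(0)
X = rng.random((300, 65))
Y = rng.random((300, 5))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
ann.fit(scaler.transform(X_tr), Y_tr)
print("R^2 on held-out data:", ann.score(scaler.transform(X_te), Y_te))
```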

Committee:

Gary Weckman (Advisor); Diana Schwerha (Committee Member); Tao Yuan (Committee Member); Andy Snow (Committee Member)

Subjects:

Aging; Artificial Intelligence; Behavioral Psychology; Behavioral Sciences; Cognitive Psychology; Demographics; Demography; Developmental Psychology; Economics; Educational Tests and Measurements; Evolution and Development; Finance; Gender Studies; Health; Health Care; Health Care Management; Health Sciences; Higher Education; Industrial Engineering; Information Science; Information Systems; Information Technology; Literacy; Mental Health; Public Health; Public Policy; Sanitation; Social Psychology; Social Research; Statistics; Sustainability

Keywords:

Population Health; EQ-5D; Forecasting; Policy; Artificial Neural Networks; Statistics; Healthcare; Visualize; Data; Decision; World Bank; World Health Organization; WHO; United Nations Development Programme; UNDP; Indicator; Regression; Exploratory; Principal Components

Adams, William A. Analysis of Robustness in Lane Detection using Machine Learning Models
Master of Science (MS), Ohio University, 2015, Electrical Engineering (Engineering and Technology)
An appropriate approach to incorporating robustness into lane detection algorithms is beneficial to autonomous vehicle applications and other problems relying on fusion methods. While traditionally rigorous empirical methods were developed for mitigating lane detection error, an evidence-based model-driven approach yields robust results using multispectral video as input to various machine learning models. Branching beyond the few network structures considered for image understanding applications, deep networks with unique optimization functions are demonstrably more robust while making fewer assumptions. This work adopts a simple framework for data collection; retrieving image patches for comparison via regression through a learning model. Along a horizontal scanline, the most probable sample is selected to retrain the network. Models include simple regressors, various autoencoders, and a few specialized deep networks. Samples are compared by robustness and the results favor deep and highly specialized network structures.

Committee:

Mehmet Celenk (Advisor); Jeffrey Dill (Committee Member); Maarten Uijt de Haag (Committee Member); Rida Benhaddou (Committee Member)

Subjects:

Artificial Intelligence; Automotive Engineering; Computer Science; Engineering

Keywords:

Machine Learning; ADAS; Lane Detection; Autoencoder; Regressor; Deep Network; Deep Learning

Lakshminarayanan, Srinivasan. Nature Inspired Discrete Integer Cuckoo Search Algorithm for Optimal Planned Generator Maintenance Scheduling
Master of Science, University of Toledo, 2015, Engineering (Computer Science)
In this thesis, a Discrete Integer Cuckoo Search optimization algorithm (DICS) is proposed for generating an optimal maintenance schedule for a power utility with multiple generator units and complex constraints of manpower availability, load demand, and a strict maintenance window. The objective is to maximize the levelness of the reserve power over the entire planning period while satisfying the multiple constraints. This is an NP-hard problem and there is no unique solution available for it. The nature-inspired cuckoo search algorithm has been chosen to address this problem. Cuckoo search is a metaheuristic algorithm based on the obligate brood parasitism of cuckoo bird species, in which the cuckoo seeks the best nest of other birds, whose eggs resemble her own, in which to lay her eggs so that they are hatched by the host birds. The problem is therefore formulated as finding the best host nest, with the host nest defined according to the constraints of the power utility. The algorithm was tested on two test systems, one with 21 generator units and the other with 9 generator units, the latter known as the IEEE RTS test system. The results obtained with DICS on the 21-generator power utility system are compared with the work of previous researchers using the same test system and with five traditional algorithms, namely the Genetic Algorithm with Binary Representation (GABR), Genetic Algorithm with Integer Representation (GAIR), Discrete Particle Swarm Optimization (DPSO), Modified Discrete Particle Swarm Optimization (MDPSO) and Hybrid Scatter Genetic Algorithm (HSGA). The results obtained by applying DICS to the IEEE RTS test system are compared with the HSGA algorithm. The results show that DICS outperformed all the other algorithms on the two test systems.
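
For readers unfamiliar with cuckoo search, the sketch below is the standard continuous version with Levy-flight steps and nest abandonment, applied to a toy quadratic objective. The thesis's DICS maps these moves onto integer maintenance schedules under the stated constraints, which is not reproduced here; the step size, abandonment fraction, and toy objective are assumptions.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(size, rng, beta=1.5):
    """Mantegna's method for Levy-distributed step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(cost, dim, n_nests=15, iters=300, pa=0.25, seed=0):
    """Standard continuous cuckoo search: Levy flights around the best nest
    plus abandonment of a fraction pa of nests each generation."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fit = np.array([cost(n) for n in nests])
    for _ in range(iters):
        best = nests[fit.argmin()]
        new = nests + 0.01 * levy_step((n_nests, dim), rng) * (nests - best)
        new_fit = np.array([cost(n) for n in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        abandon = rng.random(n_nests) < pa
        if abandon.any():
            nests[abandon] = rng.uniform(-5, 5, (int(abandon.sum()), dim))
            fit[abandon] = [cost(n) for n in nests[abandon]]
    return nests[fit.argmin()], fit.min()

best, best_cost = cuckoo_search(lambda x: np.sum(x ** 2), dim=10)
```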

Committee:

Devinder Kaur (Committee Chair); Mansoor Alam (Committee Member); Srinivasa Vemuru (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Engineering

Keywords:

Cuckoo Search; Swarm Intelligence; Generator Maintenance Scheduling; Combinatorial Optimization Problem; Bio-Inspired algorithm; Reliability in Power System Planning; Power System Planning; Nature-Inspired Computing

John, Zubin R. Predicting Day-Zero Review Ratings: A Social Web Mining Approach
Master of Science, The Ohio State University, 2015, Computer Science and Engineering
Social web mining is a term closely associated with modern-day use of the Internet; with large Internet companies such as Google, Apple, and IBM moving towards the integration of intelligence into their product ecosystems, a large number of different applications have popped up in the social sphere. With the aid of machine learning techniques there is no dearth of learning that is possible from endless streams of user-generated content. One of the tasks in this domain that has seen relatively little research is the task of predicting review scores prospectively, i.e. prior to the release of the entity - a movie, electronic product, game or book - in question. It is easy to locate this chatter on social streams such as Twitter; what is difficult is extracting relevant information and facts about these entities and, even more so, predicting the Day-Zero review rating scores which provide insightful information about these products prior to their release. In this thesis, we propose just such a framework - a setup capable of extracting facts about reviewable entities. Populating a list of potential objects for a year, we follow an approach similar to bootstrapping in order to learn relevant facts about these prospective entities, all geared towards the task of learning to predict scores in a machine learning setting. Towards the end goal of predicting review scores for potential products, our system supports alternative strategies which perform competitively on the task. All the predictions from the learning framework, within a certain allowable error margin, output scores comparable to human judgment. The results bode well for potential large-scale predictive tasks on real-time data streams; in addition, this framework proposes alternative feature spaces which in aggregation describe a multi-method approach to achieving higher accuracy on tasks which have previously seen lackluster results.

Committee:

Alan Ritter (Advisor); Eric Fosler-Lussier (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

twitter, social web mining, information extraction, applied machine learning

Ghosal, Subhabrata. Object Oriented Simulation of Agent Based Flexible Manufacturing System with Dynamic Routing and Scheduling
Master of Science (MS), Ohio University, 2015, Industrial and Systems Engineering (Engineering and Technology)
Flexible Manufacturing Systems (FMS) have long been a proposed solution towards effective automated manufacturing with improved cost, time, efficiency and resource flexibility. The two major factors in the development of an efficient FMS are machine flexibility (i.e. the system's latitude towards changed product types and orders of operations) and routing flexibility (i.e. the ability to use different machines or groups of machines to perform similar operations on a manufactured part). These two factors can be addressed by selecting an optimized schedule of manufactured products. In this research, the problem of selecting an optimized schedule and process plan is addressed through machine-activity-based dynamic process plan generation using a priority-based process planning network and simulation of the selected process plans to assess their performance. A market-oriented, distributed, agent-based prototype architecture of an FMS has been developed using the Java Agent Development Environment (JADE), a Java-based middleware for multi-agent systems, capable of executing manufacturing tasks disintegrated into modular sub-tasks through intelligent agents. This agent-based model communicates with the simulation module through object-oriented data transfer. Simulation is conducted on the various process plans generated from the process planning network for multiple parts produced by the process planning network agent through a Java interface. The simulation model of the FMS has been implemented in AnyLogic. The process simulation agent executes simulation using process plan data to record performance statistics and measure the effectiveness of the process plans. The performance of this system is compared with results obtained from discrete event simulation of process plans generated through feature-based heuristics on a monolithic FMS. A considerable improvement in performance is observed.

Committee:

Dusan Sormaz, Dr. (Advisor)

Subjects:

Artificial Intelligence; Industrial Engineering

Keywords:

Agent-based programming; Object-oriented simulation; AnyLogic; JADE; Flexible Manufacturing System; Process Planning;

Middleton, Victor Eaton. Imperfect Situation Analysis: Representing the Role of Error and Uncertainty in Modeling, Simulation and Analysis
Doctor of Philosophy (PhD), Wright State University, 2014, Engineering PhD
Much of traditional modeling, simulation and analysis (MS&A) is supported by engineering models - deterministic, Newtonian physics-based representations of closed systems. Such approaches are not well-suited to represent the intricacies of human behavior. This research advocates and seeks to articulate the concept of a more human-centric approach to MS&A, one that better represents decision-making and other cognitive aspects of human behavior as well as it does physical activity. It starts with a view of individuals and groups as complex adaptive systems, which are best represented using agent-based modeling. Representation of human behavior through intelligent agents incorporates models of decision-making, knowledge engineering and knowledge representation, as well as the whole gamut of the psychological and physiological interactions of humans with each other and their environment. This representation is exemplified by consideration of situation awareness/situation understanding (SA/SU) as a core element. This leads to the development of a proof-of-concept simulation of a specific, easily understood, and quantifiable example of human behavior: intelligent agents being spatially "lost" while trying to navigate in a simulation world. This model is named MOdeling Being Intelligent and Lost (MOBIL), noting the ability to be in both of these states is central to the simulation. MOBIL uses a blend of object oriented software principles with agent based modeling to establish the utility of applying the human-centric approach to analysis. Applying that simulation in a number of virtual experiments illustrates how it supports investigation into an individual's SA/SU and associated decision-making processes.

Committee:

Frank Ciarallo, Ph.D. (Advisor); Raymond Hill, Ph.D. (Committee Member); Yan Liu, Ph.D. (Committee Member); Mateen Rizki, Ph.D. (Committee Member); Mary E. Fendley, Ph.D. (Committee Member); David Hudak, Ph.D. (Committee Member)

Subjects:

Armed Forces; Artificial Intelligence; Cognitive Psychology; Computer Science; Engineering; Industrial Engineering; Information Systems

Keywords:

Situation awareness; situation understanding; error; modeling; simulation; arc node networks; agent based modeling; intelligent agents, human-centric analysis; dismounted combatants; war games

Plis, Kevin A. The Effects of Novel Feature Vectors on Metagenomic Classification
Master of Science (MS), Ohio University, 2014, Computer Science (Engineering and Technology)
Metagenomics plays a crucial role in our understanding of the world around us. Machine learning and bioinformatics methods have struggled to accurately identify the organisms present in metagenomic samples. By using improved feature vectors, higher classification accuracy can be found when using the machine learning classification approach to identify the organisms present in a metagenomic sample. This research is a pilot study that explores novel feature vectors and their effect on metagenomic classification. A synthetic data set was created using the genomes of 32 organisms from the Archaea and Bacteria domains, with 450 fragments of varying length per organism used to train the classification models. By using a novel feature vector one tenth of the size of the currently used feature vectors, a 6.34%, 21.91%, and 15.07% improvement was found over the species level accuracy on 100, 300, and 500 bp fragments, respectively, for this data set. The results of this study also show that using more features does not always translate to a higher classification accuracy, and that higher classification accuracy can be achieved through feature selection.
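
The study compares feature vectors for fragment classification; a common baseline in this space is a k-mer frequency vector, sketched below. This is offered only as background on what such feature vectors look like, not as the thesis's novel, much smaller representation, and the example fragment and k value are arbitrary.

```python
from collections import Counter
from itertools import product

def kmer_features(sequence, k=4):
    """Frequency vector over all 4^k DNA k-mers, a common feature
    representation for classifying metagenomic fragments."""
    alphabet = "ACGT"
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    total = max(sum(counts[m] for m in kmers), 1)
    return [counts[m] / total for m in kmers]

fragment = "ACGTACGTTGCAACGTTAGCCGTA"
vec = kmer_features(fragment, k=2)   # 16-dimensional frequency vector
print(len(vec), round(sum(vec), 3))
```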

Committee:

Lonnie Welch, PhD (Advisor)

Subjects:

Artificial Intelligence; Bioinformatics; Computer Science

Keywords:

Metagenomics; Classification; Machine Learning; SVM; Support Vector Machine; Feature Vector; Feature Selection; Bioinformatics
