Search Results (1 - 25 of 103 Results)

Imbulgoda Liyangahawatte, Gihan Janith Mendis. Hardware Implementation and Applications of Deep Belief Networks
Master of Science in Engineering, University of Akron, 2016, Electrical Engineering
Deep learning is a subset of machine learning that contributes widely to the contemporary success of artificial intelligence. The essential idea of deep learning is to process complex data by abstracting hierarchical features via deep neural network structures. As one type of deep learning technique, the deep belief network (DBN) has been widely used in various application fields. This thesis proposes an approximation-based hardware realization of DBNs that requires low hardware complexity. It also explores a set of novel applications of the DBN-based classifier that benefit from a fast implementation of DBN. In my work, I have explored the application of DBNs to automatic modulation classification for cognitive radio, Doppler radar sensing for detection and classification of micro unmanned aerial systems, cyber security applications that detect false data injection (FDI) attacks and localize flooding attacks, and prediction of link properties in social networks. The work in this thesis paves the way for further investigation and realization of deep learning techniques to address critical issues in various novel application fields.
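
The keywords point to a multiplierless digital architecture. As a rough illustration of that general idea only (not the thesis's actual design; all sizes and data below are invented), quantizing a trained DBN layer's weights to signed powers of two lets every multiply become a bit shift in hardware:

```python
import numpy as np

def quantize_pow2(w):
    """Round each weight to the nearest signed power of two: multiplying
    by 2**k is a bit shift, so a layer with such weights needs no
    hardware multipliers. This is a generic approximation, not the
    thesis's exact scheme."""
    sign = np.sign(w)
    mag = np.clip(np.abs(w), 1e-8, None)
    return sign * np.exp2(np.round(np.log2(mag)))

def rbm_up(v, W, b):
    """Upward pass of one DBN/RBM layer: p(h = 1 | v)."""
    return 1.0 / (1.0 + np.exp(-(v @ W + b)))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(784, 256))           # toy "trained" weights
v = rng.integers(0, 2, size=(1, 784)).astype(float)  # one binary input
drift = np.abs(rbm_up(v, W, 0.0) - rbm_up(v, quantize_pow2(W), 0.0)).max()
print(f"max activation drift after quantization: {drift:.4f}")
```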

Committee:

Jin Wei (Advisor); Arjuna Madanayaka (Committee Co-Chair); Subramaniya Hariharan (Committee Member)

Subjects:

Artificial Intelligence; Computer Engineering; Electrical Engineering; Engineering; Experiments; Information Technology

Keywords:

deep belief networks; multiplierless digital architecture; Xilinx FPGA implementations; low-complexity; applications of deep belief networks; spectral correlation function; modulation classification; drone detection; doppler radar; cyber security

Howard, Shaun Michael. Deep Learning for Sensor Fusion
Master of Sciences (Engineering), Case Western Reserve University, 2017, EECS - Computer and Information Sciences
The use of multiple sensors in modern-day vehicular applications is necessary to provide a complete outlook of surroundings for advanced driver assistance systems (ADAS) and automated driving. The fusion of these sensors provides increased certainty in the recognition, localization and prediction of surroundings. A deep learning-based sensor fusion system is proposed to fuse two independent, multi-modal sensor sources. This system is shown to successfully learn the complex capabilities of an existing state-of-the-art sensor fusion system and generalize well to new sensor fusion datasets. It has high precision and recall with minimal confusion after training on several million examples of labeled multi-modal sensor data. It is robust, has a sustainable training time, and has real-time response capabilities on a deep learning PC with a single NVIDIA GeForce GTX 980Ti graphics processing unit (GPU).
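
The abstract does not spell out the network, so the following is only a guessed, minimal illustration of multi-stream fusion by concatenation: each sensor stream (camera, radar) gets its own encoder and a shared head scores the fused representation. All dimensions and weights are hypothetical placeholders:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def two_stream_fusion_forward(x_cam, x_radar, params):
    """Forward pass of a toy multi-stream fusion net: each sensor
    stream has its own encoder; the encodings are concatenated and
    passed to a shared head that scores the fused hypothesis."""
    h_cam = relu(x_cam @ params["W_cam"])
    h_rad = relu(x_radar @ params["W_rad"])
    fused = np.concatenate([h_cam, h_rad], axis=-1)
    return fused @ params["W_head"]

rng = np.random.default_rng(1)
params = {
    "W_cam": rng.normal(scale=0.1, size=(64, 32)),   # camera features in
    "W_rad": rng.normal(scale=0.1, size=(16, 32)),   # radar features in
    "W_head": rng.normal(scale=0.1, size=(64, 2)),   # match / no-match out
}
print(two_stream_fusion_forward(rng.normal(size=(1, 64)),
                                rng.normal(size=(1, 16)), params))
```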

Committee:

Wyatt Newman, Dr (Committee Chair); M. Cenk Cavusoglu, Dr (Committee Member); Michael Lewicki, Dr (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

deep learning; sensor fusion; deep neural networks; advanced driver assistance systems; automated driving; multi-stream neural networks; feedforward; multilayer perceptron; recurrent; gated recurrent unit; long-short term memory; camera; radar;

Angeline, Peter John. Evolutionary algorithms and emergent intelligence
Doctor of Philosophy, The Ohio State University, 1993, Graduate School

Committee:

Not Provided (Other)

Subjects:

Artificial Intelligence

Tanner, Michael Clay. Explaining knowledge systems: justifying diagnostic conclusions
Doctor of Philosophy, The Ohio State University, 1989, Graduate School

Committee:

Not Provided (Other)

Subjects:

Artificial Intelligence

Keuneke, Anne Marie. Machine understanding of devices: causal explanation of diagnostic conclusions
Doctor of Philosophy, The Ohio State University, 1989, Graduate School

Committee:

Not Provided (Other)

Subjects:

Artificial Intelligence

Lipkin, Ilya. Testing Software Development Project Productivity Model
Doctor of Philosophy in Manufacturing and Technology Management, University of Toledo, 2011, Manufacturing and Technology Management

Software development is an increasingly influential factor in today’s business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted.

No accurate model or measure is available to guide an organization's software development estimates, and existing estimation models often underestimate software development effort by as much as 500 to 600 percent. To address this, existing models are usually calibrated using local data with small sample sizes, but the resulting estimates do not offer improved cost analysis.

This study presents a conceptual model for accurately estimating software development, grounded in an extensive literature review and a theoretical analysis rooted in Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD.

Practical implications of this study allow practitioners to concentrate on the specific constructs of interest that provide the best value for the least amount of time. This study outlines key contributing constructs that are unique to Software Size E-SLOC, Man-hours Spent, and Quality of the Product, these being the constructs with the largest contribution to project productivity. This study also discusses customer characteristics and provides a framework for simplified project analysis for source selection evaluation and audit task reviews for customers and suppliers.

Theoretical contributions of this study include an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains, such as IT, Command and Control, and Simulation. This research validates findings from previous work concerning software project productivity and leverages those results. The hypothesized project productivity model provides statistical support and validation of expert opinions used by practitioners in the field of software project estimation.

Committee:

Jeen Su Lim (Committee Chair); James Pope (Committee Member); Michael Mallin (Committee Member); Michael Jakobson (Committee Member); Wilson Rosa (Advisor)

Subjects:

Aerospace Engineering; Armed Forces; Artificial Intelligence; Business Administration; Business Costs; Computer Engineering; Computer Science; Economic Theory; Economics; Electrical Engineering; Engineering; Industrial Engineering; Information Science; Information Systems; Information Technology; Management; Marketing; Mathematics

Keywords:

"Software Estimation"; "Software Cost Model"; "Department of Defense Data"; COCOMO; "Software Project Productivity Model"

Al-Olimat, Hussein S. Optimizing Cloudlet Scheduling and Wireless Sensor Localization using Computational Intelligence Techniques
Master of Science, University of Toledo, 2014, Engineering (Computer Science)
Optimization algorithms are complex procedures that consider many elements when optimizing a specific problem. Cloud computing (CCom) and wireless sensor networks (WSNs) are full of optimization problems that need to be solved. One of the main problems in using the cloud is the underutilization of reserved resources, which causes longer makespans and higher usage costs. The optimization of sensor nodes' power consumption in WSNs is also critical, because sensor nodes are small in size and have constrained resources in terms of power/energy, connectivity, and computational power. This thesis addresses how CCom systems and WSNs can take advantage of computational intelligence techniques using single- or multi-objective particle swarm optimization (SOPSO or MOPSO), with the overall aim of concurrently minimizing makespans, localization time, and energy consumption during localization, while maximizing the number of fully localized nodes. The cloudlet scheduling method is implemented inside CloudSim, advancing the work of the broker; it maximizes resource utilization and minimizes makespan, demonstrating improvements of 58% in some cases. Additionally, the localization method optimizes power consumption during a trilateration-based localization (TBL) procedure through the adjustment of sensor nodes' output power levels. Finally, a parameter study of the applied PSO variants for WSN localization is performed, with results up to 32% better than the baseline in the evaluated objectives.
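
As a hedged illustration of the core mechanism (single-objective PSO minimizing a makespan), the toy below assigns 20 hypothetical cloudlets to 4 VMs. It is not the thesis's CloudSim-integrated implementation, and all constants are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
lengths = rng.uniform(5.0, 50.0, size=20)   # hypothetical cloudlet lengths
N_VMS, N_PARTICLES, ITERS = 4, 30, 200

def decode(p):
    """Map a continuous particle position to a cloudlet -> VM assignment."""
    return p.astype(int) % N_VMS

def makespan(assign):
    """Finish time of the busiest VM under a given assignment."""
    return max(lengths[assign == vm].sum() for vm in range(N_VMS))

pos = rng.uniform(0, N_VMS, size=(N_PARTICLES, lengths.size))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([makespan(decode(p)) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, N_VMS - 1e-9)
    cost = np.array([makespan(decode(p)) for p in pos])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], cost[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best makespan found:", round(pbest_cost.min(), 2))
```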

Committee:

Mansoor Alam (Committee Chair); Robert Green, II (Committee Co-Chair); Weiqing Sun (Committee Member); Vijay Devabhaktuni (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Engineering

Keywords:

Cloud Computing; Particle Swarm Optimization; Random Inertia Weight; Cloudlet Scheduling; Makespan; Utilization; CloudSim; Wireless Sensor Network; Trilateration; Localization; Multi-objective; ZigBee; RSSI; Genetic Algorithm; Simulated Annealing

Shao, Yuanlong. Learning Sparse Recurrent Neural Networks in Language Modeling
Master of Science, The Ohio State University, 2014, Computer Science and Engineering
In the context of statistical language modeling, we explored the task of learning an Elman network with sparse weight matrices, as a pilot study towards learning a sparsely connected fully recurrent neural network, which would be potentially useful in many cases. We also explored how efficient and scalable this can be in practice. In particular: (1) We adapted the Iterative Hard Thresholding (IHT) algorithm into BackPropagation Through Time (BPTT) learning. (2) To accelerate convergence of the IHT algorithm, we designed a scheme for expanding the network by replicating the existing hidden neurons, so that training can start from a small, dense, already-learned network. (3) We implemented this algorithm on the GPU. Under small minibatch sizes and large network sizes (e.g., 2000 hidden neurons) it achieves a 160-times speedup compared to the RNNLM toolkit on the CPU. With larger minibatch sizes there could be another 10-times speedup, though the convergence rate becomes an issue in such cases and further effort is needed to address this problem. (4) Lacking a theoretical convergence guarantee for the IHT algorithm in our problem setting, we conducted an empirical study showing that learning a sparse network does give competitive perplexity in language modeling. In particular, we showed that a sparse network learned in this way can outperform a dense network when the number of effective parameters is kept the same. (5) We gathered performance metrics comparing the computational efficiency of the matrix operations of interest in both sparse and dense settings. The results suggest that for network sizes we can train in reasonable time at this moment, it is hard for sparse matrices to run faster unless the networks are very sparse. Thus for research purposes we may want to focus on dense matrices, while for engineering purposes a more flexible matrix design leveraging both dense and sparse matrices might be necessary.
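
A minimal sketch of item (1), the iterative-hard-thresholding projection, follows. It uses a plain quadratic loss as a stand-in for the BPTT gradient, since the full Elman-network training loop is out of scope here; `k` and all sizes are arbitrary:

```python
import numpy as np

def hard_threshold(W, k):
    """IHT projection: keep the k largest-magnitude entries of W and
    zero the rest (ties may keep a few extra)."""
    if k >= W.size:
        return W
    cutoff = np.partition(np.abs(W).ravel(), -k)[-k]
    return np.where(np.abs(W) >= cutoff, W, 0.0)

# toy gradient loop; the thesis applies the same projection after each
# BPTT update of the recurrent weight matrix, which is not shown here
rng = np.random.default_rng(2)
W = rng.normal(size=(50, 50))
target = rng.normal(size=(50, 50))          # stand-in "ideal" weights
for _ in range(100):
    grad = W - target                       # d/dW of 0.5 * ||W - target||^2
    W = hard_threshold(W - 0.1 * grad, k=250)
print("nonzero weights:", np.count_nonzero(W), "of", W.size)
```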

Committee:

Eric Fosler-Lussier, Dr. (Advisor); Mikhail Belkin, Dr. (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

language modeling; recurrent neural networks; sparse recurrent neural networks

Baumgartner, Dustin. Global-Local Hybrid Classification Ensembles: Robust Performance with a Reduced Complexity
Master of Science in Engineering, University of Toledo, 2009, Computer Science

The current trend in machine learning ensemble classifier research is to improve performance, at times marginally, beyond what existing methods can deliver. This tendency has consequently complicated ensemble designs to a level that is possibly not justified for many domains. This thesis proposes a new design for classification ensembles, Global-Local Hybrid Ensemble (GLHE), which offers robust performance with a less complex design than comparably performing ensembles. GLHE exploits two sources of diversity in its base-classifiers, heterogeneous (hybrid) and homogeneous. Heterogeneity is achieved with two learning algorithms – one global and one local – that are assumed to have an intrinsic difference in learning to ensure high levels of diversity. Homogeneity is implemented through the use of multiple parameterizations of the same learning algorithm to allow both global and local learners to explore their respective region of the hypothesis space while also creating additional, albeit small, diversity among the base-classifiers.

A comprehensive simulation study is conducted to profile the performance capabilities of the proposed design, considering three types of classification performance measures, three types of diversity measures, and training/testing execution time as features of analysis. GLHE is implemented with decision tree (global) and nearest-neighbor (local) learners, and its performance on 46 benchmark datasets is compared to more than 70 ensembles from the literature and in-house simulations. Specific hypotheses are tested and evaluated with nonparametric statistical significance calculations. First, it is shown that GLHE performs comparably to hybrid ensembles with more learning algorithms (more complexity) and better than data manipulation ensembles. Second, the importance of the co-presence of global-local learners and heterogeneous/homogeneous diversity in the GLHE design is validated, along with our assumption that the global and local learners produce high levels of diversity. Finally, we create another implementation of GLHE with neural networks, which shows that the design is generic and allows for trade-offs between performance robustness and execution speed. Another experiment compares the performance of GLHE against those achieved by contestants in a data mining competition. Although the contestants likely fine-tuned their algorithms to optimize performance, the standard GLHE implementation still scores no worse than half of them.

The results of the simulation study indicate that GLHE is indeed robust, even in comparison to more complex ensembles. Major contributions of this work are 1) global and local learners can effectively create high levels of diversity, 2) the GLHE design may offer a compromise between the robustness of traditional hybrid ensembles and the simplicity of data manipulation ensembles – an area not satisfied by other ensemble designs, and 3) the GLHE design is a suitable technique for applying to new problems if robust performance is needed but users do not have resources for complex designs or in-depth empirical analysis.
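
A minimal sketch of the GLHE recipe as the abstract states it (decision tree as the global learner, nearest-neighbor as the local one, several parameterizations of each, majority vote) might look like the following; the dataset and parameter grids are placeholders, not those of the thesis:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# heterogeneous diversity: one global learner (tree) and one local (kNN);
# homogeneous diversity: several parameterizations of each algorithm
members = (
    [(f"tree{d}", DecisionTreeClassifier(max_depth=d, random_state=0))
     for d in (3, 5, None)]
    + [(f"knn{k}", KNeighborsClassifier(n_neighbors=k)) for k in (1, 5, 15)]
)
glhe = VotingClassifier(members, voting="hard").fit(X_tr, y_tr)
print("held-out accuracy:", glhe.score(X_te, y_te))
```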

Committee:

Gursel Serpen (Advisor); Henry Ledgard (Committee Member); Han Yu (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

ensemble; robust classification; global-local learning; heterogeneous-homogeneous diversity

Weinkam, Matthew J. Very Normal Things
Master of Arts, Miami University, 2011, English
This thesis is a collection of very normal things called stories. Stories concerning the usual trials, tribulations, triumphs, and daily goings-on of your basic, run-of-the-mill characters. It’s about what you’d expect. Plots, settings, conflicts, narratives made up of sentences and paragraphs and what have you. There are a lot of words in here. Meant to communicate some such thing or other. Nothing out of the ordinary. Standard stuff really. There’s nothing to see here.

Committee:

Margaret Luongo (Committee Chair); Joseph Bates (Committee Member); Tim Melley (Committee Member)

Subjects:

Artificial Intelligence; Cartography; Dance; Morphology; Virology; Wood

Keywords:

slacks; death; stale Apple Jacks; questions; robins; more death; regret

Marvel, Jeremy Alan. Autonomous Learning for Robotic Assembly Applications
Doctor of Philosophy, Case Western Reserve University, 2010, EECS - Computer Engineering
Robotic manipulators have been used to perform a myriad of repetitive industrial tasks with varying degrees of success and precision over the past several decades. Their use in mechanical assembly tasks, however, has been relatively minor, due both to their limitations in classical position-control paradigms and to the difficulty of algorithmically describing the process of assembly. Though the technology for sensing and compliantly adapting to physical contact has improved, robotic assembly solutions are still largely relegated to simple responsibilities such as peg-in-hole and rigidly fixtured configurations. This dissertation represents the progressive development and assessment of self-guided learning for model-assisted robotic assembly applications. Utilizing industrial manipulators outfitted with six-degree-of-freedom (DoF) force/torque sensors for compliant motion control, a method for self-optimization of assembly search parameters is developed that allows the robot to determine when its performance has improved using simple metrics of success. Based on prior experiences, the robot then generates internal representations, or models, of the assembly process in order to predict when certain parameter sequences are likely to result in superior assembly performance. This method is further augmented to algorithmically determine the quality and anticipated effectiveness of the models based on their profiles in the parameter-performance mapping space. Analysis of simulations with arbitrarily large N-dimensional parameter spaces suggests that even relatively simple models are capable of abstracting useful information about assemblies, even in the presence of noise. These results were then corroborated by physical trials, with and without assistive models, on a variety of automobile part assemblies.

Committee:

Wyatt Newman, PhD (Committee Chair); Francis Merat, PhD (Committee Member); Roger Quinn, PhD (Committee Member); Kiju Lee, PhD (Committee Member)

Subjects:

Artificial Intelligence; Engineering; Robots

Keywords:

Assembly; Machine Learning; Robot Learning

Young, William Albert. A Team-Compatibility Decision Support System to Model the NFL Knapsack Problem: An Introduction to HEART
Doctor of Philosophy (PhD), Ohio University, 2010, Industrial and Systems Engineering (Engineering and Technology)

Many tangible and intangible factors are considered when making a hiring decision in the National Football League (NFL). One difficult decision that executives must make is whom they will select in the NFL Draft or which NFL free agent they will sign in the offseason. Mathematical models can be developed to aid humans in their decision-making process because they are able to find non-obvious relationships within numeric data. HEART, or Heuristic Evaluation of Artificially Replaced Teammates, is a mathematical model that utilizes machine learning and statistical methodologies to aid managers with their hiring decisions. HEART is not intended to be a 'decision tool' that explicitly states whom a team should hire; such a tool would need to encompass not only the tangible information available to hiring managers but also intangible aspects that are difficult or impossible for a mathematical model to capture accurately. HEART is a 'decision support tool' that provides additional information for hiring managers to use in conjunction with other available resources.

The goal of HEART is to determine an Expected and Theoretical Contribution Value for a potential hiring candidate, which represents a player's ability to increase or decrease the estimated number of games won by a particular team in an upcoming season. This value is significant because it represents a player's level of compatibility with potential teammates and considers the effect that aging has on players' physiological ability to play football. HEART is also designed to allow direct comparisons of players from any playing position as well as players from either college or professional leagues.

From a quantitative standpoint, the results of the HEART methodology were statistically validated using both parametric and nonparametric testing procedures. This validation procedure analyzed results collected from a convenience sample of experts who participated in a survey instrument. The validation results show that the HEART methodology provided at least 'Useful', and at times 'Very Useful', results on a five-point Likert scale for a case study involving the 2007 NFL Draft class and free agent players.

Committee:

Gary Weckman, PhD (Advisor); Masel Dale, PhD (Committee Member); Kaya Savas, PhD (Committee Member); Snow Andrew, PhD (Committee Member); Genaidy Ashraf, PhD (Committee Member)

Subjects:

Artificial Intelligence; Engineering; Industrial Engineering; Management

Keywords:

Team Compatibility; Decision Support System; NFL Draft; NFL Free Agency; Performance Aging Curves; Combine; Theoretical and Expected Contribution Values; League-Wide, Team-Specific; Quantitative Analysis in Sports

Cui, Chen. Convolutional Polynomial Neural Network for Improved Face Recognition
Doctor of Philosophy (Ph.D.), University of Dayton, 2017, Electrical and Computer Engineering
Deep learning is the state-of-the-art technology in pattern recognition, especially in face recognition. The robustness of deep networks leads to better performance as the training set grows larger. The Convolutional Neural Network (CNN) is one of the most popular deep learning technologies in the modern world. It obtains various features from multiple filters in the convolutional layer and performs well in handwritten digit classification. Unlike the unique structure of each handwritten digit, face features are more complex, and many difficulties remain for face recognition in the current research field, such as variations in lighting conditions, poses, ages, etc. The limitation of the nonlinear feature fitting of the regular CNN therefore appears in the face recognition application. In order to create a better fitting curve for face features, we introduce a polynomial structure to the regular CNN to increase the non-linearity of the obtained features. The modified architecture is named the Convolutional Polynomial Neural Network (CPNN). CPNN creates a polynomial input for each convolutional layer and captures nonlinear features for better classification. We first validate the proposed concept on the MNIST handwritten database and compare the proposed CPNN with the regular CNN. Then, different parameters in CPNN are tested on the CMU AMP face recognition database. After that, the performance of the proposed CPNN is evaluated on three different face databases, CMU AMP, Yale and JAFFE, as well as images captured in a real-world environment. The proposed CPNN obtains the best recognition rates (CMU AMP: 99.95%, Yale: 90.89%, JAFFE: 98.33%, real world: 97.22%) when compared to other machine learning technologies. We plan to apply state-of-the-art structures, such as inception and residual, to the current CPNN to increase its depth and stability as future research work.
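
The abstract's core idea, feeding each convolutional layer a polynomial function of its input, can be sketched as below. This is an assumed reading of "polynomial input" (stacking elementwise powers as extra channels); the thesis's exact formulation may differ:

```python
import numpy as np

def polynomial_expand(x, degree=2):
    """Stack elementwise powers of the input as extra channels, so a
    following convolution sees [x, x**2, ..., x**degree] and can fit
    more nonlinear feature maps."""
    return np.concatenate([x ** d for d in range(1, degree + 1)], axis=0)

def conv2d_single(x, kernel):
    """Naive valid-mode 2-D convolution over stacked channels."""
    c, h, w = x.shape
    kh, kw = kernel.shape[1:]
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(4)
img = rng.random((1, 8, 8))                 # one-channel toy image
x_poly = polynomial_expand(img, degree=2)   # now 2 channels
kern = rng.normal(size=(2, 3, 3))           # one kernel over both channels
print(conv2d_single(x_poly, kern).shape)    # (6, 6)
```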

Committee:

Vijayan Asari (Advisor)

Subjects:

Artificial Intelligence; Bioinformatics; Computer Engineering; Electrical Engineering

Keywords:

Deep Learning; Convolutional Polynomial Neural Network; Face Recognition; Computer Vision; Image Processing

Pathak, Amit. Forecasting Models to Predict EQ-5D Model Indicators for Population Health Improvement
Master of Science (MS), Ohio University, 2016, Industrial and Systems Engineering (Engineering and Technology)
The healthcare sector faces major issues in a number of nations, including the United States. Problems within and affecting the healthcare arena are complex, as they are interdependent on several factors. To cope with this situation and find solutions, the best possible data-backed predictions are required for effective decision making; even then, extreme caution is needed before making policy claims. The EuroQol five-dimension (EQ-5D) questionnaire developed by the EuroQol group is one of the most widely used tools for assessing the generic health status of a population, using five dimensions: mobility, self-care, usual activities, pain/discomfort and anxiety/depression. This thesis develops a methodology for creating forecasting models that predict these EQ-5D indicators from 65 chosen indicators, capable of describing population health, drawn from the World Bank, World Health Organization and United Nations Development Programme databases. The thesis provides the capability to gauge well-being at the individual level of a population by maneuvering macroscopic factors. The analysis involves data from 12 countries, namely Argentina, Belgium, Denmark, Finland, France, Germany, Italy, Netherlands, Slovenia, Spain and the United States, for both sexes, with ages ranging from 18 to 75+. The models are created using Artificial Neural Networks (ANN) and are contrasted with statistical models. The ANN model with all 65 indicators performed best, and the 75+ age group was found to be the most correlated with the EQ-5D dimensions. The research also identifies the countries and indicators that need the most attention to improve the corresponding EQ-5D parameters. This thesis aims to foster better policy making for increasing the well-being of populations by understanding the impact of the predominant factors affecting population health.
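
As a hedged, self-contained illustration of the modeling setup (65 indicators in, one EQ-5D score out, via an ANN), the following uses synthetic data in place of the World Bank/WHO/UNDP datasets; the architecture is a placeholder, not the thesis's:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# toy stand-in: 65 macro indicators -> one EQ-5D dimension score
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 65))                 # country-year indicator rows
y = X[:, :5].mean(axis=1) + 0.1 * rng.normal(size=200)

model = make_pipeline(
    StandardScaler(),  # indicators are on wildly different scales
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
```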

Committee:

Gary Weckman (Advisor); Diana Schwerha (Committee Member); Tao Yuan (Committee Member); Andy Snow (Committee Member)

Subjects:

Aging; Artificial Intelligence; Behavioral Psychology; Behavioral Sciences; Behaviorial Sciences; Cognitive Psychology; Demographics; Demography; Developmental Psychology; Economics; Educational Tests and Measurements; Evolution and Development; Finance; Gender Studies; Health; Health Care; Health Care Management; Health Sciences; Higher Education; Industrial Engineering; Information Science; Information Systems; Information Technology; Literacy; Mental Health; Public Health; Public Policy; Sanitation; Social Psychology; Social Research; Statistics; Sustainability

Keywords:

Population Health; EQ-5D; Forecasting; Policy; Artificial Neural Networks; Statistics; Healthcare; Visualize; Data; Decision; World Bank; World Health Organization; WHO; United Nations Development Programme; UNDP; Indicator; Regression; Exploratory; Principal Components

Adams, William A. Analysis of Robustness in Lane Detection using Machine Learning Models
Master of Science (MS), Ohio University, 2015, Electrical Engineering (Engineering and Technology)
An appropriate approach to incorporating robustness into lane detection algorithms benefits autonomous vehicle applications and other problems relying on fusion methods. While rigorous empirical methods have traditionally been developed to mitigate lane detection error, an evidence-based, model-driven approach yields robust results using multispectral video as input to various machine learning models. Branching beyond the few network structures considered for image understanding applications, deep networks with unique optimization functions are demonstrably more robust while making fewer assumptions. This work adopts a simple framework for data collection, retrieving image patches for comparison via regression through a learning model. Along a horizontal scanline, the most probable sample is selected to retrain the network. Models include simple regressors, various autoencoders, and a few specialized deep networks. Samples are compared by robustness, and the results favor deep and highly specialized network structures.
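
One way to read the patch-selection step is sketched below: train an autoencoder on lane-like patches and, along a scanline, pick the candidate with the lowest reconstruction error. The data is synthetic and the model is a stand-in, not the thesis's networks:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# toy stand-in for lane-marking patches: train an autoencoder on
# "lane-like" patches, then pick the candidate along a scanline whose
# reconstruction error is lowest (most lane-like)
rng = np.random.default_rng(6)
lane_patches = rng.normal(loc=1.0, size=(500, 64))    # bright-stripe stats
ae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ae.fit(lane_patches, lane_patches)                    # reconstruct inputs

scanline = rng.normal(loc=0.0, size=(20, 64))         # mostly road texture
scanline[7] = rng.normal(loc=1.0, size=64)            # one lane-like patch
errors = ((ae.predict(scanline) - scanline) ** 2).mean(axis=1)
print("most probable lane patch:", errors.argmin())   # likely index 7
```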

Committee:

Mehmet Celenk (Advisor); Jeffrey Dill (Committee Member); Maarten Uijt de Haag (Committee Member); Rida Benhaddou (Committee Member)

Subjects:

Artificial Intelligence; Automotive Engineering; Computer Science; Engineering

Keywords:

Machine Learning; ADAS; Lane Detection; Autoencoder; Regressor; Deep Network; Deep Learning

Lakshminarayanan, Srinivasan. Nature Inspired Discrete Integer Cuckoo Search Algorithm for Optimal Planned Generator Maintenance Scheduling
Master of Science, University of Toledo, 2015, Engineering (Computer Science)
In this thesis, a Discrete Integer Cuckoo Search optimization algorithm (DICS) is proposed for generating an optimal maintenance schedule for a power utility with multiple generator units and complex constraints of manpower availability, load demand and a strict maintenance window. The objective is to maximize the levelness of the reserve power over the entire planning period while satisfying the multiple constraints. This is an NP-hard problem with no unique solution. The nature-inspired Cuckoo Search algorithm is a metaheuristic based on the obligate brood parasitism of cuckoo species, in which the cuckoo lays her eggs in the nests of other birds whose eggs resemble her own, leaving them to be hatched by the host; the scheduling problem is therefore formulated as finding the best host nest, defined according to the constraints of the power utility. The algorithm was tested on two systems: one with 21 generator units and the IEEE RTS test system with 9 generator units. The results obtained with DICS on the 21-unit power utility system are compared with the work of previous researchers using the same test system and with five traditional algorithms, namely the Genetic Algorithm with Binary Representation (GABR), Genetic Algorithm with Integer Representation (GAIR), Discrete Particle Swarm Optimization (DPSO), Modified Discrete Particle Swarm Optimization (MDPSO) and Hybrid Scatter Genetic Algorithm (HSGA). The results obtained by applying DICS to the IEEE RTS test system are compared with the HSGA algorithm. DICS outperformed all the other algorithms on both test systems.
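
For reference, a generic (continuous) cuckoo search looks like the sketch below; the thesis's contribution is a discrete integer variant tailored to the scheduling constraints, which is not reproduced here. The Levy-flight step uses Mantegna's algorithm, and the sphere function is a toy objective:

```python
import numpy as np
from math import gamma, pi, sin

def levy(shape, rng, beta=1.5):
    """Mantegna's algorithm for Levy-flight step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(scale=sigma, size=shape) / np.abs(rng.normal(size=shape)) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, iters=300, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, size=(n_nests, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(iters):
        best = nests[fit.argmin()]
        # new candidate eggs via a Levy flight biased toward the best nest
        new = nests + 0.01 * levy(nests.shape, rng) * (nests - best)
        new_fit = np.array([f(x) for x in new])
        keep = new_fit < fit
        nests[keep], fit[keep] = new[keep], new_fit[keep]
        # a fraction pa of nests is discovered and abandoned (rebuilt)
        lost = rng.random(n_nests) < pa
        if lost.any():
            nests[lost] = rng.uniform(-5, 5, size=(lost.sum(), dim))
            fit[lost] = np.array([f(x) for x in nests[lost]])
    return fit.min()

print(cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=5))  # -> near 0
```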

Committee:

Devinder Kaur (Committee Chair); Mansoor Alam (Committee Member); Srinivasa Vemuru (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Engineering

Keywords:

Cuckoo Search; Swarm Intelligence; Generator Maintenance Scheduling; Combinatorial Optimization Problem; Bio-Inspired algorithm; Reliability in Power System Planning; Power System Planning; Nature-Inspired Computing

John, Zubin R. Predicting Day-Zero Review Ratings: A Social Web Mining Approach
Master of Science, The Ohio State University, 2015, Computer Science and Engineering
Social web mining is a term closely associated with modern-day use of the Internet; with large Internet companies such as Google, Apple and IBM moving towards integrating intelligence into their product ecosystems, a large number of applications have appeared in the social sphere. With the aid of machine learning techniques there is no dearth of learning possible from endless streams of user-generated content. One task in this domain that has seen relatively little research is predicting review scores prospectively, i.e. prior to the release of the entity in question - a movie, electronic product, game or book. It is easy to locate this chatter on social streams such as Twitter; what is difficult is extracting relevant information and facts about these entities, and more difficult still is predicting the day-zero review rating scores that provide insightful information about these products prior to their release. In this thesis, we propose just such a framework - a setup capable of extracting facts about reviewable entities. Populating a list of potential objects for a year, we follow an approach similar to bootstrapping in order to learn relevant facts about these prospective entities, all geared towards learning to predict scores in a machine learning setting. Towards the end goal of predicting review scores for potential products, our system supports alternative strategies which perform competitively on the task. All predictions from the learning framework, within a certain allowable error margin, output scores comparable to human judgment. The results bode well for potential large-scale predictive tasks on real-time data streams; in addition, this framework proposes alternative feature spaces which, in aggregation, describe a multi-method approach to achieving higher accuracy on tasks which have previously seen lackluster results.

Committee:

Alan Ritter (Advisor); Eric Fosler-Lussier (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

twitter, social web mining, information extraction, applied machine learning

Ghosal, Subhabrata. Object Oriented Simulation of Agent Based Flexible Manufacturing System with Dynamic Routing and Scheduling
Master of Science (MS), Ohio University, 2015, Industrial and Systems Engineering (Engineering and Technology)
Flexible Manufacturing Systems (FMS) have long been proposed as a solution for effective automated manufacturing with improved cost, time, efficiency and resource flexibility. The two major factors in developing an efficient FMS are machine flexibility (i.e. the system's latitude toward changed product types and orders of operations) and routing flexibility (i.e. the ability to use different machines or groups of machines to perform similar operations on a manufactured part). These two factors can be addressed by selecting an optimized schedule of manufactured products. In this research, the problem of selecting an optimized schedule and process plan is addressed through machine-activity-based dynamic process plan generation using a priority-based process planning network, and through simulation of the selected process plans to assess their performance. A market-oriented, distributed, agent-based prototype architecture for FMS has been developed using the Java Agent Development Environment (JADE), a Java-based middleware for multi-agent systems, capable of executing manufacturing tasks disintegrated into modular sub-tasks through intelligent agents. This agent-based model communicates with the simulation module through object-oriented data transfer. Simulation is conducted on the various process plans generated by the process planning network agent for multiple parts through a Java interface. The simulation model of the FMS has been implemented in AnyLogic. The process simulation agent executes simulation using process plan data to record performance statistics and measure the effectiveness of the process plans. The performance of this system is compared with results obtained from discrete event simulation of process plans generated through feature-based heuristics on a monolithic FMS. A considerable improvement in performance is observed.

Committee:

Dusan Sormaz, Dr. (Advisor)

Subjects:

Artificial Intelligence; Industrial Engineering

Keywords:

Agent-based programming; Object-oriented simulation; AnyLogic; JADE; Flexible Manufacturing System; Process Planning;

Middleton, Victor Eaton. Imperfect Situation Analysis: Representing the Role of Error and Uncertainty in Modeling, Simulation and Analysis
Doctor of Philosophy (PhD), Wright State University, 2014, Engineering PhD
Much of traditional modeling, simulation and analysis (MS&A) is supported by engineering models - deterministic, Newtonian physics-based representations of closed systems. Such approaches are not well-suited to represent the intricacies of human behavior. This research advocates and seeks to articulate a more human-centric approach to MS&A, one that represents decision-making and other cognitive aspects of human behavior as well as it does physical activity. It starts with a view of individuals and groups as complex adaptive systems, which are best represented using agent-based modeling. Representation of human behavior through intelligent agents incorporates models of decision-making, knowledge engineering and knowledge representation, as well as the whole gamut of psychological and physiological interactions of humans with each other and their environment. This representation is exemplified by consideration of situation awareness/situation understanding (SA/SU) as a core element. This leads to the development of a proof-of-concept simulation of a specific, easily understood, and quantifiable example of human behavior: intelligent agents being spatially "lost" while trying to navigate in a simulation world. This model is named MOdeling Being Intelligent and Lost (MOBIL), noting that the ability to be in both of these states is central to the simulation. MOBIL uses a blend of object-oriented software principles and agent-based modeling to establish the utility of applying the human-centric approach to analysis. Applying the simulation in a number of virtual experiments illustrates how it supports investigation into an individual's SA/SU and associated decision-making processes.

Committee:

Frank Ciarallo, Ph.D. (Advisor); Raymond Hill, Ph.D. (Committee Member); Yan Liu, Ph.D. (Committee Member); Mateen Rizki, Ph.D. (Committee Member); Mary E. Fendley, Ph.D. (Committee Member); David Hudak, Ph.D. (Committee Member)

Subjects:

Armed Forces; Artificial Intelligence; Cognitive Psychology; Computer Science; Engineering; Industrial Engineering; Information Systems

Keywords:

Situation awareness; situation understanding; error; modeling; simulation; arc node networks; agent based modeling; intelligent agents, human-centric analysis; dismounted combatants; war games

Plis, Kevin A. The Effects of Novel Feature Vectors on Metagenomic Classification
Master of Science (MS), Ohio University, 2014, Computer Science (Engineering and Technology)
Metagenomics plays a crucial role in our understanding of the world around us. Machine learning and bioinformatics methods have struggled to accurately identify the organisms present in metagenomic samples. By using improved feature vectors, higher classification accuracy can be achieved when using the machine learning classification approach to identify the organisms present in a metagenomic sample. This research is a pilot study that explores novel feature vectors and their effect on metagenomic classification. A synthetic data set was created using the genomes of 32 organisms from the Archaea and Bacteria domains, with 450 fragments of varying length per organism used to train the classification models. By using a novel feature vector one tenth the size of the currently used feature vectors, improvements of 6.34%, 21.91%, and 15.07% in species-level accuracy were found on 100, 300, and 500 bp fragments, respectively, for this data set. The results of this study also show that using more features does not always translate to higher classification accuracy, and that higher classification accuracy can be achieved through feature selection.
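
For context, the conventional feature vector this line of work builds on is the k-mer composition profile (4^k frequencies per fragment); the thesis's smaller novel vectors are not specified in the abstract, so only the standard baseline is sketched here:

```python
from collections import Counter
from itertools import product

def kmer_features(seq, k=3):
    """Frequency vector over all 4**k possible k-mers: the standard
    composition-style feature vector for fragment classification."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts[km] for km in kmers), 1)  # avoid divide-by-zero
    return [counts[km] / total for km in kmers]

vec = kmer_features("ACGTACGTGGCCAT", k=3)
print(len(vec), sum(vec))  # 64 features, frequencies summing to ~1
```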

Committee:

Lonnie Welch, PhD (Advisor)

Subjects:

Artificial Intelligence; Bioinformatics; Computer Science

Keywords:

Metagenomics; Classification; Machine Learning; SVM; Support Vector Machine; Feature Vector; Feature Selection; Bioinformatics

Assaad, Firas Souhail. Biometric Multi-modal User Authentication System based on Ensemble Classifier
Master of Science, University of Toledo, 2014, Engineering
User authentication performed through the traditional method of authorizing based on username and password is the weakest link between users and their authentication credentials. Stolen authentication credentials in the form of user id and password pairs result in countless large-scale and detrimental security breaches across many segments of social and economic groups. Authentication based on biometric features offers a much stronger link between users and their credentials. In this project, we propose a multi-modal biometric authentication methodology to provide a more trusted authentication of the actual user. The system utilizes two biometric traits in its authorization procedure: face and voice recognition. During training, the face recognition subsystem detects the face in an image using the Viola-Jones algorithm and implements face preprocessing steps of eye detection and several geometric transformations to filter out unneeded details of the face. Using the Eigenfaces technique, the face recognition subsystem trains on multiple images of a user being authorized for access and stores the resulting user-specific templates in the authorized-users database. Similarly, during training, the voice recognition module acquires a voice sample from the user, extracts the voice features using the Mel-Frequency Cepstral Coefficients that represent the "voiceprint" of the user, and then models those features using the Linde-Buzo-Gray algorithm. After training, each of the two modules, namely face recognition and voice recognition, performs as an independent base classifier within an ensemble design. The outputs from these two modules are fused together using a score-level transformation to generate a final decision to either grant or deny access to the user in the process of authentication. The proposed algorithm has been trained and tested on the Yale Extended, NIST FERET, and ELSDSR databases. System performance for authentication was evaluated in real time using a distributed framework that employed cellular phones or PDAs with cellular and WiFi connectivity, GSM networks, and a personal-computer-based server connected to the Internet. The ensemble classifier based authentication system performed as follows: accuracy 99.22%, true positive rate 99.15%, false positive rate 0.71%, true negative rate 99.28%, false negative rate 0.84%, and precision 99.24%. The system's authentication response times were small enough to facilitate real-time implementation. We believe these results indicate that a biometric authentication system can reliably function as a supplement to existing password-based authentication procedures, resulting in a more trusted access control procedure.
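
The final fusion step can be illustrated generically: combine the two base classifiers' match scores with a weighted sum and threshold the result. The weights, threshold, and normalization below are illustrative assumptions, not values from the thesis:

```python
def fuse_scores(face_score, voice_score, w_face=0.5, threshold=0.7):
    """Weighted-sum score-level fusion of two base classifiers.
    Scores are assumed already normalized to [0, 1] (e.g., min-max);
    the weight and threshold here are hypothetical."""
    fused = w_face * face_score + (1 - w_face) * voice_score
    return ("grant" if fused >= threshold else "deny"), fused

print(fuse_scores(0.92, 0.78))   # ('grant', 0.85)
print(fuse_scores(0.40, 0.81))   # ('deny', 0.605)
```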

Committee:

Gursel Serpen (Advisor); Jackson Carvalho (Committee Member); Lawrence Thomas (Committee Member)

Subjects:

Artificial Intelligence; Bioinformatics; Computer Engineering; Computer Science

Keywords:

algorithm; authentication; biometric; classifier; credentials; eigenfaces; multi-modal; recognition; score level; fusion; MFCC; Viola Jones; LBG; Mel Frequency Cepstral Coefficients; voice print; Linde Buzo Gray; ELSDSR; Yale Extended; NIST FERET; server client

Doran, Gary Brian. Multiple-Instance Learning from Distributions
Doctor of Philosophy, Case Western Reserve University, 2015, EECS - Computer and Information Sciences
I propose a new theoretical framework for analyzing the multiple-instance learning (MIL) setting. In MIL, training examples are provided to a learning algorithm in the form of labeled sets, or "bags," of instances. Applications of MIL include 3-D quantitative structure-activity relationship prediction for drug discovery and content-based image retrieval for web search. The goal of an algorithm is to learn a function that correctly labels new bags or a function that correctly labels new instances. I propose that bags should be treated as latent distributions from which samples are observed. I show that it is possible to learn accurate instance- and bag-labeling functions in this setting, as well as functions that correctly rank bags or instances under weak assumptions. Additionally, my theoretical results suggest that it is possible to learn to rank efficiently using traditional, well-studied "supervised" learning approaches. These results also indicate that supervised approaches for learning from distributions can be used to directly learn bag-labeling functions efficiently. I perform an extensive empirical evaluation that supports the theoretical predictions entailed by the new framework. In addition to showing how supervised approaches can be applied to MIL, I prove new hardness results on using MI-specific algorithms to learn hyperplane labeling functions for instances. Finally, I propose a new resampling approach for MIL, analyze it under the new theoretical framework, and show that it can improve the performance of MI classifiers when training set sizes are small. In summary, the proposed theoretical framework leads to a better understanding of the relationship between the MI and standard supervised learning settings, and it provides new methods for learning from MI data that are more accurate, more efficient, and have better understood theoretical properties than existing MI-specific algorithms.
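
One concrete instance of "treating bags as distributions and using supervised learners" is to summarize each bag by its empirical mean (a crude stand-in for a kernel mean embedding) and train an ordinary classifier on the summaries. The data below is synthetic and the construction is only illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)

def make_bag(positive):
    """A bag is a sample from a latent distribution; positive bags
    shift a fraction of their instances (toy construction)."""
    inst = rng.normal(size=(rng.integers(5, 15), 10))
    if positive:
        inst[: len(inst) // 2] += 1.5
    return inst

bags = [make_bag(i % 2 == 1) for i in range(200)]
labels = np.array([i % 2 for i in range(200)])

# summarize each bag-as-distribution by its empirical mean, then train
# an ordinary supervised classifier on the bag summaries
X = np.stack([b.mean(axis=0) for b in bags])
clf = SVC().fit(X[:150], labels[:150])
print("bag-level accuracy:", clf.score(X[150:], labels[150:]))
```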

Committee:

Soumya Ray (Advisor); Harold Connamacher (Committee Member); Michael Lewicki (Committee Member); Stanislaw Szarek (Committee Member); Kiri Wagstaff (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

machine learning; multiple-instance learning; kernel methods; learning theory; classification

Sankaranarayanan, Karthik. Multiple Instance Learning for Localization and Tracking of Persistent Targets
Doctor of Philosophy, The Ohio State University, 2011, Computer Science and Engineering

Most high level vision tasks (behavior analysis, action recognition, etc.), especially in the context of video surveillance, often focus on targets of interest within the given scene. Therefore, a key task that needs to precede these activities is the identification of important targets. In this work, we focus on targets of interest as being those individuals that persist across the entire input video sequence, since many applications in surveillance naturally arise from this. More specifically, we address the problem of automatic localization and tracking of persistent targets in video sequences and study the problem in the domain of multiple pan-tilt-zoom (PTZ) cameras.

We propose a novel Multiple Instance Learning (MIL) framework which consists of a softmax-based combining function along with a logistic model for the instances employing log-covariance features. The learned target models can be updated in an online manner, can be used to learn models for multiple targets present in the scene, and can track them in a "tracking-by-detection" mould. We develop active camera control and coordination infrastructure to extend the proposed approach to multiple cameras. We also investigate a one-class version of the Multiple Instance problem and develop a support-vector-machine-based algorithm that uses prior probabilities on instance labels. Lastly, we perform detailed experiments to demonstrate the validity and usefulness of the proposed approach in localizing targets in various scenes using commercial-grade surveillance cameras.
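
The softmax-based combining function can be sketched on its own: it aggregates per-instance detector scores into one bag score, interpolating between the mean and the max. The alpha value and scores below are made up for illustration:

```python
import numpy as np

def softmax_combine(instance_scores, alpha=3.0):
    """Softmax combining function for MIL: an exponentially weighted
    average of instance scores. Large alpha approaches max (one strong
    instance makes the bag positive); alpha -> 0 approaches the mean."""
    s = np.asarray(instance_scores, dtype=float)
    w = np.exp(alpha * s)
    return float(np.sum(s * w) / np.sum(w))

windows = [0.05, 0.10, 0.92, 0.20]   # detector scores in one frame
print(softmax_combine(windows))       # ~0.75, pulled toward the 0.92 window
```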

Committee:

James Davis, PhD (Advisor); Richard Parent, PhD (Committee Member); James Todd, PhD (Committee Member); Catherine Calder, PhD (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

multiple instance learning; MIL; tracking; surveillance

Henson, Cory Andrew. A Semantics-based Approach to Machine Perception
Doctor of Philosophy (PhD), Wright State University, 2013, Computer Science and Engineering PhD
Machine perception can be formalized using semantic web technologies in order to derive abstractions from sensor data using background knowledge on the Web, and efficiently executed on resource-constrained devices. Advances in sensing technology hold the promise to revolutionize our ability to observe and understand the world around us. Yet the gap between observation and understanding is vast. As sensors are becoming more advanced and cost-effective, the result is an avalanche of data of high volume, velocity, and of varied type, leading to the problem of too much data and not enough knowledge (i.e., insights leading to actions). Current estimates predict over 50 billion sensors connected to the Web by 2020. While the challenge of data deluge is formidable, a resolution has profound implications. The ability to translate low-level data into high-level abstractions closer to human understanding and decision-making has the potential to disrupt data-driven interdisciplinary sciences, such as environmental science, healthcare, and bioinformatics, as well as enable other emerging technologies, such as the Internet of Things. The ability to make sense of sensory input is called perception; and while people are able to perceive their environment almost instantaneously, and seemingly without effort, machines continue to struggle with the task. Machine perception is a hard problem in computer science, with many fundamental issues that are yet to be adequately addressed, including: (a) annotation of sensor data, (b) interpretation of sensor data, and (c) efficient implementation and execution. This dissertation presents a semantics-based machine perception framework to address these issues. The tangible primary contributions created to support the thesis of this dissertation include the development of a Semantic Sensor Observation Service (SemSOS) for accessing and querying sensor data on the Web, an ontology of perception (Intellego) that provides a formal semantics of machine perception and reasoning framework for the interpretation of sensor data, and efficient algorithms for the machine perception inference tasks to enable interpretation of sensor data on resource-constrained devices, such as smart phones. Each of these contributions has been prototyped, evaluated, and applied towards solving real-world problems in multiple domains including weather and healthcare.

Committee:

Amit Sheth, Ph.D. (Advisor); Krishnaprasad Thirunarayan,, Ph.D. (Committee Member); Payam Barnaghi, Ph.D. (Committee Member); Satya Sahoo, Ph.D. (Committee Member); John Gallagher, Ph.D. (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Information Science

Keywords:

Semantic Web; Semantic Sensor Web; Semantic Sensor Networks; Semantic Perception; Machine Perception

Hardas, Manas Sudhakar. Segmentation and Integration in Text Comprehension: A Model of Concept Network Growth
PHD, Kent State University, 2012, College of Arts and Sciences / Department of Computer Science

Text comprehension (understanding by reading) is a fundamental way in which people learn about the world around them. During text comprehension the reader segments the concepts from the text into recognized or unrecognized concepts and then integrates the recognized concepts into their present knowledge base, which is represented as a concept network. Formation of this concept network is the central process in understanding texts.

In this thesis we present a mathematical framework for the segmentation (recognition) and integration of concepts. The model can explain how and why different readers construct different concept networks on reading the same text. It can also describe why some readers understand a text more easily than others, and why some texts are more difficult to understand than other texts for the same reader. The model is also used to explain the effect of the age of acquisition of a concept on comprehension: the earlier a concept is acquired, the more important it is for the comprehension of other concepts.

The model leads to an algorithm which is used to simulate concept network growth during text comprehension. These networks are then analyzed to investigate their structural properties. It is seen that these networks are small worlds with high local clustering and a normal degree distribution. These properties are indicative of the high connectivity and reachability not observed in similar random networks. It is also seen that although concept networks may start off with multiple disconnected components, the process of comprehension leads to most of the nodes getting connected to form a single giant component.

Committee:

Javed Khan, I (Advisor); Austin Melton, PhD (Committee Member); Arvind Bansal, PhD (Committee Member); Katherine Rawson, PhD (Committee Member); Denise Bedford, PhD (Committee Co-Chair)

Subjects:

Artificial Intelligence; Cognitive Psychology; Computer Science; Experimental Psychology; Physiological Psychology

Keywords:

text comprehension; concept network; network analysis; Alzheimer's; Autism; clustering; giant component; phase transition; semantic network; semantic memory
