Search Results (1 - 25 of 39 Results)


Gummadi, Jayaram. A Comparison of Various Interpolation Techniques for Modeling and Estimation of Radon Concentrations in Ohio
Master of Science in Engineering, University of Toledo, 2013, Engineering (Computer Science)
Radon-222 and its parent Radium-226 are naturally occurring radioactive decay products of Uranium-238. The US Environmental Protection Agency (USEPA) attributes about 10 percent of lung cancer cases in the United States, around 21,000 deaths per year, to indoor radon. The USEPA has categorized Ohio as a Zone 1 state (i.e., average indoor radon screening level greater than 4 picocuries per liter). In order to implement preventive measures, it is necessary to know radon concentration levels in all the zip codes of a geographic area. However, it is not possible to survey all the zip codes, owing to reasons such as inapproachability. In places where radon data are unavailable, several interpolation techniques are used to estimate radon concentrations. This thesis compares recently developed interpolation techniques with newer techniques such as Support Vector Regression (SVR) and Random Forest Regression (RFR). The recently developed techniques include Artificial Neural Networks (ANN), Knowledge Based Neural Networks (KBNN), and Correction-Based Artificial Neural Networks (CBNN), alongside conventional techniques such as Kriging, Local Polynomial Interpolation (LPI), Global Polynomial Interpolation (GPI), and Radial Basis Function (RBF); all are evaluated using K-fold cross-validation.
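As a rough illustration of the K-fold evaluation this abstract describes, the sketch below cross-validates a simple inverse-distance interpolator on made-up radon readings. The dataset, the interpolator, and every name here are illustrative stand-ins, not the thesis's actual models or data:

```python
import random

def kfold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal folds."""
    idx = list(range(n))
    random.Random(0).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def idw_predict(train, query, power=2):
    """Inverse-distance-weighted interpolation: a simple spatial baseline."""
    num = den = 0.0
    for (x, y), value in train:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0:
            return value
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

def kfold_rmse(data, k=5):
    """Average RMSE over k held-out folds."""
    folds = kfold_indices(len(data), k)
    total, count = 0.0, 0
    for fold in folds:
        train = [data[i] for i in range(len(data)) if i not in fold]
        for i in fold:
            pos, actual = data[i]
            err = idw_predict(train, pos) - actual
            total += err * err
            count += 1
    return (total / count) ** 0.5

# Hypothetical readings: ((x, y) zip-code centroid, concentration in pCi/L).
data = [((i % 6, i // 6), 4.0 + 0.5 * (i % 3)) for i in range(30)]
print(round(kfold_rmse(data), 3))
```

A real comparison would run the same fold splits over each candidate model (SVR, RFR, Kriging, and so on) and rank them by the cross-validated error.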

Committee:

William Acosta (Committee Chair); Vijay Devabhaktuni (Committee Co-Chair); Ashok Kumar (Committee Member); Rob Green (Committee Member)

Subjects:

Computer Science

Keywords:

artificial neural networks; cross-validation; correction based artificial neural networks; prior knowledge input; source difference; space-mapped neural networks; support vector regression; radon; random forest regression

Hope, Priscilla. Using Artificial Neural Networks to Identify Image Spam
Master of Science, University of Akron, 2008, Computer Science

Internet technology has made international communication easy and convenient. This convenience has led many people to rely on electronic mail for almost all spheres of life, personal, business, and beyond. Unscrupulous organizations and individuals have taken undue advantage of this convenience, populating users' inboxes with unwanted messages and making email spam a menace. Even as anti-spam software producers think they have almost solved the problem, spammers come out with new techniques. One such tactic in the spammers' toolbox comes in the form of image spam: messages that contain little more than a link to an image rendered in an HTML mail reader. The image typically contains the spam message one hopes to avoid, yet it is able to bypass most filters due to the composition and format of these pictures.

This research focuses on identifying these images as spam by using an artificial neural network (ANN), a software system for recognizing patterns that is loosely based on the biological neural networks in our brains. As information propagates through a neural network, it "learns" about the data. A large collection of both spam and non-spam images has been used to train an ANN, and the effectiveness of the trained network is then tested against both unidentified and already identified sets of pictures. This process involves formatting images and adding the desired training values expected by the ANN. Several different ANNs have been trained using different configurations of hidden layers and nodes per layer. A detailed process for preprocessing spam image files is given, followed by a description of how to train an artificial neural network to distinguish between ham and spam. Finally, the trained network is tested against both known and unknown images.
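The training process described above can be suggested with a minimal single-neuron classifier trained by gradient descent. The two image features and all numbers below are hypothetical; a real system (for example one built on the FANN library mentioned in the keywords) would use full multi-layer networks over richer image features:

```python
import math, random

def train_logistic(samples, labels, epochs=200, lr=0.5):
    """Train a single logistic neuron (the simplest 'network') by gradient descent."""
    rng = random.Random(0)
    w = [rng.uniform(-0.1, 0.1) for _ in samples[0]]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                      # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Hypothetical 2-feature vectors: (fraction of text-like pixels, edge density).
spam = [[0.8, 0.9], [0.7, 0.8], [0.9, 0.7]]    # label 1 (image spam)
ham  = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25]]  # label 0 (legitimate)
w, b = train_logistic(spam + ham, [1, 1, 1, 0, 0, 0])
print([predict(w, b, x) for x in ([0.85, 0.8], [0.1, 0.15])])
```

Adding hidden layers, as the thesis's experiments with different layer/node configurations do, is what lets the classifier learn non-linear boundaries between ham and spam features.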

Committee:

Kathy Liszka, PhD (Advisor); Timothy O’Neil (Other); Tim Marguish (Other)

Subjects:

Computer Science

Keywords:

image spam; FANN; artificial neural networks; using artificial neural networks to identify image spam

Sahinoglu, Mehmet Murat. Development of a real-time learning scheduler using adaptive critics concepts
Master of Science (MS), Ohio University, 1993, Industrial and Manufacturing Systems Engineering (Engineering)

Manufacturing systems involve human beings, machinery, non-linear dynamics, and a fusion of hierarchical and distributed organizational schemes. The cost-effective control and scheduling of these manufacturing systems require adaptability. Therefore, it is very important to implement competent on-line learning mechanisms that may accomplish balanced and adequate operation of processes with unknown dynamics in the manufacturing environment.

The integration of Dynamic Programming and Artificial Neural Networks has characteristics that may provide efficient real-time learning mechanisms for manufacturing. This thesis presents a system that achieves real-time learning using this integration for manufacturing scheduling. The system is capable of operational mappings. In addition, because no expert scheduler is available, it utilizes reinforcement signals from the environment (a measure of how desirable the achieved state is, taking the performance criteria into consideration). Conclusions are drawn and further research issues are discussed.

Committee:

Luis Rabelo (Advisor)

Subjects:

Engineering, Industrial

Keywords:

Real-Time Learning Scheduler; Adaptive Critics; Artificial Neural Networks; Dynamic Programming

Egilmez, Gokhan. Road Safety Assessment of U.S. States: A Joint Frontier and Neural Network Modeling Approach
Master of Science (MS), Ohio University, 2013, Civil Engineering (Engineering and Technology)
In this thesis, road safety assessment and prediction modeling for fatal crashes in U.S. states are addressed. In the first part, a DEA-based Malmquist Index model was developed to assess the relative efficiency and productivity of U.S. states in decreasing the number of road fatalities. Even though the national trend in fatal crashes has reached its lowest level since 1949 (Traffic Safety Annual Assessment Highlights, 2010), no work in the literature or organizational reports has provided a state-by-state analysis and comparison that considers the other characteristics of the holistic national road safety assessment problem. The single output, fatal crashes, and five inputs were aggregated into a single road safety score and utilized in the DEA-based Malmquist Index mathematical model. The period 2002-2008 was considered due to data availability for the inputs and output. According to the results, a slight negative productivity (an average of -0.2 percent) was observed in the U.S. in minimizing the number of fatal crashes, along with an average 2.1 percent efficiency decline and 1.8 percent technological improvement. Productivity in reducing fatal crashes can only be attributed to technological growth, since efficiency growth was negative. It can be concluded that even though a declining trend is observed in fatality rates, states are still not efficient in utilizing societal and economic resources toward the goal of zero fatalities. In the second part, a nonparametric prediction model, an Artificial Neural Network, was developed to assist policy makers in minimizing fatal crashes across the United States. Seven input variables from four safety performance domains were used, while fatal crashes served as the single output variable for the scope of the research.
Artificial Neural Networks (ANN) were utilized, and the best neural network model was selected from 1000 trained networks. The proposed neural network model predicted the data with a coefficient of determination of 0.84. In addition, the developed ANN model was benchmarked against a multiple linear regression model and outperformed it on all performance metrics, including r, R², and the standard error of the estimate. A sensitivity analysis was also conducted, and the results indicated that road length, vehicle miles traveled, and safety expenditures were the top three input variables affecting fatal crashes. In conclusion, more effective policy making toward increasing safety belt usage and better utilization of safety expenditures to improve road conditions emerge as the key areas of focus for state highway safety agencies within the scope of the current research. This research also reveals the significance of the relationship between the four input domains and fatal crashes for the United States from a holistic perspective and offers a robust nonparametric model to policy makers for the prediction of fatal crashes.
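The benchmarking metrics this abstract relies on (R² and the standard error of the estimate) can be computed as below; the actual/predicted values are invented purely for illustration, and the two prediction lists stand in for an ANN and a regression baseline:

```python
import math

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

def std_error_of_estimate(actual, predicted):
    """Residual spread; n-2 assumes two fitted parameters, as in simple regression."""
    n = len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return math.sqrt(ss_res / (n - 2))

# Fabricated fatal-crash counts and two competing sets of predictions.
actual   = [100, 120, 90, 150, 130]
ann_pred = [102, 118, 93, 148, 128]   # hypothetical ANN output
ols_pred = [110, 112, 100, 140, 125]  # hypothetical linear-regression output
print(r_squared(actual, ann_pred) > r_squared(actual, ols_pred))
```

A model "outperforming on all metrics" then means a higher r and R² together with a lower standard error of the estimate.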

Committee:

Deborah McAvoy, Ph.D. (Advisor); Byung-Cheol Kim, Ph.D. (Committee Member); Ken Walsh, Ph.D. (Committee Member); M. Khurrum S. Bhutta, Ph.D. (Committee Member)

Subjects:

Civil Engineering; Industrial Engineering; Transportation

Keywords:

Road Safety Assessment; Benchmarking; Data Envelopment Analysis; Malmquist Productivity Index; Nonparametric Predictive Modeling; Artificial Neural Networks; Machine Learning; US States

Holland, William S. Development of an Indoor Real-time Localization System Using Passive RFID Tags and Artificial Neural Networks
Master of Science (MS), Ohio University, 2009, Industrial and Systems Engineering (Engineering and Technology)
Radio frequency identification (RFID) technology is used for inventory and asset tracking because of its accuracy and speed. Currently, RFID tracking systems are being used to identify and locate tagged objects in indoor environments. In this research, received signal strength indication (RSSI) values are collected from off-the-shelf passive RFID readers and antennas to be used in conjunction with an artificial neural network (ANN) to create a localization algorithm for two-dimensional location estimation with a single tag. The aim of this research is to create a highly accurate real-time location tracking system to be used in a room with objects that create RF interference. Multiple linear regression is used as a benchmark method for comparison with artificial neural networks.

Committee:

Gary Weckman, PhD (Advisor); Kevin Berisso, PhD (Committee Member); Diana Schwerha, PhD (Committee Member); Andrew Snow, PhD (Committee Member)

Subjects:

Artificial Intelligence; Engineering; Industrial Engineering; Systems Design

Keywords:

RFID; RSSI; artificial neural networks; location system

Anderson, Jerone S. A Study of Nutrient Dynamics in Old Woman Creek Using Artificial Neural Networks and Bayesian Belief Networks
Master of Science (MS), Ohio University, 2009, Industrial and Systems Engineering (Engineering and Technology)
The Old Woman Creek National Estuary is studied in this project to evaluate effective modelling techniques for predicting Net Ecosystem Metabolism (NEM). NEM is modelled using artificial neural networks, Bayesian belief networks, and a hybrid model. A variety of data preprocessing techniques are considered prior to model development. The effects of discretization on model development are considered and discrete data is ultimately used to produce models which classify NEM into three ranges based on inputs with information significance. Artificial neural networks are found to be the most accurate for classification while Bayesian belief networks are found to provide a better framework for dynamically predicting NEM as inputs are changed.

Committee:

Gary R. Weckman, PhD (Advisor); David Millie, PhD (Committee Member); Kevin Berisso, PhD (Committee Member); Diana Schwerha, PhD (Committee Member)

Subjects:

Ecology; Engineering; Environmental Engineering; Industrial Engineering

Keywords:

BBN; ANN; ecology; NEM; Bayesian Belief Networks; Artificial Neural Networks; computer modelling

Sneath, Evan B. Artificial neural network training for semi-autonomous robotic surgery applications
MS, University of Cincinnati, 2014, Engineering and Applied Science: Computer Engineering
As telesurgical robots become more common in surgical operating rooms, emphasis is shifting from human-controlled robotics to semi- or fully automatic operation. Safe and efficient methods of training and execution during an automated surgical task are required for real-world success. The approach of path generation using artificial neural networks allows an effective and scalable solution for the supervised learning and real-time performance of a surgical procedure. This study makes use of long short-term memory (LSTM) recurrent neural networks (RNNs) in conjunction with the Evolino learning algorithm for tooltip path optimization. The RNN-generated path is trained from human-performed procedures in a simulated testing environment. Changes in the movement of path markers are accounted for by adjusting the tooltip acceleration with respect to target markers along the path. Results include smooth generated paths that successfully meet test procedure requirements for accuracy and speed in environments with both static and dynamic marker configurations.

Committee:

Fred Beyette, Ph.D. (Committee Chair); Ali Minai, Ph.D. (Committee Member); Grant Schaffner, Ph.D. (Committee Member)

Subjects:

Computer Engineering

Keywords:

telesurgery;robotics;artificial neural networks;evolino

Muralidharan Nair, Mithun. Statistical Leakage Estimation Using Artificial Neural Networks
MS, University of Cincinnati, 2014, Engineering and Applied Science: Computer Engineering
Present-day integrated circuit designs have become very densely packed with smaller devices. The scaling down of technology has increased the significance of modeling the effects of process variations, and increasing leakage power consumption is another factor that concerns circuit designers at smaller technology nodes. The effect of variation can be drastic when it affects the functionality of the chip, and there is a finite probability that a chip remains functional in the presence of variations yet does not meet performance and/or power consumption requirements. Modern chip design is strongly oriented toward high performance and low power, so an adverse effect from process variations can prove counterproductive for the designer. A good design methodology should be able to predict and address these adverse effects at early design stages; techniques are therefore needed that can model the effects of process variations from the synthesis stage through post-routing. In this thesis we propose a methodology that can accurately estimate leakage in the presence of variations after the synthesis stage. Since we address the issue at the pre-layout stage, we give importance to variations in device dimensions and threshold voltage. The methodology starts from the RTL description of a design, which is synthesized into a netlist of standard cells. We used the standard cell definitions and the characterized power and timing values from the Synopsys 90nm EDK, and this netlist is used for our experiments. The core of the method is a set of artificial neural network models for standard cell leakage.
These models are generated for a wide range of cells in the standard cell library and are in turn used in a tool that produces a statistical leakage estimate in the presence of process variations. The standard methodology for statistical leakage analysis under variations is Monte Carlo simulation in HSPICE, which is conventionally used for smaller circuits. For larger benchmarks, one of the methods used for statistical leakage estimation is Wilkinson's approach. Our method is applied to standard circuit benchmarks and compared with the conventional statistical leakage estimation methodologies.
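Wilkinson's approach, mentioned above, approximates a sum of lognormally distributed leakage components by a single lognormal that matches the sum's first two moments. This sketch, with entirely hypothetical per-cell parameters, checks the moment-matched mean against a plain Monte Carlo estimate:

```python
import math, random

def wilkinson_moments(mus, sigmas):
    """Match the sum of independent lognormals to one lognormal (mu, sigma)."""
    m1 = sum(math.exp(mu + s * s / 2) for mu, s in zip(mus, sigmas))
    var = sum((math.exp(s * s) - 1) * math.exp(2 * mu + s * s)
              for mu, s in zip(mus, sigmas))   # variance of the sum
    m2 = var + m1 * m1                          # second raw moment
    sigma2 = math.log(m2 / (m1 * m1))
    return math.log(m1) - sigma2 / 2, math.sqrt(sigma2)

def monte_carlo_mean(mus, sigmas, trials=20000):
    """Brute-force estimate of the mean total leakage."""
    rng = random.Random(1)
    total = 0.0
    for _ in range(trials):
        total += sum(math.exp(rng.gauss(mu, s)) for mu, s in zip(mus, sigmas))
    return total / trials

# Hypothetical per-cell leakage parameters (log-domain mean, std dev).
mus    = [math.log(1e-9), math.log(2e-9), math.log(1.5e-9)]
sigmas = [0.3, 0.25, 0.35]
mu, sigma = wilkinson_moments(mus, sigmas)
analytic_mean = math.exp(mu + sigma * sigma / 2)
mc_mean = monte_carlo_mean(mus, sigmas)
print(abs(analytic_mean - mc_mean) / mc_mean < 0.05)
```

The attraction for large benchmarks is that the moment matching is closed-form per cell, avoiding the per-sample SPICE runs that Monte Carlo requires.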

Committee:

Ranganadha Vemuri, Ph.D. (Committee Chair); Wen Ben Jone, Ph.D. (Committee Member); Carla Purdy, Ph.D. (Committee Member)

Subjects:

Computer Engineering

Keywords:

Process Variations;Leakage Power;Statistical Leakage;Artificial Neural Networks

Regmi, Hem Kanta. A Real-Time Computational Decision Support System for Compounded Sterile Preparations using Image Processing and Artificial Neural Networks
Master of Science, University of Toledo, 2016, Electrical Engineering
The purpose of this research is to design a computational decision support system (DSS) for compounded sterile preparations (CSP). Error-free compounding depends on the proper selection of components and adherence to procedure during compounding. A material selection system (MSS) based on a graphical user interface (GUI), coupled with a barcode scanner and back-end database, has been developed and tested for proper selection of items in three different medication orders (MO). A video processing system (VPS) has been implemented in MATLAB that evaluates the live video feed from the compounding hood to monitor the procedure while the MOs are compounded. SURF detection is used to detect and locate compounding items placed in the hood. Various algorithms have been developed and tested to enhance the accuracy and robustness of the VPS. The DSS is further improved with the integration of another digital camera to ensure that the correct volume of medicine is drawn with the appropriate syringe throughout the compounding process. Template matching and SURF object detection applied to digital images of the syringe, along with a minimum distance classifier and artificial neural networks (ANNs) trained on data from several experimental observations, were explored for syringe classification and volume measurement. The MSS was tested with all items used in compounding the MOs and performed error-free. The VPS evolved from VPS.01 and VPS.02 to VPS.03, which showed the greatest accuracy and suitability for real-time operation. All deliberate mistakes made while compounding the tested medication orders were captured by VPS.03. Luer-lock syringes of sizes from 1 mL to 30 mL were tested, and an accuracy above 95 percent was obtained with very high precision.
The new computational decision support system facilitates error-free selection of components and is able to monitor and evaluate the compounding process and verify correct volume measurement in real time. The platform may be used in CSP compounding rooms to audit techniques and procedures as well as in training or educational settings.

Committee:

Vijay Devabhaktuni, Dr. (Committee Chair); Jerry Nesamony, Dr. (Committee Co-Chair); Devinder Kaur, Dr. (Committee Member); Ezzatollah Salari, Dr. (Committee Member)

Subjects:

Electrical Engineering

Keywords:

Compounding Sterile Preparations, Graphical User Interface, Artificial Neural Networks, Image Processing, Video Processing, Decision Support System, Medication Order, Object Detection, Correlation Calculation, Connected Component Analysis

Reinoso, Nicholas L. Forecasting Harmful Algal Blooms for Western Lake Erie using Data Driven Machine Learning Techniques
Master of Science in Civil Engineering, Cleveland State University, 2017, Washkewicz College of Engineering
Harmful algal blooms (HABs) have been documented all over the world for more than a century, and western Lake Erie has suffered from cyanobacteria blooms for many decades. There are currently two widely available HAB forecasting models for Lake Erie: the first gives a yearly peak bloom forecast, while the second provides weekly short-term forecasts with size and location. This study focuses on bridging the gap between these two models and improving HAB forecast accuracy in western Lake Erie by letting historical observations reveal the behavior of HABs. Two machine learning techniques, artificial neural networks (ANN) and classification and regression trees (CART), are tested to forecast monthly HAB indicators in western Lake Erie for July through October. ANN and CART models were created with two methods of selecting input variables and two training periods (2002 to 2011 and 2002 to 2013). The first selection method is a nutrient loading period approach that considers all nutrient-contributing variables averaged from March to June; the second uses Spearman rank correlation to choose a separate input set for each month, considering 224 different combinations of averaging and lag periods. The ANN models showed a correlation coefficient increase from 0.70 to 0.77 for the loading method and from 0.79 to 0.83 for the Spearman method when the training period was extended. The CART models followed a similar trend, increasing overall precision from 85.5 percent to 92.9 percent for the loading method and from 82.1 percent to 91 percent for the Spearman method. Both selection methods showed similar variable importance, with river discharge and phosphorus mass ranking highly across all methods. The major limitation of the ANN is the time required for each forecast to be completed, while CART can forecast earlier but produces only a class forecast. In future work, the ANN model's accuracy can be improved and new sets of variables can be used to allow earlier HAB forecasts.
The final forms of the ANN and CART models will be coded into a user interface system to forecast HABs. The monthly forecasting system developed here allows watershed planners and decision-makers to manage HABs in western Lake Erie in a timely manner.
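The Spearman rank correlation used above for input selection is simply the Pearson correlation computed on ranks. A minimal sketch, with invented discharge and HAB-indicator values standing in for the study's monthly series:

```python
def ranks(values):
    """Assign average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    dx = sum((a - mx) ** 2 for a in rx) ** 0.5
    dy = sum((b - my) ** 2 for b in ry) ** 0.5
    return num / (dx * dy)

# Hypothetical candidate input vs. a monthly HAB indicator.
discharge = [12.0, 8.5, 15.2, 20.1, 9.9, 18.3]
hab_index = [3.1, 2.0, 4.4, 6.0, 2.5, 5.1]
print(round(spearman(discharge, hab_index), 2))
```

Screening each candidate variable (over the study's 224 averaging/lag combinations) by this coefficient, and keeping the strongest per month, is the essence of the second input-selection method.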

Committee:

Ung Tae Kim, Ph.D. (Committee Chair); Mehdi Jalalpour, Ph.D. (Committee Member); Walter Kocher, Ph.D. (Committee Member)

Subjects:

Civil Engineering

Keywords:

Harmful algal bloom forecasting; Lake Erie; Artificial neural networks; Classification and regression trees; Machine learning

Hincapie, Juan Gabriel. EMG-Based Control of Upper Extremity Neuroprostheses for C5/C6 Spinal Cord Injury
Doctor of Philosophy, Case Western Reserve University, 2008, Biomedical Engineering
Functional Electrical Stimulation (FES) has been used to restore upper extremity function in individuals with C5/C6-level spinal cord injury (SCI). Neuroprostheses for this SCI population typically restore hand grasp. In the shoulder and elbow, these individuals have a combination of voluntary, denervated, and paralyzed muscles that reduces their workspace and forces them to adopt non-natural kinematic strategies. Controlling these FES systems and integrating them seamlessly with the remaining function is still challenging. This project explored the use of electromyographic (EMG) signals recorded from muscles that remain under voluntary control to automatically stimulate paralyzed muscles in the shoulder and elbow, restoring proximal arm function in a more natural manner, interacting synergistically with the remaining function, and complementing the hand grasp function provided by current systems. A musculoskeletal model of the shoulder and elbow was used to select an optimal set of muscles for stimulation. The model was also used to generate the patterns of activation required to restore high-level reaching function. We demonstrated that a neural network controller could be trained to predict activations for the paralyzed muscles using voluntary muscle activations as inputs. The controller was then implemented in one human subject, whose recorded EMG signals were used to train it. The implemented strategy showed that it is possible to restore reaching function while controlling the stimulation automatically. Furthermore, the intervention proved useful in increasing the range of motion of the arm and improving overall shoulder stability.

Committee:

Robert Kirsch, PhD (Advisor); Edward Chadwick, PhD (Committee Member); Patrick Crago, PhD (Committee Member); Michael Branicky, PhD (Committee Member); Musa Audu, PhD (Committee Member)

Subjects:

Biomedical Research

Keywords:

Functional Electrical Stimulation; Neuroprostheses; Musculoskeletal Modeling; EMG; Artificial Neural Networks

Xu, Siyao. THE RECONSTRUCTION OF CLOUD-FREE REMOTE SENSING IMAGES: AN ARTIFICIAL NEURAL NETWORKS (ANN) APPROACH
MA, Kent State University, 2009, College of Arts and Sciences / Department of Geography
Spatial and temporal series of remote sensing images play increasingly important roles in monitoring, utilizing, and analyzing resources. However, a large number of remote sensing images are contaminated by clouds, which causes missing information and makes it difficult to extract complete information. Traditional solutions to this problem have limitations such as low resolution, data loss, or large computational load. In this paper, a method utilizing an Artificial Neural Network (ANN) interpolator is implemented, which may avoid the problems stated above. To assess the performance of the ANN interpolator, a small area of forest, mountain, valley, and road is clipped from an ETM+ file, and several “cloud” areas are manually created to test the ANN model. One band of the image is transformed into ASCII files. Next, a K Nearest Neighbor (KNN) search algorithm is applied to these ASCII files, and k neighbors are found for every pixel in the area. An ANN model is then built: for each pixel contaminated by cloud, its neighbors are used as input, and the output for that pixel is its predicted DN value. Finally, the output is restored to a raster file. Root mean square error, quantile-quantile plots, and error distribution maps are adopted to assess the performance of the ANN interpolator. The thesis concludes that activation functions and the neighborhood search do not cause significant differences in the output of the ANN interpolators, and that the interpolation results are globally good but largely biased regionally.
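The neighbor-search-then-predict pipeline described above might look like the sketch below. The 5x5 toy band is invented, and the neighbor mean is a simplified stand-in for the thesis's trained ANN interpolator, which would map the neighbors' DN values to a prediction non-linearly:

```python
def k_nearest(pixels, target, k):
    """Brute-force KNN over (row, col, dn) pixel records."""
    def d2(p):
        return (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
    return sorted(pixels, key=d2)[:k]

def fill_cloud_pixel(clear_pixels, cloudy_rc, k=4):
    """Predict a missing DN value from its k clear neighbors.

    The mean of the neighbors stands in here for an ANN that takes the
    neighbor DN values as inputs and outputs the predicted DN value."""
    neighbors = k_nearest(clear_pixels, cloudy_rc, k)
    return sum(p[2] for p in neighbors) / k

# Hypothetical 5x5 band (DN = 100 + row + col) with one cloud-masked
# pixel at (2, 2); its true value under this pattern would be 104.
clear = [(r, c, 100 + r + c) for r in range(5) for c in range(5)
         if (r, c) != (2, 2)]
print(fill_cloud_pixel(clear, (2, 2)))
```

Comparing such predictions against the known values under the manually created "cloud" masks is what allows the RMSE and Q-Q assessments described in the abstract.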

Committee:

Mandy Munro-Stasiuk, PhD (Advisor); Milton Harvey, PhD (Advisor)

Subjects:

Geography

Keywords:

Remote Sensing Image; Cloud-free; Artificial Neural Networks

Taylor, Brent S. Utilizing ANNs to Improve the Forecast for Tire Demand
Master of Science (MS), Ohio University, 2015, Industrial and Systems Engineering (Engineering and Technology)
This study is an initial attempt to investigate the relationship between economic factors and monthly tire sales using artificial neural networks (ANNs), comparing the results to stepwise regression. Data for this research were collected through a privately held tire warehouse located in Wheeling, West Virginia. Research has shown that artificial neural network models have been successfully applied to many real-world forecasting applications; however, to date no research has been found that uses artificial neural networks and economic factors to predict tire demand. The first part of this research describes why the chosen economic factors were selected for this study and explains the initial methodology with results. The next stage details why the methodology was revised and clarifies why Google Trends and additional mathematical inputs were added to the study. The final stage focused on separating the master database into three categories based on selling percentages. The results show that the artificial neural network models, with a validation technique, were capable of forecasting the number of high-selling tires, but could not be applied successfully to the medium- and low-selling products.

Committee:

Gary Weckman, Ph.D. (Advisor)

Subjects:

Engineering; Industrial Engineering

Keywords:

Artificial Neural Networks; Tire Forecasting; Demand Planning; ANN; Tire Demand; Economic Factors

Pech, Thomas Joel. A Deep-Learning Approach to Evaluating the Navigability of Off-Road Terrain from 3-D Imaging
Master of Sciences (Engineering), Case Western Reserve University, 2017, EECS - Computer and Information Sciences
This work investigates a strategy for evaluating the navigability of terrain from 3-D imaging. Labeled training data was automatically generated by running a simulation of a mobile robot naïvely exploring a virtual world. During this exploration, sections of terrain were perceived through simulated depth imaging and saved with labels of safe or unsafe, depending on the outcome of the robot's experience driving through the perceived regions. This labeled data was used to train a deep convolutional neural network. Once trained, the network was able to evaluate the safety of perceived regions. The trained network was shown to be effective in achieving safe, autonomous driving through novel, challenging, unmapped terrain.

Committee:

Wyatt Newman (Advisor); Cenk Cavusoglu (Committee Member); Michael Lewicki (Committee Member)

Subjects:

Computer Science; Robotics; Robots

Keywords:

Mobile robots, Autonomous Navigation, Machine Learning, Artificial Neural Networks, Terrain, Simulation, Training Data, Data Generation, Labeling, Classifiers, Convolutional Neural Networks, Point Clouds, Perception, Prediction, Artificial Intelligence

Ghosh Dastidar, Samanwoy. Models of EEG data mining and classification in temporal lobe epilepsy: wavelet-chaos-neural network methodology and spiking neural networks
Doctor of Philosophy, The Ohio State University, 2007, Biomedical Engineering
A multi-paradigm approach integrating three novel computational paradigms, wavelet transforms, chaos theory, and artificial neural networks, is developed for EEG-based epilepsy diagnosis and seizure detection. This research challenges the assumption that the EEG represents the dynamics of the entire brain as a unified system. It is postulated that the sub-bands yield more accurate information about constituent neuronal activities underlying the EEG. Consequently, certain changes in EEGs that are not evident in the original full-spectrum EEG may be amplified when each sub-band is analyzed separately. A novel wavelet-chaos methodology is presented for analysis of EEGs and the delta, theta, alpha, beta, and gamma sub-bands of EEGs for the detection of seizures and epilepsy. The methodology is applied to three different groups of EEGs: healthy subjects, epileptic subjects during a seizure-free interval (interictal), and epileptic subjects during a seizure (ictal). Two potential markers of abnormality quantifying the non-linear chaotic EEG dynamics are discovered: the correlation dimension and the largest Lyapunov exponent. A novel wavelet-chaos-neural network methodology is developed for EEG classification. Along with the aforementioned two parameters, the standard deviation (quantifying the signal variance) is employed for EEG representation. It was discovered that a particular mixed-band feature space consisting of nine parameters, together with LMBPNN, results in the highest classification accuracy (96.7%). To increase the robustness of classification, a novel principal component analysis-enhanced cosine radial basis function neural network classifier is developed. The rearrangement of the input space along the principal components of the data significantly improves the classification accuracy of the cosine radial basis function neural network employed in the second stage. The new classifier is as accurate as LMBPNN and is twice as robust.
Next, biologically realistic artificial neural networks are developed to reach the next milestone in artificial intelligence. First, an efficient spiking neural network (SNN) model is presented using three training algorithms: SpikeProp, QuickProp, and RProp. Three measures of performance are investigated: number of convergence epochs, computational efficiency, and classification accuracy. Next, a new Multi-Spiking Neural Network (MuSpiNN) and supervised learning algorithm (Multi-SpikeProp) are developed. Finally, the models are applied to the epilepsy and seizure detection problems to achieve high classification accuracies.
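The sub-band decomposition this methodology rests on can be suggested with a Haar wavelet transform. This is a simplified, self-contained stand-in (the actual thesis's wavelet family and band boundaries are not specified here; the toy signal is invented):

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: approximation (low-pass) and detail (high-pass)."""
    s = math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def sub_bands(signal, levels):
    """Repeatedly split the low-pass output, yielding one detail band per level
    plus the final approximation (coarsest) band."""
    bands, approx = [], signal
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        bands.append(detail)
    bands.append(approx)
    return bands

# Toy 16-sample "EEG" segment mixing a slow and a fast oscillation; a real
# analysis would decompose into the delta..gamma frequency bands.
x = [math.sin(2 * math.pi * i / 8) + 0.5 * math.sin(2 * math.pi * i / 2)
     for i in range(16)]
bands = sub_bands(x, 3)
print([len(b) for b in bands])
```

Each band can then be characterized separately, for example by the standard deviation, correlation dimension, and largest Lyapunov exponent features the abstract describes, which together form the mixed-band feature space.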

Committee:

Hojjat Adeli (Advisor)

Keywords:

Temporal Lobe Epilepsy; Electroencephalogram (EEG); EEG Classification; Epilepsy Diagnosis; Seizure Detection; Wavelet Transform; Chaos Theory; Artificial Neural Networks; Spiking Neural Networks; Principal Component Analysis; Cosine Radial Basis Function

Cabrera Rios, Mauricio: MULTIPLE CRITERIA OPTIMIZATION STUDIES IN REACTIVE IN-MOLD COATING
Doctor of Philosophy, The Ohio State University, 2002, Industrial and Systems Engineering
Reactive in-mold coating (IMC) products have been used successfully for many years to improve the surface quality of Sheet Molding Compound (SMC) compression molded parts. IMC provides a smooth, sealed surface, used as a conductive or nonconductive primer for subsequent painting operations. The success of IMC for SMC parts has recently attracted the interest of thermoplastic injection molders. The potential environmental and economic benefits of using IMC as a primer and, in the ideal case, to replace painting completely are large. Acceptance of IMC as a competitor to the traditional painting processes will depend upon the improvement of its ability to deliver in-mold coated parts in short cycle times at the highest possible quality level. Most optimization studies in Reactive Polymer Processing involve compromising between different performance measures since, frequently, the controllable variables have conflicting effects on these measures. IMC is not the exception to the rule. The performance measures need to be balanced, each against the other, in order to obtain the best compromises. The goal of this research work is to develop an optimization strategy for the application of reactive in-mold coating to SMC and thermoplastic parts in the presence of multiple and conflicting performance measures. To achieve this goal we explore the use of Artificial Neural Networks as metamodeling techniques and the use of Data Envelopment Analysis to solve multiple criteria optimization problems.
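The "best compromises" between conflicting performance measures are, formally, the non-dominated (Pareto-optimal) design points. A minimal sketch of that idea, assuming all measures are to be minimized (the thesis itself uses ANN metamodels plus Data Envelopment Analysis rather than this brute-force filter):

```python
def pareto_front(points):
    """Return the non-dominated points from a list of measure tuples.

    A point p is dominated if some other point q is at least as good
    in every measure (assumes no duplicate points; all minimized).
    """
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```

For example, among the (cycle time, surface defects) pairs (1, 5), (2, 2), (5, 1), (4, 4), the point (4, 4) is dominated by (2, 2) and drops out; the other three are incomparable compromises.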

Committee:

Jose Castro (Advisor)

Keywords:

In-Mold Coating; Multiple Criteria Optimization; Artificial Neural Networks; Data Envelopment Analysis; Reactive Polymer Processing

Zhou, Yingqing: Solving adaptive multiple criteria problems by using artificial neural networks
Doctor of Philosophy, Case Western Reserve University, 1992, Systems and Control Engineering
This dissertation addresses the application of Feedforward Artificial Neural Networks (FANNs) in solving Multiple Criteria Decision Making (MCDM) problems from the following aspects: (1) Designing the structure of FANNs; (2) Using FANNs in static MCDM; (3) Using FANNs in dynamic MCDM; and (4) Applying the method developed in this dissertation to machining operations. In the structure design of a FANN, the capacity consideration of the FANN is important. In this dissertation, the capacity of a FANN with a given number of hidden nodes in approximating polynomial functions is given. The proofs and examples are presented. Secondly, an Adaptive Feedforward Artificial Neural Network (AF-ANN) is developed for solving static MCDM problems. The AF-ANN starts with an initial structure and increases its number of nodes until the desired structure is obtained. When training patterns change, the AF-ANN model can adapt itself to the changes by re-training or expanding the existing model. The static MCDM through AF-ANN consists of two steps: (a) Training an AF-ANN by the elicited information, and (b) Choosing the optimal alternative by using the trained AF-ANN. The theoretical basis for AF-ANN and examples are presented. The concepts of efficiency, concavity, and convexity in MCDM problems are discussed within the structure of AF-ANN. In dynamic MCDM problems, the Decision Maker (DM) may change his/her preferences. A FANN method is used to identify the changing preference function of the DM. To train a FANN efficiently in this dynamic environment, an adaptive algorithm is developed. This algorithm has two advantages: (a) The size of memory for training patterns does not increase as training patterns increase; and (b) the FANN is modified by using the algorithm such that the new training pattern is always represented by the FANN while the total training error for the rest of the training patterns is minimized.
In terms of machining operations, a monitoring and supervising system is developed by using in-process regressions and FANNs. The system is designed for (a) in-process tool life measurement and prediction, (b) supervision of machining operations in terms of the best machining set-up, and (c) tool failure monitoring.

Committee:

Behnam Malakooti (Advisor)

Keywords:

Solving adaptive multiple criteria problems artificial neural networks

Attal, Asadullah: Development of Neural Network Models for Prediction of Highway Construction Cost and Project Duration
Master of Science (MS), Ohio University, 2010, Civil Engineering (Engineering and Technology)

Early understanding of construction cost and time is a critical factor in the feasibility study conducted during the early design phase of a project. All parties involved in the construction of a project (owners, contractors, and service companies) need reliable information about cost and time in the early stages of the project. Therefore, this research work attempts to develop a consistent model for forecasting early-design highway construction cost and project duration. A wide review of the literature on cost and project duration established the significance of reliable methods for predicting construction cost and project duration. At this time, researchers suffer from a lack of information in the early phases of a project with which to identify and analyze the parameters, and their relationships, related to project duration. However, there have been numerous attempts to develop improved models for highway construction cost prediction based on different techniques.

This research aims to develop highway construction cost and duration prediction models based on statistical analysis. The statistical techniques used in this research work are Artificial Neural Networks (ANN) and stepwise regression analysis, applied to identify the influential parameters and to forecast early-design-phase highway construction cost and duration. The input data used to develop the mathematical models were compiled and maintained by the Virginia Department of Transportation (VDOT). The data used in this modeling were extracted from two sources within VDOT: the Data Warehouse Management Information Portal (DWMIP) and the Project Cost Estimating System (PCES). The parametric-stage data were maintained in PCES by VDOT. In addition, several techniques were used to identify the effective parameters for these models: sorting and identifying the effective parameters used in traditional techniques, the trial-and-elimination method of ANNs, and sensitivity analysis. The chosen parameters were then analyzed by two distinct statistical techniques: linear regression analysis and nonlinear ANN. Also, the given data were classified and analyzed for full-depth section and improvement of highway, and each section was classified as a full or condensed model. The same classification and analytical procedure was used for both highway cost prediction and project duration prediction.
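Stepwise regression of the kind described here can be sketched as greedy forward selection: at each step, add the candidate variable that most reduces the residual sum of squares. The sketch below is a simplified stand-in (no entry/exit significance tests, and all names are hypothetical):

```python
import numpy as np

def rss(A, y):
    """Residual sum of squares of a least-squares fit of y on columns A."""
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((A @ beta - y) ** 2))

def forward_select(X, y, k):
    """Greedily pick k columns of X, each time adding the column that
    most reduces the residual sum of squares."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = min(remaining, key=lambda j: rss(X[:, chosen + [j]], y))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

If the response is driven by a single column, the first selection step recovers it, which is the sense in which the procedure "identifies influential parameters."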

As a result of the effective-parameter identification and prediction-model analysis, the ANN models showed higher accuracy and reliability than linear regression analysis. The identification of influential parameters also has a crucial impact on future investigations.

Committee:

Mehmet Tatari, PhD (Advisor); Byong Cheol Kim, PhD (Committee Member); Munir Nazzal, PhD (Committee Member); Martin J Mohlenkamp, PhD (Committee Member)

Subjects:

Civil Engineering

Keywords:

Highway Cost Prediction; Highway Construction Duration Prediction; Artificial Neural Networks; Regression; Influential Parameters

Bandreddy, Neel Kamal: Estimation of Unmeasured Radon Concentrations in Ohio Using Quantile Regression Forest
Master of Science, University of Toledo, 2014, College of Engineering
The most stable isotope of radon is Radon-222, which is a decay product of radium-226 and an indirect decay product of uranium-238, a natural radioactive element. According to the United States Environmental Protection Agency (USEPA), radon is the primary cause of lung cancer among non-smokers. The USEPA classifies Ohio as a Zone 1 state because the average radon screening level is more than 4 picocuries per liter. To perform preventive measures, knowing radon concentration levels in all the zip codes of a geographic area is necessary. However, it is impractical to collect this information from all zip codes because some areas are inaccessible. Several interpolation techniques have been implemented by researchers to predict radon concentrations in places where radon data are not available. Hence, to improve the prediction accuracy of radon concentrations, a new technique called Quantile Regression Forests (QRF) is proposed in this thesis. The conventional techniques, such as Kriging, Local Polynomial Interpolation (LPI), Global Polynomial Interpolation (GPI), and Radial Basis Function (RBF), estimate the output using complex mathematics. Artificial Neural Networks (ANN) have been introduced to overcome this problem. Although ANNs show better prediction accuracy in comparison to more conventional techniques, many issues arise, including convergence to local minima and overfitting. To overcome the inadequacies of existing methods, statistical learning techniques such as Support Vector Regression (SVR) and Random Forest Regression (RFR) were implemented. In this thesis, Quantile Regression Forest (QRF) is introduced and compared with SVR, RFR, and other interpolation techniques using available operational performance measures. The study shows that QRF has the least validation error compared with the other interpolation techniques.
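The key difference between RFR and QRF can be shown in a few lines. For a query point, each tree contributes the training targets stored in the leaf the query reaches; RFR averages them, while QRF keeps the whole empirical distribution and reads off any requested quantile. A minimal sketch (the per-leaf target lists and all names are illustrative, not from the thesis):

```python
import numpy as np

def qrf_predict(leaf_targets_per_tree, q):
    """Quantile Regression Forest prediction for one query point.

    leaf_targets_per_tree: for each tree in the forest, the training
    targets found in the leaf the query falls into. Pooling these and
    taking the empirical q-quantile gives the QRF estimate; averaging
    them instead would give the ordinary RFR prediction.
    """
    pooled = np.concatenate(leaf_targets_per_tree)
    return float(np.quantile(pooled, q))
```

Because QRF exposes the full conditional distribution, it can report prediction intervals (for example the 0.05 and 0.95 quantiles) rather than a single point estimate.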

Committee:

Vijay Devabhaktuni (Committee Chair); Ashok Kumar (Committee Member); Mansoor Alam (Committee Member)

Subjects:

Applied Mathematics; Electrical Engineering; Mathematics

Keywords:

Radon; Kriging; Local Polynomial Interpolation; Global Polynomial Interpolation; Radial Basis Function; Artificial Neural Networks; Random Forest Regression; Quantile Regression Forest; operational performance measures

Lin, Chii-Wann: Optical measurement of intracellular pH in brain tissue and the quantitative application of artificial neural networks to spectral analysis
Doctor of Philosophy, Case Western Reserve University, 1993, Biomedical Engineering
Compartmental distribution of protons and associated regulation mechanisms are important aspects of brain function. The dynamic regulation of proton concentration in brain tissue is essential for maintaining normal metabolic and electrophysiological activities. Two optical methods are used because of their superior spatial and temporal resolution and their potential capability for measurement of multiple ionic species. Neutral red (NR) and carboxy-seminaphthorhodafluor-1 (SNARF-1) are used to measure intracellular pH in hippocampal brain slices and in vivo brain. The evidence suggests that these two dyes locate in different compartments. NR may enter both neuronal and glial compartments, while SNARF-1 predominantly stains the neuronal compartment. The different baseline pHi readings observed using these two dyes also suggest that different pH regulation schemes operate in these two compartments. The effects of the Na+/H+ exchanger blockers, amiloride and its analogs, are tested on the recovery slope of the NH4Cl acid-loading technique. The different responses to amiloride suggest that different set points for the activation of the Na+/H+ exchanger in these two compartments may operate in the slice preparation. Quantitative application of an artificial neural network is demonstrated with spectral recognition for pH value output. A working network can be trained with a set of teaching spectra, starting from a small random connection weight matrix or from one with previous experience, by using the generalized delta rule and back-propagation for weight modification. The imprinting of principal components of the teaching patterns is distributively stored within the connection weight matrix of the input-to-hidden layers. A calibration curve needs to be constructed to translate the actual output values of the network to pH values after convergence with the training patterns.
The quantitative output during performing phase is the inner product of weight matrix and the input vectors (unknown patterns). This method can thus achieve the real-time quantitative application with learning from example spectra.
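The calibration step described above is simple curve fitting: known pH standards are run through the trained network, and a curve is fitted mapping raw network output to pH. A minimal sketch using a polynomial fit (the numeric values in the usage example are illustrative, not measurements from this work):

```python
import numpy as np

def fit_calibration(net_outputs, known_ph, degree=1):
    """Fit a calibration curve translating raw network outputs into pH
    values, as the abstract describes. Returns polynomial coefficients."""
    return np.polyfit(net_outputs, known_ph, degree)

def to_ph(curve, net_output):
    """Apply the fitted calibration curve to a new network output."""
    return float(np.polyval(curve, net_output))
```

With hypothetical standards, e.g. network outputs 0.1, 0.5, 0.9 measured at pH 6.8, 7.0, 7.2, the fitted line converts any subsequent output to a pH reading in real time.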

Committee:

Joseph LaManna (Advisor)

Keywords:

Optical measurement intracellular pH brain tissue quantitative application artificial neural networks spectral analysis

Hanittinan, Wichai: Resilient modulus prediction using neural network algorithm
Doctor of Philosophy, The Ohio State University, 2007, Civil Engineering
The resilient modulus (MR) of subgrade or unbound materials is a key parameter in current and proposed methods for predicting the structural response of pavements (the 2002 Mechanistic-Empirical Pavement Design Guide, M-E PDG). Backpropagation algorithms were adopted to construct artificial neural networks (ANNs), which were then used to predict the resilient modulus of three Ohio cohesive soil types: A-, A-6, and A-7-6. The key input parameters for the ANN analysis and simulations are the percent of soil particles passing a #200 sieve, plasticity index, liquid limit, unconfined compressive strength, percent of optimum moisture content, percent moisture content, degree of saturation, confining stress, deviator stress, and MR. Once developed, the ANNs were embedded in a soil utility model. This soil utility model has several features to help users prepare the required input data for MR prediction using the developed ANNs and analyze the outcome. These features include a discrepancy estimator, an optimum moisture content estimator using a one-point Proctor data inquiry for similarly matched soil data sets, a basic sensitivity analysis tool, histograms of each key required parameter, a summary report, unit conversions, the Ohio classification system, a California bearing ratio estimator, a soil unconfined compressive strength (qu) estimator, and MR estimators using the developed ANNs, the algorithms defined by the M-E PDG, or those of the Ohio Department of Transportation (ODOT). These ANN regression algorithms can be used as an advisory tool that predicts MR for the M-E PDG model. One advantage of the ANN models as a regression analysis tool is that no pre-determined relationship is required; the ANN algorithms can learn from the data to handle non-linear problems. A disadvantage is that they provide no explanation of their outcomes. In addition, the results can be overfitted if the ANNs are not trained properly.
For future studies, statistical techniques, information theory, fuzzy theory, and decision tree and matrix analysis can be incorporated into the ANN algorithms. Additional soil data are also needed so that all available Ohio soils under saturated and optimum moisture conditions are represented, and so that coverage extends to the possible minimum and maximum MR predictions.
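The "basic sensitivity analysis tool" mentioned in this abstract is typically a one-at-a-time perturbation: bump one input, hold the rest fixed, and see how the model output responds. A minimal sketch, where `model` stands in for any trained predictor such as the developed ANNs (the name and interface are hypothetical):

```python
def sensitivity(model, x, i, eps=1e-3):
    """One-at-a-time sensitivity of `model` to input i at point x:
    finite-difference estimate of the partial derivative."""
    base = model(x)
    bumped = list(x)
    bumped[i] += eps
    return (model(bumped) - base) / eps
```

For a linear toy model 2*x0 + 3*x1, the estimated sensitivity to x1 is 3, matching the coefficient, which is the sanity check such a tool relies on.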

Committee:

Fabian Tan (Advisor)

Keywords:

resilient modulus; Mechanistic-Empirical Pavement Design Guide; M-E PDG; Pavement design; Ohio soils; artificial neural networks

Wilson, Daniel B.: Combining genetic algorithms and artificial neural networks to select heterogeneous dispatching rules for a job shop system
Master of Science (MS), Ohio University, 1996, Industrial and Manufacturing Systems Engineering (Engineering)
Combining genetic algorithms and artificial neural networks to select heterogeneous dispatching rules for a job shop system

Committee:

Luis Rabelo (Advisor)

Subjects:

Engineering, Industrial

Keywords:

Artificial Neural Networks; Heterogeneous Dispatching Rules; Genetic Algorithms

Motamarri, Srinivas: Development of a Neural Based Biomarker Forecasting Tool to Classify Recreational Water Quality
MS, University of Cincinnati, 2010, Engineering and Applied Science: Environmental Engineering

The public may be exposed to elevated health risks when using recreational surface waters due to pathogen loadings that originate from overland runoff and combined sewer overflows, especially after a storm event. Hence, water quality is monitored regularly using indicator organisms such as E. coli and fecal coliforms. Since the analysis time required for microbial samples ranges from 24 to 48 hours, there is a need to develop models that can quickly predict and/or classify the water quality. Previous studies have developed regression and artificial neural network (ANN) models for predicting microbial concentrations that can then be classified based on the pertinent water quality standards, but these models tend to suffer from high false negative rates. The primary objective of this research is to develop a classification model using learning vector quantization (LVQ), which directly classifies the samples, thereby avoiding the prediction step, and to compare the LVQ performance with regression and ANN techniques. The second objective is to identify the more important explanatory variables that provide adequate performance of the algorithms.

The models were developed using data collected at the Larz Anderson Bridge site on the Charles River.

A preliminary analysis was performed to compare the classification efficiencies of the multivariate linear regression (MLR), ANN, and LVQ models using the same three explanatory variables suggested by Eleria and Vogel (2005), which include the lag-1 fecal coliform concentration. All three models performed well in predicting safe conditions (true negative rates > 90%) for both the primary and secondary recreation standards. While the MLR and ANN models had false negative rates between 35% and 55%, the LVQ model produced lower false negative rates between 12% and 18%. The primary drawback of the proposed models is the use of the lag-1 fecal coliform concentration as an explanatory variable, which requires too much time to be adequately included in a predictive model.

Additional research was performed to determine the appropriate explanatory variables to adequately classify the water quality without using the fecal coliform data. The input selection methodology included ranking the explanatory variables (using a different approach for each model) and individually removing the least important input variables. The results provided a tradeoff curve between the model performance and the number of variables included in the model. While all three models were capable of classifying the non-violated samples (> 90%), only the LVQ model had reasonably low false negative rates (< 20%); the MLR and ANN models had false negative rates ranging between 35% and 50%. In addition to evaluating the model performance, the input selection process provided insight into the explanatory variables, which indicated that discharge (from the current or previous day), rainfall during the last week, and the time since the last moderate rainfall were important parameters in predicting and/or classifying microbial water quality. Overall, the LVQ approach appeared to be a suitable solution for the development of a model to classify recreational water quality with a limited number of explanatory variables.
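The core of the LVQ family used here is the LVQ1 update rule: find the prototype nearest to a training sample, then move it toward the sample if their class labels match and away if they do not. A minimal sketch of one update (this is the generic rule, not the thesis's exact training setup; names are illustrative):

```python
import numpy as np

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update. prototypes: list of codebook vectors; labels:
    their classes; (x, y): a training sample. Returns the index of the
    winning prototype, which is moved toward x on a label match and
    away from x on a mismatch."""
    distances = [np.linalg.norm(p - x) for p in prototypes]
    i = int(np.argmin(distances))
    sign = 1.0 if labels[i] == y else -1.0
    prototypes[i] = prototypes[i] + sign * lr * (x - prototypes[i])
    return i
```

Classification of a new sample is then just the label of its nearest prototype, which is why LVQ classifies directly without an intermediate concentration-prediction step.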

Committee:

Dominic Boccelli, PhD (Committee Chair); James Uber, PhD (Committee Member); Dionysios Dionysiou, PhD (Committee Member)

Subjects:

Environmental Engineering

Keywords:

Learning vector quantization; Artificial neural networks; Input selection; fecal coliform; recreational water quality; Water quality modeling

Kadiyala, Akhil: Development and Evaluation of an Integrated Approach to Study In-Bus Exposure Using Data Mining and Artificial Intelligence Methods
Doctor of Philosophy in Engineering, University of Toledo, 2012, Civil Engineering

The objective of this research was to develop and evaluate an integrated approach to model the occupant exposure to in-bus contaminants using the advanced methods of data mining and artificial intelligence. The research objective was accomplished by executing the following steps. Firstly, an experimental field program was implemented to develop a comprehensive one-year database of the hourly averaged in-bus air contaminants (carbon dioxide (CO2), carbon monoxide (CO), nitric oxide (NO), nitrogen dioxide (NO2), sulfur dioxide (SO2), 0.3-0.4 micrometer (µm) sized particle numbers, 0.4-0.5 µm sized particle numbers, particulate matter (PM) concentrations less than 1.0 µm (PM1.0), PM concentrations less than 2.5 µm (PM2.5), and PM concentrations less than 10.0 µm (PM10.0)) and the independent variables (meteorological variables, time-related variables, indoor sources, on-road variables, ventilation settings, and ambient concentrations) that can affect indoor air quality (IAQ). Secondly, a novel approach to characterize in-bus air quality was developed with data mining techniques that incorporated the use of regression trees and the analysis of variance. Thirdly, a new approach to modeling in-bus air quality was established with the development of hybrid genetic algorithm based neural networks (or evolutionary neural networks) with input variables optimized from using the data mining techniques, referred to as the GART approach. Next, the prediction results from the GART approach were evaluated using a comprehensive set of newly developed IAQ operational performance measures. Finally, the occupant exposure to in-bus contaminants was determined by computing the time weighted average (TWA) and comparing them with the recommended IAQ guidelines.
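The time-weighted average in the final step is a standard computation: each concentration is weighted by the duration over which it was measured. A minimal sketch (the numbers in the usage example are illustrative, not from this study):

```python
def twa(concentrations, hours):
    """Time-weighted average exposure: sum of concentration x duration,
    divided by total duration, as used to compare in-bus levels against
    IAQ guidelines."""
    assert len(concentrations) == len(hours)
    total = sum(hours)
    return sum(c * h for c, h in zip(concentrations, hours)) / total
```

For an 8-hr shift split into two 4-hr periods at 10 and 20 ppm, the TWA is 15 ppm; exceedance is judged against the guideline's 8-hr TWA limit.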

In-bus PM concentrations and sub-micron particle numbers were predominantly influenced by the month/season of the year. In-bus SO2 concentrations were mainly affected by indoor relative humidity (RH) and the month of the year. NO concentrations inside the bus cabin were largely influenced by the indoor RH, while NO2 concentrations primarily varied with the month of the year. Passenger ridership and the month of the year mainly affected the in-bus CO2 concentrations, while the month and sky conditions had a significant impact on CO concentrations within the bus compartment.

The hybrid GART models captured the majority of the variance in in-bus contaminant concentrations and performed much better than the traditional artificial neural network methods of back propagation and radial basis function networks.

Exposure results indicated the average 8-hr exposure of biodiesel bus occupants to CO2, CO, NO, SO2, and PM2.5 to be 559.67 ppm (± 45.01), 18.33 ppm (± 9.23), 5.23 ppm (± 4.49), 0.13 ppm (± 0.01), and 13.75 µg/m3 (± 4.24), respectively. The statistical significance of the differences in exposure levels to in-bus contaminants was compared across morning, afternoon, and evening/night time periods. There was a statistically significant difference only between the morning (driver 1) and the evening/night (driver 3) exposure levels for CO2 and PM2.5. CO levels exceeded the TWA in some months.

Committee:

Dr. Ashok Kumar, PhD (Committee Chair); Dr. Devinder Kaur, PhD (Committee Member); Dr. Cyndee Gruden, PhD (Committee Member); Dr. Defne Apul, PhD (Committee Member); Dr. Farhang Akbar, PhD (Committee Member)

Subjects:

Civil Engineering; Environmental Engineering; Environmental Health

Keywords:

Indoor Air Quality; Public Transportation Buses; Biodiesel; Data Mining; Sensitivity of the Regression Trees; Artificial Neural Networks; Genetic Algorithm Neural Networks; Evolutionary Neural Networks; In-Bus Exposure; Air Quality Model Validation

ARELLANO-NERI, OLIMPIA: AN IMPROVED METHODOLOGY FOR LAND-COVER CLASSIFICATION USING ARTIFICIAL NEURAL NETWORKS AND A DECISION TREE CLASSIFIER
PhD, University of Cincinnati, 2004, Arts and Sciences : Geography
Mapping is essential for the analysis of land and land-cover dynamics, which influence many environmental processes and properties. When creating land-cover maps, it is important to minimize error, since error will propagate into later analyses based upon these land-cover maps. The reliability of land-cover maps derived from remotely sensed data depends upon an accurate classification. For decades, traditional statistical methods have been applied in land-cover classification with varying degrees of accuracy. One of the most significant developments in the field of land-cover classification using remotely sensed data has been the introduction of Artificial Neural Network (ANN) procedures. In this research, Artificial Neural Networks were applied to remotely sensed data of the southwestern Ohio region for land-cover classification. Three variants on traditional ANN-based classifiers were explored here: 1) the use of a customized architecture of the neural network in terms of the input layer for each land-cover class, 2) the use of texture analysis to combine spectral information and spatial information, which is essential for urban classes, and 3) the use of decision tree (DT) classification to refine the ANN classification and ultimately to achieve a more reliable land-cover thematic map. The objective of this research was to prove that a classification based on Artificial Neural Networks (ANN) and a decision tree (DT) would outperform the National Land Cover Data (NLCD) by far. The NLCD is a land-cover classification produced by a cooperative effort between the United States Geological Survey (USGS) and the United States Environmental Protection Agency (USEPA). In order to achieve this objective, an accuracy assessment was conducted for both the NLCD classification and the ANN/DT classification. Error matrices resulting from the accuracy assessments provided overall accuracy, accuracy of each class, omission errors, and commission errors for each classification.
The overall accuracy for the ANN/DT classification was 85.13%. This accuracy fulfills the United States Geological Survey standards for Anderson classification (Anderson et al. 1976). The overall accuracy for the NLCD was 67.97%.
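The accuracy-assessment quantities named above all fall out of the error (confusion) matrix directly. A minimal sketch, assuming the common convention of rows as reference data and columns as mapped classes (the example matrix is illustrative, not from this study):

```python
import numpy as np

def accuracy_report(matrix):
    """Overall accuracy plus per-class omission and commission errors
    from an error matrix (rows = reference, columns = mapped)."""
    m = np.asarray(matrix, dtype=float)
    overall = np.trace(m) / m.sum()
    omission = 1 - np.diag(m) / m.sum(axis=1)    # reference pixels missed
    commission = 1 - np.diag(m) / m.sum(axis=0)  # mapped pixels wrong
    return overall, omission, commission
```

For a two-class matrix [[40, 10], [5, 45]], overall accuracy is 85%, with a 20% omission error and roughly 11% commission error for the first class.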

Committee:

Dr. Robert Frohn (Advisor)

Keywords:

Land-Cover Classification; Artificial Neural Networks; Decision Tree Classifier; Southwestern Ohio Land-Cover
