Search Results (1 - 10 of 10 Results)

Howard, Shaun Michael. Deep Learning for Sensor Fusion
Master of Sciences (Engineering), Case Western Reserve University, 2017, EECS - Computer and Information Sciences
The use of multiple sensors in modern-day vehicular applications is necessary to provide a complete picture of the surroundings for advanced driver assistance systems (ADAS) and automated driving. The fusion of these sensors provides increased certainty in the recognition, localization, and prediction of surroundings. A deep learning-based sensor fusion system is proposed to fuse two independent, multi-modal sensor sources. This system is shown to successfully learn the complex capabilities of an existing state-of-the-art sensor fusion system and to generalize well to new sensor fusion datasets. It achieves high precision and recall with minimal confusion after training on several million examples of labeled multi-modal sensor data. It is robust, has a sustainable training time, and delivers real-time responses on a deep learning PC with a single NVIDIA GeForce GTX 980Ti graphics processing unit (GPU).
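The abstract does not spell out the network architecture. As a rough illustration of the multi-stream feedforward idea named in the keywords (not the thesis' implementation), the sketch below fuses two hypothetical sensor streams; all feature counts, layer sizes, and weights are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def stream(x, w, b):
    # Per-sensor feature extractor: one hidden layer.
    return relu(x @ w + b)

# Hypothetical dimensions: 8 radar features, 16 camera features, 3 classes.
w_radar = rng.normal(size=(8, 32));  b_radar = np.zeros(32)
w_cam   = rng.normal(size=(16, 32)); b_cam   = np.zeros(32)
w_fuse  = rng.normal(size=(64, 3));  b_fuse  = np.zeros(3)

def fuse(radar, camera):
    # Concatenate the two streams' hidden features, then classify.
    h = np.concatenate([stream(radar, w_radar, b_radar),
                        stream(camera, w_cam, b_cam)], axis=-1)
    logits = h @ w_fuse + b_fuse
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax class probabilities

probs = fuse(rng.normal(size=(4, 8)), rng.normal(size=(4, 16)))
```

In a trained system the weights would of course come from gradient descent on the labeled multi-modal data; only the forward fusion path is sketched here.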

Committee:

Wyatt Newman, Dr (Committee Chair); M. Cenk Cavusoglu, Dr (Committee Member); Michael Lewicki, Dr (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

deep learning; sensor fusion; deep neural networks; advanced driver assistance systems; automated driving; multi-stream neural networks; feedforward; multilayer perceptron; recurrent; gated recurrent unit; long-short term memory; camera; radar;

Ozbilgin, Guchan. Relationship of Simulator and Emulator and Real Experiments on Intelligent Transportation Systems
Master of Science, The Ohio State University, 2016, Electrical and Computer Engineering
This thesis focuses on the importance of early and continued testing for Intelligent Transportation System (ITS) applications, utilizing simulation environments and scaled-down testbeds. By introducing complete end-to-end testing procedures and illustrating them on state-of-the-art ITS algorithms, the relationship between different test platforms and scales is described in detail. Results from different scales, and the corresponding quality metrics, are presented and compared. A low-cost and flexible supplement to full-scale ITS testing is presented through the use of small-scale testbeds, which reduces the time and effort spent on the testing stage of ITS system design and development. This allows researchers to implement, compare, and assess different architectures for intelligent transportation by deploying hardware-in-the-loop (HIL) simulations and tests, and it gives strong indications of the performance and high-level behavior of such systems at full scale. A range of concepts was demonstrated at The Ohio State University Control and Intelligent Transportation Research Laboratories. Detailed implementations of applications based on an autonomous parking system, a stop sign precedence system, a green light speed advisory system, and a collaborative vehicle tracking system are provided. Finally, the design, development, and implementation details of a novel testing and evaluation methodology for Lane Departure Warning and Prevention systems are discussed. Development and testing steps from computer simulations to full-scale vehicle experiments are presented.

Committee:

Umit Ozguner (Advisor); Keith Redmill (Committee Member)

Subjects:

Computer Science; Electrical Engineering

Keywords:

Automated Vehicles; Intelligent Transportation Systems; Advanced Driver Assistance Systems; Robotics; Sensor Fusion

Warren, Emily Amanda. Machine Learning for Road Following by Autonomous Mobile Robots
Master of Sciences (Engineering), Case Western Reserve University, 2008, EECS - Computer Engineering
This thesis explores the use of machine learning in the context of autonomous mobile robots driving on roads, with a focus on improving the robot's internal map. Early chapters cover the mapping efforts of DEXTER, Team Case's entry in the 2007 DARPA Urban Challenge. Competent driving may include the use of a priori information, such as road maps, and online sensory information, including vehicle position and orientation estimates in absolute coordinates as well as error coordinates relative to a sensed road. An algorithm may select the best of these typically flawed sources, or, more robustly, use all of the flawed sources to improve an uncertain world map, both globally in terms of registration corrections and locally in terms of improving knowledge of obscured roads. It is shown how unsupervised learning can be used to learn the credibility of each sensor in a manner applicable to optimal data fusion.
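The thesis' exact fusion algorithm is not given in the abstract. A standard formulation of optimal fusion of several flawed estimates weights each source by its credibility, i.e., the inverse of its error variance. A minimal sketch, with entirely hypothetical numbers:

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance weighted fusion of independent scalar estimates.

    A source judged less credible (larger variance) contributes less, and
    the fused variance is never worse than that of the best single source.
    """
    z = np.asarray(estimates, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                       # credibility weights
    fused = np.sum(w * z) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Hypothetical lateral-offset estimates (m) from a map, GPS, and vision.
est, var = fuse_estimates([0.40, 0.10, 0.25], [0.50, 0.10, 0.20])
```

The learning problem the abstract describes would then amount to estimating those per-sensor variances from unlabeled data rather than assuming them.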

Committee:

Wyatt Newman, PhD (Advisor); M. Cenk Cavusoglu, PhD (Committee Member); Francis Merat, PhD (Committee Member)

Subjects:

Computer Science; Engineering; Robots

Keywords:

machine learning; autonomous robot; driving; Urban Challenge; sensor fusion; unsupervised learning; global map; road following

Gale, Nicholas C. FUSION OF VIDEO AND MULTI-WAVEFORM FMCW RADAR FOR TRAFFIC SURVEILLANCE
Doctor of Philosophy (PhD), Wright State University, 2011, Engineering PhD

Modern frequency modulated continuous wave (FMCW) radar technology allows the system transmission frequency to be modified as a function of time, which in turn allows multiple output waveforms to be generated from a single radar unit. Current low-power multi-waveform FMCW radar techniques lack the ability to reliably associate measurements from the various waveform sections in the presence of multiple targets and multiple false detections within the field-of-view. Two approaches are developed here to address this problem.

The first approach takes advantage of the relationships between the waveform segments to generate a weighting function for candidate combinations of measurements from the waveform sections. This weighting function is then used to choose the best candidate combinations to form polar-coordinate measurements. Simulations show that this approach provides a ten to twenty percent increase in the probability of correct association over the current approach while reducing the number of false alarms generated in the process, but it still fails to form a measurement if a detection from a waveform section is missing.

The second approach models the multi-waveform FMCW radar as a set of independent sensors and uses distributed data fusion to fuse estimates from those individual sensors within a tracking structure. Tracking in this approach is performed directly with the raw frequency and angle measurements from the waveform segments. This removes the need for data association between the measurements from the individual waveform segments.

The distributed data fusion model is then used again to extend the radar tracking system with a video sensor, which provides additional angular and identification information. The end result, combining the radar and vision sensors, is an enhanced roadside tracking system.

Committee:

Lang Hong, PhD (Advisor); Michael Temple, PhD (Committee Member); Kefu Xue, PhD (Committee Member); Zhiqiang (John) Wu, PhD (Committee Member); Arthur Goshtasby, PhD (Committee Member)

Subjects:

Electrical Engineering; Engineering

Keywords:

multi-waveform fmcw radar; multi-sensor fusion; target tracking

Gallagher, Jonathan G. Likelihood as a Method of Multi Sensor Data Fusion for Target Tracking
Master of Science, The Ohio State University, 2009, Electrical and Computer Engineering
This thesis addresses the problem of detecting and tracking objects in a scene, using a distributed set of sensing devices in different locations and, in general, a mix of different sensing modalities. The goal is to combine data in an efficient but statistically principled way to realize optimal or near-optimal detection and tracking performance. Using the Bayesian framework of measurement likelihood, sensor data can be combined in a rigorous manner to produce a concise summary of knowledge of a target's location in the state-space. This framework allows sensor data to be fused across time, space, and sensor modality. When target motion and sensor measurements are modeled correctly, these "likelihood maps" are optimal combinations of sensor data. By combining all data without thresholding for detections, targets with low signal-to-noise ratio (SNR) can be detected where standard detection algorithms may fail. For estimating the location of multiple targets, the likelihood ratio is used to provide a sub-optimal but useful representation of knowledge of the state space. As the cost of computing likelihood or likelihood-ratio maps over the entire state space is prohibitively high for most practical applications, an approximation computed in a distributed fashion is proposed and analyzed. This distributed method is tested in simulation for multiple sensor modalities, displaying cases where it is and is not a good approximation of central calculation. Detection and tracking examples using measured data from multi-modal sensors (radar, EO, seismic) are also presented.
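As a rough illustration of the likelihood-map idea described above (not the thesis' implementation): for independent sensors the likelihoods multiply, so their log-likelihood maps simply add over a discretized state space. The measurement values and noise levels below are hypothetical:

```python
import numpy as np

# Discretized 1-D state space (target position along a road, in meters).
grid = np.linspace(0.0, 100.0, 501)

def gaussian_loglik(measured, sigma):
    # Log-likelihood of each grid cell given one position-like measurement.
    return -0.5 * ((grid - measured) / sigma) ** 2

# Hypothetical measurements: a precise radar and two noisier seismic sensors.
loglik_maps = [
    gaussian_loglik(42.0, sigma=2.0),   # radar
    gaussian_loglik(45.0, sigma=8.0),   # seismic A
    gaussian_loglik(40.0, sigma=8.0),   # seismic B
]

# Independent sensors: likelihoods multiply, so log-likelihoods add.
fused = np.sum(loglik_maps, axis=0)
estimate = grid[np.argmax(fused)]
```

Note that no sensor thresholds its own detections; each contributes its full map, which is exactly why weak (low-SNR) targets can still accumulate evidence in the fused map.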

Committee:

Randolph Moses (Advisor); Emre Ertin (Advisor); Lee Potter (Committee Member)

Subjects:

Electrical Engineering

Keywords:

target tracking; state estimation; distributed calculation; likelihood maps; likelihood ratio; data fusion; sensor fusion; sensor networks

Ozyalcin, Anil E. SOLITONS: A COMPACT, LOW-COST, AND WIRELESS BODY MOTION CAPTURE SYSTEM
Master of Science, The Ohio State University, 2015, Electrical and Computer Engineering
This thesis introduces the Solitons: low-cost, compact, wireless human body motion capture sensors. They utilize microelectromechanical systems (MEMS) inertial measurement units (IMUs) to measure the orientation of human body parts. Each Soliton is only slightly larger than a US quarter (29 mm diameter) and can run for up to 6 hours on one battery charge. The sensors are worn by the human subject and are placed on certain links between the joints of the subject's body. Each Soliton sends its limb's orientation data wirelessly over the ANT protocol to a Soliton Station, a gateway for all data transmitted between the Solitons and the PC client. The Soliton Station connects to a PC over USB and relays the wirelessly received data to the PC client over a serial connection. The PC can then use a forward kinematics model to translate limb orientation into limb position. The sensors were specifically designed to be compact and lightweight enough to be worn by infants with muscular disorders, so they can be used to capture motion on infants as well as adults. Soliton Reach is also introduced: a regular Soliton equipped with both Bluetooth Low Energy and ANT modules, designed to interface with mobile devices and provide human motion data to mobile applications.
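The forward kinematics step above can be sketched as follows: since each IMU reports its link's orientation directly in the world frame, limb positions follow by summing rotated link vectors, with no joint-angle chaining needed. The two-link arm and its dimensions are hypothetical, not taken from the thesis:

```python
import numpy as np

def rot_z(theta):
    # Rotation about the world z-axis (a planar stand-in for IMU output).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def forward_kinematics(orientations, link_lengths, base=(0.0, 0.0, 0.0)):
    """Joint positions from per-link world-frame orientations.

    Each link is modeled as a vector of length L along its local x-axis,
    rotated into the world frame and added to the previous joint position.
    """
    positions = [np.asarray(base, dtype=float)]
    for R, L in zip(orientations, link_lengths):
        positions.append(positions[-1] + R @ np.array([L, 0.0, 0.0]))
    return positions

# Hypothetical 2-link arm: upper arm horizontal, forearm bent 90 degrees.
shoulder, elbow, wrist = forward_kinematics(
    [rot_z(0.0), rot_z(np.pi / 2)], [0.30, 0.25])
```

A real pipeline would replace `rot_z` with the full 3-D rotation (e.g., a quaternion) streamed from each Soliton, but the position bookkeeping is the same.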

Committee:

Robert Lee (Advisor); Joel Johnson (Committee Member)

Subjects:

Electrical Engineering

Keywords:

Human Body Motion Capture; Sensor Fusion; Inertial Measurement Unit

Vincent, David E. PORTABLE INDOOR MULTI-USER POSITION TRACKING SYSTEM FOR IMMERSIVE VIRTUAL ENVIRONMENTS USING SENSOR FUSION WITH MULTIDIMENSIONAL SCALING
Master of Science, Miami University, 2012, Computer Science & Software Engineering
Immersive virtual systems require a tracking technology capable of maintaining an accurate user position. Current virtual reality systems commonly depend on infrastructure-based technologies that require line-of-sight communication between a head-mounted device and several other devices positioned in known locations around the tracking area. These systems often require precise calibration and are difficult to move, thus limiting tracking to a designated facility. The development of a portable tracking system opens new possibilities for the application of immersive virtual reality by allowing an immersive virtual environment to be set up quickly in any large real-world space. The custom hardware and novel localization algorithm presented in this thesis provide a portable tracking system that supports multiple users in immersive virtual environment applications.
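The thesis' localization algorithm is not detailed in the abstract. Classical multidimensional scaling, one standard MDS formulation, recovers relative node positions from pairwise ranges by double-centering the squared distance matrix; a minimal sketch with hypothetical node positions:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover relative coordinates from a pairwise distance matrix.

    Double-center the squared distances into a Gram matrix, then take the
    top eigenvectors. The result is unique only up to rotation, reflection,
    and translation, which is all a relative tracking system needs.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # Gram matrix of centered points
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]        # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Hypothetical ranges between four nodes at the corners of a unit square.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D)
```

With noisy real ranges (e.g., from acoustic TDOA), the eigen-truncation acts as a least-squares fit rather than an exact reconstruction, and inertial data would be fused in to smooth the result.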

Committee:

Eric Bachmann, Dr. (Advisor); Scott Campbell, Dr. (Committee Member); David Waller, Dr. (Committee Member)

Subjects:

Computer Science

Keywords:

virtual reality; tracking system; immersive; multidimensional scaling; sensor fusion; scipt; inertial; tdoa

McMichael, Scott Thomas. Lane Detection for DEXTER, an Autonomous Robot, in the Urban Challenge
Master of Sciences, Case Western Reserve University, 2008, Computer Engineering
This thesis describes the lane detection system developed for the autonomous robot DEXTER in the 2007 DARPA Urban Challenge. Though DEXTER was capable of navigating purely from GPS signals, it often needed to drive in areas where GPS navigation could not be trusted completely. In these areas it was necessary to automatically detect the lane of travel so that DEXTER could drive properly within it. The developed system functions by merging the outputs of a number of independent road detection modules, fed by several sensors, into a single drivable output path. This sensor-derived path is compared with the map-derived path in order to produce an optimal output based on the relative confidences of the two information sources. The full lane detection system is able to adaptively drive according to the best information source and performs well in a variety of diverse driving environments.

Committee:

Wyatt Newman (Advisor)

Keywords:

autonomous robot; DARPA Urban Challenge; lane detection; road detection; computer vision; sensor fusion

Bebek, Ozkan. ROBOTIC-ASSISTED BEATING HEART SURGERY
Doctor of Philosophy, Case Western Reserve University, 2008, Systems and Control Engineering

Coronary heart disease is a leading cause of death in the USA. A promising treatment option for this disease is off-pump coronary artery bypass graft (CABG) surgery, in which the artery grafting is done without stopping the heart. In the robotic-assisted surgery concept, the surgeon views the surgical scene on a video display and operates on the heart as if it were stationary, while the robotic system actively compensates for the motion of the heart. With the proposed system concept, CABG surgery will be possible without passive stabilizers, and the hospitalization time and cost of the operation will be decreased.

In this dissertation, intelligent robotic tools for assisting off-pump (beating heart) CABG surgery are presented. The most important aspects of such a robotic system are accurately measuring and predicting the heart's motion, as these are instrumental in canceling the relative motion between the heart surface and the surgical tools attached to the robotic manipulators. The proposed control algorithm contributes to the field by using biological signals to estimate the heart's future motion for active relative motion canceling. A novel contact position sensor is also developed to measure the position of the beating heart, and a preliminary noise characterization for the future sensor system implementation is presented.

Committee:

Murat Cavusoglu (Advisor)

Keywords:

bypass surgery; flexible structures; medical robotics; motion canceling; motion sensing; real time tracking; sensor fusion; whisker-like

Karvir, Hrishikesh. Design and Validation of a Sensor Integration and Feature Fusion Test-Bed for Image-Based Pattern Recognition Applications
Doctor of Philosophy (PhD), Wright State University, 2010, Engineering PhD

We integrated a sensor hardware test-bed using scientific-grade, commercial off-the-shelf (COTS) technology and developed supporting software to enable rapid prototyping. The validity of this test-bed and associated software was demonstrated through the delivery of a ground-based multispectral visual surveillance prototype for improvised explosive device (IED) detection using electro-optical (EO) and short-wave infrared (SWIR) cameras. Software developed to support the test-bed included modules for image acquisition, preconditioning, segmentation, feature extraction, data regularization, and pattern recognition. To provide spatially co-aligned data, we optimized a mutual information-based image registration algorithm to improve its convergence rate and benchmarked it against the established simplex method. For four different multimodal test image sets, our algorithm's convergence success improved by 15 to 40% compared to the downhill simplex method, albeit at an approximately four times higher computational cost. Additional strategies, such as bit-depth reduction, image down-sampling, and gradient-based region of interest (ROI) selection, were systematically evaluated and led to the registration of high-resolution images nearly 60 times faster than the standard approach.
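A minimal sketch of the mutual information score that drives such a registration algorithm follows; the histogram-based estimator below is a generic textbook formulation, not the thesis' optimized implementation, and the test images are synthetic:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized images.

    Computed from the joint intensity histogram. An MI-based registration
    loop would transform image b (shift, rotate, scale) and keep the
    transform that maximizes this score against image a.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Sanity check: an image shares more information with itself than with
# unrelated noise, which is what makes MI usable as an alignment score.
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
noise = rng.normal(size=(64, 64))
```

MI is popular for multimodal (e.g., EO vs. SWIR) registration precisely because it rewards any consistent intensity relationship, not just identical intensities.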

To automatically identify IEDs in the acquired multi-spectral imagery, four different pattern classifiers were tested: Bayes, k-nearest neighbor (k-NN), support vector machines (SVM), and our novel piecewise linear convex-hull classifier. Initial tests with the convex-hull classifier using simulated data indicated a significant reduction in error rates of up to 89% (p=3e-6) when compared to the Bayes classifier. Subsequently, each of the four classifiers was tested using the IED data set, which consisted of 154 different intensity-based and content-based features extracted from the EO and SWIR imagery. Salient features were selected using receiver operating characteristic (ROC) curve analysis and a wrapper-based process that used minimum error rate as the criterion function. A stratified 10-fold cross-validation analysis was used to compare classifier performances using t-tests with a 95% confidence interval. Under different operating conditions (OC), the generalized classification error rate ranged from 13.0% (±1.2) to 29.6% (±2.7). For six of the eight acquired data sets, the convex-hull classifier provided a generalized error rate equivalent to or lower than that of at least one other classifier, used fewer features (at most 3), reduced training time (compared to the k-NN and SVM classifiers), and relied on lower function complexity. Concurrently, the class-label assignment time using the convex-hull approach was a factor of 10 lower than the Bayes classifier and a factor of 100 lower than the k-NN and SVM classifiers. Considering the attributes of a good classifier, the convex-hull approach provided the overall best balance between the different measures of performance.

Committee:

Julie A. Skipper, PhD (Advisor); Thomas N. Hangartner, PhD (Committee Member); Lang Hong, PhD (Committee Member); S. Narayanan, PhD (Committee Member); Mark E. Oxley, PhD (Committee Member)

Subjects:

Engineering; Remote Sensing; Scientific Imaging; Systems Design

Keywords:

Machine learning; Pattern classification; Target detection; Convex hull; Piecewise linear classifier; Image registration; Systems integration; Multi-sensor fusion; Multi-spectral imaging