Search Results

(Total results 3)

  • 1. Altun, Melih. Road Scene Content Analysis for Driver Assistance and Autonomous Driving

    Doctor of Philosophy (PhD), Ohio University, 2015, Electrical Engineering & Computer Science (Engineering and Technology)

    This research aims to develop a vision-based driver assistance system that achieves scene awareness using video frames obtained from a dashboard camera. A saliency image map is formed from features pertinent to the driving scene. This saliency map, based on contour- and motion-sensitive human visual perception, is devised by extracting spatial, spectral and temporal information from the input frames and applying data fusion. The fusion output contains high-level descriptors for segment boundaries and non-stationary objects. Following the segmentation and foreground object detection stage, an adaptive Bayesian learning framework classifies road surface regions, and the detected foreground objects are tracked via Kalman filtering; the tracks in turn enable monitoring of potential collisions with the tracked objects. Furthermore, the vehicle path is used in conjunction with the extracted road information to detect deviations from the road surface. The system forms an augmented-reality output in which video frames are context-enhanced with the object tracking and road surface information. The proposed scene-driven vision system improves the driver's situational awareness by enabling adaptive road surface classification, object tracking and collision estimation. As the experimental results demonstrate, context-aware low-level-to-high-level information fusion based on a human vision model produces superior segmentation, tracking and classification results that lead to a high-level abstraction of the driving scene. (A minimal tracking sketch follows this result list.)

    Committee: Mehmet Celenk PhD (Advisor). Subjects: Computer Science; Electrical Engineering
  • 2. Beggrow, Elizabeth. The effect of context on student understanding of evolution: An exploration of physical anthropology students' reasoning about evolutionary change

    Doctor of Philosophy, The Ohio State University, 2014, EDU Teaching and Learning

    When it comes to the teaching, learning and assessment of science, research across a variety of domains has shown that context makes a difference. More specifically, regarding evolutionary biology, prior research has demonstrated that students' knowledge and naive ideas about evolution vary depending on the specific contextual features of assessment items (Nehm and Ha 2011). While such studies shed light on the issue of contextuality in evolution education, they have only investigated those issues within populations of biology students, teachers and experts. Yet this narrow scope ignores alternative disciplines, like physical anthropology, that also use evolutionary theory as their framework. Physical anthropology provides students with a case-study exploration of evolution situated within the context of humans, which could offer various cognitive advantages for reasoning about evolution. Furthermore, if students' knowledge representations are situated within the context of their learning (e.g., Anderson, Reder, and Simon 1996; Barsalou 2009; Greeno 1997; Kirsh 2009), then an alternative approach to learning evolutionary theory could result in different reasoning patterns about evolutionary change. Despite these potential advantages for the learning and teaching of human evolution, and the insight that investigations into such learning and teaching could provide, the effect of human context on student understanding has not been studied empirically. To address this gap in the literature on evolution education, this dissertation aimed to explore physical anthropology students' understanding of evolution and to compare it with the understanding of commonly studied biology student populations. In doing so, it incorporates a new population into the literature and provides insight into the effect of context on student reasoning patterns. Three studies examining student explanations of evolutionary change, performance on a multiple-choice test of natural selection knowledge and acceptance (open full item for complete abstract)

    Committee: David Haury Ph.D. (Advisor); Ross Nehm Ph.D. (Committee Member); Dawn Kitchen Ph.D. (Committee Member). Subjects: Education
  • 3. Ding, Liya. Modelling and Recognition of Manuals and Non-manuals in American Sign Language

    Doctor of Philosophy, The Ohio State University, 2009, Electrical and Computer Engineering

    In American Sign Language (ASL), the manual and the non-manual components play crucial semantic and grammatical roles. The design of systems that can analyze and recognize ASL sentences requires the recovery of both the manual and non-manual components. Manual signs in ASL are constructed from three building blocks: handshape, motion, and place of articulation. Only when all three are successfully estimated can a sign be uniquely identified. The first part of my research defines algorithms to recognize manual signs based on the recovery of these three components from a single video sequence of two-dimensional images of a sign. The 3D handshape is obtained with a structure-from-motion algorithm based on the linear fitting of matrices with missing data. To recover the 3D motion of the hand, a robust algorithm is defined that selects the most stable solution from the pool of all solutions given by the three-point resection problem. The signer's face is detected in the video sequence, from which a coordinate system with respect to the signer is defined and the place of articulation of the sign is recovered. Based on the recognition results for the three recovered components, the manual signs are recognized using a tree-like structure. For the non-manual component of ASL, an accurate and detailed description of external and internal facial features is needed. The second part of this research therefore focuses on the precise, detailed detection of faces and facial features. Learning to discriminate the features from their context permits precise detection of facial components, which is the key point of the feature detection algorithm. Because the shape and texture of facial features vary widely under changing expression, pose and illumination, detecting a feature versus its context is challenging. This problem is addressed with the use of subclass division, which is employed to automatically divide the training samples of each facial (open full item for complete abstract) (A minimal factorization sketch follows this result list.)

    Committee: Aleix Martinez PhD (Advisor); Yuan F. Zheng PhD (Committee Member); Mikhail Belkin PhD (Committee Member). Subjects: Electrical Engineering
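
The first result above (Altun) describes tracking detected foreground objects with Kalman filtering and using the tracks for collision estimation. The abstract does not give the filter design, so the following is only a minimal sketch of a constant-velocity Kalman filter for one object's image-plane position; the `KalmanTracker` name, the state layout, and all noise values are illustrative assumptions, not the dissertation's.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one tracked object's
# image-plane position. State vector: [x, y, vx, vy].
# All matrix values below are illustrative defaults, not the dissertation's.
class KalmanTracker:
    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)  # state estimate
        self.P = np.eye(4) * 10.0                            # state covariance
        self.F = np.array([[1, 0, dt, 0],                    # constant-velocity motion model
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                     # only position is observed
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                            # process noise
        self.R = np.eye(2) * 1.0                             # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                    # predicted position

    def update(self, zx, zy):
        z = np.array([zx, zy], dtype=float)
        y = z - self.H @ self.x                              # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)             # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Example: feed per-frame detections (e.g., foreground object centroids).
tracker = KalmanTracker(120.0, 80.0)
for detection in [(122.0, 81.5), (124.5, 83.0), (127.0, 84.2)]:
    predicted = tracker.predict()
    tracker.update(*detection)
```

A full pipeline along the lines of the abstract would run one such filter per detected object and combine the predicted positions with the recovered road surface to flag likely collisions.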
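
The third result above (Ding) recovers 3D handshape with a structure-from-motion method based on the linear fitting of matrices with missing data (point tracks drop out when occluded). The abstract does not specify the fitting algorithm, so below is a generic sketch of rank-constrained alternating least squares on a measurement matrix with missing entries; the function name `low_rank_fit`, the rank, and the iteration count are assumptions chosen only for illustration.

```python
import numpy as np

def low_rank_fit(W, mask, rank=4, iters=100, seed=0):
    """Fit a rank-constrained factorization W ~ A @ B to the observed
    entries of W (where mask is True) by alternating least squares.
    In a factorization-style SfM setting, W would stack 2D point tracks
    across frames; the settings here are illustrative only."""
    rng = np.random.default_rng(seed)
    m, n = W.shape
    A = rng.standard_normal((m, rank))
    B = rng.standard_normal((rank, n))
    for _ in range(iters):
        # Solve for each column of B using only the rows observed in that column.
        for j in range(n):
            rows = mask[:, j]
            if rows.any():
                B[:, j], *_ = np.linalg.lstsq(A[rows], W[rows, j], rcond=None)
        # Solve for each row of A using only the columns observed in that row.
        for i in range(m):
            cols = mask[i, :]
            if cols.any():
                A[i, :], *_ = np.linalg.lstsq(B[:, cols].T, W[i, cols], rcond=None)
    return A, B

# Example: recover a small synthetic rank-2 matrix with ~30% missing entries.
rng = np.random.default_rng(1)
truth = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 10))
mask = rng.random(truth.shape) > 0.3
W = np.where(mask, truth, np.nan)          # unobserved entries are never read
A, B = low_rank_fit(W, mask, rank=2)
print(np.abs(A @ B - truth).max())         # small residual if enough entries observed
```

In an SfM use of such a fit, the recovered factors would correspond to camera and shape matrices only up to an ambiguity that a separate upgrade step resolves; that step is outside the scope of this sketch.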