Files
Diskin_Thesis format approved LW 4-19-13.pdf (5.8 MB)
Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision
Author Info
Diskin, Yakov
ORCID® Identifier
http://orcid.org/0000-0003-3733-5958
Permalink:
http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933
Abstract Details
Year and Degree
2013, Master of Science (M.S.), University of Dayton, Electrical Engineering.
Abstract
We present a 3D reconstruction algorithm designed to support various automation and navigation applications. The algorithm reconstructs a 3D scene using only a single moving camera: video frames captured at different points in time allow us to determine the depths of the scene, so the system can construct a point cloud model of its unknown surroundings. In this thesis, we present the step-by-step development of the reconstruction technique. The original reconstruction process, which produces a point cloud, is based on feature matching and depth triangulation. In an improved version of the algorithm, we use optical flow features to create an extremely dense representation model. Although dense, this model is hindered by its low disparity resolution: as feature points are matched from frame to frame, the resolution of the input images and the discrete nature of the disparities limit the depth computations within a scene. In the third algorithmic modification, we introduce a nonlinear super resolution preprocessing step, which significantly increases the accuracy of the point cloud, since the point cloud relies on precise disparity measurements. Working pixel by pixel, the super resolution technique computes the phase congruency of each pixel's neighborhood and produces nonlinearly interpolated high-resolution input frames; a feature point therefore travels a more precisely resolved discrete disparity. The number of points in the 3D point cloud model also increases significantly, since the number of features is directly proportional to the resolution and high-frequency content of the input images. Our final contribution is a set of additional preprocessing steps designed to filter out noise points and mismatched features, completing the Dense Point-cloud Representation (DPR) technique. We measure the success of DPR by evaluating the visual appeal, density, accuracy, and computational expense of the reconstruction, and we compare it with two state-of-the-art techniques. After presenting this analysis and comparison, we conclude with future directions for development and plans for deployment in real-world applications.
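The two-view core of such a pipeline, matching features between video frames and triangulating their depths into a sparse point cloud, can be sketched with standard tools. The snippet below is an illustrative Python/OpenCV sketch only, not the DPR implementation described in the thesis: it assumes nominal camera intrinsics K are available (the thesis works with an uncalibrated camera), uses ORB features rather than the thesis's feature pipeline, and omits the optical flow densification, super resolution, and filtering stages. The function name sparse_point_cloud is hypothetical.

```python
# Illustrative two-view feature matching and triangulation with OpenCV.
# Not the DPR implementation from the thesis; assumes nominal intrinsics K.
import cv2
import numpy as np

def sparse_point_cloud(frame1, frame2, K):
    """Return an Nx3 array of triangulated points from two video frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)

    # Match features between the two frames (brute-force Hamming for ORB).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the relative camera motion and keep inlier matches.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, pose_inliers = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    good = pose_inliers.ravel() > 0
    pts1, pts2 = pts1[good], pts2[good]

    # Triangulate matched features into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T
```

As with any monocular reconstruction, the returned points are defined only up to an unknown scale; the thesis's densification and super resolution stages would operate on top of a core like this.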
Committee
Vijayan Asari, PhD (Committee Chair)
Raul Ordonez, PhD (Committee Member)
Eric Balster, PhD (Committee Member)
Pages
140 p.
Subject Headings
Electrical Engineering; Engineering
Keywords
monocular vision; 3D Scene Reconstruction; Dense Point-cloud Representation; Point Cloud Model; DPR; Super Resolution; Vision Lab; University of Dayton; Computer Vision; Vision Navigation; UAV; UAS; UGV; RAIDER; Yakov Diskin; Depth Resolution Enhancement
Recommended Citations
APA Style (7th edition)
Diskin, Y. (2013). Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision [Master's thesis, University of Dayton]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933

MLA Style (8th edition)
Diskin, Yakov. Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision. 2013. University of Dayton, Master's thesis. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933.

Chicago Manual of Style (17th edition)
Diskin, Yakov. "Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision." Master's thesis, University of Dayton, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933
Document number:
dayton1366386933
Download Count:
3,738
Copyright Info
© 2013, some rights reserved.
Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision by Yakov Diskin is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. Based on a work at etd.ohiolink.edu.
This open access ETD is published by University of Dayton and OhioLINK.