Search Results


(Total results 3)

Search Report

  • 1. Schellenberg, Andrew. ForgeScan: A Framework for Iterative Voxel Reconstructions and Next-Best View Selection

    Master of Science, The Ohio State University, 2023, Mechanical Engineering

    This thesis investigates methods for autonomously reconstructing digital models from measurements of a physical environment. For embodied autonomous systems, understanding the current state of the surrounding space is critical for many higher-level decisions. This work is motivated by manufacturing systems performing iterative deformation processes on workpieces. However, perception-informed decisions occur in all kinds of systems: an aerial drone mapping a warehouse, a delivery robot driving around an obstacle, or a bin-picking robot selecting the correct object. Despite active research concerning methods to best integrate measurements into voxelized reconstructions during mapping or scanning tasks, there is no set of tools to develop and compare approaches. In response to this need, this work introduces ForgeScan, an open-source library that unifies voxel grid representations, their update methods, and next-best view selection algorithms with simulated or real depth sensors. Rather than reworking the same development path, ForgeScan is designed to be minimal and adaptable. New voxel update rules are easily implemented as a subclass of an abstract voxel grid base class and the use of C++17 variants provides datatype flexibility. More than a versatile voxel grid data structure, ForgeScan provides an abstract policy class to perform view selection algorithms and a lightweight depth camera simulator to generate synthetic data. User-defined policies can suggest camera poses, generate depth images of a mesh, and then add these measurements to one or more voxel grid implementations. ForgeScan is designed to be flexible at runtime. While collecting data, users may interactively add new voxel grids or change what policy is running. At any time, the state of each grid's reconstruction may be saved and inspected with VTK. 
Common voxel methodologies – space carving, truncated signed distance fields, and occupancy probability – are implemented to demonstrate the (open full item for complete abstract)

    Committee: Michael Groeber (Advisor); Andrew Gillman (Committee Member); Haijun Su (Committee Member); Ayonga Hereid (Committee Member) Subjects: Computer Engineering; Mechanical Engineering; Robotics
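
    The abstract describes extending ForgeScan by subclassing an abstract voxel grid base class, with C++17 `std::variant` providing per-voxel datatype flexibility. The sketch below illustrates that pattern only; the class and member names are invented for illustration and are not ForgeScan's actual API.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <variant>
    #include <vector>

    // Each voxel can hold different datatypes (e.g. a hit count or a TSDF
    // value); std::variant defaults to its first alternative, int{0}.
    using Voxel = std::variant<int, float>;

    // Abstract base class: subclasses define how a depth measurement
    // updates a voxel along the sensor ray.
    struct VoxelGridBase {
        explicit VoxelGridBase(std::size_t n) : voxels(n) {}
        virtual ~VoxelGridBase() = default;
        virtual void update(std::size_t index, float measurement) = 0;
        std::vector<Voxel> voxels;
    };

    // One concrete update rule: count how often a voxel is seen occupied.
    struct OccupancyCountGrid : VoxelGridBase {
        using VoxelGridBase::VoxelGridBase;
        void update(std::size_t index, float measurement) override {
            if (measurement > 0.0f)
                voxels[index] = std::get<int>(voxels[index]) + 1;
        }
    };

    int main() {
        OccupancyCountGrid grid(8);
        grid.update(3, 1.0f);   // simulated "occupied" measurement
        grid.update(3, 1.0f);
        grid.update(5, -1.0f);  // "free space" measurement, not counted
        assert(std::get<int>(grid.voxels[3]) == 2);
        assert(std::get<int>(grid.voxels[5]) == 0);
        return 0;
    }
    ```

    A new reconstruction rule (say, a TSDF update) would be another subclass overriding `update`, which is the extension point the abstract emphasizes.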
  • 2. Agrawal, Vikas. Data Warehouse Operational Design: View Selection and Performance Simulation

    Doctor of Manufacturing Management, University of Toledo, 2005, Manufacturing Management

    Decision support systems are a key to gaining competitive advantage. Many corporations have built or are building unified decision-support databases called data warehouses on which decision makers can carry out their analysis. A data warehouse is a very large database that integrates information extracted from multiple, independent, heterogeneous data sources to support business analysis activities and decision-making tasks. The data that is likely to be in demand is generally pre-computed and stored ahead of time at the data warehouse in the form of materialized views. This dramatically reduces execution time of decision support queries from hours or days to minutes or even seconds. There are many architectural issues concerning the efficient design of a data warehouse. This dissertation studies in depth three important issues. The first issue addressed is the Materialized View Selection (MVS) problem, which is the problem of choosing an optimal set of views to materialize under resource constraints. We have formulated interesting bottleneck versions of this problem and presented the 0-1 Integer Programming models as well as the heuristic procedures. Performance analysis of the heuristic procedures is also presented. Formulation of the MVS problem requires knowledge of the number of rows in each view in a given lattice structure, which refers to views and their interrelationships for a given set of dimensions. Counting the actual number of rows present in each view takes considerable time. The second issue addressed in this dissertation focuses on the statistical sampling techniques applied to data warehouses to estimate the number of rows in each view in a given lattice structure. We have shown that the application of sampling techniques results in significant time savings without compromising accuracy. The third issue deals with modeling the behavior and performance of a data warehouse system using simulation. We implemented the model in ARENA. 
The model enables a da (open full item for complete abstract)

    Committee: Mesbah Ahmed (Advisor) Subjects: Business Administration, Management
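
    The MVS problem in this abstract asks which views to materialize under a resource constraint. A common class of heuristics picks views greedily by estimated benefit per unit of storage; the sketch below shows that idea under invented view names, sizes, and benefits. It is not the dissertation's own 0-1 Integer Programming formulation or heuristic, just a minimal stand-in for the problem shape.

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <string>
    #include <vector>

    // A candidate view from a data-cube lattice, with an estimated size
    // (rows) and an estimated query-time benefit if it is materialized.
    struct View {
        std::string name;
        double rows;
        double benefit;
    };

    // Greedy heuristic: sort by benefit per row stored, then take every
    // view that still fits within the storage budget.
    std::vector<std::string> selectViews(std::vector<View> views, double budget) {
        std::sort(views.begin(), views.end(), [](const View& a, const View& b) {
            return a.benefit / a.rows > b.benefit / b.rows;
        });
        std::vector<std::string> chosen;
        double used = 0.0;
        for (const auto& v : views) {
            if (used + v.rows <= budget) {
                chosen.push_back(v.name);
                used += v.rows;
            }
        }
        return chosen;
    }

    int main() {
        // Benefit-per-row: product_month 0.9, store_month 0.8, product_day 0.15.
        std::vector<View> lattice = {
            {"product_month", 1000.0, 900.0},
            {"product_day", 10000.0, 1500.0},
            {"store_month", 2000.0, 1600.0},
        };
        auto chosen = selectViews(lattice, 3500.0);
        assert(chosen.size() == 2);
        assert(chosen[0] == "product_month");
        assert(chosen[1] == "store_month");
        return 0;
    }
    ```

    An exact 0-1 IP would instead introduce a binary variable per view and maximize total benefit subject to the storage constraint; the greedy pass above trades optimality for speed on large lattices.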
  • 3. Bordoloi, Udeepta. Importance-driven algorithms for scientific visualization

    Doctor of Philosophy, The Ohio State University, 2005, Computer and Information Science

    Much progress has been made in the field of visualization over the past few years, but in many situations it is still possible that the available visualization resources are overwhelmed by the amount of input data. The bottleneck may be the available computational power, storage capacity, or manpower, or a combination of these. In such situations, it is necessary to adapt the algorithms so that they can be run efficiently with less computation, smaller space requirements, and less time and effort from the human user. In this thesis, we present three algorithms that work towards reducing the resource constraints while maintaining the integrity of the visualizations. They are bound by a common underlying theme that all data elements are not equal in the particular visualization context – some are more important than others. We use certain data properties to create “importance” measures for the data. These measures allow us to control the distribution of resources – computational, storage or human – to different portions of the data. We present a space-efficient algorithm for speeding up isosurface extraction. Even though there exist algorithms that can achieve optimal search performance to identify isosurface cells, they prove impractical for large datasets due to a high storage overhead. With the dual goals of achieving fast isosurface extraction and simultaneously reducing the space requirement, we introduce an algorithm based on transform coding. We present a view selection method using a viewpoint goodness measure based on the formulation of entropy from information theory. It can be used as a guide that suggests good viewpoints for further exploration. We generate a view space partitioning, and select one representative view for each partition. Together, this set of views encapsulates the most important and distinct views of the data. We present an interactive global visualization technique for dense vector fields using levels of detail. 
It comb (open full item for complete abstract)

    Committee: Han-Wei Shen (Advisor) Subjects: Computer Science
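
    The third abstract's view selection method scores viewpoints with an entropy-based goodness measure. The sketch below shows the core idea in a generic form: treat each candidate view's distribution of per-element visible area as a probability distribution and prefer the view with the highest Shannon entropy (the most even exposure of the data). The normalization and the sample numbers are illustrative, not the thesis's exact formulation.

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Shannon entropy (in bits) of the normalized visible-area distribution
    // for one viewpoint; zero-area elements contribute nothing.
    double viewEntropy(const std::vector<double>& visibleArea) {
        double total = 0.0;
        for (double a : visibleArea) total += a;
        double h = 0.0;
        for (double a : visibleArea) {
            if (a <= 0.0) continue;
            double p = a / total;
            h -= p * std::log2(p);
        }
        return h;
    }

    // Pick the viewpoint whose visibility distribution has maximal entropy.
    std::size_t bestView(const std::vector<std::vector<double>>& areasPerView) {
        std::size_t best = 0;
        for (std::size_t i = 1; i < areasPerView.size(); ++i)
            if (viewEntropy(areasPerView[i]) > viewEntropy(areasPerView[best]))
                best = i;
        return best;
    }

    int main() {
        // View 0 is dominated by one element; view 1 sees all four evenly.
        std::vector<std::vector<double>> views = {
            {9.0, 0.5, 0.3, 0.2},
            {2.0, 2.0, 2.0, 2.0},
        };
        assert(bestView(views) == 1);
        // Four equal probabilities of 1/4 give exactly 2 bits of entropy.
        assert(std::abs(viewEntropy(views[1]) - 2.0) < 1e-9);
        return 0;
    }
    ```

    Scoring every candidate this way is what makes the representative-view selection over a view space partitioning tractable: each partition's winner is simply its entropy argmax.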