
Search Results


(Total results 25)


  • 1. Brown, Susan The conception and production of the costume designs for Don Nigro's mariner

    Master of Fine Arts, The Ohio State University, 1991, Theatre

    Committee: Dennis Parker (Advisor)
  • 2. Thurnauer, Mark Lightscape as a Design Tool for Thematic Daylighting Design

    Master of Architecture, Miami University, 2001, Architecture and Interior Design

    It is imperative to have visual imagery when designing for lighting; therefore, most architects, past and present, have used sketches, paintings, or physical models to determine whether the light in a space meets the design intentions. These tools all have their limitations. With the advancement of computer technology, light-simulation software such as Radiance and Lightscape can be used as an effective tool for thematic daylight design. This thesis focuses on Lightscape because of its reputation and affordability. The work found the tool's precision to vary, and much of the research effort focuses on understanding what precision is necessary in a design context. The thesis also strives to develop a method to incorporate Lightscape into the architectural design process.

    Committee: Murali Paranandi (Advisor) Subjects: Architecture
  • 3. Koirala, Aayog View synthesis for 360° panoramic spherical images using Multiplane Images.

    Master of Science in Computer Science, Miami University, 2025, Computer Science and Software Engineering

    View synthesis for 360° panoramic images is critical for immersive experiences in virtual reality (VR), augmented reality (AR), and interactive media. However, existing methods struggle with handling spherical projections and generating accurate, parallax-consistent views. This thesis proposes a novel approach for view synthesis using Multiplane Images (MPIs) constructed from 360° video frames. To address the complexities of spherical imagery, each frame is converted into six-face cube maps, and MPIs are generated for each face. Depth maps are estimated using the DepthAnything v2 model, providing metric depth in meters. The depth range is divided into intervals to create MPI layers and a cubic alpha transition is applied to smooth blending between layers. The method supports novel view synthesis and view interpolation to generate intermediate perspectives, which are evaluated using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). The proposed method is compared against a neural network-based MPI method developed by Google Research to benchmark its effectiveness. The results demonstrate that the depth-based approach achieves comparable or superior performance, offering an interpretable, efficient alternative for view synthesis. This work contributes to computer vision, VR, and AR, enabling more realistic and immersive experiences in virtual environments.

    Committee: John Femiani (Advisor); Xianglong Feng (Committee Member); Eric Bachmann (Committee Member) Subjects: Computer Science
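    The layer construction this abstract describes — splitting the estimated depth range into intervals and blending adjacent MPI layers with a cubic alpha transition — can be sketched as follows (an illustrative reconstruction; the function and parameter names are mine, not taken from the thesis):

```python
def mpi_layer_alphas(depth, near, far, n_layers):
    """Soft per-layer alpha for a pixel whose estimated depth is `depth`.

    The depth range [near, far] is split into equal intervals, one per
    MPI layer; each layer's alpha falls off with a cubic smoothstep so
    neighboring layers blend smoothly instead of hard-assigning pixels.
    """
    step = (far - near) / n_layers
    centers = [near + (i + 0.5) * step for i in range(n_layers)]
    alphas = []
    for c in centers:
        t = min(abs(depth - c) / step, 1.0)  # normalized distance to the layer center
        alphas.append(1.0 - (3 * t ** 2 - 2 * t ** 3))  # cubic falloff
    return alphas
```

    For example, a pixel at depth 2.5 in a 0-10 m range with five layers contributes mostly to the second layer and tapers into its neighbors rather than landing in a single hard bin.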
  • 4. Dhakal, Rabin Towards a Low-Cost Distributed AWOS: Machine Learning for Optical Ceilometry, Cloud Detection, and Classification

    Master of Science (MS), Ohio University, 2024, Computer Science (Engineering and Technology)

    Large, commercial, towered airports are well equipped to provide pilots with real-time, weather-related data before flight. At small airports, there is a weather-data gap for aircraft flying at lower altitudes. Accurate cloud information (cloud type and height above the ground) is crucial for pilots flying at low altitudes because it affects both visibility and safety. A ceilometer is a device that estimates cloud height above the ground, but it is often costly and lacks portability. This thesis proposes an innovative, cost-effective approach using computer vision and deep learning to address these limitations. One of the primary challenges for these methods is the need for extensive datasets for training and evaluation, as real-world collection of cloud height and type data is time-consuming and resource-intensive. To overcome this, we generated synthetic cloud data using a stereo camera setup with ground-truth height information in a virtual environment. In this thesis, cloud information comprises cloud-base height estimation and classification of cloud type. We propose methods that could predict cloud-base height more efficiently than state-of-the-art methods when applied to real-world datasets in the future. We incorporated synthetic data to evaluate the performance of our method. These synthetic data, created by simulating VDB clouds, enable the testing and validation of cloud detection models and the calibration of height predictions. We rendered the 3D scene and created ground-truth bounding-box and cloud-type datasets covering Altocumulus, Altostratus, Cirrocumulus, Cumulonimbus, Cumulus, Cirrostratus, Cirrus, Stratocumulus, and Stratus. We trained the YOLO-v8 model on the cloud detection dataset and employed unseen synthetic data to assess its robustness and accuracy. Once vetted, we generated disparity images from the stereo pairs. 
We trained several CNN-based regression models using this di (open full item for complete abstract)

    Committee: Chad Mourning (Advisor); Zhewei Wang (Committee Member); Jundong Liu (Committee Member); Bhaven Naik (Committee Member) Subjects: Computer Science
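    The stereo step at the end of the abstract — turning disparity images into cloud-base heights — rests on the standard stereo relation range = f·B/d. A minimal sketch of that conversion (the function, its parameters, and the elevation-angle correction are illustrative assumptions, not taken from the thesis):

```python
import math

def cloud_base_height(disparity_px, focal_px, baseline_m, elevation_deg=90.0):
    """Cloud range from stereo disparity, converted to height above the camera.

    range = focal * baseline / disparity (standard stereo geometry); for a
    camera tilted `elevation_deg` above the horizon, the vertical height
    component is range * sin(elevation).
    """
    rng = focal_px * baseline_m / disparity_px
    return rng * math.sin(math.radians(elevation_deg))
```

    With a 0.5 m baseline, a 1000 px focal length, and a 2 px disparity, a zenith-pointing camera would report a 250 m cloud base.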
  • 5. Kunting, Qi Automatic PBR Texture Reconstruction for Window Images

    Master of Computer Science, Miami University, 2023, Computer Science

    In various fields, computer-generated architecture plays a pivotal role, especially in tasks like historical scene reconstruction and real estate marketing. However, reverse-engineering architectural details from images poses significant challenges due to diverse constraints. Our research focuses on one central architectural element—windows—as a stepping stone towards 3D modeling of complex scenes. We employ Physically Based Rendering (PBR) texture mapping to accurately represent material information, including transparency and reflectance, in window surfaces. Estimating PBR textures, including depth and albedo maps, is crucial for capturing the 3D appearance of windows. Given limited or unavailable ground truth data, we adopt unsupervised learning methods. Our approach utilizes a ResNet-50 backbone for window image viewpoint estimation and implements a neural network for unsupervised PBR texture estimation. This work aims to enhance the efficiency and accuracy of 3D modeling in architectural contexts, addressing the inherent challenges of reverse-engineering architectural features from images.

    Committee: John Femiani (Advisor); Khodakhast Bibak (Committee Member); Daniela Inclezan (Committee Member) Subjects: Computer Science
  • 6. Riley, Joseph Noise Function Turbulence Optical Phase Screens and Physics Based Rendering

    Doctor of Philosophy (Ph.D.), University of Dayton, 2021, Electro-Optics

    Wave optics is used for modeling laser propagation through turbulence, and with the maturation of laser technology, the simulation space has expanded to the point that current turbulence representations via phase screens are lacking. This research eliminates such deficiencies by adopting a fractal description of turbulence in order to facilitate noise-function-based phase screens. Focusing primarily on aero-optical data collected using Shack-Hartmann wavefront sensors, novel analysis processes are developed that leverage wavelets, circular statistics, optical flow, and radial basis functions. The resulting values serve as inputs for noise-function-based phase screen generators supported by a dedicated physics-based render engine developed from first principles. Finally, multiple wave optics simulations demonstrate the flexibility of this methodology, culminating with an airborne example that includes turret slew over the hemisphere, producing angle-dependent far-field irradiance profiles distorted by localized, non-stationary turbulence.

    Committee: Paul McManamon (Committee Chair); David Goorskey (Committee Member); Andrew Sarangan (Committee Member); Edward Watson (Committee Member) Subjects: Computer Science; Electromagnetics; Optics; Physics
  • 7. King, Laurence Real-Time Spherical Harmonic Surface Space Thickness Reconstruction

    Master of Science, The Ohio State University, 2019, Computer Science and Engineering

    Translucency modeling algorithms used in real-time applications often incorporate object thickness; however, directionally sensitive thickness evaluation techniques are frequently too slow for real-time use and faster non-directional approximations are used. In this work, we propose a directionally sensitive surface space thickness reconstruction technique utilizing spherical harmonics. Our work can be seen as an improvement to current local-thickness based techniques. We evaluate our method's ability to accurately approximate surface thickness and compare it to both local-thickness and more accurate far-near techniques (wherein a surface's front and back faces are rendered and the far and near depths are subtracted). Additionally, we explore the effects of reconstruction parameters such as sampling density and order on reconstruction error and sensitivity.

    Committee: Han-Wei Shen (Advisor); Huamin Wang (Committee Member) Subjects: Computer Science
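    The directional reconstruction at the heart of this abstract can be made concrete with a first-order spherical harmonic sketch (the order-1 truncation and the names are mine; the thesis studies higher orders and sampling densities):

```python
import math

def sh_basis_l1(x, y, z):
    """Real spherical harmonic basis up to order 1 for a unit direction."""
    return [
        0.5 * math.sqrt(1.0 / math.pi),      # l=0
        math.sqrt(3.0 / (4 * math.pi)) * y,  # l=1, m=-1
        math.sqrt(3.0 / (4 * math.pi)) * z,  # l=1, m=0
        math.sqrt(3.0 / (4 * math.pi)) * x,  # l=1, m=1
    ]

def reconstruct_thickness(coeffs, direction):
    """Directional thickness from per-point SH coefficients: a weighted
    sum of the basis functions evaluated in the query direction."""
    return sum(c * b for c, b in zip(coeffs, sh_basis_l1(*direction)))
```

    A useful sanity check against the non-directional local-thickness baseline: projecting a direction-independent thickness T yields the single l=0 coefficient 2·sqrt(pi)·T, and reconstruction returns T in every direction.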
  • 8. Mazursky, Alex Application of Electrorheological Fluid for Conveying Realistic Haptic Feedback in Touch Interfaces

    Master of Science, Miami University, 2019, Mechanical and Manufacturing Engineering

    Realistic haptic feedback is necessary to provide meaningful touch information to users of numerous technologies, such as virtual reality, mobile devices and robotics. For a device to convey realistic haptic feedback, two touch sensations must be present: tactile feedback and kinesthetic feedback. Tactile feedback is felt at the surface of one's skin and displays textures and vibrations, whereas kinesthetic feedback is felt in one's joints and muscles and transmits position and movement information. While many devices today display tactile feedback through vibrations, most neglect to incorporate kinesthetic feedback due to size constraints. To provide comprehensive feedback, this study investigates a new haptic device based on an unconventional actuation method: electrorheological (ER) fluid, a smart fluid whose yield stress is tunable by an applied electric field. The device's control electronics and structural components are integrated into a compact printed circuit board, resulting in a slim device suitable for mobile applications. By controlling the ER fluid flow via applied electric fields, the device can generate a wide and distinct range of both tactile and kinesthetic sensations. These sensations were characterized both analytically, from ER fluid's governing equations, and experimentally. The device may be used as a haptic interface between a user and a virtual environment.

    Committee: Jeong-Hoi Koo Ph.D. (Advisor); Tae-Heon Yang Ph.D. (Committee Member); Michael Bailey Van Kuren Ph.D. (Committee Member) Subjects: Computer Engineering; Materials Science; Mechanical Engineering
  • 9. Ran, Hao VISION-BASED SURGICAL TOOL POSE ESTIMATION FOR DA VINCI ROBOTIC SYSTEM

    Master of Sciences (Engineering), Case Western Reserve University, 2017, EECS - Electrical Engineering

    This thesis explores Bayesian-filtering-based methods for surgical tool tracking using stereo vision. The tool geometry and silhouette-generating algorithm are defined based on the surgical tool model. A virtual rendering algorithm is proposed to create virtual images by rendering the silhouette of the defined tool geometry under the stereo cameras. Both parametric and nonparametric filtering algorithms are explored for 9-degree-of-freedom surgical tool tracking, represented by the Unscented Kalman Filter and the Particle Filter, respectively. The tracking performance of both filters is evaluated on the da Vinci surgical robotic system and the da Vinci simulation environment (Gazebo) through the Robot Operating System (ROS).

    Committee: Murat Cavusoglu Dr. (Committee Chair); Wyatt Newman Dr. (Committee Member); Mehmet Koyuturk Dr. (Committee Member) Subjects: Robotics
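    The nonparametric branch of the comparison, the Particle Filter, follows a generic predict-update-resample cycle. A minimal sketch of one cycle (not the thesis implementation; the 9-DOF pose state is reduced here to a scalar for clarity):

```python
import random

def particle_filter_step(particles, weights, motion, likelihood):
    """One predict-update-resample cycle of a particle filter."""
    # Predict: propagate each particle through the motion model.
    particles = [motion(p) for p in particles]
    # Update: reweight by the observation likelihood, then normalize.
    weights = [w * likelihood(p) for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportionally to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)
```

    In the thesis's setting, `motion` would perturb a tool pose hypothesis and `likelihood` would score the rendered virtual silhouette against the observed stereo images.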
  • 10. Nykl, Scott Interactive Mesostructures

    Doctor of Philosophy (PhD), Ohio University, 2013, Electrical Engineering & Computer Science (Engineering and Technology)

    Humans live in a 3D reality and experience a 3D world. As a result, 3D computer graphics is a natural way to present digital data to humans. Virtual worlds of great complexity are naturally perceived and understood by the human visual system, offering an efficient pathway for transferring digital data from a display into human knowledge. Rendering a 3D scene entails a computational cost specific to that scene, and the ever-increasing demand to expand a scene's boundaries, add detail, and enhance a scene's behavior challenges both rendering hardware and rendering algorithms. This work presents a set of novel image-based algorithmic rendering approaches designed for massive concurrent execution on modern programmable Graphical Processor Units (GPUs). These approaches make use of constructs known as billboards and mesostructures; their goal is to render as much interactive detail as possible while maintaining real-time frame rates. First, billboards are used to dynamically generate image-based impostors of computationally expensive objects within a scene. These impostors are subsequently rendered in lieu of the original geometry, reducing the scene's overall rendering time. Impostors are then applied to the real-time visualization of tens of millions of 3D Light Detection and Ranging (LIDAR) data points, enabling highly interactive visualizations previously unachievable on commodity hardware. Second, mesostructures are used to dynamically generate a 3D height field on top of an existing geometrical surface via a 2D texture known as a displacement map. This work presents a technique for interactively deforming and colliding with mesostructures at a per-texel level. 
This technique integrates well with existing physics engines and is able to reduce traditional 3D geometrical deformations (vertex-based) to 2D image space operations (pixel-based) that are parallelized on a GPU without CPU-GPU data shuffl (open full item for complete abstract)

    Committee: David Chelberg PhD (Advisor) Subjects: Computer Science
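    The per-texel mesostructure deformation described above — reducing a vertex-based 3D deformation to a 2D image-space operation on a displacement map — can be illustrated with a simple CPU version (the dissertation parallelizes a per-texel update of this kind on the GPU; the dent shape and names here are mine):

```python
def dent_displacement_map(heights, cx, cy, radius, depth):
    """Press a dent of the given depth into a 2-D displacement map.

    Each texel inside `radius` of the impact point (cx, cy) is lowered by
    a smoothly tapering amount -- a deformation expressed entirely as a
    per-texel image-space operation rather than a vertex update.
    """
    r2 = radius * radius
    for y, row in enumerate(heights):
        for x in range(len(row)):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if d2 < r2:
                row[x] = max(0.0, row[x] - depth * (1.0 - d2 / r2))
    return heights
```

    Because every texel update is independent, the loop maps directly onto one GPU thread per texel, which is what makes the image-space formulation attractive.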
  • 11. DWITYABASWARA, DIONISIUS Voluntary Movement in Architectural Representation: The Exploration of Video Game Technology in Architecture

    MS ARCH, University of Cincinnati, 2008, Design, Architecture, Art and Planning : Architecture

    Architecture has just recently started to explore the emerging potential of real-time rendering (video game) technology. This thesis argues that such technology provides for an improved sense of "being there," compared with previous architectural three-dimensional representation methods. Current methods typically put individuals in the role of spectators who passively examine the space in the represented architecture. The new interactive medium enables individuals to be more actively involved with the representational environment, which in turn makes for a better understanding of its spatial, experiential effects. The sort of viewer involvement specific to this research is the ability to actively and voluntarily traverse the virtual environment in a manner similar to the voluntary movement capability of real-life presence. In order to demonstrate the potential of an architectural representation method that allows for voluntary movement through space, this thesis will investigate current philosophical and practical discourse, as well as directly experiment with the technology by building a demonstration. Recent theoretical and philosophical discussions of digital technology and real-time rendering raise a variety of arguments, both positive and skeptical, about how we may encounter physically non-existent environments. Several philosophical bases for the discourse will be investigated, such as Merleau-Ponty's theory of the perception of space. Merleau-Ponty tries to restore the cognitive role of the body against a philosophical tradition begun with Descartes that emphasizes the mind, arguing that in order to learn, the physical body has to be present. Learning requires "an active body coping with things." (Dreyfus, 2001, p. 57) This account shows that a body, or a self, who is able to cope voluntarily with things is required for spatial perception, which is exactly what this technology offers as an enhancement over passive modes of representation. 
While real-time rend (open full item for complete abstract)

    Committee: John E. Hancock Dr (Committee Chair); David G. Saile PhD (Committee Member); Jose M. Kozan MS. Arch (Committee Member) Subjects: Architecture
  • 12. Nouanesengsy, Boonthanome High-Concurrency Visualization on Supercomputers

    Doctor of Philosophy, The Ohio State University, 2012, Computer Science and Engineering

    Many scientific and engineering fields employ computer simulations of specific phenomena to help solve complex problems. Supercomputers and other high performance computing machines are regularly used to perform these scientific simulations. The resulting data then needs to be analyzed and visualized, which is difficult when the data is large. One approach to producing visualizations faster is to generate them in parallel. Many challenges remain, though, when attempting to analyze and visualize large data in parallel, while maintaining good performance and scalability. The size of the data is one challenge. When data size becomes very large, the I/O overhead from loading the data becomes a bottleneck, which could hinder performance. In addition, some visualization algorithms have unknown communication and computational load, which results in poor workload distribution and load balancing. This load imbalance hinders overall scalability. Another possible reason for poor parallel performance is that the method does not take advantage of the specific hardware architecture of the host machine. In order to meet these challenges, we present methods to parallelize several visualization techniques. First, a scalable shared memory rendering technique was found by adapting established parallel rendering methods to a shared memory architecture. Three rasterization methods, including sort-first, sort-last, and a hybrid method, were tested on a large shared-memory machine. Next, parallel streamline generation in static flow fields, due to the nature of the problem, suffers from high load imbalance. To make the computation more load balanced, we analyzed the flow field and estimated the workload of each block in the flow field. A load balanced partitioning of data blocks was then computed from this workload estimation. In our tests, we were able to scale up to thousands of processes while using hundreds of thousands of seeds. 
For time-varying flow fields, the Finite-Time Lyapunov (open full item for complete abstract)

    Committee: Han-Wei Shen PhD (Advisor); Yusu Wang PhD (Committee Member); Gagan Agrawal PhD (Committee Member); Kate Calder PhD (Committee Member) Subjects: Computer Science
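    The load-balanced partitioning step — assigning data blocks to processes from a per-block workload estimate — can be approximated with a greedy longest-processing-time heuristic (a sketch of the general idea, not the dissertation's algorithm):

```python
import heapq

def balanced_partition(workloads, n_procs):
    """Assign blocks to processes, heaviest first, always onto the
    currently least-loaded process (longest-processing-time heuristic)."""
    assignment = [[] for _ in range(n_procs)]
    heap = [(0.0, p) for p in range(n_procs)]  # (current load, process id)
    heapq.heapify(heap)
    for block in sorted(range(len(workloads)), key=lambda b: -workloads[b]):
        load, p = heapq.heappop(heap)
        assignment[p].append(block)
        heapq.heappush(heap, (load + workloads[block], p))
    return assignment
```

    For streamline tracing, the workload estimate would come from analyzing the flow field per block, after which an assignment like this keeps per-process loads close to even.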
  • 13. Kerwin, Thomas Enhancements in Volumetric Surgical Simulation

    Doctor of Philosophy, The Ohio State University, 2011, Computer Science and Engineering

    Computer surgical simulation has a great deal of potential in medical education and testing. However, there are numerous problems in integrating simulation software technology into a medical curriculum. Review and analysis of the data from the simulation is important to evaluate and assist students. A combination of realistic rendering for good translation of skills to the operating room and illustrative rendering to aid novices can help the simulation system target a wide range of students. In the context of an ongoing project to develop and improve a temporal bone surgical simulator, this document describes algorithms that address these issues and provides solutions to them. In collaboration with expert surgeons, we have met some of the technological challenges that limit surgical simulation. Storage and playback of the interactions that users have with the simulation system is achieved via a snapshot technique using forward differences for efficient compression. A technique for realistic rendering of fluid and wet surfaces in a virtual surgical environment using modern graphics hardware is explained. Using a modified distance field technique, we show how to display context around important anatomical structures in segmented datasets. A method of automatic scoring of the users of the simulator is detailed. This method involves partitioning the volume based on proximity to critical structures and then using the Earth Mover's Distance to compare the content of those partitions. Distance fields are also employed for shape analysis techniques to extract features that are used in a visualization system. This system allows expert surgeons to examine and compare the virtual mastoidectomies performed by residents during training.

    Committee: Han-Wei Shen PhD (Committee Chair); Roger Crawfis PhD (Committee Member); Raghu Machiraju PhD (Committee Member) Subjects: Computer Science; Medical Imaging
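    The automatic scoring step compares the content of volume partitions with the Earth Mover's Distance. For equal-mass 1-D histograms the EMD reduces to the summed absolute difference of the cumulative distributions, which makes for a compact illustration (a sketch of the metric itself, not the simulator's scoring pipeline):

```python
def emd_1d(hist_a, hist_b):
    """Earth Mover's Distance between two equal-mass 1-D histograms:
    the total work to morph one into the other, computed as the summed
    absolute difference of their running totals (CDFs)."""
    cdf_a = cdf_b = 0.0
    work = 0.0
    for a, b in zip(hist_a, hist_b):
        cdf_a += a
        cdf_b += b
        work += abs(cdf_a - cdf_b)
    return work
```

    Unlike a bin-by-bin difference, the EMD grows with how far mass must move, so drilling in the wrong partition is penalized in proportion to how wrong it is.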
  • 14. XUE, Daqing Volume Visualization Using Advanced Graphics Hardware Shaders

    Doctor of Philosophy, The Ohio State University, 2008, Computer Science and Engineering

    Graphics-hardware-based volume visualization techniques have been an active research topic over the last decade. With more powerful computation, the availability of large texture memory, and high programmability, modern graphics hardware has been playing an increasingly important role in volume visualization. In the first part of the thesis, we focus on graphics hardware acceleration techniques. In particular, we develop a fast X-ray volume rendering technique using point convolution. An X-ray image is generated by convolving the voxel projections in the rendering buffer with a reconstruction kernel. Our technique allows users to interactively view large datasets at their original resolutions on standard PC hardware. Later, an acceleration technique for slice-based volume rendering (SBVR) is examined. By means of the early z-culling feature of modern graphics hardware, we can properly set up the z-buffer from isosurfaces to gain a significant improvement in rendering speed for SBVR. The high programmability of the graphics processing unit (GPU) has spurred a great deal of research exploring this advanced hardware feature. In the second part of the thesis, we first revisit the texture splat for flow visualization. We develop a texture splat vertex shader to achieve fast animated flow visualization. Furthermore, we develop a new rendering shader for implicit flow. By carefully tracking and encoding the advection parameters into a three-dimensional texture, we achieve high appearance control and flow representation in real-time rendering. Finally, we present an indirect shader synthesizer that combines different shader rendering effects to create a highly informative image visualizing the data under investigation. One or more shaders are associated with the voxels or geometries, and the shader to use for rendering is resolved at run time. 
Our indirect shader synthesizer provides a novel method to control the appearance of the (open full item for complete abstract)

    Committee: Roger Crawfis PhD (Advisor); Raghu Machiraju PhD (Committee Member); Han-Wei Shen PhD (Committee Member) Subjects: Computer Science
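    The point-convolution idea behind the X-ray renderer — splat each voxel's projected density into the buffer, then convolve with a reconstruction kernel — looks like this along a single scanline (a CPU sketch; the thesis performs this in graphics hardware):

```python
def xray_scanline(projected_density, kernel):
    """Convolve splatted per-pixel densities with a reconstruction kernel,
    producing the X-ray intensity profile along one scanline."""
    half = len(kernel) // 2
    out = [0.0] * len(projected_density)
    for i, density in enumerate(projected_density):
        for j, k in enumerate(kernel):  # spread each splat over the kernel footprint
            idx = i + j - half
            if 0 <= idx < len(out):
                out[idx] += density * k
    return out
```

    Because X-ray compositing is a pure sum, the kernel can be applied once to the accumulated projection rather than per voxel, which is the source of the speedup.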
  • 15. Wang, Chaoli A multiresolutional approach for large data visualization

    Doctor of Philosophy, The Ohio State University, 2006, Computer and Information Science

    The sizes of large data sets, ranging from gigabytes to terabytes, pose a formidable challenge to conventional volume visualization algorithms. Multiresolution rendering proves to be a viable solution to this challenge by reducing the actual amount of data sent to the rendering pipeline. However, previous multiresolution rendering algorithms are inherently sequential, which hinders their application in parallel environments, such as increasingly available PC clusters. Moreover, most existing algorithms for large volume visualization use data-based metrics for level-of-detail selection and provide very limited user interaction and control. There is a lack of techniques and tools for more effective level-of-detail selection and rendering. I present a multiresolutional approach for representing, managing, selecting, and rendering large-scale three-dimensional steady and time-varying data sets. A multiresolution volume rendering algorithm is proposed to visualize large data sets in parallel environments while ensuring a well-balanced workload. A comprehensive image-based quality metric is introduced for quality-driven interactive level-of-detail selection and rendering of large data sets. Furthermore, a new visual navigation interface is presented for the user to examine, compare, and validate different level-of-detail selection algorithms. Future research focuses on transfer function design for large-scale time-varying data, including spatio-temporal data reduction, transfer function design, and user interface support for space-time data exploration.

    Committee: Han-Wei Shen (Advisor) Subjects: Computer Science
  • 16. Zhang, Caixia Advanced volume rendering on shadows, flows and high-dimensional rendering

    Doctor of Philosophy, The Ohio State University, 2006, Computer and Information Science

    Although many advances have been achieved within the visualization community in the last decade, many challenging problems remain open in volume rendering. In this dissertation, we study three challenging topics in advanced volume rendering: shadows, flows, and high-dimensional rendering. Shadows are essential to realistic and informative scenes. In volume rendering, the shadow calculation is difficult because the light intensity is attenuated as the light traverses the volume. We investigate a new shadow algorithm that properly determines the light attenuation and generates more accurate volumetric shadows with low storage requirements by using 2D shadow buffers. We have extended our shadow algorithm to deal with extended light sources and generate volumetric soft shadows with an analytic method and a convolution technique. This shadow and soft-shadow algorithm has also been applied to mixed scenes of volumetric and polygonal objects. Multiple light scattering is also modeled in our volumetric lighting model. The interval volume algorithm is a region-of-interest extraction algorithm for steady and time-varying three-dimensional structured and unstructured grids. We present several new rendering operations to provide effective visualizations of the 3D scalar field. This technique has been extended to four dimensions to extract time-varying interval volumes. The time-varying interval volumes are rendered directly, from 4-simplices to image space. We propose a high-dimensional rendering algorithm to solve this technical challenge. In this way, we can visualize the integrated interval volumes across time steps and see how interval volumes change over time in a single view. Three-dimensional flow visualization is a challenging topic. We propose an implicit flow field method to visualize 3D flow fields. An implicit flow field is first extracted using an advection operator on the flow, with a set of flow-related attributes stored. 
Two techniques are then em (open full item for complete abstract)

    Committee: Roger Crawfis (Advisor) Subjects: Computer Science
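    The 2D shadow-buffer idea — accumulating light attenuation slice by slice as the light traverses the volume — can be sketched as a front-to-back sweep (illustrative only; the dissertation's algorithm additionally handles extended light sources and soft shadows via convolution):

```python
def shadow_buffers(slice_opacities):
    """Sweep the volume front-to-back along the light direction.

    The 2-D buffer stored for each slice records how much light still
    reaches that slice after attenuation by every slice nearer the light.
    """
    rows, cols = len(slice_opacities[0]), len(slice_opacities[0][0])
    light = [[1.0] * cols for _ in range(rows)]  # full light before the first slice
    buffers = []
    for opacity in slice_opacities:
        buffers.append([row[:] for row in light])  # light arriving at this slice
        light = [[l * (1.0 - a) for l, a in zip(lrow, arow)]
                 for lrow, arow in zip(light, opacity)]
    return buffers
```

    Only one 2D buffer per slice is live during the sweep, which is where the low storage requirement comes from.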
  • 17. Garcia, Antonio Parallel time varying volume rendering on tile displays

    Doctor of Philosophy, The Ohio State University, 2006, Computer and Information Science

    Volume rendering is a process with high computation demands. Time-Varying Volume Data (TVVD) increases the challenge not only for computing but also for loading and storing the data in real time. Scientific fields are generating data at a much faster rate than processing power is growing; thus parallel solutions are necessary for rendering such large datasets. Furthermore, tile displays provide more pixels to present the features of the phenomena under study in finer detail. However, current object-space partitioning schemes for parallel rendering can make load imbalance across processors very severe. This work provides an alternative based on wavelet theory. The size reduction of the data allows data migration and makes image-space partitioning schemes possible for commodity processors deployed in PC clusters.

    Committee: Han Shen (Advisor) Subjects: Computer Science
  • 18. Bordoloi, Udeepta Importance-driven algorithms for scientific visualization

    Doctor of Philosophy, The Ohio State University, 2005, Computer and Information Science

    Much progress has been made in the field of visualization over the past few years; but in many situations, it is still possible that the available visualization resources are overwhelmed by the amount of input data. The bottleneck may be the available computational power, storage capacity, available manpower, or a combination of these. In such situations, it is necessary to adapt the algorithms so that they can be run efficiently with less computation, smaller space requirements, and less time and effort from the human user. In this thesis, we present three algorithms that work towards reducing the resource constraints while maintaining the integrity of the visualizations. They are bound by a common underlying theme: in a particular visualization context, not all data elements are equal – some are more important than others. We use certain data properties to create "importance" measures for the data. These measures allow us to control the distribution of resources – computational, storage or human – to different portions of the data. We present a space-efficient algorithm for speeding up isosurface extraction. Even though there exist algorithms that achieve optimal search performance in identifying isosurface cells, they prove impractical for large datasets due to a high storage overhead. With the dual goals of achieving fast isosurface extraction and simultaneously reducing the space requirement, we introduce an algorithm based on transform coding. We present a view selection method using a viewpoint goodness measure based on the formulation of entropy from information theory. It can be used as a guide that suggests good viewpoints for further exploration. We generate a view space partitioning, and select one representative view for each partition. Together, this set of views encapsulates the most important and distinct views of the data. We present an interactive global visualization technique for dense vector fields using levels of detail. 
It comb (open full item for complete abstract)

    Committee: Han-Wei Shen (Advisor) Subjects: Computer Science
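    The entropy-based viewpoint goodness measure can be stated compactly: treat the projected areas of the scene's faces as a probability distribution and score the view by its entropy, so viewpoints that show many faces evenly score highest (a sketch of the formulation, not the dissertation's exact measure):

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy of the distribution of projected face areas;
    higher values indicate a more balanced, informative viewpoint."""
    total = sum(projected_areas)
    probs = [a / total for a in projected_areas if a > 0]
    return -sum(p * math.log2(p) for p in probs)
```

    Four equally visible faces give entropy 2 bits, while a view dominated by one face scores 0 — the measure ranks the former as the better exploration viewpoint.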
  • 19. Shareef, Naeem Remote user-driven exploration of large scale volume data

    Doctor of Philosophy, The Ohio State University, 2005, Computer and Information Science

    No rendering pipeline exists to explore very large volume data, for example on the order of terabytes or more. In the extreme case, the data is essentially "stuck" at the site of creation. The consequence is that the utility of the dataset is greatly diminished, since only a very small number of users are able to explore the rich information contained within. We address the challenge of providing access to such datasets for remote users equipped with low-cost computational and display technology. Our work proposes a novel end-to-end rendering pipeline that allows for effective data exploration. Our paradigm couples view-dependent and image-based data structures with novel rendering algorithms that allow for fast spatial and transfer-function browsing on the client side. The view-dependent data structure, called a Pixel Ray Image (PRI), holds scalar information on projection rays through the volume. The representations for the scalar data are determined from the requirements of the particular projection equation. We present compact representations that may be stored in texture form for interactive rendering on today's PC graphics hardware. During spatial browsing, when the transfer functions do not change, the Layered Slab Image (LSI) data structure holds pre-computed projections of the data that are then used to quickly compute approximate renderings at nearby viewpoints. The PRI represents a compression of the volume along a single dimension. Sampling planes placed orthogonal to the sampling direction of the PRI can be computed easily in graphics hardware. Using an approach similar to the shear-warp algorithm, the volume may be rendered directly from this compressed format at views within a 45-degree neighborhood. The volume may be rendered from any view by placing three PRIs at mutually orthogonal sampling directions. We present a novel rendering algorithm to render the volume directly from the compressed format on PC graphics hardware.

    Committee: Roger Crawfis (Advisor) Subjects: Computer Science
  • 20. Gao, Jinzhu Visibility acceleration for large-scale volume visualization

    Doctor of Philosophy, The Ohio State University, 2004, Computer and Information Science

    A growing number of scientific and medical applications now produce large-scale data, ranging from gigabytes to terabytes, on a daily basis. To analyze and understand this enormous amount of data, scientific visualization has become an indispensable tool. However, as the size of data increases, it can easily overwhelm the underlying computer system's limited computation power, storage space, and network bandwidth. The interactivity of traditional visualization approaches is severely challenged, and more advanced solutions are needed. This dissertation focuses on designing efficient visibility culling schemes for scalable visualization systems running in a massively parallel environment. The key idea is to efficiently estimate the visible portions of the data before the parallel visualization process starts. By utilizing parallel computing power, we are able to speed up the visualization process and visualize large-scale data that cannot be easily handled by a single PC. Visibility culling techniques provide further acceleration for a visualization algorithm by reducing the amount of data sent to the visualization pipeline. Achieving effective visibility culling in a scalable parallel visualization system is the main goal of this research. In this dissertation, we present several efficient and scalable visibility culling schemes for parallel visualization algorithms. First, we developed a data management and distribution mechanism to ensure a balanced workload with minimal run-time data communication overhead. Second, we proposed a multi-pass visibility culling scheme designed especially for parallel view-dependent isosurface extraction. To speed up the visibility estimation, we introduced a hardware-accelerated solution that takes advantage of the occlusion query capability supported by modern graphics hardware. Finally, to minimize the synchronization overhead in a multi-pass solution, we devised a highly scalable visibility culling framework using Plen (open full item for complete abstract)
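    The visibility-estimation idea behind this work can be sketched in miniature. The following is a hypothetical CPU-side analogue of occlusion-query-based culling, not the dissertation's parallel implementation: data blocks are traversed front to back, a screen-space opacity buffer tracks what is already occluded, and any block whose footprint is fully opaque is culled before it would enter the visualization pipeline.

    ```python
    import numpy as np

    def cull_blocks(blocks, screen_shape):
        """Toy front-to-back visibility culling. Each block is
        (depth, (x0, x1, y0, y1), opacity) with a screen-space footprint;
        a block is culled when its footprint is already fully opaque."""
        opacity = np.zeros(screen_shape)
        visible = []
        for depth, (x0, x1, y0, y1), a in sorted(blocks):
            region = opacity[y0:y1, x0:x1]
            if np.all(region >= 0.999):          # fully occluded: skip it
                continue
            visible.append((depth, (x0, x1, y0, y1), a))
            # accumulate opacity over the block's footprint
            opacity[y0:y1, x0:x1] = 1.0 - (1.0 - region) * (1.0 - a)
        return visible
    ```

    In a hardware-accelerated variant, the opacity-buffer test is replaced by an occlusion query against bounding geometry, so the graphics card reports how many pixels of a block would survive the depth test before any volume data is fetched.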

    Committee: Han-Wei Shen (Advisor) Subjects: