Doctor of Philosophy, The Ohio State University, 2006, Computer and Information Science
The sizes of large data sets, ranging from gigabytes to terabytes, pose a formidable challenge to conventional volume visualization algorithms. Multiresolution rendering proves to be a viable solution to this challenge by reducing the actual amount of data sent to the rendering pipeline. However, previous multiresolution rendering algorithms are inherently sequential, which hinders their application in parallel environments, such as increasingly available PC clusters. Moreover, most existing algorithms for large volume visualization use data-based metrics for level-of-detail selection and provide very limited user interaction and control. There is a lack of techniques and tools for more effective level-of-detail selection and rendering. I present a multiresolution approach for representing, managing, selecting, and rendering large-scale three-dimensional steady and time-varying data sets. A multiresolution volume rendering algorithm is proposed to visualize large data sets in parallel environments while ensuring a well-balanced workload. A comprehensive image-based quality metric is introduced for quality-driven interactive level-of-detail selection and rendering of large data sets. Furthermore, a new visual navigation interface is presented for the user to examine, compare, and validate different level-of-detail selection algorithms. Future research focuses on transfer function design for large-scale time-varying data, including spatio-temporal data reduction, transfer function design, and user interface support for space-time data exploration.
Committee: Han-Wei Shen (Advisor)
Subjects: Computer Science