Search Results

(Total results 180)

  • 1. Lee, Teng-Yok Data Triage and Visual Analytics for Scientific Visualization

    Doctor of Philosophy, The Ohio State University, 2011, Computer Science and Engineering

    As the speed of computers continues to increase at a very fast rate, the size of data generated from scientific simulations has now reached petabytes ($10^{15}$ bytes) and beyond. Under such circumstances, no existing techniques can be used to perform effective data analysis at full precision. To analyze large-scale data sets, visual analytics techniques with effective summarization and a flexible interface are crucial in assisting the exploration of data at different levels of detail. To improve data access efficiency, summarization and triage are important components for categorizing data items according to their saliency, allowing the user to focus only on the relevant portion of the data. In this dissertation, several visualization and analysis techniques are presented to facilitate the analysis of multivariate time-varying data and flow fields. For multivariate time-varying data sets, data items are categorized based on their values over time to provide an effective overview of the time-varying phenomena. Based on similarity to a user-specified feature, dynamic phenomena across multiple variables in different spatial and temporal domains can be explored. To visualize flow fields, information theory is used to model the local flow complexity quantitatively. Based on this model, an information-aware visualization framework is designed to create images with different levels of visual focus according to the local flow complexity. By extending the measurement from object space to image space, visualization primitives can be further rearranged, leading to more effective visualization of salient flow features with less occlusion.
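    The information-theoretic modeling mentioned above can be sketched, under loose assumptions, as the Shannon entropy of a local flow-direction histogram; the function name, neighborhood representation, and bin count below are illustrative, not the dissertation's actual formulation.

```python
import numpy as np

def direction_entropy(vectors, n_bins=8):
    """Shannon entropy (bits) of a flow neighborhood's direction histogram.

    High entropy suggests complex local flow; low entropy, near-uniform flow.
    """
    angles = np.arctan2(vectors[:, 1], vectors[:, 0])        # [-pi, pi)
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]                                             # 0 * log 0 := 0
    return float(-(p * np.log2(p)).sum())

uniform = np.tile([1.0, 0.0], (100, 1))   # everywhere the same direction
direction_entropy(uniform)                # minimal entropy: no directional variety
```

    Regions scoring high on such a measure would then receive more visual focus than near-uniform regions.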

    Committee: Han-Wei Shen PhD (Advisor); Roger A. Crawfis PhD (Committee Chair); Raghu Machiraju PhD (Committee Chair) Subjects: Computer Science
  • 2. Woodring, Jonathan Visualization of Time-varying Scientific Data through Comparative Fusion and Temporal Behavior Analysis

    Doctor of Philosophy, The Ohio State University, 2009, Computer Science and Engineering

    Visualization of time-varying scientific and medical data traditionally has been done through animation or a series of still-frame renders. Animation and still-frame comparison are only minimally sufficient for effectively finding and comparing temporal trends, due to limitations such as short-term visual memory and the lack of analytical feedback. To improve time-varying analysis, several different visualization methods are described. For direct visual comparison of individual time steps, we introduce a rendering technique that fuses multiple time steps into a single dataset through projection and composition methods. This can be achieved through projection along time, and further generalized to high-dimensional space-time projection. Furthermore, time volumes (or multivariate data) can be compared through composition and set operations. To aid in the understanding of comparative time volumes, focus+context animation is used to reveal features in the data by utilizing human motion-perception capabilities. In addition to comparative and highlighting techniques, we also provide quantitative analysis of time-varying data via temporal behavior classification. We allow a user to visualize and explore their time-varying data as classes of multi-scale temporal trends. Through the analysis of temporal activity, we can also semi-automatically generate classifications (transfer functions) to be used in the visualization pipeline.
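    A minimal sketch of the projection-and-composition idea, assuming a toy NumPy array stands in for a time-varying dataset (the dissertation's actual renderer is far richer):

```python
import numpy as np

# Toy stand-in for a time-varying scalar dataset: 10 time steps over a
# 1D domain of 5 samples (real data would be 3D volumes per step).
rng = np.random.default_rng(0)
data = rng.random((10, 5))

# "Projection along time": fuse all steps into one frame so that a
# transient maximum from any time step survives in the fused image.
fused_max = data.max(axis=0)

# Set-style composition of two time steps via thresholded feature masks.
a, b = data[0] > 0.5, data[1] > 0.5
both = a & b      # intersection: feature present in both steps
either = a | b    # union: feature present in at least one step
```

    The fused frame and the set-composed masks can then each be rendered as a single still, sidestepping the short-term-memory limits of animation.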

    Committee: Han-Wei Shen PhD (Advisor); Roger Crawfis PhD (Committee Member); Rick Parent PhD (Committee Member) Subjects: Computer Science
  • 3. Yu, En Social Network Analysis Applied to Ontology 3D Visualization

    Master of Computer Science, Miami University, 2008, Computer Science and Systems Analysis

    OntoSELF was recently developed to provide 3D visualization of the underlying hierarchical structure of intensional ontologies. The extensions to OntoSELF focus on enhancing perception and facilitating high-level comprehension as well as low-level detail exploration. The extensions include various visualization cues to enhance perception, processing of user-defined relationships in addition to the standard hierarchical IS-A relationship, user filtering on which relationships to include in the visualization, and social network analysis (SNA) metrics for additional filtering and structuring criteria and for finding and better understanding important concepts of interest. To use standard SNA techniques, a preprocessing algorithm is used to project an m-mode n-plex ontology structure to a 1-mode 1-plex sociomatrix. A high-level abstraction algorithm based on the notion of "communities of interest" is provided to simplify the social network view of the ontology to a higher level of abstraction.
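    Two-mode-to-one-mode projection is standard SNA practice; as a hedged sketch (the thesis's m-mode n-plex algorithm is more general), a bipartite incidence matrix can be projected to a sociomatrix and a degree-centrality metric derived from it:

```python
import numpy as np

# Hypothetical 2-mode incidence matrix: rows = ontology concepts,
# columns = relationship instances each concept participates in.
B = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [0, 0, 1]])

# Classic 2-mode -> 1-mode projection: two concepts are tied when they
# co-occur in a relationship; the diagonal (self-ties) is zeroed out.
A = B @ B.T
np.fill_diagonal(A, 0)

# Degree centrality, a basic SNA metric usable for filtering concepts.
degree = (A > 0).sum(axis=1)
```

    Metrics computed on the resulting 1-mode sociomatrix can then drive filtering and structuring of the 3D view.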

    Committee: Valerie V. Cross PhD (Advisor); Mufit Ozden PhD (Committee Member); James D. Kiper PhD (Committee Member) Subjects: Computer Science
  • 4. Somasundaram, Ramanathan ONTOSELF: A 3D ONTOLOGY VISUALIZATION TOOL

    Master of Computer Science, Miami University, 2007, Computer Science and Systems Analysis

    Ontologies are frequently used to formalize the conceptualization of a domain. Numerous ontologies exist in a variety of domains. For example, the United Nations Standard Products and Services Code (UNSPSC) ontology and the North American Industry Classification System (NAICS) ontology are becoming widely used in e-commerce applications and contain on the order of thousands of product categories. The Gene Ontology (GO), with over 22,000 terms, is widely used in biomedical applications. Useful techniques for visualizing the overall structure are needed by both ontology users and creators to understand an ontology's structure at a high level and then navigate through its levels of complexity. Researchers in ontology visualization are beginning to recognize that ontologies demand their own specialized tools, with visualization techniques that depend strongly on both the characteristics of the presented information and the supported task requiring the visualization. This thesis develops a flexible ontology 3D visualization system called OntoSELF (Ontology Scrutiny Exploiting Layouts and Filtering) that allows a user to better understand the nature and identity, i.e., self, of an ontology. It provides a variety of weighting functions from concept-lattice drawing techniques and modifies the layout based on a user-selected weighting function. OntoSELF provides highly adjustable filtering capabilities based on structural metrics determined for each ontology concept. Through filtering and flexible layout, the user can then focus on particular areas of the ontology where concepts meet the criteria the user deems essential for better understanding its structure, either to hide or abstract away complexity or to reveal intricate details of the ontology. A variety of ontologies, varying in domain knowledge and size, and a set of topology-understanding tasks are used to investigate the performance of OntoSELF.
To make the testing process more effi (open full item for complete abstract)
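    OntoSELF's metric-based filtering can be illustrated very loosely as follows; the concept names and metric fields are invented for the example:

```python
# Hypothetical per-concept structural metrics for an ontology, and a
# user-adjustable filter in the spirit of OntoSELF's metric-based filtering.
concepts = {
    "Vehicle": {"depth": 1, "children": 12},
    "Car":     {"depth": 2, "children": 40},
    "Sedan":   {"depth": 3, "children": 0},
}

def filter_concepts(concepts, min_children=0, max_depth=99):
    """Keep only concepts whose structural metrics meet the user's criteria."""
    return [name for name, m in concepts.items()
            if m["children"] >= min_children and m["depth"] <= max_depth]

print(filter_concepts(concepts, min_children=10))  # ['Vehicle', 'Car']
```

    Hiding leaf-heavy or deep concepts this way is one means of abstracting away complexity before layout.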

    Committee: Valerie Cross (Advisor) Subjects: Computer Science
  • 5. He, Wenbin Exploration and Analysis of Ensemble Datasets with Statistical and Deep Learning Models

    Doctor of Philosophy, The Ohio State University, 2019, Computer Science and Engineering

    Ensemble simulations are becoming prevalent in various scientific and engineering disciplines, such as computational fluid dynamics, aerodynamics, climate, and weather research. Scientists routinely conduct a set of simulations with different configurations (e.g., initial/boundary conditions, parameter settings, or phenomenological models) and produce an ensemble of simulation outputs, namely an ensemble dataset. Ensemble datasets are extremely useful in studying the uncertainty of the simulation models and the sensitivities of the initial conditions and parameters. However, compared with deterministic scientific simulation data, visualizing and analyzing ensemble datasets are challenging because the ensemble datasets introduce extra dimensions into the field data (i.e., each spatial location is associated with multiple possible values instead of a deterministic value) and extra facets (e.g., simulation parameters). Over the last decade, various approaches have been proposed to visualize and analyze ensemble datasets from different perspectives. For example, the variability of isocontours is modeled and visualized by a collection of techniques. Coordinated multiple views are frequently used to visualize the simulation parameters and outputs simultaneously and linked together to study the influence of different simulation parameters. However, to handle different types of ensemble datasets (e.g., unstructured grid data, time-varying data, and extreme-scale data) and address various visualization tasks (e.g., uncertainty modeling and parameter space exploration), more work needs to be done in terms of ensemble data visualization and analysis. In this dissertation, we focus on visual exploration and analysis of ensemble datasets using statistical and deep learning models. Specifically, we explore and analyze ensemble datasets from three perspectives. 
First, we focus on modeling and visualizing the variability of ensemble members for 1) features (e.g., isosurfaces) (open full item for complete abstract)
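    The extra value dimension that ensembles add at each spatial location suggests simple per-location statistics; this sketch (with invented array shapes) shows the kind of summaries that uncertainty visualizations typically build on:

```python
import numpy as np

# Hypothetical ensemble: 8 members, each a 4x4 scalar field, so every
# spatial location carries a distribution of possible values.
rng = np.random.default_rng(1)
ensemble = rng.normal(size=(8, 4, 4))

mean = ensemble.mean(axis=0)   # central tendency per location
std = ensemble.std(axis=0)     # spread: a simple per-location uncertainty

# Probability that each location exceeds an isovalue, a common building
# block for isocontour-variability techniques.
p_exceed = (ensemble > 0.0).mean(axis=0)
```

    Probability fields like `p_exceed` are what contour-variability displays typically threshold or band.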

    Committee: Han-Wei Shen (Advisor); Rephael Wenger (Committee Member); Huamin Wang (Committee Member) Subjects: Computer Science
  • 6. Kamal, Aasim A Novel Approach to Air Corridor Estimation and Visualization for Autonomous Multi-UAV Flights

    Master of Science, University of Toledo, 2019, Engineering (Computer Science)

    The world is on the brink of an era of Unmanned Aerial Vehicles (UAVs), widely known to the public as drones, in which multiple UAVs will fly in the national airspace carrying out diverse tasks such as monitoring, surveillance, product deliveries, law enforcement, fertilizing crop fields, aerial photography, and transport. In such scenarios, where multiple UAVs fly in a small airspace, there is a possibility of collisions, path overlaps, mix-ups, and uncertainties in their flying routes. These routes could lie inside constructed air corridors to which the UAVs would be allotted, similar to the air corridors of commercial aircraft, which fly in designated corridors from one location to another without any uncertainty. There is a growing need to identify and construct such air corridors so that UAVs can fly in their respective corridors and avoid such mishaps. Designing a system able to construct air corridors for multiple UAVs served as the primary motivation behind proposing a novel approach to estimate and visualize air corridors for autonomous multi-UAV flights in an airspace. In addition, we studied various popular uncertainty visualization techniques and developed a way to incorporate uncertainty into the visualization of the air corridors. Furthermore, we provide a standalone web application with a user-friendly graphical user interface (GUI) developed using HTML5, CSS3, JavaScript, and CesiumJS, an open-source JavaScript library for visualizing 3-D world maps. Subsequently, we present the estimation and visualization results and discuss possible application areas where the proposed technique could be put to use. Finally, we discuss the summarized research findings and future research directions.
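    A corridor around a waypoint path can be sketched geometrically by offsetting the path along per-segment normals; this toy 2D version is an assumption-laden stand-in for the thesis's estimation approach, which works in 3D airspace:

```python
import numpy as np

def corridor_edges(path, width):
    """Offset a 2D waypoint path left and right by half the corridor width."""
    path = np.asarray(path, dtype=float)
    seg = np.diff(path, axis=0)
    seg = seg / np.linalg.norm(seg, axis=1, keepdims=True)   # unit tangents
    normals = np.stack([-seg[:, 1], seg[:, 0]], axis=1)      # left normals
    n = np.vstack([normals, normals[-1]])   # reuse last normal at the endpoint
    half = width / 2.0
    return path + half * n, path - half * n

# A straight west-to-east path with a 4-unit-wide corridor.
left, right = corridor_edges([(0, 0), (10, 0), (20, 0)], width=4)
```

    Widening `width` locally (e.g., by a per-waypoint uncertainty estimate) would be one way to fold uncertainty into the corridor geometry.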

    Committee: Ahmad Javaid (Committee Chair); Vijay Devabhaktuni (Committee Co-Chair); Devinder Kaur (Committee Member) Subjects: Computer Engineering; Computer Science
  • 7. Sickel, Jamie TPACK Development in Science Teacher Preparation: A Case Study in Queensland, Australia

    Doctor of Philosophy (PhD), Ohio University, 2016, Instructional Technology (Education)

    This study sought to identify key experiences that impact the development of technological pedagogical content knowledge (TPACK) of preservice secondary science teachers at a medium-sized university in Queensland, Australia. TPACK is a conceptual framework of a body of knowledge that teachers draw upon to influence practice; it is a dynamic and emergent form of knowledge that informs the employment of technology for teaching specific subject matter. This study employed an embedded case study approach, including delivery of a TPACK survey instrument and analysis of participant interviews, to identify the context-specific experiences that promote the development of TPACK among twelve preservice secondary science teachers. The research addresses a specific need cited in the literature, identifying TPACK impact factors, and provides a novel way to visualize TPACK development through contextual experiences. A novel approach to visually representing context-specific experiences and their influence on teacher knowledge, self-efficacy, values, and beliefs was employed. Three major findings are presented: 1) the majority of preservice secondary science teachers were unable to define the constructs of learning and science; 2) a focus on motivation and interest, paired with a disconnect between expressed and enacted pedagogical orientation, led to teacher-centered instruction augmented with superficial tactics aimed at generating interest; and 3) difficulty in integrating knowledge bases yielded lower TPACK self-efficacy, which has detrimental impacts on the instruction planned by preservice teachers for their students. Findings are directly aligned with participants' prior experience, compared to the relevant literature, and utilized to identify implications for teacher preparation as well as recommendations for future research.

    Committee: Teresa Franklin PhD (Advisor); David Moore PhD (Committee Member); John Hitchcock PhD (Committee Member); Sam Girton MFA (Committee Member) Subjects: Adult Education; Continuing Education; Education; Educational Evaluation; Educational Technology; Science Education; Teacher Education; Teaching
  • 8. Chaudhuri, Abon Geometric and Statistical Summaries for Big Data Visualization

    Doctor of Philosophy, The Ohio State University, 2013, Computer Science and Engineering

    In recent times, the visualization and data analysis paradigm is adapting fast to keep up with the rapid growth in computing power and data size. Modern scientific simulations run at massive scale to produce huge datasets, which must be analyzed and visualized by the domain experts to continue innovation. In the presence of large-scale data, it is important to identify and extract the informative regions at an early stage so that the following analysis algorithms, which are usually memory- and compute-intensive, can focus only on those regions. Transforming the raw data to a compact yet meaningful representation also helps to maintain the interactivity of the query and visualization of analysis results. In this dissertation, we propose a novel and general-purpose framework suitable for exploring large-scale data. We propose to use importance-based data summaries, which can substitute for the raw data to answer queries and drive visual exploration. Since the definition of importance depends on the nature of the data and the task at hand, we propose suitable statistical and geometric measures, or combinations of measures, to quantify importance and perform data reduction on scalar and vector field data. Our research demonstrates two instances of the proposed framework. The first instance applies to large numbers of streamlines computed from vector fields. We make the visual exploration of such data much easier compared to navigating through a cluttered 3D visualization of the raw data. In this case, we introduce a fractal-dimension-based metric called the box counting ratio, which quantifies the geometric complexity of streamlines (or parts of streamlines) by their space-filling capacity. We utilize this metric to extract, organize, and visualize streamlines of varying density and complexity hidden in a large number of streamlines. The extracted complex regions from the streamlines represent the data summaries in this case.
We organize and present them (open full item for complete abstract)
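    The box-counting idea behind the metric can be sketched as follows; the exact definition of the box counting ratio in the dissertation may differ, and the helix data and box sizes here are illustrative:

```python
import numpy as np

def box_count(points, box_size):
    """Number of cubes of edge `box_size` occupied by a 3D polyline's samples."""
    cells = np.floor(np.asarray(points) / box_size).astype(int)
    return len({tuple(c) for c in cells})

# How the occupied-box count grows as boxes shrink reflects a curve's
# space-filling capacity (its fractal dimension).
t = np.linspace(0, 4 * np.pi, 2000)
helix = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
n_coarse = box_count(helix, 1.0)
n_fine = box_count(helix, 0.25)
ratio = np.log(n_fine / n_coarse) / np.log(1.0 / 0.25)  # crude dimension estimate
```

    A smooth curve yields a ratio near 1; a streamline that wriggles to fill space scores higher and would be flagged as geometrically complex.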

    Committee: Han-Wei Shen (Advisor); Roger Crawfis (Committee Member); Rephael Wenger (Committee Member); Tom Peterka (Committee Member) Subjects: Computer Science
  • 9. Hughes, Tracey Visualizing Epistemic Structures of Interrogative Domain Models

    Master of Computing and Information Systems, Youngstown State University, 2008, Department of Computer Science and Information Systems

    In this paper, we explore the concept of epistemic visualization in interrogative domains. Epistemic visualization is the process and result of developing visual models that capture the structure, content, justification, and acquisition of knowledge obtained by a software agent in a knowledge-based system. This knowledge is the foundation from which the agent can respond to queries against a corpus containing questions and answers. The visualizations are therefore used to examine the quality of the software agent's knowledge. The visual models include justification and commitment artifacts as well as knowledge acquisition flow, and the visualization demarcates a priori and a posteriori knowledge. The knowledge of the software agent is stored in epistemic structures, knowledge representation schemes that support the basic concepts of knowledge as defined by the tripartite analysis of knowledge. Epistemic visualization is used to analyze the quality of the knowledge of a software agent in an interrogative domain. For our purposes, interrogative domains are hearings, trials, interrogations, personality tests, or any document source in which the primary content is question-and-answer pairs. In this paper, we introduce the Epistemic Structure Es that captures the agent's knowledge, and the visualization of that structure using common visualization techniques.
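    The tripartite (justified-true-belief) analysis referenced above can be loosely sketched as a data structure; the class and field names are invented for illustration and do not reproduce the paper's Epistemic Structure Es:

```python
from dataclasses import dataclass, field

@dataclass
class EpistemicEntry:
    """Toy record in the spirit of the tripartite analysis: the agent holds
    a belief, with a justification trail and an a-priori/a-posteriori tag."""
    proposition: str
    believed: bool
    justification: list = field(default_factory=list)
    a_priori: bool = False          # else acquired from the Q&A corpus

    def counts_as_knowledge(self, is_true: bool) -> bool:
        # Tripartite analysis: knowledge = justified + true + believed.
        return self.believed and is_true and bool(self.justification)

e = EpistemicEntry("The witness left at 9pm", True,
                   justification=["Q14/A14 of hearing transcript"])
```

    Visualizing fields like `justification` and `a_priori` across many such entries is the kind of demarcation the abstract describes.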

    Committee: Alina Lazar PhD (Committee Chair); John Sullins PhD (Committee Member); Yong Zhang PhD (Committee Member) Subjects: Artificial Intelligence; Computer Science; Information Systems; Linguistics; Technology
  • 10. Xu, Jiayi Scalable Extraction and Visualization of Scientific Features with Load-Balanced Parallelism

    Doctor of Philosophy, The Ohio State University, 2021, Computer Science and Engineering

    Extracting and visualizing features from scientific data can help scientists derive valuable insights. An extraction and visualization pipeline usually includes three steps: (1) scientific feature detection, (2) union-find for features' connected component labeling, and (3) visualization and analysis. As the scale of scientific data generated by experiments and simulations grows, it has become common practice to use distributed computing to handle large-scale data with data-parallelism, where data is partitioned and distributed over parallel processors. Three challenges arise for feature extraction and visualization in scientific applications. First, traditional feature detectors may not be effective and robust enough to capture features of interest across different scientific settings, because scientific features usually are highly nonlinear and recognized by domain scientists' soft knowledge. Second, existing union-find algorithms are either serial or not scalable enough to deal with extreme-scale datasets generated in the modern era. Third, existing parallel feature extraction and visualization algorithms fail to automatically reduce communication costs when optimizing the performance of processing units. This dissertation studies scalable scientific feature extraction and visualization to tackle the three challenges. First, we design human-centric interactive visual analytics based on scientists' requirements to address domain-specific feature detection and tracking. We focus on an essential problem in earth sciences: spatiotemporal analysis of viscous and gravitational fingers. Viscous and gravitational flow instabilities cause a displacement front to break up into finger-like fluids. Previously, scientists mainly detected the finger features using density thresholding, where scientists specify certain density thresholds and extract super-level sets from input density scalar fields.
However, the results of density thresholding are sensitive to the select (open full item for complete abstract)
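    Union-find, the second pipeline step named above, is a classic data structure; this serial sketch with path compression and union by size shows the building block (the dissertation's contribution is a scalable distributed variant, not reproduced here):

```python
class UnionFind:
    """Disjoint-set forest for connected-component labeling of feature cells."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:   # union by size
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)   # components: {0, 1, 2}, {3}, {4}
```

    In feature extraction, each detected cell is a node and adjacent feature cells are unioned, so each root labels one connected feature.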

    Committee: Han-Wei Shen (Advisor); Rephael Wenger (Committee Member); Jian Chen (Committee Member) Subjects: Computer Engineering; Computer Science
  • 11. Ma, Chao Visual analytic technique and system of spatiotemporal-semantic events

    PHD, Kent State University, 2020, College of Arts and Sciences / Department of Computer Science

    Data containing geographical locations and times associated with natural language texts, such as geotagged tweets, travel blogs, and crime reports, are generally recognized as spatiotemporal-semantic events. Many research fields have tried to gain valuable insights from these data, and many techniques and methods have been introduced over the past decade. In computer science, the study of spatiotemporal-semantic events is one of the most active topics in visualization and visual analytics. Text mining and data mining provide abundant methods for finding meaningful knowledge and insights in the semantic information of these data. Despite the many contributions in this research field, there is still a lack of visually intuitive applications and approaches that allow frontline users, such as police, health officers, and social workers, to freely navigate, effectively utilize, and analyze their spatiotemporal-semantic data, especially at the community level. In this thesis, multiple visual analytics (VA) solutions are introduced: NeighborVis, CLEVis, and a new lens-based visual interaction technique, GTMapLens, to help frontline users harness semantic-rich spatiotemporal data. The development of all applications followed requirements analysis and initial prototype evaluation. Text mining, topic modeling, hierarchical geospatial data indexing, and many new visualization methods are studied and discussed along with these VA systems. The visual design is guided by requirements analysis with a cohort of multidisciplinary domain experts. Evaluation is presented with real-world datasets to show the usability and effectiveness of the systems.
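    Hierarchical geospatial indexing of spatiotemporal-semantic events can be sketched with a multi-resolution grid; the key scheme and event records below are invented for illustration:

```python
import math
from collections import defaultdict

def cell_key(lat, lon, level):
    """Grid cell id at a given level; resolution doubles with each level."""
    res = 2 ** level                     # grid cells per degree at this level
    return (level, math.floor(lat * res), math.floor(lon * res))

# Index each (lat, lon, text) event at every level, so coarse cells can
# drive overview views and fine cells community-level views.
index = defaultdict(list)
events = [(41.15, -81.36, "road closed near campus"),
          (41.40, -81.10, "flooding reported downtown")]
for lat, lon, text in events:
    for level in range(3):               # three resolutions: 1, 2, 4 cells/degree
        index[cell_key(lat, lon, level)].append(text)
```

    A lens-style interaction can then pull the texts of just the cells under the lens at the appropriate level.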

    Committee: Ye Zhao (Committee Chair); Xiang Lian (Committee Member); Jong-Hoon Kim (Committee Member); Xinyue Ye (Committee Member); Jay Lee (Committee Member) Subjects: Computer Science
  • 12. Khan, Meraj Ahmed Exploiting Human Factors and UI Characteristics for Interactive Data Exploration

    Doctor of Philosophy, The Ohio State University, 2019, Computer Science and Engineering

    The development of new data interaction modalities and concurrent advancements in hardware and database technology have dramatically changed the way users interact with data. Data analysis tasks have now turned into interactive and instantaneous explorations of the query-result space. Database systems are not designed to maintain interactive latency while handling the unique and unprecedented workloads posed by modern interaction modalities. Our work strives to bridge the gap between end-users and database systems, allowing users to engage in interactive data exploration without an undue cognitive burden in processing the system feedback. We utilize user-interaction behavior, user-interface characteristics, and the human factors that drive the data exploration process to handle the challenges of interactive latency and cognitive overload in different interactive data exploration scenarios. This dissertation presents a middleware component – Flux Capacitor – that insulates the backend from bursty and query-intensive workloads generated by modern web, mobile, touch, and gesture-driven next-generation interfaces. Flux Capacitor uses prefetching and caching strategies informed by the inherent physics metaphor of UI widgets, such as friction and inertia in range sliders and maps, and by typical user-interaction behavior patterns, enabling low interaction response times while intelligently trading off accuracy when required. We present a data substrate for addressing the unique data management and user interaction challenges posed by Augmented Reality (AR) interfaces for data – DreamStore. The platform incorporates optimizations for AR workload characteristics at various layers of the data stack, treating AR queries as first-class queries.
DreamStore provides a user-focus based mechanism to handle visual clutter – a form of cognitive overload and enables interactive latency for user actions through prefetching and caching strategies that utiliz (open full item for complete abstract)
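    The physics-metaphor-informed prefetching can be sketched for a range slider: predict where the slider will coast to under friction, then warm the cache for that range first. The friction constant, bucket size, and function names are assumptions, not Flux Capacitor's actual policy:

```python
def predicted_rest_position(pos, velocity, friction=0.8, steps=20):
    """Where a flicked slider will coast to as its inertia decays each tick."""
    for _ in range(steps):
        pos += velocity
        velocity *= friction
    return pos

def prefetch_order(pos, velocity, bucket=10):
    """Fetch the predicted landing bucket first, then its neighbors."""
    target = predicted_rest_position(pos, velocity)
    center = round(target / bucket) * bucket
    return [center, center - bucket, center + bucket]

print(prefetch_order(pos=100, velocity=5))
```

    Issuing the predicted bucket's query before the slider stops is what hides backend latency from the user.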

    Committee: Arnab Nandi (Advisor); Srinivasan Parthasarathy (Committee Member); Han-Wei Shen (Committee Member) Subjects: Computer Science
  • 13. AL-Dohuki, Shamal INTERACTIVE VISUAL QUERYING AND ANALYSIS FOR URBAN TRAJECTORY DATA

    PHD, Kent State University, 2019, College of Arts and Sciences / Department of Computer Science

    Advanced sensing technologies and computing infrastructures have produced a variety of trajectory data of moving objects in urban spaces. One type of this data is taxi trajectory data. It records real-time moving paths sampled as a series of positions associated with vehicle attributes over urban road networks. Such data is big, spatial, temporal, and unstructured, and it contains abundant knowledge about a city and its citizens. Exploratory visualization systems are needed to study taxi trajectories with efficient user interaction and instant visual feedback. The extracted information can be utilized in many important and practical applications to optimize urban planning and improve quality of life and the environment. As its primary contribution, this thesis presents a set of visual analytics solutions with different approaches to interacting with massive taxi trajectory data, allowing analysts to look at the data from different perspectives and complete different analytical tasks. Our approaches focus on how people directly interact with the data store, query, and visualize the results, and they support practitioners, researchers, and decision-makers in advancing transportation and urban studies in the new era of the smart city. First, we present SemanticTraj, a new method for managing and visualizing taxi trajectory data in an intuitive, semantically rich, and efficient manner. In particular, taxi trajectories are converted into taxi documents through a textualization transformation process. This process maps global positioning system (GPS) points into a series of street/POI names and pickup/drop-off locations. It also converts vehicle speeds into user-defined descriptive terms. Then, a corpus of taxi documents is formed and indexed to enable flexible semantic queries over a text search engine.
    Second, we present a visual analytics system, named QuteVis, which helps domain users query and examine traffic patterns from large-scale traffic data in an urban transpor (open full item for complete abstract)
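    The textualization transformation behind SemanticTraj can be sketched as follows; the point-to-street lookup is stubbed with a tiny table (a real system would use map matching), and the speed terms are invented:

```python
def speed_term(kmh):
    """Map a vehicle speed to a user-defined descriptive term."""
    if kmh < 10:
        return "slow"
    if kmh < 50:
        return "moderate"
    return "fast"

# Stub lookup table; a real system would map-match GPS points to streets.
street_of = {(41.0, -81.3): "Main St", (41.0, -81.2): "Oak Ave"}

def textualize(samples):
    """Turn (lat, lon, speed) samples into one searchable 'taxi document'."""
    words = []
    for lat, lon, kmh in samples:
        words += [street_of.get((lat, lon), "unknown"), speed_term(kmh)]
    return " ".join(words)

doc = textualize([(41.0, -81.3, 8.0), (41.0, -81.2, 42.0)])
```

    The resulting document can then be indexed and queried like any other text, which is what makes semantic queries over trajectories possible.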

    Committee: Ye Zhao (Committee Chair); Cheng-Chang Lu (Committee Member); Xiang Lian (Committee Member); Xinyue Ye (Committee Member); Xiaoling Pu (Committee Member) Subjects: Computer Science
  • 14. Anderson, James Interactive Visualization of Search Results of Large Document Sets

    Master of Science in Computer Engineering (MSCE), Wright State University, 2018, Computer Engineering

    When presented with many search results, finding information or patterns within the data poses a challenge. This thesis presents the design, implementation, and evaluation of a visualization enabling users to browse through voluminous information and comprehend the data. Implemented with the JavaScript library Data-Driven Documents (D3), the visualization represents the search as clusters of similar documents grouped into bubbles, with the contents depicted as word clouds. Highly interactive features such as touch gestures and intuitive menu actions allow for expeditious exploration of the search results. Other features include drag-and-drop functionality for articles among bubbles, merging nodes, and refining the search by selecting specific terms or articles to receive more similar results. A user study consisting of a survey questionnaire and user-tracking data demonstrated that, in comparison to a standard text browser for viewing search results, the visualization performed comparably or better on most metrics.
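    Grouping similar documents into bubbles presupposes a document-similarity measure; a minimal term-frequency cosine similarity, shown here with invented toy documents, is one common choice (the abstract does not specify the clustering internals):

```python
import math
from collections import Counter

def tf(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[term] * b[term] for term in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

docs = ["solar panel energy grid", "wind energy turbine grid",
        "baking bread yeast flour"]
vecs = [tf(d) for d in docs]
```

    Documents whose pairwise similarity exceeds a threshold would land in the same bubble, and each bubble's aggregated term counts feed its word cloud.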

    Committee: Thomas Wischgoll Ph.D. (Advisor); Michael Raymer Ph.D. (Committee Member); John Gallagher Ph.D. (Committee Member) Subjects: Computer Engineering; Computer Science
  • 15. Karaca, Ece Interactive Data Visualization: Applications Used to Illuminate the Environmental Effects of the Syrian War

    Master of Fine Arts, The Ohio State University, 2018, Design

    This research is an investigation of interactive data visualization systems, methods, and techniques, guided by a practice-led design research process, case study applications, and qualitative user research. The study specifically documents the design process of interactive data visualization applications developed to illuminate the environmental effects of the Syrian War, using interaction design techniques, which are a powerful tool for mapping complex events and dynamic situations. Design, ethnic culture, and environmental issues have informed this thesis study, guided by design for social change. Preparatory work on dance and digital design contributed to the technical understanding needed for this project, which became Interactive Data Visualization: Applications Used to Illuminate the Environmental Effects of the Syrian War. As a designer from Turkey who has closely observed the impact of wars in the Middle East, I believe I have a responsibility to use creative means to facilitate social development and change. My aim was to practice methods of data visualization through case study applications that reveal the long-term impacts of urban destruction caused by the Syrian War. The data visualization applications were created from a report by the PAX for Peace organization. My first case study application was presented in a 3-dimensional construct, enabling viewers to interact with data through touch technologies. My second application was an interactive webpage designed with the same data set in a simpler layout. To assess the applications, usability testing was implemented to compare and analyze the possible improvements and future applications of the user interfaces.
The intent of gathering feedback was to discover whether data visualization, interactive interfaces, and design-thinking methods have the potential to improve understanding, communication and innovative thinking by enhancing the effectiveness of the delivery of complex, (open full item for complete abstract)

    Committee: Brian Stone (Advisor); Maria Palazzi (Committee Member); Yvette Shen (Committee Member) Subjects: Design
  • 16. Xie, Weiyi A Geometric Approach to Visualization of Variability in Univariate and Multivariate Functional Data

    Doctor of Philosophy, The Ohio State University, 2017, Statistics

    This dissertation describes a new method for the construction and visualization of geometrically-motivated displays for univariate functional data and multivariate curve data. For univariate functional data, we use a recent functional data analysis framework, based on a representation of functions called square-root slope functions. We decompose observed variation in functional data into three main components: amplitude, phase, and vertical translation. For multivariate curve data, we use a similar recent elastic curve data analysis framework, based on square-root velocity functions, to decompose variability in curve data into five main components: location, scale, shape, orientation, and reparametrization. We then construct separate displays for each component, using the geometry and metric of each representation space, based on a novel definition of the median, the two quartiles, and extreme observations. The outlyingness of function and curve data is a very complex concept. Thus, we propose to separately identify outliers based on each of the main components after decomposition. We provide a variety of visualization tools for the proposed displays, including surface plots for the amplitude and phase components of univariate functional data, and circular plots for the orientation and seed components of bivariate curve data, among others. We evaluate the proposed methods using extensive simulations and then focus our attention on multiple real data applications, including exploratory data analysis of sea surface temperature functions, electrocardiogram biosignals, growth curves, gait functions, and respiration functions. We also study variability in simulated 3D spirals, handwritten signatures, 3D fibers from Diffusion Tensor Magnetic Resonance Imaging (DT-MRI), and 3D Lorenz attractor curves.
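The square-root representations named above can be stated compactly. The formulas below follow the standard elastic functional data analysis definitions (as in Srivastava et al.), not necessarily this dissertation's exact notation:

```latex
% Square-root slope function (SRSF) of f : [0,1] \to \mathbb{R}
q(t) = \operatorname{sign}\!\big(\dot f(t)\big)\,\sqrt{\big|\dot f(t)\big|}

% Square-root velocity function (SRVF) of a curve \beta : [0,1] \to \mathbb{R}^n
q(t) = \frac{\dot\beta(t)}{\sqrt{\|\dot\beta(t)\|}}

% A warping \gamma acts on q by isometry, which is what makes the
% phase--amplitude (reparametrization--shape) separation well defined:
(q, \gamma) \mapsto (q \circ \gamma)\,\sqrt{\dot\gamma}
```

Because warping acts by isometry under these representations, distances between the transformed functions can be minimized over reparametrizations, separating amplitude variation from phase variation.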

    Committee: Sebastian Kurtek (Advisor); Radu Herbei (Committee Member); Laura Kubatko (Committee Member); Oksana Chkrebtii (Committee Member) Subjects: Statistics
  • 17. Koya, Bharath Kumar Sched-ITS: An Interactive Tutoring System to Teach CPU Scheduling Concepts in an Operating Systems Course

    Master of Science (MS), Wright State University, 2017, Computer Science

    Operating systems is an essential course in the computer science curriculum, helping students develop a mental model of how computer operating systems work. The internal mechanisms and processes of an operating system (OS) are often complex, non-deterministic, and intangible, which makes them difficult for students to understand. One such concept is central processing unit (CPU) scheduling, which forms the basis of multiprogramming in an OS. In practice, OS courses involve classroom lectures describing high-level abstractions of the concepts, and students complete programming assignments to apply the material in a more concrete way. Depending on the programming assignments, this approach may leave students with only a theoretical understanding of OS ideas, which may differ from how these concepts are actually implemented in an OS. What many students need is practical knowledge of OS implementation to supplement the high-level presentations of concepts taught in class or presented in a textbook. To bridge the gap between operating system theory and practical implementation, this research describes the development of an interactive simulation that presents the theory of CPU scheduling through visualizations and simulations. This thesis discusses a prototype interactive tutoring system (ITS) named Sched-ITS. The tool covers the important CPU scheduling algorithms: first-come, first-served (FCFS), round robin (RR), shortest job first (SJF), shortest remaining time first (SRTF), priority with pre-emption, and priority without pre-emption. Sched-ITS also provides graphical visualization of how context switches occur during CPU scheduling in a real operating system. Sched-ITS uses the JavaFX framework for visualization and the perf tool for tracing an OS's scheduling activities. It presents the scheduling activities of background processes as well as pre-defined or user-defined processes. 
Sched-ITS can display schedul (open full item for complete abstract)
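To make the scheduling policies concrete, here is a minimal sketch (not Sched-ITS itself, which is a JavaFX application) of two of the algorithms the abstract lists, FCFS and non-preemptive SJF, simulated over hypothetical (name, arrival, burst) tuples to produce the kind of Gantt-chart schedule such a tutoring system visualizes:

```python
def fcfs(procs):
    """First-come, first-served: run jobs in arrival order.

    Returns a Gantt list of (name, start, finish) tuples."""
    schedule, clock = [], 0
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        clock = max(clock, arrival)          # CPU may sit idle until arrival
        schedule.append((name, clock, clock + burst))
        clock += burst
    return schedule

def sjf(procs):
    """Non-preemptive shortest-job-first: of the jobs that have
    arrived, always run the one with the smallest burst time."""
    pending = sorted(procs, key=lambda p: p[1])
    schedule, clock = [], 0
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                        # nothing arrived yet: jump ahead
            clock = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])
        pending.remove(job)
        name, _, burst = job
        schedule.append((name, clock, clock + burst))
        clock += burst
    return schedule

# Hypothetical workload: P2 and P3 arrive while P1 is running.
jobs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]
print(fcfs(jobs))  # → [('P1', 0, 7), ('P2', 7, 11), ('P3', 11, 12)]
print(sjf(jobs))   # → [('P1', 0, 7), ('P3', 7, 8), ('P2', 8, 12)]
```

Under SJF the short job P3 jumps ahead of P2, which is exactly the kind of reordering a Gantt-chart visualization makes visible at a glance.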

    Committee: Adam R. Bryant Ph.D. (Committee Chair); Mateen M. Rizki Ph.D. (Committee Member); Yong Pei Ph.D. (Committee Member) Subjects: Computer Science
  • 18. Kaczka Jennings, Rachel vizSlice: An Approach for Understanding Slicing Data via Visualization

    Master of Science, Miami University, 2017, Computer Science and Software Engineering

    Several approaches have been suggested for computing program slices based on different perspectives, including forward slicing, backward slicing, static slicing, and dynamic slicing. The applications of slicing are numerous, including testing, effort estimation, and impact analysis. Surprisingly, given the maturity of slicing, few approaches exist for visualizing slices. Here we present our research on visualizing large systems based on program slicing. In particular, we use treemaps to facilitate hierarchical, slicing-based navigation; bipartite graphs to facilitate visual impact analysis over a given variable or line of code; parallel coordinates to facilitate visual impact analysis over code blocks or variable groupings; and a text-based code browser to provide detailed context for the relevant visualizations. We believe our tools support various software maintenance tasks, including providing analysts an interactive visualization of the impact of potential changes, thus allowing developers to plan maintenance accordingly. We evaluate the research by assessing usability through a think-aloud protocol and a heuristic evaluation. Our results indicate users could effectively complete the evaluation tasks we provided, and the visual idioms utilized in vizSlice were effective at communicating the underlying data to them. However, controls for these visualizations need improvement in both affordance and visibility. Regardless of any difficulties users experienced with vizSlice, users consistently rated the system positively on the measured heuristics. We provide insights on these results, future plans for improving vizSlice, and guidance for future research on visualizing program slices.
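For readers unfamiliar with the underlying data vizSlice visualizes, a backward slice can be sketched as reverse reachability over a dependence graph. The representation below (a plain dict of line-to-dependency edges) is a hypothetical simplification, not vizSlice's data model:

```python
from collections import deque

def backward_slice(deps, criterion):
    """Backward slice of `criterion` over a dependence graph.

    deps maps each line number to the set of lines it (data- or
    control-) depends on; the slice is everything reachable from
    the criterion by following those edges."""
    seen, work = {criterion}, deque([criterion])
    while work:
        line = work.popleft()
        for dep in deps.get(line, ()):
            if dep not in seen:
                seen.add(dep)
                work.append(dep)
    return seen

# Toy program:
#   1: x = input()    2: y = 2    3: z = x + y    4: print(z)
deps = {3: {1, 2}, 4: {3}}
print(sorted(backward_slice(deps, 4)))  # → [1, 2, 3, 4]
```

Sets like this one, computed for every variable and line in a large system, are what make hierarchical views such as treemaps and bipartite impact graphs necessary.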

    Committee: Gerald Gannod (Advisor); Hakam Alomari (Committee Member); Matthew Stephan (Committee Member); James Kiper (Committee Member) Subjects: Computer Science
  • 19. Lu, Kewei Distribution-based Exploration and Visualization of Large-scale Vector and Multivariate Fields

    Doctor of Philosophy, The Ohio State University, 2017, Computer Science and Engineering

    Due to the ever-increasing computing power of the last few decades, the size of scientific data produced by various scientific simulations has been growing rapidly. As a result, effective techniques to visualize and explore such large-scale scientific data are becoming more and more important to understanding the data. However, for data at such a large scale, effective analysis and visualization is a non-trivial task for several reasons. First, it is often time-consuming and memory-intensive to perform visualization and analysis directly on the original data. Second, as the data become large and complex, visualization usually suffers from visual clutter and occlusion, which make it difficult for users to understand the data. To address these challenges, this dissertation proposes a distribution-based, query-driven framework to visualize and analyze large-scale scientific data. We propose using statistical distributions to summarize large-scale data sets. The summarized data then substitutes for the original data to support efficient, interactive query-driven visualization that is often free of occlusion. In this dissertation, the proposed framework is applied to flow fields and multivariate scalar fields. We first demonstrate the application of the framework to flow fields. For a flow field, the statistical data summarization is computed from geometries such as streamlines and stream surfaces derived from the field; streamlines and stream surfaces are two popular methods for visualizing flow fields. When the data size is large, distributed-memory parallelism is usually needed. In this dissertation, a new scalable algorithm is proposed to compute stream surfaces from large-scale flow fields efficiently on distributed-memory machines. 
After we obtain a large number of computed streamlines or stream surfaces, a direct visualization of all the densely computed geometries is seldom useful due to visua (open full item for complete abstract)
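As a rough illustration of the distribution-based idea (the block size, binning, and 2D setting here are assumptions for the sketch, not the dissertation's actual scheme), a vector field can be partitioned into blocks, each summarized by a histogram of flow directions; queries then run against the compact histograms rather than the raw vectors:

```python
import numpy as np

def block_histograms(u, v, block=8, bins=12):
    """Summarize a 2D vector field (u, v) by per-block histograms of
    flow angle. Returns an array of shape (n_block_rows, n_block_cols, bins)."""
    ang = np.arctan2(v, u)                       # flow angles in [-pi, pi]
    nby, nbx = u.shape[0] // block, u.shape[1] // block
    hists = np.empty((nby, nbx, bins))
    for i in range(nby):
        for j in range(nbx):
            tile = ang[i*block:(i+1)*block, j*block:(j+1)*block]
            hists[i, j], _ = np.histogram(
                tile, bins=bins, range=(-np.pi, np.pi), density=True)
    return hists

# Uniform flow in +x: every block's histogram peaks in the bin containing 0.
u, v = np.ones((32, 32)), np.zeros((32, 32))
h = block_histograms(u, v)
print(h.shape)  # → (4, 4, 12)
```

A query such as "find blocks whose dominant flow direction is roughly +x" becomes a cheap per-block histogram test, independent of the original field's resolution, which is the efficiency the summarization buys.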

    Committee: Han-Wei Shen (Advisor); Yusu Wang (Committee Member); Ponnuswamy Sadayappan (Committee Member) Subjects: Computer Engineering; Computer Science
  • 20. Tong, Xin Interactive Visual Clutter Management in Scientific Visualization

    Doctor of Philosophy, The Ohio State University, 2016, Computer Science and Engineering

    Scientists visualize their data and interact with it on computers in order to understand it thoroughly. Nowadays, data have become so large and complex that it is impossible to display an entire data set in a single image. Scientific visualization often suffers from visual clutter because of high spatial resolution/dimension and temporal resolution. Interacting with visualizations of large data, on the other hand, allows users to dynamically explore different parts of the data and gradually understand all the information in it. Information congestion and visual clutter exist in visualizations of many kinds of data, such as flow field data, tensor field data, and time-varying data. Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines: displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few risks losing important features. Glyphs, a powerful multivariate visualization technique, convey data through their visual channels, but placing a large number of glyphs over the entire 3D space results in occlusion and visual clutter that make the visualization ineffective. To avoid occlusion in streamline and glyph visualization, we propose a view-dependent interactive 3D lens that removes occluding streamlines/glyphs by pulling them aside through animations. High-resolution simulations are capable of generating very large vector fields that are expensive to store and analyze. In addition, the noise and/or uncertainty contained in the data often degrades the quality of visualization by producing visual clutter that interferes with both the interpretation and identification of important features. Instead, we can store the distributions of many vector orientations and visualize those distributions with 3D glyphs, which largely reduces visual clutter. 
Empowered by rapid advance of high performance computer architectures and software, it is (open full item for complete abstract)
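The core geometric move of such a lens can be sketched very simply. This 2D screen-space version is a hedged illustration of the general idea, not the authors' view-dependent 3D implementation: points projected inside a circular lens are pushed radially out to its rim, clearing the focus region while keeping the displaced context nearby:

```python
import numpy as np

def lens_displace(points, center, radius):
    """Push 2D points that fall inside a circular lens out to its rim;
    points outside the lens are left unchanged."""
    pts = np.asarray(points, dtype=float).copy()
    offsets = pts - center
    dist = np.linalg.norm(offsets, axis=1)
    inside = (dist < radius) & (dist > 0)        # skip the exact center
    pts[inside] = center + offsets[inside] / dist[inside, None] * radius
    return pts

center = np.array([0.0, 0.0])
pts = lens_displace([[0.1, 0.0], [2.0, 0.0]], center, radius=1.0)
print(pts)  # → [[1. 0.] [2. 0.]]  (first point pushed to the rim)
```

In an interactive tool the displacement would be animated and recomputed per view direction, so occluders slide aside as the lens moves rather than disappearing outright.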

    Committee: Han-Wei Shen (Advisor); Huamin Wang (Committee Member); Arnab Nandi (Committee Member) Subjects: Computer Engineering; Computer Science