Search Results

(Total results 16)


  • 1. AL-Dohuki, Shamal INTERACTIVE VISUAL QUERYING AND ANALYSIS FOR URBAN TRAJECTORY DATA

    PHD, Kent State University, 2019, College of Arts and Sciences / Department of Computer Science

    Advanced sensing technologies and computing infrastructures have produced a variety of trajectory data of moving objects in urban spaces. One type of this data is taxi trajectory data. It records real-time moving paths sampled as a series of positions associated with vehicle attributes over urban road networks. Such data is big, spatial, temporal, and unstructured, and it contains abundant knowledge about a city and its citizens. Exploratory visualization systems are needed to study taxi trajectories with efficient user interaction and instant visual feedback. The extracted information can be utilized in many important and practical applications to optimize urban planning, improve quality of life, and protect the environment. As its primary contribution, this thesis presents a set of visual analytics solutions with different approaches to interacting with massive taxi trajectory data, allowing analysts to look at the data from different perspectives and complete different analytical tasks. Our approaches focus on how people directly interact with the data store, query and visualize the results, and support practitioners, researchers, and decision-makers in advancing transportation and urban studies in the new era of the smart city. First, we present SemanticTraj, a new method for managing and visualizing taxi trajectory data in an intuitive, semantically rich, and efficient manner. In particular, taxi trajectories are converted into taxi documents through a textualization transformation process. This process maps global positioning system (GPS) points into a series of street/POI names and pickup/drop-off locations. It also converts vehicle speeds into user-defined descriptive terms. Then, a corpus of taxi documents is formed and indexed to enable flexible semantic queries over a text search engine. 
Second, we present a visual analytics system, named QuteVis, which enables domain users to query and examine traffic patterns from large-scale traffic data in an urban transpor (open full item for complete abstract)

    Committee: Ye Zhao (Committee Chair); Cheng-Chang Lu (Committee Member); Xiang Lian (Committee Member); Xinyue Ye (Committee Member); Xiaoling Pu (Committee Member) Subjects: Computer Science
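The textualization process the SemanticTraj abstract describes can be sketched in a few lines. Everything below is an illustrative placeholder, not the thesis's implementation: the street lookup, the speed thresholds, and the AND-style query are invented for the example.

```python
# Sketch of "textualization": GPS samples become words in a taxi document,
# which an inverted index can then answer semantic queries over.

def speed_term(kmh):
    """Map a numeric speed to a user-defined descriptive term (thresholds invented)."""
    if kmh < 15:
        return "slow"
    if kmh < 45:
        return "moderate"
    return "fast"

def textualize(trajectory, street_of):
    """Convert (lat, lon, kmh) samples into a 'taxi document' of words."""
    words = []
    for lat, lon, kmh in trajectory:
        words.append(street_of((lat, lon)))  # hypothetical street/POI lookup
        words.append(speed_term(kmh))
    return " ".join(words)

def build_index(docs):
    """Inverted index: word -> set of document ids."""
    index = {}
    for doc_id, text in docs.items():
        for word in text.split():
            index.setdefault(word, set()).add(doc_id)
    return index

def query(index, *words):
    """Documents containing all query words (a simple AND query)."""
    sets = [index.get(w, set()) for w in words]
    return set.intersection(*sets) if sets else set()
```

With such an index, a query like "which taxis moved fast on Oak Avenue" reduces to a plain text-search intersection, which is the flexibility the abstract attributes to indexing taxi documents in a text search engine.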
  • 2. Kamw, Farah UTILIZING BIG TRAJECTORY DATA FOR URBAN VISUAL ANALYTICS AND ACCESSIBILITY STUDIES

    PHD, Kent State University, 2019, College of Arts and Sciences / Department of Computer Science

    Massive urban trajectories of humans and vehicles, together with road networks and Points of Interest (POIs), have been used in a wide range of research by transportation engineers and urban planning professionals. This has contributed to improving urban planning, transportation management, and knowledge of human dynamics. Interactive visual analytics tools allow a variety of users to conduct iterative visual studies over big data with intuitive visual representations and convenient interactions. Typically, the visual analytics tasks are conducted in three main phases: (1) preprocessing and preparing raw trajectory data with cleaning, enrichment, aggregation, and transformations; (2) developing efficient data structures and query operations to support interactive visual querying and analysis over big data; (3) designing a visual interface with effective and convenient human-computer interactions. Firstly, this dissertation develops data preprocessing tools for various trajectories, road networks, and POIs, which can be used directly by general users through a web-based system. Users can directly upload raw trajectory data, while the system automatically fetches corresponding road segment data from OpenStreetMap (OSM), extracts zip code regions, or creates rectangular grid regions to couple the raw GPS data with geographical context. The system also automatically matches the trajectories with these road segments or regions. Secondly, effective data models are designed to store and manage heterogeneous urban data in a spatial database called the Trajectory DataBase (TrajBase). The key contribution is a geo-indexing scheme based on trajectories and road segments (or regions) for trajectory-based urban study. This scheme supports fast spatial-temporal queries and visualization, whereas traditional geo-indexing schemes are mostly designed for point-based geo-data. 
Thirdly, based on the proposed data models and tools, visual analytics queries and functions a (open full item for complete abstract)

    Committee: Ye Zhao Dr. (Advisor); Feodor Dragan Dr. (Committee Member); Arden Ruttan Dr. (Committee Member); Xinyue Ye Dr. (Committee Member); Wei Li Dr. (Committee Member) Subjects: Computer Science
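The grid-region idea in the TrajBase abstract (coupling raw GPS points to rectangular cells so that spatial queries avoid scanning all trajectories) can be conveyed with a minimal sketch. The cell size, class name, and API below are assumptions for illustration; the dissertation's scheme also indexes road segments and temporal extents.

```python
# Minimal grid-based geo-index: each trajectory is registered in every
# cell it passes through, so a box query touches only overlapping cells.
import math
from collections import defaultdict

class GridTrajIndex:
    def __init__(self, cell=0.01):
        self.cell = cell                 # cell width in degrees (arbitrary)
        self.cells = defaultdict(set)    # cell id -> trajectory ids

    def _cell(self, lat, lon):
        return (math.floor(lat / self.cell), math.floor(lon / self.cell))

    def insert(self, traj_id, points):
        for lat, lon in points:
            self.cells[self._cell(lat, lon)].add(traj_id)

    def range_query(self, lat_min, lat_max, lon_min, lon_max):
        """Trajectories passing through any cell overlapping the box."""
        hits = set()
        for i in range(math.floor(lat_min / self.cell),
                       math.floor(lat_max / self.cell) + 1):
            for j in range(math.floor(lon_min / self.cell),
                           math.floor(lon_max / self.cell) + 1):
                hits |= self.cells.get((i, j), set())
        return hits
```

This is what makes the queries "fast" in spirit: the cost of a box query scales with the cells it covers rather than with the total number of GPS points stored.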
  • 3. Gandee, Tyler Natural Language Generation: Improving the Accessibility of Causal Modeling Through Applied Deep Learning

    Master of Science, Miami University, 2024, Computer Science

    Causal maps are graphical models that are well understood at small scales. When created through a participatory modeling process, they become a strong asset in decision making. Furthermore, those who participate in the modeling process may seek to understand the problem from various perspectives. However, as causal maps increase in size, the information they contain becomes clouded, which renders the map unusable. In this thesis, we transform causal maps into various mediums to improve the usability and accessibility of large causal models; our proposed algorithms can also be applied to small-scale causal maps. In particular, we transform causal maps into meaningful paragraphs using GPT and network traversal algorithms to attain full coverage of the map. Then, we compare automatic text summarization models with graph reduction algorithms to reduce the amount of text to a more approachable size. Finally, we combine our algorithms into a visual analytics environment that provides details-on-demand for the user by displaying the summarized text, letting users interact with summaries to display the detailed text and causal map, and even generating images in an appropriate manner. We hope this research provides more tools for decision-makers and allows modelers to give back to participants the final result of their work.

    Committee: Philippe Giabbanelli (Advisor); Daniela Inclezan (Committee Member); Garrett Goodman (Committee Member) Subjects: Computer Science
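The "network traversal for full coverage" idea can be sketched without any language model: order the map's edges by a depth-first walk from root causes so that every edge appears exactly once and consecutive sentences chain naturally. The phrasing below is a crude template stand-in for the GPT step described in the abstract.

```python
# Order causal edges by DFS from root causes (full edge coverage),
# then render each edge as a templated sentence.
from collections import defaultdict

def ordered_edges(edges):
    """edges: list of (cause, sign, effect). Visit every edge exactly once."""
    out = defaultdict(list)
    targets = set()
    for c, s, e in edges:
        out[c].append((c, s, e))
        targets.add(e)
    roots = [c for c in out if c not in targets]
    seen, order = set(), []
    def dfs(node):
        for edge in out.get(node, []):
            if edge not in seen:
                seen.add(edge)
                order.append(edge)
                dfs(edge[2])          # continue from the effect node
    for r in roots:
        dfs(r)
    for edge in edges:                # pick up edges stranded in cycles
        if edge not in seen:
            seen.add(edge)
            order.append(edge)
            dfs(edge[2])
    return order

def causal_sentences(edges):
    """Crude template rendering; the thesis uses GPT for fluent prose."""
    verb = {1: "increases", -1: "decreases"}
    return " ".join(f"{c} {verb[s]} {e}." for c, s, e in edges)
```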
  • 4. JAMONNAK, SUPHANUT Spatial Multimedia Data Visualization

    PHD, Kent State University, 2021, College of Arts and Sciences / Department of Computer Science

    Geo-encoded visual information (images and videos) offers the potential to acquire fine-scale, multi-time-period, and associated contextualized data for a variety of geographical environments, especially when combined with additional insights and geo-narratives (audio, text, graphics). These data are also being used in developing AI-based knowledge discovery and decision-making systems, such as emerging autonomous driving applications. While spatial multimedia data include abundant spatiotemporal, semantic, and visual information, the means to fully leverage their potential using a suite of visual and interactive analysis techniques and tools has thus far been lacking. In this dissertation, new visual analytics techniques and systems are developed for spatial multimedia data. Visual data exploration is supported by software infrastructures so that domain researchers and decision-makers can easily capture, manage, query, and visualize big and dynamic data to conduct analytical tasks. Moreover, autonomous driving deep learning models are visually investigated to study neural network predictions together with large-scale video data. This dissertation leverages the power of visualization for spatial multimedia data and contributes to an emerging research topic in the visualization community.

    Committee: YE ZHAO (Advisor); XIANG LIAN (Committee Member); JAY LEE (Committee Member); ANDREW CURTIS (Committee Member); JONG-HOON KIM (Committee Member) Subjects: Computer Science
  • 5. Hazarika, Subhashis Statistical and Machine Learning Approaches For Visualizing and Analyzing Large-Scale Simulation Data

    Doctor of Philosophy, The Ohio State University, 2019, Computer Science and Engineering

    Recent advancements in the fields of computational science and high-performance computing have enabled scientists to design high-resolution computational models to simulate various real-world physical phenomena. In order to gain key scientific insights about the underlying phenomena, it is important to analyze and visualize the output data produced by such simulations. However, large-scale scientific simulations often produce output data whose size can range from a few hundred gigabytes to the scale of terabytes or even petabytes. Analyzing and visualizing such large-scale simulation data is not trivial. Moreover, scientific datasets are often multifaceted (multivariate, multi-run, multi-resolution, etc.), which can introduce additional complexities to analysis and visualization activities. This dissertation addresses three broad categories of data analysis and visualization challenges: (i) multivariate distribution-based data summarization, (ii) uncertainty analysis in ensemble simulation data, and (iii) simulation parameter analysis and exploration. We propose statistical and machine learning-based approaches to overcome these challenges. A common strategy for dealing with large-scale simulation data is to partition the simulation domain and create data summaries in the form of statistical probability distributions. Instead of storing high-resolution raw data, storing compact statistical data summaries results in reduced storage overhead and alleviated I/O bottlenecks. However, for multivariate simulation data, using standard multivariate distributions to create data summaries is not feasible. Therefore, we propose a flexible copula-based multivariate distribution modeling strategy to create multivariate data summaries during simulation execution time (i.e., in situ data modeling). The resulting data summaries can subsequently be used to perform scalable post-hoc analysis and visualization. 
In many cases, scientists execute their simulations mu (open full item for complete abstract)

    Committee: Han-Wei Shen (Advisor); Rephael Wenger (Committee Member); Yusu Wang (Committee Member) Subjects: Computer Science; Statistics
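The storage-reduction idea behind distribution-based summarization can be conveyed with a much simpler stand-in than the dissertation's copula model: keep, per spatial block, only the marginal histograms of each variable plus a correlation coefficient linking them. This sketch is only an analogy for the in-situ summarization step; copulas model the joint dependence far more generally.

```python
# Simplified stand-in for per-block in-situ summarization: marginal
# histograms plus a Pearson correlation, instead of raw multivariate data.
from statistics import mean

def summarize_block(xs, ys, bins=8):
    def hist(v):
        lo, hi = min(v), max(v)
        width = (hi - lo) / bins or 1.0      # guard against constant data
        counts = [0] * bins
        for x in v:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return lo, width, counts

    def pearson(a, b):
        ma, mb = mean(a), mean(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a)
               * sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den if den else 0.0

    return {"x": hist(xs), "y": hist(ys), "corr": pearson(xs, ys)}
```

The summary's size is fixed by the bin count, independent of how many raw samples the block held, which is where the reduced storage overhead and relieved I/O bottleneck come from.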
  • 6. Williams, Brian A Mixed-Methods Approach to Understanding the Effects of Visual Analytic Strategies on Organizational Decision Making

    Doctor of Management, Case Western Reserve University, 2016, Weatherhead School of Management

    Visual analytic tools introduce self-service functionality and give decision makers the ability to interrogate data in real time. Yet not all decision makers are as “agile” as the tools they are using, and many do not embrace more analytic roles. This mixed-methods study was designed to explore the individual and organizational experience of visual analytics during decision making. First, we performed 23 field interviews with professionals to explore the individual user experience of decision makers with visual analytic tools. These interviews revealed the key role that data interaction (more than the visual display of data) plays in bringing decision makers into the analysis process. From these findings, we developed a quantitative survey of 97 participants to understand the specific impact of data interaction on decision-making success. This study revealed that data interaction affects not just the individual but also the organization. Last, we integrated the findings of these two studies and re-analyzed our field interview transcripts, revealing the critical role of visual analytics in increasing the absorptive capacity of users and the organization. This integrated study reveals the ability of visual analytic activity to unlock absorptive capacity and grow knowledge adoption in an organization. We also reveal that absorptive capacity in visual analytic settings is a highly collaborative and non-linear process. Our mixed-methods study first focused on the individual user experience and then examined how individual analytic tasks lead to positive affect across the organization.

    Committee: Kalle Lyytinen, Ph.D. (Advisor) Subjects: Organizational Behavior
  • 7. Aboturkia, Amna A Study of the Effectiveness of Mobile Technology in the Major Fields and Opioid Epidemic

    MS, University of Cincinnati, 2019, Education, Criminal Justice, and Human Services: Information Technology

    Recent advances in smartphone technology have dramatically changed our lives, from transportation to health care services. The usage of mobile applications has also substantially increased in the justice information field to provide swift and actionable information to end users, including the public. At the same time, the opioid crisis in the United States has been declared a national disaster that must be addressed using the most efficient and effective strategies. Citizens, law enforcement, and technology can work together in a coordinated and collaborative way to achieve solid communities and better services. This thesis discusses the benefits of using mobile technology in different fields, analyzes different data sources to identify the patterns of opioid overdose in the city of Cincinnati, and proposes a new smartphone app that unifies the efforts of law enforcement and public citizens in fighting the opioid epidemic.

    Committee: M. Murat Ozer Ph.D. (Committee Chair); Xuetao Wei Ph.D. (Committee Member) Subjects: Information Technology
  • 8. Wang, Junpeng Interpreting and Diagnosing Deep Learning Models: A Visual Analytics Approach

    Doctor of Philosophy, The Ohio State University, 2019, Computer Science and Engineering

    Recently, Deep Neural Networks (DNNs) have demonstrated superhuman capabilities in solving problems across a wide range of applications. The outstanding performance of these models shows great potential for replacing human labor with artificial intelligence in the near future. However, behind this beautiful picture, these deep learning models often work like black boxes, i.e., people using them have a good sense of their inputs and outcomes but have little knowledge of the complicated working mechanisms in between. Given that DNN models are not infallible, deploying them in real-world applications may put humans in very dangerous situations. This is especially true for safety-critical applications, such as medical diagnosis and autonomous driving. The internal working mechanisms of DNNs are not easily interpretable. A DNN approximates a very complex function through an intricate combination of numerous linear and non-linear functions, which are organized as multiple computational layers. The more layers a DNN has (i.e., the deeper it is), the more powerful it can be. Although each individual linear/non-linear function is simple, the combination of them can be exponentially complicated. This is where the expressive power of DNNs comes from, as well as the reason DNNs are hard to interpret, as the data transformation in high-dimensional spaces can easily go beyond humans' interpretation capability. The emerging field of Explainable Artificial Intelligence (XAI) aims to open the black boxes of DNNs, and many interpretation techniques have been proposed, e.g., sensitivity analysis and guided back-propagation. Focusing on a specific data instance of interest, these techniques interpret a DNN by examining how the model output results from the corresponding input. 
Still, domain experts lack the ability to flexibly explore the detailed intermediate outcomes and combine the piece-by-piece outcomes to draw insightful conclusions or derive useful in (open full item for complete abstract)

    Committee: Han-Wei Shen (Advisor); Huan Sun (Committee Member); Hyunwoo Park (Committee Member) Subjects: Computer Science
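Sensitivity analysis, one of the interpretation techniques the abstract names, has a very small core: perturb each input feature slightly and observe how the model's output moves. The toy model and inputs below are invented for illustration; real XAI tools compute exact gradients through the network instead of finite differences.

```python
# Toy finite-difference sensitivity analysis over a black-box model.

def sensitivity(model, x, eps=1e-4):
    """Approximate d(model)/d(x_i) for each input feature i."""
    base = model(x)
    grads = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps                      # perturb one feature
        grads.append((model(xp) - base) / eps)
    return grads
```

Features with large magnitudes in the result are the ones the model's prediction is most sensitive to for this particular input instance, which is exactly the per-instance view the abstract describes.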
  • 9. Ji, Xiaonan An Integrated Framework of Text and Visual Analytics to Facilitate Information Retrieval towards Biomedical Literature

    Doctor of Philosophy, The Ohio State University, 2018, Computer Science and Engineering

    Digitalized scientific literature, as a special type of text article, is considered a valuable knowledge repository in widespread academic and practical settings. Biomedical literature has specifically played an important role in supporting evidence-based medicine and promoting quality healthcare. Given an information need such as a patient problem, information retrieval from biomedical literature has focused on the identification of highly relevant articles to support up-to-date knowledge synthesis and reliable decision making. In particular, high recall, high precision, and human involvement are expected for rigorous information retrieval in healthcare. Despite the critical information needs requiring high effectiveness and efficiency, the information overload from the large volume of heterogeneous biomedical literature has made this challenging. In this dissertation, we propose an integrated and generalizable framework of text and visual analytics to facilitate the significant domain application of biomedical literature retrieval. We focus on the unmet and most challenging aspect of identifying highly relevant articles from a text corpus, which is typically an article collection obtained via exhaustive literature search. We convert extensive biomedical articles into effective representations that encode underlying article meanings and indicate article relevance, and we promote advantageous visualizations to exploit and explore article representations so that humans can get involved in not only task accomplishment but also knowledge discovery. We first implement text analytics to generate machine-understandable article features and representations, and promote their effectiveness with multiple knowledge and computational resources. 
Considering the special format of biomedical literature, we start by investigating the fundamental lexical feature space consisting of diverse article elements and examine their usefulness in predicting article relevan (open full item for complete abstract)

    Committee: Alan Ritter Ph.D. (Advisor); Po-Yin Yen Ph.D. (Advisor); Raghu Machiraju Ph.D. (Committee Member) Subjects: Biomedical Research; Computer Science; Information Science
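A minimal concrete instance of the "lexical feature space" idea is TF-IDF relevance ranking over article texts. The documents and query below are invented; the dissertation's representations draw on much richer article elements and knowledge resources.

```python
# Minimal TF-IDF ranking: score each article text against a query and
# sort by relevance.
import math
from collections import Counter

def tfidf_rank(query, docs):
    """docs: {doc_id: text}. Returns doc ids sorted by query relevance."""
    n = len(docs)
    tokenized = {d: t.lower().split() for d, t in docs.items()}
    df = Counter()                       # document frequency per word
    for toks in tokenized.values():
        df.update(set(toks))

    def score(toks):
        tf = Counter(toks)
        return sum(tf[w] / len(toks) * math.log(n / df[w])
                   for w in query.lower().split() if w in df)

    return sorted(docs, key=lambda d: score(tokenized[d]), reverse=True)
```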
  • 10. Kaczka Jennings, Rachel vizSlice: An Approach for Understanding Slicing Data via Visualization

    Master of Science, Miami University, 2017, Computer Science and Software Engineering

    Several approaches have been suggested for computing program slices from different perspectives, including forward slicing, backward slicing, static slicing, and dynamic slicing. The applications of slicing are numerous, including testing, effort estimation, and impact analysis. Surprisingly, given the maturity of slicing, few approaches exist for visualizing slices. Here we present our research on visualizing large systems based on program slicing. In particular, we use treemaps to facilitate hierarchical, slicing-based navigation, bipartite graphs to facilitate visual impact analysis over a given variable or line of code, parallel coordinates to facilitate visual impact analysis over code blocks or variable groupings, and a text-based code browser to provide detailed context for the relevant visualizations. We believe our tools support various software maintenance tasks, including providing analysts an interactive visualization of the impact of potential changes, thus allowing developers to plan maintenance accordingly. We evaluate the research by assessing usability through a think-aloud protocol and a heuristic evaluation. Our results indicate users could effectively complete the evaluation tasks we provided, and the visual idioms utilized in vizSlice were effective at communicating the underlying data to them. However, controls for these visualizations need improvement in both affordance and visibility. Regardless of any difficulties users experienced with vizSlice, users consistently rated the system positively on the measured heuristics. We provide insights on these results and future plans for improving vizSlice, and offer guidance for future research on visualizing program slices.

    Committee: Gerald Gannod (Advisor); Hakam Alomari (Committee Member); Matthew Stephan (Committee Member); James Kiper (Committee Member) Subjects: Computer Science
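The backward-slicing data that vizSlice visualizes can be illustrated with a tiny slicer over straight-line code: walk upward from a target line, collecting the nearest definition of each variable still needed. The statement encoding below is invented for the sketch; real slicers work on parse trees and handle control dependences too.

```python
# Tiny backward static slice over data dependences in straight-line code.

def backward_slice(stmts, target_line):
    """stmts: {line: (defined_var, used_vars)}. Returns lines in the slice."""
    sliced = {target_line}
    needed = set(stmts[target_line][1])       # variables still unexplained
    for line in sorted(stmts, reverse=True):
        if line >= target_line:
            continue
        var, uses = stmts[line]
        if var in needed:                     # nearest reaching definition
            sliced.add(line)
            needed.discard(var)
            needed |= set(uses)
    return sorted(sliced)
```

For the program `a = input(); b = a + 1; c = 5; d = b * 2`, the slice on the last line keeps lines 1, 2, and 4 and drops the irrelevant `c = 5`, which is the kind of per-line relationship a bipartite impact-analysis view would draw.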
  • 11. Liu, Xiaotong Visual Exploration and Comparative Analytics of Multidimensional Data Sets

    Doctor of Philosophy, The Ohio State University, 2016, Computer Science and Engineering

    Recently, rapidly growing amounts of data with numerous attributes and variables have arisen in various areas of science, engineering, business, and beyond. Analysis of the multi-faceted information contained in multidimensional data sets has already led to breakthroughs in many fields and to the emergence of new information-based industries. Data with high dimensionality and complexity have far exceeded human ability for comprehension without powerful tools. Visualization enhances human understanding by organizing information in graphical displays, offering the possibility of visual exploration of data for knowledge discovery and sense-making. Visual exploration strengthens human perceptual capabilities with visual interfaces that guide data navigation, actively engaging users in the exploration process to make knowledge discovery much more efficient. However, due to the increasing heterogeneity and complexity of multidimensional data, the multidimensional data space exceeds human comprehension. Novel representations are needed to display and organize data items based on the relationships of the dimensions in multidimensional data sets. Furthermore, visual analysis of multidimensional data sets often requires investigating the hidden relationships between different dimensions and specific items to understand the multi-faceted properties of the data sets. The enormous multidimensional data space complicates the search for potentially interesting relations between dimensions and data items. Powerful and versatile visualization tools are thus needed to allow users to analyze and compare complex relations and heterogeneous structures in multidimensional data for knowledge discovery and sense-making. In this dissertation, we investigate critical aspects of multidimensional data visualization and comparative analytics in assisting users in the visual exploration of multidimensional data for knowledge discovery and sense-making. 
Specifically, we address the questions: How can we d (open full item for complete abstract)

    Committee: Han-Wei Shen (Advisor) Subjects: Computer Engineering; Computer Science
  • 12. Huang, Xiaoke USING GRAPH MODELING IN SEVERAL VISUAL ANALYTIC TASKS

    PHD, Kent State University, 2016, College of Arts and Sciences / Department of Computer Science

    Graph models can represent a variety of data types such as social media, cyber business and security, the web, urban networks, and more. They are extensively studied and widely used in data management, mining, and analysis in many important application areas. On the other hand, graph visualization has been a major topic in information visualization for manifesting graph structure and features for effective and intuitive data exploration. In this thesis, we present a set of visual analytics solutions for several important applications by integrating graph models with visualization tools, including visualization systems for urban trajectory data, text stream data, and categorical data. Our approaches utilize graphs to abstract and manage various data, to discover hidden knowledge with graph algorithms, and to help users gain insights from graph-based visualizations and interaction. Our research widens the horizon and enhances the capability of visual analytics methodologies. First, we propose a new visual analytics method, TrajGraph, for studying urban mobility patterns. In particular, a graph model represents taxi trajectories traveling over road networks. Graph computation is then applied to identify graph centralities that reveal the time-varying hubs and backbones of road networks from massive taxi trajectories. The graph is further visualized and made interactive so that users can explore the important roles of city streets and regions. Second, we employ a parallel-graph model to enhance visual analytics of large-scale urban trajectory datasets. Specifically, we designed a novel, scalable parallel-graph model for trajectory data management that supports fast computation over various information queries in distributed environments. A new visualization tool allows users to obtain statistics and the relationships of cars and roads in the big trajectory data by employing the functionalities of the parallel-graph model. 
Third, we develop a dynamic visualizati (open full item for complete abstract)

    Committee: Ye Zhao (Advisor); Ruoming Jin (Committee Member); Chengchang Lu (Committee Member); Xinyue Ye (Committee Member); Donald White (Committee Member) Subjects: Computer Science
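The centrality computation TrajGraph applies to road networks can be sketched with unweighted closeness centrality over an adjacency list; a street with high closeness is a candidate "hub". The toy graph shape is invented, and the thesis weights edges with travel data rather than treating them uniformly.

```python
# Closeness centrality on a road graph via BFS:
# (n - 1) divided by the sum of shortest-path distances from the node.
from collections import deque

def closeness(adj, node):
    """adj: {node: [neighbors]}. Higher value = more central."""
    dist = {node: 0}
    q = deque([node])
    while q:                          # breadth-first search
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0
```

On a star-shaped network the center scores strictly higher than any leaf, matching the intuition of a hub street reachable quickly from everywhere.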
  • 13. Harvey, William Understanding High-Dimensional Data Using Reeb Graphs

    Doctor of Philosophy, The Ohio State University, 2012, Computer Science and Engineering

    Scalar functions are virtually ubiquitous in scientific research. A vast amount of research has been conducted in visualization and exploration of low-dimensional data during the last few decades, but adapting these techniques to high-dimensional, topologically-complex data remains challenging. Traditional metric-preserving dimensionality reduction techniques suffer when the intrinsic dimension of data is high, as the metric cannot generally survive projection into low dimensions. The metric distortion can be arbitrarily large, and preservation of topological structure is not guaranteed, resulting in a misleading view of the data. When preservation of geometry is not possible, topological analysis provides a promising alternative. As an example, simplicial homology characterizes the structure of a topological space (i.e. a simplicial complex) via its intrinsic topological features of various dimensions. Unfortunately, this information can be abstract and difficult to comprehend. The ranks of these homology groups (the Betti numbers) offer a simpler, albeit coarse, interpretation as the number of voids of each dimension. In high dimensions, these approaches suffer from exponential time complexity, which can render them impractical for use with real data. In light of these difficulties, we turn to an alternative type of topological characterization. We investigate the Reeb graph as a visualization and analysis tool for such complex data. The Reeb graph captures the topology of the set of level sets of a scalar function, providing a simple, intuitive, and informative topological representation. We present the first sub-quadratic expected time algorithm for computing the Reeb graph of an arbitrary simplicial complex, opening up the possibility of using the Reeb graph as a tool for understanding high-dimensional data. While the Reeb graph effectively captures some topological structure, it is still somewhat terse. 
The Morse-Smale complex summarizes a scalar function by b (open full item for complete abstract)

    Committee: Yusu Wang PhD (Advisor); Tamal Dey PhD (Committee Member); Rephael Wenger PhD (Committee Member) Subjects: Bioinformatics; Computer Science
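The Reeb graph tracks how level sets of a scalar function split and merge. A simplified relative, the merge tree, can be computed with a union-find sweep over vertices in order of function value, which gives a feel for the kind of structure being captured. This sketch is illustrative only and is not the sub-quadratic algorithm of the dissertation.

```python
# Merge events of sublevel sets: sweep vertices by function value and
# union components across edges whose endpoints have both appeared.

def merge_events(values, edges):
    """values: {v: f(v)}; edges: list of (u, w). Returns (f, u, w) unions."""
    parent = {v: v for v in values}

    def find(v):                       # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    events, seen = [], set()
    for v in sorted(values, key=values.get):
        seen.add(v)
        for u, w in edges:
            if v in (u, w) and u in seen and w in seen:
                ru, rw = find(u), find(w)
                if ru != rw:           # two components join at height f(v)
                    parent[ru] = rw
                    events.append((values[v], u, w))
    return events
```

Two local minima joined through a higher vertex produce their union events at that vertex's function value, which is the saddle behavior a Reeb graph or merge tree records.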
  • 14. Dabdoub, Shareef Applied Visual Analytics in Molecular, Cellular, and Microbiology

    Doctor of Philosophy, The Ohio State University, 2011, Biophysics

    The current state of biological science is such that many sources of data are simply too large to be analyzed by hand. Furthermore, given the amazing breadth of investigation into the natural world, the potential for serious investigation from simply mining heterogeneous data sets is too rich to ignore. These two factors, combined with the amount of computational power currently available, make for ideal conditions from the perspective of visual analytics. Here we describe three computational projects focused on the visualization and analysis of data within the fields of microbial pathogenesis, cell biology, and molecular conformational dynamics. ProkaryMetrics is a new software package providing 3D reconstruction of fluorescent micrographs as well as various visual and statistical tools for the analysis of bacterial biofilms. The software FIND is a new platform for promoting computational analysis and enhanced visualization of multicolor flow cytometry data. FIND provides users with user-friendly, cross-platform analysis software, while simultaneously providing algorithm designers a target for implementation. Finally, the Moflow project represents a new visual representation of atomic flow within molecules during conformational changes over time, in a more intuitive sense than was previously possible.

    Committee: William Ray PhD (Committee Chair); Sheryl Justice PhD (Advisor); Shen Han-Wei PhD (Committee Member); Luis Actis PhD (Committee Member); Charles Daniels PhD (Committee Member) Subjects: Bioinformatics; Biophysics; Computer Science
  • 15. Ban, Hyowon Visualization Of Urban Concepts In Two Directions Of Thinking

    Doctor of Philosophy, The Ohio State University, 2009, Geography

    This study pursues questions about the top-down and bottom-up directions of geographical thinking. A question about the top-down direction, 1) how geographical concepts could influence spatial data, is asked in the first half of the study, and a question for the bottom-up direction, 2) how existing data could inform geographical concepts, is asked in the other half of the study. To answer the first question, Part 1 deals with the uncertainty of the exurban concept as a primary example, since there are many different definitions of exurbanization and the spatial boundaries based on them are not identical. Several definitions of exurbanization are investigated to determine how they represent exurban areas, and formal representations of the fuzzy-set approach are developed to analyze and visualize the uncertainty of the exurban definitions. The study develops a software interface that allows interactive exploration, analysis, negotiation, and visualization of uncertain geographical concepts. Selected exurban definitions and empirical spatial data demonstrate concept comparison and concept creation activities using the interface. A case study of five different definitions of exurbanization in Ohio, U.S. shows different degrees of agreement on the exurban boundary, and also illustrates the impact of different approaches to negotiating the multiple definitions. In addition, Part 1 provides results from user evaluations of the developed software interface to better support the exchange of knowledge and communication between exurban stakeholders in Ohio. Findings from the evaluation indicate that the software interface could be useful for urban ontology research and land use, with support for the different ontologies of various actors for a common concept. 
To answer the second question, Part 2 attempts to derive some time-geography concepts from a set of spatio-temporal choreographic information developed by a multi-disciplinary project, “The Synchronous Objects pro (open full item for complete abstract)

    Committee: Ola Ahlqvist (Advisor); Noel Cressie (Committee Member); Darla Munroe (Committee Member); Alan Price (Committee Member) Subjects: Dance; Design; Geography; Statistics
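The fuzzy-set approach to uncertain concepts like "exurban" can be sketched in miniature: each definition assigns every areal unit a membership in [0, 1], and combining the memberships across definitions visualizes where definitions agree or disagree. The density thresholds below are invented for illustration, not the published Ohio definitions used in the study.

```python
# Fuzzy membership for one (invented) exurban definition, plus a
# combination operator to compare agreement across definitions.

def density_membership(pop_density, lo=10, hi=150):
    """1.0 at low density, tapering linearly to 0.0 at `hi` and above."""
    if pop_density <= lo:
        return 1.0
    if pop_density >= hi:
        return 0.0
    return (hi - pop_density) / (hi - lo)

def combine(memberships, mode="min"):
    """'min' = exurban under every definition; 'max' = under any definition."""
    return min(memberships) if mode == "min" else max(memberships)
```

Mapping the gap between the `min` and `max` combinations per tract is one simple way to show the boundary uncertainty that motivates the negotiation interface.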
  • 16. Reber, Sean Discovery And Visual Analysis of Tracts of Homozygosity In The Human Genome

    MS, Kent State University, 2013, College of Arts and Sciences / Department of Computer Science

    I propose a new visual analytics system designed for genetic researchers to study genome-wide homozygosity regions. Finding significant tracts of homozygosity (TOHs) using single nucleotide polymorphisms (SNPs) from a large-scale genome data set can contribute to the discovery of genetic factors related to human diseases. The proposed system helps users visually examine TOH clusters computed from the underlying patient data, making it a convenient and powerful tool for knowledge discovery. I designed and implemented a TOH clustering algorithm based on repeated binary spectral clustering. A hierarchy of clusters is created and represented by a TOH cluster (TOHC) tree. Researchers can investigate the clusters with a special interactive widget, namely navigation rings, which is integrated with a visual cluster explorer. A statistical association study and the NCBI genome map viewer are also incorporated into the system. The usability and performance of the system are illustrated with a clinical data set of human cancers.

    Committee: Ye Zhao Dr. (Advisor); Arden Ruttan Dr. (Committee Member); C.C. Lu Dr. (Committee Member) Subjects: Bioinformatics; Computer Science
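Before any clustering, a TOH itself is just a sufficiently long run of homozygous SNP calls, which a linear scan can find. The genotype encoding and length threshold below are arbitrary illustrations; real pipelines also tolerate occasional heterozygous calls and missing data.

```python
# Scan a sequence of SNP genotype calls for tracts of homozygosity:
# maximal runs of homozygous calls at least `min_len` SNPs long.

def find_tohs(genotypes, min_len=5):
    """genotypes: e.g. ['AA', 'AG', ...]. Returns (start, end) index pairs."""
    tracts, start = [], None
    for i, g in enumerate(genotypes):
        homozygous = g[0] == g[1]
        if homozygous and start is None:
            start = i                         # run begins
        elif not homozygous and start is not None:
            if i - start >= min_len:
                tracts.append((start, i - 1)) # run long enough to keep
            start = None
    if start is not None and len(genotypes) - start >= min_len:
        tracts.append((start, len(genotypes) - 1))
    return tracts
```

Tracts gathered this way across many patients form the input that the repeated binary spectral clustering then organizes into the TOHC tree.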