Search Results


(Total results 3)

  • 1. Schierl, Jonathan A 2D/3D Feature-Level Information Fusion Architecture For Remote Sensing Applications

    Master of Science (M.S.), University of Dayton, 2022, Electrical and Computer Engineering

    Remote sensing has seen significant attention due to advances in technology and access to data. A current challenge is classifying land regions by their usage – residential, industrial, forest, etc. Scope is very important: too large an area would lead to multiple classes being present in one scene, while too small an area would not contain enough contextual information to accurately classify the scene. To further complicate matters, similar objects appear across different classes; for example, trees are found in the residential, forest, and park classes. Deep learning is a current technology that is successful with problems at this level of ambiguity. The most straightforward approach to address this level of complexity is to use remote sensing images to classify land regions. However, deep learning using 2D images has its downsides, especially when analyzing aerial data; namely, it lacks 3-dimensional information such as depth. Similarly, 3D deep learning architectures have weaknesses of their own, such as longer processing times and a lack of intensity information. As access to processing hardware and remote sensing data continues to increase, there is a pressing need to leverage the strengths of both modalities. This can be done in one of three ways: (1) a data-level fusion, where data modalities are fused together directly; (2) a feature-level fusion, where features are fused after data modalities are processed individually; or (3) a decision-level fusion, where predictions are made using each modality independently and are ultimately fused into one final decision. In this work, we utilize a feature-level fusion because our dataset (composed of lidar and RGB scenes) contains very different types of information; after analysis, we found that each modality was better suited to different sections of our data, which we could harness using a feature-level fusion. 
Furthermore, to improve on these results, an accurate regist (open full item for complete abstract)
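    The feature-level fusion strategy described above can be sketched minimally: each modality is run through its own branch, and the resulting feature vectors are normalized and concatenated before classification. The feature dimensions and the normalization step below are illustrative assumptions, not details from the thesis.

    ```python
    import numpy as np

    def fuse_features(feat_2d, feat_3d):
        """Feature-level fusion: L2-normalize each modality's feature
        vector, then concatenate, so neither modality dominates by scale."""
        f2 = feat_2d / (np.linalg.norm(feat_2d) + 1e-8)
        f3 = feat_3d / (np.linalg.norm(feat_3d) + 1e-8)
        return np.concatenate([f2, f3])

    # Hypothetical outputs of a 2D RGB branch and a 3D lidar branch
    rng = np.random.default_rng(0)
    rgb_features = rng.random(128)
    lidar_features = rng.random(64)

    fused = fuse_features(rgb_features, lidar_features)
    print(fused.shape)  # (192,)
    ```

    The fused vector would then feed a shared classifier head; this contrasts with decision-level fusion, where each branch would produce its own class scores before merging.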

    Committee: Vijayan Asari (Advisor); Andrew Stokes (Committee Member); Theus Aspiras (Committee Member); Eric Balster (Committee Member) Subjects: Artificial Intelligence; Computer Engineering; Computer Science; Remote Sensing
  • 2. Aydar, Mehmet Developing a Semantic Framework for Healthcare Information Interoperability

    PhD, Kent State University, 2015, College of Arts and Sciences / Department of Computer Science

    Interoperability in healthcare is defined as the ability of health information systems to work together within and across organizational boundaries in order to advance the effective delivery of healthcare for individuals and communities. The current healthcare information technology environment breeds incredibly complex data ecosystems. In many cases, pertinent patient records are collected in multiple systems, often supplied by competing manufacturers with diverse data formats. This causes inefficiencies in data interoperability, as differing data formats create barriers to exchanging health information. This dissertation presents a semantic framework for healthcare information interoperability. We propose a system for translation of healthcare instance data, based on structured mapping definitions and using RDF as a common information representation to achieve semantic interoperability between different data models. Moreover, we introduce an entity similarity metric that applies the Jaccard index to the common relations of the data entities and the common string literal words referenced by the data entities, augmented with the similarity of the data entities' neighbors. The precision of the similarity metric is enhanced by incorporating auto-generated importance weights of the entity descriptors in the RDF representation of the dataset. Furthermore, we provide an automatic classification method, which we call summary graph generation, based on the pairwise entity similarities, and we propose that the summary graph can further be utilized for interoperability purposes. Finally, we present a suggestion-based semi-automatic instance matching system, which we test on the RDF representation of a healthcare dataset. The system utilizes the entity similarity metric, and it presents similar node pairs to the user for possible instance matching. 
Based on the user feedback, it merges the matched nodes and suggests more matching pairs depending on the common relations and neigh (open full item for complete abstract)
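    The core of the entity similarity metric described above can be sketched as Jaccard indices over two evidence sets – shared relation labels and shared literal words – combined with weights. The entity names, relation labels, and the equal weighting below are hypothetical illustrations, not values from the dissertation.

    ```python
    def jaccard(a, b):
        """Jaccard index |A ∩ B| / |A ∪ B| over two collections."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if (a or b) else 0.0

    def entity_similarity(rels_a, rels_b, words_a, words_b,
                          w_rel=0.5, w_lit=0.5):
        """Weighted combination of relation-set and literal-word similarity."""
        return w_rel * jaccard(rels_a, rels_b) + w_lit * jaccard(words_a, words_b)

    # Hypothetical RDF entities: outgoing relation labels and literal words
    rels_p1 = {"hasDiagnosis", "hasName", "bornOn"}
    rels_p2 = {"hasDiagnosis", "hasName", "treatedAt"}
    words_p1 = {"john", "smith", "diabetes"}
    words_p2 = {"jon", "smith", "diabetes"}

    print(entity_similarity(rels_p1, rels_p2, words_p1, words_p2))  # 0.5
    ```

    In the dissertation's full metric the descriptor weights are auto-generated from the RDF dataset and neighbor similarity is folded in recursively; the fixed weights here are only a stand-in for that machinery.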

    Committee: Austin Melton (Advisor); Angela Guercio (Committee Member); Ye Zhao (Committee Member); Alan Brandyberry (Committee Member); Helen Piontkivska (Committee Member); Javed I. Khan (Committee Chair); James L. Blank (Other) Subjects: Computer Science; Health Care; Health Sciences; Information Systems; Information Technology; Medicine
  • 3. Storrs, Judd Automatic Real-time Targeting of Single-Voxel Magnetic Resonance Spectroscopy

    PhD, University of Cincinnati, 2010, Engineering and Applied Science: Biomedical Engineering

    Magnetic resonance spectroscopy (MRS) is a non-invasive and non-destructive in vivo technique available on magnetic resonance imaging (MRI) scanners that is used to measure biochemical profiles from localized regions, or volumes-of-interest (VOIs), inside the body. A confounding factor for interpretation and analysis of MRS is spatial inconsistency in selection of VOIs for data collection, which may obscure biochemical alterations and reduce the statistical power of a study. Because VOI selection is performed manually by the MRI operator, consistency both between sessions and among subjects requires careful protocol design and experienced staff. Inter-subject anatomic variation, imprecise experimental protocols, and inter-operator variation contribute to VOI positioning error. In this work, automatic targeting of VOIs using a standard anatomic atlas was hypothesized to improve spatial consistency for VOIs, both among subjects and between sessions. Subject anatomy is aligned to a template during acquisition of routine high-resolution 3D anatomic imaging. Alignment is computed in parallel with acquisition and completes prior to the end of the scan, allowing immediate use of the template coordinate system for the next scan. Once aligned, preselected VOIs are transferred from the template for acquisition. Two real-time alignment techniques are compared. The first performs affine alignment of the subject to the ICBM452 template, and the second rigidly aligns subject anatomy between baseline and follow-up sessions. The technique was developed using simulations based on archived data from 79 subjects randomly segregated into training (40 subjects for development) and testing (39 subjects for evaluation) groups. The accuracy of real-time spatial normalization was evaluated as disagreement with SPM5-derived nonlinear normalization. Median disagreement within the brain was 1.9 mm (largest: 9.1 mm). 
For comparison, optimal affine alignment was computed directly from nonlinear SPM5 re (open full item for complete abstract)
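    The VOI transfer step described above – mapping a preselected template-space VOI into the subject's scanner coordinates once alignment completes – reduces to applying a 4×4 affine transform in homogeneous coordinates. The specific matrix and VOI position below are hypothetical examples, not values from the study.

    ```python
    import numpy as np

    def transfer_voi(affine, voi_template_mm):
        """Map a VOI centre (x, y, z) in template space to subject space
        using a 4x4 affine transform in homogeneous coordinates."""
        p = np.append(voi_template_mm, 1.0)  # (x, y, z, 1)
        return (affine @ p)[:3]

    # Hypothetical subject-to-template alignment: identity rotation,
    # 2 mm translation along x
    A = np.eye(4)
    A[0, 3] = 2.0

    voi = np.array([10.0, -5.0, 3.0])        # template-space VOI centre, mm
    print(transfer_voi(A, voi))              # [12. -5.  3.]
    ```

    A rigid alignment would restrict the upper-left 3×3 block to a rotation, whereas the affine case additionally permits scaling and shear; both are applied to VOIs the same way.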

    Committee: Jing-Huei Lee PhD (Committee Chair); Wen-Jang Chu PhD (Committee Member); James Eliassen PhD (Committee Member); William Ball MD (Committee Member) Subjects: Biomedical Research