Search Results (1 - 25 of 2345 Results)

Ye, En. TeamWATCH: Visualizing Development Activities Using a 3-D City Metaphor to Improve Conflict Detection and Team Awareness
Doctor of Philosophy (PhD), Ohio University, 2017, Electrical Engineering & Computer Science (Engineering and Technology)
Awareness of others’ activities has been widely recognized as essential to facilitating team coordination in the Computer-Supported Cooperative Work community. Several field studies of software developers in large software companies such as Microsoft showed that coworker and artifact awareness are the most common information needs for software developers; however, they are also the most frequently unsatisfied ones. As a result, developers may duplicate work or create conflicts because they do not know the status of others or of the project as a whole. To address this problem, we propose a new approach to visualizing developers’ activities using a 3-D city metaphor and implement it in a workspace awareness tool named TeamWATCH (Team-based Workspace Awareness Toolkit and Collaboration Hub). TeamWATCH extracts awareness information about artifacts, revisions, and developers from their local workspaces, the version control repository, and the bug tracking system. It then visualizes both real-time and historical awareness information together in a 3-D common view shared by the whole team. It also highlights active artifacts that are being changed locally via eye-catching animations and provides a customized personal view for each developer. The main contributions of this dissertation are 1) a 3-D software visualization scheme that improves workspace awareness and enhances team collaboration; 2) the design and implementation of the workspace awareness tool TeamWATCH using this visualization scheme; and 3) evaluations of the effectiveness of such awareness tools, using TeamWATCH as an example, in maintaining project awareness and detecting and resolving conflicts via three controlled user experiments. The experimental results showed that the subjects using TeamWATCH performed significantly better in comprehending software revision history and project evolution and in detecting and resolving conflicts early.

Committee:

Chang Liu (Advisor)

Subjects:

Computer Science

Keywords:

Software collaboration; software visualization; workspace awareness

Alaql, Omar Abdulrahman. General Purpose Approaches for No-Reference Image Quality Assessment
PHD, Kent State University, 2017, College of Arts and Sciences / Department of Computer Science
The last decade has witnessed great advances in digital imaging. Massive numbers of digital images are being captured by mobile digital cameras due to the increasing popularity of mobile imaging devices. These images are subjected to many processing stages while being stored, transmitted, or shared over a network connection. Unfortunately, these processing stages can add visual degradation to the original image. Such degradations reduce perceived visual quality, which leads to an unsatisfactory experience for human viewers. Therefore, Image Quality Assessment (IQA) has become a topic of high interest and intense research over the last decade. The aim of IQA is to automatically assess image quality in agreement with human judgments. This dissertation mainly focuses on the most challenging category of IQA: general-purpose No-Reference Image Quality Assessment (NR-IQA), where the goal is to assess the quality of images without information about the reference images and without prior knowledge of the types of distortions in the tested image. This dissertation contributes to image quality assessment research by proposing three novel approaches for NR-IQA and one model for image distortion classification. First, we propose improvements in image distortion classification by introducing a training model based on a new feature collection. Second, we propose an NR-IQA technique that utilizes our improved classification model and is based on the hypothesis that an effective combination of image features can be used to develop efficient NR-IQA approaches. Third, an NR-IQA technique is proposed based on Natural Scene Statistics (NSS), finding the distance between natural images and distorted images in a three-dimensional space. Fourth, a novel NR-IQA approach is presented that utilizes multiple Deep Belief Networks (DBNs) with multiple regression models. We have evaluated the performance of the proposed and some existing models on a fair basis. The obtained results show that our models yield a significant improvement over existing approaches.
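
A minimal, hypothetical sketch of the feature-plus-regressor pattern behind the second contribution (the features, images, and scores below are invented placeholders, not the dissertation's feature collection or data):

```python
# Hypothetical sketch: a toy feature-based NR-IQA regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def simple_features(img):
    """Two toy quality-related features: global contrast and gradient energy."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.std(), np.abs(gx).mean() + np.abs(gy).mean()])

# Placeholder training data: grayscale images with subjective scores (MOS).
rng = np.random.default_rng(0)
images = [rng.random((32, 32)) for _ in range(100)]
mos = rng.random(100) * 100
X = np.stack([simple_features(im) for im in images])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, mos)
print(model.predict(simple_features(images[0])[None, :]))
```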

Committee:

Cheng-Chang Lu (Advisor); Austin Melton (Committee Member); Kambiz Ghazinour (Committee Member); Jun Li (Committee Member); Mohammed Khan (Committee Member)

Subjects:

Computer Science

Abounia Omran, Behzad. Application of Data Mining and Big Data Analytics in the Construction Industry
Doctor of Philosophy, The Ohio State University, 2016, Food, Agricultural and Biological Engineering
In recent years, the digital world has experienced an explosion in the magnitude of data being captured and recorded in various industry fields. Accordingly, big data management has emerged to analyze and extract value out of the collected data. The traditional construction industry is also experiencing an increase in data generation and storage. However, its potential and ability for adopting big data techniques have not been adequately studied. This research investigates the trends of utilizing big data techniques in the construction research community, which eventually will impact construction practice. For this purpose, the application of 26 popular big data analysis techniques in six different construction research areas (represented by 30 prestigious construction journals) was reviewed. Trends, applications, and their associations in each of the six research areas were analyzed. Then, a more in-depth analysis was performed for two of the research areas, construction project management and computation and analytics in construction, to map the associations and trends between different construction research subjects and selected analytical techniques. In the next step, the results from the trend and subject analysis were used to identify a promising technique, the Artificial Neural Network (ANN), for studying two construction-related subjects: prediction of concrete properties and prediction of soil erosion quantity in highway slopes. This research also compared the performance and applicability of ANN against eight predictive modeling techniques commonly used by other industries in predicting the compressive strength of environmentally friendly concrete. The results of this research provide a comprehensive analysis of the current status of applying big data analytics techniques in construction research, including trends, frequencies, and usage distribution in six different construction-related research areas, and demonstrate the applicability and performance level of selected data analytics techniques, with an emphasis on ANN, in construction-related studies. The main purpose of this dissertation was to help practitioners and researchers identify a suitable and applicable data analytics technique for their specific construction or research issues and to provide insights into potential research directions.
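
As a rough, hypothetical illustration of the kind of ANN regression compared in this work (the mix-design features and target below are synthetic placeholders, not the dissertation's datasets):

```python
# Hypothetical sketch of an ANN predicting concrete compressive strength.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder mix-design features: cement, water, aggregate (kg/m^3), age (days).
rng = np.random.default_rng(42)
X = rng.uniform([200, 120, 800, 1], [500, 250, 1200, 365], size=(200, 4))
# Synthetic target standing in for measured strength (MPa).
y = 0.08 * X[:, 0] - 0.10 * X[:, 1] + 5 * np.log(X[:, 3]) + rng.normal(0, 2, 200)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                 random_state=0))
ann.fit(X, y)
print("Predicted strength (MPa):", ann.predict(X[:1])[0])
```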

Committee:

Qian Chen, Dr. (Advisor)

Subjects:

Civil Engineering; Comparative Literature; Computer Science

Keywords:

Construction Industry; Big Data; Data Analytics; Data mining; Artificial Neural Network; ANN; Compressive Strength; Environmentally Friendly Concrete; Soil Erosion; Highway Slope; Predictive Modeling; Comparative Analysis

Joshi, Amit Krishna. Exploiting Alignments in Linked Data for Compression and Query Answering
Doctor of Philosophy (PhD), Wright State University, 2017, Computer Science and Engineering PhD
Linked data has experienced accelerated growth in recent years due to its interlinking ability across disparate sources, made possible via machine-processable RDF data. Today, a large number of organizations, including governments and news providers, publish data in RDF format, inviting developers to build useful applications through reuse and integration of structured data. This has led to a tremendous increase in the amount of RDF data on the web. Although the growth of RDF data can be viewed as a positive sign for semantic web initiatives, it causes performance bottlenecks for RDF data management systems that store and provide access to data. In addition, a growing number of ontologies and vocabularies make retrieving data a challenging task. The aim of this research is to show how alignments in Linked Data can be exploited to compress and query linked datasets. First, we introduce two compression techniques that compress RDF datasets through identification and removal of semantic and contextual redundancies in linked data. Logical Linked Data Compression is a lossless compression technique which compresses a dataset by generating a set of new logical rules from the dataset and removing triples that can be inferred from these rules. Contextual Linked Data Compression is a lossy compression technique which compresses datasets by performing schema alignment and instance matching, followed by pruning of alignments based on confidence value and subsequent grouping of equivalent terms. Depending on the structure of the dataset, the first technique was able to prune more than 50% of the triples. Second, we propose an Alignment based Linked Open Data Querying System (ALOQUS) that allows users to write query statements using concepts and properties not present in linked datasets, and we show that querying does not require a thorough understanding of the individual datasets and their interconnecting relationships. Finally, we present LinkGen, a multipurpose synthetic Linked Data generator that generates large amounts of repeatable and reproducible RDF data using statistical distributions and interlinks it with real-world entities using alignments.
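
The lossless technique's core idea, dropping triples that a mined rule can re-infer at decompression time, can be sketched in a few lines (the rule and triples below are hypothetical stand-ins for mined logical rules over real RDF):

```python
# Hypothetical sketch of lossless "logical" compression: drop triples
# that a rule can re-infer at decompression time.
triples = {
    ("alice", "worksFor", "acme"),
    ("alice", "memberOf", "acme"),   # inferable: worksFor => memberOf
    ("bob", "worksFor", "acme"),
    ("bob", "memberOf", "acme"),
}
# A mined rule: ?s worksFor ?o  =>  ?s memberOf ?o
rule = ("worksFor", "memberOf")

def compress(triples, rule):
    body, head = rule
    inferable = {(s, head, o) for (s, p, o) in triples if p == body}
    return triples - inferable

def decompress(compressed, rule):
    body, head = rule
    inferred = {(s, head, o) for (s, p, o) in compressed if p == body}
    return compressed | inferred

small = compress(triples, rule)
assert decompress(small, rule) == triples   # lossless round trip
print(f"{len(triples)} triples -> {len(small)} after pruning")
```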

Committee:

Pascal Hitzler, Ph.D. (Advisor); Guozhu Dong, Ph.D. (Committee Member); Krishnaprasad Thirunarayan, Ph.D. (Committee Member); Michelle Cheatham, Ph.D. (Committee Member); Subhashini Ganapathy, Ph.D. (Committee Member)

Subjects:

Computer Science

Keywords:

Linked Data; RDF Compression; Ontology Alignment; Linked Data Querying; Synthetic RDF Generator; SPARQL

Moharreri, Kayhan. Augmenting Collective Expert Networks to Improve Service Level Compliance
Doctor of Philosophy, The Ohio State University, 2017, Computer Science and Engineering
This research introduces and develops the new subfield of large-scale collective expert networks (CENs) concerned with time-constrained triaging, which has become critical to the delivery of increasingly complex enterprise services. The main research contribution augments existing human-intensive interactions in the CEN with models that use ticket content and transfer sequence histories to generate assistive recommendations. This is achieved with a recommendation framework that improves the performance of the CEN by: (1) resolving incidents to meet customer time constraints and satisfaction; (2) conforming to previous transfer sequences that have already achieved their Service Levels; and (3) addressing trust to encourage adoption of recommendations. A novel basis of this research is the exploration and discovery of resolution process patterns and their use in constructing an assistive resolution recommendation framework. Additional new discoveries regarding CENs include the existence of resolution workflows and their frequent use to carry out service-level-effective resolution of regular content. In addition, the ticket-specific expertise of the problem solvers and their dynamic ticket load were found to be factors in the time taken to resolve an incoming ticket. Transfers were also found to reflect the experts' local problem-solving intent with respect to the source and target nodes; the network performs well if certain transfer intents (such as resolution and collective) are exhibited more often than others (such as mediation and exploratory). The assistive resolution recommendation framework incorporates appropriate strategies for addressing the entire spectrum of incidents. This framework consists of a two-level classifier with the following parts: (1) a content tagger for routine/non-routine classification; (2) a sequence classifier for resolution workflow recommendation; (3) response time estimation based on learned dynamics of the CEN (i.e., expertise and ticket load); and (4) transfer intent identification. Our solution makes reliable proactive recommendations only in the case of adequate historical evidence, thus helping to maintain a high level of trust with the interacting users in the CEN. By separating well-established resolution workflows from incidents that depend on experts' experiential and `tribal' knowledge for resolution, this research shows a 34% performance improvement over the existing content-aware greedy transfer model; it is also estimated that there will be a 10% reduction in the volume of service-level-breached tickets. The contributions are shown to benefit enterprise support and delivery services by providing (1) lower decision and resolution latency, (2) lower likelihood of service-level violations, and (3) higher workforce availability and effectiveness. More generally, the contributions of this research are applicable to a broad class of problems where time-constrained, content-driven problem-solving by human experts is a necessity.

Committee:

Jayashree Ramanathan (Advisor); Rajiv Ramnath (Committee Member); Srinivasan Parthasarathy (Committee Member); Gagan Agrawal (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Information Science; Information Technology

Keywords:

IT Service Management, Collective Expert Networks, Process Discovery, Ticket Routing Recommendations, Resolution Time Estimation, Event Mining, IT Service Support, Service Level Compliance, Human-in-the-loop, Learning from Enterprise Event Data

Tadisetty, Srikanth. Prediction of Psychosis Using Big Web Data in the United States
MS, Kent State University, 2018, College of Arts and Sciences / Department of Computer Science
Posting on the internet, including weblogs or social media, is one of the ways individuals seek an outlet to express themselves or their mental health concerns. For many mental health issues such as psychosis, the timing of detection and treatment is critical; short- and long-term outcomes are better when individuals begin treatment close to the onset of psychosis. While the internet offers a positive medium for short-term therapy, it is not a face-to-face therapy session, wherein a trained professional is better able to deduce the root of the problem. Many clinicians are adopting electronic communication to strengthen the therapeutic alliance with their patients. A drawback of psychiatry is that it lacks the objective tests for mental illnesses that are otherwise present in medicine; current neuroscience has not yet found genetic markers that can characterize individual mental illnesses. A thought disorder (ThD), a symptom widely found in people suffering from schizophrenia, is diagnosed from the level of coherence, when the flow of ideas is muddled and lacks word associations. A system that can explore the use of speech analysis to aid psychiatric diagnosis is highly desirable and would support early detection and effective treatment. This thesis introduces a framework, the Psychosis Onset Prediction System (POPS), to predict the onset of psychosis based on written language habits. A scrape of a multitude of individual comments is analyzed using a trained psychosis prediction module that predicts whether an individual is psychotic (based on the semantics) using natural language processing, machine learning techniques, and a customized corpus of terms consistent with psychotic language tendencies, created using speech analysis techniques. The effectiveness of the corpus and its implications for psychosis detection are explored.
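
A minimal sketch of the kind of text-classification core such a prediction module might contain (the posts, labels, and model choice are invented placeholders, not the thesis's corpus or classifier):

```python
# Hypothetical sketch of a text classifier for flagging language tendencies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder posts; a real corpus would be scraped and clinically labeled.
posts = ["the ideas keep sliding into each other and nothing connects",
         "had a great day hiking with friends",
         "they broadcast my thoughts through the walls again",
         "trying a new pasta recipe tonight"]
labels = [1, 0, 1, 0]   # 1 = flagged for psychotic language tendencies

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["my thoughts are not my own lately"]))
```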

Committee:

Kambiz Ghazinour (Advisor)

Subjects:

Computer Science; Health; Mental Health; Psychology; Sociology; Teaching; Technology

Keywords:

Machine Learning; NLP; Natural Language Processing; Web Scraping; Psychosis; Mental Illness; Mental Health; Twitter; Social Media; Psychosis Dictionary; Crisis Prevention; Mental Health Prediction; Psychological Health; Lexical Analysis

Miranda, David J. Music Blocks: Design and Preliminary Evaluation of Interactive Tangible Block Games with Audio and Visual Feedback for Cognitive Assessment and Training
Master of Engineering, Case Western Reserve University, 2018, EMC - Mechanical Engineering
Tangible Geometric Games (TAG-Games) were developed initially for automated cognitive assessment using custom sensor-integrated blocks (SIG-Blocks). Building on this existing technology, Music Blocks incorporates music and audio feedback into TAG-Games to examine the potential of tangible games for cognitive training and assessment. New block enclosure and game board designs implement textures that convey information tangibly. Game algorithms support real-time gameplay and data collection. For preliminary game evaluation, a small-scale human-subject study involving 17 participants was conducted. Among the five Music Blocks games created, Direction Blocks, MineSweeper, and Password Blocks were tested along with three subtests of the Wechsler Adult Intelligence Scale – Fourth Edition (Block Design, Digit Span, and Matrix Reasoning). Initial assessment concluded that tangible games pair best with audio-visual stimuli, and individual games correlated well with some subtests of the WAIS-IV. Other results, limitations, and conclusions of this study are discussed within the text.

Committee:

Kiju Lee (Committee Chair); Ming-Chun Huang (Committee Member); Marc Buchner (Committee Member)

Subjects:

Cognitive Psychology; Computer Science; Mechanical Engineering; Music

Keywords:

tangible games; cognitive assessment; music games; block games; computerized cognitive assessment; audio-tangible games

Rematska, Giorgia. A Stochastic Petri Net Reverse Engineering Methodology for Deep Understanding of Technical Documents
Doctor of Philosophy (PhD), Wright State University, 2018, Computer Science and Engineering PhD
Systems reverse engineering has gained great attention over time and is associated with numerous research areas. The importance of this research derives from several technological necessities: security analysis and learning purposes are two of them, and both can greatly benefit from reverse engineering. More specifically, reverse engineering of technical documents for deeper automatic understanding is an area to which reverse engineering can contribute a great deal. In this PhD dissertation we develop a novel reverse engineering methodology for deep understanding of the architectural descriptions of digital hardware systems that appear in technical documents. Initially, we offer a survey of reverse engineering of electronic or digital systems. We also provide a classification of the research methods within this field, and a maturity metric is presented to highlight weaknesses and strengths of existing methodologies and systems. A technical document (TD) is typically composed of several modalities, such as natural language (NL) text, system diagrams, tables, math formulas, graphics, and pictures. Thus, for automatic deep understanding of technical documents, a synergistic collaboration among these modalities is necessary. Here we deal with the synergistic collaboration between NL-text and system diagrams for a better and deeper understanding of a TD. In particular, a technical document is decomposed into two modalities: NL-text and figures of system diagrams. The NL-text is processed with a Natural Language text Understanding (NLU) method, and text sentences are categorized into five categories by a Convolutional Neural Network, while a Diagram-Image-Modeling (DIM) method processes the figures by extracting the system diagrams. More specifically, NLU processes the text from the document and determines the associations among the nouns and their interactions, creating their stochastic Petri net (SPN) graph model. DIM processes and analyzes the figures to transform each diagram into a graph model that holds all relevant information appearing in the diagram. We then combine (associate) these models in a synergistic way and create a synergistic SPN graph. From this SPN graph we obtain the functional specifications that form the behavior of the system, in the form of pseudocode. In parallel, we extract a flowchart to enhance the reader's understanding of the pseudocode and of the hardware system as a whole.

Committee:

Nikolaos Bourbakis, Ph.D. (Advisor); Soon Chung, Ph.D. (Committee Member); Bin Wang, Ph.D. (Committee Member); Sukarno Mertoguno, Ph.D. (Committee Member)

Subjects:

Computer Engineering; Computer Science

Keywords:

systems reverse engineering; reverse engineering; technical document; stochastic Petri-net;

Yang, Zhaoyuan. Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach
Master of Science, The Ohio State University, 2018, Electrical and Computer Engineering
We adapt the idea of adversarial reinforcement learning to the numerical state inputs of controllers. We propose generating adversarial noise for controller inputs using deep reinforcement learning, and we propose using a reinforcement learning agent as an observer to reduce the effect of adversarial noise. Using a reinforcement learning agent as an observer may also help in transferring knowledge from simulation to the real world. We performed a sequence of analyses of adversarial reinforcement learning and deep reinforcement learning. Through this analysis, we discovered that a deep reinforcement learning agent trained in an ideal environment is not robust to adversarial noise, while training in an adversarial environment makes the agent robust in both adversarial and non-adversarial environments. We make several conjectures about the phenomena we observe and propose an approach for letting a deep reinforcement learning agent make better use of state information. We also propose an approach for using a neural network to automatically find policies that optimize a cost objective. In the end, we discuss possible future work.
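
The core idea of perturbing a controller's numeric state inputs can be illustrated with a toy, hypothetical sketch: a hand-built linear policy attacked with an FGSM-style perturbation (the thesis's noise generator is itself a learned deep RL agent, not this closed form):

```python
# Hypothetical sketch: adversarial noise on a controller's numeric state
# input, for a simple linear policy (not the thesis's learned agents).
import numpy as np

W = np.array([[1.0, -0.5], [0.2, 0.8]])   # toy policy: scores = W @ state

def policy_action(state):
    return int(np.argmax(W @ state))

def adversarial_state(state, eps=0.1):
    """Nudge the state to reduce the score of the currently chosen action."""
    a = policy_action(state)
    grad = W[a]                      # d(score_a)/d(state) for a linear policy
    return state - eps * np.sign(grad)

s = np.array([0.6, 0.4])
print("clean action:", policy_action(s))
print("attacked action:", policy_action(adversarial_state(s, eps=0.5)))
```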

Committee:

Abhishek Gupta (Advisor); Wei Zhang (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Electrical Engineering; Engineering

Keywords:

deep reinforcement learning; control system; adversarial reinforcement learning; machine learning

Othman, Salem. Autonomous Priority Based Routing for Online Social Networks
PHD, Kent State University, 2018, College of Arts and Sciences / Department of Computer Science
Social routing in Online Social Networks (OSNs) is very challenging, as it must balance privacy and performance. This study proposes a Social Online Routing (SOR) protocol for OSNs that satisfies the core requirements of the Stratified Privacy Model (SPM) and minimizes end-to-end routing delays for the social routing information elements exchanged under the SPM. SOR uses five messages (the I-need, I-have, I-thank, I-like/dislike, and I-Ack messages) for carrying routing information. Forwarding models (the I-need, I-have, I-thank, and I-ack modules) and routing algorithms (a topology-aware shortest-path-based routing algorithm, a social-priority-based routing algorithm, and a queue-aware social-priority-based routing algorithm) are introduced. Four anonymization techniques are also utilized for stratified privacy. To evaluate the proposed protocol, an Online Social Networks Simulator was designed and implemented. Using real datasets from Google Plus, the simulator is used to evaluate end-to-end routing delays for the social routing information elements exchanged under the SPM.

Committee:

Javed Khan, Prof. (Advisor)

Subjects:

Computer Science

Keywords:

Online social networks; Social Priority based Routing; SOR Protocol; Social routing and forwarding; Simulation; Social requests; Anonymization; Privacy leakage; Privacy Enhancing Technologies; Social based Routing; Request Dissemination; Human dynamics;

Ewing, Gabriel. Knowledge Transfer from Expert Demonstrations in Continuous State-Action Spaces
Master of Sciences, Case Western Reserve University, 2018, EECS - Computer and Information Sciences
In this thesis, we address the task of reinforcement learning in continuous state and action spaces. Specifically, we consider multi-task reinforcement learning, where a sequence of reinforcement learning tasks has to be solved, and inverse reinforcement learning, where a reward function has to be learned from expert demonstrations. We also use trees to represent models, rewards, and value functions in our domains. First, we design an algorithm to learn from demonstration in the presence of a non-smooth reward function. Second, we design another algorithm to perform offline reinforcement learning in the same scenario, which allows us to re-use experiences to help with new tasks. Third, we introduce a method to incorporate weak knowledge about policies with online learning in policy gradient algorithms. These contributions allow us to create a pipeline that efficiently learns and transfers knowledge across a sequence of tasks. We demonstrate our approaches on the task of learning control of a prosthetic arm from expert demonstrations under various scenarios in simulation.

Committee:

Soumya Ray, Dr. (Advisor); Michael Fu, Dr. (Committee Member); M. Cenk Cavusoglu, Dr. (Committee Member); Michael Lewicki, Dr. (Committee Member)

Subjects:

Computer Science

Keywords:

Machine learning; reinforcement learning; continuous actions; knowledge transfer; prostheses

Robinson, Jace D. A Model for Seasonal Dynamic Networks
Master of Science (MS), Wright State University, 2018, Computer Science
Sociotechnological and geospatial processes exhibit time-varying structure that makes insight discovery challenging. This paper presents a statistical model of systems with seasonal dynamics, modeled as a dynamic network, to address this challenge. It assumes the probability of edge formation depends on a type assigned to the incident nodes and on the current time, with time dependencies modeled by unique seasonal processes. The model is studied on several synthetic and real datasets. It is shown to fit seasonal datasets with superior fidelity compared to existing network models while remaining equally accurate for networks with randomly changing structure. The model is found to be twice as accurate at predicting future edge counts as competing models on New York City taxi trips, U.S. airline flights, and email communication within the Enron company. An anomaly detection use case for the model is shown for NYC traffic dynamics and email communications between Enron employees.
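
A hypothetical sketch of the generative idea, type-dependent base rates plus a shared seasonal process (the node types, rates, and sinusoidal season below are invented for illustration):

```python
# Hypothetical sketch: edge probability depends on the incident nodes'
# types plus a shared seasonal (time-of-period) effect.
import numpy as np

rng = np.random.default_rng(1)
node_types = np.array([0, 0, 1, 1, 1])          # e.g., two communities
base = np.array([[0.30, 0.05],                  # type-pair base rates
                 [0.05, 0.20]])

def edge_prob(u, v, t, period=24, amp=0.1):
    season = amp * np.sin(2 * np.pi * t / period)   # shared seasonal process
    p = base[node_types[u], node_types[v]] + season
    return np.clip(p, 0.0, 1.0)

def sample_snapshot(t):
    n = len(node_types)
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < edge_prob(u, v, t)]

print("edges at t=6:", sample_snapshot(6))    # near the seasonal peak
print("edges at t=18:", sample_snapshot(18))  # near the seasonal trough
```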

Committee:

Derek Doran, Ph.D. (Committee Chair); Tanvi Banerjee, Ph.D. (Committee Member); Fred Garber, Ph.D. (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Information Science

Keywords:

Stochastic Blockmodel; Dynamic Networks; Seasonal Time Series; Kalman Filter

Howard, Shaun Michael. Deep Learning for Sensor Fusion
Master of Sciences (Engineering), Case Western Reserve University, 2017, EECS - Computer and Information Sciences
The use of multiple sensors in modern-day vehicular applications is necessary to provide a complete view of the surroundings for advanced driver assistance systems (ADAS) and automated driving. The fusion of these sensors provides increased certainty in the recognition, localization, and prediction of surroundings. A deep learning-based sensor fusion system is proposed to fuse two independent, multi-modal sensor sources. This system is shown to successfully learn the complex capabilities of an existing state-of-the-art sensor fusion system and to generalize well to new sensor fusion datasets. It has high precision and recall with minimal confusion after training on several million examples of labeled multi-modal sensor data. It is robust, has a sustainable training time, and has real-time response capabilities on a deep learning PC with a single NVIDIA GeForce GTX 980Ti graphics processing unit (GPU).

Committee:

Wyatt Newman, Dr. (Committee Chair); M. Cenk Cavusoglu, Dr. (Committee Member); Michael Lewicki, Dr. (Committee Member)

Subjects:

Artificial Intelligence; Computer Science

Keywords:

deep learning; sensor fusion; deep neural networks; advanced driver assistance systems; automated driving; multi-stream neural networks; feedforward; multilayer perceptron; recurrent; gated recurrent unit; long-short term memory; camera; radar;

Li, Mao. Spatial-temporal classification enhancement via 3-D iterative filtering for multi-temporal Very-High-Resolution satellite images
Master of Science, The Ohio State University, 2018, Electrical and Computer Engineering
The use of spatial-temporal remote sensing images to interpret ground objects has been widely studied. Due to spectral ambiguities caused by inevitable factors such as meteorological conditions, sunlight illumination, sensor radiation performance, and earth-object reflectance, the interpretation accuracy of multi-class classification using a single temporal image is unsatisfactory. Under the hypothesis that earth objects have temporal consistency, this thesis proposes a classification accuracy enhancement approach that utilizes 3-D temporal very-high-resolution images, where the digital surface model is generated through stereo dense matching. First, the probability distribution over the images' coverage areas is derived from a supervised Random Forest classifier. Then, the proposed method iteratively filters the probability maps with a 3-D bilateral filter built upon the spectrum, the spatial domain, and the height information of the surface. Compared with the single filtering enhancement studied before, continuous message passing between data from different dates is achieved by filtering iteratively until the probabilities converge. Three experiments were conducted on 8 temporally consistent images covering different types of city layout in Port-au-Prince, the capital of Haiti, including open grounds and dense residential and educational areas. After classification enhancement, the overall classification accuracy increased by 2%–6%. The results illustrate that although the study areas experienced a devastating earthquake leading to significant changes in the city landscape, the constraint on surface height effectively eliminates pre-enhancement classification errors. Furthermore, although the first filtering iteration contributes the most to classification accuracy enhancement, the approach consistently enhances classification performance through further iterations for similar earth objects, such as road and ground or permanent shelters and buildings.
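
A much-simplified, hypothetical sketch of the iterative filtering step, here a height-weighted bilateral-style kernel applied to toy probability maps (the actual 3-D filter also weights by spectral and spatial similarity):

```python
# Hypothetical sketch: iteratively smooth per-class probability maps with
# weights that respect surface height (a bilateral-style kernel).
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 8, 8, 3
prob = rng.dirichlet(np.ones(C), size=(H, W))      # toy classifier output
height = rng.random((H, W))                        # toy digital surface model

def bilateral_step(prob, height, sigma_h=0.1):
    out = prob.copy()
    for y in range(H):
        for x in range(W):
            acc, wsum = np.zeros(C), 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        w = np.exp(-((height[ny, nx] - height[y, x]) ** 2)
                                   / (2 * sigma_h ** 2))
                        acc += w * prob[ny, nx]
                        wsum += w
            out[y, x] = acc / wsum
    return out

for _ in range(5):                 # iterate until probabilities stabilize
    prob = bilateral_step(prob, height)
print("labels:\n", prob.argmax(axis=-1))
```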

Committee:

Rongjun Qin, Dr. (Advisor); Desheng Liu, Dr. (Committee Co-Chair)

Subjects:

Computer Engineering; Computer Science; Electrical Engineering; Geographic Information Science; Geography; Remote Sensing

Keywords:

Image Enhancement; Spatiotemporal probability bilateral filter; Random Forest; Classification

Whalen, Kevin Christopher. A map system to disseminate national science on forests for the creation of regional tree planting prioritization plans
MS, Kent State University, 2017, College of Arts and Sciences / Department of Computer Science
In the United States, urban forestry is sustained through the efforts of individuals, businesses, philanthropic organizations, and government agencies at local, state, and national levels. The i-Tree Tools suite of software promotes the use of peer-reviewed science to explain the benefits that trees provide, in a manner accessible to the general public. This thesis shares the computer-specific knowledge collected during the design, implementation, and continued expansion of i-Tree Landscape. The i-Tree Landscape application is a web-browser-based, online geographic information system, referred to as a web-GIS app. The "pages" of the web-app are part of a system of software libraries and services, along with dedicated hardware, which were specifically researched, compared, selected, and optimally configured for their roles in supporting the system as a whole. This work also briefly touches upon the open source libraries and services running in the Landscape system, as well as some of the decisions they influenced when acquiring hardware to support its deployment. Delivering the data and formulas associated with the benefits of trees for the entire geographic area of the United States is difficult over the internet, especially when it must be achieved via a non-expert interface. To manage this, the flow of the application is separated into five non-sequential steps, prefixed with a landing page and postfixed with a publishable report; this partitioning helps with code responsibility separation as well. In addition to producing a tailorable report describing the benefits of trees, the primary purpose of the application is to help prioritize tree planting efforts, which foresters need in order to allocate resources for the popular practice of mass tree plantings. The planning is done via a customizable model utilizing nearly all of the available attributes as weighting options. The regional aggregations for this are available to users through nine boundary layers, most notably counties, block groups, and watersheds. The research supporting the data on trees comes from working directly with the authors of peer-reviewed research at the United States Department of Agriculture Forest Service's Northern Research Station at the College of Environmental Science and Forestry in Syracuse, New York. i-Tree Landscape has succeeded in becoming a science dissemination facility that uses information visualization to support decisions promoting urban forestry stewardship through modern web-GIS and data processing techniques.
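
The prioritization model reduces to a weighted score over normalized regional attributes; a tiny hypothetical sketch (the attribute names, weights, and values are invented, not i-Tree Landscape's actual model):

```python
# Hypothetical sketch of a weighted tree-planting prioritization score.
regions = {
    "county_A": {"canopy_gap": 0.60, "population_density": 0.80, "impervious": 0.70},
    "county_B": {"canopy_gap": 0.30, "population_density": 0.40, "impervious": 0.20},
}
weights = {"canopy_gap": 0.5, "population_density": 0.3, "impervious": 0.2}

def priority(attrs, weights):
    """Weighted sum of normalized attributes (higher = plant here first)."""
    return sum(weights[k] * attrs[k] for k in weights)

ranked = sorted(regions, key=lambda r: priority(regions[r], weights), reverse=True)
for r in ranked:
    print(r, round(priority(regions[r], weights), 3))
```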

Committee:

Cheng-Chang Lu, PhD (Advisor); Austin Melton, PhD (Committee Member); Gokarna Sharma, PhD (Committee Member)

Subjects:

Computer Science; Ecology; Environmental Science; Geography; Urban Forestry; Urban Planning

Keywords:

budget national map processing; geographic information system; GIS; national land cover; forestry; tree planting prioritization; GDAL; GEOS; GeoServer; PostGIS; JTS; Open Geospatial Consortium; OGC; Open Source Geospatial Foundation; OSGeo;

Koya, Bharath Kumar. Sched-ITS: An Interactive Tutoring System to Teach CPU Scheduling Concepts in an Operating Systems Course
Master of Science (MS), Wright State University, 2017, Computer Science
Operating systems is an essential course in the computer science curriculum, helping students develop a mental model of how computer operating systems work. The internal mechanisms and processes of an operating system (OS) are often complex, non-deterministic, and intangible, which makes them difficult for students to understand. One such concept is central processing unit (CPU) scheduling, which forms the basis of multiprogramming in an OS. In practice, OS courses involve classroom lectures describing high-level abstractions of the concepts, and students complete programming assignments to apply the material in a more concrete way. Depending on the programming assignments, this approach may leave students with only a theoretical understanding of OS ideas, which may differ from the way these concepts are actually implemented in an OS. What many students require is practical knowledge of OS implementation to supplement the high-level presentation of concepts taught in class or presented in a textbook. To bridge the gap between operating system theory and practical implementation, this research describes the development of an interactive simulation that presents the theory of CPU scheduling through visualizations and simulations. This thesis discusses a prototype interactive tutoring system (ITS) named Sched-ITS. The tool covers all the important CPU scheduling algorithms: first-come, first-served (FCFS), round robin (RR), shortest job first (SJF), shortest remaining time first (SRTF), priority with pre-emption, and priority without pre-emption. Sched-ITS also provides graphical visualization of how context switches occur during CPU scheduling in a real operating system. Sched-ITS makes use of the JavaFX framework for visualization and the Perf tool for tracing an OS's scheduling activities. It presents the scheduling activities of background processes as well as pre-defined or user-defined processes, and it can display how the scheduling order changes across different algorithms for the same set of processes in a Linux operating system.
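
As a quick, hypothetical illustration of two of the policies Sched-ITS visualizes, FCFS and non-preemptive SJF over toy CPU bursts (not the tool's own code):

```python
# Hypothetical sketch: FCFS and non-preemptive SJF waiting times.
def fcfs(bursts):
    t, waits = 0, []
    for b in bursts:          # serve in arrival order
        waits.append(t)
        t += b
    return waits

def sjf(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    t, waits = 0, [0] * len(bursts)
    for i in order:           # shortest burst first
        waits[i] = t
        t += bursts[i]
    return waits

bursts = [7, 3, 1]            # CPU bursts of P0, P1, P2 (all arrive at t=0)
print("FCFS waits:", fcfs(bursts))   # [0, 7, 10]
print("SJF  waits:", sjf(bursts))    # [4, 1, 0]
```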

Committee:

Adam R. Bryant, Ph.D. (Committee Chair); Mateen M. Rizki, Ph.D. (Committee Member); Yong Pei, Ph.D. (Committee Member)

Subjects:

Computer Science

Keywords:

CPU Scheduling Visualization; Linux Scheduler Visualization; Perf tool; Scheduler Trace Points; JavaFX

Chen, Jitong. On Generalization of Supervised Speech Separation
Doctor of Philosophy, The Ohio State University, 2017, Computer Science and Engineering
Speech is essential for human communication as it not only delivers messages but also expresses emotions. In reality, speech is often corrupted by background noise and room reverberation. Perceiving speech in low signal-to-noise ratio (SNR) conditions is challenging, especially for hearing-impaired listeners. Therefore, we are motivated to develop speech separation algorithms to improve intelligibility of noisy speech. Given its many applications, such as hearing aids and robust automatic speech recognition (ASR), speech separation has been an important problem in speech processing for decades. Speech separation can be achieved by estimating the ideal binary mask (IBM) or ideal ratio mask (IRM). In a time-frequency (T-F) representation of noisy speech, the IBM preserves speech-dominant T-F units and discards noise-dominant ones. Similarly, the IRM adjusts the gain of each T-F unit to suppress noise. As such, speech separation can be treated as a supervised learning problem where one estimates the ideal mask from noisy speech. Three key components of supervised speech separation are learning machines, acoustic features and training targets. This supervised framework has enabled the treatment of speech separation with powerful learning machines such as deep neural networks (DNNs). For any supervised learning problem, generalization to unseen conditions is critical. This dissertation addresses generalization of supervised speech separation. We first explore acoustic features for supervised speech separation in low SNR conditions. An extensive list of acoustic features is evaluated for IBM estimation. The list includes ASR features, speaker recognition features and speech separation features. In addition, we propose the Multi-Resolution Cochleagram (MRCG) feature to incorporate both local information and broader spectrotemporal contexts. We find that gammatone-domain features, especially the proposed MRCG features, perform well for supervised speech separation at low SNRs. Noise segment generalization is desired for noise-dependent speech separation. When tested on the same noise type, a learning machine needs to generalize to unseen noise segments. For nonstationary noises, there exists a considerable mismatch between training and testing segments, which leads to poor performance during testing. We explore noise perturbation techniques to expand training noise for better generalization. Experiments show that frequency perturbation effectively reduces false-alarm errors in mask estimation and leads to improved objective metrics of speech intelligibility. Speech separation in unseen environments requires generalization to unseen noise types, not just noise segments. By exploring large-scale training, we find that a DNN based IRM estimator trained on a large variety of noises generalizes well to unseen noises. Even for highly nonstationary noises, the noise-independent model achieves similar performance as noise-dependent models in terms of objective speech intelligibility measures. Further experiments with human subjects lead to the first demonstration that supervised speech separation improves speech intelligibility for hearing-impaired listeners in novel noises. Besides noise generalization, speaker generalization is critical for many applications where target speech may be produced by an unseen speaker. We observe that training a DNN with many speakers leads to poor speaker generalization. The performance on seen speakers degrades as additional speakers are added for training. Such a DNN suffers from the confusion of target speech and interfering speech fragments embedded in noise. We propose a model based on recurrent neural network (RNN) with long short-term memory (LSTM) to incorporate the temporal dynamics of speech. We find that the trained LSTM keeps track of a target speaker and substantially improves speaker generalization over DNN. Experiments show that the proposed model generalizes to unseen noises, unseen SNRs and unseen speakers.
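
The mask definitions at the heart of this framework are concrete enough to sketch; a hypothetical toy example in which random magnitudes stand in for a real time-frequency decomposition (the square-root IRM form shown is one common choice):

```python
# Hypothetical sketch of the IBM and IRM on a toy T-F grid.
import numpy as np

rng = np.random.default_rng(0)
S = rng.random((64, 100))          # |speech| magnitudes (freq x time)
N = rng.random((64, 100))          # |noise| magnitudes

irm = np.sqrt(S**2 / (S**2 + N**2 + 1e-12))   # common square-root IRM form
ibm = (S > N).astype(float)                    # ideal binary mask

mixture = S + N                    # magnitude-domain toy mixture
enhanced = irm * mixture           # gain applied per T-F unit
print("mean IRM gain:", irm.mean(), "IBM keep rate:", ibm.mean())
```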

Committee:

DeLiang Wang (Advisor); Eric Fosler-Lussier (Committee Member); Eric Healy (Committee Member)

Subjects:

Computer Science; Engineering

Keywords:

Speech separation; speech intelligibility; computational auditory scene analysis; mask estimation; supervised learning; deep neural networks; acoustic features; noise generalization; SNR generalization; speaker generalization;

Liu, Yating. Motif Selection via a Tabu Search Solution to the Set Cover Problem
Master of Science (MS), Ohio University, 2017, Computer Science (Engineering and Technology)
Transcription factors (TFs) regulate gene expression through interaction with specific DNA regions, called transcription factor binding sites (TFBSs). Identifying TFBSs can help in understanding the mechanisms of gene regulation and the biology of human diseases. Motif discovery is the traditional method for discovering TFBSs. However, current motif discovery tools tend to generate a number of motifs too large to permit biological validation. To address this problem, the motif selection problem is introduced: the aim is to select a small set of motifs, from the discovered motifs, that covers a high percentage of the genomic input sequences. Tabu search, a metaheuristic method based on local search, is introduced to solve the motif selection problem. The performance of the three proposed motif selection methods, tabu-SCP, tabu-PSC, and tabu-PNPSC, was evaluated by applying them to ChIP-seq data from the ENCyclopedia of DNA Elements (ENCODE) project. Motif selection was performed on 46 factor groups comprising 158 human ChIP-seq data sets. The results of the three motif selection methods were compared with the greedy method, the enrichment method, and relaxed integer linear programming (RILP). Tabu-PNPSC selected the smallest set of motifs with the highest overall accuracy; the average number of selected motifs was 1.37 and the average accuracy was 72.47%. Tabu-PNPSC was also used to identify putative regulatory-element binding sites that respond to the overproduction of the small RNA RyfA1 in the bacterium Shigella dysenteriae. Six motifs were selected by tabu-PNPSC, and the overall accuracy was 75.5%.
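
Motif selection is cast here as set cover; a small hypothetical sketch of a greedy baseline refined by tabu-style swap moves (toy motifs and coverage sets, not the thesis's tabu-SCP/PSC/PNPSC formulations):

```python
# Hypothetical sketch: motifs as sets of covered sequence ids; pick a
# small covering subset via greedy + tabu-style local search.
sequences = set(range(8))
motifs = {"m1": {0, 1, 2}, "m2": {2, 3, 4}, "m3": {4, 5, 6, 7},
          "m4": {0, 3, 6}, "m5": {1, 5, 7}}

def greedy_cover(universe, sets_):
    chosen, covered = [], set()
    while covered != universe:
        best = max(sets_, key=lambda k: len(sets_[k] - covered))
        if not sets_[best] - covered:
            break                  # no motif adds new coverage
        chosen.append(best)
        covered |= sets_[best]
    return chosen

def coverage(sol):
    return len(set().union(*(motifs[m] for m in sol)) if sol else set())

solution = greedy_cover(sequences, motifs)
tabu, best = [], list(solution)
for _ in range(20):                # tabu-style swap moves
    moves = [(out_, in_) for out_ in solution for in_ in motifs
             if in_ not in solution and (out_, in_) not in tabu]
    if not moves:
        break
    out_, in_ = max(moves, key=lambda mv: coverage(
        [m for m in solution if m != mv[0]] + [mv[1]]))
    solution = [m for m in solution if m != out_] + [in_]
    tabu.append((in_, out_))       # forbid the immediate reverse move
    tabu = tabu[-5:]               # short tabu tenure
    if coverage(solution) >= coverage(best) and len(solution) <= len(best):
        best = list(solution)
print("selected motifs:", best, "coverage:", coverage(best), "/", len(sequences))
```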

Committee:

Lonnie Welch (Advisor)

Subjects:

Bioinformatics; Computer Science

Keywords:

motif selection; tabu search; set cover problem

Jenson, Sage. Digital Morphologies: Environmentally-Influenced Generative Forms
BA, Oberlin College, 2017, Computer Science
We present a generative method to grow triangular meshes with organically-shaped features. Through the application of simplified forces, millions of particles develop into complex 3D forms in silico. These forms interact with external environments in a variety of ways, allowing for the integration of the proposed technique with pre-existing 3D objects and scenes. Large simulation sizes were computationally achieved through the massively parallel capabilities of modern Graphics Processing Units (GPUs).

Committee:

Robert Bosch (Advisor); Tom Wexler (Advisor)

Subjects:

Computer Science

Keywords:

graphics, 3D, coral, morphology, GPGPU, GPU, parallel computing, CUDA, simulation, organic, growth, particles, collision detection

Billa, Anka Babu. Development of an Ultra-Portable Non-Contact Wound Measurement System
Master of Science (MS), Wright State University, 2017, Computer Science
Continuous monitoring of changes in wound size is key to correctly predicting whether wounds will heal readily with conventional treatment or require more aggressive treatment strategies. Unfortunately, existing wound measurement solutions do not meet clinical demand due to their limitations in accuracy, operating complexity and time, acquisition and operation cost, or reproducibility, resulting in unnecessarily lengthy recovery or extra treatment procedures, an excessively high financial cost, and in many cases extended usage of addictive painkillers. In this thesis, we propose and develop a low-cost, portable, non-contact solution that combines multi-spectral imaging with a portfolio of image processing technologies to enable automatic and instantaneous wound identification and measurement. It provides full measurements of a wound, surface area, perimeter, length, and width, without requiring the calibration process of existing photogrammetry or laser solutions. We have developed a prototype system that illustrates our image and wound analysis capabilities using off-the-shelf sensor units for capturing images. Our system is capable of automatically identifying emulated wounds on any part of the human body surface and highlights them instantly on a customized GUI. An image processing engine running in the background analyzes the images and computes wound dimensions with an accuracy of 95%. Our experimental results indicate that the system is reliable, consistent, accurate, and reproducible. This research was recently selected for the 2017 I-Corps@Ohio program, a statewide program to assist faculty and graduate students from Ohio universities and colleges in validating the market potential of their technologies and assisting with launching startup companies.

Committee:

Yong Pei, Ph.D. (Advisor); Mateen Rizki, Ph.D. (Committee Member); Krishnaprasad Thirunarayan, Ph.D. (Committee Member)

Subjects:

Computer Science

Keywords:

Wound Dimensions Measurement; Distance between camera and object; Image Processing; Stereo Image pair Disparity; Feature based Matching

Yao, Miaojun. 3D Printable Designs of Rigid and Deformable Models
Doctor of Philosophy, The Ohio State University, 2017, Computer Science and Engineering
3D printing has become increasingly popular in recent years, thanks to the substantial flexibility it provides to 3D designers and manufacturers. However, some significant problems remain to be solved before 3D printing can be widely used in daily life and in the manufacturing industries. First, the high cost of printing material and printing time often makes 3D printing unaffordable for users. Second, the build volume of a 3D printer is limited, and large objects have to be decomposed so that each part fits into the printer. Decomposition raises its own problems, such as how to arrange the parts in a container so the packed size is minimized and how to achieve stable assembly of the parts. Finally, soft models are difficult to design when specific target deformed shapes are desired, due to the complex nonlinearity of elastic deformation. In this dissertation, we focus on solving these problems with novel 3D modeling approaches. First, we present a level-set-based system to divide a 3D model into multiple parts to achieve minimal packed size, as well as other partitioning qualities such as minimal stress load and surface detail alignment. A container structure is constructed in the process to facilitate packing. We find that this system serves both space-saving and fast-printing purposes effectively. Second, we propose a computational framework to design an interlocking structure for a partitioned shell model given a mesh segmentation input. We search for the optimal installation order and installation directions of the pieces based on data-driven and simulation-based metrics, and we build male and female connectors on the boundaries between pieces. Both time and material can be significantly reduced when printing such partitioned shell models, and the assembled object is strong against separation. Finally, we develop a new method to optimize the rest shape of an elastic model so that it can be deformed to a target shape. The algorithm iteratively runs a gradient descent step to optimize the rest shape and a Newton step to obtain quasistatic equilibrium, both of which can be easily parallelized on the GPU. Performance can be further improved by not solving the steps exactly in every iteration. To prevent the volumetric mesh from degenerating, we introduce an embedded mesh scheme, where only the embedded surface mesh changes during the optimization while the volumetric cage mesh does not. Our experiments show that the method handles various nonlinear elastic material models and deformation goals quickly and robustly.

Committee:

Huamin Wang (Advisor); Yusu Wang (Committee Member); Han-Wei Shen (Committee Member); Brian Joseph (Other)

Subjects:

Computer Engineering; Computer Science

Keywords:

3D printing, 3D packing, interlocking structure, shell modeling, shape partitioning, shape optimization, multi-phase level set, mesh segmentation, inverse elastic problem, GPU computing

Hall, Brenton Taylor. Using the Non-Uniform Dynamic Mode Decomposition to Reduce the Storage Required for PDE Simulations
Master of Mathematical Sciences, The Ohio State University, 2017, Mathematical Sciences
Partial Differential Equation simulations can produce large amounts of data that are very slow to transfer. Many model reduction techniques have been proposed and utilized over the past three decades, but two popular techniques, Proper Orthogonal Decomposition and Dynamic Mode Decomposition, have some hindrances. The Non-Uniform Dynamic Mode Decomposition (NU-DMD), introduced in 2015 by Guéniat et al., overcomes some of these hindrances. In this thesis, the mathematics of the NU-DMD are explained in detail, and three versions of its algorithm are outlined. Furthermore, numerical experiments were performed on the NU-DMD to ascertain its behavior with respect to errors, memory usage, and computational efficiency. It was shown that the NU-DMD could reduce an advection-diffusion simulation to 6.0075% of its original memory storage size. The NU-DMD was also applied to a computational fluid dynamics simulation of a NASA single-stage compressor rotor, which resulted in a reduced model of the simulation (using only three of the five simulation variables) that used only about 4.67% of the full simulation's storage with an overall average percent error of 8.90%. It was concluded that the NU-DMD, if used appropriately, could reduce a model that uses 400GB of memory to a model that uses as little as 18.67GB with less than 9% error. Further conclusions were made about how best to implement the NU-DMD.
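
For orientation, standard DMD, which the NU-DMD builds on, fits a low-rank linear operator that advances snapshots one step; a hypothetical NumPy sketch on random data (not the NU-DMD algorithm itself):

```python
# Hypothetical sketch of standard DMD: fit a rank-r linear operator
# advancing simulation snapshots one step, then store only modes/eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 50, 30, 5                     # state dim, snapshots, rank
X = rng.random((n, m))
X1, X2 = X[:, :-1], X[:, 1:]            # snapshot pairs x_k -> x_{k+1}

U, s, Vh = np.linalg.svd(X1, full_matrices=False)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r].conj().T
Atilde = Ur.conj().T @ X2 @ Vr @ np.linalg.inv(Sr)   # reduced operator

eigvals, Wvec = np.linalg.eig(Atilde)
modes = X2 @ Vr @ np.linalg.inv(Sr) @ Wvec           # DMD modes

# Storage drops from n*m values to r modes + r eigenvalues + amplitudes.
amps = np.linalg.lstsq(modes, X[:, 0], rcond=None)[0]
recon0 = (modes @ amps).real
print("rank-%d reconstruction error at t0: %.3f"
      % (r, np.linalg.norm(recon0 - X[:, 0]) / np.linalg.norm(X[:, 0])))
```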

Committee:

Ching-Shan Chou (Advisor); Jen-Ping Chen (Committee Member)

Subjects:

Aerospace Engineering; Applied Mathematics; Computer Science; Mathematics; Mechanical Engineering

Keywords:

Fluid Dynamics; Fluid Flow; Model Reduction; Partial Differential Equations; reducing memory; Dynamic Mode Decomposition; Decomposition; memory; Non-Uniform Dynamic Mode Decomposition

Kim, Dae Wook. Data-Driven Network-Centric Threat Assessment
Doctor of Philosophy (PhD), Wright State University, 2017, Computer Science and Engineering PhD
As the Internet has grown increasingly popular as a communication and information sharing platform, it has given rise to two major types of Internet security threats related to two primary entities: end-users and network services. First, information leakages from networks can reveal sensitive information about end-users. Second, end-users' systems can be compromised through attacks on network services, such as scanning-and-exploit attacks, spamming, drive-by downloads, and fake anti-virus software. Designing threat assessments to detect these threats is therefore of great importance, and a number of detection systems have been proposed. However, these existing threat assessment systems face significant challenges in terms of i) behavioral diversity, ii) data heterogeneity, and iii) large data volume. To address the challenges of the two major threat types, this dissertation offers three unique contributions. First, we built a new system to identify network users via Domain Name System (DNS) traffic, one of the most important behavior-based tracking methods for addressing privacy threats. The goal of our system is to boost the effectiveness of existing user identification systems by designing effective fingerprint patterns based on semantically limited DNS queries that are missed by existing tracking efforts. Second, we built a novel system to detect fake anti-virus (AV) attacks, which represent an active trend in the distribution of Internet-based malware. Our system aims to boost the effectiveness of existing fake AV attack detection by detecting fake AV attacks in three challenging scenarios: i) fake AV webpages that require user interaction to install malware, instead of using malicious content to run automatic exploitation without the user's consent (e.g., shellcode); ii) fake AV webpages designed to impersonate real webpages using a few representative elements, such as the names and icons of anti-virus products from authentic anti-virus webpages; and iii) fake AV webpages that offer up-to-date solutions (e.g., product versions and threat names) to emerging threats. Finally, we built a novel system to detect malicious online social network (OSN) accounts that participate in online promotion events, with the goal of boosting the effectiveness of existing detection methods such as spammer detection and fraud detection. To achieve this, our framework systematically integrates features that characterize malicious OSN accounts along three dimensions, their general behaviors, their recharging patterns, and their currency usage, and it then leverages a statistical classifier for detection.

Committee:

Junjie Zhang, Ph.D. (Advisor); Adam Robert Bryant, Ph.D. (Committee Member); Bin Wang, Ph.D. (Committee Member); Xuetao Wei, Ph.D. (Committee Member)

Subjects:

Computer Science

Keywords:

network security; fake anti-virus software; intrusion detection; web document analysis; statistical classification; Domain Name System; behavioral fingerprints; privacy; online social networks; virtual currency; malicious accounts

Kasrani, Imen. Development of a Performance Assessment System for Language Learning
Master of Science (MS), Wright State University, 2017, Computer Science
Recent advances in computer-assisted language-speaking learning/training technology have demonstrated promising potential to improve the outcomes of language learning in early education, special education, English as a Second Language (ESL), and foreign language instruction. The growing number of readily available mobile app-based solutions helps encourage interest in learning to speak a foreign language, but their effectiveness is limited by the lack of objective assessment and performance feedback resembling expert judgment. For example, it has been recognized that, in early education, students learn best with one-on-one instruction. Unfortunately, teachers do not have the time, and it is challenging to extend learning to the home without the assistance of an independent learning/training tool. In this thesis research, our objective is to develop an effective and practical solution that helps people learn and practice a new language independently at low cost. We have explored the use of real-time speech recognition, language translation, text synthesis, artificial intelligence (AI), and language intelligibility assessment technologies to develop a learning/training system that provides automatic assessment and instantaneous feedback on language-speaking performance in order to achieve an independent-learning workflow. Furthermore, we have designed and implemented a prototype system that demonstrates the feasibility and effectiveness of such a computer-assisted independent learning/training solution. This prototype can be easily used on a computer, tablet, smartphone, and other portable devices, and it provides a new learning experience that is augmented and enhanced by objective assessment and meaningful feedback intended to improve the language-speaking proficiency of its user. Additionally, it may be used for real-time translation to support conversation across different languages. Our experimental results demonstrate that the proposed system can sufficiently analyze the intelligibility of one's speech, accurately identify mispronounced words, and provide feedback that localizes and highlights errors for continuous practice toward perfection.
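
One simple way to flag mispronounced words, as the prototype's assessment step must, is to align the recognizer's transcript against the target sentence; a hypothetical sketch using Python's difflib (not necessarily the thesis's alignment method):

```python
# Hypothetical sketch: align recognized words against the target sentence
# to flag likely mispronunciations or omissions.
import difflib

target = "the quick brown fox jumps over the lazy dog".split()
heard  = "the quick brown fax jump over lazy dog".split()   # toy ASR output

matcher = difflib.SequenceMatcher(None, target, heard)
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    if op != "equal":
        print(f"{op}: expected {target[i1:i2]} heard {heard[j1:j2]}")
```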

Committee:

Yong Pei, Ph.D. (Advisor); Mateen Rizki, Ph.D. (Committee Member); Paul Bender, Ph.D. (Committee Member); Anna Lyon, Ed.D. (Committee Member)

Subjects:

Computer Science

Keywords:

Performance Assessment; Language Learning; Speech Recognition

Chen, Zhiang. Deep-learning Approaches to Object Recognition from 3D Data
Master of Sciences, Case Western Reserve University, 2017, EMC - Mechanical Engineering
This thesis focuses on deep-learning approaches to recognition and pose estimation of graspable objects using depth information. Recognition and orientation information from depth-only data is encoded by a carefully designed 2D descriptor computed from 3D point clouds. Deep-learning approaches are explored in two main directions: supervised learning and semi-supervised learning. The disadvantages of supervised learning approaches motivate the exploration of unsupervised pretraining: by learning good representations embedded in early layers, subsequent layers can be trained faster and with better performance. An understanding of learning processes from a probabilistic perspective is developed, paving the way for networks based on Bayesian models, including Variational Auto-Encoders. Exploitation of knowledge transfer (re-using parameters learned from alternative training data) is shown to be effective in the present application.
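
The idea of encoding a 3D point cloud as a 2D descriptor can be sketched very simply, e.g., a hypothetical top-down max-height rasterization (the thesis's descriptor is more carefully designed):

```python
# Hypothetical sketch: flatten a 3D point cloud into a 2D depth descriptor.
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.random((500, 3))            # (x, y, z) points in a unit cube

def depth_descriptor(points, bins=16):
    """Rasterize the max height per (x, y) cell into a bins x bins image."""
    img = np.zeros((bins, bins))
    ij = np.minimum((points[:, :2] * bins).astype(int), bins - 1)
    for (i, j), z in zip(ij, points[:, 2]):
        img[i, j] = max(img[i, j], z)
    return img

desc = depth_descriptor(cloud)
print("descriptor shape:", desc.shape, "max:", desc.max().round(3))
```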

Committee:

Wyatt Newman, PhD (Advisor); M. Cenk Çavusoglu, PhD (Committee Member); Roger Quinn, PhD (Committee Member)

Subjects:

Computer Science; Medical Imaging; Nanoscience; Robotics

Keywords:

deep learning; 3D object recognition; semi-supervised learning; knowledge transfer
