Search Results

(Total results 68)


  • 1. Massimino, Brett Operational Factors Affecting the Confidentiality of Proprietary Digital Assets

    Doctor of Philosophy, The Ohio State University, 2014, Business Administration

    The leakage of an organization's proprietary, digital assets to unauthorized parties can be a catastrophic event for any organization. The magnitude of these events has been recently underscored by the Target data breach, in which 70 million consumer credit card accounts were compromised and financial costs are expected to exceed $1 billion. Digital assets have steadily progressed beyond low-value data and information into high-value knowledge-based domains. Failures to protect these latter types of digital assets can have even greater implications for firms, or even for macroeconomic conditions. Using the Target event as an illustrative motivation, we highlight the importance of two relatively unexplored topics within the domain of digital asset protection: (1) vendor management, and (2) worker adherence to standard, well-codified procedures and technologies. We explicitly consider each of these topics through the separate empirical efforts detailed in this dissertation. Our first empirical effort examines the effects of sourcing and location decisions on the confidentiality of digital assets. We frame our study within a product-development dyad, with a proprietary, digital asset being shared between partners. We treat confidentiality as a performance dimension that is influenced by each organization accessing the asset. Specifically, we empirically investigate the realm of electronic video game development and the illegal distribution of these products. We employ a series of web-crawling data collection programs to compile an extensive secondary dataset covering the legitimate development activities of the industry. We then harvest data from the archives of a major black-market distribution channel and leverage these data to derive a novel, product-level measure of asset confidentiality.
We examine the interacting factors of industrial clustering (agglomeration) and national property rights legislations in affecting this confidentiality m (open full item for complete abstract)

    Committee: John Gray (Advisor); Kenneth Boyer (Advisor); James Hill (Committee Member); Elliot Bendoly (Committee Member) Subjects: Business Administration
  • 2. Young, Matthew Securing Data Integrity: A Framework for Risk Mitigation

    Master of Technology Management (MTM), Bowling Green State University, 2024, Technology Management/Quality Systems

    In the contemporary landscape of digital information, safeguarding data integrity has become imperative for organizations across industries. This thesis addresses the escalating challenges associated with potential compromises to data integrity and introduces a framework tailored for effective risk mitigation. Through an in-depth examination of currently available data, this research identifies vulnerabilities that expose systems to integrity threats, drawing insights from existing literature and pertinent case studies. The proposed framework synthesizes industry best practices, integrating key elements such as encryption, authentication, and continuous monitoring. Emphasizing a proactive stance, the framework aligns with established industry standards and regulatory guidelines to establish a robust defense against threats, both internal and external. Real-world failure cases are examined to validate the framework's effectiveness, or lack thereof, in diverse organizational settings. This research aims to make a meaningful contribution to the field of data integrity security by offering practical solutions to the complex challenge of data integrity risks in the pharmaceutical and biotechnology industries. These two industries share numerous similarities, one being the same rigorous data integrity requirements that must be met; the framework constructed within this document is therefore applicable to both. The insights derived from this study are intended to empower organizations, enabling them to fortify their digital and personnel infrastructure and maintain the resilience and trustworthiness of their data in the face of an ever-evolving threat landscape.
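    The continuous-monitoring and authentication elements described in this abstract can be illustrated with a short sketch. This is not code from the thesis; it is a minimal, hypothetical example of authenticated record digests (HMAC-SHA256) used to detect tampering, with key management deliberately out of scope.

```python
import hashlib
import hmac

def record_digest(record: bytes, key: bytes) -> str:
    """Authenticated digest (HMAC-SHA256) for a stored record."""
    return hmac.new(key, record, hashlib.sha256).hexdigest()

def verify_record(record: bytes, key: bytes, expected: str) -> bool:
    """Continuous-monitoring check: recompute and compare digests."""
    return hmac.compare_digest(record_digest(record, key), expected)

key = b"shared-secret"  # hypothetical; real key management is out of scope
original = b"batch=42;result=pass"
tag = record_digest(original, key)
assert verify_record(original, key, tag)                      # intact record passes
assert not verify_record(b"batch=42;result=FAIL", key, tag)   # tampering detected
```

    A periodic job recomputing such digests over stored records is one concrete form the framework's "continuous monitoring" element can take.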

    Committee: Christopher Kluse Ph.D. (Committee Chair); Mohammed Abouheaf Ph.D. (Committee Member) Subjects: Management; Technology
  • 3. Zakaria, Yusuf A Data-Driven Framework for the Implementation of Dynamic Automated Warehouse Systems

    Master of Science (MS), Ohio University, 2024, Industrial and Systems Engineering (Engineering and Technology)

    In response to escalating inventory costs, dynamic purchasing needs, and the demand for rapid operations in the retail sector, both the warehousing and retail industries have accelerated their pace of innovation. Among these advances, the development of automated warehousing and storage systems stands out. However, despite widespread adoption, a comprehensive framework for effectively implementing these systems remains lacking. Hence, this study proposes a systematic approach that provides a foundational blueprint for harnessing vital information from historical sales data in the deployment of intelligent warehouse systems, incorporating a wide array of Automated Storage and Retrieval Systems (AS/RS) technologies. Specifically, it employs unsupervised machine learning for time series clustering to analyze historical sales data, while adapting and modifying the Recency, Frequency, Monetary (RFM) model to optimize the prioritized management of stock-keeping units (SKUs) in periodic segments.
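    The RFM scoring this abstract adapts can be sketched briefly. This is an illustrative sketch of the classic Recency, Frequency, Monetary computation, not the thesis's modified model; the sales records and field names are invented.

```python
from datetime import date

# Hypothetical sales history: (sku, sale_date, revenue)
sales = [
    ("A", date(2024, 6, 1), 120.0),
    ("A", date(2024, 6, 20), 80.0),
    ("B", date(2024, 3, 5), 500.0),
]

def rfm(sales, today):
    """Aggregate per-SKU recency (days since last sale), frequency, monetary."""
    agg = {}
    for sku, d, rev in sales:
        last, f, m = agg.get(sku, (d, 0, 0.0))
        agg[sku] = (max(last, d), f + 1, m + rev)
    return {
        sku: {"recency_days": (today - last).days, "frequency": f, "monetary": m}
        for sku, (last, f, m) in agg.items()
    }

scores = rfm(sales, date(2024, 7, 1))
assert scores["A"] == {"recency_days": 11, "frequency": 2, "monetary": 200.0}
```

    Ranking SKUs on such scores (recomputed per period) is one way the prioritized, periodic segmentation described above could be driven.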

    Committee: Tao Yuan (Advisor); Omar Alhawari (Committee Member); Gary Weckman (Committee Member); Ashley Metcalf (Committee Member) Subjects: Engineering; Industrial Engineering; Management; Sustainability; Systems Design; Technology
  • 4. Vavruska, Mary Food Allergies In College Students: Knowledge, Symptom Management, and Response Times

    BS, Kent State University, 2024, College of Nursing

    Introduction: College students with food allergies face unique challenges regarding their knowledge of food allergies, allergy and symptom management, and the time it takes them to respond to an allergic reaction. There is a gap in the literature surrounding these topics that this study aims to fill. Methods: This cross-sectional study examined the knowledge, symptom management, and emergency response times (using a prescribed EpiPen) of college students, utilizing questionnaires designed specifically for the study and the FAQLQ-AF questionnaire. Analysis assessed the constructs of knowledge, symptom management, and response times in the chosen population. Demographic data included age and class rank. All personal information was de-identified to protect participants' privacy. Consent was obtained before participation in the survey. Procedure: 75 Kent State students ages 18-22 who fulfilled the inclusion criteria took part in the survey. Instructions for completing the survey, the consent form, and the survey itself were uploaded to Qualtrics. Analysis and Results: Data cleaning was completed by the honors student and Dr. Dowell in preparation for data analysis. Data analysis was completed using the Statistical Package for the Social Sciences (SPSS) with the help of campus statistician Kristin Yeager. Data analysis included descriptive statistics, means and standard deviations, as well as analysis of variance with regression models.

    Committee: Jo Dowell PhD, APRN-CNP, PNP, FNP-BC, CFNE (Advisor) Subjects: Health; Health Care; Health Sciences; Nursing
  • 5. Hsu, Yuan-Ming Topological Modeling Driven Data-Centric Techniques for PHM

    PhD, University of Cincinnati, 2024, Engineering and Applied Science: Mechanical Engineering

    Prognostics and Health Management (PHM) has evolved into a pivotal technique, integral to enhancing the efficiency and availability of industrial systems. The proliferation of real-time data, facilitated by advancements in sensor and communication technology, has led to innovative applications in various industries. However, this surge in data production poses new challenges for the PHM domain, necessitating solutions that encompass cloud technology and sophisticated modeling proficiency. Furthermore, the inherent variance and uncertainty in collected data exacerbate the complexity of PHM modeling. This study introduces a methodical approach to developing an efficient model for PHM, particularly when dealing with limited data. A sample generation and selection mechanism forms the core of this methodology, aiming to overcome the challenges posed by the substantial volume of data by ensuring the identification of relevant and usable samples that significantly enhance overall model performance. To tackle the high volume and dimensionality of the data, a topological model is crafted, providing a robust framework for effective data representation. This model proves invaluable in navigating the intricacies of large datasets, contributing to the creation of a more resilient PHM model. Simultaneously, distinct evaluation strategies are devised to address the unique challenges presented by different areas within the PHM domain. The proposed methodology is validated using several industrial cases, encompassing a simulated dataset of plastic injection molding as well as two public datasets from the semiconductor and additive manufacturing sectors. Comparing the proposed method to established ones demonstrates its effectiveness and advantages.
In essence, this research fills a critical gap in the PHM landscape by presenting a methodical and adaptive approach to modeling w (open full item for complete abstract)

    Committee: Jay Lee Ph.D. (Committee Chair); Thomas Richard Huston Ph.D. (Committee Member); David Siegel Ph.D. (Committee Member); Jing Shi Ph.D. (Committee Member); Manish Kumar Ph.D. (Committee Member) Subjects: Mechanical Engineering
  • 6. Kang, Donghe Physical Layout Optimization in Scientific and Analytic Data Management Systems

    Doctor of Philosophy, The Ohio State University, 2023, Computer Science and Engineering

    The performance of scientific and analytic applications faces challenges as data volume and complexity have dramatically increased in past decades. Data management systems develop many techniques to optimize data storage layouts, thus speeding up the I/O procedure in these applications. Data partitioning and data placement are two representative techniques. Prior research splits a large object, such as a scientific array or a table, into many hyper-rectangular partitions. However, complex workloads expose non-rectangular access patterns that do not match with rectangular partitions. Furthermore, prior studies optimize the partitioning layout in a single data store. Modern systems often have diverse storage infrastructures, requiring a smart data placement strategy. Both data partitioning and placement decide the storage layout but prior studies consider them independently. This dissertation, therefore, matches the partitioning layout with the non-rectangular access pattern and jointly tunes data partitioning and data placement to maximize the I/O performance. As partitions of an object are placed and processed in a few data stores, a query on such an object is split into fragments executed separately. A significant challenge for the split execution is to reconstruct final answers efficiently. Stitching partitions incurs a massive amount of memory operations, especially for split SQL executions because relational operators return data in random orders. Sorting is one of the most expensive operators in RDBMSs. A slow reconstruction procedure easily offsets the benefit of optimized layouts. This dissertation makes four contributions to optimize the physical storage layout in scientific and analytic data management systems. The first contribution is a human-interpretable model to predict the end-to-end query evaluation time. The model is the foundation for comparing candidate layouts in optimization algorithms. 
The second contribution is an algorithm that jointly (open full item for complete abstract)

    Committee: Spyros Blanas (Advisor); Suren Byna (Committee Member); John Paparrizos (Committee Member); Yang Wang (Committee Member) Subjects: Computer Engineering; Computer Science
  • 7. Robin, DEBOBROTO DAS LEVERAGING PISA SWITCHES FOR TRAFFIC-AWARE IN NETWORK LOAD BALANCING IN DATA CENTER NETWORKS

    PHD, Kent State University, 2023, College of Arts and Sciences / Department of Computer Science

    Over the last two decades, the exponential growth in data consumption, the applications that facilitate it, and the evolution of application architectures have given rise to high-entropy traffic patterns through data center networks. Load balancing plays a pivotal role in managing this complex traffic pattern at both the link layer and the application layer. Conventional load balancing methods, while effective to some extent, have faced challenges in responding to the dynamic nature of the traffic pattern. To tackle this challenge, the load balancing concept has transformed, with the emergence of traffic awareness as a critical factor in its implementation. It requires dynamic weight assignment and updates for the paths while load balancing, to adapt to high traffic variation. Recently emerging "Protocol Independent Switch Architecture" (PISA) based programmable switches have unleashed the scope of implementing custom load balancing algorithms inside the network fabric. These in-network load balancers have the potential to achieve scalable, high-throughput, and highly traffic-aware load balancing at data center scale. However, PISA switches are designed with a limited per-packet computational budget to achieve extremely high throughput in the 6-100 Tbps range. Data center switches need to implement a large set of complex protocols and features. Delegating load-balancing logic to these switches consumes a costly computational budget, making implementing other primary features infeasible. As a result, existing PISA switch-based in-network load balancers are primarily designed for either link or application layer load balancing to meet this budget constraint. Moreover, they cannot offer scalability for the same reason. This dissertation takes the next step to fill this critical gap and designs a weighted-cost-mechanism-based configurable load balancer (CLB) using PISA switches.
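    The dissertation's CLB is implemented on PISA hardware (typically in P4); as a rough, hypothetical illustration of weighted path selection via a precomputed table (the kind of fixed-size structure a PISA pipeline can look up with one hash per packet), consider this Python sketch. All names and the table size are invented.

```python
import zlib

def build_wcmp_table(paths, size=16):
    """Expand per-path weights into a fixed-size lookup table,
    mimicking the precomputed tables a control plane installs."""
    total = sum(w for _, w in paths)
    table, acc = [], 0.0
    for path, w in paths:
        acc += w
        while len(table) < round(size * acc / total):
            table.append(path)
    return table

def pick_path(table, flow_id: bytes):
    # Hashing the flow id keeps all packets of one flow on one path.
    return table[zlib.crc32(flow_id) % len(table)]

table = build_wcmp_table([("p1", 3), ("p2", 1)], size=8)
assert table.count("p1") == 6 and table.count("p2") == 2
```

    Traffic awareness then amounts to the control plane periodically rebuilding the table with updated weights, without touching the per-packet fast path.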
CLB can offer scalable load balancing for both the link and ap (open full item for complete abstract)

    Committee: Javed I. Khan (Advisor) Subjects: Communication; Computer Science
  • 8. Konaje, Akarsh Mohan Fleet Management for Energy Efficient Operations of Commercial Vehicles

    Master of Science, The Ohio State University, 2023, Electrical and Computer Engineering

    Over the past decade, there has been a growing movement towards reducing the carbon footprint, which involves striving for net-zero emissions and developing an infrastructure that can sustain them. Among the end-use sectors, transportation accounts for nearly a third of overall greenhouse gas (GHG) emissions, with commercial vehicles a major contributor, making it imperative for this industry to adopt emerging technologies that accommodate the expectations of a green and sustainable mobility vision. Battery electric and fuel cell vehicle technologies are suitable candidates to replace existing conventional fossil-fuel-powered vehicle architectures, but a complete transformation is not yet realizable due to various impediments. A gradual transition is vital to ease the transformation process, foster growth and acceptance among industry partners, and prepare for any unseen hurdles along the way. The work presented in this thesis focuses on designing and operating commercial vehicle fleets, introducing a novel fleet management system (FMS) framework capable of providing energy-efficient mobility solutions. The FMS uses a recommender system comprising a Design Space Filter (DSF) module to provide a feasible set of powertrains from the vehicle configuration database, and uses machine learning algorithms to estimate the energy consumption for a given drive cycle, ranking powertrains on the basis of their freight energy efficiency metric and ultimately aiding the fleet composition design process. Because of the highly confidential data pertaining to vehicle behavior, operators and OEMs are not keen on sharing this data unless agreements and secure data-sharing procedures are established. Aware of this data bottleneck, the FMS leverages a federated learning technique to estimate vehicular performance attributes and provides inferences that can be utilized for analyzing fleet behavior and enhancing fleet operations.
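    The core of the federated learning step mentioned here can be sketched with the standard FedAvg aggregation: each operator trains locally and shares only model parameters, which a coordinator averages weighted by local data size. This is a generic illustration under stated assumptions (flat parameter vectors, two hypothetical operators), not the thesis's implementation.

```python
# FedAvg sketch: operators share model parameters, never raw telemetry.
def fed_avg(client_weights, client_sizes):
    """Size-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two hypothetical fleet operators with local linear-model weights.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[100, 300])
assert global_w == [2.5, 3.5]
```

    The averaged model can then be redistributed to each operator for the next local training round, so vehicle-level data never leaves its owner.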
This is extended to learn the mobility dynamics of (open full item for complete abstract)

    Committee: Qadeer Ahmed (Advisor); Parinaz Naghizadeh (Committee Member); Manfredi Villani (Other) Subjects: Artificial Intelligence; Automotive Engineering; Computer Engineering; Electrical Engineering; Sustainability; Transportation
  • 9. Johnson, Raven-Seymone Referral Management: An Exploration of the Timeliness of the Referral Management Protocol within an Accountable Care Organization (ACO) between Primary Care and Specialty Care

    Doctor of Healthcare Administration (D.H.A.), Franklin University, 2022, Health Programs

    The Accountable Care Organization (ACO) was first created with the enactment of the Affordable Care Act (ACA) in 2010. An ACO is a group of physicians, hospitals, and other providers that voluntarily come together to coordinate a value-based care approach that delivers the best quality of care for patients, providing the right care at the right time. The appropriate means of communication between these various groups is referral management and processing. This study used an exploratory qualitative approach to understand the perceived barriers around components impacting the timeliness of referrals between primary care providers and specialists. In-depth interviews with 21 participants representing the departments of primary care, specialty care, and operations were conducted via Zoom or Microsoft Teams to gather their understanding of the efficiencies, barriers, and root-cause analysis of the referral process. The interviews were transcribed verbatim, coded, and analyzed for major themes. ATLAS.ti Cloud software was used for coding analysis of the collected data. The three major components discussed during the interviews were network management, operational excellence, and technology enablement. Six major themes and 16 sub-themes resulted from the interviews. Recommendations addressing the perceived barriers were included for future healthcare administrators operating ACOs.

    Committee: David Meckstroth (Committee Chair); Jesse Florang (Committee Member); Scott McDoniel (Committee Member) Subjects: Finance; Health; Health Care; Health Care Management
  • 10. Cai, Haoshu Modeling of High-Dimensional Industrial Data for Enhanced PHM using Time Series Based Integrated Fusion and Filtering Techniques

    PhD, University of Cincinnati, 2022, Engineering and Applied Science: Mechanical Engineering

    Prognostics and Health Management (PHM) has extended its frontiers to more pervasive applications for failure detection, process monitoring, and predictive maintenance in the increasingly complicated manufacturing environment. Meanwhile, as Internet of Things (IoT) technologies develop rapidly, research in PHM faces non-negligible challenges in several aspects. The advancement in the volume, velocity, and variety of manufacturing data demands improved analytics in PHM solutions. The mass of manufacturing data demands a more efficient selection strategy to exclude incorrect and useless information. Also, in the industrial environment, high-dimensional data is usually collected from various sensor recordings with changes and drifts, which constitute the fundamental properties of stream data. Advanced PHM techniques are required to be capable of capturing and tracking incoming information within the high-dimensional data continuously and adaptively. To deal with these challenges and research gaps, this research proposes a scalable methodology for discrete time series prediction based on industrial high-dimensional data. First, a reference-based fusion strategy is proposed and employed to combine valuable knowledge from the historical data, reduce data dimensionality, and exclude information that is not helpful for further analysis. Second, a state modeling strategy is designed to fuse both the reference data selected by the previous strategy and the past time series data. It also formulates an efficient and accurate function to depict the relationship between the predictor and the target. Finally, a Bayesian filter is designed to deal with the strong non-linearity, to propagate in high-dimensional space, and to learn new knowledge continuously from the stream data without losing the properties of the historical data.
Finally, three cases from different industrial environments are implemented to justify the feasibility, ef (open full item for complete abstract)
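    The Bayesian filtering idea in this abstract can be illustrated in its simplest linear, scalar form. The thesis describes a filter for strong non-linearity in high-dimensional space; the sketch below is only a one-dimensional Kalman filter stand-in showing the predict/update cycle, with invented noise parameters.

```python
def kalman_1d(zs, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: predict, then update with each
    measurement z; q is process noise, r is measurement noise."""
    x, p, out = x0, p0, []
    for z in zs:
        p += q                  # predict: uncertainty grows
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update toward the measurement
        p *= (1 - k)            # uncertainty shrinks after update
        out.append(x)
    return out

est = kalman_1d([1.0, 1.0, 1.0])
assert 0.8 < est[-1] <= 1.0     # estimate converges toward the signal
```

    The filter described in the thesis replaces this linear update with one able to propagate non-Gaussian beliefs through high-dimensional, drifting stream data.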

    Committee: Jay Lee Ph.D. (Committee Member); David Siegel Ph.D. (Committee Member); Jing Shi Ph.D. (Committee Member); Jay Kim Ph.D. (Committee Member) Subjects: Mechanical Engineering
  • 11. Giraldo Gaviria, Sebastian Context-Aware Design Framework: Using a bottom-up approach for breaking down and contextualizing design problems.

    MFA, Kent State University, 2021, College of Communication and Information / School of Visual Communication Design

    When approaching a design problem, design practitioners need to analyze the situation and identify the many variables at play (context) to frame the problem and propose an informed solution. This thesis focuses on creating and implementing a modeled approach to understanding context by identifying and coding data gathered by the practitioners through data tagging to identify patterns and themes coming from multiple inputs. The primary research highlighted critical findings regarding the tagging process and the effects of a broad/narrow tagging approach and generated concepts like the "information density scale" and "tag quality." Furthermore, by applying carefully crafted research on alternative visual platforms, the research refined the types of tools that should be developed and provided information on the best practices to code and aggregate the data. In the end, this thesis created a prototype for a computer application that operates as a context-aware framework, based on solid conceptual notions that could help understand the context more systematically. This prototype provides an innovative way to understand contextual information to identify data patterns reliably and consistently; furthermore, it unlocks new ways to experience and understand the information and consequently how to use it to improve the design process.

    Committee: Jessica Barness (Advisor); Gretchen Rinnert (Committee Member); Ken Visocky O’Grady (Committee Member) Subjects: Design
  • 12. Yang, Shaojie A Data Augmentation Methodology for Class-imbalanced Image Processing in Prognostic and Health Management

    MS, University of Cincinnati, 2020, Engineering and Applied Science: Mechanical Engineering

    Machine vision is commonly used in the field of prognostics and health management (PHM) for industrial applications, including defect reduction, robot assistance, quality inspection, safe work environments, character reading, and packing inspection. It provides degradation information on parts that are too small to be seen by the human eye, and can also help classify the root cause of a fault without damaging parts or installing additional sensors. Many deep learning-based models have been studied for machine vision-based applications, but few studies focus on imbalanced data issues, and most studies are based on data that has a good amount of diversity. Modern deep learning-based models require class-balanced data to avoid overfitting. In many industrial applications, it is difficult to collect class-balanced data to train deep learning models. For that reason, choosing an appropriate method to significantly increase the diversity of data available for industrial applications is critical for training modern deep learning-based models. Therefore, this research focuses on developing a data augmentation methodology for deep learning-based fault diagnosis modeling, to improve the quantity and diversity of class-imbalanced data without actually collecting new data. In this thesis, a cross-class data augmentation approach using convolutional autoencoder latent space interpolation is proposed for industrial image processing applications, to overcome the presence of class-imbalanced datasets. During the training stage, a latent space augmentation model is incorporated into the traditional geometric-transformation image augmentation approach. New samples are synthesized by interpolating features extracted by convolutional autoencoders. The developed approach has been validated using degradation assessment of cutting wheels and wafer map failure pattern recognition.
The proposed method is benchmarked with geometric transformations and generative adversarial networks (GAN) usi (open full item for complete abstract)
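    The latent space interpolation step described in this abstract reduces to blending encoded vectors. The sketch below shows only that blending, on invented two-dimensional latent codes; in the thesis, a trained convolutional autoencoder's encoder produces the codes and its decoder maps the blends back to image space.

```python
def interpolate_latents(z_a, z_b, alphas=(0.25, 0.5, 0.75)):
    """Synthesize new latent codes as convex blends of two encoded
    samples; a decoder would map each blend back to an image."""
    return [
        [(1 - a) * x + a * y for x, y in zip(z_a, z_b)]
        for a in alphas
    ]

# Hypothetical latent codes from a trained convolutional autoencoder.
new_samples = interpolate_latents([0.0, 2.0], [4.0, 6.0])
assert new_samples[1] == [2.0, 4.0]   # the midpoint blend
```

    Interpolating between codes of a minority class (or across classes, as the thesis proposes) yields synthetic training samples without collecting new data.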

    Committee: Jay Lee Ph.D. (Committee Chair); Jay Kim Ph.D. (Committee Member); Manish Kumar Ph.D. (Committee Member) Subjects: Mechanical Engineering
  • 13. MoosaviNejadDaryakenari, SeyedSobhan Telematics and Contextual Data Analysis and Driving Risk Prediction

    Doctor of Philosophy, The Ohio State University, 2020, Computer Science and Engineering

    Analysis of telematics data collected from drivers in real time, along with contextual data (such as traffic and weather data), provides valuable insights regarding an individual's driving behavior, common driving habits, and characteristics of a road network. The primary focus of this dissertation is on predicting driving risk, with the risk being a combination of the risk of the driver, the risk in a route, and the risk caused by driving conditions. We propose several data-analytic techniques for driving risk prediction and related causes, and apply them to different sources of telematics and contextual data to extract useful insights. We seek to answer the following research question: How can telematics data and their context be modeled to make a fair and sound prediction about driving risk? In answering this question, we take two concerns into account: 1) scoring of driving risk is not a context-agnostic process, and 2) driving risk is not independent of the personality of drivers and their driving skills. We propose a solution that consists of three parts: a) characterizing driving context, b) characterizing driving style, and c) context-aware driving risk prediction. The first two parts derive useful insights that we leverage to design the third part. Characterizing driving context is about exploring properties of different contexts. We propose two solutions for this task. The first solution, which we term segmentation and causality analysis, derives the characteristics of contexts from the aggregate behavior of drivers. We start by segmenting trajectories to identify meaningful driving patterns (e.g., a hard brake). Then we analyze each pattern with respect to contextual data to identify cause-and-effect patterns (e.g., traffic signal --> hard-braking event). The second solution is a geo-spatiotemporal pattern discovery framework based on contextual data. We propose a new framework that explores two types of patterns, propagation and influential.
P (open full item for complete abstract)
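    The trajectory segmentation step mentioned above, identifying patterns such as a hard brake, can be illustrated with a simple threshold rule on a speed series. This is a hypothetical sketch, not the dissertation's segmentation algorithm; the sampling rate and threshold are invented.

```python
def hard_brake_events(speeds_kmh, dt_s=1.0, threshold_ms2=-3.0):
    """Flag sample indices where deceleration exceeds the threshold."""
    events = []
    for i in range(1, len(speeds_kmh)):
        accel = (speeds_kmh[i] - speeds_kmh[i - 1]) / 3.6 / dt_s  # m/s^2
        if accel <= threshold_ms2:
            events.append(i)
    return events

# Dropping 50 -> 30 km/h in one second (about -5.6 m/s^2) is a hard brake.
assert hard_brake_events([50, 50, 30, 30]) == [2]
```

    Joining each flagged event with contextual data at its timestamp and location is then what enables the cause-and-effect analysis (e.g., traffic signal --> hard-braking event).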

    Committee: Rajiv Ramnath (Advisor); Srinivasan Parthasarathy (Committee Member); Wei-Lun (Harry) Chao (Committee Member) Subjects: Computer Engineering; Computer Science
  • 14. Kadariya, Dipesh kBot: Knowledge-Enabled Personalized Chatbot for Self-Management of Asthma in Pediatric Population

    Master of Science (MS), Wright State University, 2019, Computer Science

    Asthma, a chronic pulmonary disease, is one of the major health issues in the United States. Given its chronic nature, the demand for continuous monitoring of patients' adherence to their medication care plan, assessment of their environmental triggers, and management of their asthma control level can be challenging in traditional clinical settings and taxing on clinical professionals. A shift from reactive to proactive asthma care can improve health outcomes and reduce expenses. On the technology spectrum, smart conversational systems and the Internet of Things (IoT) are rapidly gaining popularity in the healthcare industry. By leveraging such technological prevalence, it is feasible to design a system that is capable of monitoring asthmatic patients for a prolonged period and empowering them to manage their health better. In this thesis, we describe kBot, a knowledge-driven personalized chatbot system designed to continuously track the medication adherence of pediatric asthma patients (ages 8 to 15) and monitor relevant health and environmental data. The goal is to help asthma patients self-manage their asthma progression by generating trigger alerts and to educate them with various self-management strategies. kBot takes the form of an Android application with a frontend chat interface capable of both text- and voice-based conversation, and a backend cloud-based server application that handles data collection, processing, and dialogue management. The domain knowledge component is pieced together from the Asthma and Allergy Foundation of America, Mayo Clinic, and Verywell Health, as well as our clinical collaborator. The personalization aspect is derived from the patient's asthma history, collected from questionnaires and day-to-day conversations. The system has been evaluated by eight asthma clinicians and eight computer science researchers for chatbot quality, technology acceptance, and system usability.
kBOT achieved an overall technology acceptance (open full item for complete abstract)

    Committee: Amit Sheth Ph.D. (Advisor); Krishnaprasad Thirunarayan Ph.D. (Committee Member); Valerie Shalin Ph.D. (Committee Member); Maninder Kalra M.D., Ph.D. (Committee Member) Subjects: Computer Science; Health Care Management; Information Technology
  • 15. AL-Dohuki, Shamal INTERACTIVE VISUAL QUERYING AND ANALYSIS FOR URBAN TRAJECTORY DATA

    PHD, Kent State University, 2019, College of Arts and Sciences / Department of Computer Science

    Advanced sensing technologies and computing infrastructures have produced a variety of trajectory data of moving objects in urban spaces. One type of such data is taxi trajectory data. It records real-time moving paths, sampled as a series of positions associated with vehicle attributes, over urban road networks. Such data is big, spatial, temporal, and unstructured, and it contains abundant knowledge about a city and its citizens. Exploratory visualization systems are needed to study taxi trajectories with efficient user interaction and instant visual feedback. The extracted information can be utilized in many important and practical applications to optimize urban planning and to improve quality of life and the environment. As its primary contribution, this thesis presents a set of visual analytics solutions with different approaches to interacting with massive taxi trajectory data, allowing analysts to look at the data from different perspectives and complete different analytical tasks. Our approaches focus on how people directly interact with the data store, query, and visualize the results, and they support practitioners, researchers, and decision-makers in advancing transportation and urban studies in the new era of the smart city. First, we present SemanticTraj, a new method for managing and visualizing taxi trajectory data in an intuitive, semantically rich, and efficient manner. In particular, taxi trajectories are converted into taxi documents through a textualization transformation process. This process maps global positioning system (GPS) points into a series of street/POI names and pickup/drop-off locations. It also converts vehicle speeds into user-defined descriptive terms. Then, a corpus of taxi documents is formed and indexed to enable flexible semantic queries over a text search engine.
Second, we present a visual analytics system, named QuteVis, which allows domain users to query and examine traffic patterns from large-scale traffic data in an urban transpor (open full item for complete abstract)

    Committee: Ye Zhao (Committee Chair); Cheng-Chang Lu (Committee Member); Xiang Lian (Committee Member); Xinyue Ye (Committee Member); Xiaoling Pu (Committee Member) Subjects: Computer Science
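    The textualization idea described in this abstract can be illustrated with a short sketch: GPS samples are mapped to street names and numeric speeds to descriptive terms, yielding one searchable text document per trajectory. The speed buckets and the street lookup below are illustrative assumptions, not the thesis's actual pipeline; a real system would use map matching against a road network.

```python
# Sketch of trajectory "textualization": GPS points become street names,
# speeds become user-defined descriptive terms, and the result is a text
# document that a search engine can index.

# Hypothetical speed buckets (km/h upper bound -> descriptive term).
SPEED_TERMS = [(10, "crawling"), (30, "slow"), (60, "moderate"), (float("inf"), "fast")]

def speed_term(kmh):
    """Map a numeric speed (km/h) to a descriptive term."""
    for limit, term in SPEED_TERMS:
        if kmh < limit:
            return term

def textualize(trajectory, street_lookup):
    """Convert a list of (lat, lon, speed_kmh) samples into a 'taxi document'."""
    tokens = []
    for lat, lon, speed in trajectory:
        street = street_lookup((lat, lon))  # stand-in for real map matching
        tokens.append(f"{street} {speed_term(speed)}")
    return " ".join(tokens)

# Toy usage with a fake lookup that always returns the same street name.
doc = textualize([(41.1, -81.3, 25.0), (41.2, -81.3, 70.0)], lambda p: "Main_St")
```

    The resulting documents can then be indexed by any full-text engine, so a query like "Main_St slow" retrieves trajectories that crawled along that street.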
  • 16. Heaton, Tyler Cloud Based IP Data Management Theory and Implementation for a Secure and Trusted Design Space

    Master of Science, The Ohio State University, 2019, Electrical and Computer Engineering

    IP data management in cloud computing presents challenges of scalability and integration. With extended IP lifecycles and considerably large design files, IP provenance has become a priority driving the need for a cloud design environment; research has gone into selecting a data management tool that promotes IP reuse and third-party integration. This thesis presents work on the optimization of IP (intellectual property) data management in an analog design environment. A range of IP data management platforms were researched, vetted, and implemented in cloud infrastructures. The features of the tools were compared and analyzed for performance in a cloud setting. Moreover, the server-client architecture for these IP data management tools has been defined to enable the success of analog and digital design in a cloud computing environment. In order to promote IP provenance and tracking, data must exist in a trusted design space that monitors user and data traffic, captures changes made to IP, and allocates permissions across design environments.

    Committee: Waleed Khalil (Advisor); Steve Bibyk (Committee Member); Shane Smith (Advisor) Subjects: Electrical Engineering; Engineering
  • 17. Gao, Ce Use of Data Analytics and Machine Learning to Improve Culverts Asset Management Systems

    PhD, University of Cincinnati, 2019, Engineering and Applied Science: Civil Engineering

    Culverts play an important role in the road drainage system. They are crucial to traffic safety because culvert failures are normally followed by pavement failures, which directly put the traveling public in jeopardy. However, managing culverts is a challenging task for transportation agencies. This difficulty is mainly associated with the culvert inspection system, for several reasons. First, unlike other transportation assets such as bridges or pavement, culverts are often installed beneath the ground, which significantly limits their accessibility for on-site condition inspections. Furthermore, culvert inspections can only be performed by trained inspectors because they involve a combination of skills and experience, and they require an intimate knowledge of material types along with an understanding of the design and installation criteria for culverts. In addition, compared to other transportation assets, culverts are often given lower priority for funding and resource allocation in the overall asset management system. Moreover, the large number of culverts in use makes the whole inspection process very time consuming and labor intensive. In Ohio alone, there are over 90,000 culverts installed under interstate and state highways. Finally, the criteria used by most state Departments of Transportation (DOTs) to schedule culvert inspections are based on only one or a few factors, and the resulting scheduling systems are often rigid, treating all culverts the same way. Because culvert maintenance and repair depend heavily on inspection results, and inspections are performed only when scheduled, a better inspection scheduling system is essential to improving the culvert management system (CMS). 
Using Machine Learning (ML) and data analytics techniques, this dissertation developed culvert condition prediction models to provide a more selective cul (open full item for complete abstract)

    Committee: Hazem Elzarka Ph.D. (Committee Chair); Raj Bhatnagar Ph.D. (Committee Member); Janet Jiaxiang Dong Ph.D. (Committee Member); Jiaqi Ma Ph.D. (Committee Member); Julian Wang Ph.D. (Committee Member) Subjects: Civil Engineering
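    The core idea of condition prediction for inspection scheduling can be sketched in a few lines: learn from past inspections to estimate each culvert's current condition, then inspect the predicted-worst first. The feature names and the simple 1-nearest-neighbor model below are illustrative assumptions; the dissertation's actual models and data are not reproduced here.

```python
# Minimal sketch: predict culvert condition from historical attribute/rating
# pairs, then rank culverts so the predicted-worst are inspected first.
import math

# Hypothetical records: (age_years, cover_depth_m, traffic_index) -> rating
# (1 = good .. 5 = poor).
HISTORY = [
    ((5.0, 1.5, 0.2), 1),
    ((20.0, 1.0, 0.6), 3),
    ((45.0, 0.5, 0.9), 5),
]

def predict_condition(features, history=HISTORY):
    """Predict a condition rating via 1-nearest-neighbor over past inspections."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, rating = min(((dist(features, f), r) for f, r in history),
                    key=lambda t: t[0])
    return rating

def inspection_priority(culverts):
    """Order culverts worst-predicted-first for inspection scheduling."""
    return sorted(culverts, key=predict_condition, reverse=True)
```

    Replacing the nearest-neighbor rule with a trained classifier or survival model is straightforward; the scheduling logic that consumes the predictions stays the same.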
  • 18. Stout, Blaine Big and Small Data for Value Creation and Delivery: Case for Manufacturing Firms

    Doctor of Philosophy, University of Toledo, 2018, Manufacturing and Technology Management

    Today's small- and mid-market manufacturers face increasing competitive pressure to capture, integrate, operationalize, and manage diverse sources of digitized data. Many have made significant investments in data technologies with the objective of improving organizational performance, yet not all have realized demonstrable benefits that create organizational value. One simple question arises: do business analytics make a difference to company performance in today's information-intensive environment? The purpose of this research is to explore this question through the lens of the data-centric pressures placed on management that drive investment in data technologies: how these drivers influence management to adopt a digitized organization mindset, affecting data practices, shaping key processes and strategies, and leading to capability growth that affects performance and culture. The terms `Big Data' and `Small Data' are two of the most prolifically used phrases in today's world when discussing business analytics and the value data provides to organizational performance. Big Data, strategic to organizational decision-making, and Small Data, operational, are captured from a host of internal and external sources. Studying how business analytics are leveraged into organizational value benefits academic and practitioner audiences alike. The research on `Big and Small Data, and business analytics' is both varied and deep, originating from a host of academic and non-academic sources; however, few empirical studies closely examine the phenomenon as experienced in the manufacturing environment. 
Exploring the pressures managers face in adopting data-centric management beliefs and applied practices, and understanding the key value-creating process and strategy mechanisms at work in the organization, provides generalizable insights that contribute to the pool of knowledge on the importance of data-technology investments for organizational cul (open full item for complete abstract)

    Committee: Paul Hong (Committee Chair); Thomas Sharkey (Committee Member); Wallace Steven (Committee Member); Cheng An Chung (Committee Member) Subjects: Information Systems; Information Technology; Management; Organization Theory; Organizational Behavior
  • 19. Jia, Xiaodong Data Suitability Assessment and Enhancement for Machine Prognostics and Health Management Using Maximum Mean Discrepancy

    PhD, University of Cincinnati, 2018, Engineering and Applied Science: Mechanical Engineering

    As more and more data become available for machine prognostic analysis in the big data environment, effective data suitability assessment methods are highly desirable to help locate data of sufficient quality for analysis. Driven by this purpose, this dissertation proposes a novel and systematic methodology for data suitability assessment and enhancement based on the needs of prognostics and health management (PHM). In this study, data suitability for PHM is assessed in terms of detectability, diagnosability, and trendability, which correspond to the three major tasks of PHM: fault detection, fault diagnosis, and fault prognosis. The proposed methodology is mainly built upon recent research on maximum mean discrepancy in the field of machine learning, which includes a family of test statistics used to test the difference between two data distributions. After data suitability is calibrated, a systematic methodology for data suitability enhancement is proposed to identify the important features for fault detection, diagnosis, and prognosis. Data suitability in the present study is enhanced by eliminating features irrelevant to specific PHM tasks. The effectiveness of the proposed methodology is demonstrated using five industrial applications: fault detection in semiconductor manufacturing, boring tool degradation assessment and life prediction, sensorless drive diagnosis, aero-engine remaining useful life prediction, and ball screw fault diagnosis and prognostics. The results of these case studies indicate that the proposed methodology can be a promising tool for evaluating whether the data under study, or the extracted feature set, is suitable for PHM development. It can also effectively identify the subset of features useful for fault detection, diagnosis, and prognosis, respectively.

    Committee: Jay Lee Ph.D. (Committee Chair); Kelly Cohen Ph.D. (Committee Member); Jay Kim Ph.D. (Committee Member); David Thompson Ph.D. (Committee Member) Subjects: Mechanical Engineering
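    The maximum mean discrepancy statistics this abstract builds on admit a very compact sketch: embed both samples with a kernel and compare their mean embeddings. Below is a minimal biased estimate of squared MMD for one-dimensional samples with a Gaussian RBF kernel; the fixed bandwidth is an assumption (practical work often uses the median heuristic), and this is not the dissertation's specific formulation.

```python
# Biased estimate of squared maximum mean discrepancy (MMD^2) between two
# samples, using a Gaussian RBF kernel. A value near zero suggests the two
# samples come from the same distribution; larger values suggest a difference.
import math

def rbf(x, y, sigma=1.0):
    """Gaussian RBF kernel between two scalars."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    """Biased MMD^2 estimate: mean kernel within xs + within ys - 2 * across."""
    k = lambda a, b: rbf(a, b, sigma)
    xx = sum(k(a, b) for a in xs for b in xs) / (len(xs) ** 2)
    yy = sum(k(a, b) for a in ys for b in ys) / (len(ys) ** 2)
    xy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return xx + yy - 2 * xy
```

    In a suitability-assessment setting, such a statistic could, for example, compare healthy-condition and degraded-condition feature distributions: features with larger discrepancy are more useful for detection and diagnosis.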
  • 20. Sridharan, Vaikunth Sensor Data Streams Correlation Platform for Asthma Management

    Master of Science (MS), Wright State University, 2018, Computer Science

    Asthma is a high-burden chronic inflammatory disease whose prevalence in children is twice that in adults. Its management can be improved by continuously monitoring patients and their environment using Internet of Things (IoT) based devices. The sensor data streams so obtained are essential for understanding the multiple factors that trigger asthma symptoms. In order to support physicians in exploring causal associations and finding actionable insights, a visualization system with a scalable cloud infrastructure that can process multimodal sensor data and Patient Generated Health Data (PGHD) is necessary. In this thesis, we describe a cloud-based asthma management and visualization platform that integrates personalized PGHD from the kHealth kit and outdoor environmental observations from web services. When applied to data from an individual, the tool assists in analyzing and explaining symptoms using "personalized" causes, monitoring disease progression, and improving asthma management. The front-end visualization was built with the Bootstrap Framework and Highcharts. Google's Firebase and the Elasticsearch engine were used as back-end storage to aggregate data from various sources. Further, Node.js and the Express Framework were used to develop several Representational State Transfer (REST) services for the visualization.

    Committee: Amit Sheth Ph.D. (Advisor); Krishnaprasad Thirunarayanan Ph.D. (Committee Member); Maninder Kalra Ph.D. (Committee Member); Valerie Shalin Ph.D. (Committee Member) Subjects: Computer Science; Health
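    The kind of association such a platform surfaces can be sketched in a few lines: align a patient-generated symptom stream with an outdoor environmental stream by timestamp, then compute a correlation. The field names below (`ts`, `score`, `pollen`) are illustrative assumptions, not the kHealth schema, and correlation is of course only a starting point for exploring causal hypotheses.

```python
# Toy sketch: align two sensor streams on shared timestamps, then compute
# the Pearson correlation between symptom severity and a pollen reading.

def align(symptoms, environment):
    """Pair symptom and environment readings that share a timestamp key."""
    env = {e["ts"]: e["pollen"] for e in environment}
    return [(s["score"], env[s["ts"]]) for s in symptoms if s["ts"] in env]

def pearson(pairs):
    """Pearson correlation coefficient of aligned (symptom, pollen) pairs."""
    n = len(pairs)
    xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

    In the described architecture, the alignment would happen server-side (e.g. in a REST service over the aggregated stores) and the correlation would feed the front-end charts.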