Search Results

(Total results 83)


  • 1. Soukieh, Tarek How Can Business Analytics Induce Creativity: The Performance Effects of User Interaction with Business Analytics

    Doctor of Business Administration, Cleveland State University, 2016, Monte Ahuja College of Business

    Most organizations today use business analytics systems mainly for efficiency: reducing cost by contacting the right customers, generating revenue by reducing churn, and so on. Nevertheless, business analytics holds promise for generating insights and making users more creative in their decision-making process. Analytics technology is becoming sophisticated, with very advanced technical capabilities. However, the behavioral aspects (i.e., user interaction) of using business analytics software have not reached the same level of sophistication. Very little research in this field discusses how to implement analytical systems and what outcomes they will produce. We look at conditions that can enhance user interaction with business analytics systems, leading to certain performance outcomes. We propose that the fit between users' cognitive style (intuitive vs. rational), business analytics model representations (decision tree vs. clustering), and task type (convergent vs. divergent) can lead to efficiency but can have adverse effects on creativity, because such fit might lead to mindlessness in the decision-making process.

    Committee: Raymond Henry PhD (Committee Chair); Radha Appan PhD (Committee Member); Amit Ghosh PhD (Committee Member); Robert Whitbred PhD (Committee Member) Subjects: Business Administration; Computer Science; Information Systems; Information Technology; Management; Statistics; Systems Design; Systems Science
  • 2. Holovchenko, Anastasiia Development and evaluation of an interactive e-module on Central Limit Theorem

    Honors Theses, Ohio Dominican University, 2023, Honors Theses

    This paper describes the process of developing and evaluating an open educational resource (OER) e-module on the Central Limit Theorem, written for an Introductory Statistics college-level course. The purpose of this project is two-fold. First, the e-module bridges the knowledge gap between introductory topics and Hypothesis Testing – one of the most challenging concepts in Statistics. Second, the project focuses on developing tools that allow instructors to analyze the effectiveness of the module and reveal students' patterns of interaction with the platform. The overall goal of the project is to improve the quality of open educational resources, provide students and instructors with additional study materials in response to the rising costs of textbooks and higher education, and provide more data for further research on student behavior while interacting with e-textbooks. The interactive e-module was developed using the LaTeX markup language and the Overleaf editor, uploaded to the XIMERA platform, and tested on two sections of MTH 140, a college-level Statistics course. Once the experiment had been performed and the data collected, the results were analyzed using the Python programming language. As a result of the study, some tools for the analysis of user data have been developed, and an OER has been created.

    Committee: Anna Davis (Advisor); John Marazita (Committee Chair); Kristall Day (Committee Member); Lawrence Masek (Committee Member) Subjects: Computer Science; Education; Mathematics; Psychology; Statistics
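
    Entry 2's analysis of XIMERA interaction data was done in Python. A minimal pandas sketch of that kind of analysis, with a hypothetical log schema (student_id, page, event, timestamp), not the study's actual data:

    ```python
    import pandas as pd

    # Toy event log standing in for XIMERA interaction data; the schema
    # (student_id, page, event, timestamp) is an assumption, not the study's.
    logs = pd.DataFrame({
        "student_id": [1, 1, 1, 2, 2],
        "page":       ["clt-intro", "clt-intro", "clt-quiz", "clt-intro", "clt-quiz"],
        "event":      ["view", "answer", "answer", "view", "answer"],
        "timestamp":  pd.to_datetime(["2023-02-01 10:00", "2023-02-01 10:05",
                                      "2023-02-01 10:20", "2023-02-02 09:00",
                                      "2023-02-02 09:12"]),
    })

    # Engagement proxy: events per student per page.
    engagement = logs.groupby(["student_id", "page"]).size().rename("n_events").reset_index()

    # Time-on-module proxy: span between a student's first and last event.
    span = logs.groupby("student_id")["timestamp"].agg(["min", "max"])
    span["active_span"] = span["max"] - span["min"]

    print(engagement)
    print(span["active_span"])
    ```
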
  • 3. Moyer, Adam Self-Evolving Data Collection Through Analytics and Business Intelligence to Predict the Price of Cryptocurrency

    Doctor of Philosophy (PhD), Ohio University, 2020, Mechanical and Systems Engineering (Engineering and Technology)

    The development of the self-evolving data collection through analytics and business intelligence (SEDCABI) research engine, along with a plug-in prediction module (PPM), is demonstrated for the prediction of cryptocurrency prices (specifically, Bitcoin). Leveraging all available data proves to increase the accuracy of the prediction compared to using structured data alone or unstructured data alone.

    Committee: Gary Weckman (Advisor) Subjects: Information Science; Information Systems
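
    A hedged sketch of the general idea in entry 3: combine structured market features with features derived from unstructured text and compare predictive accuracy. The features, model choice, and synthetic data are illustrative assumptions; the SEDCABI/PPM internals are not reproduced here:

    ```python
    # Compare a model trained on structured features alone with one trained on
    # structured + text features. All data below is synthetic.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    n = 500
    structured = rng.normal(size=(n, 3))   # e.g., volume, volatility, lagged price
    news = (["bitcoin rally optimism"] * (n // 2)
            + ["regulation fears selloff"] * (n - n // 2))
    price = structured @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=n)

    text_feats = TfidfVectorizer(max_features=50).fit_transform(news).toarray()
    X_all = np.hstack([structured, text_feats])

    for name, X in [("structured only", structured), ("structured + text", X_all)]:
        X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)
        model = GradientBoostingRegressor().fit(X_tr, y_tr)
        print(name, mean_absolute_error(y_te, model.predict(X_te)))
    ```
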
  • 4. Grant, Navneet FACTORS INFLUENCING WILLINGNESS TO ADOPT ADVANCED ANALYTICS IN SMALL BUSINESSES

    Doctor of Business Administration, Cleveland State University, 2020, Monte Ahuja College of Business

    Business analytics (BA) continues to be one of the top technology trends of recent years, as well as one of the top priorities for CIOs in many large enterprises. Business analytics tools can significantly help small businesses quickly respond to changing market conditions and improve their organizational performance. However, prior studies report that the adoption rate of business analytics in small businesses is extremely low: only 32 percent of small businesses have adopted Business Intelligence (BI) and analytics solutions to date (SMB Group, 2018). As small businesses constitute a major force in the US economy, a slow rate of adoption of significant technological innovations, such as BA, may be a critical concern that can affect the economy in the long run. Despite this, the extant small business literature, as well as the information systems literature, fails to provide an understanding of why small businesses are not receptive to current BA trends. Therefore, drawing upon the theoretical underpinnings of organizing vision theory, the strategic orientation literature, and upper echelons theory, this study investigates the willingness of small businesses to adopt newer innovations in BA. More specifically, this study investigates the impact of owner-managers' reception of the organizing vision of BA, the learning orientation of small businesses, the analytics orientation of small businesses, and the personal characteristics of owner-managers on small businesses' willingness to adopt BA. Drawing its motivation from the prior strategic orientation and BA literature, this study is also among the first to propose, formally develop, and validate a measurement construct of analytics orientation.

    Committee: Radha Appan Dr. (Committee Chair); Raymond Henry Dr. (Committee Member); Sreedhar Madhavaram Dr. (Committee Member); Chieh-Chen Bowen Dr. (Committee Member) Subjects: Information Systems
  • 5. Akula, Venkata Ganesh Ashish Implementation of Advanced Analytics on Customer Satisfaction Process in Comparison to Traditional Data Analytics

    Master of Science in Engineering, University of Akron, 2019, Mechanical Engineering

    One of the major challenges in survey data analysis is determining which methodology or technique best suits the data. The constant rise in the data obtained over the years calls for effective data analysis techniques, as ineffective data analysis can lead to false recommendations and lower customer satisfaction. Therefore, the main focus of this research is to test a variety of advanced data analysis methods and determine how to improve the insights obtained through survey data for sustainable continuous improvement. The data used in this research were obtained from the AJI-2 technical training department of the Federal Aviation Administration in the form of end-of-course and post-course evaluations. In contrast to traditional survey analytical methods such as summary statistics, we systematically tested and compared the use of advanced analytics on the survey data. The Average Weighted Score, which is widely used in survey data analysis, is able to differentiate the degree of surveyees' satisfaction with the survey questions and consequently provides more insightful information on the course evaluations and customer satisfaction. Among the advanced analytics, Correlation Analysis is used to understand the correlation among responses to the overall satisfaction question; Contingency Analysis is conducted to analyze the responses surveyees chose in comparison to their overall satisfaction; Logistic Regression is used to model the association of the categorical outcome of overall satisfaction with independent variables; and Cluster Analysis is conducted to form clusters based on responses that share common characteristics, so that each cluster can have a unique continuous-improvement strategy for improving customer satisfaction. The insightful findings obtained from these advanced analytics were helpful in understanding the data patte (open full item for complete abstract)

    Committee: Shengyong Wang PhD (Advisor); Chen Ling PhD (Committee Member) Subjects: Business Administration; Mechanical Engineering
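
    A minimal sketch of two of the techniques named in entry 5: an Average Weighted Score for Likert-style items and a logistic regression of overall satisfaction on item responses. The weight scheme, column names, and data are illustrative assumptions:

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    # Synthetic 1-5 Likert responses to four hypothetical course-evaluation items.
    df = pd.DataFrame(rng.integers(1, 6, size=(200, 4)),
                      columns=["content", "instructor", "materials", "pace"])
    # Binary overall-satisfaction outcome (1 = satisfied), loosely tied to items.
    df["satisfied"] = (df.mean(axis=1) + rng.normal(scale=0.5, size=200) > 3).astype(int)

    # Average Weighted Score: map the 1-5 scale to weights and average per item.
    weights = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.75, 5: 1.0}
    items = ["content", "instructor", "materials", "pace"]
    aws = df[items].replace(weights).mean()
    print("Average Weighted Score per item:\n", aws)

    # Logistic regression: association of item responses with overall satisfaction.
    model = LogisticRegression().fit(df[items], df["satisfied"])
    print("coefficients:", dict(zip(items, model.coef_[0].round(2))))
    ```
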
  • 6. AL-Dohuki, Shamal INTERACTIVE VISUAL QUERYING AND ANALYSIS FOR URBAN TRAJECTORY DATA

    PHD, Kent State University, 2019, College of Arts and Sciences / Department of Computer Science

    Advanced sensing technologies and computing infrastructures have produced a variety of trajectory data of moving objects in urban spaces. One type of such data is taxi trajectory data. It records real-time moving paths sampled as a series of positions associated with vehicle attributes over urban road networks. Such data is big, spatial, temporal, and unstructured, and it contains abundant knowledge about a city and its citizens. Exploratory visualization systems are needed to study taxi trajectories with efficient user interaction and instant visual feedback. The extracted information can be utilized in many important and practical applications to optimize urban planning and improve quality of life and the environment. As its primary novel contribution, this thesis presents a set of visual analytics solutions with different approaches to interacting with massive taxi trajectory data, allowing analysts to look at the data from different perspectives and complete different analytical tasks. Our approaches focus on how people directly interact with the data store, query, and visualize the results, and they support practitioners, researchers, and decision-makers in advancing transportation and urban studies in the new era of the smart city. First, we present SemanticTraj, a new method for managing and visualizing taxi trajectory data in an intuitive, semantically rich, and efficient manner. In particular, taxi trajectories are converted into taxi documents through a textualization transformation process. This process maps global positioning system (GPS) points into a series of street/POI names and pickup/drop-off locations. It also converts vehicle speeds into user-defined descriptive terms. Then, a corpus of taxi documents is formed and indexed to enable flexible semantic queries over a text search engine. Second, we present a visual analytics system, named QuteVis, which facilitates domain users in querying and examining traffic patterns from large-scale traffic data in an urban transpor (open full item for complete abstract)

    Committee: Ye Zhao (Committee Chair); Cheng-Chang Lu (Committee Member); Xiang Lian (Committee Member); Xinyue Ye (Committee Member); Xiaoling Pu (Committee Member) Subjects: Computer Science
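
    A toy sketch of the SemanticTraj textualization step described in entry 6: GPS samples become street names and descriptive speed terms, forming a searchable taxi document. The street lookup and speed bins are placeholder assumptions:

    ```python
    # Turn a GPS trajectory into a "taxi document" indexable by a text engine.
    def speed_term(kmh: float) -> str:
        # User-defined descriptive terms for vehicle speed (illustrative bins).
        if kmh < 10:
            return "crawling"
        if kmh < 40:
            return "slow"
        if kmh < 80:
            return "moderate"
        return "fast"

    def nearest_street(lat: float, lon: float) -> str:
        # Placeholder for a real map-matching / reverse-geocoding step.
        return "Main St" if lon < -81.65 else "Euclid Ave"

    trajectory = [  # (lat, lon, speed in km/h)
        (41.499, -81.694, 8.0),    # pickup area
        (41.501, -81.650, 45.0),
        (41.504, -81.610, 30.0),   # drop-off area
    ]

    tokens = ["pickup"]
    for lat, lon, v in trajectory:
        tokens += [nearest_street(lat, lon), speed_term(v)]
    tokens.append("dropoff")

    taxi_document = " ".join(tokens)
    print(taxi_document)
    ```
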
  • 7. Stout, Blaine Big and Small Data for Value Creation and Delivery: Case for Manufacturing Firms

    Doctor of Philosophy, University of Toledo, 2018, Manufacturing and Technology Management

    Today's small-market and mid-market manufacturers face increasing competitive pressure to capture, integrate, operationalize, and manage diverse sources of digitized data. Many have made significant investments in data technologies with the objective of improving organizational performance, yet not all have realized demonstrable benefits that create organizational value. One simple question arises: do business analytics make a difference in company performance in today's information-intensive environment? The purpose of this research is to explore this question by looking through the lens of the data-centric pressure placed on management driving the invested use of data technologies: how these drivers shape management's inclination to adopt a digitized organization mindset, affecting data practices, shaping key processes and strategies, and leading to capabilities growth that impacts performance and culture. The terms 'Big Data' and 'Small Data' are two of the most prolifically used phrases in today's world when discussing business analytics and the value data provide to organization performance. Big Data, being strategic to organizational decision-making, and Small Data, operational, are captured from a host of internal and external sources. Studying how to leverage business analytics into organizational value benefits both academic and practitioner audiences alike. The research on Big and Small Data and business analytics is both varied and deep, originating from a host of academic and non-academic sources; however, few empirical studies deeply examine the phenomena as experienced in the manufacturing environment. Exploring the pressures managers face in adopting data-centric managing beliefs and applied practices, and understanding key value-creating process strategy mechanisms impacting the organization, thus provides generalizable insights contributing to the pool of knowledge on the importance of data-technology investments impacting organizational cul (open full item for complete abstract)

    Committee: Paul Hong (Committee Chair); Thomas Sharkey (Committee Member); Wallace Steven (Committee Member); Cheng An Chung (Committee Member) Subjects: Information Systems; Information Technology; Management; Organization Theory; Organizational Behavior
  • 8. Koseler, Kaan Realization of Model-Driven Engineering for Big Data: A Baseball Analytics Use Case

    Master of Science, Miami University, 2018, Computer Science and Software Engineering

    Data collection and analysis is widespread across all industries, leading to a glut of data and a dearth of specialists who can use this data to derive insights. Accompanying the new "Big Data" paradigm is a resurgence of interest in machine learning techniques. Using machine learning techniques to work with "Big Data" is a complex task, often requiring specialized knowledge of the problem space as well as appropriate computer algorithms and approaches. However, such specialists who also possess programming ability are difficult to find and expensive to train. The gap between the problem space and the software solution often includes developers who lack the requisite domain-specific knowledge. The Model-Driven Engineering (MDE) paradigm helps close this gap by allowing developers to implement quality software by modeling it using high-level domain-specific concepts. In this thesis, we attempt to demonstrate the plausibility of applying MDE to big data by considering a use case of machine learning baseball analytics, specifically prediction of the next pitch. We model and implement MDE solutions to this use case by employing and updating an existing, but untested, Domain-Specific Modeling Language (DSML). We implement model instances considering different prediction factors, along with a code generation scheme for this DSML targeted at a binary classification problem of fastball versus non-fastball. Our goal is to help demonstrate the viability of the MDE paradigm in the machine learning domain, make machine learning software development more accessible and formalized, and help facilitate future research in this area.

    Committee: Matthew Stephan PhD (Advisor); James Kiper PhD (Committee Member); Michael Zmuda PhD (Committee Member) Subjects: Computer Science
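
    A hedged sketch of the kind of classifier entry 8's DSML targets: a binary fastball vs. non-fastball model. The prediction factors, model choice, and synthetic labels are illustrative assumptions, not the thesis's generated code:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n = 1000
    # Hypothetical prediction factors: count (balls, strikes), outs, runner on.
    X = np.column_stack([
        rng.integers(0, 4, n),   # balls
        rng.integers(0, 3, n),   # strikes
        rng.integers(0, 3, n),   # outs
        rng.integers(0, 2, n),   # runner on base?
    ])
    # Synthetic label: fastballs more likely when behind in the count, plus noise.
    y = (X[:, 0] > X[:, 1]).astype(int) ^ (rng.random(n) < 0.15)

    clf = RandomForestClassifier(random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
    ```
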
  • 9. Ji, Xiaonan An Integrated Framework of Text and Visual Analytics to Facilitate Information Retrieval towards Biomedical Literature

    Doctor of Philosophy, The Ohio State University, 2018, Computer Science and Engineering

    Digitalized scientific literature, as a special type of text article, is considered a valuable knowledge repository in widespread academic and practical settings. Biomedical literature has specifically played an important role in supporting evidence-based medicine and promoting quality healthcare. Given an information need such as a patient problem, information retrieval over biomedical literature focuses on the identification of highly relevant articles to support up-to-date knowledge synthesis and reliable decision making. In particular, high recall, high precision, and human involvement are expected for rigorous information retrieval in healthcare. Despite the critical information needs requiring high effectiveness and efficiency, the information overload from the large volume of heterogeneous biomedical literature has made this challenging. In this dissertation, we propose an integrated and generalizable framework of text and visual analytics to facilitate the significant domain application of biomedical literature retrieval. We focus on the unmet and most challenging aspect of identifying highly relevant articles from a text corpus, which is typically an article collection obtained via exhaustive literature search. We convert extensive biomedical articles into effective representations that encode underlying article meanings and indicate article relevance, and we promote advantageous visualizations to exploit and explore article representations so that humans can get involved in not only task accomplishment but also knowledge discovery. We first apply text analytics to generate machine-understandable article features and representations, and promote their effectiveness with multiple knowledge and computational resources. Considering the special format of biomedical literature, we start by investigating the fundamental lexical feature space consisting of diverse article elements and examine their usefulness in predicting article relevan (open full item for complete abstract)

    Committee: Alan Ritter Ph.D. (Advisor); Po-Yin Yen Ph.D. (Advisor); Raghu Machiraju Ph.D. (Committee Member) Subjects: Biomedical Research; Computer Science; Information Science
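
    A minimal sketch of the lexical-representation idea in entry 9: TF-IDF article vectors ranked by cosine similarity to an information need. The toy corpus stands in for a real biomedical citation set:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy stand-ins for biomedical article abstracts.
    articles = [
        "beta blockers reduce mortality after myocardial infarction",
        "statin therapy and lipid lowering in primary prevention",
        "beta blocker dosing in heart failure with reduced ejection fraction",
    ]
    query = "beta blockers for heart failure"

    vec = TfidfVectorizer()
    A = vec.fit_transform(articles)      # article representations
    q = vec.transform([query])           # information-need representation

    # Rank articles by similarity to the query, highest first.
    scores = cosine_similarity(q, A).ravel()
    for rank, i in enumerate(scores.argsort()[::-1], 1):
        print(rank, round(scores[i], 3), articles[i])
    ```
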
  • 10. Yu, Andrew NBA ON-BALL SCREENS: AUTOMATIC IDENTIFICATION AND ANALYSIS OF BASKETBALL PLAYS

    Master of Computer and Information Science, Cleveland State University, 2017, Washkewicz College of Engineering

    The on-ball screen is a fundamental offensive play in basketball; it is often used to trigger a chain reaction of player and ball movement to obtain an effective shot. All teams in the National Basketball Association (NBA) employ the on-ball screen on offense. On the other hand, a defense can mitigate its effectiveness by anticipating the on-ball screen and its goals. In the past, a defender's ability to disrupt the on-ball screen was difficult to measure and was often described using abstract words like instincts, experience, and communication. In recent years, player motion-tracking data in NBA games has become available through the development of sophisticated data collection tools. This thesis presents methods to construct a framework that can extract, transform, and analyze the motion-tracking data to automatically identify the presence of on-ball screens. The framework also helps NBA players and coaches adjust their game plans regarding the on-ball screen using trends from past games. With the help of support vector machines, the framework identifies on-ball screens with an accuracy of 85%, a considerable improvement over currently published results in the existing literature.

    Committee: Sunnie Chung Ph.D. (Committee Chair); Yongjian Fu Ph.D. (Committee Member); Nigamanth Sridhar Ph.D. (Committee Member) Subjects: Artificial Intelligence; Computer Science
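
    A minimal sketch of the classification step in entry 10: an SVM over features derived from player-tracking data. The feature set and synthetic labels are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n = 600
    # Hypothetical per-moment features around the ball handler.
    X = np.column_stack([
        rng.uniform(0, 10, n),   # screener-to-handler distance (ft)
        rng.uniform(0, 10, n),   # screener-to-handler's-defender distance (ft)
        rng.uniform(0, 20, n),   # screener speed (ft/s)
    ])
    # Synthetic label: a screen involves a near-stationary screener close to both.
    y = ((X[:, 0] < 5) & (X[:, 1] < 5) & (X[:, 2] < 8)).astype(int)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
    ```
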
  • 11. Rao, Rashmi Jayathirtha Modeling learning behaviour and cognitive bias from web logs

    Master of Science, The Ohio State University, 2017, Computer Science and Engineering

    Students who can link algebraic functions to their corresponding graphs perform well in STEM courses. Increasingly, early algebra curricula teach these concepts in tandem. However, it is challenging to assess whether students are linking the concepts. Tests, video analyses, interviews, and other traditional methods that aim to quantify how students link the concepts taught in school require precious classroom and teacher time. In this paper, we use web logs collected by a smart-classroom web server to infer learning. Compared to traditional methods, web logs are widely available, voluminous, and amenable to data science. However, web logs are constrained by factors such as the data collection practices of system managers and the design of the web interface. Our approach partitions the web interface into components related to data and graph concepts. We collect click and mouse-movement data as users interact with these components. We used statistical and data mining techniques such as k-means clustering, principal component analysis, and frequent sequence patterns to model learning behavior. We compared our models with traditional methods of assessing learning behavior for a workshop presented in Summer 2016. Students in the workshop were middle-school math teachers planning to use a new early algebra curriculum in their own classrooms. First, we used our models to assess participation levels, a prerequisite indicator for learning. Our models aligned with ground-truth traditional methods for 17 of 18 students. The results from machine learning models that do not consider the computational-thinking aspect of the web components aligned with ground-truth methods for only 10 of 18 students. Unlike traditional methods, our models were computed in minutes, not days. The results of the models with respect to the two types of components of the web portal have been used to infer possible data- or graph-oriented cognitive bias in the students.

    Committee: Christopher Stewart Dr (Advisor); Arnulfo Perez Dr (Committee Member); Alan Ritter Dr (Committee Member) Subjects: Computer Science
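
    A minimal sketch of the modeling step in entry 11: PCA over per-student interaction features followed by k-means clustering. The features and counts are illustrative, not the workshop's data:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    # Rows = 18 students; columns = hypothetical counts, e.g. clicks on data
    # components, clicks on graph components, mouse-movement time on each.
    X = rng.poisson(lam=[20, 5, 30, 10], size=(18, 4)).astype(float)

    # Standardize, reduce to two principal components, then cluster.
    X_std = StandardScaler().fit_transform(X)
    X_2d = PCA(n_components=2).fit_transform(X_std)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)

    for student, (pc, lab) in enumerate(zip(X_2d, labels)):
        print(f"student {student:2d}  PC1={pc[0]:+.2f}  PC2={pc[1]:+.2f}  cluster={lab}")
    ```
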
  • 12. Jadhav, Ashutosh Knowledge Driven Search Intent Mining

    Doctor of Philosophy (PhD), Wright State University, 2016, Computer Science and Engineering PhD

    Understanding users' latent intents behind search queries is essential for satisfying users' search needs. Search intent mining can help search engines enhance their ranking of search results, enabling new search features like instant answers, personalization, search result diversification, and the recommendation of more relevant ads. Hence, there has been increasing attention on studying how to effectively mine search intents by analyzing search engine query logs. While state-of-the-art techniques can identify the domain of a query (e.g., sports, movies, health), identifying domain-specific intent is still an open problem. Among all the topics available on the Internet, health is one of the most important in terms of impact on the user and forms one of the most frequently searched areas. This dissertation presents a knowledge-driven approach to domain-specific search intent mining with a focus on health-related search queries. First, we identified 14 consumer-oriented health search intent classes based on inputs from focus group studies and on analyses of popular health websites, literature surveys, and an empirical study of search queries. We defined the problem of classifying millions of health search queries into zero or more intent classes as a multi-label classification problem. Popular machine learning approaches for multi-label classification tasks (namely, problem transformation and algorithm adaptation methods) were not feasible due to limitations in labeled data creation and health-domain constraints. Another challenge in solving the search intent identification problem was mapping the terms used by laymen to medical terms. To address these challenges, we developed a semantics-driven, rule-based search intent mining approach leveraging rich background knowledge encoded in the Unified Medical Language System (UMLS) and a crowd-sourced encyclopedia (Wikipedia). The approach can identify search intent in a disease-agnostic manner and has been eva (open full item for complete abstract)

    Committee: Amit Sheth Ph.D. (Advisor); Krishnaprasad Thirunarayan Ph.D. (Committee Member); Michael Raymer Ph.D. (Committee Member); Jyotishman Pathak Ph.D. (Committee Member) Subjects: Computer Science
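
    A toy sketch of the semantics-driven, rule-based approach in entry 12: a lexicon normalizes layman terms to concepts, and rules map concepts to zero or more intent classes (multi-label). The lexicon and classes here are small stand-ins for the UMLS/Wikipedia-backed knowledge the thesis uses:

    ```python
    LEXICON = {                       # layman term -> medical concept
        "sugar": "diabetes mellitus",
        "heart attack": "myocardial infarction",
        "side effects": "adverse effects",
    }
    RULES = {                         # concept/keyword -> intent class(es)
        "adverse effects": {"drug side effects"},
        "symptom": {"symptom inquiry"},
        "treatment": {"treatment options"},
    }

    def intents(query: str) -> set[str]:
        q = query.lower()
        # Normalize layman terms to concepts first.
        for layman, concept in LEXICON.items():
            q = q.replace(layman, concept)
        found = set()
        for trigger, classes in RULES.items():
            if trigger in q:
                found |= classes       # multi-label: zero or more classes
        return found

    print(intents("metformin side effects"))    # {'drug side effects'}
    print(intents("best treatment for sugar"))  # {'treatment options'}
    ```
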
  • 13. Fuhry, David PLASMA-HD: Probing the LAttice Structure and MAkeup of High-dimensional Data

    Doctor of Philosophy, The Ohio State University, 2015, Computer Science and Engineering

    Making sense of, analyzing, and extracting useful information from large and complex data is a grand challenge. A user tasked with meeting this challenge is often befuddled with questions about where and how to begin to understand the relevant characteristics of such data. Recent advances in relational analytics, in particular network analytics, offer key tools for insight into connectivity structure and relationships at both local ("guilt by association") and global (clustering and pattern matching) levels. These tools form the basis of recommender systems, ranking, and learning algorithms of great importance to research and industry alike. However, complex data rarely originate in a format suitable for network analytics, and the transformation of large and typically high-dimensional non-network data into a network is rife with parameterization challenges, as an under- or over-connected network will lead to poor subsequent analysis. Additionally, both network formation and subsequent network analytics become very computationally expensive as network size increases, especially if multiple networks with different connectivity levels are formed in the previous step; scalable approximate solutions are thus a necessity. I present an interactive system called PLASMA-HD to address these challenges. PLASMA-HD builds on recent progress in the fields of locality sensitive hashing, knowledge caching, and graph visualization to give users the capability to probe and interrogate the intrinsic structure of data. For an arbitrary dataset (vector, structural, or mixed), and given a similarity or distance measure-of-interest, PLASMA-HD enables an end user to interactively explore the intrinsic connectivity or clusterability of a dataset under different threshold criteria. PLASMA-HD employs and enhances the recently proposed Bayesian Locality Sensitive Hashing (BayesLSH) to efficiently estimate connectivity structure among entities. Unlike previous efforts which operate at (open full item for complete abstract)

    Committee: Srinivasan Parthasarathy (Advisor); Arnab Nandi (Committee Member); P Sadayappan (Committee Member); Michael Barton (Committee Member) Subjects: Computer Science
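
    A minimal sketch of the core estimate behind entry 13: random-hyperplane LSH signatures approximate pairwise cosine similarity, from which a network can be formed under a user-chosen threshold. BayesLSH adds principled candidate pruning that this sketch omits:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, d, bits = 8, 16, 256
    X = rng.normal(size=(n, d))                 # high-dimensional data points

    planes = rng.normal(size=(bits, d))         # random hyperplanes
    sigs = (X @ planes.T) > 0                   # one sign bit per hyperplane

    def est_cosine(i: int, j: int) -> float:
        # P(bits agree) = 1 - angle/pi, so invert to estimate the cosine.
        agree = np.mean(sigs[i] == sigs[j])
        return float(np.cos(np.pi * (1.0 - agree)))

    # Form the network at a chosen similarity threshold.
    threshold = 0.3
    edges = [(i, j, s)
             for i in range(n) for j in range(i + 1, n)
             if (s := round(est_cosine(i, j), 2)) >= threshold]
    print("edges meeting threshold:", edges)
    ```
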
  • 14. Lin, Wei Distributed CAV Mobility Data Intelligence and Informatics towards Self-Adaptive Traffic Operation and Sustainability Assessment

    PhD, University of Cincinnati, 2024, Engineering and Applied Science: Civil Engineering

    Emerging transportation technologies, such as connected and autonomous vehicle (CAV) technologies supported by cooperative vehicle-to-everything (C-V2X) wireless communications, are bringing a diversity of real-time data sources that will ultimately be ubiquitously available over roadway networks. Examples include CAV mobility trajectory datasets, basic safety messages (BSM) describing vehicle safety state, and map data describing the physical geometry of road configurations such as intersections. These emerging data sources can be used to depict the main characteristics of traffic flow operations and provide a solid basis for deriving profiles of traffic patterns. The quality and scalability of the CAV mobility data matter greatly to the effectiveness of CAV cooperative driving behaviors and systematic traffic control schemes, in line with the top priority of safety assurance. A convergent challenge hence arises from the imperative need to gather, process, analyze, store, and streamline vast quantities of CAV datasets to characterize the large-scale traffic environment and maximize the safety and efficiency of the roadway network. If this challenge is successfully addressed, such intelligent data fusion has great potential to be streamlined with other infrastructural monitoring data collected at different spatial locations to perform large-scale self-adaptive traffic control. With the support of edge/fog and cloud computing technologies, these dynamic traffic operational datasets can flow to traffic management agencies as a big-data asset, synergized with other heterogeneous land-use and environmental monitoring measurements. This is also a way to provide robust and reliable information sources for sustainability assessment. A comprehensive revelation and profound comprehension of the underlying mechanism and rationale for processing and interconnecting the CAV-enabled data sources with control infrastructures will establish the groundwork (open full item for complete abstract)

    Committee: Heng Wei Ph.D. (Committee Chair); Y. Jeffrey Yang Ph.D M.A B.A. (Committee Member); Xuefu Zhou Ph.D. (Committee Member); John Ash Ph.D. (Committee Member) Subjects: Civil Engineering
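
    A small sketch of one piece of the pipeline entry 14 describes: aggregating basic safety messages (BSMs) at an edge node into per-link mean speeds that could feed self-adaptive control. The BSM fields shown are a simplified, assumed subset of the real message set:

    ```python
    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class BSM:
        vehicle_id: str
        link_id: str      # road segment the vehicle is on
        speed_mps: float
        timestamp: float

    def link_mean_speeds(messages: list[BSM]) -> dict[str, float]:
        # Group speeds by road segment and average them.
        by_link: dict[str, list[float]] = defaultdict(list)
        for m in messages:
            by_link[m.link_id].append(m.speed_mps)
        return {link: sum(v) / len(v) for link, v in by_link.items()}

    msgs = [
        BSM("veh1", "I-75:seg12", 27.0, 0.0),
        BSM("veh2", "I-75:seg12", 24.5, 0.1),
        BSM("veh3", "MLK:seg03", 11.0, 0.2),
    ]
    print(link_mean_speeds(msgs))
    ```
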
  • 15. Gandee, Tyler Natural Language Generation: Improving the Accessibility of Causal Modeling Through Applied Deep Learning

    Master of Science, Miami University, 2024, Computer Science

    Causal maps are graphical models that are well understood at small scales. When created through a participatory modeling process, they become a strong asset in decision making. Furthermore, those who participate in the modeling process may seek to understand the problem from various perspectives. However, as causal maps increase in size, the information they contain becomes clouded, which renders the map unusable. In this thesis, we transform causal maps into various mediums to improve the usability and accessibility of large causal models; our proposed algorithms can also be applied to small-scale causal maps. In particular, we transform causal maps into meaningful paragraphs using GPT and network traversal algorithms to attain full coverage of the map. Then, we compare automatic text summarization models with graph reduction algorithms to reduce the amount of text to a more approachable size. Finally, we combine our algorithms into a visual analytics environment that provides details-on-demand for the user by displaying the summarized text and letting the user interact with summaries to display the detailed text, the causal map, and even generated images in an appropriate manner. We hope this research provides more tools for decision-makers and allows modelers to give back to participants the final result of their work.

    Committee: Philippe Giabbanelli (Advisor); Daniela Inclezan (Committee Member); Garrett Goodman (Committee Member) Subjects: Computer Science
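
    A toy sketch of the traversal-to-text idea in entry 15: walk every edge of a causal map and emit one sentence per causal link, guaranteeing full coverage. The GPT rewriting and summarization stages the thesis describes are omitted here:

    ```python
    import networkx as nx

    # A tiny causal map; edge sign encodes the direction of influence.
    G = nx.DiGraph()
    G.add_edge("deforestation", "soil erosion", sign="+")
    G.add_edge("soil erosion", "crop yield", sign="-")
    G.add_edge("crop yield", "food security", sign="+")

    TEMPLATES = {"+": "An increase in {src} tends to increase {dst}.",
                 "-": "An increase in {src} tends to decrease {dst}."}

    # Edge-order traversal covers every causal link exactly once.
    sentences = [TEMPLATES[data["sign"]].format(src=u, dst=v)
                 for u, v, data in G.edges(data=True)]
    print(" ".join(sentences))
    ```
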
  • 16. Alain, Gabriel Evaluating Healthcare Excellence: The Agile Healthcare Performance Index (AHPI) as a Catalyst for Quality Improvement and Systemic Efficiency

    Doctor of Philosophy, The Ohio State University, 2024, Health and Rehabilitation Sciences

    This dissertation presents the development and evaluation of the Agile Healthcare Performance Index (AHPI), a novel methodology designed to improve quality and measure performance within healthcare settings. It offers a framework designed to capture the complexities of healthcare delivery. Chapter 3 introduces the AHPI, emphasizing its significance in enhancing resource allocation and operational decision-making through an analysis of synthetic data across hospital service lines. The results underscore its adaptability and temporal sensitivity compared to static, unweighted indices, highlighting its potential to refine healthcare performance measurement. Chapter 4 extends the application of the AHPI to quality improvement (QI) initiatives, hypothesizing its effectiveness in aligning healthcare decision-making processes with the complex nature of care delivery. A simulation-based case study illustrates the alignment of the AHPI with the Cynefin framework's domains, demonstrating its strategic utility in navigating the dynamic challenges of healthcare. Chapter 5 focuses on the practical application of the AHPI in evaluating hip fracture care among the elderly, utilizing data from the American College of Surgeons National Surgical Quality Improvement Program (NSQIP). The findings reveal the AHPI's ability to accurately reflect variations in surgical outcomes, validating its role as a dynamic tool for quality improvement and policymaking across healthcare settings. Together, these studies advocate for the AHPI as a groundbreaking approach to healthcare performance assessment and QI. By integrating multidimensional metrics and a data-driven methodology, the AHPI can provide a robust solution for enhancing care quality and operational efficiency, paving the way for a more adaptable and effective healthcare system.

    Committee: Catherine Quatman-Yates (Advisor); Courtney Hebert (Committee Member); Lisa Juckett (Committee Member); Carmen Quatman (Committee Co-Chair) Subjects: Health Care; Health Sciences; Operations Research; Systems Design
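
    A hedged sketch contrasting a static unweighted index with a weighted, recency-aware one in the spirit of entry 16's AHPI. The metrics, weights, and recency scheme are illustrative assumptions, not the published AHPI formula:

    ```python
    import numpy as np

    # Rows = quarters; columns = normalized metrics (higher is better), e.g.
    # outcome quality, throughput, patient experience. Values are synthetic.
    scores = np.array([[0.70, 0.60, 0.80],
                       [0.72, 0.55, 0.82],
                       [0.75, 0.65, 0.78]])

    static_index = scores.mean(axis=1)            # unweighted, per quarter

    metric_weights = np.array([0.5, 0.2, 0.3])    # domain-informed priorities
    recency = np.array([0.2, 0.3, 0.5])           # later quarters weigh more

    weighted_per_quarter = scores @ metric_weights
    ahpi_like = float(weighted_per_quarter @ recency)

    print("static per-quarter index:", static_index.round(3))
    print("weighted, recency-aware index:", round(ahpi_like, 3))
    ```
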
  • 17. Li, Wenzhe Methodology for Yield Improvement by Addressing Variability and Uncertainty in Advanced Manufacturing Systems and Processes

    PhD, University of Cincinnati, 2023, Engineering and Applied Science: Mechanical Engineering

    Controlling variability and uncertainty (V&U) in advanced manufacturing systems and industrial processes is of great importance for quality management and yield improvement. Variability is often envisaged in two dimensions: the variety of outputs across different products, and the variance of a system's outputs over time. Uncertainty is often caused by incomplete knowledge, resulting in poor decision-making. V&U are the key factors that affect manufacturing yield, especially for critical advanced manufacturing processes, e.g., high-precision parts molding and assembly, semiconductor wafer etching, chemical mechanical planarization (CMP), and additive manufacturing. Addressing V&U through data-driven approaches, especially for online (real-time and dynamic) applications, is rarely studied in the literature. Practice in the real world relies heavily on human experience and tedious trial and error. Creating a digital representation of variability from limited information, and optimizing production yield based on it, involves systematic methodologies that are also rarely discussed. To address these challenges, this study develops a systematic framework for dealing with V&U to improve production yield by leveraging a novel adaptive modeling methodology. Improving manufacturing yield can be generally framed as a control optimization problem: searching for a control strategy under which process and product V&U are minimized. The proposed framework involves methodologies spanning useful and usable data collection, information extraction and selection, and data-driven adaptive modeling and optimization for yield improvement. Adaptive modeling technologies are specially developed and leveraged as accurate and efficient surrogate models of the varying target processes or machines. With such a digital model, optimization accuracy and efficiency can be significantly enhanced, allowing for online deployment of the methodology. The (open full item for complete abstract)

    Committee: Jay Lee Ph.D. (Committee Chair); Jing Shi Ph.D. (Committee Member); Manish Kumar Ph.D. (Committee Member); Jay Kim Ph.D. (Committee Member) Subjects: Mechanical Engineering
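
    A minimal sketch of the surrogate-modeling loop entry 17 describes: fit a cheap model of yield as a function of process settings, then search the model rather than the expensive real process. The process function, bounds, and Gaussian-process choice are synthetic stand-ins, not the thesis's method:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(6)

    def true_yield(x):
        # Stand-in for an expensive, noisy process that is unknown in practice.
        return 0.9 - 0.4 * (x[:, 0] - 0.6) ** 2 - 0.3 * (x[:, 1] - 0.4) ** 2

    X = rng.uniform(0, 1, size=(25, 2))            # sampled process settings
    y = true_yield(X) + rng.normal(scale=0.01, size=25)

    surrogate = GaussianProcessRegressor().fit(X, y)

    # Optimize over the surrogate instead of running the real process.
    candidates = rng.uniform(0, 1, size=(5000, 2))
    pred = surrogate.predict(candidates)
    best = candidates[pred.argmax()]
    print("suggested settings:", best.round(3),
          "predicted yield:", pred.max().round(3))
    ```
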
  • 18. Zhang, Jielun Sustaining the Performance of Artificial Intelligence in Networking Analytics

    Doctor of Philosophy (Ph.D.), University of Dayton, 2023, Electrical Engineering

    Emerging Artificial Intelligence (AI) techniques, including both machine learning algorithms and deep learning models, have become viable solutions to support network measurement and management. As a fundamental task in network analytics, network traffic classification has recently been studied with the adoption of AI techniques. For example, widely studied AI-based traffic classifiers, developed on artificial neural networks such as convolutional neural networks (CNNs) and long short-term memory networks (LSTMs), have demonstrated high classification accuracy. However, their performance is limited by the coverage of their knowledge bases, which restricts their effectiveness in dealing with updated or new network applications. To address these limitations, model update mechanisms are introduced that allow AI-based traffic classification models to sustain high performance by creating a new knowledge base. These mechanisms enable AI-based network traffic classification models to adapt to evolving network applications in dynamic network environments. Additionally, the dissertation discusses the challenges to AI performance in network security and resolves them by leveraging the proposed mechanisms.

    Committee: Eric Balster (Committee Chair); Hui Wang (Committee Member); Brad Ratliff (Committee Member); Feng Ye (Committee Member) Subjects: Computer Engineering; Electrical Engineering
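
    A hedged sketch of an AI-based traffic classifier of the kind entry 18 studies: a small 1-D CNN over the first bytes of a flow, written in PyTorch. The architecture and sizes are illustrative assumptions; the proposed update mechanisms would retrain such a model on a refreshed knowledge base:

    ```python
    import torch
    import torch.nn as nn

    N_CLASSES, N_BYTES = 5, 784   # e.g., first 784 payload bytes per flow

    model = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        nn.Flatten(),
        nn.Linear(32 * (N_BYTES // 4), N_CLASSES),  # two pools halve length twice
    )

    # One toy training step on random "flows" (real input: normalized bytes).
    x = torch.rand(8, 1, N_BYTES)
    labels = torch.randint(0, N_CLASSES, (8,))
    loss = nn.CrossEntropyLoss()(model(x), labels)
    loss.backward()
    print("loss:", float(loss))
    ```
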
  • 19. Aglonu, Kingdom Using Data Analytics to Understand Student Support in STEM for Nontraditional Students

    Master of Computing and Information Systems, Youngstown State University, 2023, Department of Computer Science and Information Systems

    Co-curricular supports have been subject to practice bias, which makes it difficult to understand need-based support for nontraditional students in STEM. Thus, the aim of this study was to use data analytics to understand student support in STEM for nontraditional students. A quantitative research approach was adopted, with a longitudinal survey of 366 students in the Fall and 218 students in the Spring. To understand the support system for nontraditional students, structural equation modeling (SEM) was used. RStudio was used to screen and analyze the initial data, and the lavaan package in R was used to conduct latent variable analyses. To examine the latent correlations, all constructs were concurrently integrated into a single Confirmatory Factor Analysis model. Subsequently, the data analysis moved on to robust full-information maximum likelihood (RFIML) estimation of the SEM, and non-significant pathways were removed until the final model was developed. The study found that although the omnibus support model, as well as the support model for traditional students, was not confirmed in either the Fall or Spring semester, the model was confirmed for nontraditional students in the Fall semester. The significant loadings for nontraditional students in the Fall semester include academic integration, university integration, academic advisory support, faculty support, STEM faculty support, student affairs support, and cost-of-attendance support & training. However, the support model for nontraditional students in the Spring semester was not confirmed. Therefore, using structural equation modeling, this study provides important insights for understanding support for nontraditional students.

    Committee: Cory Brozina PhD (Advisor); Alina Lazar PhD (Committee Member); Arslanyilmaz Abdu PhD (Committee Member) Subjects: Engineering; Higher Education; Statistics
  • 20. Brogan, Cole Logistic Regression Analysis on the Value of Statistics in Major League Baseball

    Bachelor of Science, Walsh University, 2023, Honors

    Currently, there is limited information available to the public on the processes regarding data analytics and statistics in Major League Baseball (MLB). Prior to the release of Moneyball, there was an imperfect understanding of success in professional baseball. After the developments that created sabermetrics in baseball, it remained to be seen which statistics were valued by baseball organizations. The goal of this analysis was to determine which offensive metrics could be used to predict whether an MLB team makes the playoffs. A binary logistic regression analysis using IBM SPSS Statistics (Version 28) was used to create the model. The sabermetric statistics used were wins above replacement (WAR), runs created (RC), batting average on balls in play (BABIP), and on-base percentage plus slugging percentage plus (OPS+). These predictor variables were compared to a measure of success: whether a team made the postseason. These team statistics were used for all 30 MLB teams over the years 2015-2019 to create the regression model. Upon completion, the model was significant in predicting whether a team made the playoffs. A further analysis found that WAR and RC were the two variables that correlated most closely with making the postseason.

    Committee: Steven Harris (Advisor); Kai Brunkalla (Other) Subjects: Sports Management; Statistics
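
    A minimal sketch of the analysis in entry 20: a binary logistic regression of playoff qualification on WAR, RC, BABIP, and OPS+. The thesis used IBM SPSS; statsmodels stands in here, and the data below are synthetic placeholders, not the 2015-2019 team statistics:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 150  # 30 teams x 5 seasons
    X = np.column_stack([
        rng.normal(25, 8, n),        # team WAR
        rng.normal(700, 60, n),      # runs created
        rng.normal(0.295, 0.01, n),  # BABIP
        rng.normal(100, 8, n),       # OPS+
    ])
    # Synthetic outcome loosely driven by WAR and RC, echoing the reported result.
    logit = 0.15 * (X[:, 0] - 25) + 0.01 * (X[:, 1] - 700) - 0.9
    made_playoffs = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    model = sm.Logit(made_playoffs, sm.add_constant(X)).fit(disp=0)
    print(model.summary(xname=["const", "WAR", "RC", "BABIP", "OPS+"]))
    ```
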