Search Results

(Total results 30)

  • 1. DePero, Matthew Thread Safe Multi-Tier Priority Queue for Managing Pending Events in Multi-Threaded Discrete Event Simulations

    Master of Computer Science, Miami University, 2018, Computer Science and Software Engineering

    Parallel Discrete Event Simulation (PDES) conducted on emerging shared-memory many-core CPUs offers the potential for even greater performance by 1) eliminating the need for message passing and its associated serialization/deserialization overheads, and 2) reducing memory requirements by allowing a single copy of an event to be shared between multiple threads. However, the overall performance of a PDES is highly contingent on the speed and capacity of its pending event set data structure. Accordingly, we present a simple, thread-safe priority queue called 3tSkip for managing pending events. Our design takes advantage of contemporary synchronization primitives, including atomics and lock-free data structures, to ensure good performance. The priority queue has been incorporated into a redesigned version of a parallel simulator called MUSE to enable PDES on shared-memory platforms. The effectiveness of the proposed solution has been assessed using standard PDES benchmarks. Our analysis identifies many critical obstacles to multi-threaded design and presents novel solutions to those obstacles. Our solution achieves significant speedup in high-granularity scenarios when compared to the existing MUSE simulator, though more work is required before the multi-threaded design becomes effective in a broad range of scenarios.

    Committee: Dhananjai Rao Dr. (Advisor); Mike Zmuda Dr. (Committee Member); Karen Davis Dr. (Committee Member) Subjects: Computer Science
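As context for why the pending event set is the hot path in the abstract above, the heart of any discrete event simulation is a loop that repeatedly pops the lowest-timestamp event from a priority queue. A minimal single-threaded sketch in Python (a structural analog only; the thesis's 3tSkip is a concurrent C++ structure, and the handler signature here is illustrative):

```python
import heapq

def run_des(initial_events, handler, end_time):
    """Minimal discrete event simulation loop.  The pending event set is
    a heap ordered by timestamp; every iteration touches it, which is why
    concurrent simulators optimize this structure so aggressively."""
    pending = list(initial_events)          # (timestamp, event) pairs
    heapq.heapify(pending)
    processed = []
    while pending:
        ts, ev = heapq.heappop(pending)     # always the smallest timestamp
        if ts > end_time:
            break
        processed.append((ts, ev))
        for new_ts, new_ev in handler(ts, ev):
            assert new_ts >= ts, "events may only be scheduled in the future"
            heapq.heappush(pending, (new_ts, new_ev))
    return processed
```

In a multi-threaded simulator, every thread contends for `pending`, which motivates the lock-free designs the thesis studies.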
  • 2. Alt, Aaron Profile Driven Partitioning Of Parallel Simulation Models

    MS, University of Cincinnati, 2014, Engineering and Applied Science: Computer Engineering

    A considerable amount of research into effective parallelization for discrete event driven simulation has been conducted over the past few decades. However, most of this research has targeted the parallel simulation infrastructure, focusing on data structures, algorithms, and synchronization methods for the parallel and distributed simulation kernels. While this focus has successfully improved and refined the performance of parallel discrete event simulation kernels, little effort has been directed toward analyzing and preparing the simulation model itself for parallel execution. Model-specific optimizations could have significant performance implications, but have been largely ignored. This problem is compounded by the lack of a widely used simulation and modeling language for many domains. The lack of a common language is, however, not entirely insurmountable. For example, the partitioning and assignment of objects from the simulation model onto the hardware platform is generally performed by the simulation infrastructure. While partitioning can have dramatic impacts on the communication frequencies between the concurrently executed objects, most existing parallel simulation infrastructures do little to address this opportunity. This thesis addresses the partitioning and assignment of objects within a simulation model for parallel execution. The specific target of this effort is to develop a partitioning and assignment strategy for use in the WARPED parallel simulation kernel that has been developed and maintained at the University of Cincinnati. The focus of the work is to develop a general purpose solution that can function for any simulation model that has been prepared for execution on the WARPED kernel. The specific solution exploits a sequential kernel from the WARPED project to pre-simulate the simulation model to obtain profile data regarding the frequency of events communicated between objects. 
This event frequency data is then used to develop partitio (open full item for complete abstract)

    Committee: Philip Wilsey Ph.D. (Committee Chair); Fred Beyette Ph.D. (Committee Member); Karen Davis Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member) Subjects: Computer Engineering
  • 3. Molnar, Alex Using Simulation in Healthcare Emergency Transport to Improve Efficiency and Safety

    Master of Science (MS), Ohio University, 2023, Industrial and Systems Engineering (Engineering and Technology)

    Using computer simulation in healthcare is a longstanding endeavor: alternative scenarios and system configurations can be tested before they are implemented in the real world. One facet of healthcare with an apparent lack of simulation work is emergency transportation, especially work that aims to reduce the amount of time a crew works over its designated shift. In collaboration with a Midwest USA hospital, the researcher sought to model the current system, then experiment by altering crew schedule start times, the number of crews, and policy to reduce the frequency and duration with which a crew works past its scheduled end time. From this, a framework was developed to help other institutions with similar aims. After the simulation model was verified and validated, experiments varying the start times of the crews, scheduling the crew's constituent resources separately, and varying the number of crews were investigated. With the goal of reducing the time a crew spends working past its scheduled shift without negatively affecting productivity, scenarios were identified that serviced a similar number of transport requests while decreasing over-shift metrics. The results of the simulation experimentation can be summarized this way: reducing over-shift time comes at a cost, likely either a reduction in productivity or an additional crew. A seven-themed framework for future studies was derived: a sound mission, system understanding, data availability/understanding, respect for process, simulation experience, results analysis and recommendations, and empathy.

    Committee: Tao Yuan (Advisor); Dušan Šormaz (Advisor) Subjects: Health Care; Industrial Engineering; Management
  • 4. Ryan, Miller Integrated Simulation Model for Patient Flow Between Operating Rooms and Progressive Care Units Using Custom Objects

    Master of Science (MS), Ohio University, 2020, Industrial and Systems Engineering (Engineering and Technology)

    Process improvements in hospitals usually focus on a single department (e.g., emergency department, operating theater, specialty clinic). However, actions taken in one department inevitably affect the performance of other departments. Therefore, greater efficiency improvements can be obtained by considering the patient care process as one synergistic activity involving several departments and various sets of resources. In this research we propose an integrated approach for modeling the patient lifecycle across multiple departments. First, we describe patient flow from entry into the hospital through a progressive care unit until the patient has fully recovered. We use process mapping methods to identify value-added activities and other necessary activities in the patient lifecycle. Then, a simulation model is developed in Simio using customized objects created in previous works. These customized objects carry their own logic and behavior. For example, the Bed object includes logic for a patient recovering while using several hospital resources (nurses, therapists) during the hospital stay. These objects were used to build several configurations of an integrated model with multiple departments. Data about patient arrival patterns, health acuity, and procedure needs were obtained from a real hospital in order to test our approach. The procedure duration data (which differed by patient acuity level and by surgical or other procedure) were used to obtain service distributions using statistical analysis methods. Modular simulation objects and data distributions from real hospitals allowed us to build an integrated simulation model with several configurations of the process flow. Simulation experiments were performed on these models and performance was recorded. Recommendations for implementation in hospitals are also reported.

    Committee: Dusan Sormaz (Advisor); Gursel Suer (Committee Member); Diana Schwerha (Committee Member); Vic Matta (Committee Member) Subjects: Engineering; Health Care; Industrial Engineering
  • 5. Higiro, Julius MANAGING PENDING EVENTS IN SEQUENTIAL & OPTIMISTIC PARALLEL DISCRETE EVENT SIMULATIONS

    Master of Science, Miami University, 2017, Computer Science and Software Engineering

    The choice of data structure for managing and processing pending events in timestamp priority order plays a critical role in achieving good performance in sequential and parallel Discrete Event Simulation (DES). Accordingly, we propose and evaluate the effectiveness of multi-tiered (2- and 3-tier) data structures, including our proposed 2-tier Ladder Queue, for both sequential and optimistic parallel simulations on distributed memory platforms. Our assessments use (a fine-tuned version of) the Ladder Queue, which has been shown to outperform many other data structures for DES. The experimental results, based on the PHOLD benchmark and the PCS simulation model, show that our 3-tier heap and 2-tier ladder queue outperform the Ladder Queue by 10% to 50%, particularly in simulations with higher concurrency per Logical Process (LP), in both sequential and Time Warp synchronized parallel simulations.

    Committee: Dhananjai Rao (Advisor); Matthew Stephan (Committee Member); Karen Davis (Committee Member) Subjects: Computer Science
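The bucketing idea behind the Ladder Queue discussed above can be sketched in a few lines. This is a deliberately simplified, single-threaded Python analog (the real Ladder Queue, and the thesis's 2-tier variant, spawn sub-buckets adaptively and must tolerate concurrent access; none of that is modeled here): events are hashed into fixed-width timestamp buckets so insertion is O(1), and a bucket is sorted lazily only when it is drained.

```python
class SimpleLadderQueue:
    """Structural sketch of a bucketed priority queue: O(1) insert into a
    timestamp bucket, lazy sort of only the bucket being drained.  Uses a
    linear scan for the lowest bucket, which a real implementation avoids."""
    def __init__(self, bucket_width):
        self.width = bucket_width
        self.buckets = {}                       # bucket index -> [(ts, ev), ...]

    def insert(self, ts, ev):
        self.buckets.setdefault(ts // self.width, []).append((ts, ev))

    def pop_min(self):
        if not self.buckets:
            return None
        idx = min(self.buckets)                 # lowest non-empty bucket
        bucket = self.buckets[idx]
        bucket.sort()                           # sort only when draining
        item = bucket.pop(0)
        if not bucket:
            del self.buckets[idx]
        return item
```

The payoff is that sorting cost is paid per bucket rather than per insertion, which is why ladder-style queues do well when many events share a narrow timestamp window.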
  • 6. Hay, Joshua Experiments with Hardware-based Transactional Memory in Parallel Simulation

    MS, University of Cincinnati, 2014, Engineering and Applied Science: Computer Engineering

    Transactional memory is a concurrency control mechanism that dynamically determines when threads may safely execute critical sections of code. It does so by tracking memory accesses performed within a transactional region, or critical section, and detecting when memory operations conflict with other threads. Transactional memory provides the performance of fine-grained locking mechanisms with the simplicity of coarse-grained locking mechanisms. Parallel Discrete Event Simulation (PDES) is a problem space that has been studied for many years, but still suffers from significant lock contention on SMP platforms. The pending event set is a crucial element of PDES, and its management is critical to simulation performance. This is especially true for optimistically synchronized PDES, such as those implementing the Time Warp protocol, where, rather than preventing causality errors, events are aggressively scheduled and executed until a causality error is detected. This thesis explores the use of transactional memory as an alternative to conventional synchronization mechanisms for managing the pending event set in a Time Warp synchronized parallel simulator. In particular, this thesis examines the use of Intel's hardware transactional memory, TSX, to manage shared access to the pending event set by the simulation threads. In conjunction with transactional memory, other solutions to contention are explored, such as the use of multiple queues to hold the pending event set and the dynamic binding of threads to these multiple queues. For each configuration, a comparison between conventional locking mechanisms and transactional memory access is performed to evaluate each within the WARPED parallel simulation kernel. In this testing, both forms of transactional memory (HLE and RTM) implemented in the Haswell architecture were evaluated. 
The results show that RTM generally outperforms conventional locking mechanisms and that HLE provides consistently better performance tha (open full item for complete abstract)

    Committee: Philip Wilsey Ph.D. (Committee Chair); Fred Beyette Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member) Subjects: Computer Engineering
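One contention-reduction strategy named in the abstract above — splitting the pending event set across multiple queues with threads bound to sub-queues — can be sketched independently of TSX. The sketch below uses ordinary per-queue locks in Python (Intel's HLE/RTM would elide or replace these locks in the thesis's C++ setting; this is a structural illustration, not a linearizable or production design):

```python
import heapq
import threading

class MultiQueueEventSet:
    """Pending event set split across several locked heaps.  Each thread
    inserts into its own bound sub-queue, so two threads rarely contend
    for the same lock; dequeue scans for the global minimum.  The peek
    and the final pop are separate lock acquisitions, so under true
    concurrency this is only approximately min-first -- a sketch."""
    def __init__(self, n_queues):
        self.queues = [[] for _ in range(n_queues)]
        self.locks = [threading.Lock() for _ in range(n_queues)]

    def insert(self, ts, ev, thread_id):
        i = thread_id % len(self.queues)      # static thread-to-queue binding
        with self.locks[i]:
            heapq.heappush(self.queues[i], (ts, ev))

    def pop_min(self):
        best, best_i = None, None
        for i, q in enumerate(self.queues):   # peek each sub-queue's minimum
            with self.locks[i]:
                if q and (best is None or q[0] < best):
                    best, best_i = q[0], i
        if best_i is None:
            return None
        with self.locks[best_i]:
            return heapq.heappop(self.queues[best_i])
```

The design trades strict global ordering for reduced lock contention, which is acceptable in Time Warp since causality violations are detected and rolled back anyway.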
  • 7. Gideon, John The Integration of LlamaOS for Fine-Grained Parallel Simulation

    MS, University of Cincinnati, 2013, Engineering and Applied Science: Computer Engineering

    LlamaOS is a custom operating system that provides much of the basic functionality needed for low latency applications. It is designed to run in a Xen-based virtual machine on a Beowulf cluster of multi/many-core processors. The software architecture of llamaOS is decomposed into two main components, namely the llamaNET driver and llamaApps. The llamaNET driver contains Ethernet drivers and manages all node-to-node communications between user application programs that are contained within a llamaApp instance. Typically, each node of the Beowulf cluster will run one instance of the llamaNET driver with one or more llamaApps bound to parallel applications. These capabilities provide a solid foundation for the deployment of MPI applications, as evidenced by our initial benchmarks and case studies. However, a message passing standard still needed to be either ported to or implemented in llamaOS. To minimize latency, llamaMPI was developed as a new implementation of the Message Passing Interface (MPI), which is compliant with the core MPI functionality. This provides a standardized and easy way to develop for this new system. Performance assessment of llamaMPI was achieved using both standard parallel computing benchmarks and a locally (but independently) developed program that executes parallel discrete event-driven simulations. In particular, the NAS Parallel Benchmarks are used to show the performance characteristics of llamaMPI. In the experiments, most of the NAS Parallel Benchmarks ran faster than, or on par with, their native performance. The benefit of llamaMPI was also shown with the fine-grained parallel application WARPED. The order of magnitude lower communication latency in llamaMPI greatly reduced the amount of time that the simulation spent in rollbacks. This resulted in an overall faster and more efficient computation, because less time was spent off the critical path due to causality errors.

    Committee: Philip Wilsey Ph.D. (Committee Chair); Fred Beyette Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member) Subjects: Computer Engineering
  • 8. King, Randall WARPED Redesigned: An API and Implementation for Discrete Event Simulation Analysis and Application Development

    MS, University of Cincinnati, 2011, Engineering and Applied Science: Computer Engineering

    In 1995, researchers at the University of Cincinnati released WARPED as a publicly available discrete event simulation kernel. The goal of the project was to provide a system for research and analysis of the Time Warp distributed simulation synchronization protocol. WARPED was to be efficient, maintainable, flexible, configurable, and portable. It was written in C++ and used the Message Passing Interface (MPI) standard to accommodate as many parallel platforms as possible. As the software implementation was expanded with additional capabilities and optimizations, several problems with the original design became apparent. The primary problem was that the configuration of various Time Warp optimizations could only be made at compile time. As simulations increased in size and complexity, this compile time became a significant burden. Another problem, related to the first, was that WARPED could not be used and distributed as a shared library due to the compile-time configuration requirement. This thesis discusses the design and implementation of the Time Warp mechanism in a new version of WARPED, now called the WARPED v2.x series (the initial series is now called the WARPED v1.x series). The primary goal of WARPED v2.x is to provide run-time configuration of the system. The goals of the previous version carry over to the new version. Several simulation models are also included in the initial release of the WARPED v2.0 distribution for use in analyzing the system. In this initial version of WARPED v2.x, the system includes sequential and parallel simulation kernels that can be configured at run time for use with any of the simulation models compliant with the WARPED API. 
The parallel simulation kernel uses the Time Warp distributed synchronization mechanism and includes several Time Warp optimizations, including: various cancellation strategies, fossil collection algorithms, GVT estimation algorithms, state saving algorithms, event list structures, scheduling algorithm (open full item for complete abstract)

    Committee: Philip Wilsey PhD (Committee Chair); Fred Beyette PhD (Committee Member); Carla Purdy PhD (Committee Member) Subjects: Computer Engineering
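The run-time configuration goal described above is commonly achieved with a registry of named factories, selected when the simulation starts rather than at compile time. A minimal Python sketch of the pattern (the `KernelConfig` class, the `"gvt"` kind, and the estimator names below are all illustrative, not WARPED's actual configuration keys or API):

```python
class KernelConfig:
    """Registry of interchangeable kernel components.  Implementations
    register under (kind, name) and are constructed at run time, so a
    single binary or shared library can offer every optimization."""
    _registry = {}

    @classmethod
    def register(cls, kind, name, factory):
        cls._registry[(kind, name)] = factory

    @classmethod
    def build(cls, kind, name, **kwargs):
        if (kind, name) not in cls._registry:
            raise ValueError(f"no {kind} implementation named {name!r}")
        return cls._registry[(kind, name)](**kwargs)

# Register two interchangeable (hypothetical) GVT estimators at import time.
KernelConfig.register("gvt", "mattern", lambda: "Mattern-style GVT estimator")
KernelConfig.register("gvt", "pgvt", lambda: "pGVT estimator")
```

Because selection happens through the registry rather than `#ifdef`-style conditionals, the kernel can ship as a shared library and be reconfigured per run.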
  • 9. Zoll, Brian Evaluating the E-consult Process for Diabetes Care Delivery at an Outpatient Care Clinic

    Master of Science in Engineering (MSEgr), Wright State University, 2013, Industrial and Human Factors Engineering

    The occurrence of one of the most common chronic conditions in the U.S., diabetes, is expected to rise 53% from 24 million cases in 2003 to 37 million cases in 2023. The U.S. Veterans Health Administration (VHA) is not immune to this. The VHA has experienced an $820 million increase in spending on diabetes patients between 2000 and 2008. The VHA has tried to keep the growth of chronic care costs in check through improvements in patient access to care by expanding its network of community-based outpatient clinics. Other methods the VHA has used to curb chronic care spending are electronic health records (EHR), patient aligned care teams (PACT), telehealth, and e-consults. An e-consult is defined as an electronic communication between primary care physicians and specialists about general or patient-specific questions that may preclude the need for an in-person referral. The objective of this study was to evaluate the effects of increased e-consult demand on time-based outcomes, quantify the sensitivity of these outcomes to walk-in patient arrival rates, electronic view-alert notifications, and primary care physician (PCP) unavailability, and provide recommendations to alleviate the detrimental effects of factors that are determined to have a significant effect on these outcomes. We collected data from 5 different VHA outpatient clinics, which were used in a discrete event simulation (DES) model of a typical VA outpatient clinic. Factors analyzed in the model were e-consult demand, view-alert notification arrivals, walk-in patient arrivals, and PCP unavailability. After the model was validated with real data, a detailed experimental study was conducted to determine factors that have a significant effect on e-consult time-based outcomes, such as cycle time. A total of 495 experiments were run and statistical analysis of the results indicated that all four factors had a significant effect on e-consult cycle time (p<0.05). 
Results also showed that (open full item for complete abstract)

    Committee: Pratik Parikh Ph.D. (Advisor); Jennie Gallimore Ph.D. (Committee Member); Xinhui Zhang Ph.D. (Committee Member) Subjects: Health Care Management; Industrial Engineering
  • 10. Muthalagu, Karthikeyan Threaded WARPED : An Optimistic Parallel Discrete Event Simulator for Cluster of Multi-Core Machines

    MS, University of Cincinnati, 2012, Engineering and Applied Science: Computer Engineering

    Parallel Discrete Event Simulation (PDES) has been an active area of research for many years. Studies with PDES have focused primarily on either shared memory or distributed memory platforms. However, the emergence of low-cost multi-core and many-core processors suitable for use in Beowulf clusters provides an opportunity for PDES execution on a platform containing both shared memory and distributed memory parallelism. This thesis explores the migration of an existing PDES simulation kernel called WARPED to a Beowulf cluster of many-core processors. More precisely, WARPED is an optimistically synchronized PDES simulation kernel that implements the Time Warp paradigm. It was originally designed for efficient execution on single-core Beowulf clusters. The work of this thesis extends the WARPED kernel to support parallel threaded execution on each node as well as parallelism between the nodes of the cluster. The new version of WARPED is called threaded WARPED. In this thesis, WARPED is redesigned with thread-safe data structures protected by various constructs. In particular, atomic instructions are used to deploy lock-free data structures and synchronization. With the addition of threading to WARPED, the work also required adjustments and extensions to several of the sub-algorithms of Time Warp; in particular, adjustments to the algorithms for computing Global Virtual Time (GVT) and for termination detection were required. This thesis explains the modifications made to implement threaded WARPED and evaluates the performance of the two solutions for managing the shared data structures.

    Committee: Philip Wilsey PhD (Committee Chair); Fred Beyette PhD (Committee Member); Wen Ben Jone PhD (Committee Member) Subjects: Computer Engineering
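The GVT computation mentioned above reduces to a simple invariant: GVT is a lower bound on every thread's local virtual time and on the timestamp of every event still in transit, so no state below it can ever be rolled back. The function below shows only that arithmetic (the hard part the thesis addresses — capturing this minimum consistently while threads keep running — is not modeled here):

```python
def estimate_gvt(local_virtual_times, in_transit_timestamps):
    """Lower-bound GVT: the minimum over all threads' local virtual
    times and all timestamps of events in transit between threads or
    nodes.  State older than this bound is safe to fossil-collect."""
    candidates = list(local_virtual_times) + list(in_transit_timestamps)
    return min(candidates) if candidates else float("inf")
```

An in-transit event with a small timestamp can drag GVT well below every running thread's clock, which is why transient messages must be accounted for.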
  • 11. Rea, David Surviving the Surge: Real-time Analytics in the Emergency Department

    PhD, University of Cincinnati, 2021, Business: Business Administration

    This dissertation is motivated by the problem of crowding in the emergency department. A near-universal problem, crowding has been linked to negative outcomes for both patients and providers. A primary cause of crowding is the inherent stochasticity of patient arrivals. Stochasticity, while operationally problematic, is difficult to control in an emergency department where all patients seeking care must be seen. As it cannot be eliminated, accounting for stochasticity is critical to mitigating crowding in the emergency department. Because both crowding and its consequences occur in real time, any analytical model designed to support operational decisions must also provide insights in real time. A review of the literature reveals that, while many arrival forecasting models have been proposed, few have been assessed for their ability to support real-time decision-making during demand surges. This dissertation studies the design of such models with an eye towards operational support, such as the activation of backup staff when beneficial. Using a unique set of data --- made up of approximately 875,000 patient encounters from four hospitals across two health systems --- valuable insights as to the importance of distributional assumptions when forecasting during demand surges are identified. Namely, when quantifying the risk of a potential crowding event, discrete distributional forecasts (e.g., those with Poisson and Negative Binomial predictive distributions) will outperform typical Gaussian-based models. In addition, it is shown that proactive activation of backup staff, based on an appropriately constructed model, can lead to decreased patient waiting times compared to typical current practice. Importantly, this benefit to patients comes at a cost to schedule stability for providers. Intelligent management of this tradeoff presents opportunities for both improvements to practice and future research.

    Committee: Craig Froehle Ph.D. (Committee Chair); Jeffrey Mills Ph.D. (Committee Member); Yichen Qin (Committee Member); Uday Rao Ph.D. (Committee Member) Subjects: Health Care
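The dissertation's distributional point — that discrete predictive distributions beat Gaussian ones when quantifying surge risk — can be illustrated with tail probabilities. The sketch below compares P(arrivals ≥ k) under a Poisson model against a Normal approximation with continuity correction; it illustrates the comparison only and is not the dissertation's actual forecasting model:

```python
import math

def poisson_tail(lam, k):
    """P(X >= k) for X ~ Poisson(lam): the probability that arrivals in
    an interval reach a crowding threshold k, under a discrete model."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

def normal_tail(mu, sigma, k):
    """Gaussian analog of the same tail, with a continuity correction.
    For low counts the Normal also assigns mass to negative arrivals,
    one reason discrete models quantify surge risk more faithfully."""
    z = (k - 0.5 - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))
```

Near the mean the two models agree closely, but their disagreement grows in the tails, exactly where a backup-staff activation decision is made.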
  • 12. Liu, Enhao Innovative Simulation and Tree Models and Reinforcement Learning Methods with Applications in Cybersecurity

    Doctor of Philosophy, The Ohio State University, 2021, Industrial and Systems Engineering

    This research explores reinforcement learning methods, machine learning methods, and discrete event simulation models with applications in the field of cybersecurity. In cybersecurity, virtually all types of devices that contain computers have so-called “cyber vulnerabilities” which offer ways for attackers to gain access or at least limit performance. A race then follows between hackers finding and applying “exploits” and vendors offering patches, which must be discovered to be needed by scans and then implemented by end users. If the hackers win, they cause losses. In this dissertation, we propose a discrete event simulation model in which the mechanism of vulnerabilities and hosts is studied. The concept of a nested “birth and death” process is introduced in the context of vulnerability lifetimes and their interaction with a host. We also investigate the benefits and drawbacks of the current scanning and maintenance policies with a case study of a major university, propose cost-effective alternatives, and investigate the significance of celebrity vulnerabilities. Next, we explore optimal control policies to schedule cyber maintenance actions in a partially observable environment caused by incomplete inspections. Incomplete inspection, resulting mainly from computers being turned off during the scan, makes scheduling maintenance actions challenging. We propose the application of Partially Observable Markov Decision Processes (POMDPs) to derive cost-effective cyber-maintenance actions that minimize total costs. To assess the benefits of optimal policies obtained from POMDPs, we use real-world data from a major university. Compared with alternative policies in simulations, the optimal control policies can significantly (2x to 10x) reduce expected maintenance expenditures per host and relatively quickly mitigate the most important vulnerabilities. 
Further, we investigate the main disadvantages of the widely used Common Vulnerability Scoring S (open full item for complete abstract)

    Committee: Theodore Allen (Advisor); Cathy Xia (Committee Member); Guzin Bayraksan (Committee Member) Subjects: Industrial Engineering
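At the core of any POMDP controller like the one described above is the Bayes-filter belief update: after taking an action and seeing an observation (here, a possibly incomplete scan result), the probability of each hidden state is re-weighted by the transition and observation models and normalized. A generic sketch (the state spaces, models, and costs in the dissertation are domain-specific; everything below is illustrative):

```python
def belief_update(belief, action, observation, T, O):
    """One POMDP belief update.  belief[s] is the prior probability of
    hidden state s; T[a][s][s2] is the transition probability, and
    O[a][s2][o] the probability of observing o in the new state s2.
    Returns the normalized posterior belief over states."""
    n = len(belief)
    new_b = [O[action][s2][observation] *
             sum(T[action][s][s2] * belief[s] for s in range(n))
             for s2 in range(n)]
    total = sum(new_b)
    if total == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return [b / total for b in new_b]
```

A policy then maps the belief vector (not the unobservable true state) to a maintenance action, which is what makes the framework suitable for incomplete inspections.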
  • 13. Gupta, Sounak Pending Event Set Management in Parallel Discrete Event Simulation

    PhD, University of Cincinnati, 2018, Engineering and Applied Science: Computer Science and Engineering

    In Parallel Discrete Event Simulation (PDES), the pending event set refers to the set of events available for execution. These pending events are aggressively scheduled for execution in a Time Warp synchronized parallel simulation without strict enforcement of the causal relationship between events. For most discrete event simulation models, event processing granularity is generally quite small. On many-core and multi-core platforms, this small granularity aggravates contention for the shared data structures which store these pending events. As the number of cores increases, a key challenge lies in providing effective, contention-free event list management for scheduling events. Lock contention, sorting, and scheduling order are the prime contributors to contention for access to the pending event set. Possible solutions to this problem include atomic read-write operations, hardware transactional memory, or synchronization-friendly data structures. The focus is on choosing efficient data structures for the pending event set and optimizing scheduling techniques to improve the performance of the Time Warp synchronized parallel simulation. The following design concepts for optimizing the pending event set are explored in this dissertation: 1. an exploration of a variety of data structures that are commonly used in the management of the pending event set, along with management of the pending event set using a Ladder Queue, a self-adjusting hierarchically partitioned priority queue that makes it particularly attractive for this purpose; 2. the elimination of sorting within the Ladder Queue partitions, so that events are scheduled from the lowest partition without concern for their time order, assuming causal independence of these events; 3. atomic read-write access to the Ladder Queue partition that holds the smallest available events; 4. 
    Objec (open full item for complete abstract)

    Committee: Philip Wilsey Ph.D. (Committee Chair); Nael Abu-Ghazaleh Ph.D. (Committee Member); Fred Beyette Ph.D. (Committee Member); Ali Minai Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member) Subjects: Computer Engineering
  • 14. Sami, Mohammed Abdul Amalgamation-Segregation Analysis of Complex Integrated Multi-facility Perioperative System through Discrete Event Simulation using OptQuest & KN Method.

    Master of Science in Engineering, University of Akron, 0, Mechanical Engineering

    Healthcare has long been among the most expensive fields worldwide, yet it has not reached an ideal state of consistently delivering high-quality patient care. The U.S. in particular spends one of the largest shares of its income on health care and still does not rank among the top performers in quality of care. In such an expensive and complex field, it is very important for healthcare organizations to devise and implement lean methods for continuous process improvement that act as a catalyst for a continuous performance improvement environment. A major focus of every lean organization is to reduce waste in its processes. Wastes in healthcare include patient waiting time, excess or insufficient resources, unnecessary transportation or movement of patients, and rework that increases a patient's length of stay in the hospital. When a healthcare organization adopts a lean method, the major constraints faced by hospitals are high variability in patient arrival times, availability of resources, availability of staff, and patients of different case types. In a multi-facility integrated clinic, patient types and their variability in arrival time cannot be controlled; hence one needs an ideal-state process flow that holds irrespective of patient variability and interdependent departments/units. The current research is on improving the process and quality performance of a complex integrated multi-facility (CIMS) perioperative unit for a community hospital in Willoughby, Ohio. The perioperative area, besides surgical procedures, is an integrated unit with several other specialized care units, and the patients being treated in the perioperative area are interdependent on various other units as well. 
    In brief, the perioperative area could be a combination of a Pre Operation unit, Pos (open full item for complete abstract)

    Committee: Shengyong Wang Dr. (Advisor); Chen Ling Dr. (Committee Member); Sergio Felicelli Dr. (Committee Member) Subjects: Industrial Engineering; Mechanical Engineering
  • 15. Sadeghi, Azadeh Global Supply Chain Inventory Management and Production Planning Strategies

    Master of Science (MS), Ohio University, 2017, Industrial and Systems Engineering (Engineering and Technology)

    In this research, inventory management and production planning are applied to a global supply chain system. The data set comes from a pharmaceutical company that produces blood sugar strips. There is a manufacturing plant located in Puerto Rico with its own warehouse. Based on customer demand, distribution centers and regional retailer centers are established on the continents and in several countries, respectively. In order to choose the best locations for the distribution centers, a facility-planning model is used. When inventory levels in the regional retailer centers drop to reorder points, items are ordered from a continental distribution center. The main distribution centers order items from the manufacturing plant when their reorder points are violated. It is important to note that manufacturing plants are assumed to have limited capacity. In this problem, transportation lead times and demand quantities are defined as stochastic variables following normal distributions. The objective is to cover demand at minimum total cost. In the first step, a mathematical model is applied to choose the distribution centers. Then, a continuous inventory policy is considered for retailers, distribution centers, and the Puerto Rico warehouse. In the final step, a simulation model is used to evaluate the entire approach.

    Committee: Gursel Suer (Advisor) Subjects: Industrial Engineering
  • 16. Ward, Michael Delivering Responsive Care in the Emergency Department: Targeting the Population Versus a Disease-Specific Approach

    PhD, University of Cincinnati, 2016, Business: Business Administration

    The value delivered by the U.S. healthcare system to American patients continues to diminish and lags behind other developed nations. Nowhere is this challenge more evident than in access to healthcare services for acute, unscheduled visits as seen in the emergency department (ED), where waiting times to see a clinical provider average nearly an hour. Existing disease-based approaches to process improvement prioritize the performance of the ED for a very limited number of patients with specific conditions. Due to the preference for clinical outcomes (e.g., mortality) rather than operational outcomes (e.g., length of stay), overall system performance may be compromised in order to meet the needs of a limited subset of diseases (e.g., stroke or acute myocardial infarction). Alternatively, population-based approaches take a more comprehensive view, treating the healthcare system as a series of interconnected parts rather than individual silos. The research in the following three essays addresses both population-level and disease-based process improvement. In the first two essays, we examine a population-level approach that uses a dynamic capacity allocation policy we call “Relook.” Relook assigns resources (nurses) to patients depending upon need throughout the ED visit. Using discrete event simulation, we examined how waiting time (arrival-to-bed placement interval) performance in an example ED using a Relook policy compares to a static policy. We found that a Relook policy reduces waiting times and costs and is reasonably robust to the time between patient reevaluations. These benefits were achieved by re-allocating capacity to lower-intensity resources (i.e., nurses who can care for more patients simultaneously) periodically throughout the patient's ED visit. In the third essay, we examine a targeted approach to treating patients who present to the ED with a condition called heart failure. 
Heart failure is a common condition for which pati (open full item for complete abstract)
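    The periodic-reassessment idea behind a policy like Relook can be sketched as a discrete event simulation skeleton. This is a simplified illustration, not the thesis's model: the function name, event names, and all parameters are assumptions, and "relook" is modeled crudely as handing one stable in-service patient to a lower-intensity resource at each reassessment.

```python
import heapq
import random

def relook_des(horizon=480.0, arrival_rate=0.2, service_mean=60.0,
               relook_interval=30.0, nurses=3, seed=7):
    """Minimal DES with a periodic 'relook' reassessment event.
    Pending events sit in a min-heap keyed by simulated time (minutes):
    Poisson arrivals, exponential service, fixed-interval relooks."""
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), 'arrival'),
              (relook_interval, 'relook')]
    heapq.heapify(events)
    waiting = in_service = served = 0
    t = 0.0
    while events and t < horizon:
        t, kind = heapq.heappop(events)
        if kind == 'arrival':
            waiting += 1
            heapq.heappush(events,
                           (t + rng.expovariate(arrival_rate), 'arrival'))
        elif kind == 'departure':
            in_service -= 1
            served += 1
        else:  # 'relook': move one stable patient to a lower-intensity
               # resource, freeing high-intensity nurse capacity
            if in_service > 0:
                in_service -= 1
                served += 1
            heapq.heappush(events, (t + relook_interval, 'relook'))
        # Begin service whenever a nurse is free and a patient waits.
        while waiting and in_service < nurses:
            waiting -= 1
            in_service += 1
            heapq.heappush(events,
                           (t + rng.expovariate(1.0 / service_mean),
                            'departure'))
    return served
```

    Comparing this against the same loop with the 'relook' branch removed would mimic, in miniature, the Relook-versus-static comparison the abstract describes.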

    Committee: Craig Froehle Ph.D. (Committee Chair); Gary D. Scudder Ph.D. (Committee Member); Mark Eckman M.D. (Committee Member); W. David Kelton Ph.D. (Committee Member); Michael Magazine Ph.D. (Committee Member) Subjects: Operations Research
  • 17. Mitra, Rupendra Nath E-SCALE: Energy Efficient Scalable Sensor Coverage with Cell-phone App Using LTE

    MS, University of Cincinnati, 2015, Engineering and Applied Science: Computer Science

    E-SCALE, Energy-efficient Scalable sensor Coverage with cell-phone Apps using LTE, is a newly adopted approach to providing large-scale sensor coverage using cell phones and cellular network infrastructure. The focus of this thesis is to propose the idea of E-SCALE and extend it through simulations to determine how E-SCALE and its traffic can impact the key performance indices of the backbone network, i.e., LTE in this case. E-SCALE addresses the problem of providing occasional sensor coverage in a public area that occasionally hosts a large number of people. Because investing in permanent sensor-network surveillance infrastructure in such places involves a fairly large amount of money, E-SCALE is conceived to provide on-the-fly safety surveillance for occasional crowds. In our simulation we take LTE as the backbone infrastructure essential for E-SCALE. Our simulation results show that a large area occupied by a crowd can be placed under environmental and safety surveillance monitoring simply by using the cell-phone sensors of the people gathered in the area. E-SCALE also conserves the battery power of the audience's cell phones, making it an energy-efficient technique. Smartphones are mostly equipped with high-end sensor systems such as temperature sensors and toxic-gas sensors, with many more expected in the near future. Hence, without any explicit sensor deployment for, say, fire safety, air pollution, or toxic-gas detection, a large area with a gathering of people can be monitored continually, effectively, and in an energy-efficient way. This thesis reports detailed simulation results on how this sensor data traffic could affect LTE network performance. 
Another effort is made to reduce the battery power consumption of the user equipment, so that guests can freely use applications other than E-SCALE during gatherings and the user equipment does not suffer from rapid battery drain. (open full item for complete abstract)

    Committee: Dharma Agrawal D.Sc. (Committee Chair); Raj Bhatnagar Ph.D. (Committee Member); Chia Han Ph.D. (Committee Member) Subjects: Computer Science
  • 18. Desai, Vibha Comparative Effectiveness of Tacrolimus-Based Steroid Sparing versus Steroid Withdrawal Regimens in Patients with Kidney Transplantation: Results from Discrete Event Simulation Modeling

    PhD, University of Cincinnati, 2014, Pharmacy: Pharmaceutical Sciences/Biopharmaceutics

    Background: Acute rejection (AR) and graft loss (GL), which occur as complications following kidney transplantation (KT), are a major cause of concern in patients with KT. Corticosteroids, used as potent immunosuppressants to prevent AR and GL, are associated with potentially serious side effects such as the development or progression of cardiovascular disease (CVD), new-onset diabetes (NODM), infections, and malignancies. Deaths with a functioning graft account for 40% of deaths following KT, and CVD is a major cause of these deaths. Trials of tacrolimus-based regimens have found no significant difference in AR or GL rates and a significant reduction in total cholesterol with steroid withdrawal regimens compared to steroid maintenance regimens. However, the majority of these trials were short-term, with durations of <=5 years, and included low-immunologic-risk patients. The long-term effects of these regimens on GL and cardiovascular events such as stroke, myocardial infarction (MI), and deaths due to CVD (D-Cardio) are not known. Objectives: The objective of the study was to determine the optimal steroid withdrawal strategy that minimizes the incidence of both graft loss and cardiovascular events, among five strategies: 1) steroid avoidance, 2) 7-day steroid withdrawal, 3) 6-month steroid withdrawal, 4) 12-month steroid withdrawal, and 5) steroid maintenance, using a discrete event simulation model. Methods: A discrete event simulation model was developed that included the following events: AR, GL, MI, stroke, other CVD, NODM, cancer, bacterial infection (BI), cytomegalovirus infection, fracture, D-Cardio, death due to GL, and death due to other reasons. The United States Renal Data System registry, which follows patients with transplantation, was used to derive patients' risk estimates for the above events using parametric regressions adjusting for demographic characteristics, immunologic risks, and comorbidities. 
The estimates were then used to obtain (open full item for complete abstract)
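    A patient-level simulation of this kind typically advances each patient to whichever competing event occurs first. A minimal sketch of that competing-risks step is shown below; the exponential draws and the hazard-rate values are purely illustrative assumptions (the thesis fits parametric regressions to USRDS data, and these numbers are not estimates from that data).

```python
import random

def next_event(rng, rates):
    """Competing-risks step of a patient-level discrete event
    simulation: draw a time-to-event for each risk from its fitted
    distribution (exponential here for simplicity) and advance the
    patient to whichever event occurs first."""
    draws = {event: rng.expovariate(rate) for event, rate in rates.items()}
    first = min(draws, key=draws.get)
    return first, draws[first]

rng = random.Random(42)
# Illustrative annual hazard rates only, not USRDS estimates.
rates = {'acute_rejection': 0.10, 'graft_loss': 0.04,
         'MI': 0.02, 'stroke': 0.015, 'other_death': 0.01}
event, time_to_event = next_event(rng, rates)
```

    Repeating this step, updating the patient's state and hazards after each event, and averaging over many simulated patients per strategy yields the incidence comparisons the abstract describes.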

    Committee: Pamela Heaton Ph.D. (Committee Chair); Jens Goebel M.D. (Committee Member); J. Jaime Caro M.D.C.M. F.R.C.P.C. F.A (Committee Member); Teresa Cavanaugh Pharm.D. (Committee Member); Christina Kelton Ph.D. (Committee Member) Subjects: Pharmaceuticals
  • 19. Carver, Eric Reducing Network Latency for Low-cost Beowulf Clusters

    MS, University of Cincinnati, 2014, Engineering and Applied Science: Computer Engineering

    Parallel Discrete Event Simulation (PDES) is a fine-grained parallel application that can be difficult to optimize on distributed Beowulf clusters. A significant challenge on these compute platforms is the relatively high network latency compared to the high CPU performance of each node. Frequent communication combined with high network latency means that event information communicated between nodes can arrive after a significant delay, during which the processing node is either waiting for the event to arrive (conservatively synchronized solutions) or prematurely processing events while the transmitted event is in transit (optimistically synchronized solutions). Thus, solutions that reduce network latency are crucial to the deployment of PDES. Conventional attacks on network latency in cluster environments use high-priced hardware such as Infiniband and/or lightweight messaging layers other than TCP/IP. However, clusters are generally high-cost systems (tens to hundreds of thousands of dollars) that, by necessity, must be shared. The use of lower-latency hardware such as Infiniband can nearly double the hardware cost, and replacing the TCP/IP network stack on a shared platform is generally infeasible, as other users of the shared platform (with coarse-grained parallel computations) are well served by the TCP/IP stack and unwilling to rewrite their applications against the APIs of alternate network stacks. Furthermore, configuring the hardware with multiple messaging transport layers is quite difficult to set up and not generally supported. Low-cost, small-form-factor compute nodes with multi-core processing chips are becoming widely available. These solutions have lower-performing compute nodes, yet often still support 100Mb/1Gb Ethernet hardware (reducing the network latency/processor performance disparity). 
The much lower per node costs (on the order of $200 per node) can enable the deployment of non-shared, dedicated cluste (open full item for complete abstract)

    Committee: Philip Wilsey Ph.D. (Committee Chair); Wen Ben Jone Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member) Subjects: Computer Engineering
  • 20. Alghwiri, Alaa Parking System Analysis Using Discrete Event Simulation

    Master of Science, University of Akron, 2014, Mechanical Engineering

    In a university setting, parking service plays a critical role in supporting education. It is important to provide and manage university parking services that balance the resources of the university, market conditions, the safety of all members of the campus community, and the convenience of the users. The complexity of the parking facility distribution, the dynamic nature of the traffic flows, and the randomness of the parking access times are indicative of both the challenge of this work and the urgency with which it is needed. Previous literature in parking system analytics has mainly focused on using parking-capacity-to-user ratios to allocate parking resources. However, this approach offers little operational detail for dealing with and managing regional parking congestion, which is the major concern in both the design and the continuous improvement of a parking system. This research therefore proposes a scientific, data-driven approach to systematically model, validate, and simulate alternative parking strategies for optimal system redesign. Specifically, a discrete event simulation model is used to capture parking system dynamics in granular detail, enabling deeper understanding and subsequent continuous improvement of the parking system.
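    A parking-facility DES of the kind described can be reduced to a very small core: stochastic arrivals, stochastic stay durations, and a capacity constraint. The sketch below is an illustrative single-lot model under assumed Poisson arrivals and exponential stays; the function name and all parameter values are assumptions, not details from the thesis.

```python
import heapq
import random

def parking_sim(capacity=200, hours=12.0, arrival_rate=30.0,
                mean_stay=2.0, seed=3):
    """Minimal DES of one parking facility: Poisson arrivals
    (arrival_rate per hour), exponential parking durations
    (mean_stay hours), and cars turned away when the lot is full."""
    rng = random.Random(seed)
    departures = []              # min-heap of departure times of parked cars
    occupied = parked = balked = 0
    t = rng.expovariate(arrival_rate)   # time of first arrival (hours)
    while t < hours:
        # Release every car that left before this arrival.
        while departures and departures[0] <= t:
            heapq.heappop(departures)
            occupied -= 1
        if occupied < capacity:
            occupied += 1
            parked += 1
            heapq.heappush(departures, t + rng.expovariate(1.0 / mean_stay))
        else:
            balked += 1          # arriving car finds no space
        t += rng.expovariate(arrival_rate)
    return parked, balked
```

    Extending this to multiple lots with travel times between them, and feeding it observed arrival and duration data, moves the sketch toward the data-driven, system-level model the abstract proposes.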

    Committee: Shengyong Wang Dr. (Advisor); Jared Coleman Mr. (Committee Member); Sergio Felicelli Dr. (Committee Member) Subjects: Mechanical Engineering