Search Results


(Total results 19)

Mini-Tools

 
 

Search Report

  • 1. Triukose, Sipat A Peer-to-Peer Internet Measurement Platform and Its Applications in Content Delivery Networks

    Doctor of Philosophy, Case Western Reserve University, 2014, EECS - Computer and Information Sciences

    Network measurement is crucial for ensuring the Internet's effective operation, security, and continued development. However, collecting representative measurements in a complex infrastructure like the Internet is extremely challenging. To address this challenge, we propose a novel approach to providing focused, on-demand Internet measurements called DipZoom (for Deep Internet Performance Zoom). Unlike prior approaches, which face difficulty in building a measurement platform with sufficiently diverse measurements and measuring hosts, DipZoom implements a matchmaking service that uses P2P concepts to bring together experimenters in need of measurements and external measurement providers. Further, to demonstrate the utility of DipZoom as a tool for real-world research, we use it to answer some challenging questions regarding Internet operation. Specifically, we use DipZoom to conduct an extensive study of content delivery networks (CDNs), which are among the key components of today's Internet infrastructure, in their performance, security, and improvement aspects. First, we conduct a large-scale performance study of the CDN platform operated by the leading DNS service provider. The study's results show that the number of worldwide data centers in the CDN platform could be significantly reduced without affecting content delivery performance. Therefore, system designers can decide on the number of data centers to meet their other objectives without having to worry about performance degradation. Second, we use measurement techniques developed for this performance study to uncover a significant security vulnerability in CDNs. We show that several CDNs, including commercial ones, not only leave their customers vulnerable to application-level denial-of-service attacks but are themselves susceptible to being recruited to amplify such attacks.
Finally, based on insights gained in our CDN studies, we propose an approach to improve the cont (open full item for complete abstract)

    Committee: Michael Rabinovich (Advisor); Tekin Ozsoyoglu (Committee Member); Shudong Jin (Committee Member); Vira Chankong (Committee Member); Mark Allman (Committee Member) Subjects: Computer Science
  • 2. Roy, Madhumitaa Quantifying the Impact of Meteorology on PM2.5 in Cincinnati, OH Using Advanced Statistical Models.

    PhD, University of Cincinnati, 2023, Engineering and Applied Science: Environmental Engineering

    Particulate matter (PM2.5) is associated with negative health outcomes and increased mortality. Quantifying its major influencing factors is essential to air quality management. Although anthropogenic sources are an important driver of PM2.5 concentrations, meteorological conditions can exert a strong influence on long-term PM2.5 variability. Quantifying interactions between meteorological conditions and PM2.5 is essential to understand the variability of PM2.5 and develop forecasting methods. Statistical models do not account for atmospheric processes, but they are very useful for quantifying the pollutant sensitivities of individual meteorological parameters. Chapter 1: Introduction focuses on the motivation, overall introduction, and background of PM2.5, establishing a link between meteorology and PM2.5 and the significance of air pollution forecasting using statistical methods. In Chapter 2: Clustering and Regression-Based Analysis of PM2.5 Sensitivity to Meteorology in Cincinnati, Ohio, principal components analysis (PCA) is used to determine the most important meteorological parameters for use in multivariate regression, which is then used to quantify PM2.5 sensitivities to the local meteorology in Cincinnati. Sensitivities were calculated for the entire dataset as well as by season and by k-means cluster. Our results show that clustering could potentially be a useful way to bin PM2.5 data based on meteorology. The distribution of PM2.5 concentrations in each cluster and season was evaluated using the Kolmogorov–Smirnov test, with data fitting using the lognormal and gamma distributions. Chapter 3: Impact of Regularization Techniques on ANN-based Analysis of PM2.5 Sensitivities to Meteorology builds forecast models to predict PM2.5 in Cincinnati, Ohio. Four different analyses were designed to forecast PM2.5, namely Base Analysis (BA), Auto-correlation Analysis (AA), Cluster-based PM2.5 forecast (CF), and Season-based PM2.5 forecast (SF). 
Three differen (open full item for complete abstract)
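The k-means binning step described above can be illustrated with a short pure-Python sketch; the two-dimensional toy "meteorology" vectors (temperature, wind speed) and all names here are invented for illustration and are not data or code from the dissertation.

```python
def kmeans(points, k, iters=100):
    """Plain k-means on tuples: assign each point to its nearest centroid,
    recompute centroids as cluster means, repeat until stable."""
    centroids = points[:k]  # deterministic initialization for reproducibility
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        new = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# Toy observations: (temperature degC, wind speed m/s) -- a warm/calm group
# and a cold/windy group that the clustering should separate.
obs = [(30.0, 2.0), (31.0, 2.5), (29.5, 1.8), (5.0, 8.0), (4.0, 7.5), (6.0, 9.0)]
centroids, clusters = kmeans(obs, k=2)
```

Once observations are binned this way, per-cluster distributions (lognormal, gamma) can be fitted and compared, as the abstract describes.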

    Committee: Simone Balachandran Ph.D. (Committee Chair); Patrick Ray Ph.D. (Committee Member); Mingming Lu Ph.D. (Committee Member); Richard Brokamp Ph.D. (Committee Member) Subjects: Environmental Engineering
  • 3. Failor, Andrew Creating an Urban Farming Network: A Community of Growth

    MARCH, University of Cincinnati, 2023, Design, Architecture, Art and Planning: Architecture

    The current state of farming is neither sustainable nor economical. The economic pressure to produce a surplus of goods stems from the 1970s mantra to “get big or get out”. This strenuous effort to constantly produce “more” has created a monoculture of farming that relies heavily on polluting machinery and forces plant growth through a stream of chemicals and genetic modifications. Industrial, or mega-farming, movements have made it nearly impossible for small farms to compete, as the current economic model of farming operates on surplus, something a family farm often cannot do. The environmental implications of this exhaustive farming method result in quick degradation of the soil necessary for growing, and farmers are running out of new places to grow. With a growing population, it can be understood that farming of this scale must be maintained to feed such high numbers, although the industrial farms of the United States are subsidized in a way that promotes the growing of corn, soybean, wheat, and other heavily processed foods. Alternative farming methods should then be considered to allow for a diversification of crops, a more sustainable relationship with natural cycles, and widespread access to affordable, quality produce. New methods of growing that are gaining popularity include soilless indoor growing, such as hydroponic and aeroponic systems. These indoor systems will supplement outdoor seasonal growing, which will also be optimized through pixel cropping. These methods of farming achieve great success at the personal level and require little agricultural knowledge, thus paving the way for families to again become farmers. With the size of farm fields decreasing and productivity increasing, the farm can move to denser areas, creating a new urban condition that can be explored architecturally. 
By rejecting the normative approach to modern farming and accepting a localized and collaborative approach to agriculture, the city can function as (open full item for complete abstract)

    Committee: Michael McInturf M.Arch. (Committee Chair); Elizabeth Riorden M.Arch. (Committee Member) Subjects: Architecture
  • 4. Goulder, Michael The Effect of Supply Chain Strategies on Direct-to-Consumer Industry Evolution: A Mixed-Methods Study

    Doctor of Management, Case Western Reserve University, 2022, Weatherhead School of Management

    Direct-to-consumer (D2C) entrepreneurial activity is disrupting both traditional retailers and manufacturers by directly linking consumers with products and services, disintermediating elements of the traditional value chain. We conducted a mixed-methods, three-phase study to better understand how supply chain strategy choices influence D2C firms' competitive advantage. In phase one, we conducted in-depth interviews with leaders of small to medium-sized D2C businesses. One key finding: only a subset of D2C companies are willing to invest in differentiated supply chain capabilities; many viewed supply chain as a utility function. This informed phase two, a quantitative analysis (scenario-based experiment) of the degree to which logistics quality drives customer satisfaction, which found a strong positive effect. This motivated phase three, a quantitative modeling exercise determining the relationship between distribution network size (number of DCs) and proximity to customers, an indicator of lead-time performance. This analysis suggests that most D2C firms, with just one or two DCs, are significantly disadvantaged, with deleterious consequences for customer service.

    Committee: Kalle Lyytinen Ph.D. (Advisor); Manoj Malhotra Ph.D. (Advisor); Yunmei Wang Ph.D. (Advisor) Subjects: Entrepreneurship; Operations Research
  • 5. Thaibah, Hilal Managing a Hybrid Oral Medication Distribution System in a Pediatric Hospital: A Machine Learning Approach

    PhD, University of Cincinnati, 2021, Pharmacy: Pharmaceutical Sciences

    Background: The efficient and safe delivery of medications represents a challenge, particularly within in-patient hospital pharmacies. Different medication distribution systems have evolved and been deployed to meet this challenge; centralized, decentralized, and hybrid systems comprise the main medication distribution systems. Moreover, the optimization of these systems is an ongoing process, and there is a need for innovative management tools that align with them. Objectives: This dissertation aimed to manage the oral medication distribution system within a Hybrid Medication Distribution System (HMDS) in a pediatric hospital using Artificial Neural Network (ANN) modeling. This was pursued through two specific aims: to develop and validate an ANN model that assigns oral medications to either the centralized or the decentralized distribution system within the HMDS, and to evaluate the expandability of the developed ANN model in managing these assignments in another high-throughput nursing unit. Methods: Retrospective data analyses were performed using one year of dispensing data from Cincinnati Children's Hospital Medical Center, January 1 through December 31, 2018. A subset of the oral medication dispense transactions was obtained, and two nursing units were selected to carry out the analyses for the two aims. The ANN model was developed and validated, and the model's quality metrics were obtained. The expandability of the developed ANN model, as well as the retraining of the model, were evaluated. Results: There were a total of 82,961 oral medication dispense transactions in the Aim 1 dispensing unit. Centralized distribution accounted for 54.18% of the oral medication dispense transactions, and 45.82% pertained to decentralized distribution. The ANN model was developed and cross-validated using 75% training (n = 62,002) and 25% testing (n = 20,667) data. 
The ANN training model had a 98% (open full item for complete abstract)

    Committee: Alex Lin Ph.D. (Committee Chair); Jianfei (Jeff) Guo Ph.D. (Committee Member); Ana Hincapie Ph.D. (Committee Member); Marepalli Rao Ph.D. (Committee Member); Bingfang Yan D.V.M. Ph.D. (Committee Member) Subjects: Pharmaceuticals
  • 6. Sargun, Deniz Robust Change Detection with Unknown Post-Change Distribution

    Doctor of Philosophy, The Ohio State University, 2021, Electrical and Computer Engineering

    As communication and control systems become more complex, more connected, and process data at higher velocities, detecting changes in patterns becomes increasingly difficult yet remains crucial to guaranteeing a given level of QoS, security, and reliability. Large systems usually have many modes of failure and are also prone to attacks from different surfaces. Still, there are numerous zero-day vulnerabilities that remain unidentified until they cause a fault or are exploited. Change detection with unknown distributions provides a way of detecting the occurrence of faults, or the gain of access by malicious parties, by comparing time-series system features to their norm. In a wide variety of applications, on the other hand, it is feasible to assume a certain level of knowledge of the system before the change takes effect, and utilizing this knowledge of initial conditions increases detection performance. With ever-increasing data rates and connectivity, any change in the observed process has to be detected on the fly, before it is outdated, without the need for storage, and with a small blast radius for malicious activities. A delay in real-time change detection may result in QoS disruption, cyber-physical threats, or an inability to contain the spread of a disease. So minimal computational complexity is a key ingredient for modern change detection algorithms. In this dissertation, we consider non-Bayesian change detection problems over a finite alphabet with varying change-point and cost models and with unknown post-change distributions. We focus on robust detection algorithms that utilize knowledge of pre-change system dynamics and are of low complexity. Given that the effect of the change on the system is unknown, the distribution of observations may diverge in many ways without much structure, whereas, before the change point, a false alarm is structured by Sanov's theorem, following a particular sample path. 
The proposed methods characterize (open full item for complete abstract)
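One generic way to exploit known pre-change dynamics over a finite alphabet, in the spirit described above (though not the dissertation's actual algorithm), is to raise an alarm when the windowed empirical distribution drifts too far, in KL divergence, from the known pre-change law. All parameters and the toy stream below are illustrative.

```python
import math
from collections import Counter

def kl(p, q):
    """KL divergence D(p || q) over a finite alphabet (q must be positive where p is)."""
    return sum(pi * math.log(pi / q[s]) for s, pi in p.items() if pi > 0)

def detect_change(stream, pre_change, window=20, threshold=0.5):
    """Return the first index at which the empirical distribution of the last
    `window` symbols diverges from the known pre-change law, or None."""
    buf = []
    for t, symbol in enumerate(stream):
        buf.append(symbol)
        if len(buf) > window:
            buf.pop(0)
        if len(buf) == window:
            emp = {s: c / window for s, c in Counter(buf).items()}
            if kl(emp, pre_change) > threshold:
                return t
    return None

# Pre-change: a fair alternation over {'a', 'b'}; post-change (from index 50): all 'a'.
stream = ['a', 'b'] * 25 + ['a'] * 50
alarm = detect_change(stream, {'a': 0.5, 'b': 0.5})
```

Only the pre-change distribution is assumed known, mirroring the setting of unknown post-change distributions.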

    Committee: C. Emre Koksal (Advisor); Atilla Eryilmaz (Committee Member); Kiryung Lee (Committee Member); Abhishek Gupta (Committee Member) Subjects: Computer Engineering; Electrical Engineering; Statistics
  • 7. Armstrong, Zoey Modeling distributions of Cantharellus formosus using natural history and citizen science data

    Master of Arts, Miami University, 2021, Geography

    The Pacific Golden Chanterelle (Cantharellus formosus) is a widely sought-after mushroom most abundant in the forests of Washington and Oregon, USA. This project used the species to investigate how accurately its distribution could be modeled using natural history (herbarium) records as model training data and citizen science (iNaturalist) observations as validation data. To combat the potential sampling bias toward population centers, an effort-variable weighting scheme was used to weight observations in harder-to-reach areas more heavily than those in easier-to-access areas. Four models were created and run using the natural history data as training points: Random Forests (RF), Maxent, General Linear Model (GLM), and Artificial Neural Network (ANN); the effort variable was only applied to the ANN and GLM models. Of these four, RF was found to perform best, with an equitable skill score (ETS) of 0.987 when tested against the iNaturalist citizen science validation points. Overall, this project provides a good proof of concept and framework for the use of herbarium and citizen science data in future biogeographical modeling projects.
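The ETS reported above (usually expanded as the equitable threat score, also called the Gilbert skill score) is computed from a 2x2 presence/absence contingency table; this sketch uses invented confusion counts, not the project's results.

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS (Gilbert skill score): the threat score corrected for the number of
    hits expected by chance; 1.0 is a perfect forecast, <= 0 is no skill."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Toy confusion counts for predicted vs. observed species presence -- illustrative only.
score = equitable_threat_score(hits=45, misses=5, false_alarms=5, correct_negatives=945)
```

The chance correction is what makes ETS preferable to raw accuracy when absences dominate, as they typically do in species distribution validation data.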

    Committee: Mary Henry (Advisor); Jessica McCarty (Committee Member); Nicholas Money (Committee Member) Subjects: Geography
  • 8. Hossain, Md Amjad DESIGN OF CROWD-SCALE MULTI-PARTY TELEPRESENCE SYSTEM WITH DISTRIBUTED MULTIPOINT CONTROL UNIT BASED ON PEER TO PEER NETWORK

    PHD, Kent State University, 2020, College of Arts and Sciences / Department of Computer Science

    Traditionally, a multi-party telepresence system is supported by one or more servers called Multipoint Control Units (MCUs). These servers are expensive, involve a third party in the system, and become a bottleneck for large-scale implementation. So, this dissertation presents protocols for an autonomous Peer-to-Peer (P2P) implementation of a Crowd-scale Multi-party Telepresence System (CMTS). The protocols use multiple features from the widely adopted P2P network Gnutella. The proposed protocols and strategies are designed based on the Principle of Distributed Computing (PDC) and the Principle of Priority-based Resource Allocation (PPRA). These principles are considered to address three of the four identified challenges of CMTS implementation: (1) the computational challenge, (2) the temporal challenge, and (3) the overcrowding challenge. The fourth, the visual challenge, is left for future work. The PDC is used to address the first two challenges by distributing the MCU's workload among participating peers. The MCU consists of a Multipoint Controller (MC) and one or more Multipoint Processors (MPs). For a distributed MCU, the optimal placement of the MC and MPs in the P2P overlay network is necessary, which is time-consuming because of the exponential search space. So, a phase-based design approach is considered. For optimal placement of the MC, three incremental protocols, GAncestor, ZePoP, and ZePoP-ε, are presented. Then, multiple methods are discussed to place the MPs around the optimal MC. For supporting the desired frame rate, two versions of progressive timer management schemes are used at the MPs. The protocol ZePoP-ε is designed based on the PPRA, which emphasizes proper utilization of the limited resources of the P2P network. Thus, the PPRA is used to address the overcrowding challenge as well as the temporal challenge. It is used to design a profit-based stream collection mechanism in ZePoP-ε for maximizing a Dynamic Role and Demand based Index (DRDI) within a bounded waiting time. 
The proposed protocols and methods co (open full item for complete abstract)

    Committee: Javed I Khan (Advisor); Cheng-Chang Lu (Committee Member); Gokarna P Sharma (Committee Member); Murali Shanker (Committee Member); Jun Li (Other) Subjects: Computer Science
  • 9. Whipps, Gene Contributions to Distributed Detection and Estimation over Sensor Networks

    Doctor of Philosophy, The Ohio State University, 2017, Electrical and Computer Engineering

    Wireless sensor networks have matured over the last several years from popular research and development platforms to commercially available sensors and systems. In many applications, wireless sensor networks have size, weight, power, and cost limitations. These constraints directly affect the ability of sensor nodes to adequately process and reliably communicate information within the sensor network. This dissertation examines aspects of distributed detection and estimation over a sensor network while considering limitations inherent in wireless networks. First, we consider the problem of distributed detection from a large network of sensors and introduce a realistic network model. Sensor nodes make individual decisions from their local observations and then communicate these decisions through a shared and imperfect communications channel to a central decision node. The key difference from previous research is that the network model allows the decision rule to leverage errors in the channel to improve detection performance. We derive analytical expressions that characterize the detection performance of the system with respect to both sensor density and communications delay. We show that the detection performance improves with network density when sensor nodes are appropriately censored and desensitized, despite increasing message collisions. In addition, we show that detection performance using the protocol model, with imperfect communications, rapidly converges to the perfect-communications case as the number of communication slots increases. Second, we study the problem of distributed quickest change detection from a network of sensors. Similar to the first part, sensor nodes communicate information to a central decision node, but in this part the central node continues to collect information from the sensor nodes until a detection is declared. 
We consider a minimax-type distributed quickest change detection solution that minimizes detection delay for a d (open full item for complete abstract)

    Committee: Randolph Moses (Advisor); Emre Ertin (Advisor); Eylem Ekici (Committee Member) Subjects: Electrical Engineering; Statistics
  • 10. Ding, Fei Smart Distribution System Automation: Network Reconfiguration and Energy Management

    Doctor of Philosophy, Case Western Reserve University, 2015, EECS - Electrical Engineering

    Smart distribution system automation is the key to realizing a highly reconfigurable, reliable, flexible, and active distribution system. Automated network reconfiguration, including restoration, is the most studied area in distribution automation; it contributes to power loss minimization and voltage improvement, and it enables the distribution network to respond to contingencies and changes in the grid. Distributed energy resources at customer premises, energy storage systems, and plug-in electric vehicles are indispensable parts of future smart distribution systems. Their participation has brought more dynamics and uncertainty into the grid, and hence new technologies at both the planning and operation levels must be developed to manage the energy dispatched from distributed energy resources and energy storage units, and the charging and discharging behavior of electric vehicles, so that the entire power distribution system can operate stably and efficiently. Meanwhile, due to intermittent, imperfectly predicted renewable energy and more complicated, uncertain load patterns, two challenges have arisen in network reconfiguration studies: more frequent reconfiguration actions and more complicated optimization problems for determining the optimal network topology. Thus, new approaches for reconfiguring distribution networks must be developed to overcome these challenges. To address the challenges distribution systems are facing and to develop new technologies for realizing smart distribution automation, a comprehensive study of network reconfiguration and energy management of distributed generation systems was conducted. 
The contributions of this dissertation include: (1) proposed a novel problem formulation for network reconfiguration problem based on “switch states”; (2) developed three new methods to solve the optimization problem including heuristic algorithm, hybrid algorithm and revised genetic algorithm; (3) propos (open full item for complete abstract)

    Committee: Kenneth Loparo (Advisor); Vira Chankong (Committee Member); Hong Mingguo (Committee Member); Prica Marija (Committee Member) Subjects: Electrical Engineering; Energy
  • 11. Ma, Tao Statistics of Quantum Energy Levels of Integrable Systems and a Stochastic Network Model with Applications to Natural and Social Sciences

    PhD, University of Cincinnati, 2013, Arts and Sciences: Physics

    I completed two projects in my PhD period: first, the statistics of quantum energy levels of integrable systems, and second, a stochastic network model with applications to the natural and social sciences. Quantum chaos studies level statistics and wave-function characteristics in the semiclassical regime, where the system action is several orders of magnitude larger than Planck's constant. The level statistics of classically integrable systems and classically chaotic ones are different: the levels of classically chaotic systems repel each other, while those of classically integrable ones seem to be uncorrelated. We have studied level statistics in the semiclassical spectrum of classically integrable systems without extra degeneracies. Our new findings are as follows. We developed a parametric averaging method to achieve ensemble averaging. We found that the interval level number variance displays persistent oscillations. Contrary to previous belief, the nearest-neighbor spacing distribution of integrable systems displays some repulsion. The theoretical explanations of the level statistics are based on both semiclassical theory and quantum mechanical derivation. Distributions of the mass of species in biology, human response times in psychology, personal wealth in economics, and stock returns in finance all display power-law tails. It is theoretically and practically meaningful to understand the characteristics and origin of these power-law tails. We study a stochastic network model that is able to generate a generalized inverse gamma distribution with a power-law tail. We apply this distribution to psychology and find a good fit to human response time data. Finally, we extend the model to study stock returns.

    Committee: Rostislav Serota Ph.D. (Committee Chair); Andrei Kogan Ph.D. (Committee Member); Michael Ma Ph.D. (Committee Member); L.C.R. Wijewardhana Ph.D. (Committee Member) Subjects: Physics
  • 12. Benedik, Christopher Modeling and Minimization of Integrated Circuit Packaging Parasitics at Radio Frequencies

    Doctor of Philosophy (PhD), Wright State University, 2013, Engineering PhD

    Many integrated circuits are connected to their package pins through bondwires. Due to the low cost of bondwires, there is interest in extending operating frequencies or negating their effects in order to keep the price of packaged integrated circuits as low as possible. Bondwires function as lumped circuits consisting of inductors, capacitors, and resistors, which can be modeled based on wire geometry. Knowing this, models can be created that approximate the effects of bondwires, and with these models, compensation techniques can be implemented to match the bondwire impedance to the signal line impedance. The effects of these elements on circuit operation are apparent on both signal and power lines to devices. This dissertation presents: 1. A bondwire model based on physical characteristics of interconnections, including neighboring wires; the model is tested against data from fabricated test fixtures, with results compared to those produced by current software. 2. A compensation method for performance degradation caused by bondwires at radio frequencies; test fixtures implementing these methods are fabricated and checked, with results compared to predictions. 3. A method of component stacking that can be used to attach passive components directly to the IC die, applied both to improve power distribution network (PDN) performance and to improve the performance of off-device filters through Q-factor improvement, with theoretical results compared to measured test fixture results.
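For scale, a commonly cited first-order approximation for the self-inductance of a straight round wire in free space (a textbook rule of thumb, not a formula taken from this dissertation) reproduces the familiar ~1 nH/mm bondwire estimate.

```python
import math

def bondwire_self_inductance_nH(length_cm, diameter_cm):
    """First-order self-inductance of a straight round wire in free space:
    L ~ 2 * l * (ln(4*l/d) - 0.75) nH, with l and d in centimeters.
    Ignores skin effect, ground planes, and mutual coupling to neighboring wires."""
    return 2.0 * length_cm * (math.log(4.0 * length_cm / diameter_cm) - 0.75)

# A typical 1 mm long, 25 um (1 mil) diameter bondwire lands near ~0.9 nH,
# i.e. the familiar ~1 nH/mm rule of thumb.
L = bondwire_self_inductance_nH(length_cm=0.1, diameter_cm=0.0025)
```

More accurate models of the kind the dissertation develops must add the capacitive and resistive elements and the mutual terms this one-line formula omits.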

    Committee: Saiyu Ren Ph.D. (Advisor); Raymond Siferd Ph.D. (Committee Member); Marty Emmert Ph.D. (Committee Member); Marian Kazimierczuk Ph.D. (Committee Member); Ronald Coutu Ph.D. (Committee Member) Subjects: Electrical Engineering; Engineering
  • 13. Lindley, Trevor A Framework to Protect Water Distribution Systems Against Potential Intrusions

    MS, University of Cincinnati, 2001, Engineering : Environmental Engineering

    A framework is developed to quantify the susceptibility of drinking water distribution systems to intrusion events. The framework integrates infrastructure information, hydraulic modeling, and demographic data. These elements are managed within a geographic information system (GIS). Using criteria that reflect system pressure, hydraulic intrusion pathways, and contaminant sources, the framework identifies locations within the distribution system susceptible to intrusion events. Locations found to be susceptible to intrusions are prioritized for attention based on proximity to sensitive populations, such as young children and the elderly. The proposed method is demonstrated with a case study based on a real distribution system. The study area encompasses approximately 38 square miles, includes three service areas, and contains over 280 miles of water main serving 18,900 connections with a total average demand of five to six million gallons per day. Susceptibility conditions exist at some locations throughout the system; however, only rarely do all three conditions coincide. Hence very few locations were deemed susceptible to intrusion events. The framework may support capital improvement programs, operational decisions, and distribution system sampling designs. Methods such as this have been suggested as part of a larger distribution system management approach to improve water quality and at the same time reduce regulatory sampling requirements.
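The framework's intersection logic (a location is flagged only when low pressure, a hydraulic intrusion pathway, and a contaminant source coincide, then ranked by nearby sensitive population) can be sketched as follows; the node records, field names, and pressure threshold are hypothetical, not from the thesis.

```python
# Hypothetical node records standing in for GIS attributes -- illustrative only.
nodes = [
    {"id": "N1", "pressure_psi": 12, "has_pathway": True,  "near_contaminant": True,  "sensitive_pop": 400},
    {"id": "N2", "pressure_psi": 55, "has_pathway": True,  "near_contaminant": True,  "sensitive_pop": 900},
    {"id": "N3", "pressure_psi": 10, "has_pathway": False, "near_contaminant": True,  "sensitive_pop": 120},
    {"id": "N4", "pressure_psi": 14, "has_pathway": True,  "near_contaminant": True,  "sensitive_pop": 1500},
]

LOW_PRESSURE_PSI = 20  # assumed screening threshold

def susceptible(node):
    """All three criteria must coincide, mirroring the framework's intersection logic."""
    return (node["pressure_psi"] < LOW_PRESSURE_PSI
            and node["has_pathway"]
            and node["near_contaminant"])

# Susceptible locations are then prioritized by proximity to sensitive populations.
priority = sorted((n for n in nodes if susceptible(n)),
                  key=lambda n: n["sensitive_pop"], reverse=True)
```

In the toy data only N1 and N4 satisfy all three conditions, echoing the case-study finding that the criteria rarely coincide.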

    Committee: Steven Buchberger (Advisor) Subjects:
  • 14. LI, ZHIWEI NETWORK WATER QUALITY MODELING WITH STOCHASTIC WATER DEMANDS AND MASS DISPERSION

    PhD, University of Cincinnati, 2006, Engineering : Environmental Engineering

    A new computer model, ADRNET, is developed to predict the spatial and temporal distribution of disinfectant in a pipe network, considering stochastic water demands and unsteady mass dispersion. An Eulerian-Lagrangian scheme is combined with a numerical Green's Function technique to solve the advection-dispersion-reaction equation efficiently in network conditions. In a comparison with the industry-standard advection-reaction water quality model (EPANET), ADRNET exhibits better agreement with field observations at locations where laminar flow is prevalent. Implementation of the ADRNET model is preceded by three ancillary studies. The first study investigates the effect of temporal averaging on stochastic pipe flows to identify the appropriate time scales for water quality modeling of distribution networks. For this purpose, a non-homogeneous Poisson Rectangular Pulse (PRP) process is utilized to simulate high-resolution residential water demands in a distribution network. Two water demand models are successfully established to demonstrate the variability and frequency of flow regimes for PRP flows as a function of time scale. The results show that the variance of time-averaged PRP random flows is inversely proportional to the time scale, and that the frequency of flow regimes depends on both the time scale and the mean of the random flows. The second study investigates the conditions under which mass dispersion is important in pipe networks through comparison of numerical simulations with and without dispersive transport. The results show that mass dispersion is always important in laminar flow zones, and that the importance of dispersion increases with increasing pipe diameter but decreases with increasing reaction rate coefficient. Finally, the effect of temporal scale on unsteady dispersion is studied through both theoretical analyses with periodic binary flow pulses and numerical simulation with PRP random laminar flows. 
For small diameter tubes, unsteady dispersion decreases with incr (open full item for complete abstract)
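The first study's qualitative result (variance of time-averaged PRP flows falling as the averaging time scale grows) can be reproduced with a toy simulation; the pulse parameters are invented, and Poisson arrivals are approximated by a per-step Bernoulli trial.

```python
import random

def prp_flow(total_steps, rate, duration, intensity, seed=7):
    """Poisson Rectangular Pulse demand on a unit time grid: each arrival adds a
    rectangular pulse of fixed duration and intensity; overlapping pulses stack."""
    rng = random.Random(seed)
    flow = [0.0] * total_steps
    for t in range(total_steps):
        if rng.random() < rate:  # Bernoulli approximation of Poisson arrivals
            for s in range(t, min(t + duration, total_steps)):
                flow[s] += intensity
    return flow

def time_averaged_variance(flow, window):
    """Variance of non-overlapping window averages of the flow series."""
    means = [sum(flow[i:i + window]) / window
             for i in range(0, len(flow) - window + 1, window)]
    mu = sum(means) / len(means)
    return sum((m - mu) ** 2 for m in means) / len(means)

flow = prp_flow(total_steps=100_000, rate=0.05, duration=5, intensity=1.0)
```

Averaging over longer windows smooths out individual pulses, so the variance of the window means shrinks roughly in proportion to the time scale once the window exceeds the pulse duration.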

    Committee: Dr. Steven Buchberger (Advisor) Subjects:
  • 15. KHANAL, NABIN GENERALIZED SENSITIVITY ANALYSIS OF WATER DISTRIBUTION SYSTEM VULNERABILITY TO DELIBERATE INTRUSIONS

    MS, University of Cincinnati, 2005, Engineering : Environmental Engineering

    This thesis presents a two-part investigation on the vulnerability of municipal water distribution systems to deliberate biochemical intrusions. In part 1, intrusions were modeled as a steady 6-hour injection of a soluble conservative contaminant into a randomly selected node on the pipe network of a small town. The propagation of the contaminant through the water distribution system was tracked with EPANET and, at the end of 72 hours, the fraction of the town's population exposed to the contaminant was estimated. This was repeated for multiple injection nodes randomly dispersed across the network. A dimensionless “Exposure Index” (EI) was introduced as a simple global measure of network vulnerability: an EI value of 0 implies that no residents are exposed to the contaminant; an EI value of 100 implies that all residents are exposed. In addition, results of the intrusion simulations were used to construct a “Zone of Influence” map which categorizes the network nodes on the basis of their exposure potential. In part 2, a Generalized Sensitivity Analysis (GSA) was performed to identify which, if any, of four dynamic network variables (base demand, storage capacity, mass loading and injection duration) had a significant influence on the percentage of population exposed to the contaminant during an intrusion. Latin Hypercube Sampling was used to set up 1152 biochemical assault simulations at each of three injection nodes. The nodes were selected on the basis of their exposure potential (high, medium or low) as indicated on the Zone of Influence map. Based on the Kolmogorov-Smirnov d statistic, simulated exposure levels were found to be most sensitive to variations in base demand and mass loading. Tank capacity and injection duration tended not to be important. More work is needed, however, to identify appropriate measures of population exposure. 
This exercise demonstrates that GSA holds promise as a robust tool for streamlining computationally intensive network simulati (open full item for complete abstract)
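    The Latin Hypercube Sampling step described above can be sketched as follows — a minimal, illustrative implementation in which each of the four network variables is divided into as many equal-width strata as there are runs, so every stratum is sampled exactly once. The function and variable names are illustrative, not taken from the thesis.

    ```python
    import random

    def latin_hypercube(n_samples, n_vars, rng=random.Random(42)):
        """Return n_samples points in [0,1)^n_vars, one per stratum per axis."""
        samples = [[0.0] * n_vars for _ in range(n_samples)]
        for v in range(n_vars):
            # One random point inside each of n_samples equal-width strata...
            strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
            rng.shuffle(strata)  # ...paired randomly across the variables
            for i in range(n_samples):
                samples[i][v] = strata[i]
        return samples
    ```

    Each unit-interval coordinate would then be scaled to the range of the corresponding variable (e.g., base demand or mass loading) before driving an EPANET run.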

    Committee: Dr. Steven Buchberger (Advisor) Subjects: Engineering, Environmental
  • 16. SHAH, VIVEK PARALLEL CLUSTER FORMATION FOR SECURED COMMUNICATION IN WIRELESS AD HOC NETWORKS

    MS, University of Cincinnati, 2004, Engineering : Computer Science

    Routing protocols in wireless ad hoc networks are highly insecure and prone to various attacks owing to the networks' inherent characteristics: an open medium, dynamically changing topologies, and distributed cooperation among member nodes. Designing a secure routing protocol for wireless ad hoc networks is a non-trivial problem. We propose a scheme to enhance the fault tolerance of the cluster head's functionality in CBRP. CBRP with a single cluster head presents a single point of failure and is unsuitable especially for functionalities like key distribution. By distributing the cluster head service across a group of cluster heads called Council nodes and utilizing a (k, n) secret sharing scheme, we can increase the network's fault tolerance against security attacks manifold. Simulation results demonstrate that our proposed algorithm enables simultaneous formation of the Council-based clusters, making the scheme time efficient and comparable to CBRP. Results also show that since Council-based clustering forms large clusters, it is feasible to apply (k, n) secret sharing concepts. The scheme is best suited to low-mobility networks because of the low signaling overhead involved in cluster reformations.
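    The (k, n) secret sharing the abstract relies on is classically realized with Shamir's scheme: a secret (e.g., a distribution key) becomes the constant term of a random degree-(k−1) polynomial over a prime field, each Council node holds one polynomial evaluation, and any k shares reconstruct the secret by Lagrange interpolation. A minimal sketch, assuming Shamir's construction (the thesis itself does not specify which (k, n) scheme is used):

    ```python
    import random

    PRIME = 2**31 - 1  # a Mersenne prime large enough for small demo secrets

    def split_secret(secret, k, n):
        """Split `secret` into n shares; any k of them reconstruct it."""
        # Random polynomial of degree k-1 with constant term = secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        shares = []
        for x in range(1, n + 1):
            y = 0
            for c in reversed(coeffs):  # Horner evaluation mod PRIME
                y = (y * x + c) % PRIME
            shares.append((x, y))
        return shares

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            # Modular inverse of den via Fermat's little theorem.
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret
    ```

    With k = 3 and n = 5 Council nodes, any three surviving nodes can jointly recover the key, so up to two node failures or compromises are tolerated.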

    Committee: Dr. Dharma Agrawal (Advisor) Subjects: Computer Science
  • 17. Cutter, Matthew Dispersion in Steady Pipe Flow with Reynolds Number Under 10,000

    MS, University of Cincinnati, 2004, Engineering : Environmental Engineering

    The longitudinal dispersion coefficient of a conservative tracer (CaCl2) was calculated from continuous flow tests in a dead-end pipe system. The system consisted of 6-inch diameter PVC pipe with a test length of approximately 44 meters. Flow conditions ranged from laminar to turbulent regimes, with a Reynolds number range of 1000 to 10000. Two static mixers in series were used to homogenize the tracer concentration across the cross-section of the pipe. The conductivity of the tracer was measured at two locations downstream of the injection and mixers using a conductivity probe at a point in the cross-section. Dispersion coefficients calculated by the method of moments are plotted versus Reynolds number. Test results show the time-averaged dispersion rate increasing with Reynolds number through the laminar flow regime and a portion of the transitional flow regime. At a flow rate corresponding to a Reynolds number (Re) of approximately Re=2400, the dispersion rate reaches a maximum value and then decreases until approximately Re=4000. As the tests enter the turbulent flow regime, the dispersion rate is minimized due to the plug-flow behavior inherent to turbulent flow. Results indicate that in laminar and transitional flow, dispersive mass transport plays a more important role than advective mass transport. Incorporating dispersion estimates into network water quality models will improve quality predictions for the dead-end portions of the network.
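    The method-of-moments calculation above can be sketched as follows. One common "frozen cloud" form estimates D = U²(σ²₂ − σ²₁) / (2(t̄₂ − t̄₁)) from the mean and variance of the concentration-time curves at the two downstream stations; this is a standard textbook estimate, offered here as an assumption since the abstract does not state the exact variant used. Function and variable names are illustrative.

    ```python
    def temporal_moments(times, conc):
        """Mean arrival time and temporal variance of a tracer curve,
        integrated with the trapezoidal rule."""
        def trapz(y):
            return sum((y[i] + y[i + 1]) * (times[i + 1] - times[i]) / 2
                       for i in range(len(times) - 1))
        m0 = trapz(conc)                                        # zeroth moment
        tbar = trapz([t * c for t, c in zip(times, conc)]) / m0  # mean time
        var = trapz([(t - tbar) ** 2 * c
                     for t, c in zip(times, conc)]) / m0         # variance
        return tbar, var

    def dispersion_coefficient(u, curve1, curve2):
        """Longitudinal D from curves (times, conc) at two stations."""
        t1, v1 = temporal_moments(*curve1)
        t2, v2 = temporal_moments(*curve2)
        return u ** 2 * (v2 - v1) / (2 * (t2 - t1))
    ```

    The growth in temporal variance between the two conductivity probes, scaled by the mean velocity, yields the dispersion coefficient plotted against Reynolds number.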

    Committee: Dr. Steven Buchberger (Advisor) Subjects: Engineering, Environmental
  • 18. Stewart, Craig An Evolutionary Analysis of the Internet Autonomous System Network

    MS, Kent State University, 2010, College of Arts and Sciences / Department of Computer Science

    The backbone of the Internet is made up of a network of autonomous systems. The Autonomous System Network, also referred to as the ASN, provides an organized system of routing between hosts and other autonomous systems. Knowledge of the ASN is important to understanding the Internet: for the purposes of studying its major performance and growth issues, the Internet can be viewed as the ASN itself. However, the complexity, size, and evolution pattern of the ASN make the network difficult to track over time. Previous research on the topic has done little to clarify the picture of the ASN: the majority of studies use static data such as snapshots and small data sets, and recent work does not discuss the ASN's structural aspects. We aim to expand on both the dynamic and structural properties of the ASN using a variety of metrics. Included in these measurements is a new process incorporating familiar topological patterns and generating functions. We discuss several such topological structures and the new technique, which compares the complex network graph to different topologies that exhibit signatures and generating functions similar to the network's. This process is then applied to a data set representing several states of the ASN, encompassing information collected from a variety of sources over the course of five years. The findings of this study reveal interesting and important information about the evolutionary state of the ASN, providing a complete and thorough analysis of multiple key properties of the Internet.
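    One concrete instance of the "signatures and generating functions" mentioned above is the degree distribution of the AS-level graph and its probability generating function G0(x) = Σₖ pₖ xᵏ, which characterizes a topology independently of its size. A minimal sketch (the toy edge list and names are illustrative, not the thesis's actual metric suite):

    ```python
    from collections import Counter

    def degree_distribution(edges):
        """Fraction of nodes with each degree in an undirected edge list."""
        deg = Counter()
        for a, b in edges:
            deg[a] += 1
            deg[b] += 1
        n = len(deg)
        counts = Counter(deg.values())
        return {k: c / n for k, c in sorted(counts.items())}

    def g0(pk, x):
        """Evaluate the degree generating function G0(x) = sum_k p_k x^k."""
        return sum(p * x ** k for k, p in pk.items())

    # Example: a star of three stub ASes around a transit AS 100.
    pk = degree_distribution([(100, 1), (100, 2), (100, 3)])
    # pk == {1: 0.75, 3: 0.25}, and G0(1) == 1 by normalization.
    ```

    Comparing such distributions (or their generating functions) across yearly ASN snapshots is one way to quantify how the topology's signature drifts over time.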

    Committee: Javed Khan PhD (Advisor); Arden Ruttan PhD (Committee Member); Hassan Peyravi PhD (Committee Member) Subjects: Computer Science
  • 19. Li, Hailong Analytical Model for Energy Management in Wireless Sensor Networks

    PhD, University of Cincinnati, 2013, Engineering and Applied Science: Computer Science and Engineering

    Wireless sensor networks (WSNs) are a type of ad hoc network with a data-collecting function. Because of their low-power, low-cost features, WSNs attract much attention from both academia and industry. However, since WSNs are battery driven and their multi-hop transmission pattern introduces the energy hole problem, energy management has become one of the fundamental issues for WSNs. In this dissertation, we study energy management strategies for WSNs. First, we propose a packet propagation scheme for both deterministic and random WSN deployments to prolong network lifetime. The essence of the packet propagation scheme is to control transmission power so as to balance energy consumption across the entire WSN. Second, a characteristic-correlation-based data aggregation approach is presented: redundant information gathered during data collection is effectively mitigated so as to reduce packet transmissions in the WSN, increasing network lifetime with limited overhead. Third, we provide a two-tier lifetime optimization strategy for wireless visual sensor networks (VSNs): by deploying redundant, cheaper relay nodes into an existing VSN, its lifetime is maximized at minimal cost. Fourth, our two-tier visual sensor network deployment is further extended to consider multiple base stations and image compression techniques. Last but not least, the UC AirNet WSN project is described, and we conclude with future research topics on energy management schemes for WSNs.
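    The energy hole problem the abstract targets can be illustrated with the widely used first-order radio model (E_elec and the amplifier constant below are typical textbook values, not parameters from this dissertation): under uniform multi-hop forwarding toward the sink, nodes in the innermost ring relay everyone else's traffic and drain first.

    ```python
    E_ELEC = 50e-9      # J/bit, transceiver electronics energy
    EPS_AMP = 100e-12   # J/bit/m^2, amplifier energy (free-space d^2 loss)

    def tx_energy(bits, d):
        """Energy to transmit `bits` over distance d (first-order model)."""
        return bits * (E_ELEC + EPS_AMP * d ** 2)

    def rx_energy(bits):
        return bits * E_ELEC

    def ring_load(ring, total_rings, bits_per_node=1000, hop=50.0):
        """Energy spent per node in ring `ring` (1 = nearest the sink) when
        each outer ring forwards its traffic inward one hop at a time.
        Assumes, for simplicity, equal node counts in every ring."""
        relayed = (total_rings - ring) * bits_per_node  # traffic from outside
        own = bits_per_node
        return rx_energy(relayed) + tx_energy(relayed + own, hop)
    ```

    Since the per-node load grows monotonically toward the sink, schemes like the dissertation's transmission-power control aim to flatten this profile so all rings exhaust their batteries at roughly the same time.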

    Committee: Dharma Agrawal D.Sc. (Committee Chair); Kenneth Berman Ph.D. (Committee Member); Yizong Cheng Ph.D. (Committee Member); Chia Han Ph.D. (Committee Member); Wen Ben Jone Ph.D. (Committee Member) Subjects: Computer Engineering