Search Results (1 - 25 of 52 Results)

Aldakheel, Eman A. A Cloud Computing Framework for Computer Science Education
Master of Science (MS), Bowling Green State University, 2011, Computer Science
With the rapid growth of Cloud Computing, the use of Clouds in educational settings can provide great opportunities for Computer Science students to improve their learning outcomes. In this thesis, we introduce a Cloud-Based Education architecture (CBE) as well as Cloud-Based Education for Computer Science (CBE-CS) and propose an automated CBE-CS ecosystem for implementation. This research employs the Cloud as a learning environment for teaching Computer Science courses by removing locality constraints, while simultaneously improving students' understanding of the material through practical experience with its finer details and complexities. In addition, this study includes a comparison between Cloud-based virtual classrooms and traditional e-learning systems to highlight the advantages of using Clouds in such a setting. We argue that by deploying Computer Science courses on the Cloud, the institution, administrators, faculty, and students would gain significant advantages from the new educational setting. Infrastructure buildup, software updates and license management, hardware configuration, infrastructure space, maintenance, power consumption, and many other issues would be either eliminated or minimized using Cloud technology. On the other hand, the number of enrolled students is likely to increase, since the Cloud increases the availability of the resources needed for interactive education of a larger number of students; it can deliver advanced technology for hands-on training and can increase students' readiness for the job market. The CBE-CS approach is also more likely to allow faculty to better demonstrate a subject's complexities to students by renting the needed facilities whenever desired. The research also identifies several potential Computer Science courses which could be launched and taught through Clouds. In addition, the selected courses have been classified based on the three well-known levels of Cloud services: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Subsequently, we propose a framework for CBE-CS considering the service layers and the selected courses. The proposed CBE-CS framework is intended to be integrated into a Virtual Classroom Ecosystem for Computer Science based on Cloud Computing, referred to as VCE-CS. This ecosystem is scalable, available, reliable, and cost effective. Examples from selected pilot courses (i.e., Database, Operating Systems, Networks, and Parallel Programming) are discussed. This research describes VCE-CS and argues for the benefits of such systems.

Committee:

Hassan Rajaei, PhD (Advisor); Guy Zimmerman, PhD (Committee Member); Jong Lee, PhD (Committee Member)

Subjects:

Computer Science

Keywords:

Cloud Computing; Clouds in educational settings; Cloud Computing for Computer Science (CBE-CS); Cloud Computing for Computer students; Cloud Based Education architecture; Cloud Based Education (CBE)

Nagavaram, Ashish. Cloud Based Dynamic Workflow with QOS For Mass Spectrometry Data Analysis
Master of Science, The Ohio State University, 2011, Computer Science and Engineering

Lately, there is a growing interest in the use of cloud computing for scientific applications, including scientific workflows. Key attractions of the cloud include the pay-as-you-go model and elasticity. While the elasticity offered by clouds can be beneficial for many applications and use scenarios, it also imposes significant challenges in the development of applications or services. For example, no general framework exists that can enable a scientific workflow to execute in a dynamic fashion with QOS (Quality of Service) support, i.e., exploiting the elasticity of clouds and automatically allocating and de-allocating resources to meet time and/or cost constraints while providing the quality of results the user needs.

This thesis presents a case study in creating a dynamic cloud workflow implementation with QOS support for a scientific application. We work with MassMatrix, an application which searches proteins and peptides from tandem mass spectrometry data. In order to use cloud resources, we first parallelize the search method used in this algorithm. Next, we create a flexible workflow using the Pegasus Workflow Management System from ISI. We then add a new dynamic resource allocation module, which can use a smaller or larger number of resources based on a time constraint specified by the user. Finally, we extend this to include QOS support to provide the user with the desired quality of results. We use the desired quality metric to calculate the values of the application parameters; this metric refers to the parameters that are computed to maximize the user-specified benefit function while meeting the time constraint. We evaluate our implementation using several different data-sets, and show that the application scales quite well. Our implementation effectively allocates resources adaptively, and the parameter prediction scheme is successful in choosing parameters that help meet the time constraint.
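
As an illustration only (this code is not from the thesis), a minimal sketch of the kind of deadline-driven sizing decision described above: given a measured per-core task throughput and a user-specified time constraint, pick the smallest number of cloud instances expected to finish the remaining work in time. The function name, the linear-speedup assumption, and all numbers are hypothetical.

```python
import math

def instances_needed(remaining_tasks, tasks_per_core_hour, cores_per_instance,
                     hours_left, max_instances):
    """Smallest instance count expected to finish the remaining work before
    the user-specified deadline, assuming (hypothetically) linear speedup."""
    if hours_left <= 0:
        return max_instances  # deadline already passed; allocate the ceiling
    required_cores = remaining_tasks / (tasks_per_core_hour * hours_left)
    n = math.ceil(required_cores / cores_per_instance)
    return max(1, min(n, max_instances))

# Example: 1200 searches left, 25 searches/core-hour observed so far,
# 8-core instances, 3 hours until the deadline, at most 32 instances.
print(instances_needed(1200, 25.0, 8, 3.0, 32))  # -> 2
```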

Committee:

Gagan Agrawal, PhD (Advisor); Rajiv Ramnath, PhD (Committee Member); Michael Freitas, PhD (Committee Member)

Subjects:

Bioinformatics; Biomedical Engineering; Biomedical Research; Computer Engineering; Computer Science

Keywords:

cloud;dynamic workflow;adaptive execution on cloud;parallelization on cloud;time constraint execution;QOS on cloud;parameter prediction;parameter modeling

Manjunatha, Ashwin Kumar. A Domain Specific Language Based Approach for Developing Complex Cloud Computing Applications
Master of Science in Computer Engineering (MSCE), Wright State University, 2011, Computer Engineering

Computing has changed. Lately, a slew of cheap, ubiquitous, connected mobile devices, as well as seemingly unlimited, utility-style, pay-as-you-go computing resources, has become available to the common man. The latter, commonly called Cloud Computing (or just the Cloud), is democratizing computing by making large computing power accessible to people and corporations around the world easily and economically.

However, taking full advantage of this computing landscape, especially for data-intensive domains, has been hampered by many factors, the primary one being the complexity of developing applications for the variety of available platforms.

This thesis attempts to alleviate many of the issues faced in developing complex Cloud-centric applications by using Domain Specific Language (DSL) based methods. The research is focused on two main areas. One area is hybrid applications with mobile device based front-ends and Cloud based back-ends. The other is data- and compute-intensive biological experiments, exemplified by applying a DSL to metabolomics data analysis. This research investigates the viability of using a DSL in each domain and provides evidence of successful application.

Committee:

Amit Sheth, PhD (Advisor); Krishnaprasad Thirunarayan, PhD (Committee Member); Paul Anderson, PhD (Committee Member); Ramakanth Kavuluru, PhD (Committee Member)

Subjects:

Computer Engineering; Computer Science

Keywords:

Cloud Computing; Mobile Computing; Domain Specific Language; DSL; Cloud Mobile Hybrid Application; Metabolomics; Mobicloud; Mobicloud Toolkit; mobi-cloud; Metabolink; SCALE toolkit

Carpenter, Aaron P. Cloud-Based Collaborative Environments in the Business World: A Study in Editing Practices
Master of Education (MEd), Bowling Green State University, 2012, Career and Technology Education/Technology
Cloud-based collaborative learning environments are used in many companies today. The problem of this study was to identify whether cloud-based collaborative learning environments were actually being used in the business world and what behavioral change, if any, comes about due to their use. The objectives of this study were to: (1) study the use of cloud-based collaborative environments in the business world; (2) investigate how the editing capabilities are used; and (3) examine whether or not the collaborative systems in place have any impact on user behavior in the company. To complete this study, both qualitative and quantitative measures were utilized to gather data. Surveys, as well as interviews, were the methods used in conjunction with a company that currently uses a cloud-based collaborative system in the workplace. This study brings some new data and issues to light regarding how these new collaborative cloud-based systems are used in the business environment, but it is recommended that further research be conducted to gain additional perspectives on how other companies use these technologies.

Committee:

Terry Herman, PhD (Committee Chair); Fei Gao (Committee Member); Anthony Fontana (Committee Member)

Subjects:

Instructional Design

Keywords:

learning design; instructional design; Internet; cloud; cloud-based systems; e-learning; technology; education; tech education; wikis; Microsoft Exchange; GoogleDocs; Google

Jamaliannasrabadi, Saba. High Performance Computing as a Service in the Cloud Using Software-Defined Networking
Master of Science (MS), Bowling Green State University, 2015, Computer Science
Benefits of Cloud Computing (CC) such as scalability, reliability, and resource pooling have attracted scientists to deploy their High Performance Computing (HPC) applications on the Cloud. Nevertheless, HPC applications can face serious challenges on the cloud that could undermine the gained benefit if care is not taken. This thesis aims to address the shortcomings of the Cloud for HPC applications through a platform called HPC as a Service (HPCaaS). Further, a novel scheme is introduced to improve the performance of HPC task scheduling on the Cloud using the emerging technology of Software-Defined Networking (SDN). The research introduces "ASETS: A SDN-Empowered Task Scheduling System" as an elastic platform for scheduling HPC tasks on the cloud. In addition, a novel algorithm called SETSA is developed as part of the ASETS architecture to manage the scheduling task of the HPCaaS platform. The platform monitors network bandwidths to take advantage of changes when submitting tasks to the virtual machines. Experiments and benchmarking of HPC applications on the Cloud identified virtualization overhead, cloud networking, and cloud multi-tenancy as the primary shortcomings of the cloud for HPC applications. A private Cloud Test Bed (CTB) was set up to evaluate the capabilities of ASETS and SETSA in addressing such problems. Subsequently, the Amazon AWS public cloud was used to assess the scalability of the proposed systems. The results obtained for ASETS and SETSA on both private and public clouds indicate that significant performance improvement of HPC applications can be achieved. Furthermore, the results suggest that the proposed system is beneficial both to cloud service providers and to users, since ASETS performs better as the degree of multi-tenancy increases. The thesis also proposes SETSAW (SETSA Window) as an improved version of the SETSA algorithm. Unlike other proposed solutions for HPCaaS, which have either optimized the cloud to make it more HPC-friendly or required adjusting HPC applications to make them more cloud-friendly, ASETS provides a platform for existing cloud systems to improve the performance of HPC applications.
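
Purely as a hedged sketch (not the SETSA algorithm itself, whose details are in the thesis), the core idea of bandwidth-aware task placement can be illustrated as follows: keep the per-VM bandwidths reported by an SDN monitor and submit the next task to the VM that currently looks best. The data structures and the scoring rule are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    bandwidth_mbps: float   # as reported by a (hypothetical) SDN bandwidth monitor
    queued_tasks: int

def pick_vm(vms, alpha=1.0):
    """Score VMs by available bandwidth discounted by queue length and
    return the best candidate; the scoring rule is an illustrative choice."""
    def score(vm):
        return vm.bandwidth_mbps / (1.0 + alpha * vm.queued_tasks)
    return max(vms, key=score)

vms = [VM("vm-a", 800.0, 3), VM("vm-b", 450.0, 0), VM("vm-c", 900.0, 6)]
target = pick_vm(vms)
target.queued_tasks += 1
print(target.name)  # -> vm-b under this scoring rule
```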

Committee:

Hassan Rajaei, Ph.D (Advisor); Robert Green, Ph.D (Committee Member); Jong Kwan Lee, Ph.D (Committee Member)

Subjects:

Computer Engineering; Computer Science; Technology

Keywords:

High Performance Computing; HPC; Cloud Computing; Scientific Computing; HPCaaS; Software Defined Networking; SDN; Cloud Networking; Virtualization

Diskin, Yakov. Volumetric Change Detection Using Uncalibrated 3D Reconstruction Models
Doctor of Philosophy (Ph.D.), University of Dayton, 2015, Electrical Engineering
We present a 3D change detection technique designed to support various wide-area-surveillance (WAS) applications in changing environmental conditions. The novelty of the work lies in our approach of creating an illumination-invariant system tasked with detecting changes in a scene. Previous efforts have focused on image enhancement techniques that manipulate the intensity values of the image to create a more controlled and unnatural illumination. Since most applications require detecting changes in a scene irrespective of the time of day (lighting conditions or weather conditions present at the time of the frame capture), image enhancement algorithms fail to suppress the illumination differences enough for Background Model (BM) subtraction to be effective. A more effective change detection technique utilizes the 3D scene reconstruction capabilities of structure from motion to create a 3D background model of the environment. By rotating the 3D model and computing its projection, previous work has been shown to effectively eliminate the background by subtracting the newly captured dataset from the BM projection, leaving only the changes within the scene. Although previous techniques have proven to work in some cases, they fail when the illumination significantly changes between the capture of the datasets. Our approach completely eliminates the illumination challenges from the change detection problem. The algorithm is based on our previous work in which we have shown a capability to reconstruct a surrounding environment at near real-time speeds. The algorithm, namely Dense Point-Cloud Representation (DPR), allows for a 3D reconstruction of a scene using only a single moving camera. Utilizing video frames captured at different points in time allows us to determine the relative depths in a scene. The reconstruction process resulting in a point cloud is computed based on SURF feature matching and depth triangulation analysis. We utilized optical flow features and a single-image super resolution technique to create an extremely dense model. The accuracy of DPR is independent of the environmental changes that may be present between the datasets, since DPR only operates on images within one dataset to create the 3D model for each dataset. Our change detection technique utilizes a unique scheme to register the two 3D models. The technique uses an opportunistic approach to compute the optimal feature extraction and matching scheme to compute a fundamental matrix needed to transform a 3D point-cloud model from one dataset to align with the 3D model produced by another. Next, in order to eliminate any effects of the illumination change, we convert each point-cloud model into a 3D binary voxel grid. A 'one' is assigned to voxels containing points from the model while a 'zero' is assigned to voxels with no points. In our final step, we detect the changes between the two environments by geometrically subtracting the registered 3D binary voxel models. This process is computationally efficient due to the logic-based operations available when handling binary models. We measure the success of our technique by evaluating the detection outputs, false alarm rate, and computational expense when comparing with state-of-the-art change detection techniques.
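
A minimal NumPy sketch of the voxel-based comparison step described above (registration, DPR reconstruction, and the opportunistic feature matching are out of scope here): quantize two registered point clouds into binary occupancy grids and take their logical difference. The grid size, origin handling, and synthetic data are simplifying assumptions, not the dissertation's code.

```python
import numpy as np

def voxelize(points, origin, voxel_size, grid_shape):
    """Convert an (N, 3) point cloud into a binary occupancy grid:
    a voxel is 1 if it contains at least one point, else 0."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < grid_shape), axis=1)
    grid = np.zeros(grid_shape, dtype=bool)
    grid[tuple(idx[inside].T)] = True
    return grid

def voxel_changes(model_a, model_b):
    """Voxels occupied in exactly one model, i.e. a geometric difference."""
    return np.logical_xor(model_a, model_b)

rng = np.random.default_rng(0)
cloud_a = rng.uniform(0, 10, size=(5000, 3))
cloud_b = np.vstack([cloud_a, rng.uniform(4, 6, size=(200, 3))])  # an added object
ga = voxelize(cloud_a, origin=np.zeros(3), voxel_size=0.5, grid_shape=(20, 20, 20))
gb = voxelize(cloud_b, origin=np.zeros(3), voxel_size=0.5, grid_shape=(20, 20, 20))
print(int(voxel_changes(ga, gb).sum()), "changed voxels")
```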

Committee:

Vijayan Asari, Ph.D. (Committee Chair); Raul Ordonez, Ph.D. (Committee Member); Eric Balster, Ph.D. (Committee Member); Juan Vasquez, Ph.D. (Committee Member)

Subjects:

Electrical Engineering

Keywords:

volumetric change detection; 3D reconstruction; aerial surveillance; point cloud registration; illumination invariant; noise suppression; Dense Point-cloud Representation;

Diskin, Yakov. Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision
Master of Science (M.S.), University of Dayton, 2013, Electrical Engineering
We present a 3D reconstruction algorithm designed to support various automation and navigation applications. The algorithm presented focuses on the 3D reconstruction of a scene using only a single moving camera. Utilizing video frames captured at different points in time allows us to determine the depths of a scene. In this way, the system can be used to construct a point cloud model of its unknown surroundings. In this thesis, we present the step-by-step methodology of the development of the reconstruction technique. The original reconstruction process, resulting in a point cloud, was computed based on feature matching and depth triangulation analysis. In an improved version of the algorithm, we utilized optical flow features to create an extremely dense representation model. Although dense, this model is hindered by its low disparity resolution. As feature points were matched from frame to frame, the resolution of the input images and the discrete nature of disparities limited the depth computations within a scene. With the third algorithmic modification, we introduce the addition of a preprocessing step of nonlinear super resolution. With this addition, the accuracy of the point cloud, which relies on precise disparity measurement, has significantly increased. Using a pixel-by-pixel approach, the super resolution technique computes the phase congruency of each pixel's neighborhood and produces nonlinearly interpolated high resolution input frames. Thus, a feature point travels a more precisely measured discrete disparity. Also, the quantity of points within the 3D point cloud model is significantly increased, since the number of features is directly proportional to the resolution and high frequencies of the input image. Our final contribution, additional preprocessing steps designed to filter noise points and mismatched features, gives rise to the complete Dense Point-cloud Representation (DPR) technique. We measure the success of DPR by evaluating the visual appeal, density, accuracy, and computational expense of the reconstruction technique and compare it with two state-of-the-art techniques. After presenting rigorous analysis and comparison, we conclude by presenting the future direction of development and plans for deployment in real-world applications.

Committee:

Asari Vijayan, PhD (Committee Chair); Raul Ordonez, PhD (Committee Member); Eric Balster, PhD (Committee Member)

Subjects:

Electrical Engineering; Engineering

Keywords:

monocular vision; 3D Scene Reconstruction; Dense Point-cloud Representation; Point Cloud Model; DPR; Super Resolution; Vision Lab; University of Dayton; Computer Vision; Vision Navigation; UAV; UAS; UGV; RAIDER; Yakov Diskin; Depth Resolution Enhancement

Wisniewski, John P. The Effect of Age and Metallicity on Be Circumstellar Disk Formation
Doctor of Philosophy, University of Toledo, 2005, Physics

While rapid rotation is likely the dominant mechanism which influences the development of classical Be circumstellar disks, recent observational and theoretical work suggests that evolutionary age and/or metallicity may also influence the onset of the Be phenomenon. We use a simple 2-color diagram photometric technique to identify the candidate Be population in 16 Large Magellanic Cloud (LMC), Small Magellanic Cloud (SMC), and Galactic clusters having a wide range of ages and metallicities. We detect an enhancement in the fractional early-type candidate Be star population relative to the fractional later-type candidate population in clusters whose early-type stars are near the end of their main sequence lifetimes, suggesting the Be phenomenon is enhanced with evolutionary age. Furthermore, in contrast to the suggestion of Fabregat & Torrejon (2000) that the Be phenomenon should begin at least 10 Myr after the zero-age main sequence, we detect a substantial number of candidate Be stars in clusters as young as 5 Myr. Follow-up photo-polarimetric observations of these young candidates reveal many are true classical Be stars, indicating that a significant number of zero-age main sequence stars must be rotating close to their critical breakup velocities. The improved statistics offered by our study also reveal clear evidence of an enhancement of the Be phenomenon in low metallicity environments.

It is commonly assumed in the literature that all B-type objects detected as excess H-alpha emitters via 2-color diagrams are "Be stars". We explore the nature of many of these candidate Be stars with additional photo-polarimetric observations, and find that ~25% of these objects exhibit properties which are not consistent with those expected from classical Be stars. We also find that the prevalence of polarization Balmer jumps in Be stars located in low metallicity environments is lower than that typically observed for Galactic Be stars. One interpretation of this result is that disk systems in low metallicity environments have fundamentally different properties, i.e., smaller disks and/or lower disk temperatures, than their Galactic counterparts. We also detect evidence of cluster-wide alignment of Be circumstellar disks in 2 LMC clusters.

Committee:

Karen Bjorkman (Advisor)

Subjects:

Physics, Astronomy and Astrophysics

Keywords:

Large Magellanic Cloud; Small Magellanic Cloud; Be star; polarization

Patali, Rohit. Utility-Directed Resource Allocation in Virtual Desktop Clouds
Master of Science, The Ohio State University, 2011, Computer Science and Engineering

User communities are rapidly transitioning their "traditional desktops" that have dedicated hardware and software installations into "virtual desktop clouds" (VDCs) that are accessible via thin-clients. To allocate and manage VDC resources for Internet-scale desktop delivery, existing work focuses mainly on managing server-side resources based on utility functions of CPU and memory loads, and does not consider network health and thin-client user experience. Resource allocation without combined utility-directed information on system loads, network health, and thin-client user experience in VDC platforms inevitably results in costly guesswork and over-provisioning of resources.

In this thesis, an analytical model, the "Utility-Directed Resource Allocation Model" (U-RAM), is presented to solve the combined utility-directed resource allocation problem within VDCs. The solution uses an iterative algorithm that leverages utility functions of system, network, and human components obtained using a novel virtual desktop performance benchmarking toolkit, "VDBench". The combined utility functions are used to direct decision schemes based on Kuhn-Tucker optimality conditions for creating user desktop pools and determining optimal resource allocation size/location. U-RAM is evaluated in a VDC testbed featuring: (a) popular user applications (Spreadsheet Calculator, Internet Browser, Media Player, Interactive Visualization), and (b) TCP/UDP based thin-client protocols (RDP, RGS, PCoIP) under a variety of user load and network health conditions. Evaluation results demonstrate that the U-RAM solution maximizes VDC scalability, i.e., 'VDs per core' density and user connection quantity, while delivering satisfactory thin-client user experience.
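
The abstract does not spell out the utility functions or the Kuhn-Tucker-based scheme, so the following is only an illustrative sketch of utility-directed sizing under stated assumptions: given a concave (diminishing-returns) utility of allocated cores per desktop pool, split a fixed core budget by greedy marginal-utility allocation, a common approximation when the closed-form optimality conditions are not at hand. All pool names, utility functions, and numbers are hypothetical.

```python
import math

# Hypothetical per-pool utility of allocated cores (concave: diminishing returns).
pools = {
    "spreadsheet": lambda c: 1.0 * math.log1p(c),
    "browser":     lambda c: 1.5 * math.log1p(c),
    "media":       lambda c: 2.5 * math.log1p(c),
}

def allocate(pools, total_cores, step=1):
    """Greedy marginal-utility allocation of a core budget across pools."""
    alloc = {name: 0 for name in pools}
    for _ in range(0, total_cores, step):
        # Give the next core to the pool whose utility rises the most.
        best = max(pools, key=lambda n: pools[n](alloc[n] + step) - pools[n](alloc[n]))
        alloc[best] += step
    return alloc

print(allocate(pools, total_cores=24))
```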

Committee:

Rajiv Ramnath (Advisor); Prasad Calyam (Committee Member); Gagan Agrawal (Committee Member)

Subjects:

Computer Engineering; Computer Science

Keywords:

Virtual Desktop Cloud; Desktop Virtualization; Utility; Cloud; Thin Client; Scalability; Performance

Deng, Nan. Systems Support for Carbon-Aware Cloud Applications
Doctor of Philosophy, The Ohio State University, 2015, Computer Science and Engineering
Datacenters, which are large server farms, host cloud applications providing services ranging from search engines to social networks and video streaming. Such applications may belong to the owner of the datacenter or to third-party developers. Due to the growth of cloud applications, datacenters account for a larger fraction of worldwide carbon emissions each year. To reduce carbon emissions, many datacenter owners are slowly but steadily adopting clean, renewable energy, such as solar or wind. To encourage datacenter owners to invest in renewable energy, its usage should lead to profit. However, in most cases the renewable energy supply is intermittent and may be limited, which makes renewable energy more expensive than traditional dirty energy. On the other hand, not all customers need renewable energy for their applications. Our approach is to devise accountable and effective mechanisms to deliver renewable energy only to users who will pay for renewable-powered services. According to our research, datacenter owners could make a profit if they concentrated the renewable energy supply on carbon-aware applications, which prefer cloud resources powered by renewable energy. We develop two carbon-aware applications as use cases. We conclude that if an application takes carbon emissions as a constraint, it will end up using more resources from renewable-powered datacenters. This observation helps datacenter owners to wisely distribute renewable energy within their systems. Our first attempt at concentrating renewable energy focuses on the architectural level. Our approach requires datacenters to have on-site renewable energy generators that use grid ties to integrate renewable energy into their power supply system. To measure the concentration of renewable energy, we introduce a new metric, the renewable-powered instance. Using this metric, we found that grid-tie placement has first-order effects on renewable-energy concentration. On-site renewable energy requires an initial investment to install the generator. Although this cost can be gradually amortized over time, some operators prefer renewable energy credits, which can be bought from utility companies by paying a premium for renewable energy transmitted through the grid and produced in other locations. To let datacenters, with or without on-site renewable generation, attract more carbon-aware customers, we designed a system for Adaptive Green Hosting. It identifies carbon-aware customers by signaling customers' applications when renewable energy is available and observing their behaviors. Since carbon-aware applications tend to use more resources in a datacenter with low emission rates, datacenter owners can profit by attributing more renewable energy to carbon-aware applications, thereby encouraging them to use more resources. Our experiments show that adaptive green hosting can increase profit by 152% for one of today's larger green hosts. Although it is possible for cloud applications to maintain a low carbon footprint while making a profit, most existing applications are not carbon-aware, and the carbon footprint of most existing workloads is large. Without forcing them to switch to renewable energy, we believe responsible end users could take the first step. We propose a method to help end users discover implementation-level details about a cloud application by extracting its internal software delays. Such details are unlikely to be exposed to third-party users. Instead, our approach probes the target application from outside and extracts normalized software delay distributions using only response times. Such software delay distributions are not only useful for revealing the normalized energy footprint of an application, but can also be used to diagnose root causes of tail response times for live applications.
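
As a hedged illustration of what a "carbon-aware application" could look like (this is not code from the dissertation), consider a replica-placement rule that fills the datacenter with the lowest reported emission rate first while respecting a per-hour carbon budget. The site names, emission figures, and budget are invented.

```python
def place_replicas(total_replicas, sites, carbon_budget_g_per_hr):
    """Assign replicas to the cleanest sites first, stopping when adding
    another replica would exceed the hourly carbon budget."""
    placement = {s["name"]: 0 for s in sites}
    used = 0.0
    for site in sorted(sites, key=lambda s: s["g_co2_per_replica_hr"]):
        while sum(placement.values()) < total_replicas:
            cost = site["g_co2_per_replica_hr"]
            if used + cost > carbon_budget_g_per_hr:
                break
            placement[site["name"]] += 1
            used += cost
    return placement, used

sites = [
    {"name": "solar-dc", "g_co2_per_replica_hr": 20.0},   # renewable-heavy site
    {"name": "grid-dc",  "g_co2_per_replica_hr": 450.0},  # conventional grid site
]
print(place_replicas(10, sites, carbon_budget_g_per_hr=1500.0))
```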

Committee:

Christopher Stewart, Dr. (Advisor); Xiaorui Wang, Dr. (Committee Member); Gagan Agrawal, Dr. (Committee Member)

Subjects:

Computer Engineering; Computer Science

Keywords:

Datacenter; Renewable Energy; Performance Analysis; Black-box Analysis; Cloud Computing

Chakraborty, Suryadip. Data Aggregation in Healthcare Applications and BIGDATA set in a FOG based Cloud System
PhD, University of Cincinnati, 2016, Engineering and Applied Science: Computer Science and Engineering
The Wireless Body Area Sensor Network (WBASN) is a wireless network of wearable computing devices, including medical body sensors, which capture and transmit different physiological data wirelessly to a monitoring base station. When a physiological sensor continuously senses and generates huge amounts of data, the network might become congested due to heavy traffic, which can lead to starvation and ineffectiveness of the WBASN system. This motivates the first problem addressed in this research: the use of data aggregation to reduce traffic, enhance network lifetime, and save network energy. This research also focuses on dealing with huge amounts of healthcare data, widely known today as 'BIGDATA'. Our research investigates the use of BIGDATA and ways to analyze it using a cloud-based architecture that we propose as FOG networks, which improves on the standard cloud architecture. For data aggregation, we propose the use of statistical regression polynomials of order 4 and 8; for computational reasons, we performed the 6th-order coefficient computation and analyzed our results on real-time patient data in terms of compression ratio and correlation coefficients. We also study the energy savings achieved by our method and investigate how node failure would be handled. While building a polynomial-based data aggregation approach in the WBASN system, which involves summing and aggregating the patient's wireless body sensor data, we noticed the problem of dealing with thousands and millions of patient records when a WBASN system runs for continuous monitoring. Nor could we handle such large amounts of data within the small storage and limited computational abilities of the physiological sensors. There is therefore an immediate need for an architecture and tools to deal with this data, commonly known today as BIGDATA. To analyze the BIGDATA, we propose to implement a robust cloud-based structure that uses a Hadoop-based MapReduce system to produce meaningful interpretations of the patients' monitoring data for medical practitioners, doctors, and medical representatives in a time-efficient manner. As obtaining thousands of records of patients' secured health information raises proprietary and licensing issues, we examined our cloud-based BIGDATA architecture using Twitter and Google N-gram data, which are freely available in the public domain. In our next proposed task, we plan to implement a robust and scalable architecture over the existing cloud system that itself addresses the shortcomings of public cloud architectures such as Amazon S3 and Microsoft Azure. Therefore, we propose the use of a newly introduced system known as FOG networks, which significantly helps clients (medical workers monitoring the patient's vital parameters) to assess, interpret, and analyze the patient's data on injuries, health parameter performance, improvement in health condition, associated vital parameters, and emergencies efficiently and effectively.
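
A minimal sketch of the polynomial-aggregation idea described above, using NumPy (the thesis's exact coefficient scheme and node protocol are not reproduced here): fit an order-6 polynomial to a window of sensor samples, transmit only the coefficients, and judge the fit by a compression ratio and a correlation coefficient. The window length and synthetic data are illustrative assumptions.

```python
import numpy as np

def aggregate_window(samples, order=6):
    """Fit an order-`order` polynomial to one window of sensor samples and
    return the coefficients (what a node would transmit instead of raw data)."""
    t = np.arange(len(samples))
    return np.polyfit(t, samples, order)

def evaluate(samples, coeffs):
    """Compression ratio and correlation between raw and reconstructed signal."""
    t = np.arange(len(samples))
    recon = np.polyval(coeffs, t)
    compression_ratio = len(samples) / len(coeffs)
    correlation = np.corrcoef(samples, recon)[0, 1]
    return compression_ratio, correlation

# Synthetic heart-rate-like window: slow trend plus noise.
rng = np.random.default_rng(1)
window = 72 + 5 * np.sin(np.linspace(0, np.pi, 64)) + rng.normal(0, 0.3, 64)
coeffs = aggregate_window(window)
print(evaluate(window, coeffs))  # 64 samples reduced to 7 coefficients
```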

Committee:

Dharma Agrawal, D.Sc. (Committee Chair); Amit Bhattacharya, Ph.D. (Committee Member); Rui Dai, Ph.D. (Committee Member); Chia Han, Ph.D. (Committee Member); Carla Purdy, Ph.D. (Committee Member)

Subjects:

Computer Science

Keywords:

Wireless body area sensor networks;Data aggregation;Cloud computing;Fog computing

Patil, Sharada Krishna. Usable, lightweight and secure, architecture and programming interface for integration of Wireless Sensor Network to the Cloud
Master of Science, The Ohio State University, 2011, Computer Science and Engineering

Wireless sensor networks (WSNs) have been gaining popularity in recent years because of their potential to enable innovative applications in various fields. These fields include industrial, social, and regulatory applications, to name a few. If we extend a traditional sensor network to the Internet, WSNs that are dispersed and networked together can collaborate to accomplish many tasks that cannot be accomplished with a few powerful sensors or computers on a smaller network. With the growing popularity of cloud services, due to the pay-per-use policy for computation and data storage resources and the easing of the burden of maintaining the service, using the cloud to integrate a WSN with the Internet has become viable.

The primary goal of this research is to investigate how to facilitate secure communication between a WSN and a cloud and to provide a secure policy for users to access such a service. We propose and develop an architecture and programming interface, named Intortus, to enable this exploration. Intortus lets software programmers develop and deploy applications on a WSN quickly by relieving the programmer of understanding and using the cloud provider's API to access its data store, and of writing embedded C code for sensors. There are many challenges in enabling such a service: providing end-to-end secure communication and design constraints due to limitations in cloud providers' services, to mention a few. We discuss the issues and the approach taken to build such an architecture and interface. This thesis also describes the functionality Intortus provides and how it can be further extended.

Committee:

Rajiv Ramnath, PhD (Advisor); Jay Ramanathan, PhD (Advisor)

Subjects:

Computer Science

Keywords:

Wireless Sensor Network; Cloud; Integration

Gera, Amit. Provisioning for Cloud Computing
Master of Science, The Ohio State University, 2011, Industrial and Systems Engineering
The paradigm of cloud computing has started a new era of service computing. While there are many research efforts on developing enabling technologies for cloud computing, few focus on how to strategically set price and capacity and on what key components lead to success in this emerging market. In this thesis, we present quantitative modeling and optimization approaches for assisting such decisions in cloud computing services. We first show that learning curve models help in understanding the potential market of cloud services and explain quantitatively why cloud computing is most attractive to small and medium businesses. We then present a Single Instance model to depict a particular type of cloud network and aid in resource provisioning for cloud service providers. We further present a Multiple Instance model to depict a generic cloud network. We map the resource provisioning problem to Kelly's Loss Network and propose a Genetic Algorithm to solve it. The approach provides the cloud service provider with a quantitative framework for obtaining management solutions and for learning and reacting to the critical parameters in the operations management process by gaining useful business insights.
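
The abstract maps provisioning to Kelly's loss network; as a hedged, single-link illustration (not the thesis's Single or Multiple Instance model), the classic Erlang B formula gives the blocking probability of a loss system for a given capacity, and a provider can search for the smallest capacity that meets a target blocking rate. The traffic values are made up.

```python
def erlang_b(offered_load, capacity):
    """Blocking probability of an M/M/c/c loss system (Erlang B),
    computed with the standard numerically stable recurrence."""
    b = 1.0
    for c in range(1, capacity + 1):
        b = (offered_load * b) / (c + offered_load * b)
    return b

def min_capacity(offered_load, target_blocking):
    """Smallest number of instances keeping blocking below the target."""
    c = 1
    while erlang_b(offered_load, c) > target_blocking:
        c += 1
    return c

# Example: 40 Erlangs of offered demand, at most 1% of requests rejected.
print(min_capacity(40.0, 0.01))  # smallest instance count meeting the 1% target
```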

Committee:

Dr. Cathy Xia (Advisor); Dr. Theodore Allen (Committee Member)

Subjects:

Operations Research

Keywords:

Cloud Computing; Learning Curves; Stochastic Modeling; Pricing; Provisioning

Kline, Wayne T. Climatic Factors Associated with the Rapid Wintertime Increase in Cloud Cover across the Great Lakes Region
MA, Kent State University, 2009, College of Arts and Sciences / Department of Geography
The Great Lakes Region of the United States is an area of great climatic diversity. Research analyzing diurnal temperature range (DTR) has noted that in late autumn and early winter an abrupt decrease in the mean temperature range occurs for stations near the Great Lakes. Reasons for this rapid change are likely related to cloud cover amounts and the frequencies of specific weather-types. In this thesis, analyses of temporal trends and correlations of several weather variables were conducted to help explain the rapid change in the region's climate. This variability was then correlated with the teleconnection phases of the PNA (Pacific/North American pattern) and NAO (North Atlantic Oscillation). Through statistical and spatial analysis of 54 first-order weather stations, it was found that the timing and magnitude of breakpoints in DTR, cloud cover, and MP (moist polar weather-type) frequency were the most significantly related. The breakpoint for the DTR decrease and cloud cover (CC) increase occurs in early November in the east and late October in the west, generally accompanied by increased MP frequency as well. The DTR breakpoint occurs on the same day as the CC breakpoint, typically in late October to early November, or a few days after it, while the MP breakpoint is typically a few weeks after DTR. Changes in the magnitude of the breakpoint, relative to teleconnection phase, were much more significant than changes in its timing. The PNA phase demonstrated a greater and stronger influence on the western Great Lakes Region, while the NAO influenced the eastern and strong lake-effect areas.

Committee:

Scott Sheridan, PhD (Advisor); Thomas Schmidlin, PhD (Committee Member); Donna Witter, PhD (Committee Member)

Subjects:

Atmosphere; Earth; Geography

Keywords:

Great Lakes; Climate Variability; Trends; Cloud Cover; Diurnal Temperature Range; Spatial Synoptic Classification; Weather-Types; Teleconnections; Pacific/North American; North Atlantic Oscillation; Climatology; Geography

Xu, Siyao. THE RECONSTRUCTION OF CLOUD-FREE REMOTE SENSING IMAGES: AN ARTIFICIAL NEURAL NETWORKS (ANN) APPROACH
MA, Kent State University, 2009, College of Arts and Sciences / Department of Geography
Spatial or temporal serial remote sensing images are playing increasingly important roles in monitoring, utilizing, and analyzing resources. However, a large number of remote sensing images are contaminated by clouds, which causes missing information and, moreover, results in the difficulty of extracting complete information. Traditional solutions to this problem have limitations such as low resolution, data loss, or large computational load. In this paper, a method that utilizes an Artificial Neural Network (ANN) interpolator is implemented, which may avoid the problems stated above. For the sake of assessing the performance of the ANN interpolator, a small area of forest, mountain, valley, and road is clipped from an ETM+ file. Several "cloud" areas are manually created to test the ANN model. One band of the image is transformed into ASCII files. In the next step, a K Nearest Neighbor (KNN) searching algorithm is applied to these ASCII files, and k neighbors are found for every pixel in this area. Then an ANN model is built. For each pixel that was contaminated by cloud, its neighbors are used as input information, and the output for this pixel is its predicted DN value. Finally, the output is restored to a raster file. Root-Mean-Square Error, Quantile-Quantile plots, and an Error Distribution Map are adopted to assess the performance of this ANN interpolator. The thesis concludes that activation functions and the neighborhood search do not cause significant differences in the output of ANN interpolators, and that the interpolation results are globally good but largely biased regionally.
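
A compact sketch in the spirit of the approach above, assuming scikit-learn is available (the thesis's ANN design, ASCII workflow, and KNN parameters are not reproduced): train a small neural network to predict a pixel's DN value from its surrounding clear pixels, then apply it under a manually created "cloud" mask. The data are synthetic, and for simplicity the sketch reads neighbor values straight from the test image; a real fill would use only clear or already-filled neighbors.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def neighbor_features(img, r, c):
    """DN values of the 8-connected neighborhood of pixel (r, c)."""
    return [img[r + dr, c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if not (dr == 0 and dc == 0)]

# Synthetic single-band image: a smooth gradient standing in for terrain, plus noise.
rng = np.random.default_rng(2)
img = np.fromfunction(lambda r, c: 50 + 0.5 * r + 0.3 * c, (60, 60)) + rng.normal(0, 1, (60, 60))
cloud = np.zeros_like(img, dtype=bool)
cloud[20:30, 20:30] = True  # manually created "cloud" patch for testing

# Train on clear pixels: neighborhood values -> center DN value.
X, y = [], []
for r in range(1, 59):
    for c in range(1, 59):
        if not cloud[r, c]:
            X.append(neighbor_features(img, r, c))
            y.append(img[r, c])
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)

# Predict DN values under the cloud and report the error against the known truth.
rows, cols = np.where(cloud)
pred = model.predict([neighbor_features(img, r, c) for r, c in zip(rows, cols)])
print("RMSE:", float(np.sqrt(np.mean((pred - img[rows, cols]) ** 2))))
```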

Committee:

Mandy Munro-Stasiuk, Phd (Advisor); Milton Harvey, Phd (Advisor)

Subjects:

Geography

Keywords:

Remote Sensing Image; Cloud-free; Artificial Neural Networks

Jayapandian, Catherine Praveena. Cloudwave: A Cloud Computing Framework for Multimodal Electrophysiological Big Data
Doctor of Philosophy, Case Western Reserve University, 2014, EECS - Computer and Information Sciences
Multimodal electrophysiological data, such as electroencephalography (EEG) and electrocardiography (ECG), are central to effective patient care and clinical research in many disease domains (e.g., epilepsy, sleep medicine, and cardiovascular medicine). Electrophysiological data is an example of clinical 'big data' characterized by volume (on the order of terabytes (TB) of data generated every year), velocity (gigabytes (GB) of data per month per facility), and variety (about 20-200 multimodal parameters per study), referred to as the '3Vs of Big Data.' Current approaches for storing and analyzing signal data using desktop machines and conventional file formats are inadequate to meet the challenges of the growing volume of data and the need to support multi-center collaborative studies with real-time and interactive access. This dissertation introduces a web-based electrophysiological data management framework called Cloudwave using a highly scalable open-source cloud computing approach and hierarchical data format. Cloudwave has been developed as part of the National Institute of Neurological Disorders and Stroke (NINDS) funded multi-center project called Prevention and Risk Identification of SUDEP Mortality (PRISM). The key contributions of this dissertation are: 1. an expressive data representation format called Cloudwave Signal Format (CSF) suitable for data interchange in cloud-based web applications; 2. cloud-based storage of CSF files processed from EDF using Hadoop MapReduce and HDFS; 3. a web interface for visualization of multimodal electrophysiological data in CSF; and 4. computational processing of ECG signals using Hadoop MapReduce for measuring cardiac functions. Comparative evaluations of Cloudwave with traditional desktop approaches demonstrate one order of magnitude improvement in storage performance over 77GB of patient data, one order of magnitude improvement in computing cardiac measures for single-channel ECG data, and a 20-times improvement for four-channel ECG data using a 6-node cluster in a local cloud. Therefore, our Cloudwave approach helps address the challenges in the management, access, and utilization of an important type of multimodal big data in biomedicine.
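
To make the MapReduce step concrete, here is a hedged Hadoop-streaming-style sketch (not Cloudwave's actual CSF pipeline): the mapper emits per-record RR intervals keyed by channel, and the reducer averages them into a heart-rate estimate. The input record format and field names are assumptions; locally the two stages can be exercised by piping the mapper output through a sort into the reducer.

```python
#!/usr/bin/env python3
"""Hadoop-streaming-style mapper/reducer sketch for a per-channel heart-rate
measure. Input lines are assumed to look like: <channel>,<rr_interval_ms>."""
import sys

def mapper(lines):
    for line in lines:
        channel, rr_ms = line.strip().split(",")
        print(f"{channel}\t{rr_ms}")

def reducer(lines):
    # Streaming delivers lines sorted by key, so equal keys arrive consecutively.
    current, intervals = None, []
    def emit(ch, vals):
        if ch is not None and vals:
            mean_rr = sum(vals) / len(vals)
            print(f"{ch}\tmean_hr_bpm={60000.0 / mean_rr:.1f}")
    for line in lines:
        channel, rr_ms = line.strip().split("\t")
        if channel != current:
            emit(current, intervals)
            current, intervals = channel, []
        intervals.append(float(rr_ms))
    emit(current, intervals)

if __name__ == "__main__":
    {"map": mapper, "reduce": reducer}[sys.argv[1]](sys.stdin)
```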

Committee:

Guo-Qiang Zhang, PhD (Committee Chair); Satya Sahoo, PhD (Committee Member); Xiang Zhang, PhD (Committee Member); Samden Lhatoo, MD, FRCP (Committee Member)

Subjects:

Bioinformatics; Biomedical Research; Computer Science; Neurosciences

Keywords:

Big Data; Data management; Cloud computing; Electrophysiology; Web application; Ontology; Signal analysis

McDonald, Trent A. Between Artifice and Actuality: The Aesthetic and Ethical Metafiction of Vladimir Nabokov and David Mitchell
Bachelor of Arts (BA), Ohio University, 2014, English
This thesis is concerned with metafiction, or self-conscious fiction. After discussing the status of metafiction from its postmodernist heyday (both its proponents and its critics) to current criticism, an analysis of the role of aesthetics and ethics in literature follows. The novels Pale Fire by Vladimir Nabokov and Cloud Atlas by David Mitchell (and associated texts) are then analyzed in their own chapters. Nabokov is considered an author averse to morally didactic fiction and enamored with aesthetically focused fiction. For Nabokov, metafiction is used as an escape from the world and into "aesthetic bliss." Mitchell, on the other hand, uses metafiction to instruct us to make our own ethical decisions without the presence of an author's guiding hand. Neither author is able to use metafiction to escape from reality, as all fiction remains connected to the material world. In the conclusion, metafiction is presented as a fictional form with strengths and flaws like any other.

Committee:

Thom Dancer (Advisor)

Subjects:

American Literature; British and Irish Literature; Literature

Keywords:

Literature; Vladimir Nabokov; David Mitchell; Pale Fire; Cloud Atlas; Postmodernism; Metafiction; Aesthetics and Literature; Ethics and Literature; Richard Rorty; John Gardner; William H Gass; Metamodernism; Contemporary Literature; Literary Criticism

Keller, Dustin M. U-Spin Symmetry Test of the Σ*+ Electromagnetic Decay
Doctor of Philosophy (PhD), Ohio University, 2010, Physics and Astronomy (Arts and Sciences)
This dissertation presents an analysis of the electromagnetic decay of the Σ0(1385) from the reaction γ p → K+ Σ*0. Also presented is the first ever measurement of the electromagnetic decay of the Σ+(1385) from the reaction γ p → K0 Σ*+. Both results are extracted from the g11a data set taken using the CLAS detector at the Thomas Jefferson National Accelerator Facility. A real photon beam with a maximum energy of 3.8 GeV was incident on a liquid hydrogen target during the experiment, resulting in the photoproduction of the kaon and Σ* hyperons. Kinematic fitting is used to separate signal from background in each case. For the first time, a method to kinematically fit the neutron in the Electromagnetic Calorimeter (EC) of CLAS was performed, leading to a high-statistics study of the neutron resolutions in the EC. New techniques in neutron resolution matching for Monte Carlo simulation using dynamic variable smearing are also developed. The results for the Σ0(1385) electromagnetic decay have smaller statistical and systematic uncertainties than the previous measurement by Taylor et al. A U-spin symmetry test using the U-spin SU(3) multiplet representation gave predictions for the Σ*+ → Σ+γ partial width and the Σ*0 → Λγ partial width. The latter agrees, within the experimental uncertainties, with the prediction from U-spin symmetry, but the former is much smaller than its prediction.

Committee:

Kenneth Hicks, PhD (Advisor); Todd Young, PhD (Committee Member); Justin Franz, PhD (Committee Member); Carl Brune, PhD (Committee Member)

Subjects:

Physics

Keywords:

U-Spin Symmetry Test; Electromagnetic decay; Strange Baryons; Strange Sector Partial Width; meson cloud effect

Richards, Craig. Development of Cyber-Technology Information for Remotely Accessing Chemistry Instrumentation
Master of Computing and Information Systems, Youngstown State University, 2011, Department of Computer Science and Information Systems

There exists a wide variety of technologies which allow for remote desktop access, data transfer, encryption, and worldwide communication through the Internet. These technologies, while independently solving unique problems, can be combined into a single system that resolves all of those problems together. Youngstown State University's Chemistry Department required a high-reliability unified system to provide remote access, web cam feeds, user security, and encrypted file transfer for computer equipment operating scientific instrumentation. A suitable software solution was developed at Youngstown State University in collaboration with Zethus Software through analysis of technological resources and project requirements, and a process of software development.

This thesis describes the cumulus::CyberLab project developed to meet the above requirements. The cumulus::CyberLab project allows students, faculty, and scientists to remotely access millions of dollars of scientific equipment offered by our university from anywhere in the world. To best describe this project, this thesis provides an overview of the project, the work performed in it, and how the project created unique software that is valuable not only to our university but also to other users worldwide.

Committee:

Graciela Perera, PhD (Advisor); Allen Hunter, PhD (Committee Member); John Sullins, PhD (Committee Member)

Subjects:

Biology; Chemistry; Communication; Computer Science

Keywords:

Remote access; Scientific instrumentation; Cloud computing; Secure file storage

Snyder, Brett W. Tools and Techniques for Evaluating the Reliability of Cloud Computing Systems
Master of Science in Engineering, University of Toledo, 2013, College of Engineering
This research introduces a computationally efficient approach to evaluating the reliability of a cloud computing system (CCS). The cloud computing paradigm has ushered in the need to provide computing resources in a highly scalable, flexible, and transparent fashion. The rapid uptake of cloud resource utilization has led to a need for methods that can assess the reliability of a CCS while aiding the process of expansion and planning. This thesis proposes using reliability assessments likened to those performed on industrial-grade power systems to establish methods for evaluating the reliability of a CCS and the corresponding performance metrics. Specifically, non-sequential Monte Carlo Simulation (MCS) is used to evaluate CCS reliability at the system scale. Further contributions are made regarding the design, development, and exploration of standardized test systems, a novel state representation of CCSs, and the use of test systems based on real-world CCSs. Results demonstrate that the method is effective, and multiple insights are provided into the nature of CCS reliability and CCS design. A scalable, graphical, web-based cloud simulation software package called ReliaCloud-NS is also presented. ReliaCloud-NS is designed with a RESTful API for performing non-sequential MCS reliability evaluations of cloud computing systems. ReliaCloud-NS allows users to design and simulate complex CCSs built from CCS components. Simulation results are stored and presented to the user in the form of interactive charts and graphs within a Web browser. ReliaCloud-NS contains multiple types of simulations as well as multiple VM allocation schemes. ReliaCloud-NS also contains a novel feature which evaluates CCS reliability across a range of varying VM allocations and establishes and graphs a CCS reliability curve. In this thesis, the interactive web-based interface, the different types of simulations available, and an overview of the results generated from a simulation are described. The contributions of this thesis lay the foundation for computationally efficient methods that allow for the design and evaluation of highly resilient CCSs. Coupled with the ReliaCloud-NS software, these contributions allow for efficient design of complex yet reliable CCSs that can be simulated and analyzed, leading to improved customer experience and cost savings.
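
A small sketch of the non-sequential Monte Carlo idea (the thesis's state representation, test systems, and ReliaCloud-NS features are not reproduced): each sample independently draws an up/down state for every server from its availability and checks whether the surviving VM capacity still covers the allocated VMs. The component counts and availabilities are invented.

```python
import random

def mcs_reliability(servers, vm_demand, samples=100_000, seed=0):
    """Non-sequential Monte Carlo: each sample draws an up/down state for every
    server independently and checks whether surviving VM slots meet demand.
    `servers` is a list of (vm_slots, availability) pairs."""
    rng = random.Random(seed)
    success = 0
    for _ in range(samples):
        capacity = sum(slots for slots, avail in servers if rng.random() < avail)
        if capacity >= vm_demand:
            success += 1
    return success / samples

servers = [(8, 0.99)] * 10 + [(16, 0.97)] * 4   # hypothetical CCS: 144 VM slots total
for demand in (96, 112, 128):
    print(demand, mcs_reliability(servers, demand))
```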

Committee:

Robert Green, Ph.D. (Committee Chair); Vijay Devabhaktuni, Ph.D. (Committee Member); Mansoor Alam, Ph.D. (Committee Member); Hong Wang, Ph.D. (Committee Member)

Subjects:

Computer Engineering; Computer Science; Electrical Engineering

Keywords:

cloud computing; reliability; availability; Monte Carlo simulation; modeling; software

Huang, Linda. Opening Up to the Universe: Cai Guoqiang's Methodology from 1986 to 1996
MA, University of Cincinnati, 2013, Design, Architecture, Art and Planning: Art History
During his residence in Japan from 1986 to 1995, Cai Guoqiang (b. 1957) developed his philosophical thinking on art, life, and the universe and adopted gunpowder explosion as his major art-making method. With a focus on his encounter with Japanese Mono-ha artists and the influences of Japan's culture and history on his art, this study explores how Cai challenged the orthodox forms of Asian art by radically exploding gunpowder and how he adopted a Daoist view of the universe to cross cultural and disciplinary boundaries. In the first chapter, I investigate the interrelationship between Cai and Mono-ha artists and disclose the common theme of de-materialization underlying their work. I argue that Cai's artistic use of gunpowder explosion liberates his material from its physical confines and initiates conversations with the unknown power of the universe. In the second chapter, I discuss Cai's reflections on Mono-ha art and examine the embodiment of "the inner universe and external universe" in his explosion events, through which he established a collaborative working relationship with invisible natural power. In the last chapter, I further discuss the theme of art and war in his work. By analyzing the metaphorical imagery of the mushroom cloud in his explosion events, I uncover his artistic strategy of simulating, intervening in, and deciphering human history. Taking a close look at Cai's early gunpowder explosion projects, this study unveils his subversive methods for escaping conventional art production modes and expanding the temporal-spatial dimensions of his work, which instills new possibilities and infinite freedom in his art.

Committee:

Kimberly Paice, Ph.D. (Committee Chair); Mikiko Hirayama, Ph.D. (Committee Member); Morgan Thomas, Ph.D. (Committee Member)

Subjects:

Art History

Keywords:

Mono-ha;dematerialization;Daoism;art production mode;cross boundary;mushroom cloud

Pon, Karen. The Representation of Low Cloud in the Antarctic Mesoscale Prediction System
Master of Science, The Ohio State University, 2015, Atmospheric Sciences
The accuracy of cloud prediction in Antarctica can have a significant impact on aviation operations. Unforecast low cloud can endanger an aircraft attempting to land and affect a pilot's ability to distinguish the horizon and surface features while in flight. Over-forecasting of low cloud results in fewer missions completed. A number of cloud forecast products have been developed over the years; however, forecasters often prefer to use the low-level relative humidity (RH) fields to forecast low cloud. This study investigated the use of the Stoelinga-Warner algorithm to generate the current Antarctic Mesoscale Prediction System (AMPS) cloud base height forecast and whether an RH threshold could be used as a proxy for cloud base height. The Stoelinga-Warner algorithm was tested using a case study of a mesoscale low in Prydz Bay near Davis station. The algorithm was insensitive to changes in the phase scheme and light extinction threshold used to predict cloud base. Further investigation revealed inadequate quantities of cloud hydrometeors, indicating a problem with the model's microphysics scheme. Therefore, AMPS combined with the Stoelinga-Warner algorithm does not accurately predict cloud base height. Cloud base heights derived from radiosonde RH thresholds were compared with synoptic observations for Davis, McMurdo, and Halley. Lidar observations were also tested against both synoptic observations and radiosonde-derived cloud base heights at Halley. The optimal RH threshold for predicting cloud base height was ~70% at Davis and McMurdo, and ~90% at Halley. AMPS RH data were used to generate cloud base heights at different thresholds, and these were verified against synoptic observations. Results were mixed due to the comparatively large scatter in the model RH field, with the optimal RH threshold changing according to the verification metric used. However, there was broad agreement that Davis and McMurdo required a lower RH threshold than Halley. The thresholds found for Davis and McMurdo are consistent with a study by Inoue et al. (2015), which found optimal RH thresholds between 58% and 66% for Davis, Casey, and Mawson stations. The reason for the much higher threshold at Halley is unclear, and further studies are required to determine whether a general RH threshold can be applied across the continent to predict cloud base height.
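
A minimal sketch of an RH-threshold cloud-base diagnostic of the kind evaluated above (not the AMPS or Stoelinga-Warner code): scan a sounding from the surface upward and report the first level at which relative humidity meets the threshold. The profile values are invented.

```python
def cloud_base_height(heights_m, rh_percent, threshold=70.0):
    """Return the lowest height (m AGL) where RH meets the threshold, scanning
    the profile from the surface upward; None means no cloud base diagnosed."""
    for z, rh in sorted(zip(heights_m, rh_percent)):
        if rh >= threshold:
            return z
    return None

# Hypothetical sounding levels (m) and relative humidities (%).
z = [10, 150, 400, 800, 1200, 1800, 2500]
rh = [62, 65, 68, 74, 88, 93, 60]
print(cloud_base_height(z, rh, threshold=70.0))   # -> 800 (Davis/McMurdo-style threshold)
print(cloud_base_height(z, rh, threshold=90.0))   # -> 1800 (Halley-style threshold)
```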

Committee:

David Bromwich (Advisor); Jay Hobgood (Committee Member); Jialin Lin (Committee Member)

Subjects:

Atmospheric Sciences

Keywords:

cloud; antarctica; numerical weather prediction

Hassett, Maribeth O. Analysis of the Hygroscopic Properties of Fungal Spores and Pollen Grains inside an Environmental Scanning Electron Microscope (ESEM)
Doctor of Philosophy, Miami University, 2016, Biological Sciences
Substantial amounts of primary biological aerosolized particles (PBAP) are emitted into the atmosphere each year. Cloud droplets are formed by the condensation of water vapor on nucleation sites such as aerosolized particles. Recent findings in atmospheric research indicate that PBAP may play an important role in climate by contributing to the formation of cloud- and precipitation-sized droplets by acting as cloud condensation nuclei. This dissertation analyzes the hydration of commonly detected PBAP, fungal spores and pollen grains, inside an environmental scanning electron microscope (ESEM), in order to elucidate their potential role in atmospheric droplet formation. Experiments focus on determining the hygroscopicity, surface wettability, and droplet formation of particles with varying surface features. The ability of PBAP to act as cloud condensation nuclei depends on the morphological and chemical characteristics of the particle. For example, the size, shape, or presence of soluble materials may impact the critical supersaturation at which these particles are able to condense water on their surfaces, in addition to the mechanics of droplet formation. Fungal spores examined in this study exhibit distinct variations in the chemical nature of the spore surface, owing to differences in dispersal modes (active or passive). The pollen grains examined are morphologically diverse and possess various surface architectures, including a range of aperture types. Observations inside the ESEM indicate that certain distinguishing characteristics of pollen grains and fungal spores may enhance or hinder surface wettability and droplet formation. For example, actively dispersed basidiospores exhibited rapid growth and expansion of large droplets, likely due to the presence of hygroscopic sugars on their surfaces. Surface features of pollen grains, such as the presence of pores or furrows, dictated surface wettability and the internal uptake of water by pollen. These findings indicate that certain taxonomic groups may exhibit distinct properties that allow them to act as efficient nuclei for the condensation of water vapor in the atmosphere. This research has implications for the emission of biological particles in regions supporting large populations of actively dispersing fungi and plants, and heightens the importance of the sustainability of these types of ecosystems.

Committee:

Nicholas Money (Advisor)

Subjects:

Biology; Botany

Keywords:

Fungal spores; Pollen grains; Cloud condensation nuclei; ESEM

Hopson, James Edward. Characteristics of a water vapor expansion chamber
Doctor of Philosophy, The Ohio State University, 1954, Graduate School

Committee:

Not Provided (Other)

Subjects:

Physics

Keywords:

Cloud chamber; Condensers; Thermocouples

Ranabahu, Ajith Harshana. Abstraction Driven Application and Data Portability in Cloud Computing
Doctor of Philosophy (PhD), Wright State University, 2012, Computer Science and Engineering PhD
Cloud computing has changed the way organizations create, manage, and evolve their applications. While many organizations are eager to use the cloud, tempted by substantial cost savings and convenience, the implications of using clouds are still not well understood. One of the major concerns in cloud adoption is the vendor lock-in of applications, caused by the heterogeneity of the numerous cloud service offerings. Vendor-locked applications are difficult, if not impossible, to port from one cloud system to another, forcing cloud service consumers to use undesired or suboptimal solutions. This dissertation investigates a complete and comprehensive solution to address the issue of application lock-in in cloud computing. The primary philosophy is the use of carefully defined abstractions in a manner that makes the heterogeneity of the clouds invisible. The first part of this dissertation focuses on the development of cloud applications using abstract specifications. Given the domain-specific nature of many cloud workloads, we focused on using Domain Specific Languages (DSLs). We applied DSL-based development techniques to two domains with different characteristics and learned that abstraction-driven methods are indeed viable and result in significant savings in cost and effort. We also showcase two publicly hosted, Web-based application development tools pertaining to the two domains. These tools use abstractions in every step of the application life-cycle and allow domain experts to conveniently create applications and deploy them to clouds, irrespective of the target cloud system. The second part of this dissertation presents the use of process abstractions for application deployment and management in clouds. Many cloud service consumers are focused on specific application-oriented tasks, so we provide abstractions for the most useful cloud interactions via a middleware layer. Our middleware system not only provides independence from the various process differences, but also provides the means to reuse known best practices. The success of this middleware system also influenced a commercial product.

Committee:

Amit Sheth, PhD (Advisor); Krishnaprasad Thirunarayan, PhD (Committee Member); Keke Chen, PhD (Committee Member); Eugene Maximilien, PhD (Committee Member)

Subjects:

Computer Science

Keywords:

Cloud computing; Domain Specific Languages; Program Generation; Program Portability
