Search Results

(Total results 18)

  • 1. Nykl, Scott High-fidelity Distributed Physics for Continuous Interactive Virtual Worlds Using Server Clusters

    Master of Science (MS), Ohio University, 2008, Computer Science (Engineering and Technology)

    Today's virtual worlds are places of immense interaction and realism that give rise to highly immersive environments. One of the most critical aspects providing this realism is the simulation of motion and the physical interaction between objects, i.e., physics. The physics of a virtual world is governed by mathematical equations that approximate the physical attributes of the real world. As virtual worlds grow larger, the computational complexity grows; thus, the processing time needed to simulate the virtual world grows. Once simulating an interval of virtual-world time takes longer than that same interval of real time, players in the virtual world begin to notice a slowdown; this detracts from realism and causes immersion to suffer. As virtual worlds grow larger and more complex, the computational power required to simulate the virtual world in 'real time' grows proportionally. This thesis focuses on simulating high-fidelity distributed physics for continuous interactive virtual worlds using server clusters.

    Committee: David M. Chelberg PhD (Advisor); Shawn Ostermann PhD (Committee Member); Hans Kruse PhD (Committee Member); Teresa Franklin PhD (Committee Member) Subjects: Computer Science
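
    To make the real-time constraint concrete, here is a minimal fixed-timestep loop in Python (an illustrative sketch, not the thesis's cluster implementation; the `world` object and the 60 Hz timestep are assumptions) that flags when simulating one step of virtual time costs more than that step of real time:

        import time

        DT = 1.0 / 60.0  # fixed simulation timestep (seconds); 60 Hz assumed

        def run(world, steps=600):
            """Toy fixed-timestep loop: counts frames where simulating DT of
            virtual time costs more than DT of wall-clock time."""
            behind = 0
            for _ in range(steps):
                start = time.perf_counter()
                world.step(DT)                 # advance the physics by one step
                elapsed = time.perf_counter() - start
                if elapsed > DT:               # fell behind real time:
                    behind += 1                # players perceive a slowdown
                else:
                    time.sleep(DT - elapsed)   # wait out the rest of the frame
            return behind
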
  • 2. Kernan, Nicholas A Large Scale Assessment of DNS Resolution Services

    Master of Sciences (Engineering), Case Western Reserve University, 2023, EECS - Computer and Information Sciences

    Prior work indicated that in some regions, a client's choice of DNS resolution service may affect DNS performance by up to an order of magnitude. We conduct a follow-up study that dramatically upscales the number of vantage points to gain a more complete picture, and sets up ongoing monitoring of DNS resolvers to offer a longitudinal perspective on their performance. Specifically, we collect measurements from thousands of globally distributed vantage points, assessing ISP resolvers and popular public resolvers using DNS latency and the quality of edge server mappings produced for leading CDNs. We find that ISP resolvers far outperform public resolvers for DNS latency due to a high cache hit rate and low hit latency. Meanwhile, the best CDN mappings are achieved by ISP resolvers and public resolvers using ECS, or when the CDN uses global anycast. We monitor these trends monthly, providing results visualization through a publicly accessible website.

    Committee: Michael Rabinovich (Advisor); Vincenzo Liberatore (Committee Member); An Wang (Committee Member) Subjects: Computer Science
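
    A minimal sketch of the kind of per-resolver latency measurement described, using the dnspython library (the resolver IPs, test domain, and sample count are placeholders, not the study's actual measurement harness):

        import time
        import dns.resolver  # dnspython, assumed available

        RESOLVERS = {"ISP": "192.0.2.53",      # placeholder ISP resolver
                     "Google": "8.8.8.8",
                     "Cloudflare": "1.1.1.1"}

        def resolver_latency(server_ip, name="example.com", tries=5):
            """Median wall-clock latency of A-record lookups via one resolver."""
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server_ip]
            samples = []
            for _ in range(tries):
                t0 = time.perf_counter()
                r.resolve(name, "A")
                samples.append(time.perf_counter() - t0)
            return sorted(samples)[len(samples) // 2]

        for label, ip in RESOLVERS.items():
            print(label, resolver_latency(ip))
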
  • 3. Waikul, Devendra BLUETOOTH-ENABLED ENERGY MONITORING SYSTEM WITH WIRELESS DATA ACQUISITION USING WEB SERVER

    Master of Sciences, Case Western Reserve University, 2020, EECS - Computer Engineering

    The internet of things (IoT) is rapidly becoming part of everyday life. The internet of things can be anything from smart assistants to smart devices such as LED light bulbs and electric outlets, to widely used wireless sensor networks. The electrical devices inside any household have the potential to become part of a wireless mesh network in which each device is monitored for its operation and electrical energy consumption. Monitoring electricity consumption inside a household, however, is still not actively utilized under the internet of things. The majority of houses are equipped with smart energy meters which transmit weekly or monthly power usage to electric companies. These readings are reflected in the electric bill every month and provide information too crude to pinpoint energy activities in the desired `meshes' of individual rooms of a household; they therefore cannot meet the growing expectations and requirements for abundant, accurate data for efficient electrical energy management. A comprehensive survey of existing energy monitoring devices and systems identified a few technologies, which focus either on a single device or on the household as a whole. These technologies cannot pinpoint every device in a household. Beyond this surface-level monitoring, these devices tend to be expensive, as they come with subscriptions and added devices for complete support. To compete with such technologies, an electric energy monitoring system is proposed. This system has three layers of software and hardware components. The first layer is the sensors, which make use of existing wireless sensor network mesh technology. Each sensor is a low-cost Bluetooth low energy (BLE) based module which monitors electrical devices. The second layer is the gateway, which acts as the middleman between the sensors and the third layer, the server. The gateway grabs data from the sensors, translates it into a server-compatible package, and sends it to the se (open full item for complete abstract)

    Committee: Philip Feng (Advisor); Christos Papachristou (Committee Member); Kenneth Loparo (Committee Member) Subjects: Computer Engineering
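
    A minimal sketch of the gateway layer's role as described above, assuming a hypothetical HTTP endpoint on the server (an illustration, not the thesis's actual protocol):

        import json
        import time
        import urllib.request

        SERVER_URL = "http://example.com/api/readings"  # hypothetical endpoint

        def forward_reading(sensor_id, watts):
            """Gateway step: wrap a BLE sensor reading in a JSON package
            the server understands and POST it upstream."""
            payload = json.dumps({
                "sensor": sensor_id,
                "watts": watts,
                "ts": time.time(),
            }).encode()
            req = urllib.request.Request(
                SERVER_URL, data=payload,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return resp.status  # 200 on success
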
  • 4. Li, Yu Securing Modern Cyberspace Using A Multi-Faceted Approach

    Doctor of Philosophy (PhD), Wright State University, 2019, Computer Science and Engineering PhD

    Security has become one of the most significant concerns for our cyberspace. Securing the cyberspace, however, is increasingly challenging. This can be attributed to the rapidly growing diversity and complexity of modern cyberspace. Specifically, it is no longer dominated by connected personal computers (PCs); instead, it is greatly characterized by cyber-physical systems (CPS), embedded systems, dynamic services, and human-computer interactions. Securing modern cyberspace therefore calls for a multi-faceted approach capable of systematically integrating these emerging characteristics. This dissertation presents our solutions in this direction. Specifically, we have devised automated, systematic security solutions for three critical aspects of our modern cyberspace: i) cyber-physical systems, ii) dynamic web services, and iii) social networks. This dissertation makes the following contributions. First, we have conducted a systematic vulnerability assessment of a real-world, complex CPS, namely Unmanned Systems Autonomy Services (UxAS). Our assessment has identified a set of exploitable vulnerabilities. Second, we have designed an adaptive traffic morphing algorithm to conceal CPS communications within background network traffic. Third, we have designed a CPS self-destruct model and studied the security-and-performance trade-off using probabilistic model checking. Fourth, we built a novel detection system for PHP-based malicious web shells. Finally, we have designed a novel detection system for suspicious behaviors in an online emotional support system.

    Committee: Junjie Zhang Ph.D. (Advisor); Bin Wang Ph.D. (Committee Member); Phu H. Phung Ph.D. (Committee Member); Michelle Andreen Cheatham Ph.D. (Committee Member) Subjects: Computer Engineering
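
    As a rough illustration of the traffic-morphing contribution (concealing CPS packets in background traffic), here is a toy sketch that pads or fragments packets to sizes sampled from observed background traffic; this is a heavily simplified assumption, not the dissertation's adaptive algorithm:

        import random

        def morph(packet: bytes, background_sizes, mtu=1500):
            """Reshape a CPS packet so its wire size mimics a size sampled
            from background traffic (toy version): pad small packets,
            fragment large ones."""
            target = min(random.choice(background_sizes), mtu)
            if len(packet) <= target:
                return [packet + b"\x00" * (target - len(packet))]
            # too big for the sampled size: split into target-sized pieces
            return [packet[i:i + target] for i in range(0, len(packet), target)]
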
  • 5. Heaton, Tyler Cloud Based IP Data Management Theory and Implementation for a Secure and Trusted Design Space

    Master of Science, The Ohio State University, 2019, Electrical and Computer Engineering

    IP data management in cloud computing presents challenges in scalability and integration. With extended IP lifecycles and considerably large design files, IP provenance has become a priority driving the need for a cloud design environment; research has gone into selecting a data management tool which promotes IP reuse and third-party integration. This thesis presents work on the optimization of IP (intellectual property) data management in an analog design environment. A myriad of IP data management platforms were researched, vetted, and implemented in cloud infrastructures. The features of the tools were compared and analyzed for performance in a cloud setting. Moreover, the client-server architecture for these IP data management tools has been defined to enable the success of analog and digital design in a cloud computing atmosphere. In order to promote IP provenance and tracking, data must exist in a trusted design space which monitors user and data traffic, captures changes made to IP, and allocates permissions across design environments.

    Committee: Waleed Khalil (Advisor); Steve Bibyk (Committee Member); Shane Smith (Advisor) Subjects: Electrical Engineering; Engineering
  • 6. Street, Julie Applying sports analytics to Division 1 women's volleyball: serve receive pass rating relative to server location, speed of the serve, location of the passer, and passing angle

    Master of Science in Exercise and Health Studies, Miami University, 2018, Exercise and Health Studies

    Sports analytics can be applied to the sport of volleyball to gain a competitive edge against an opponent. One hundred three Division I volleyball matches from the 2017 Mid-American Conference were analyzed using the parameters of server location, speed of the serve, location of the passer, and passing angle, as it relates to the serve receive pass rating. Results showed that there is a significant difference in serve receive pass rating with the speed of the serve, location of the passer, and passing angle. However, there is no significant difference in serve receive pass rating based on server location. Sports analytics can be used to maximize training and recruit skills that correlate to a winning volleyball program.

    Committee: Kyle Timmerman (Committee Chair); Emily Davie (Committee Member); Ronald Cox (Committee Member) Subjects: Statistics
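
    The significance tests reported above can be illustrated with a one-way ANOVA, sketched here with synthetic pass ratings grouped by serve-speed band (SciPy assumed; the numbers are invented, not the study's data):

        from scipy import stats

        # Synthetic pass ratings (0-3 scale) grouped by serve-speed band
        slow   = [2.4, 2.6, 2.3, 2.5, 2.7]
        medium = [2.1, 2.0, 2.3, 1.9, 2.2]
        fast   = [1.6, 1.8, 1.5, 1.9, 1.7]

        f_stat, p_value = stats.f_oneway(slow, medium, fast)
        # p < 0.05 -> pass rating differs significantly with serve speed
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
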
  • 7. Shedimbi, Prudhvi Rao Optimizing Request Routing in Heterogeneous Web Computation Environments

    MS, University of Cincinnati, 2016, Engineering and Applied Science: Computer Science

    With the increase in popularity of web applications, load balancers have become a vital instrument in the day-to-day operations of almost every web application. They become even more critical when the underlying servers have heterogeneous capabilities. This thesis focuses on developing an effective load balancing algorithm for a system with heterogeneous server capabilities. We develop an algorithm, Prum, that balances requests based on the servers' latest response time. We then compare the performance of this algorithm with Round Robin, which is the default load balancing function for various load balancers. Detailed analysis of the response time and request routing of both algorithms shows that Prum outperforms Round Robin when the underlying servers have diverse configurations.

    Committee: Paul Talaga Ph.D. (Committee Chair); Michal Kouril Ph.D. (Committee Member); Raj Bhatnagar Ph.D. (Committee Member) Subjects: Computer Science
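
    A toy sketch of routing on the servers' latest response times (the inverse-latency weighting below is an assumption for illustration, not the published Prum algorithm):

        import random

        class ResponseTimeBalancer:
            """Toy balancer in the spirit described above: prefer servers
            whose most recent response was fastest."""
            def __init__(self, servers):
                self.latest = {s: 0.05 for s in servers}  # seed with 50 ms

            def record(self, server, seconds):
                self.latest[server] = max(seconds, 1e-4)  # avoid div-by-zero

            def pick(self):
                # weight each server by the inverse of its last response time
                weights = [1.0 / self.latest[s] for s in self.latest]
                return random.choices(list(self.latest), weights=weights)[0]

        lb = ResponseTimeBalancer(["fast-box", "slow-box"])
        lb.record("slow-box", 0.30)   # heterogeneous: one slower machine
        lb.record("fast-box", 0.02)
        print(lb.pick())              # usually "fast-box"
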
  • 8. Glendenning, Kurtis Browser Based Visualization for Parameter Spaces of Big Data Using Client-Server Model

    Master of Science (MS), Wright State University, 2015, Computer Science

    Visualization is an important task in data analytics, as it allows researchers to view abstract patterns within the data instead of reading through extensive raw data. Allowing the ability to interact with the visualizations is an essential aspect since it provides the ability to intuitively explore data to find meaning and patterns more efficiently. Interactivity, however, becomes progressively more difficult as the size of the dataset increases. This project begins by leveraging existing web-based data visualization technologies and extends their functionality through the use of parallel processing. This methodology utilizes state-of-the-art techniques, such as Node.js, to split the visualization rendering and user interactivity controls between a client-server infrastructure. The approach minimizes data transfer by performing the rendering step on the server while allowing for the use of HPC systems to render the visualizations more quickly. In order to improve the scaling of the system with larger datasets, parallel processing and visualization optimization techniques are used.

    Committee: Thomas Wischgoll Ph.D. (Advisor); Michael Raymer Ph.D. (Committee Member); Derek Doran Ph.D. (Committee Member) Subjects: Computer Science
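
    A minimal sketch of the server-side rendering split described above, using Python's http.server and matplotlib rather than the project's Node.js stack: the server rasterizes the visualization and ships only a PNG, so the large dataset never crosses the wire (an illustrative assumption, not the project's code):

        import io
        from http.server import BaseHTTPRequestHandler, HTTPServer

        import matplotlib
        matplotlib.use("Agg")            # headless, server-side rendering
        import matplotlib.pyplot as plt

        class RenderHandler(BaseHTTPRequestHandler):
            """Render on the server; the client receives only the image."""
            def do_GET(self):
                fig, ax = plt.subplots()
                ax.scatter(range(1000), [i % 97 for i in range(1000)], s=2)
                buf = io.BytesIO()
                fig.savefig(buf, format="png")
                plt.close(fig)
                self.send_response(200)
                self.send_header("Content-Type", "image/png")
                self.end_headers()
                self.wfile.write(buf.getvalue())

        # HTTPServer(("", 8000), RenderHandler).serve_forever()
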
  • 9. Assaad, Firas Biometric Multi-modal User Authentication System based on Ensemble Classifier

    Master of Science, University of Toledo, 2014, Engineering

    User authentication performed through the traditional method of authorizing based on a username and password forms the weakest link between users and their authentication credentials. Stolen authentication credentials in the form of user id and password pairs result in countless large-scale and detrimental security breaches across many social and economic segments. Authentication based on biometric features offers a much stronger link between users and their credentials. In this project, we propose a multi-modal biometric authentication methodology to provide more trusted authentication of the actual user. The system utilizes two biometric traits in its authorization procedure: face and voice recognition. During training, the face recognition subsystem is responsible for detecting the face in an image using the Viola-Jones algorithm and implementing face preprocessing steps of eye detection and several geometric transformations to filter out unneeded details of the face. Using the Eigenfaces technique, the face recognition subsystem trains on multiple images of a user being authorized for access and stores the resulting user-specific templates in the authorized-users database. Similarly, during training, the voice recognition module acquires a voice sample from the user, extracts the voice features using the Mel-Frequency Cepstral Coefficients that represent the “voiceprint” of the user, and then models those features using the Linde–Buzo–Gray algorithm. Following training, each of the two modules, namely face recognition and voice recognition, performs as an independent base classifier within an ensemble design. The outputs from these two modules are fused together using score-level transformation to generate a final decision to either grant or deny access to a user who is in the process of authentication. The proposed algorithm has been trained and tested on the Yale Extended, NIST FERET, and ELSDSR databases. Sys (open full item for complete abstract)

    Committee: Gursel Serpen (Advisor); Jackson Carvalho (Committee Member); Lawrence Thomas (Committee Member) Subjects: Artificial Intelligence; Bioinformatics; Computer Engineering; Computer Science
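
    A minimal sketch of score-level fusion as described: normalize each matcher's score, combine with a weighted sum, and threshold the result (the weights, score ranges, and threshold are illustrative assumptions, not the project's trained values):

        def normalize(score, lo, hi):
            """Min-max normalize a matcher score to [0, 1]."""
            return (score - lo) / (hi - lo)

        def fuse(face_score, voice_score, w_face=0.6, threshold=0.5):
            """Weighted-sum score-level fusion: grant access iff the fused
            score clears the threshold."""
            fused = w_face * face_score + (1 - w_face) * voice_score
            return "grant" if fused >= threshold else "deny"

        face  = normalize(41.0, lo=0.0, hi=100.0)  # e.g. face-matcher similarity
        voice = normalize(0.72, lo=0.0, hi=1.0)    # e.g. voice-matcher likelihood
        print(fuse(face, voice))
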
  • 10. Panta, Purushottam Web Design, Development and Security

    Master of Science in Mathematics, Youngstown State University, 2009, Department of Mathematics and Statistics

    Websites are the most convenient way to present and disseminate information to the maximum number of people in the world. Web browsers are the means of rendering the information on web pages, the basic building blocks of a website, and web programming is the basic structure (architecture) of each web page. The thesis on "Web Design, Development and Security" is a complete analysis of website design and development. Web sites should be able to present abundant information to a visitor in a well-organized manner. In addition, there must be a reliable transfer of secure information between server and client. Other major factors such as user friendliness, layout, simplicity, and ease of rendering in the browser are closely related to the quality of a website. This thesis elaborates on a number of issues related to web design and development. These principles are illustrated and demonstrated in the design of several websites that I have built.

    Committee: John Sullins PhD (Advisor); Graciela Perera PhD (Committee Member); Jamal Tartir PhD (Committee Member) Subjects: Computer Science
  • 11. XU, ZHIYONG IMPROVING ROUTING AND CACHING PERFORMANCE IN DHT BASED PEER-TO-PEER SYSTEMS

    PhD, University of Cincinnati, 2003, Engineering : Computer Science and Engineering

    With the exciting successes of file sharing applications such as Napster, Gnutella and Kazaa, the Peer-to-Peer (P2P) system has become one of the fastest growing applications on the Internet. A P2P system can be viewed as a distributed system in which nodes have identical responsibilities. It has generated great research interest because it achieves many fascinating technical characteristics such as fully decentralized control, self-autonomy and load balancing. However, its appealing properties also bring more difficult problems than traditional Client/Server applications in system design and implementation, especially in large-scale environments. A critical problem in a P2P system is to search for and locate resources among peers. This task is fulfilled by the routing service. As the most frequently executed operations in P2P applications, the efficiency of the routing procedures has great influence on overall system performance. In recent years, Distributed Hash Table (DHT) based routing algorithms have been proposed as a fundamental building block for P2P systems. In current DHT algorithms, routing tasks are uniformly distributed across all the system peers. Each peer maintains only a small amount of system routing information, and a routing procedure is accomplished by the coordination of several peers. However, the neglect of the characteristics of real P2P systems reduces their efficiency and makes it difficult to achieve ideal performance. Caching is another important technical aspect. A good caching scheme can significantly decrease unnecessary network traffic and reduce client-perceived latency. The main contribution of this dissertation is: by taking P2P system characteristics such as topology information, peers' computer resource diversity and workload distribution into consideration, system routing and caching performance in current DHT algorithms are greatly improved. We also address the maintenance overhead problem and propose our preli (open full item for complete abstract)

    Committee: Dr. Yiming Hu (Advisor) Subjects: Computer Science
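
    The DHT routing primitive this work builds on can be sketched as consistent hashing: keys and peers hash onto one identifier ring, and a key is served by its clockwise successor (a toy version; real DHTs such as Chord add finger tables for O(log n) hops):

        import bisect
        import hashlib

        def h(key: str) -> int:
            """Hash a peer name or data key onto a 2**32 identifier ring."""
            return int.from_bytes(hashlib.sha1(key.encode()).digest()[:4], "big")

        class ToyDHT:
            def __init__(self, peers):
                self.ring = sorted((h(p), p) for p in peers)

            def lookup(self, key):
                """Route to the first peer clockwise from the key's hash."""
                ids = [i for i, _ in self.ring]
                idx = bisect.bisect_right(ids, h(key)) % len(self.ring)
                return self.ring[idx][1]

        dht = ToyDHT(["peer-a", "peer-b", "peer-c"])
        print(dht.lookup("song.mp3"))
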
  • 12. WANG, JUN HIGH PERFORMANCE I/O ARCHITECTURES AND FILE SYSTEMS FOR INTERNET SERVERS

    PhD, University of Cincinnati, 2002, Engineering : Computer Science and Engineering

    Previous studies have shown that disk I/O is one of the major performance bottlenecks of Web proxy servers, one class of Internet servers. Most conventional file systems and I/O architectures do not work well because of high overheads. This dissertation makes several contributions to solving this problem. First, it studies the Log-structured File System (LFS) as one possible existing solution. We find two new ways to significantly improve LFS performance: one is a novel reordering write buffer called WOLF, which reduces LFS garbage collection overheads by classifying data into two classes in the write buffer and forcing a bimodal distribution on the disk; the other is a data reorganization policy called PROFS, which integrates disk physical geometry to effectively optimize data placement on multi-zone disks. Second, we develop a novel User-space, Customized File System called UCFS that can boost I/O performance for Web proxy servers. UCFS is a user-level software component of a proxy server which manages data on a raw disk or disk partition. It is not only easy and inexpensive to implement, but also has good portability and maintainability. UCFS uses several novel designs, including a novel file system called the Cluster-structured File System (CFS), to provide persistently high I/O performance without degradation. Comprehensive experiments show that UCFS can markedly improve I/O performance for Web proxy servers by more than one order of magnitude. In the future, WOLF and PROFS can be applied to I/O storage systems that use log-structuring techniques, while UCFS could be extended to other areas such as Web servers and mail/news servers.

    Committee: Dr. Yiming Hu (Advisor) Subjects: Computer Science
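
    A toy sketch of the reordering-buffer idea attributed to WOLF (classifying buffered blocks into two classes before flushing); the hot/cold heuristic below is an assumption for illustration, not the actual policy:

        def partition_buffer(blocks, write_counts, hot_threshold=3):
            """Split a write buffer into hot and cold groups before flushing.
            Separating frequently rewritten blocks from stable ones means a
            segment tends to end up either mostly live or mostly dead (a
            bimodal distribution), which cuts LFS garbage-collection work."""
            hot  = [b for b in blocks if write_counts.get(b, 0) >= hot_threshold]
            cold = [b for b in blocks if write_counts.get(b, 0) <  hot_threshold]
            return hot, cold   # flush each group to its own on-disk segment
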
  • 13. Ma, Chao ACE Student Tracker

    Master of Science, The Ohio State University, 2011, Computer Science and Engineering

    ACE Student Tracker is a system that tracks Applying, Current and Employed graduate students in the Department of Electrical and Computer Engineering at The Ohio State University. In this thesis, we first overview its functionality and then focus on the technical aspects: the architecture; the use of SharePoint to integrate authentication, authorization and the user interfaces of an ASP.NET application and SQL Server Reporting Services (SSRS) reports; and the use of LINQ to SQL as the main data access technology. Various security considerations, design choices, development practices and lessons learned are presented.

    Committee: Furrukh Khan PhD (Advisor); Rajiv Ramnath PhD (Committee Member) Subjects: Computer Science
  • 14. Zhou, Yin Page connection representation: An object-oriented and dynamic language for complex web applications

    Master of Science (MS), Ohio University, 2001, Electrical Engineering & Computer Science (Engineering and Technology)

    This thesis develops an object-oriented dynamic language, Page Connection Representation (PCR), which defines the inner workings of, and connections between, active server-side components so as to set up an infrastructure for database-backed web applications. The language automates web application development by generating executable service components from the compilation of PCR code. An easy-to-learn language for rapid development with powerful performance, PCR mixes 3GL and 4GL language features. The compiler for PCR is coded in C. It generates final modules as C++ objects which can be compiled once again by a third-party compiler into CGI or FastCGI code. Both the development and the testing are carried out on a Linux platform running Red Hat 6.0 with Apache as the web server. 4GL database connectivity to the MySQL database engine is supported by PCR. Furthermore, architectural comparisons of PCR with other popular web technologies like Active Server Pages (ASP) and the Extensible Markup Language (XML), as well as a demonstration of the E-Magazine Management System (EMMS) developed using PCR, are presented. These show that PCR may set up a new, useful web application infrastructure for the freeware world with its ease of use, performance, scalability and portability.

    Committee: Liming Cai (Advisor) Subjects:
  • 15. Hawkins, Steve An Internship in Technical and Scientific Communication with Dell Inc

    Master of Technical and Scientific Communication, Miami University, 2003, Technical and Scientific Communication

    In January of 2000, I accepted a full-time position with Dell Inc. (formerly known as Dell Computer Corporation) located in Austin, Texas. This report describes the first 18 months of my tenure at Dell and focuses on a major project I completed during this time. I began the project in January 2001 and completed it in March 2001. The other chapters in this report provide a description of Dell Inc., an overview of my internship and my major and minor writing projects, an analysis of the problem-solving model, and some examples of the technical writing assignments that I developed at Dell.

    Committee: Jean Lutz (Advisor) Subjects: Computer Science
  • 16. Stahr, Michael DESIGN AND IMPLEMENTATION OF AN EMPLOYEE JOB SCHEDULING AND PROGRESSION TRACKING SYSTEM

    Master of Systems Analysis, Miami University, 2002, Systems Analysis

    This is a project-oriented thesis that features the design and development of a software system used for employee hourly progression tracking and job scheduling in an industrial environment. The software was developed at a professional level, as verified by its adoption by one of the largest boxboard companies in the industry for its day-to-day operations. Some of the main features of this system are: specialized ActiveX controls; object-oriented design utilizing classes in Visual Basic 6.0 and C++; stored procedures embedded in Microsoft SQL Server 7.0; and TCP/IP technology. The system consists of two integrated subsystems, Tracker and Crew Master. The first, Tracker, was designed to replace a manual system used by the accounting department for tracking employee hourly progression. The second, Crew Master, replaces a manual system for scheduling hourly employees for job positions as well as vacation scheduling.

    Committee: Jim Kiper (Advisor) Subjects:
  • 17. Hangwei, Qian Dynamic Resource Management of Cloud-Hosted Internet Applications

    Doctor of Philosophy, Case Western Reserve University, 2012, EECS - Computer and Information Sciences

    The Internet is evolving toward service-oriented computing platforms (e.g., cloud computing platforms such as Amazon EC2 and Microsoft Azure). In these platforms, service providers (owners of the platforms) offer resource pools by building multiple geo-distributed data centers; application providers (owners of the applications) outsource the hosting of their applications to these platforms and pay for the amount of resources used, as a utility. These multi-tenant platforms need to dynamically allocate resources to applications so as to meet their demand variation. In this thesis, we address several issues of dynamic resource management in these platforms. On the one hand, we consider resource provisioning problems within data centers. In order to allocate resources to applications quickly, we propose deploying ghost virtual machines (VMs) which host spare application instances across the physical machines. When an application needs more instances, we can configure the request distributor to forward requests to ghost VMs, which takes only 5-7 seconds. Also, to deal with scalability issues in a mega data center (with hundreds of thousands of servers), we introduce a hierarchical resource management scheme in which servers are divided into groups (pods), each with about 5,000 servers, and existing techniques are employed to manage resources in each pod efficiently. Meanwhile, multiple strategies are explored to balance the load among the pods. In addition, we also propose a new data center architecture in which we can apply a DNS-based mechanism to balance the load among the access links which connect the data center to the Internet. On the other hand, we address resource management problems among multiple data centers. We propose a unified approach to decide in how many (and which) data centers each application should be deployed, and how client requests are forwarded to the geo-distributed service replicas. We make these decisions based on a min-cost network flow model, and appl (open full item for complete abstract)

    Committee: Michael Rabinovich PhD (Committee Chair); Vincenzo Liberatore PhD (Committee Member); Guo-Qiang Zhang PhD (Committee Member); Christos Papachristou PhD (Committee Member) Subjects: Computer Science
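
    The min-cost network-flow framing can be sketched with networkx (the graph, costs, and capacities below are invented for illustration, not the thesis's model): client load flows to data centers along edges weighted by network cost and capped by data-center headroom.

        import networkx as nx

        G = nx.DiGraph()
        G.add_node("clients", demand=-10)   # 10 units of request load to place
        G.add_node("sink", demand=10)
        # weight = network cost to each data center; capacity = link limit
        G.add_edge("clients", "dc-east", weight=2, capacity=10)
        G.add_edge("clients", "dc-west", weight=5, capacity=10)
        # data-center headroom modeled as capacity toward the super-sink
        G.add_edge("dc-east", "sink", weight=0, capacity=6)
        G.add_edge("dc-west", "sink", weight=0, capacity=8)

        flow = nx.min_cost_flow(G)
        print(flow["clients"])   # e.g. {'dc-east': 6, 'dc-west': 4}
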
  • 18. Sharma, Mayank PERFORMANCE EVALUATION OF AN ENHANCED POPULARITY-BASED WEB PREFETCHING TECHNIQUE

    Master of Science, University of Akron, 2006, Computer Science

    The growth of the World Wide Web has been tremendous over the last decade, but at the same time, it has exacerbated the response time perceived by users in accessing web documents. Web caching and web prefetching are techniques used to alleviate such problems. Caching improves access latency by locally storing previously accessed web documents, whereas prefetching techniques rely on predictive approaches to speculatively retrieve and store web objects in the cache for future use. Predictions on what to prefetch are made based on different criteria such as access patterns, popularity, and the structure of documents accessed. Prefetching approaches differ in their implementation and complexity, but more importantly in the performance that can be achieved based upon prediction accuracy. In this work, we introduce a simple and transparent popularity-based prefetching algorithm which combines both the top-10 and next-n prefetching approaches. In addition to using access frequency as the criterion for prefetching, we also use the time of access of web documents to generate the top-10 list. This approach of using access frequency and time of access is known as the GDSP approach, which has been used in cache management. Instead of generating a next-n list for all the documents accessed by users, we log the next-n documents for the top-10 documents only, thus reducing complexity and overhead. The results obtained from simulations, in terms of hit rate and prefetching effectiveness, show the efficacy of our proposed algorithm compared to other approaches. Future work includes making the proposed approach dynamic by refreshing the top-10 list with the latest GDSP values and prefetching dynamic web documents.

    Committee: Xuan-Hien Dang (Advisor) Subjects: Computer Science
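
    A toy sketch combining a popularity-aged top-10 list with next-n successor logging, in the spirit described above (the aging factor and list sizes are assumptions, not the thesis's exact GDSP computation):

        from collections import defaultdict

        class PopularityPrefetcher:
            """Toy top-10 + next-n prefetcher. The popularity score decays
            older accesses (GDSP-style aging, so recency matters alongside
            frequency); successor lists are kept only for top-10 documents."""
            def __init__(self, n=4):
                self.score = defaultdict(float)   # doc -> aged popularity
                self.next_n = defaultdict(list)   # top-10 doc -> successors
                self.last = None
                self.n = n

            def top10(self):
                return sorted(self.score, key=self.score.get, reverse=True)[:10]

            def access(self, doc):
                self.score[doc] = self.score[doc] * 0.9 + 1.0  # age + count
                if self.last is not None and self.last in self.top10():
                    succ = self.next_n[self.last]
                    if doc not in succ:
                        succ.append(doc)
                        del succ[:-self.n]         # keep the newest n only
                self.last = doc

            def prefetch_for(self, doc):
                # prefetch successors only for currently popular documents
                return list(self.next_n.get(doc, [])) if doc in self.top10() else []
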