Search Results

(Total results 270)

  • 1. Panta, Purushottam Web Design, Development and Security

    Master of Science in Mathematics, Youngstown State University, 2009, Department of Mathematics and Statistics

    Websites are the most convenient way to present and disseminate information to a worldwide audience. Web browsers render the information on web pages, the basic building blocks of a website, and web programming provides the basic structure (architecture) of each page. This thesis, "Web Design, Development and Security," is a complete analysis of website design and development. A website should present abundant information to a visitor in a well-organized manner, and there must be a reliable transfer of secure information between server and client. Other major factors, such as user friendliness, layout, simplicity, and ease of rendering in the browser, are closely related to the quality of a website. This thesis elaborates on a number of issues related to web design and development, and these principles are illustrated and demonstrated in several websites that I have designed.

    Committee: John Sullins PhD (Advisor); Graciela Perera PhD (Committee Member); Jamal Tartir PhD (Committee Member) Subjects: Computer Science
  • 2. Patni, Harshal Real Time Semantic Analysis of Streaming Sensor Data

    Master of Science (MS), Wright State University, 2011, Computer Science

    The emergence of dynamic information sources such as social, mobile, and sensor networks has led to enormous streams of real-time data on the web, often called the era of Big Data [1]. Research studies suggest that these dynamic networks have created more data in the last three years than in the entire history of civilization, and this trend will only increase in the coming years [1]. A Gigaom article on Big Data shows how the total information generated by these dynamic sources has surpassed the total available storage capacity. With this problem of ever-increasing data in mind, this thesis focuses on semantically integrating and analyzing multiple, multimodal, heterogeneous streams of weather data, with the goal of creating meaningful thematic abstractions in real time. This is accomplished by implementing an infrastructure for creating and mining thematic abstractions over massive amounts of real-time sensor streams. The evaluation section shows a 69% data reduction with this approach.

    Committee: Amit Sheth PhD (Advisor); Ramakanth Kavaluru PhD (Committee Member); Krishnaprasad Thirunarayan PhD (Committee Member) Subjects: Computer Science; Geographic Information Science
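The thematic-abstraction idea in this abstract can be sketched in a few lines: map raw readings to coarse labels and emit a record only when the label changes. This is an illustrative toy, not the thesis's infrastructure; the thresholds, labels, and the `summarize_stream` helper are invented for the example.

```python
# Toy sketch of thematic abstraction over a sensor stream (illustrative only).

def abstract_temperature(celsius):
    """Map a raw reading to a coarse thematic label (thresholds are invented)."""
    if celsius <= 0:
        return "freezing"
    if celsius <= 15:
        return "cold"
    if celsius <= 28:
        return "mild"
    return "hot"

def summarize_stream(readings):
    """Collapse consecutive readings with the same abstraction into one record."""
    summary = []
    for t in readings:
        label = abstract_temperature(t)
        if not summary or summary[-1] != label:
            summary.append(label)
    return summary

raw = [-2, -1, 3, 5, 14, 20, 25, 27, 30, 31, 12]
themes = summarize_stream(raw)          # 11 raw readings -> 5 thematic records
reduction = 1 - len(themes) / len(raw)  # fraction of records saved
```

Real thematic abstraction would of course reason over multimodal streams and background knowledge, but the same emit-on-change idea is what yields the data reduction.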
  • 3. Pschorr, Joshua SemSOS : an Architecture for Query, Insertion, and Discovery for Semantic Sensor Networks

    Master of Science (MS), Wright State University, 2013, Computer Science

    With sensors, storage, and bandwidth becoming ever cheaper, there has recently been a drive to make sensor data accessible on the Web. However, because of the vast number of sensors collecting data about our environment, finding relevant sensors on the Web and then interpreting their observations is a non-trivial challenge. The Open Geospatial Consortium (OGC) defines a web service specification known as the Sensor Observation Service (SOS), designed to standardize the way sensors and sensor data are discovered and accessed on the Web. Though this standard goes a long way toward providing interoperability between sensor data producers and consumers, it is predicated on the idea that the consuming application is equipped to handle raw sensor data. Sensor data consumers are generally interested not just in the raw data itself but in actionable information about their environment. The approaches for dealing with this are either to make each individual consuming application smarter or to make the data served to it smarter. This thesis presents an application of the latter approach, accomplished by providing a more meaningful representation of sensor data by leveraging semantic web technologies. Specifically, this thesis describes an approach to sensor data modeling, reasoning, discovery, and query over richer semantic data derived from raw sensor descriptions and observations. The artifacts resulting from this research include an implementation of an SOS service that hews to both Sensor Web and Semantic Web standards in order to bridge the gap between syntactic and semantic sensor data consumers, and that has been proven by use in a number of research applications storing large amounts of data; it also serves as an example of an approach for designing applications that integrate syntactic services over semantic models and allow for interaction with external reasoning systems.
As more sensors and observations move o (open full item for complete abstract)

    Committee: Krishnaprasad Thirunarayan Ph.D. (Advisor); Amit Sheth Ph.D. (Committee Member); Bin Wang Ph.D. (Committee Member) Subjects: Computer Science; Geographic Information Science; Information Systems; Remote Sensing; Systems Design; Web Studies
  • 4. Bhat, Akshaykumar Evaluation of Beam-to-Column Gravity Moment Connections

    MS, University of Cincinnati, 2020, Engineering and Applied Science: Civil Engineering

    Traditionally, columns in two-way unequal depth beam-to-column gravity moment connections, with large eccentricity between the compressive flange forces, are designed for the governing limit state of web compression buckling. This study uses finite element modeling of such connections to identify the governing limit state and examine the validity of this traditional design assumption. It was hypothesized that web local crippling, not web compression buckling, would control when the compressive flange forces delivered from the unequal depth beams to the column are far enough apart. The work presented in this report offers (1) application of the design equations; (2) benchmarking of finite element analysis techniques against an experimental two-way equal depth beam-to-column moment connection; (3) application of the same modeling techniques to, and investigation of, the compression region in two-way equal and unequal depth beam-to-column gravity moment connections, with various combinations of inclusion or exclusion of initial imperfections and axial load; and (4) conclusions based on the failure modes and the scope for future work. It was concluded that finite element analyses including initial imperfections and axial load captured both web compression buckling and web local crippling, though it was difficult to distinguish between the failure modes. The finite element models that exclude initial imperfections and axial load predicted the failure modes well. The peak loads in all cases were higher than those given by the AISC design equations. Therefore, further investigation of web compression buckling and web local crippling in unequal depth beam-to-column moment connections is recommended.

    Committee: Rachel Chicchi Ph.D. (Committee Chair); Kent Harries Ph.D. (Committee Member); Bahram Shahrooz Ph.D. (Committee Member) Subjects: Civil Engineering
  • 5. Gomadam, Karthik Semantics Enriched Service Environments

    Doctor of Philosophy (PhD), Wright State University, 2009, Computer Science and Engineering PhD

    During the past seven years, service-centric computing has emerged as the preferred approach to architecting complex software. Software is increasingly developed by integrating remotely hosted components, popularly called services. This architectural paradigm, also called Service Oriented Architecture (SOA), brings the benefits of interoperability, agility, and flexibility to software design and development: one can easily add or change features in existing systems, either by adding new services or by replacing existing ones. Two popular approaches have emerged for realizing SOA. The first is based on the SOAP protocol for communication and the Web Service Description Language (WSDL) for service interface description. SOAP and WSDL are built over XML, thus guaranteeing minimal structural and syntactic interoperability. In addition to SOAP and WSDL, the WS-* (WS-Star) stack, or SOAP stack, comprises other standards and specifications that enable features such as security and service integration. More recently, the RESTful approach has emerged as an alternative to the SOAP stack. This approach advocates using the HTTP operations GET/PUT/POST/DELETE as standard service operations and the REpresentational State Transfer (REST) paradigm for maintaining service state. The RESTful approach leverages the HTTP protocol and has gained a lot of traction, especially in the context of consumer Web applications such as Maps. Despite their growing adoption, the stated objectives of interoperability, agility, and flexibility have been hard to achieve using either approach, largely because of the various heterogeneities that exist between different service providers. These heterogeneities are present at both the data and the interaction levels. Fundamental to addressing them are the problems of service Description, Discovery, Data mediation, and Dynamic configuration.
Currently, service description (open full item for complete abstract)

    Committee: Amit Sheth PhD (Committee Chair); Michael Raymer PhD (Committee Member); Lakshmish Ramaswamy PhD (Committee Member); Shu Schiller PhD (Committee Member); Guozhu Dong PhD (Committee Member); Krishnaprasad Thirunarayan PhD (Committee Member) Subjects: Computer Science
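The SOAP-versus-REST contrast described in this abstract can be made concrete with a toy "order" service. The endpoint URLs and the `GetOrder` operation name are hypothetical, chosen only to show that SOAP names the operation inside an XML envelope posted to a single endpoint, while REST names the resource in the URI and uses the HTTP verb as the operation.

```python
# SOAP style: one service endpoint; the operation is named in an XML envelope
# and everything travels over HTTP POST. (Endpoint and operation are made up.)
soap_request = {
    "url": "https://example.org/OrderService",
    "method": "POST",
    "body": (
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body><GetOrder><orderId>42</orderId></GetOrder></soap:Body>"
        "</soap:Envelope>"
    ),
}

# RESTful style: the resource is named in the URI; the HTTP verb is the operation.
rest_requests = {
    "read":   ("GET",    "https://example.org/orders/42"),
    "create": ("POST",   "https://example.org/orders"),
    "update": ("PUT",    "https://example.org/orders/42"),
    "delete": ("DELETE", "https://example.org/orders/42"),
}
```

The heterogeneity problem the abstract raises shows up even here: another provider might call the same operation `FetchOrder` or expose `/purchase-orders/42`, and neither style resolves that mismatch by itself.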
  • 6. Mwaura, Catherine An investigation of the Innovation-decision Process of Faculty Members with Respect to Web-based Instruction

    Doctor of Philosophy (PhD), Ohio University, 2003, Instructional Technology (Education)

    Although web-based instruction (WBI) is growing faster than any other instructional technology, a large number of faculty members in universities are hesitant or reluctant to adopt it. Using a qualitative approach, this study investigates the factors that influence faculty members when deciding to adopt or reject web-based instruction, based on the research question: What factors influence the decision process to adopt or reject web-based instruction among faculty members? Rogers' (1995) theory of the diffusion of innovations, focusing on the model of the innovation-decision process, provides the theoretical framework for this study. The researcher interviewed 31 university faculty members and 5 administrators. The faculty interviewees included those who had adopted WBI as well as those who were planning to adopt or had rejected WBI. Among those who adopted WBI, some used it as a supplement to face-to-face instruction while others taught at a distance. The researcher also attended a workshop, a seminar, and a conference to acquire an understanding of the support provided to faculty members using or planning to use web-based instruction. The primary factors that influence faculty members in deciding to adopt or reject WBI include: 1) training faculty members in how to develop and implement WBI, 2) the complexity of implementing web-based instruction, 3) the compatibility of WBI with faculty members' teaching values, 4) the time and effort needed to develop and implement WBI, 5) the relative advantages of using WBI, and 6) the lack of incentives (awards, annual reviews, and promotions).
The implications of this study are that universities should 1) encourage collaboration among faculty members as they learn, develop and implement WBI by providing them with opportunities, during faculty meetings, to demonstrate how they are using WBI, 2) provide support that focuses on faculty's instructional needs for WBI, 3) consider faculty members' knowledge and technical skill levels in (open full item for complete abstract)

    Committee: Sandra Turner (Advisor) Subjects: Education, General
  • 7. Burke, Sarah Working as an Agent of Change: Writing Rapidly and Establishing Standards in Web Software Documentation

    Master of Technical and Scientific Communication, Miami University, 2003, Technical and Scientific Communication

    This report discusses my internship experiences at Fig Leaf Software in Washington, DC, where I worked as a technical writer during the summer of 2001. In the report, I describe the young, rapid-development environment in which I worked, my major tasks and projects, and a significant project that I completed during my internship. During this project, I faced many challenges in developing the company's first client installation guide, including staying within the allotted hours and budget, gaining access to technical information, and establishing standards for a new document type. After discussing these challenges, I examine my role and value as an agent of change at Fig Leaf Software and present an expanded organizational role for technical communication practitioners.

    Committee: Katherine Durack (Advisor) Subjects: Information Science
  • 8. Sledzik, Christopher PR and Online Branding Corporate Perceptions in a Digital Space: Branding Goodyear Engineered Products in the Automotive Aftermarket Online

    MA, Kent State University, 2012, College of Communication and Information / School of Media and Journalism

    This thesis investigates how the main principles of online public relations are applied to create a brand for organizations in the digital world. A review of scholarly journals, acclaimed texts, and recent articles discusses how public relations and information architecture theories and strategies have been applied to the World Wide Web to shape the perceptions of key stakeholders. Based on the research findings, a content analysis of brand web pages and social media activity was conducted to produce recommendations for a current business case.

    Committee: Bob Batchelor PhD (Committee Co-Chair); Michele Ewing MA (Committee Co-Chair); Stefanie Moore MA (Committee Member) Subjects: Communication; Marketing; Mass Communications; Technical Communication; Web Studies
  • 9. Konduri, Aparna Clustering of Web Services Based on Semantic Similarity

    Master of Science, University of Akron, 2008, Computer Science

    Web services are proving to be a convenient way to integrate distributed software applications. As service-oriented architecture gains popularity, vast numbers of web services have been developed all over the world, but finding relevant or similar web services through a registry such as UDDI is a challenging task. Current UDDI search uses keywords from web service and company information in its registry to retrieve web services. This information cannot fully capture users' needs and may miss potential matches; the underlying functionality and semantics of web services need to be considered. In this study, we explore the semantics of web services using WSDL operation names and parameter names along with WordNet. We compute the semantic similarity of web services and use this data to generate clusters. We then use a novel approach to represent the clusters and utilize that information to predict the similarity of any new web service. This approach has yielded good results and can be used efficiently by any web service search engine to retrieve similar or related web services.

    Committee: Chien-Chung Chan (Advisor) Subjects: Computer Science
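The clustering approach in this abstract can be sketched as follows. The thesis computes semantic similarity from WSDL operation and parameter names using WordNet; to keep this example self-contained, plain token-overlap (Jaccard) similarity over camelCase tokens stands in for WordNet similarity, and a greedy single pass stands in for the thesis's clustering method. The operation names and threshold are invented for illustration.

```python
import re

def tokens(name):
    """Split a camelCase/underscore operation name into lowercase tokens."""
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", name)
    return {p.lower() for p in parts}

def similarity(op_a, op_b):
    """Jaccard overlap of name tokens -- a crude stand-in for WordNet similarity."""
    a, b = tokens(op_a), tokens(op_b)
    return len(a & b) / len(a | b)

def cluster(operations, threshold=0.25):
    """Greedy pass: join an operation to the first cluster whose representative
    is similar enough, otherwise start a new cluster."""
    clusters = []
    for op in operations:
        for c in clusters:
            if similarity(op, c[0]) >= threshold:
                c.append(op)
                break
        else:
            clusters.append([op])
    return clusters

ops = ["getWeatherForecast", "getWeatherReport", "bookFlight", "reserveFlightSeat"]
clusters = cluster(ops)
```

With WordNet in place of token overlap, `bookFlight` and `reserveFlightSeat` would match on the synonymy of "book" and "reserve" rather than only on the shared "flight" token.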
  • 10. Caiazza, Evyn Defining user demographics to understand the success of digitally designed modalities in STEM topics

    MFA, Kent State University, 2024, College of Communication and Information / School of Visual Communication Design

    Education has always required the use of tools and resources to effectively impact learners. The creation of these tools, or modalities, requires advanced knowledge of design and an understanding of the needs of the users. Designers can use contemporary research on technology, inclusivity, and accessibility to develop impactful learning modalities. Written, visual, and multimodal content are explored in this research. Four digital web-responsive educational modalities (essay, quick text, infographic, and interactive) were created, all providing the same information about tectonic plates. Users were randomly assigned a modality and then given as much time as needed to view or interact with it. When finished, they filled out a survey about their demographics, their comprehension of the information from the modality, and their educational experiences. While modality type did not impact users' comprehension scores, it was found that demographic information could play a role in users' ability to understand scientific content. Adults with STEM (science, technology, engineering, and mathematics) careers, and children with an adult in the household with a STEM career, performed better on the comprehension section than those not frequently exposed to STEM topics. Additional demographic information was analyzed, and standard deviations show that this additional information did not impact scores. By understanding which demographic details impact user education, better educational resources can be designed to accommodate user needs.

    Committee: Kenneth O’Grady (Committee Member); Jessica Barness (Advisor); Gretchen Rinnert (Committee Member) Subjects: Web Studies
  • 11. Saraf, Nikita Sandip Leveraging Commercial and Open Source Software to Process and Visualize Advanced 3D Models on a Web-Based Software Platform

    MS, University of Cincinnati, 2020, Engineering and Applied Science: Computer Science

    Today, most successful business models rely on software to bridge the gap between data and business requirements. Changes in business strategy require the software to adapt as well; as a result, available software products are continuously evolving and rapidly changing with new technologies and user requirements. In early 2017, the Ohio Department of Transportation (ODOT) and the University of Cincinnati started developing a web application, called the Common Operating Platform (COP), to remotely process drone-captured images into 3D models using commercial (Pix4D) and open-source (OpenDroneMap) software. The idea is to engage shared hardware and software resources to perform such complex tasks. The platform immediately gained popularity and is actively used by personnel at ODOT. A preliminary study shows that the Common Operating Platform has considerable room to incorporate more features. Hence, this thesis introduces Common Operating Platform v11.0, which comes with more complex 3D modeling and visualization workflows. The purpose of this work is to enhance the functionality, reliability, efficiency, and usability of the Common Operating Platform. First, this document lists the shortcomings of the existing system and proposes solutions to eliminate them. Second, the proposed system architecture is compared against the existing architecture. Finally, the proposed enhancements are implemented by leveraging commercial (Pix4D) and open-source (MeshLabJS) software tools. Other features that improve system performance, efficiency, and reliability are also discussed.

    Committee: Arthur Helmicki Ph.D. (Committee Chair); Victor Hunt Ph.D. (Committee Member); Nan Niu Ph.D. (Committee Member) Subjects: Computer Science
  • 12. Catanese, Alexander Organic Web Design: Exploring Nature as Metaphor in Responsive Web Design

    MFA, Kent State University, 2017, College of Communication and Information / School of Visual Communication Design

    The rise of responsive design as an approach to web design in the last decade has shaped the ways designers consider the web as a medium. As this approach has developed, its practitioners have begun forming patterns and templates that may limit alternative modes of thinking in the field. Criticism of this approach is still in its early stages, leaving open an opportunity for reflection, expansion, and the exploration of alternative modes of thinking and making for the medium. The web browser has properties that are inherently flexible, fluid, and adaptive, inviting a parallel exploration of flexible, fluid, adaptive systems as external inspiration for web design. Nature has been a source of inspiration throughout the history of the arts and design, providing analogical and metaphorical modes of thinking that expand upon traditional approaches. Biomimicry is an emerging practice within industrial design, architecture, and engineering, yet little has been discussed within the field of visual communication design, especially within responsive web design. In this thesis, relationships shared by design and the natural world are investigated through secondary research, and critical making is used as primary research to examine the organic properties of the web browser. Three prototypes were designed to explore, expand, and reflect upon these organic properties within responsive design. Each prototype was subsequently reviewed by design educators and professionals. This thesis proposes that looking to nature's principles and forms can inform design for the web as a medium, providing an approach that builds upon and extends the capacities of responsive web design.

    Committee: Jessica Barness MFA (Advisor); Ken Visocky O'Grady MFA (Committee Member); Aoife Mooney MA (Committee Member) Subjects: Design
  • 13. Rude, Howard Intelligent Caching to Mitigate the Impact of Web Robots on Web Servers

    Master of Science (MS), Wright State University, 2016, Computer Science

    With an ever-increasing amount of data shared and posted on the Web, the desire and necessity to automatically glean this information has led to an increase in the sophistication and volume of software agents called web robots or crawlers. Recent measurements, including our own across the entire logs of Wright State University Web servers over the past two years, suggest that at least 60% of all requests originate from robots rather than humans. Web robots display different statistical and behavioral patterns in their traffic compared to humans, yet current Web server optimizations presume that traffic exhibits predominantly human-like characteristics. Robots may thus be silently degrading the performance and scalability of our web systems. This thesis investigates a new take on a classic performance tool, namely web caches, to mitigate the impact of robot traffic on web server operations. It proposes a cache system architecture that: (i) services robot and human traffic in separate physical memory stores, with separate policies; (ii) uses an adaptable policy for admitting robot-related resources; and (iii) combines a deep neural network with Bayesian models to improve request prediction. Experiments with real data demonstrate (i) a significant reduction in bandwidth usage for prefetching and (ii) improvements in hit rate for human-driven traffic compared to a number of baselines, especially in configurations where web caches have limited size.

    Committee: Derek Doran Ph.D. (Committee Chair); Tanvi Banerjee Ph.D. (Committee Member); John Gallagher Ph.D. (Committee Member) Subjects: Computer Science
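The dual-store cache architecture from point (i) of this abstract can be sketched as two independent LRU stores selected by a robot classifier, so that robot bursts cannot evict resources that humans are likely to re-request. This is a minimal illustration, not the thesis's implementation: the user-agent substring check is a naive stand-in for a real robot detector, and the capacities are arbitrary.

```python
from collections import OrderedDict

class LRUStore:
    """A tiny LRU cache built on OrderedDict."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)   # mark as recently used
            return self.store[key]
        return None

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

class DualCache:
    """Separate physical stores (and thus separate eviction pressure)
    for robot and human traffic."""
    def __init__(self, human_capacity=100, robot_capacity=20):
        self.human = LRUStore(human_capacity)
        self.robot = LRUStore(robot_capacity)

    def _is_robot(self, user_agent):
        # Naive placeholder classifier; the thesis uses traffic models instead.
        ua = user_agent.lower()
        return any(tag in ua for tag in ("bot", "crawler", "spider"))

    def lookup(self, url, user_agent):
        store = self.robot if self._is_robot(user_agent) else self.human
        return store.get(url)

    def insert(self, url, body, user_agent):
        store = self.robot if self._is_robot(user_agent) else self.human
        store.put(url, body)
```

Because the two stores are physically separate, a crawler sweeping thousands of URLs fills only the robot store, leaving the human store's hit rate untouched.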
  • 14. Sengupta, Kunal A Language for Inconsistency-Tolerant Ontology Mapping

    Doctor of Philosophy (PhD), Wright State University, 2015, Computer Science and Engineering PhD

    Ontology alignment plays a key role in enabling interoperability among the various data sources present on the web. The same concepts often differ in meaning, if only slightly, which makes it difficult to relate them; this omnipresent heterogeneity is at the core of the web. The research presented in this dissertation is driven by the goal of providing a robust ontology alignment language for the semantic web, as we show that description logics based alignment languages are not suitable for aligning ontologies. The adoption of semantic web technologies has been consistently on the rise over the past decade, and it continues to show promise. The core component of the semantic web is its set of knowledge representation languages -- mainly the W3C (World Wide Web Consortium) standards Web Ontology Language (OWL), Resource Description Framework (RDF), and Rule Interchange Format (RIF). While these languages have been designed to suit the openness and extensibility of the web, they lack certain features that we address in this dissertation. One missing component is non-monotonic features that would enable common-sense reasoning. For example, OWL adopts the open world assumption (OWA), under which knowledge about everything is assumed to be possibly incomplete at any point in time. However, experience has shown that some situations require us to assume that certain parts of the knowledge base are complete; employing the Closed World Assumption (CWA) achieves this. Circumscription is a well-known approach to the CWA that provides closed-world semantics by employing minimal models with respect to certain predicates that are closed.
We provide the formal semantics of the notion of Grounded Circumscription, which is an extension of circumscription with desirable propert (open full item for complete abstract)

    Committee: Pascal Hitzler Ph.D. (Advisor); Krzysztof Janowicz Ph.D. (Committee Member); Krishnaprasad Thirunarayan Ph.D. (Committee Member); Prabhaker Mateti Ph.D. (Committee Member) Subjects: Computer Science
  • 15. Janga, Prudhvi Integration of Heterogeneous Web-based Information into a Uniform Web-based Presentation

    PhD, University of Cincinnati, 2014, Engineering and Applied Science: Computer Science and Engineering

    With the continuing explosive growth of the World Wide Web, a wealth of information has become available online, and the web has become one of the major sources of information for both individual users and large organizations. To find information, individual users can either use search engines or navigate to a particular website by following links. The former returns links to vast amounts of data in seconds, while the latter can be tedious and time consuming. The former typically presents its results as a web page of links to the actual web data sources (websites); the latter takes the user to the actual web data source itself. Using these two most popular forms of web data presentation and retrieval, web data can hardly be queried, manipulated, or analyzed easily, even though it is publicly and readily available. Many companies also use the web for information; their challenge is to build web-based analytical and decision support systems, often referred to as web data warehouses. However, the information present on the web is extremely complex and heterogeneous, which makes integrating and presenting retrieved web data in a uniform format challenging. Hence, there is a need for web data integration frameworks that can integrate and present web data uniformly. To achieve a homogeneous representation of web data, we need a framework that extracts relevant structured and semi-structured data from different web data sources, generates schemas from structured as well as semi-structured web data, integrates the schemas generated from different sources into a merged schema, populates it with data, and presents it to the end user in a uniform format. We propose a modular framework for homogeneous presentation of web data.
This framework consists of different standalone modules that can also be used to create independent systems that solve other schema unification problem (open full item for complete abstract)

    Committee: Karen Davis Ph.D. (Committee Chair); Raj Bhatnagar Ph.D. (Committee Member); Hsiang-Li Chiang Ph.D. (Committee Member); Ali Minai Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member) Subjects: Computer Science
  • 16. Henson, Cory A Semantics-based Approach to Machine Perception

    Doctor of Philosophy (PhD), Wright State University, 2013, Computer Science and Engineering PhD

    Machine perception can be formalized using semantic web technologies in order to derive abstractions from sensor data using background knowledge on the Web, and these abstractions can be derived efficiently on resource-constrained devices. Advances in sensing technology hold the promise to revolutionize our ability to observe and understand the world around us, yet the gap between observation and understanding is vast. As sensors become more advanced and cost-effective, the result is an avalanche of data of high volume, velocity, and varied type, leading to the problem of too much data and not enough knowledge (i.e., insights leading to actions). Current estimates predict over 50 billion sensors will be connected to the Web by 2020. While the challenge of this data deluge is formidable, a resolution would have profound implications. The ability to translate low-level data into high-level abstractions closer to human understanding and decision-making has the potential to disrupt data-driven interdisciplinary sciences, such as environmental science, healthcare, and bioinformatics, as well as enable other emerging technologies, such as the Internet of Things. The ability to make sense of sensory input is called perception; and while people are able to perceive their environment almost instantaneously, and seemingly without effort, machines continue to struggle with the task. Machine perception is a hard problem in computer science, with many fundamental issues yet to be adequately addressed, including: (a) annotation of sensor data, (b) interpretation of sensor data, and (c) efficient implementation and execution. This dissertation presents a semantics-based machine perception framework to address these issues.
The tangible primary contributions created to support the thesis of this dissertation include the development of a Semantic Sensor Observation Service (SemSOS) for accessing and querying sensor data on the Web, an ontology of perception (Intellego) that provides a formal semanti (open full item for complete abstract)

    Committee: Amit Sheth Ph.D. (Advisor); Krishnaprasad Thirunarayan, Ph.D. (Committee Member); Payam Barnaghi Ph.D. (Committee Member); Satya Sahoo Ph.D. (Committee Member); John Gallagher Ph.D. (Committee Member) Subjects: Artificial Intelligence; Computer Science; Information Science
  • 17. Koron, Ronald Developing a Semantic Web Crawler to Locate OWL Documents

    Master of Science (MS), Wright State University, 2012, Computer Science

    The terms Semantic Web and OWL are relatively new and growing concepts in the World Wide Web. Because these concepts are so new, there are relatively few applications or tools for utilizing their potential power. Although there are many components to the Semantic Web, this thesis focuses on the research question, "How do we develop a web crawler for the Semantic Web that locates and retrieves OWL documents?" Specifically, we hypothesize that by giving URIs to OWL documents, including all URIs found within those OWL documents, priority over other types of references, we will locate more OWL documents than by any other type of traversal. We reason that OWL documents have proportionally more references to other OWL documents than non-OWL documents do, so by giving them priority we should have located more OWL files when the crawl terminates than by any other traversal method. To build such an OWL priority queue, we developed heuristics to predict OWL documents during real-time parsing of Semantic Web documents. These heuristics are based on filename extensions and OWL language constructs, which are not absolute predictors of a document's type before retrieval. However, if our reasoning is correct, URIs found in an OWL document will likely lead to more OWL documents, so that when the crawl ends upon reaching a maximum document limit, we will have retrieved more OWL documents than by other methods such as breadth-first or load-balanced traversal. We conclude with an evaluation of our results to test the validity of our hypothesis and to see whether it is worthy of future research.

    Committee: Pascal Hitzler PhD (Committee Chair); Guozhu Dong PhD (Committee Member); Krishnaprasad Thirunarayan PhD (Committee Member) Subjects: Computer Science
  • 18. Jain, Prateek Linked Open Data Alignment & Querying

    Doctor of Philosophy (PhD), Wright State University, 2012, Computer Science and Engineering PhD

    The recent emergence of the “Linked Data” approach for publishing data represents a major step forward in realizing the original vision of a web that can "understand and satisfy the requests of people and machines to use the web content", i.e. the Semantic Web. This new approach has resulted in the Linked Open Data (LOD) Cloud, which includes more than 295 large datasets contributed by experts belonging to diverse communities such as geography, entertainment, and life sciences. However, the current interlinks between datasets in the LOD Cloud, as we will illustrate, are too shallow to realize many of the benefits promised. If this limitation is left unaddressed, then the LOD Cloud will merely be more data that suffers from the same kinds of problems that plague the Web of Documents, and hence the vision of the Semantic Web will fall short. This thesis presents a comprehensive solution to the issues of alignment and relationship identification using a bootstrapping-based approach. By alignment we mean the process of determining correspondences between classes and properties of ontologies. We identify subsumption, equivalence, and part-of relationships between classes; part-of relationships between instances; and subsumption and equivalence relationships between properties. By bootstrapping we mean utilizing the information contained within the datasets themselves to improve the data within them. The work showcases the use of bootstrapping-based methods to identify and create richer relationships between LOD datasets. The BLOOMS project (http://wiki.knoesis.org/index.php/BLOOMS) and the PLATO project, both built as part of this research, have provided evidence for the feasibility and applicability of the solution.
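    The bootstrapping idea above, using the data already in the datasets to enrich the links between them, can be illustrated with a deliberately simplified instance-overlap rule. Note this is not the BLOOMS algorithm (which aligns ontology schemas via external category hierarchies); it is a toy stand-in showing how equivalence and subsumption between two classes could be inferred from their own instance sets.

```python
def relate(instances_a: set, instances_b: set) -> str:
    """Classify the relationship between two classes by comparing their
    instance sets -- a bootstrapping-style use of the data itself.
    Returns one of: equivalent, subsumed-by, subsumes, overlapping,
    disjoint, unknown."""
    if not instances_a or not instances_b:
        return "unknown"            # no evidence either way
    if instances_a == instances_b:
        return "equivalent"
    if instances_a < instances_b:
        return "subsumed-by"        # every A-instance is also a B-instance
    if instances_a > instances_b:
        return "subsumes"
    return "overlapping" if instances_a & instances_b else "disjoint"
```

    Real alignment systems must of course tolerate noisy and incomplete instance data, so thresholds on overlap ratios would replace the exact set comparisons used here.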

    Committee: Amit Sheth PhD (Advisor); Pascal Hitzler PhD (Committee Member); Krishnaprasad Thirunarayan PhD (Committee Member); Kunal Verma PhD (Committee Member); Peter Yeh PhD (Committee Member) Subjects: Computer Science
  • 19. Narayanan, Sugan Application of Web Services for Remote Access of Bearcat III Robot Using the .NET Framework

    MS, University of Cincinnati, 2003, Engineering : Industrial Engineering

    Remote access to a robot with secure transfer of information across devices and platforms is a challenging issue today. In the existing setup, connecting two machines requires the available software to be installed at both locations, and its application is limited to computers operating within the same platform. It also exposes all the functions present on the other machine, and the data transfer protocol is insecure. Web services on the .NET framework provided by Microsoft™ help to eliminate these problems. To execute commands from a remote location, authenticated users log on through a web interface to the server that controls the robot. Web services are invoked to control the robot for obstacle avoidance. The obstacle-avoidance algorithm is coded on a different server using ASP.NET and is designed for three types of obstacles: circular, rectangular, and linear. Only the required logic is exposed to the user, not the complete algorithm. The parameters passed between the server controlling the robot and the server that hosts the algorithm travel over a secure protocol built into the .NET framework. The result of the algorithm is returned to the user via the web interface. In addition, the .NET framework provides a facility to use existing programs (coded in conventional programming languages) across the network. This design was implemented on the Bearcat III and Bearcat Cub robots. Thus any authenticated user is able to access the robot remotely and securely across the University of Cincinnati's network.
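    The "expose only the required logic" pattern above can be sketched as a single service-side operation that dispatches on the three obstacle types named in the abstract. Everything here is hypothetical: the function name, the parameter shapes, and the toy steering rule are invented for illustration and bear no relation to the thesis's actual ASP.NET code.

```python
import math

def avoid_obstacle(shape: str, params: dict, heading_deg: float) -> dict:
    """The only operation a caller can invoke; internals stay hidden
    behind the service boundary. Returns a corrected heading that
    clears the obstacle (toy geometry, assumed 0.5 m safety margin
    and 5.0 m look-ahead distance)."""
    if shape == "circular":
        clearance = params["radius"] + 0.5
    elif shape == "rectangular":
        clearance = max(params["width"], params["height"]) / 2 + 0.5
    elif shape == "linear":
        clearance = 0.5
    else:
        raise ValueError(f"unknown obstacle shape: {shape}")
    # Steer away by the angle subtended by the clearance at the look-ahead point.
    correction = math.degrees(math.atan2(clearance, 5.0))
    return {"shape": shape, "heading": heading_deg + correction}
```

    In the architecture described above, this function would sit behind an authenticated HTTPS endpoint, so the client sees only the operation's signature and its result, never the algorithm itself.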

    Committee: Dr. Ernest Hall (Advisor) Subjects: Engineering, Industrial
  • 20. Lee, Changpil An Evaluation Model for Application Development Frameworks for Web Applications

    Master of Science, The Ohio State University, 2011, Computer Science and Engineering

    Web frameworks and web-based content management systems (WCMS) are emerging as the growth of the Internet has continued and its role has increased in the daily life of its users. When choosing a web framework or a WCMS on which to build an application, web application developers usually decide based on their prior web development expertise, the quality of the available documentation, or the popularity of the framework, regardless of its ability to meet the functional and non-functional requirements of the application to be built. The problem is that there are many web frameworks and WCMS to select from, and the goal must be to select the best framework for the job. When choosing a framework or WCMS, web application developers must consider the overall effect of the design decisions within the framework, the inherent tradeoffs between quality attributes (such as performance and security), and the tradeoffs required to address user, system, and business requirements. However, the process necessary to find the best framework for an application's requirements is not well described. There is only limited research on the evaluation of web application frameworks, which makes it hard for developers to choose the right web framework or WCMS. In this thesis, a new cost-estimation model for web frameworks and WCMS is introduced. The costs in the estimation model are categorized as system costs, design costs, learning costs, and implementation costs. This cost-estimation model is a framework for comparing the relative costs of web frameworks and WCMS. It was applied in two case studies, the Polymer Properties Predictor (P4) project and the PFast project, both at the Ohio Supercomputing Center. Finally, we used the cost-estimation model to analyze the development of a new web project.
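    The comparison the abstract describes reduces to summing four cost categories per candidate and picking the minimum. A minimal sketch follows; the category names come from the abstract, while the helper names and all numeric figures are invented.

```python
# The four cost categories named in the thesis abstract.
CATEGORIES = ("system", "design", "learning", "implementation")

def total_cost(costs: dict) -> float:
    """Sum the four cost categories for one candidate framework or WCMS."""
    missing = [c for c in CATEGORIES if c not in costs]
    if missing:
        raise ValueError(f"missing cost categories: {missing}")
    return sum(costs[c] for c in CATEGORIES)

def cheapest(candidates: dict) -> str:
    """Return the name of the candidate with the lowest estimated total cost."""
    return min(candidates, key=lambda name: total_cost(candidates[name]))
```

    In practice the thesis's model would also need a common unit (e.g. person-days) and per-project weighting of the categories before totals from different frameworks are comparable; this sketch assumes the inputs are already normalized.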

    Committee: Rajiv Ramnath (Advisor); Jay Ramanathan (Committee Member); Tom Bitterman (Committee Member) Subjects: Computer Science