Search Results

(Total results 9)

  • 1. Jitpaiboon, Thawatchai The Roles of Information Systems Integration in the Supply Chain Integration Context - Firm Perspective

    Doctor of Philosophy, University of Toledo, 2005, Manufacturing Management

    With advances in information technology (IT), information systems integration (ISI) and its role in an integrated supply chain have become important to executives and researchers. ISI represents the degree of cooperation in information system practices between business functions within a firm and between a firm and its trading partners. It has been documented that the introduction and utilization of ISI for supply chain management enhance firms' competitiveness and growth. While many firms focus on achieving high levels of IT utilization, without high levels of ISI, supply chain members may not attain the full benefits of working within a supply chain. The concept of ISI can be captured using two main sub-constructs (internal ISI and external ISI) and can be conceptualized at three levels: (1) Strategic, (2) Operational, and (3) Infrastructural. Studying the effects of ISI at these levels can help researchers and executives understand how ISI practices at different levels contribute to overall supply chain effectiveness. Such an understanding may bring significant benefits to both operations researchers and practitioners, including better decisions about which IT to utilize, which information systems (IS) practices to emphasize, and what level of ISI to attain. From a practitioner's perspective, this research provides guidelines so firms may better understand ISI issues and implement IT effectively. This study proposes three significant contributions to supply chain management research. First, it applies an information systems perspective to study both causes and effects of supply chain integration, proposing a theoretical framework that treats ISI as a mediator between IT utilization and supply chain integration. Second, it provides inferences from an instrument that is valid and reliable for the current study's context, which are beneficial for both practitioners… (open full item for complete abstract)

    Committee: T.S. Ragu-Nathan (Advisor) Subjects: Business Administration, Management
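
    The mediation structure this abstract proposes (ISI mediating between IT utilization and supply chain integration) is often checked with a pair of regressions in the Baron-Kenny style, where the indirect effect is the product of the two path coefficients. The sketch below is a generic illustration on synthetic data, not the dissertation's instrument or structural model; all variable names and effect sizes are invented.

        # Illustrative mediation sketch: is the effect of IT utilization (X) on
        # supply chain integration (Y) carried through ISI (M)? Synthetic data;
        # the dissertation's survey measures are not reproduced here.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        it_util = rng.normal(size=n)                          # X: IT utilization
        isi = 0.6 * it_util + rng.normal(scale=0.5, size=n)   # M: ISI (mediator)
        sci = 0.5 * isi + 0.1 * it_util + rng.normal(scale=0.5, size=n)  # Y

        def ols(y, *xs):
            # Least-squares fit with an intercept; returns the slope coefficients.
            X = np.column_stack([np.ones(len(y)), *xs])
            return np.linalg.lstsq(X, y, rcond=None)[0][1:]

        a = ols(isi, it_util)[0]             # path X -> M
        b, c_prime = ols(sci, isi, it_util)  # path M -> Y and direct X -> Y
        print(f"indirect effect a*b = {a*b:.3f}, direct effect c' = {c_prime:.3f}")
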
  • 2. Sannellappanavar, Vijaya DATAWAREHOUSE APPROACH TO DECISION SUPPORT SYSTEM FROM DISTRIBUTED, HETEROGENEOUS SOURCES

    Master of Science, University of Akron, 2006, Computer Science

    In today's world of global business, worldwide partnerships, and corporate mergers, decision making plays a major role in the steady growth of a business, giving it a competitive edge. Decision making is key to smooth day-to-day operations as well as to effective future planning in this ever-competitive world. Several sources of data exist in a business from which valuable information can be extracted to help make a wide range of decisions. To facilitate querying and analysis, the data from these sources need to be integrated. There are various considerations and approaches for such Data Integration or Information Integration, and several issues surround this process. These issues are considered, and the prominent approaches to Information Integration are studied with an emphasis on the Datawarehousing approach. A Datawarehouse is implemented from scratch from available raw data sources, and it is shown by experimentation how Datawarehousing is the best suited of the considered approaches in specific business settings. The main objective of the research, and the contribution of this thesis, is to analyze the major issues faced in a specific enterprise scenario and to demonstrate how the Datawarehousing approach provides an efficient solution for them, and hence a solid foundation for Decision Support Systems over distributed, heterogeneous data sources.

    Committee: Chien-Chung Chan (Advisor)
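
    The warehousing approach argued for above is conventionally realized as extract-transform-load into a dimensional store that decision-support queries can hit directly. A minimal sketch follows, assuming an in-memory SQLite star schema with invented table names and two toy "heterogeneous" sources; it illustrates the pattern, not the thesis's implementation.

        # Minimal ETL sketch: integrate two heterogeneous sources into a tiny
        # star schema (one fact table, one dimension) for decision support.
        # Schema and source formats are invented for illustration.
        import sqlite3

        dw = sqlite3.connect(":memory:")
        dw.executescript("""
            CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
            CREATE TABLE fact_sales  (product_id INTEGER, qty INTEGER, amount REAL);
        """)

        # Source A: flat, CSV-like records (name, qty, unit_price).
        source_a = [("widget", 3, 2.50), ("gadget", 1, 9.99)]
        # Source B: nested records from a different operational system.
        source_b = [{"item": {"label": "widget"}, "units": 2, "total": 5.00}]

        def product_id(name):
            # Upsert into the dimension table and return the surrogate key.
            dw.execute("INSERT OR IGNORE INTO dim_product(name) VALUES (?)", (name,))
            row = dw.execute("SELECT product_id FROM dim_product WHERE name = ?", (name,))
            return row.fetchone()[0]

        for name, qty, price in source_a:   # transform source A to the common schema
            dw.execute("INSERT INTO fact_sales VALUES (?, ?, ?)",
                       (product_id(name), qty, qty * price))
        for rec in source_b:                # transform source B to the common schema
            dw.execute("INSERT INTO fact_sales VALUES (?, ?, ?)",
                       (product_id(rec["item"]["label"]), rec["units"], rec["total"]))

        # Decision-support queries now run against one integrated store.
        for row in dw.execute("""SELECT p.name, SUM(f.qty), SUM(f.amount)
                                 FROM fact_sales f JOIN dim_product p USING (product_id)
                                 GROUP BY p.name"""):
            print(row)
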
  • 3. Stout, Blaine Big and Small Data for Value Creation and Delivery: Case for Manufacturing Firms

    Doctor of Philosophy, University of Toledo, 2018, Manufacturing and Technology Management

    Today's small- and mid-market manufacturers face increasing competitive pressure to capture, integrate, operationalize, and manage diverse sources of digitized data. Many have made significant investments in data technologies with the objective of improving organizational performance, yet not all have realized demonstrable benefits that create organizational value. One simple question arises: do business analytics make a difference to company performance in today's information-intensive environment? The purpose of this research is to explore that question through the lens of the data-centric pressures on management that drive investment in data technologies: how these drivers influence management to adopt a digitized-organization mindset, affecting data practices, shaping key processes and strategies, and leading to growth in capabilities that affects performance and culture. The terms `Big Data' and `Small Data' are two of the most prolifically used phrases in today's discussions of business analytics and the value data provides to organizational performance. Big Data is strategic to organizational decision-making, Small Data operational; both are captured from a host of internal and external sources. Studying how business analytics are leveraged into organizational value benefits academic and practitioner audiences alike. Research on Big and Small Data and business analytics is varied and deep, originating from a host of academic and non-academic sources; however, few empirical studies deeply examine the phenomenon as experienced in the manufacturing environment. Exploring the pressures managers face in adopting data-centric managing beliefs and applied practices, and understanding the key value-creating process and strategy mechanisms at work in the organization, provides generalizable insights on the importance of data-technology investments for organizational culture… (open full item for complete abstract)

    Committee: Paul Hong (Committee Chair); Thomas Sharkey (Committee Member); Wallace Steven (Committee Member); Cheng An Chung (Committee Member) Subjects: Information Systems; Information Technology; Management; Organization Theory; Organizational Behavior
  • 4. Emeka-Nweze, Chika ICU_POC: AN EMR-BASED POINT OF CARE SYSTEM DESIGN FOR THE INTENSIVE CARE UNIT

    Doctor of Philosophy, Case Western Reserve University, 2017, EECS - Computer Engineering

    In this era of technological transformation in medicine, there is a need to revolutionize the approach and procedures involved in the treatment of diseases, and to restructure our understanding of the role of data and technology in the medical industry. Data is a key factor in the diagnosis, management, and treatment of patients in any medical institution. Proper management and use of patient data will go a long way toward saving money, time, and patients' lives. Having data is one thing; providing a system or means of translating the data is another. This dissertation proposes the design of a Point of Care system for the Intensive Care Unit (a.k.a. ICU_POC): a system that integrates the capabilities of bedside monitors, the bedside eFlowsheet, and the Electronic Medical Records so that clinicians can interact with one another in real time from different locations to view, analyze, and even make necessary diagnoses of patients' ailments based on their medical records. It demonstrates how patient data from the monitors can be imported, processed, and transformed into meaningful and useful information, then stored, reproduced, and transferred automatically to all necessary locations securely and efficiently without any human manipulation. ICU_POC will grant physicians the remote capability to manage patients properly by providing accurate patient data, easy analysis, and fast diagnosis of patient conditions. It creates an interface for physicians to query historical data and make proper assumptions based on previous medical conditions. The problem lies in managing data transfer securely between one hospital's EMR database and another's for easy accessibility of data by physicians. The challenge of this work is to design a system that provides fast, accurate, secure, and effective (FASE) diagnosis of medical conditions of patients in the ICU. The proposed system has the potential of reducing patients' length of stay i… (open full item for complete abstract)

    Committee: Kenneth Loparo (Advisor); Farhad Kaffashi (Committee Member); Vira Chankong (Committee Member); Michael Degeorgia (Committee Member) Subjects: Computer Engineering; Computer Science; Engineering
  • 5. Zhang, Xuan Supporting on-the-fly data integration for bioinformatics

    Doctor of Philosophy, The Ohio State University, 2007, Computer and Information Science

    The use of computational tools and online data knowledgebases has changed the way biologists conduct their research. The fusion of biology and information science is expected to continue. Data integration is one of the challenges faced by bioinformatics. In order to build an integration system for modern biological research, three problems have to be solved. A large number of existing data sources have to be incorporated, and when new data sources are discovered, they should be utilized right away. The variety of biological data formats and access methods has to be addressed. Finally, the system has to be able to understand the rich and often fuzzy semantics of biological data. Motivated by the above challenges, a system and a set of tools have been implemented to support on-the-fly integration of biological data. Metadata about the underlying data sources are the backbone of the system. Data mining tools have been developed to help users write the descriptors semi-automatically. Using an automatic code generation approach, we have developed several tools for bioinformatics integration needs. An automatic data wrapper generation tool can transform data between heterogeneous data sources. Another code generation system can create programs that answer projection, selection, cross product, and join queries over flat file data. Real bioinformatics requests have been used to test our system and tools. These case studies show that our approach can reduce the human effort involved in an information integration system. Specifically, it makes the following contributions. 1) Data mining tools allow new data sources to be understood with ease and integrated into the system on the fly. 2) Changes in data format are localized by using the metadata descriptors, so system maintenance cost is low. 3) Users interact with our system through high-level declarative interfaces, reducing programming effort. 4) Our tools process data directly from flat files and require no dat… (open full item for complete abstract)

    Committee: Gagan Agrawal (Advisor) Subjects: Computer Science
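
    The abstract above describes code generation that answers projection and selection (and join) queries directly over flat files, driven by metadata descriptors. The sketch below illustrates the descriptor-driven idea with a generic evaluator rather than generated code; the descriptor format, column names, and sample records are invented, not the system's actual metadata language.

        # Sketch of metadata-driven querying over a flat file: a descriptor
        # names the delimiter and columns, and a generic evaluator answers
        # selection/projection queries without loading a database.
        import io

        descriptor = {"delimiter": "\t", "columns": ["gene", "organism", "length"]}

        flat_file = io.StringIO(          # stands in for a real flat-file source
            "BRCA1\thuman\t81189\n"
            "trpA\tE. coli\t804\n"
            "BRCA1\tmouse\t63274\n"
        )

        def query(source, desc, select=None, project=None):
            # Stream records; `select` filters rows, `project` picks columns.
            cols = desc["columns"]
            for line in source:
                record = dict(zip(cols, line.rstrip("\n").split(desc["delimiter"])))
                if select is None or select(record):
                    yield {c: record[c] for c in (project or cols)}

        for row in query(flat_file, descriptor,
                         select=lambda r: r["gene"] == "BRCA1",
                         project=["organism", "length"]):
            print(row)
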
  • 6. Iwai, Nobuyuki Economic models of developing countries in the global economy

    Doctor of Philosophy, The Ohio State University, 2003, Agricultural, Environmental and Development Economics

    Three theoretical models of economic policies in developing countries are developed. The first model explains the economic rationale for an observed policy combination in developing countries (inviting foreign direct investment (FDI) through education investment (EDI)) and a multinational corporation's (MNC's) interest in local labor quality when it contemplates FDI. Information on local labor is the source of a more efficient contract between the MNC and local labor, and the local government can benefit both agents through EDI, FDI, and information sharing. However, the policy tends to benefit the government and the MNC at the expense of local labor welfare. A new concept, termed the take-off point (the point at which the government starts making EDI), is introduced; its behavior is the main focus of the model. The second model investigates the welfare effects for a developed country that mandates child labor prohibition by its developing-country trading partner. The model addresses this issue using human capital accumulation theory and general equilibrium trade theory. It is shown that the distinction between the short-run and long-run effects of child labor policy is very important, in both the magnitude and the direction of influence. Incorporating increasing-returns-to-scale technology in the trade model can lead to a situation in which child labor prohibition reverses the importer-exporter positions. The framework introduced here is generally applicable to analyses of policy changes that entail human capital accumulation processes. Finally, an endogenous growth model is developed to show a possible link between economic growth and the production stability resulting from economic integration. The welfare implications are even stronger: economic integration is always welfare improving if it reduces production volatilities, regardless of whether the growth rate increases. Meanwhile, the market equilibrium rate of growth is lower than the optimal growth… (open full item for complete abstract)

    Committee: Stan Thompson (Advisor) Subjects: Economics, General
  • 7. Xu, Jian Iterative Aggregation of Bayesian Networks Incorporating Prior Knowledge

    Master of Science, Miami University, 2004, Computer Science and Systems Analysis

    Multi-source information integration has gained significant interest recently. We focus on integrating Bayesian networks (BNs) learned from data. The BN batch aggregation algorithm proposed by Maynard-Reid II and Chajewska (MC01) requires that all sources' information be available at aggregation time and does not take the user's prior knowledge into account. We extend this algorithm to make the aggregation iterative, supporting “anytime” querying, and to allow the incorporation of the user's structural prior knowledge. We prove that the iterative extensions for joint distribution aggregation are independent of the order in which sources arrive. We show experimentally that iterative BN aggregation is order-dependent, due to bias introduced by the algorithm's optimizing nature and an “inertial” effect; however, the results compare well with the batch algorithm in accuracy and efficiency. We also show experimentally that incorporating the user's structural prior knowledge can improve both accuracy and efficiency.

    Committee: Pedrito Maynard-Zhang (Advisor) Subjects: Computer Science
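
    One aggregation rule with the order-independence property the thesis proves for joint distributions is a running equal-weight average (a linear opinion pool maintained incrementally), which also supports "anytime" querying because the aggregate is valid after every arrival. The actual Maynard-Reid II and Chajewska algorithm and its iterative extension are more involved; the sketch below only illustrates why an incremental mean is insensitive to arrival order.

        # Sketch: iterative aggregation of joint distributions by a running
        # equal-weight average. A stand-in illustration, not the thesis's
        # BN aggregation algorithm.
        from itertools import permutations

        # Each source reports a joint distribution over two binary variables (A, B).
        sources = [
            {(0, 0): 0.40, (0, 1): 0.10, (1, 0): 0.20, (1, 1): 0.30},
            {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25},
            {(0, 0): 0.10, (0, 1): 0.30, (1, 0): 0.30, (1, 1): 0.30},
        ]

        def aggregate(stream):
            agg, n = {}, 0
            for dist in stream:   # "anytime": agg is a valid aggregate after each step
                n += 1
                for event, p in dist.items():
                    agg[event] = agg.get(event, 0.0) + (p - agg.get(event, 0.0)) / n
            return agg

        # The aggregate is the same (up to float error) for every arrival order.
        results = [aggregate(order) for order in permutations(sources)]
        assert all(abs(r[e] - results[0][e]) < 1e-12
                   for r in results for e in results[0])
        print(results[0])
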
  • 8. Erson, E. Zeynep Development, Integration and Simulation of Multiscale Mathematical Models of Physiological Processes: A Software Engineering Perspective

    Doctor of Philosophy, Case Western Reserve University, 2011, EECS - Computer and Information Sciences

    With the emergence of systems biology, physiological modeling has begun to adopt more integrative approaches instead of reductionist methods. In order to understand a complex system such as the human body, multiscale physiological models must be integrated and analyzed as a whole. However, modeling and simulation of multiscale physiological processes face challenges such as high coupling within and among scales. To deal with such challenges in a systematic way, there is a significant need for information technology solutions, together with related analytical and computational tools, that facilitate the integration of models and simulations of complex biological systems. The ideas of functional modularity and structural modularity presented in this thesis help cope with the highly complex and coupled nature of physiological processes at the software level. Functional modularity is achieved through the use of information flow, which aims to separate the information and the flow of information in physiological processes. Ontology-based design approaches are suggested for representing the anatomical and structural information of physiological processes, in order to achieve structural modularity. This thesis also presents a software framework built on the principles of structural and functional modularity: the Physiological Model Simulation, Integration and Modeling Framework (Phy-SIM), an information technology framework that facilitates the development, integration, and simulation of large-scale, highly integrated models of human physiology. The ultimate aim of Phy-SIM is to enhance the physiological model development process and, more importantly, to accelerate the development, analysis, and testing of integration approaches for multiscale and multilevel physiological models.

    Committee: M. Cenk Cavusoglu PhD (Committee Chair); Z. Meral Ozsoyoglu PhD (Committee Member); Andy Podgurski PhD (Committee Member); Kenneth Loparo PhD (Committee Member) Subjects: Computer Science; Information Science
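
    The functional-modularity idea above, separating each model's internal state from the information it exchanges, can be illustrated with two toy modules coupled only through declared inputs and outputs on a shared information-flow bus. The module interface, equations, and couplings below are invented for illustration; they are not Phy-SIM's API or validated physiology.

        # Toy illustration of functional modularity: modules advance their own
        # state and exchange values only through declared inputs/outputs,
        # making the information flow between models explicit.

        class HeartRate:
            inputs, outputs = ("o2",), ("hr",)
            def __init__(self):
                self.hr = 70.0
            def step(self, dt, inp):
                # Toy dynamics: heart rate drifts up when oxygen is low.
                self.hr += dt * 5.0 * (0.95 - inp["o2"])
                return {"hr": self.hr}

        class Oxygen:
            inputs, outputs = ("hr",), ("o2",)
            def __init__(self):
                self.o2 = 0.90
            def step(self, dt, inp):
                # Toy dynamics: saturation recovers faster at higher heart rate.
                self.o2 += dt * (0.001 * inp["hr"] - 0.05 * self.o2)
                return {"o2": self.o2}

        def simulate(modules, dt=0.1, steps=50):
            bus = {"hr": 70.0, "o2": 0.90}     # the shared information flow
            for _ in range(steps):
                updates = {}
                for m in modules:              # modules never touch each other directly
                    updates.update(m.step(dt, {k: bus[k] for k in m.inputs}))
                bus.update(updates)            # synchronous exchange per step
            return bus

        print(simulate([HeartRate(), Oxygen()]))
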
  • 9. Cook, Casey EXPLORING THE USE OF INTERACTIVE MULTIMEDIA AS AN INSTRUCTIONAL RESOURCE IN MIDDLE SCHOOLS OF NORTHWEST OHIO

    Master of Education (MEd), Bowling Green State University, 2006, Career and Technology Education/Technology

    Public education in the United States is in a notable state of transition regarding its use of computer technology as a tool to help educate K-12 students, but usage in the classrooms of American public schools is inconsistent and far from reaching its full potential, according to a report issued in 2003 by the National Research Council. Advocates of interactive multimedia as part of computer-based instruction believe it can enhance teaching and learning because it combines the benefits of visual and audio media with sophisticated programming to offer useful feedback during instruction. If interactive multimedia truly has the potential to enhance teaching and learning, then it should be studied. This descriptive study explored the use of interactive multimedia as an instructional resource in middle schools located throughout northwest Ohio. The research method was a survey sent to middle school teachers; the sample was a stratified convenience sample of schools from urban, suburban, and rural communities throughout northwest Ohio. Teachers were asked whether they used interactive multimedia in their classrooms, what subject matter they taught, how many years they had been teaching, and their gender. A slight majority of the teachers who responded indicated that they do use interactive multimedia as part of their classroom instruction, but responses differed between the strata, and it became apparent that most teachers still used interactive multimedia only rarely during the school year. It was also discovered that a majority of teachers who indicated that they do not use interactive multimedia listed reasons that were circumstantial rather than matters of choice or personal preference. Lack of computers and lack of training emerged as the two most common deterrents to those teachers' use of interactive multimedia in their instruction. The… (open full item for complete abstract)

    Committee: Gene Poor (Advisor) Subjects: Education, Technology