Search Results

(Total results 7)

  • 1. Dipko, James Advancing Software Development Team Performance: The Impact of Shared Mental Models and Transactive Memory

    Doctor of Business Administration, Cleveland State University, 2023, Monte Ahuja College of Business

    According to McKinsey Consulting (2016 and 2021), 70% of digital transformation efforts fail to reach their stated objectives, and the rate of software complexity continues to grow. At the same time, the degree of interdependency in the workplace is increasing (Handke et al. 2022), and virtual teams are becoming more commonplace. As a result, IS managers seeking ways to foster successful outcomes among software development teams face formidable challenges. Software development teams must coordinate expertise, adhere to development methodologies, follow prescribed processes, and solve complex problems. Due to knowledge intensity, task interdependencies, and the degree of risk associated with software projects, the nature of shared cognition among these teams becomes a salient consideration for managers and researchers alike. Components grounded in theories of group psychology such as shared mental models and transactive memory have been shown in prior research to favorably influence team performance. Shared mental models are the collective, structural representation of a team's knowledge domain. Transactive memory is the interaction of individual memories and group processes. For software development teams, however, it is unclear whether these factors translate into higher quality software development team outcomes. It is also unclear whether the implementation of formal software development processes fosters the maturity of shared mental model similarity and transactive memory. Using an online experiment, this study implements group training and coding standard interventions to understand (a) the nature of relationships between these interventions and software development team performance, (b) whether these interventions help strengthen the similarity of shared mental models in software development teams, and (c) whether, collectively, stronger shared mental models and transactive memory positively influence software development team performance.
Using results fro (open full item for complete abstract)

    Committee: Ray Henry (Committee Chair); Abdullah Oguz (Committee Member); Iftikhar Sikder (Committee Member); Nigamanth Sridhar (Committee Member) Subjects: Information Science; Information Systems
  • 2. Samipour-Biel, Sabina A process model of Transactive Memory System Shared Knowledge Structure emergence: A computational model in R

    Doctor of Philosophy, University of Akron, 2022, Psychology-Industrial/Organizational

    This dissertation studies the emergence of Transactive Memory Systems (TMS), specifically the shared knowledge structure component of TMS. To this end, the first part of the project provides in-depth discussions around the theoretical and methodological integration of the TMS and broader teams literatures, the alignment of theoretical and empirical TMS definitions, and the need to study communication in depth as the mechanism through which TMS develops. In the second part of the dissertation, the principles of these discussions were applied to build a computational model of TMS shared knowledge structure emergence in R. Each simulation ran for 100 iterations to study whether communication between agents regarding their areas of expertise resulted in the emergence of the TMS shared knowledge structure. Decision-making and deep learning theories were drawn on to predict that when agents did not have overlap in areas of expertise (had some overlap in areas of expertise), selecting to communicate with the team member thought most likely to be an expert in an information area led to more favorable (less favorable) outcomes than selecting which team member to communicate with randomly. The simulation was repeated for seven learning rates representing how readily agents changed their perceptions about their team members. Results indicated that query and response regarding expertise areas of agents consistently led to the emergence of a TMS shared knowledge structure across conditions. In most instances the pattern of emergence was marked by an initial period of rapid emergence followed by a decrease in the emergence rate. The results supported the hypotheses that when there is no expertise overlap, selecting maximally would lead to a more emerged shared knowledge structure than searching randomly, while the opposite would be found when there was some expertise overlap.
The latter finding challenges the assumption present in the TMS literature that searching for inf (open full item for complete abstract)
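    The query-and-response mechanism the abstract describes can be sketched as a toy agent-based model. This is a minimal illustration only, not the dissertation's actual R model: the agent count, no-overlap expertise layout, learning rate, and proportional update rule are all assumptions.

```python
import random

def simulate_tms(n_agents=4, n_topics=4, learning_rate=0.3,
                 n_iterations=100, seed=0):
    """Toy model: agents refine beliefs about who knows which topic.

    Each agent holds a belief value for every (teammate, topic) pair.
    On each iteration an agent picks a random topic, queries the
    teammate it currently believes most expert in that topic (the
    'maximal' selection strategy), and the response nudges the belief
    toward the true expertise level at the given learning rate.
    """
    rng = random.Random(seed)
    # Ground truth with no expertise overlap: agent i is the sole
    # expert on topic i.
    truth = [[1.0 if t == a else 0.0 for t in range(n_topics)]
             for a in range(n_agents)]
    # Initial beliefs: uniform uncertainty about everyone's expertise.
    beliefs = [[[0.5] * n_topics for _ in range(n_agents)]
               for _ in range(n_agents)]
    for _ in range(n_iterations):
        for asker in range(n_agents):
            topic = rng.randrange(n_topics)
            # Maximal selection: query the teammate believed most expert.
            target = max((a for a in range(n_agents) if a != asker),
                         key=lambda a: beliefs[asker][a][topic])
            # The response moves the belief toward the truth.
            b = beliefs[asker][target][topic]
            beliefs[asker][target][topic] = b + learning_rate * (truth[target][topic] - b)
    return beliefs
```

Under these assumptions, beliefs about the true expert on each topic climb toward 1.0 over the 100 iterations, showing the rapid-then-slowing emergence pattern the abstract reports.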

    Committee: Joelle Elicker (Advisor); Andrea Snell (Committee Co-Chair); Matthew Juravich (Committee Member); James Diefendorff (Committee Member); Paul Levy (Committee Member) Subjects: Psychology
  • 3. Chakraborty, Sourav High Performance and Scalable Cooperative Communication Middleware for Next Generation Architectures

    Doctor of Philosophy, The Ohio State University, 2019, Computer Science and Engineering

    Modern high-performance computing (HPC) systems are enabling scientists to tackle various grand challenge problems in diverse domains including cosmology and astrophysics, earthquake and weather analysis, molecular dynamics and physics modeling, biological computations, and computational fluid dynamics among others. Along with the increasing demand for computing power, these applications are creating fundamental new challenges in terms of communication complexity, scalability, and reliability. At the same time, remote and virtualized clouds are rapidly gaining in popularity compared to on-premise clusters due to lower initial cost and greater flexibility. These requirements are driving the evolution of modern HPC processors, interconnects, storage systems, as well as middleware and runtimes. However, a large number of scientific applications have irregular and/or dynamic computation and communication patterns that require different approaches to extract the best performance. The increasing scale of HPC systems, coupled with the diversity of emerging architectures, including the advent of multi-/many-core processors and Remote Direct Memory Access (RDMA) aware networks, has exacerbated this problem by making a "one-size-fits-all" policy non-viable. Thus, a fundamental shift is required in how HPC middleware interacts with the application and reacts to its computation and communication requirements. Furthermore, current generation middleware consists of many independent components such as the communication runtime, resource manager, and job launcher. However, the lack of cooperation among these components often limits the performance and scalability of the end-application. To address these challenges, we propose a high-performance and scalable "Cooperative Communication Middleware" for HPC systems.
The middleware supports MPI (Message Passing Interface), PGAS (Partitioned Global Address Space), and hybrid MPI+PGAS programming models and provides improved point-to-p (open full item for complete abstract)

    Committee: Dhabaleswar K Panda (Advisor); Gagan Agrawal (Committee Member); Ponnuswamy Sadayappan (Committee Member); Hari Subramoni (Committee Member) Subjects: Computer Engineering; Computer Science
  • 4. Hong, Changwan Code Optimization on GPUs

    Doctor of Philosophy, The Ohio State University, 2019, Computer Science and Engineering

    Graphics Processing Units (GPUs) have become popular in the last decade due to their high memory bandwidth and powerful computing capacity. Nevertheless, achieving high performance on GPUs is not trivial. It generally requires significant programming expertise and understanding of details of low-level execution mechanisms in GPUs. This dissertation introduces approaches for optimizing regular and irregular applications. To optimize regular applications, it introduces a novel approach to GPU kernel optimization by identifying and alleviating bottleneck resources. This approach, however, is not effective in irregular applications because of data-dependent branches and memory accesses. Hence, tailored approaches are developed for two popular domains of irregular applications: graph algorithms and sparse matrix primitives. Performance modeling for GPUs is carried out by abstract kernel emulation along with latency/gap modeling of resources. Sensitivity analysis with respect to resource latency/gap parameters is used to predict the bottleneck resource for a given kernel's execution. The utility of the bottleneck analysis is demonstrated in two contexts: i) Enhancing the OpenTuner auto-tuner with the new bottleneck-driven optimization strategy. Effectiveness is demonstrated by experimental results on all kernels from the Rodinia suite and GPU tensor contraction kernels from the NWChem computational chemistry suite. ii) Manual code optimization. Two case studies illustrate the use of a bottleneck analysis to iteratively improve the performance of code from state-of-the-art DSL code generators. However, the above approach is ineffective for irregular applications such as graph algorithms and sparse linear systems. Graph algorithms are used in various applications, and high-level GPU graph processing frameworks are an attractive alternative for achieving both high productivity and high performance.
This dissertation develops an approach to graph processing on GPUs (open full item for complete abstract)
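    The sensitivity-analysis idea in the abstract can be illustrated with a toy bound-based performance model. The resource names, demand figures, and gap values below are entirely hypothetical; the dissertation's actual model emulates kernels at a much finer grain.

```python
def predict_time(work, gaps):
    """Predict kernel time as the max over resources of demand x gap.

    work: units of demand placed on each resource (e.g. bytes, flops)
    gaps: per-unit service time (inverse throughput) of each resource
    """
    return max(work[r] * gaps[r] for r in work)

def bottleneck(work, gaps, delta=0.01):
    """Identify the bottleneck resource via sensitivity analysis:
    perturb each resource's gap slightly and report the resource whose
    perturbation changes the predicted execution time the most."""
    base = predict_time(work, gaps)
    sensitivity = {}
    for r in gaps:
        perturbed = dict(gaps)
        perturbed[r] *= (1 + delta)
        sensitivity[r] = predict_time(work, perturbed) - base
    return max(sensitivity, key=sensitivity.get)

# Hypothetical memory-bound kernel: DRAM demand x gap dominates.
work = {"dram_bytes": 1e9, "flops": 1e9, "shared_mem": 1e8}
gaps = {"dram_bytes": 1e-9, "flops": 1e-10, "shared_mem": 1e-10}
```

Here `bottleneck(work, gaps)` reports `"dram_bytes"`: only slowing the DRAM gap moves the predicted time, so DRAM traffic is the resource worth optimizing first.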

    Committee: Ponnuswamy Sadayappan (Advisor); Atanas Rountev (Committee Member); Radu Teodorescu (Committee Member) Subjects: Computer Science
  • 5. Nouanesengsy, Boonthanome High-Concurrency Visualization on Supercomputers

    Doctor of Philosophy, The Ohio State University, 2012, Computer Science and Engineering

    Many scientific and engineering fields employ computer simulations of specific phenomena to help solve complex problems. Supercomputers and other high performance computing machines are regularly used to perform these scientific simulations. The resulting data then needs to be analyzed and visualized, which is difficult when the data is large. One approach to producing visualizations faster is to generate them in parallel. Many challenges remain, though, when attempting to analyze and visualize large data in parallel, while maintaining good performance and scalability. The size of the data is one challenge. When data size becomes very large, the I/O overhead from loading the data becomes a bottleneck, which could hinder performance. In addition, some visualization algorithms have unknown communication and computational load, which results in poor workload distribution and load balancing. This load imbalance hinders overall scalability. Another possible reason for poor parallel performance is that the method does not take advantage of the specific hardware architecture of the host machine. In order to meet these challenges, we present methods to parallelize several visualization techniques. First, a scalable shared memory rendering technique was found by adapting established parallel rendering methods to a shared memory architecture. Three rasterization methods, including sort-first, sort-last, and a hybrid method, were tested on a large shared-memory machine. Next, parallel streamline generation in static flow fields, due to the nature of the problem, suffers from high load imbalance. To make the computation more load balanced, we analyzed the flow field and estimated the workload of each block in the flow field. A load balanced partitioning of data blocks was then computed from this workload estimation. In our tests, we were able to scale up to thousands of processes while using hundreds of thousands of seeds. 
For time-varying flow fields, the Finite-Time Lyapunov (open full item for complete abstract)
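    The workload-estimation step the abstract describes can be sketched as a greedy assignment of data blocks to processes. This is a simplification under stated assumptions: the block costs are made-up numbers standing in for the flow-field workload estimates, and the longest-processing-time heuristic is one common choice, not necessarily the dissertation's exact partitioning scheme.

```python
import heapq

def partition_blocks(block_workloads, n_procs):
    """Greedy load-balanced partitioning (LPT heuristic): assign
    blocks heaviest-first, each to the currently least-loaded process.

    block_workloads: dict mapping block id -> estimated workload
    Returns a dict mapping block id -> assigned process id.
    """
    # Min-heap of (current load, process id); the root is always the
    # least-loaded process.
    heap = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    assignment = {}
    for block, cost in sorted(block_workloads.items(),
                              key=lambda kv: kv[1], reverse=True):
        load, p = heapq.heappop(heap)
        assignment[block] = p
        heapq.heappush(heap, (load + cost, p))
    return assignment
```

For example, estimated costs {8, 7, 3, 2} over two processes split into loads of 10 and 10, avoiding the imbalance a naive block-order split would produce.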

    Committee: Han-Wei Shen PhD (Advisor); Yusu Wang PhD (Committee Member); Gagan Agrawal PhD (Committee Member); Kate Calder PhD (Committee Member) Subjects: Computer Science
  • 6. Bokhari, Saniyah Parallel Solution of the Subset-sum Problem: An Empirical Study

    Master of Science, The Ohio State University, 2011, Computer Science and Engineering

    We investigate the parallelization of an algorithm on three very different architectures. These are: a 128-processor Cray XMT massively multithreaded machine, a 16-processor IBM x3755 shared memory machine, and a 240-core NVIDIA FX 5800 graphics processing unit (GPU). The problem we use in our investigation is the well-known subset-sum problem. While this is known to be NP-complete, it is solvable in pseudo-polynomial time, i.e., time proportional to the number of input objects multiplied by the sum of their sizes. This product defines the size of the dynamic programming table used to solve the problem. The hypothesis that we wish to test is that the Cray, with its specialized hardware and large uniform shared memory, is suitable for very large problems, the IBM x3755 is suitable for intermediate sized problems, and the NVIDIA FX 5800 can give superior performance only for problems that fit within its modest internal memory. We show that it is straightforward to parallelize this algorithm on the Cray XMT primarily because of the word-level locking that is available on this architecture. For the other two machines we present an alternating word algorithm that can implement an efficient solution. The timings of our respective codes were carefully measured over a comprehensive range of problem sizes. On the Cray XMT we observe very good scaling for large problems and see sustained performance as the problem size increases. However this machine has poor scaling for small problem sizes; it performs best for problem sizes of 10^12 bits or more. The IBM x3755 performs very well on medium sized problems, but has poor scalability as the number of processors increases and is unable to sustain performance as the problem size increases. This machine tends to saturate for problem sizes of 10^11 bits. The NVIDIA GPU performs well for problems whose tables fit within its 4 GB device memory. This corresponds to tables of size approximately 10^10 bits.
The experimental measurements support ou (open full item for complete abstract)
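    The pseudo-polynomial dynamic program the abstract refers to can be sketched serially. The parallel versions partition this table's rows and words across processors; the sequential sketch below just shows the table whose size (objects multiplied by sum of sizes) the abstract discusses.

```python
def subset_sum(items, target):
    """Pseudo-polynomial DP for subset-sum: reachable[s] is True iff
    some subset of the items sums exactly to s. The table has
    O(n * target) cells, which is the 'number of objects times sum of
    sizes' product that governs the memory footprint."""
    reachable = [False] * (target + 1)
    reachable[0] = True  # the empty subset sums to 0
    for size in items:
        # Iterate sums downward so each item is used at most once.
        for s in range(target, size - 1, -1):
            if reachable[s - size]:
                reachable[s] = True
    return reachable[target]
```

For instance, `subset_sum([3, 34, 4, 12, 5, 2], 9)` is `True` (4 + 5 = 9). Because the inner loop touches every bit of a table row, a row can be packed into machine words and updated word-parallel, which is what makes word-level techniques like the Cray XMT's locking attractive here.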

    Committee: Ten H. Lai PhD (Advisor); Dong Xuan PhD (Committee Member) Subjects: Computer Engineering; Computer Science
  • 7. Huang, Wei High Performance Network I/O in Virtual Machines over Modern Interconnects

    Doctor of Philosophy, The Ohio State University, 2008, Computer Science and Engineering

    With the increasing size and complexity of modern computing systems, a balance between performance and manageability is becoming critical to achieve high performance as well as high productivity computing. Virtual Machine (VM) technology provides various features that help management issues on large-scale computing systems; however, performance concerns have largely blocked the deployment of VM-based computing, especially in the High-Performance Computing (HPC) area. This dissertation aims at reducing the virtualization overhead and achieving the co-existence of performance and manageability through VM technologies. We focus on I/O virtualization, designing an experimental VM-based computing framework, and addressing performance issues at different levels of the system software stack. We design VMM-bypass I/O, which achieves native I/O performance in VMs by taking advantage of the OS-bypass interconnects. Also with the OS-bypass interconnects, we propose high performance VM migration with Remote Direct Memory Access (RDMA), which drastically reduces the VM management cost. To further improve the communication efficiency on multi-core systems, we design Inter-VM Communication (IVC), a VM-aware communication library to allow efficient shared memory communication among VMs on the same physical host. Finally, we design MVAPICH2-ivc, an MPI library that is aware of VM-based environments and can transparently benefit HPC applications with our proposed designs. The dissertation concludes that performance should no longer be a barrier to deploying VM-based computing, which enhances productivity by achieving much improved manageability with very little sacrifice in performance.

    Committee: Dhabaleswar Panda (Advisor); Feng Qin (Committee Member); P. Sadayappan (Committee Member) Subjects: Computer Science