Search Results

(Total results 37)

Search Report
  • 1. Ugwu, Ugochukwu Iterative tensor factorization based on Krylov subspace-type methods with applications to image processing

    PhD, Kent State University, 2021, College of Arts and Sciences / Department of Mathematical Sciences

    This work is concerned with structure preserving and other techniques for the solution of linear discrete ill-posed problems with transform-based tensor-tensor products, e.g., the t-product and the invertible linear transform product. Specifically, we focus on two categories of solution methods: those that involve flattening, i.e., reduce the tensor equation to an equivalent equation involving a matrix and a vector, and those that preserve the tensor structure by avoiding flattening. Various techniques based on Krylov subspace-type methods for solving third order tensor ill-posed problems are discussed. The data is a laterally oriented matrix or a general third order tensor. Regularization of tensor ill-posed problems by Tikhonov's approach and by truncated iteration is considered. Golub-Kahan bidiagonalization-type, Arnoldi-type, and Lanczos-type processes are applied to reduce large-scale Tikhonov minimization problems to small-sized problems. A few steps of the t-product bidiagonalization process can be employed to inexpensively compute approximations of the singular tubes of the largest Frobenius norm and the associated left and right singular matrices. A less prohibitive computation of approximations of eigentubes of the largest Frobenius norm and the corresponding eigenmatrix by a few steps of the t-product Lanczos process is considered. The interlacing of the Frobenius norm of the singular tubes is shown and applied. The discrepancy principle is used to determine the regularization parameter and the number of iterations for a chosen method. Several truncated iteration techniques, e.g., SVD-like, and those based on the above processes are considered. Solution methods for the weighted tensor Tikhonov minimization problem with weighted global and non-global bidiagonalization processes are discussed. The weights on the fidelity and regularization parts of this problem are suitably defined symmetric positive definite (SPD) tensors. The computation of the inverse of … (open full item for complete abstract)

    Committee: Lothar Reichel (Advisor, Committee Chair); Xiaoyu Zheng (Committee Member); Barry Dunietz (Committee Member); Sergij Shyanovskii (Committee Member); Jing Li (Committee Member) Subjects: Applied Mathematics
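    Entry 1's core recipe, Tikhonov regularization with the parameter chosen by the discrepancy principle, can be sketched in the ordinary matrix setting. The thesis works with tensor t-products and Krylov-type reductions; the toy below deliberately drops all of that, and the operator and noise level are invented for illustration.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def discrepancy_lambda(A, b, noise_norm, lams):
    """Largest lambda whose residual does not exceed the noise level (tau = 1.01)."""
    for lam in sorted(lams, reverse=True):
        x = tikhonov(A, b, lam)
        if np.linalg.norm(A @ x - b) <= 1.01 * noise_norm:
            return lam, x
    lam = min(lams)
    return lam, tikhonov(A, b, lam)

rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 40), 12)   # a mildly ill-conditioned operator
x_true = rng.standard_normal(12)
noise = 1e-3 * rng.standard_normal(40)
b = A @ x_true + noise
lam, x = discrepancy_lambda(A, b, np.linalg.norm(noise), np.logspace(-6, 0, 40))
```

    The discrepancy principle stops shrinking lambda once the data are fit down to the noise level, which is exactly the role it plays for the regularization parameters and iteration counts in the abstract.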
  • 2. Chen, Quan Higher symmetries in operator algebras

    Doctor of Philosophy, The Ohio State University, 2023, Mathematics

    This dissertation consists of two self-contained papers from my graduate work at Ohio State University. In Chapter 2, we review the definition of C*-algebras, von Neumann algebras, and C*/W* categories and 2-categories. Some of this background material is taken from [GLR85, CHPJP22]. We also review the 2-categories C*Alg/W*Alg of C*/W*-correspondences and the Q-system realization construction, taken directly from my articles [CHPJP22, CP22]. Chapter 3 is joint work with Corey Jones and David Penneys [CJP21]. First, we discuss the construction of a unitary braided tensor category End_loc(C) from a given W*-category C. When C is the category of finitely generated projective modules over a type II1 factor M, the underlying tensor category of the dualizable part of End_loc(Mod_fgp(M)) is Connes' bimodule version χ(M) due to Popa. Second, for each unitary fusion category C, we construct a II1 factor M such that χ(M) ≅ Z(C). Chapter 4 is joint work with Roberto Hernández Palomares and Corey Jones [CPJ22]. We introduce a K-theoretic invariant for actions of unitary fusion categories on unital C*-algebras. We show that for inductive limits of finite dimensional actions of fusion categories on AF-algebras, this is a complete invariant. In particular, this gives a complete invariant for inductive limit actions of finite groups on unital AF-algebras. We apply our results to obtain a classification of finite depth, strongly AF-inclusions of unital AF-algebras.

    Committee: David Penneys (Advisor) Subjects: Mathematics
  • 3. Huston, Peter Tensor Category Constructions in Topological Phases of Matter

    Doctor of Philosophy, The Ohio State University, 2022, Mathematics

    This dissertation consists of adaptations of two papers exploring the connections between tensor categories and localized excitations in (2+1)D topologically ordered systems. Topological orders have long been studied in terms of monoidal higher categories. In particular, topological orders in (2+1)D are obtained by string-net condensation from the data of unitary fusion categories, and topological domain walls and phase transitions between (2+1)D topological orders arise from anyon condensation, which can be understood in terms of the internal Morita theories of etale algebra objects in unitary modular tensor categories. Complementing existing models for domain walls, we introduce an extension of the Levin-Wen string-net model in which tuning a parameter implements anyon condensation. We then use tube algebra techniques to verify that the expected topological order results. We also describe the effects of the phase transition on anyons and string operators of the original topological order. We study compositions of parallel topological domain walls and their decompositions into superselection sectors. Our approach uses a description of particle mobility across domain walls in terms of tunneling operators. These are formalized in a 3-category of (2+1)D topological orders with a fixed anomaly described by a unitary modular tensor category A, algebraically characterized by the 3-category of A-enriched unitary fusion categories. We describe how A-enriched unitary fusion categories determine commuting projector string-net models for the corresponding anomalous (2+1)D topological orders on the boundary of (3+1)D Walker-Wang models with trivial topological order. We then give an explicit description in terms of etale algebra objects of the local operators which determine superselection sectors of parallel domain walls, characterizing the indecomposable domain wall in each sector.

    Committee: David Penneys (Advisor); Niles Johnson (Committee Member); Thomas Kerler (Committee Member) Subjects: Mathematics
  • 4. Teng, Peiyuan Tensor network and neural network methods in physical systems

    Doctor of Philosophy, The Ohio State University, 2018, Physics

    In this dissertation, new ideas and methods from tensor network theory and neural network theory are discussed. Firstly, common computational methods, such as the exact diagonalization method, the Density Matrix Renormalization Group approach, and tensor network theory, are reviewed. Following this direction, a way of generalizing the tensor renormalization group (TRG) to all spatial dimensions is proposed. Mathematically, a connection between patterns of the tensor renormalization group and the concept of a truncation sequence in polytope geometry is discovered. A theoretical contraction framework is proposed. Furthermore, the canonical polyadic decomposition is introduced to tensor network theory. A numerical verification of this method on the 3-D Ising model is carried out. Secondly, this dissertation includes an efficient way of calculating the geometric measure of entanglement using tensor decomposition methods. The connection between these two concepts is explored using the tensor representation of the wavefunction. Numerical examples are benchmarked and compared. Furthermore, highly entangled qubit states are searched for to show the applicability of this method. Finally, machine learning approaches are reviewed. Machine learning methods are applied to quantum mechanics. The radial basis function network in a discrete basis is used as the variational wavefunction for the ground state of a quantum system. Variational Monte Carlo (VMC) calculations are carried out for some simple Hamiltonians. The results are in good agreement with theoretical values. The smallest eigenvalue of a Hermitian matrix can also be acquired using VMC calculations. These results demonstrate that machine learning techniques are capable of solving quantum mechanical problems.

    Committee: Yuan-Ming Lu Dr. (Advisor); Ciriyam Jayaprakash Dr. (Committee Member); Jay Gupta Dr. (Committee Member); Comert Kural Dr. (Committee Member) Subjects: Physics
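    One concrete thread in entry 4, computing the geometric measure of entanglement through a tensor decomposition, can be illustrated with the classic higher-order power iteration for a best rank-1 fit; the dissertation's exact algorithms may differ, and the GHZ example below is a standard textbook check rather than data from the thesis.

```python
import numpy as np

def rank1_power(T, iters=200, seed=0):
    """Best rank-1 approximation of a 3-way tensor by alternating power steps."""
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal(n) for n in T.shape)
    a, b, c = (v / np.linalg.norm(v) for v in (a, b, c))
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)   # overlap with the product state
    return lam, a, b, c

# GHZ state (|000> + |111>)/sqrt(2): the best product-state overlap is 1/sqrt(2),
# so the geometric measure 1 - lam^2 equals 0.5.
ghz = np.zeros((2, 2, 2)); ghz[0, 0, 0] = ghz[1, 1, 1] = 1 / np.sqrt(2)
lam, *_ = rank1_power(ghz)
entanglement = 1 - lam**2
```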
  • 5. Srivastava, Rohit Kumar Modeling Performance of Tensor Transpose using Regression Techniques

    Master of Science, The Ohio State University, 2018, Computer Science and Engineering

    Tensor transposition is an important primitive in many tensor algebra libraries. For example, tensor contractions are implemented using the TTGT (Transpose-Transpose-GEMM-Transpose) approach. Performing an efficient transpose of an arbitrary tensor requires different optimization techniques depending on the required permutation. Exhaustive evaluation of all parameter choices, such as slice size and blocking, is prohibitively expensive. We present an approach to model the performance of the different kernels inside TTLG, a Tensor Transpose Library for GPUs, for different parameters such as slice size, blocking, and resultant warp efficiency. Predictions made by this model are then used to guide kernel and parameter selection.

    Committee: Ponnuswamy Sadayappan (Advisor); Radu Teodorescu (Committee Member) Subjects: Computer Science
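    The TTGT approach named in entry 5 is easy to demonstrate with NumPy: permute so the contracted index lines up, flatten to matrices, run one GEMM, and permute the result back. The contraction and layouts below are invented for illustration and are not taken from TTLG.

```python
import numpy as np

def ttgt(A, B):
    """Compute C[m,i,j] = sum_k A[i,k,j] * B[m,k] by Transpose-GEMM-Transpose."""
    I, K, J = A.shape
    M, K2 = B.shape
    assert K == K2
    At = A.transpose(0, 2, 1).reshape(I * J, K)    # transpose: bring k innermost, flatten (i,j)
    Ct = At @ B.T                                  # GEMM: (i*j, k) x (k, m)
    return Ct.reshape(I, J, M).transpose(2, 0, 1)  # transpose back to the (m, i, j) layout

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((6, 4))
C = ttgt(A, B)
```

    The two permutation steps are where the memory-bandwidth cost hides, which is why modeling the transpose kernels, as this thesis does, pays off.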
  • 6. Sahoo, Swarup Integrated loop optimizations for data locality enhancement of tensor contraction expressions

    Master of Science, The Ohio State University, 2005, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 7. Vangani, Harsh Modeling the geoid and the deflection of the vertical using tensor-product B-splines on the sphere

    Master of Science, The Ohio State University, 2005, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 8. Feng, Xiaozhou Phase transitions in the dynamics of quantum information: fracton systems and quantum trees

    Doctor of Philosophy, The Ohio State University, 2023, Physics

    Random quantum circuits play an important role in the study of non-equilibrium properties of quantum many-body systems, such as entanglement and thermalization. In this thesis, we study the spreading of quantum information in two important examples of random circuits: a quantum circuit with charge and dipole conservation laws, and the random tree tensor network, which corresponds to the all-to-all circuit. We find that the dynamics of these two systems can fall into different phases depending on the spreading of quantum information in the system/Hilbert space. We first study the time evolution of a one-dimensional fracton system with charge and dipole moment conservation using a random unitary circuit description. Previous work has shown that when the random unitary operators act on four or more sites, an arbitrary initial state eventually thermalizes via a universal subdiffusive dynamics. In contrast, a system evolving under three-site gates fails to thermalize due to strong "fragmentation" of the Hilbert space. Here we show that three-site gate dynamics causes a given initial state to evolve toward a highly nonthermal state on a time scale consistent with Brownian diffusion. Strikingly, the dynamics produces an effective attraction between isolated fractons or between a single fracton and the boundaries of the system, as in the Casimir effect of quantum electrodynamics. We show how this attraction can be understood by an exact mapping to a simple classical statistical mechanics problem, which we solve exactly for the case of an initial state with either one or two fractons. Moreover, we find that in the limit where the maximal single charge goes to infinity, the fragmentation of the Hilbert space at different fillings can be solved by mapping to a mathematical tournament problem. Using this mapping, we show that the system exhibits a transition from a non-thermalized to a thermalized phase as the filling number is tuned, as observed in previous work.
    … (open full item for complete abstract)

    Committee: Brian Skinner (Advisor); Jeanie Lau (Committee Member); Samir Mathur (Committee Member); Mohit Randeria (Committee Member) Subjects: Physics
  • 9. Ntiamoah, Daniel Analysis of Order Strategies for Alternating Algorithms in Optimization

    Doctor of Philosophy (PhD), Ohio University, 2023, Mathematics (Arts and Sciences)

    We investigate the choice of order of directions for Alternating Coordinate or Block Coordinate Descent algorithms. It is known that when starting the algorithm, the ordering of blocks in the first few steps can be important. We find for three blocks that it is beneficial to begin an alternating algorithm by testing different permutations of the blocks, i.e., 6 different orders for the first three steps. We study which combinations of orders are most effective and design a testing strategy that tests the 6 permutations for 2 full passes. This strategy outperforms random order, fixed order, and other testing strategies in fair comparisons. On quadratic test problems, we prove that our strategy for three directions is optimal among strategies employing two full passes and never gives bad results. For 4 search directions we find a similar strategy that tests 2 full passes with 24 permutations. We observe that testing orders is most beneficial on hard (i.e., ill-conditioned) problems. As the number of directions and the dimension of the problem grow, the initial search orders still matter, but the number of steps needed for the best order to emerge grows. Further, as the number of directions grows, the number of permutations of directions grows factorially. It may become impossible to find the optimal order efficiently, but our observations suggest efficient strategies for finding a good order for the first steps.

    Committee: Todd Young (Advisor) Subjects: Mathematics
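    The order-testing strategy of entry 9 can be sketched on a small quadratic: run each of the 6 block orders for 2 full passes, keep the best starting order, then continue with it. The 3x3 problem and one-dimensional blocks below are invented for illustration.

```python
import numpy as np
from itertools import permutations

def f(x, Q, b):
    return 0.5 * x @ Q @ x - b @ x

def block_step(x, Q, b, i):
    """Exact minimization of f over coordinate block i."""
    x = x.copy()
    x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
    return x

def best_order_start(x0, Q, b, passes=2):
    """Try every permutation of the blocks for `passes` full passes; keep the best."""
    best = None
    for order in permutations(range(len(x0))):
        x = x0.copy()
        for _ in range(passes):
            for i in order:
                x = block_step(x, Q, b, i)
        if best is None or f(x, Q, b) < best[0]:
            best = (f(x, Q, b), order, x)
    return best

Q = np.array([[100., 9., 1.], [9., 10., 2.], [1., 2., 1.]])  # ill-conditioned, SPD
b = np.array([1., 2., 3.])
val, order, x = best_order_start(np.zeros(3), Q, b)
```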
  • 10. Yang, Seo Eun Texts, Images, and Emotions in Political Methodology

    Doctor of Philosophy, The Ohio State University, 2022, Political Science

    My dissertation comprises (1) the development of a machine learning framework that combines verbal and visual features, models the intricate web of relationships between them, and extracts visual semantics, and (2) the application of deep learning and transfer learning frameworks to extract emotions from social media posts. This dissertation consists of three papers, as follows. The first paper introduces a machine-learning visual framing analysis to examine the visual and verbal patterns of online news reporting and explore image-text relations in news stories. The second paper presents a machine-learning multimodal framing analysis that integrates various types of data (e.g., image, text, and metadata) simultaneously and extracts their joint semantic meaning. The third paper applies deep learning and transfer learning to show the power of Twitter in providing fine-grained measures of real-time emotions, and thereby offers a comprehensive overview of the role of emotions in voting participation. My dissertation takes into account various types of data simultaneously and extracts politically meaningful semantics using computer vision, NLP, graph theory, high-dimensional statistics, and transfer learning.

    Committee: Skyler Cranmer (Committee Chair); Janet Box-Steffensmeier (Committee Member); Robert Bond (Committee Member) Subjects: Communication; Computer Science; Political Science
  • 11. Khatiwada, Pawan An Introduction to Tensor Networks and Matrix Product States with Applications in Waveguide Quantum Electrodynamics

    Master of Science, Miami University, 2021, Physics

    The operation of state-of-the-art quantum computers requires the coherent control of hundreds of quantum bits (qubits) to store, manipulate, and transfer information quantum mechanically. The traditional theoretical tools adopted in the field of quantum optics, such as the master equation approach, quantum Langevin equations, the input-output formalism, and the real-space method, face challenges here, as these techniques are better suited to few-qubit problems. Offering a solution to this issue, in this thesis we present an introduction to tensor network (TN) theory and Matrix Product States (MPS) as a formalism to predict ground states and the time evolution of interesting excited states of many-qubit architectures. As a simple example, we apply TN theory to waveguide quantum electrodynamics architectures to analyze the excitation dynamics of two-level quantum emitters coupled to one-dimensional guided photonic modes.

    Committee: Imran Mirza PhD (Advisor); Herbert Jaeger PhD (Committee Member); Samir Balli PhD (Committee Chair) Subjects: Information Science; Optics; Physics; Quantum Physics; Theoretical Physics
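    The MPS construction reviewed in entry 11 is concrete enough to sketch: repeatedly reshape the state vector and take an SVD, peeling off one site tensor at a time. The index conventions below are one common choice, assumed for illustration.

```python
import numpy as np

def to_mps(psi, n):
    """Factor an n-qubit state vector into cores G[k] of shape (chi_l, 2, chi_r)."""
    cores, chi = [], 1
    rest = psi.reshape(chi * 2, -1)
    for _ in range(n - 1):
        U, S, Vh = np.linalg.svd(rest, full_matrices=False)
        r = S.shape[0]                      # bond dimension (no truncation here)
        cores.append(U.reshape(chi, 2, r))
        chi = r
        rest = (np.diag(S) @ Vh).reshape(chi * 2, -1)
    cores.append(rest.reshape(chi, 2, 1))
    return cores

def contract(cores):
    """Rebuild the full state vector from the MPS cores."""
    psi = cores[0]
    for G in cores[1:]:
        psi = np.einsum('...a,aib->...ib', psi, G)
    return psi.reshape(-1)

rng = np.random.default_rng(2)
psi = rng.standard_normal(2 ** 4); psi /= np.linalg.norm(psi)
mps = to_mps(psi, 4)
```

    Truncating the small singular values at each split is what lets MPS scale to the many-emitter waveguide problems the thesis targets.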
  • 12. Dowling, Brendan Mixed Size XOR Strong Refutation

    Master of Sciences, Case Western Reserve University, 2020, EECS - Computer and Information Sciences

    An esteemed 2016 paper set a new lower bound on the clause density required to strongly refute a k-XOR formula in polynomial time. This paper used the sum of squares algorithm in conjunction with an improved bound for the injective tensor norm to reach its result, which is limited to formulas with all clauses the same size. We consider how this technique could be expanded to formulas with mixed clause sizes. We specifically focus our efforts on what we view as the simplest combination of clause sizes: an XOR formula with clauses of sizes k and 2k with k even. While this thesis does not establish a new refutation bound for this mixed size formula, it does give a prospective structure for the proof, and shows how to expand the 2016 paper's techniques to mixed size problems. Additionally, this thesis gives an overview of the 2016 paper for new researchers.

    Committee: Harold Connamacher (Advisor); Vincenzo Liberatore (Committee Member); Shuai Xu (Committee Member) Subjects: Computer Science; Mathematics
  • 13. Krishnaswamy, Sriram On Computationally Efficient Frameworks For Data Association In Multi-Target Tracking

    Doctor of Philosophy, The Ohio State University, 2019, Mechanical Engineering

    The aim of this dissertation is to examine ways of improving the computational efficiency of data association algorithms in tracking, and to do so with better methods to handle data. Data association algorithms are employed in tracking problems in conjunction with an estimation algorithm to determine the optimal state estimate of multiple objects of interest given a set of measurements. This work primarily deals with Bayesian or pseudo-Bayesian paradigms for data association and reduces the computational cost by curbing the exponential growth, the so-called "curse of dimensionality," in these problems. This growth in the number of hypotheses is exacerbated in dense environments with low signal-to-noise ratio (SNR). This research employs tensor decomposition to reduce the incoming measurements to a core tensor, a low-dimensional summary, and uses it as a substitute for the complete set of measurements. The underlying data association considered in this research is Joint Probabilistic Data Association (JPDA), a pseudo-Bayesian, sub-optimal, single-scan data association algorithm for multiple measurements. JPDA determines the best measurement by constructing a series of hypotheses, known as feasible events, based on a binary matrix, known as the validation matrix, that indicates whether a given measurement could have originated from a target. The primary bottleneck in JPDA is that the number of feasible events grows exponentially with the number of measurements in scenarios with no other information present to distinguish the targets. Tensors, fundamentally, are multi-dimensional arrays typically used for data storage and transfer. Tensor decomposition, or high-dimensional principal component analysis (PCA), reduces a tensor into two components: a low-dimensional summary known as the core, and a set of transformation matrices known as projection matrices.
    Dynamic Tensor Analysis (DTA) adapts this idea for a stream of data… (open full item for complete abstract)

    Committee: Mrinal Kumar (Advisor); Levent Guvenc (Committee Member); Ran Dai (Committee Member) Subjects: Aerospace Engineering; Mechanical Engineering
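    The "core tensor as a low-dimensional summary" idea in entry 13 is essentially a truncated higher-order SVD (Tucker decomposition): SVD each unfolding, keep the leading singular vectors, and project the measurement tensor onto them. The ranks and synthetic data below are assumptions for illustration.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: return the small core and the projection matrices."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = T
    for m, Um in enumerate(U):
        core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, U

rng = np.random.default_rng(3)
# Synthetic "measurement tensor": rank-2 signal plus small noise
signal = np.einsum('ia,ja,ka->ijk', *(rng.standard_normal((n, 2)) for n in (10, 12, 8)))
T = signal + 0.01 * rng.standard_normal((10, 12, 8))
core, U = hosvd(T, (2, 2, 2))
# Reconstruct from the tiny core: it stands in for the complete set of measurements
R = core
for m, Um in enumerate(U):
    R = np.moveaxis(np.tensordot(Um, np.moveaxis(R, m, 0), axes=1), 0, m)
```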
  • 14. Kim, Jinsung Optimizing Tensor Contractions on GPUs

    Doctor of Philosophy, The Ohio State University, 2019, Computer Science and Engineering

    Tensor contractions are higher-dimensional analogs of matrix multiplications. They form the key kernels in many computational contexts, such as high-order models in quantum chemistry, deep learning, and finite element methods. In contrast to the wide availability of high-performance libraries for matrix multiplication on GPUs, the same is not true for tensor contractions. This dissertation introduces approaches for optimizing tensor contractions on GPUs. Tensor contractions are computationally expensive; hence, they bound the performance of many applications. For example, in the Cluster Singles and Doubles with perturbative Triples correction (CCSD(T)) method in NWChem (a widely used computational chemistry suite), tensor contractions are used to model reasonably large molecules and are the dominant bottleneck of the method. The performance of tensor contractions can be improved by minimizing the data movement volume and exploiting hardware features. For a sequence of tensor contractions, fusion is a powerful technique to reduce data movement. Our tensor contraction kernels for CCSD(T) reduce the data traffic and achieve up to 8x speedup over state-of-the-art techniques. Generating efficient code for arbitrary tensor contractions is extremely difficult, especially for GPUs. The choice of mapping the tensor dimensions to the various hardware thread dimensions, such as thread blocks and warps, determines the amount of data movement. The biggest challenge in choosing an efficient mapping is that the set of possible mappings is huge: exponential with respect to the number of dimensions of the tensor. In most cases, exhaustive search and auto-tuning are prohibitively expensive. Fortunately, performance models that predict the data movement, along with heuristic pruning of the search space, can drastically reduce the search space.
    Our tensor contraction code generator can generate efficient CUDA tensor codes using a data movement… (open full item for complete abstract)

    Committee: Ponnuswamy Sadayappan (Advisor); Atanas Rountev (Committee Member); Radu Teodorescu (Committee Member) Subjects: Computer Engineering; Computer Science
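    The role a data-movement model plays in pruning the mapping search (entry 14) can be shown with a toy: for a tiled matrix multiplication C[i,j] = sum_k A[i,k]*B[k,j], each Ti x Tj output block reads Ti*K elements of A and Tj*K of B, so the model picks the tile shape minimizing traffic under an on-chip budget. The candidate set and all numbers are illustrative assumptions, not the dissertation's model.

```python
def traffic(I, J, K, Ti, Tj):
    """Modeled global-memory traffic for Ti x Tj output tiles."""
    blocks = (I // Ti) * (J // Tj)
    return blocks * (Ti * K + Tj * K) + I * J      # reads of A and B, plus writes of C

def best_tile(I, J, K, budget):
    """Search a small candidate set instead of the exponential mapping space."""
    cands = [(Ti, Tj) for Ti in (8, 16, 32, 64) for Tj in (8, 16, 32, 64)
             if Ti * Tj <= budget and I % Ti == 0 and J % Tj == 0]
    return min(cands, key=lambda t: traffic(I, J, K, *t))

Ti, Tj = best_tile(1024, 1024, 1024, budget=1024)
# At a fixed tile area, square tiles minimize Ti + Tj, so the model picks 32 x 32.
```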
  • 15. Nisa, Israt Architecture-aware Algorithm Design of Sparse Tensor/Matrix Primitives for GPUs

    Doctor of Philosophy, The Ohio State University, 2019, Computer Science and Engineering

    Sparse matrix/tensor operations have been a common computational motif in a wide spectrum of domains: numerical linear algebra, graph analytics, machine learning, health care, etc. Sparse kernels play a key role in numerous machine learning algorithms, and the rising popularity of this domain increases the significance of primitives like SpMV (Sparse Matrix-Vector Multiplication), SDDMM (Sampled Dense-Dense Matrix Multiplication), and MF/TF (Sparse Matrix/Tensor Factorization). These primitives are data-parallel and highly suitable for GPU-like architectures that provide massive parallelism. Real-world matrices and tensors are large-scale, with millions of data points, which is sufficient to utilize all the cores of a GPU. Yet a data-parallel algorithm can become the bottleneck of an application and perform well below the upper bound of the roofline model. Some common reasons are frequent irregular global memory accesses, low data reuse, and imbalanced work distribution. However, efficient utilization of the GPU memory hierarchy, reduced thread communication, increased data locality, and an even workload distribution can provide ample opportunities for significant performance improvement. The challenge lies in applying these techniques across applications and achieving consistent performance in spite of the irregularity of the input matrices or tensors. In this work, we systematically identify the performance bottlenecks of important sparse algorithms and provide optimized, high-performing solutions. At the beginning of this dissertation, we explore the application of cost-effective ML techniques to the format selection and performance modeling problem in the SpMV domain. By identifying a small set of sparse matrix features to use in training the ML models, we are able to select the best storage format and predict the execution time of an SpMV kernel.
    Next, we optimize the SDDMM kernel, which is a key bottleneck in… (open full item for complete abstract)

    Committee: P. (Saday) Sadayappan (Advisor); Atanas Rountev (Committee Member); Radu Teodorescu (Committee Member) Subjects: Computer Science
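    Two ingredients of entry 15 are simple enough to ground in code: a few cheap sparsity features of the kind a format-selection model could be trained on, and a reference CSR SpMV defining what the kernel computes. This particular feature set is an assumption for illustration, not necessarily the dissertation's.

```python
import numpy as np

def csr_from_dense(A):
    """Build CSR arrays (indptr, indices, data) from a dense matrix."""
    indptr, indices, data = [0], [], []
    for row in A:
        nz = np.nonzero(row)[0]
        indices.extend(nz); data.extend(row[nz]); indptr.append(len(indices))
    return np.array(indptr), np.array(indices), np.array(data)

def spmv_csr(indptr, indices, data, x):
    """Reference row-wise CSR sparse matrix-vector product."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

def features(indptr, n_cols):
    """Cheap features capturing size and row imbalance of the sparsity pattern."""
    row_nnz = np.diff(indptr)
    return {"nnz": int(row_nnz.sum()),
            "density": row_nnz.sum() / (len(row_nnz) * n_cols),
            "mean_row_nnz": float(row_nnz.mean()),
            "row_nnz_cv": float(row_nnz.std() / max(row_nnz.mean(), 1e-12))}

A = np.array([[4., 0., 1.], [0., 0., 2.], [0., 3., 0.]])
indptr, indices, data = csr_from_dense(A)
y = spmv_csr(indptr, indices, data, np.array([1., 2., 3.]))
```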
  • 16. Hong, Changwan Code Optimization on GPUs

    Doctor of Philosophy, The Ohio State University, 2019, Computer Science and Engineering

    Graphics Processing Units (GPUs) have become popular in the last decade due to their high memory bandwidth and powerful computing capacity. Nevertheless, achieving high performance on GPUs is not trivial. It generally requires significant programming expertise and an understanding of the details of low-level execution mechanisms in GPUs. This dissertation introduces approaches for optimizing regular and irregular applications. To optimize regular applications, it introduces a novel approach to GPU kernel optimization that identifies and alleviates bottleneck resources. This approach, however, is not effective for irregular applications because of their data-dependent branches and memory accesses. Hence, tailored approaches are developed for two popular domains of irregular applications: graph algorithms and sparse matrix primitives. Performance modeling for GPUs is carried out by abstract kernel emulation along with latency/gap modeling of resources. Sensitivity analysis with respect to resource latency/gap parameters is used to predict the bottleneck resource for a given kernel's execution. The utility of the bottleneck analysis is demonstrated in two contexts: i) enhancing the OpenTuner auto-tuner with a new bottleneck-driven optimization strategy, with effectiveness demonstrated by experimental results on all kernels from the Rodinia suite and on GPU tensor contraction kernels from the NWChem computational chemistry suite; ii) manual code optimization, where two case studies illustrate the use of bottleneck analysis to iteratively improve the performance of code from state-of-the-art DSL code generators. As noted, this approach is ineffective for irregular applications such as graph algorithms and sparse linear systems. Graph algorithms are used in various applications, and high-level GPU graph processing frameworks are an attractive alternative for achieving both high productivity and high performance.
    This dissertation develops an approach to graph processing on GPUs… (open full item for complete abstract)

    Committee: Ponnuswamy Sadayappan (Advisor); Atanas Rountev (Committee Member); Radu Teodorescu (Committee Member) Subjects: Computer Science
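    The bottleneck-sensitivity idea in entry 16 reduces to a small computation: model kernel time as the maximum over resources of (work on that resource) x (its latency/gap parameter), perturb each parameter, and call the most sensitive resource the bottleneck. The model form and all numbers below are assumptions for illustration.

```python
def predicted_time(work, gaps):
    """Bound-style model: the slowest resource determines the kernel time."""
    return max(work[r] * gaps[r] for r in work)

def bottleneck(work, gaps, eps=0.01):
    """Resource whose latency/gap perturbation changes predicted time the most."""
    base = predicted_time(work, gaps)
    sens = {}
    for r in gaps:
        bumped = dict(gaps)
        bumped[r] *= 1 + eps
        sens[r] = predicted_time(work, bumped) - base
    return max(sens, key=sens.get)

# Hypothetical kernel: operation counts per resource and per-op gaps (cycles)
work = {"dram": 1_000_000, "shared_mem": 4_000_000, "alu": 8_000_000}
gaps = {"dram": 4.0, "shared_mem": 0.5, "alu": 0.25}
worst = bottleneck(work, gaps)
```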
  • 17. McClatchey, Nathaniel Tensors: An Adaptive Approximation Algorithm, Convergence in Direction, and Connectedness Properties

    Doctor of Philosophy (PhD), Ohio University, 2018, Mathematics (Arts and Sciences)

    This dissertation addresses several problems related to low-rank approximation of tensors. Low-rank approximation of tensors is plagued by slow convergence of the sequences produced by popular algorithms such as Alternating Least Squares (ALS), by ill-posed approximation problems which cause divergent sequences, and by poor understanding of the nature of low-rank tensors. Though ALS may produce slowly-converging sequences, ALS remains popular due to its simplicity, its robustness, and the low computational cost for each iteration. I apply insights from Successive Over-Relaxation (SOR) to ALS, and develop a novel adaptive method based on the resulting Successive Over-Relaxed Least Squares (SOR-ALS) method. Numerical experiments indicate that the adaptive method converges more rapidly than does the original ALS algorithm in almost all cases. Moreover, the adaptive method is as robust as ALS, is only slightly more complicated than ALS, and each iteration requires little computation beyond that of an iteration of ALS. Divergent sequences in tensor approximation may be studied by examining their images under some map. In particular, such sequences may be re-scaled so that they become bounded, provided that the objective function is altered correspondingly. I examine the behavior of sequences produced when optimizing bounded multivariate rational functions. The resulting theorems provide insight into the behavior of certain divergent sequences. Finally, to improve understanding of the nature of low-rank tensors, I examine connectedness properties of spaces of low-rank tensors. I demonstrate that spaces of unit tensors of bounded rank are path-connected if the space of unit vectors in at least one of the factor spaces is path-connected, and that spaces of unit separable tensors are simply-connected if the unit vectors are simply-connected in every factor space. Moreover, I partially address simple connectedness for unit tensors of higher rank.

    Committee: Martin Mohlenkamp (Advisor); Todd Young (Committee Member); Winfried Just (Committee Member); David Juedes (Committee Member) Subjects: Mathematics
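    The SOR-ALS idea of entry 17 can be sketched for a rank-1 fit of a 3-way tensor: after each exact least-squares update, take an over-relaxed step x + omega*(x_ls - x). The fixed omega and the normalization scheme below are simplifications; the dissertation's method chooses the relaxation adaptively.

```python
import numpy as np

def sor_als_rank1(T, omega=1.3, iters=100, seed=0):
    """Rank-1 ALS with a fixed over-relaxation factor omega (omega=1 is plain ALS)."""
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal(n) for n in T.shape)
    b = b / np.linalg.norm(b); c = c / np.linalg.norm(c)
    for _ in range(iters):
        a_ls = np.einsum('ijk,j,k->i', T, b, c)            # exact LS step (b, c unit)
        a = a + omega * (a_ls - a)                         # over-relaxed update
        b_ls = np.einsum('ijk,i,k->j', T, a, c) / (a @ a)
        b = b + omega * (b_ls - b); b = b / np.linalg.norm(b)
        c_ls = np.einsum('ijk,i,j->k', T, a, b) / (a @ a)
        c = c + omega * (c_ls - c); c = c / np.linalg.norm(c)
    return a, b, c

# Exactly rank-1 test tensor: the fit should recover it (up to scaling)
u, v, w = np.array([1., 2.]), np.array([3., 1., 1.]), np.array([1., 1.])
T = np.einsum('i,j,k->ijk', u, v, w)
a, b, c = sor_als_rank1(T)
```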
  • 18. Makola, Monwabisi The Long Term Effects of Radiation Therapy on White Matter Integrity and Information Processing Speed: A Diffusion Tensor Imaging Study in Pediatric Brain Tumor Patients

    PhD, University of Cincinnati, 2017, Medicine: Neuroscience/Medical Science Scholars Interdisciplinary

    Background – Radiation therapy (RT) is a very common treatment for pediatric brain tumors. Unfortunately, RT is associated with long-term cognitive impairments. White matter (WM) tracts, such as the corpus callosum (CC), mediate processing speed which , in turn, mediates overall cognitive ability. Pediatric RT patients experience WM deficits and long-term deficits in processing speed and intelligence quotient (IQ). However, long-term effects of RT on WM integrity in these patients are unknown. Patients who receive RT frequently undergo surgery, but it is unclear how surgery affects long-term WM integrity and cognitive ability. WM integrity, age, gender, highest parent education and occupation, and family income are affecters of intelligence, but their effects on cognitive ability in pediatric RT patients are unknown. Objective - 1) to determine the long-term effects of RT on CC integrity as well as processing speed and IQ two years after treatment while controlling for surgery; 2) To determine the effects of non-treatment, intelligence-associated factors on post-treatment cognitive ability; 3) to quantity the relationship between radiation dose and WM integrity in pediatric brain tumor patients. Methods –Diffusion tensor imaging (DTI) was used to quantify WM integrity of the genu (GCC), body (BCC), and splenium (SCC) of the CC. Preschool, child, and adult variants of the Wechsler intelligence assessments measured processing speed and IQ. Participants included pediatric brain tumor patients diagnosed between 3 and 16 years old. They underwent baseline evaluations 3 to 12 months after surgery, and approximately two years later. Participants treated with surgery and RT were matched to those who didn't receive RT. Baseline and two-year DTI, processing speed, and IQ data were compared between groups at each time point and to each other within each group. 
Along with RT status, factors including baseline BCC integrity, age, gender, highest parent education and occupa (open full item for complete abstract)
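    The abstract does not name the specific DTI metric used, but WM integrity in such studies is commonly summarized by fractional anisotropy (FA) computed from the three eigenvalues of the fitted diffusion tensor. A minimal sketch of that standard calculation, under the assumption that FA was the integrity measure:

    ```python
    import numpy as np

    def fractional_anisotropy(evals):
        """FA from the three eigenvalues of a diffusion tensor.

        FA is 0 for perfectly isotropic diffusion and approaches 1 for
        diffusion confined to a single direction (an intact WM tract).
        """
        l1, l2, l3 = evals
        mean = (l1 + l2 + l3) / 3.0
        num = np.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
        den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
        return np.sqrt(1.5) * num / den if den > 0 else 0.0

    # Hypothetical eigenvalues (mm^2/s) for a coherent WM voxel:
    fa = fractional_anisotropy((1.7e-3, 0.3e-3, 0.3e-3))
    ```

    Lower FA in the genu, body, or splenium at follow-up relative to matched controls would indicate reduced WM integrity in that CC subregion.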

    Committee: Kim Cecil Ph.D. (Committee Chair); Caleb Adler Ph.D. (Committee Member); Scott Holland Ph.D. (Committee Member); Trent Hummel M.D. (Committee Member); Michael Lamba Ph.D. (Committee Member) Subjects: Neurology
  • 19. Vishal, Kumar Nonreciprocal magnetostatic surface wave in thin ferromagnetic film

    Master of Science in Electrical Engineering (MSEE), Wright State University, 2016, Electrical Engineering

    Interest in the nonreciprocal propagation of waves inside magnetic materials stems from recent advances in passive devices. In current technology, passive devices without magnetic cores are realized on silicon chips; these devices are usually lossy and large. Passive devices built around a magnetic core material, such as isolators, circulators, phase shifters, and gyrators, offer a way to reduce such losses. Nonreciprocity in magnetic materials arises from the anisotropy of the permeability, which is complex-valued and represented in 3×3 matrix form. The wave flow inside nonreciprocal devices can be controlled by modifying this matrix. This thesis covers the modeling, simulation, and investigation of ferromagnetic materials to enhance the nonreciprocity effect. Physical measurements were performed on actual devices, and the same parameters were then used to build models in the COMSOL Multiphysics software to verify the nonreciprocity. The thesis also suggests different ways to suppress eddy current losses in order to increase the nonreciprocity effect. COMSOL serves as the simulation tool for the RF passive device models containing magnetic core material; all simulation results were obtained from COMSOL, which was used to model the devices and to set the parameters defined for the ferromagnetic devices. The nonreciprocity inside the magnetic material was successfully investigated by analyzing the scattering parameters (S12 and S21). Designing any nonreciprocal device is challenging because propagation losses are high, and even a small change in structure can suppress the nonreciprocity effect. This work shows that the improved configurations presented here reduce loss and enhance overall device performance. A nonreciprocal spin wave has been observed in a ferromagnetic thin film placed between neighboring metallic layers. 
By proper selection of line width, spacing between signal lines, position, and thickness of f (open full item for complete abstract)
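    The nonreciprocity analysis described above compares forward and reverse transmission (S21 vs. S12). A common way to express this, sketched here as an illustration (the function name and sample values are hypothetical, not from the thesis), is the isolation in dB between the two directions:

    ```python
    import numpy as np

    def nonreciprocity_db(s21, s12):
        """Isolation between forward (S21) and reverse (S12) transmission, in dB.

        A reciprocal device gives 0 dB; larger values indicate stronger
        nonreciprocity, e.g., from a magnetostatic surface wave that only
        propagates efficiently in one direction.
        """
        return 20.0 * np.log10(abs(s21) / abs(s12))

    # Hypothetical measured transmission magnitudes:
    forward = 0.5   # |S21|
    reverse = 0.05  # |S12|
    isolation = nonreciprocity_db(forward, reverse)  # 20 dB of isolation
    ```

    In a simulation workflow like the one described, this figure of merit would be evaluated across frequency to locate the band where the ferromagnetic film provides useful one-way propagation.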

    Committee: Yan Zhuang Ph.D. (Advisor); Saiyu Ren Ph.D. (Committee Member); Marian Kazimierczuk Ph.D. (Committee Member); Brian Rigling Ph.D. (Other) Subjects: Electrical Engineering; Electromagnetics; Electromagnetism; Engineering; Materials Science
  • 20. Rajbhandari, Samyam Locality Optimizations for Regular and Irregular Applications

    Doctor of Philosophy, The Ohio State University, 2016, Computer Science and Engineering

    The fastest supercomputer in the world as of July 2016 is the Sunway TaihuLight. It can achieve a staggering performance of 93 PetaFlops. This incredible performance is achieved via massive parallelism. Today's supercomputers and compute clusters have tens of thousands of distributed-memory nodes, with each node composed of several shared-memory multi-/many-core processors. Scaling on these massively parallel systems is not an easy task. A major performance and scalability bottleneck is the limited data movement bandwidth, which can be orders of magnitude smaller than the computation bandwidth. Developing applications to scale on these massively parallel systems requires minimizing data movement volume at different levels of the memory hierarchy using locality optimization techniques. Locality optimization aims to reduce the data movement between slow and fast memory by rescheduling/remapping the original computation to reuse data once it is in fast memory, thereby avoiding subsequent movement of the same data from slow memory. This dissertation explores multiple aspects of locality optimization for enhancing the scalability and performance of various regular and irregular applications in massively parallel computing environments. It develops distributed algorithms, lower bound techniques, and compiler and runtime frameworks for optimizing Tensor Contractions, the Four-Index Transform, Convolutional Neural Networks (CNNs), and Recursive Tree Traversal on k-d trees. Each of these application domains is limited in performance and scalability primarily by data movement costs at a particular level of the memory hierarchy. To be specific, on a massively parallel system, distributed Tensor Contractions can have limited scalability due to the cost of communication between distributed memory nodes. 
The Four-Index Transform, on the other hand, can be limited in the size of the largest problem that can be completed in a reasonable amount of time due to data transfer cost from disk to (open full item for complete abstract)
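    The rescheduling-for-reuse idea described above is classically illustrated by loop tiling: a blocked matrix multiply keeps a small tile of each operand resident in fast memory and reuses it across many updates, instead of streaming the full operands repeatedly from slow memory. A minimal sketch of that general technique (not code from the dissertation):

    ```python
    import numpy as np

    def tiled_matmul(A, B, tile=64):
        """Blocked matrix multiply.

        Each (tile x tile) block of C is accumulated while its contributing
        blocks of A and B are resident in fast memory, so every loaded block
        is reused ~tile times before being evicted - the essence of a
        locality optimization. Illustrative sketch, not a tuned kernel.
        """
        n, k = A.shape
        k2, m = B.shape
        assert k == k2, "inner dimensions must match"
        C = np.zeros((n, m))
        for i in range(0, n, tile):
            for j in range(0, m, tile):
                for p in range(0, k, tile):
                    # accumulate the contribution of one block pair
                    C[i:i + tile, j:j + tile] += (
                        A[i:i + tile, p:p + tile] @ B[p:p + tile, j:j + tile]
                    )
        return C
    ```

    The same reuse principle generalizes from caches to distributed memory: there, the "tile" is the portion of a tensor replicated or kept on a node between communication phases, which is the setting the dissertation's tensor contraction algorithms target.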

    Committee: P. Sadayappan (Advisor) Subjects: Computer Engineering; Computer Science