Search Results (1 - 25 of 103 Results)

Hall, Brenton Taylor. Using the Non-Uniform Dynamic Mode Decomposition to Reduce the Storage Required for PDE Simulations
Master of Mathematical Sciences, The Ohio State University, 2017, Mathematical Sciences
Partial Differential Equation simulations can produce large amounts of data that are very slow to transfer. Many model reduction techniques have been proposed and utilized over the past three decades. Two popular techniques, Proper Orthogonal Decomposition and Dynamic Mode Decomposition, have some hindrances. Non-Uniform Dynamic Mode Decomposition (NU-DMD), introduced in 2015 by Gueniat et al., overcomes some of these hindrances. In this thesis, the NU-DMD's mathematics are explained in detail, and three versions of the NU-DMD's algorithm are outlined. Furthermore, different numerical experiments were performed on the NU-DMD to ascertain its behavior with respect to errors, memory usage, and computational efficiency. It was shown that the NU-DMD could reduce an advection-diffusion simulation to 6.0075% of its original memory storage size. The NU-DMD was also applied to a computational fluid dynamics simulation of a NASA single-stage compressor rotor, which resulted in a reduced model of the simulation (using only three of the five simulation variables) that used only about 4.67% of the full simulation's storage with an overall average percent error of 8.90%. It was concluded that the NU-DMD, if used appropriately, could reduce a model that uses 400GB of memory to a model that uses as little as 18.67GB with less than 9% error. Further conclusions were made about how to best implement the NU-DMD.
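As background on the kind of decomposition this thesis builds on, the following is a minimal sketch of standard (uniformly sampled) Dynamic Mode Decomposition in Python; it is not the NU-DMD algorithm described in the thesis, and the snapshot matrix `X` and rank `r` are hypothetical.

```python
import numpy as np

def dmd(X, r):
    """Standard DMD of a snapshot matrix X (states x snapshots), truncated to rank r.

    Returns DMD modes and eigenvalues; storing only the r modes and their
    temporal dynamics is where the memory savings of a reduced model come from.
    """
    X1, X2 = X[:, :-1], X[:, 1:]                 # consecutive snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]        # rank-r truncation
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s   # low-rank linear operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return modes, eigvals

# Hypothetical usage: 1000 spatial points, 200 snapshots, keep 10 modes.
X = np.random.rand(1000, 200)
modes, eigvals = dmd(X, r=10)
```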

Committee:

Ching-Shan Chou (Advisor); Jen-Ping Chen (Committee Member)

Subjects:

Aerospace Engineering; Applied Mathematics; Computer Science; Mathematics; Mechanical Engineering

Keywords:

Fluid Dynamics; Fluid Flow; Model Reduction; Partial Differential Equations; reducing memory; Dynamic Mode Decomposition; Decomposition; memory; Non-Uniform Dynamic Mode Decomposition

Rogers, Francis H. The measurement and decomposition of achievement equity - an introduction to its concepts and methods including a multiyear empirical study of sixth grade reading scores
Doctor of Philosophy, The Ohio State University, 2004, Educational Policy and Leadership
The maintenance of democracy requires the adequate education of our youth. Americans have struggled since the founding of the republic to define the scope of this democratic imperative and to fulfill its requirements. Over the course of the twentieth century, the educational minimum was defined in terms of fiscal resources for education. Educational equality, in turn, has been measured in terms of the equality of the distribution of fiscal resources. This paper contends that the shift away from fiscal equity toward achievement equity calls for a shift in our approach to the measurement of educational equality. The study expands the conceptual framework for measuring educational equality by incorporating the ideas of the democratic imperative for education and standards-based reform. These two ideas are brought together in the concept of egalitarian inequality, a condition marked by all students reaching or exceeding the level of educational attainment necessary to function in a democratic society. Drawing on this framework and established measures of equality, the study offers an empirical analysis of achievement equity. Posing the question of whether achievement equity has improved in recent years, as we would expect based on state and federal accountability legislation, we examine changes in equity in the distribution of reading proficiency scores drawn from a large, Midwestern school district for the years 1998 through 2003. Characteristics of the equality measurement techniques are described and compared. Following the federal mandate for disaggregating student subgroup achievement, decomposition techniques are applied to determine within- and between-groups contributions to district equity. Results of decomposition measures are presented and compared. Implications for policy makers and equity analysts are discussed with regard to gaining a more systematic understanding of the achievement gap and how, or whether, schools and districts are making progress toward closing those gaps. Recommendations for future work are offered both for broader, state and national studies and for more detailed district, school, and subgroup decompositions.
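One standard tool for the within- and between-group decomposition described here is the Theil index (listed among the keywords for this study). The sketch below shows that generic calculation in Python; it is not the study's actual procedure, and the score values and group labels are hypothetical.

```python
import numpy as np

def theil_decomposition(scores, groups):
    """Decompose the Theil T index of `scores` into within- and between-group parts.

    scores: positive achievement scores; groups: one group label per student.
    Returns (total, within, between), with total = within + between.
    """
    scores = np.asarray(scores, dtype=float)
    mu = scores.mean()
    total = np.mean(scores / mu * np.log(scores / mu))

    within = between = 0.0
    for g in np.unique(groups):
        s = scores[groups == g]
        share = s.size / scores.size            # population share of group g
        mu_g = s.mean()
        between += share * (mu_g / mu) * np.log(mu_g / mu)
        within += share * (mu_g / mu) * np.mean(s / mu_g * np.log(s / mu_g))
    return total, within, between

# Hypothetical reading scores for two student subgroups.
scores = np.array([320., 350., 410., 280., 390., 300.])
groups = np.array(["A", "A", "A", "B", "B", "B"])
print(theil_decomposition(scores, groups))
```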

Committee:

William Loadman (Advisor)

Keywords:

Equity measurement; Inequality measurement; Achievement equity; Equity decomposition; Theil statistic; Theil decomposition; Atkinson index; Atkinson decomposition

Deshmukh, Rohit. Model Order Reduction of Incompressible Turbulent Flows
Doctor of Philosophy, The Ohio State University, 2016, Aero/Astro Engineering
Galerkin projection is a commonly used reduced order modeling approach; however, stability and accuracy of the resulting reduced order models are highly dependent on the modal decomposition technique used. In particular, deriving stable and accurate reduced order models from highly turbulent flow fields is challenging due to the presence of multi-scale phenomena that cannot be ignored and are not well captured using the ubiquitous Proper Orthogonal Decomposition (POD). A truncated set of proper orthogonal modes is biased towards energy dominant, large-scale structures and results in over-prediction of kinetic energy from the corresponding reduced order model. The accumulation of energy during time-integration of a reduced order model may even cause instabilities. A modal decomposition technique that captures both the energy dominant structures and energy dissipating small scale structures is desired in order to achieve a power balance. The goal of this dissertation is to address the stability and accuracy issues by developing and examining alternative basis identification techniques. In particular, two modal decomposition methods are explored, namely sparse coding and Dynamic Mode Decomposition (DMD). Compared to Proper Orthogonal Decomposition, which seeks to truncate the basis spanning an observed data set into a small set of dominant modes, sparse coding is used to identify a compact representation that spans all scales of the observed data. Dynamic mode decomposition seeks to identify bases that capture the underlying dynamics of a full order system. Each of the modal decomposition techniques (POD, sparse coding, and DMD) is demonstrated for two canonical problems: incompressible flow inside a two-dimensional lid-driven cavity and flow past a stationary cylinder. The constructed reduced order models are compared against the high-fidelity solutions. The sparse coding based reduced order models were found to outperform those developed using the dynamic mode and proper orthogonal decompositions. Furthermore, energy component analyses of high-fidelity and reduced order solutions indicate that the sparse models capture the rate of energy production and dissipation with greater accuracy compared to the dynamic mode and proper orthogonal decomposition based approaches. Significant computational speedups in the fluid flow predictions are obtained using the computed reduced order models as compared to the high-fidelity solvers.
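For reference, the POD basis against which the alternatives are compared is conventionally computed from the SVD of a mean-subtracted snapshot matrix; the sketch below shows that standard calculation (not the dissertation's specific code), with hypothetical snapshot data and mode count.

```python
import numpy as np

def pod_basis(X, r):
    """Proper Orthogonal Decomposition of snapshot matrix X (states x snapshots).

    Subtracts the temporal mean and returns the r most energetic spatial modes
    plus the fraction of fluctuation energy they capture.
    """
    Xc = X - X.mean(axis=1, keepdims=True)          # fluctuations about the mean flow
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return U[:, :r], energy[r - 1]

# Hypothetical usage: 5000 velocity DOFs, 300 snapshots, keep 20 modes.
X = np.random.rand(5000, 300)
Phi, captured = pod_basis(X, r=20)
print(f"{captured:.1%} of snapshot energy captured")
# A Galerkin ROM would then project the Navier-Stokes residual onto Phi.
```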

Committee:

Jack McNamara (Advisor); Datta Gaitonde (Committee Member); Ryan Gosse (Committee Member); Joseph Hollkamp (Committee Member); Mohammad Samimy (Committee Member)

Subjects:

Aerospace Engineering

Keywords:

Turbulent flows; reduced order modeling; Navier-Stokes equations; nonlinear dynamics; Galerkin projection; modal decomposition; proper orthogonal decomposition; dynamic mode decomposition; sparse coding

Herman, John E. Linking Microbial Community Dynamics to Litter and Soil Chemistry: Understanding the Mechanisms of Decomposition
Doctor of Philosophy, University of Toledo, 2010, Biology (Ecology)

The mechanisms of leaf litter decomposition were examined using a combination of modeling and field experimental approaches. The modeling components utilized previously reported data from the literature to test two hypotheses generated by the Guild Decomposition Model (GDM; Moorhead and Sinsabaugh 2006), a novel model of microbially based litter decay. First, reported rates of lignin decay (dC3/dt) and holocellulose decay (dC2/dt) were examined in aboveground leaf litter of predominantly northern conifer forests to test the hypothesis that the rate of lignin decay is a linear function of the lignocellulose index (LCI = lignin/[holocellulose + lignin]). Simulated dynamics of LCI in decaying litter were highly correlated with observed patterns, particularly when water and ethanol soluble litter fractions from model output were pooled with holocellulose fractions (mean R2 = 0.87 ± 0.02, P < 0.01). More detailed analyses of 64 of these studies yielded variable relationships between lignin decay rate and litter LCI; a regression based on pooled data (N = 385; total number of observations) produced a slope and an intercept that were not significantly different from predicted (slope = 2.33, intercept = -0.93). Both site and litter characteristics had significant effects on the proposed LCI threshold for lignin decay (LCI = 0.4), but no effects on slope or intercept, suggesting that the proposed lignocellulose control hypothesis is relatively robust across a range of litter and forest types.

The next analysis examined reports of microbial biomass associated with decaying plant litter (B:C ratios) in both terrestrial and aquatic systems to test the hypothesis that feedback controls link microbial and litter mass dynamics. Results showed that microbial biomass averaged 2.53% of total system organic matter (microorganisms + litter), but was more than twice as large in aquatic (3.43%, N = 632) as in terrestrial (1.05%, N = 384) habitats. Data from a subset of 13 more detailed studies that included multiple observations per experiment showed no difference between habitats. While correlations between observed and simulated values of B:C were highly significant (Spearman's Rho = 0.316, N = 218, P < 0.01) and Partial Mantel analysis of simulated and pooled observed data found that B:C values were related to litter mass loss, initial lignin content and changing lignin content during litter decay, the model explained only a modest fraction of the total variation in observations (R2 = 0.243, P < 0.01). The available data were insufficient to either conclusively validate or refute the modeling approach used by Moorhead and Sinsabaugh (2006) to mechanistically link the dynamics of microbial biomass and litter decay. Although the most detailed studies showed an increase in B:C values in early decay, followed by a decline in later stages of decay, similar to model behavior (Moorhead and Sinsabaugh, 2006), data were insufficient and too inconsistent between studies to clearly elucidate any pattern. For these reasons, it is suggested that greater insight into mechanistic linkages between decomposer microbial communities and litter decay will require more detailed studies that simultaneously monitor changes in both microbial and litter characteristics. This study found that combining data from disparate studies, which did not examine these factors consistently, provided only limited insight.

My experimental studies examined the general, conceptual model of leaf litter decomposition that predicts increasing litter recalcitrance with mass loss, with nutrient limitations often controlling decay rates in early stages and lignin concentrations dominating the late stages. The activities of extracellular enzymes (EEA) responsible for resource acquisition are predicted to track these changes in litter chemistry, with hydrolytic carbon and nutrient acquiring enzyme activities peaking early in decay and oxidative enzymes responsible for degrading recalcitrant compounds peaking late. As the mechanistic driver of these processes, the microbial community is predicted to track the changes in litter chemistry and EEA, with a shift from a copiotrophic (r-selected) community that utilizes the labile litter substrates early in decay to an oligotrophic (K-selected) community that utilizes the recalcitrant substrates late in decay. However, the conceptual model is based on single species litter experiments whereas natural ecosystems often have mixed species litter. The purpose of the field based study was to examine patterns of changing litter chemistry, EEA and microbial community during decomposition of natural leaf mixtures in two oak dominated forests in northwest Ohio.

Over a two-year period, litter decaying in the urban Stranahan Arboretum revealed rapid loss of soluble compounds but little evidence of a faster relative loss of holocellulose than lignin, contrary to predictions. Similarly, EEA indicated a seasonal pattern more strongly related to litter moisture than litter chemistry. Although the microbial community demonstrated a clear transition between early and late community types, there was an unexpected increase in diversity for the fungal community. In contrast, litter decaying in the nearby but rural Oak Openings Metropark followed expected patterns of change in litter chemistry, with holocellulose decaying more rapidly than lignin. Additionally, EEA was strongly related to the soluble litter fraction and litter nitrate concentration, consistent with the conceptual model for a nitrogen limited site. Similar to the Arboretum, there was a clear transition between early and late community types, but with a decline in community diversity. Spring flooding of the Arboretum study site, its position in an urban location, and a dense population of exotic earthworms (Amynthas agrestis) are postulated to have affected litter decay, EEA and microbial community patterns. Overall, the results suggest that the current conceptual model for leaf litter decomposition based on single species litter experiments is adequate to address patterns of decay in more natural mixes of leaf litter, given site specific factors. Although the experimental data supported the conceptual model for litter decay, no support was found for either the proposed lignocellulose control or microbial to litter mass (B:C) feedback control hypotheses generated by the GDM (data not shown). The experimental data were likely not of sufficient quantity or resolution to provide a conclusive test for simulated predictions.

Committee:

Daryl Moorhead, PhD (Committee Chair); Michael Weintraub, PhD (Committee Member); William Von Sigler, PhD (Committee Member); Margaret Carreiro, PhD (Committee Member); Jared DeForest, PhD (Committee Member)

Subjects:

Ecology; Environmental Science; Soil Sciences

Keywords:

Litter decomposition; extracellular enzyme; lignin; lignocellulose; DGGE; copiotrophic; oligotrophic; litter chemistry; microbial community; litter mixtures; Amynthas agrestis; urban-rural gradient; Guild Decomposition Model

Waindim, Mbu. On Unsteadiness in 2-D and 3-D Shock Wave/Turbulent Boundary Layer Interactions
Doctor of Philosophy, The Ohio State University, 2017, Aero/Astro Engineering
Shock-boundary layer interactions (SBLIs) are ubiquitous occurrences in supersonic and hypersonic vehicles and tend to degrade their structural and aerodynamic performance. For example, in the inlets and isolators of such vehicles, the shock wave generated by one surface interacts with the boundary layer on an adjacent one. They are also present on the exterior of the vehicles, e.g. at the fuselage/vertical stabilizer junctions. These interactions cause unsteady separation, resulting in reduced air intake efficiency, or unstart in extreme cases; unsteady vortex shedding which yields undesirable broadband noise; and significant pressure fluctuations which compromise the structural integrity of the vehicle and which can lead to loss of control authority. Mitigating these issues is therefore an important part of optimizing aerodynamic and structural design of high speed vehicles. The first step in this respect is obtaining a better understanding of the interaction unsteadiness. Nominally 2-D interactions have been studied extensively, and these studies have identified low-frequency shock motions which lead to undesirable pressure loads. The particular frequencies associated with the motions have been characterized using time resolved experiments and computations, and shown to depend on the mean size of the separation. The physical processes responsible for these frequencies are, however, still under investigation, and the physical relationship between the shock motions and pulsations of the separation bubble remains obscure. For flow fields where the shock is swept, a complex 3-D interaction is encountered whose unsteady features are even less well understood. The mean structure of these 3-D interactions has been obtained experimentally and using RANS simulations, and shown to be profoundly different from the 2-D flow field, indicating that progress in understanding 2-D interactions cannot be directly translated to 3-D. Specifically, there is no recirculating region in the 3-D interaction contrasting with the 2-D case where a closed separation bubble essentially drives the dynamics. This effort seeks to understand the unsteadiness in such 3-D interactions. Although Large Eddy Simulations (LES) are invaluable for characterizing time evolving features, they are prohibitively expensive for swept interactions. A stability-analysis-based method, the Mean Flow Perturbation (MFP) technique, appears to be a better alternative to LES in terms of computational cost. It involves tracking the evolution of disturbances through the interaction in space and time to identify the low frequency and stability properties of the chaotic 3-D dynamical system. A systematic verification and validation of the technique is presented to provide a solid basis for its employment. By implementing the technique on canonical problems with properties similar to the swept interaction, its applicability for flows with strong gradients and viscous effects is established. The method is then implemented on the simpler 2-D interaction with two specific goals: to ensure that MFP accurately captures well-known unsteady features of the flow and to learn relevant lessons and establish best practices for SBLI application. By definition, the implementation of MFP requires the knowledge of a mean base state. For the nominally 2-D interactions, the mean of a previously obtained LES is used as the basis.
In addition to providing the input to the MFP technique, the LES is used as a testbed to: (i) Improve effectiveness of a technique for generating spatially developing supersonic turbulent boundary layers (TBLs) as correctly characterizing the inflow boundary condition is a critical component of simulating SBLIs. (ii) Generate insight on possible physical mechanisms responsible for the low frequency unsteadiness in 2-D by exploiting asymmetry in the shock motions. Besides the LES supplied mean base state, the MFP technique is also implemented with a RANS supplied base flow. No significant differences in the results are observed indicating a relative independence of the MFP technique to the base flow generation procedure. Due to the highly prohibitive nature of implementing LES for 3-D interactions, the above observation acts as a substantial endorsement for using RANS generated base state for studying 3-D interactions with the MFP technique. It also provides specific insight into ways to correctly implement the technique when the base flow comes from RANS. Finally, the dynamics of the swept interaction are explored to characterize its unsteady and stability features. Here the basic state is obtained by tailoring RANS calculations to experiments at Florida State University, allowing the separation and other relevant mean features to be accurately captured. The flow field obtained by perturbing this mean is post processed to identify the length and time scales relevant to the flow field. This effort accomplishes three things: (i) a tripping technique is generated to efficiently specify turbulent boundary layer appropriate for use as the inflow condition, (ii) LES of 2-D interactions are obtained and analyzed to identify the cause of energetic low frequency and (iii) the relevant frequencies in the swept interaction are characterized using MFP and the stability properties of the interaction are identified, distinguishing its dynamics from the 2-D. Each step involves a myriad of statistical tools that can be adapted for other applications. The findings of each effort are presented. It is found that the effectiveness of tripping a laminar boundary layer to yield its turbulent counterpart is dependent on a range of factors: (i) grid resolution, (ii) strength of the force associated with the trip, (iii) wall thermal condition near the trip and (iv) Mach number. By characterizing the stability of the boundary layer in the trip region, a precursor for transition to turbulence is identified. This makes the method efficient for generating turbulent boundary layers at alternative desired flow conditions (Mach and Reynolds numbers) as appropriate trip parameters can be obtained a priori. Statistical analyses of the shock motions in 2-D interactions reveal an asymmetry whose quality appears to be dependent on the separation bubble size. For massive separation bubbles, where the shear layer is detached, the collapse phase is a rapid process, possibly owing to the ease of formation of Kelvin-Helmholtz (K-H) structures which convect mass and momentum out of the bubble, leading to its collapse. For moderate separation bubbles, the collapse phase is instead the slower process. It is found that the quality of the asymmetry is linked to the linearly stable position of the shock, which has implications for control. 
In addition, there is evidence of modulation of the frequencies associated with K-H shedding by the low frequencies characteristic of the shock motions, establishing a link between the two physical phenomena and further reinforcing the role of eddies in bubble collapse. The most substantial outcome of this work is the insight obtained regarding the dynamics of the swept interaction. The results show that: (i) The shock generated by the fin is anchored unlike the oscillating reflected shock in 2-D; consequently, the low frequencies observed in 2-D are not present in this swept interaction. (ii) A convective inviscid instability is identified at a frequency an order of magnitude higher than the characteristic frequency of shock motions in 2-D. It is shown to be a consequence of the crossflow and analogous to the mid frequencies associated with K-H shedding in the 2-D interaction. (iii) The absolute instability observed in 2-D does not persist here as the absence of a closed separation reduces the interaction’s ability to perpetually self-sustain introduced perturbations.

Committee:

Datta Gaitonde (Advisor); Jen-Ping Chen (Committee Member); Jack McNamara (Committee Member); Mo Samimy (Committee Member)

Subjects:

Aerospace Engineering; Fluid Dynamics

Keywords:

shock waves; fluid dynamics; turbulence; boundary layers; computational fluid dynamics; CFD; unsteadiness; SBLI; dynamic mode decomposition; DMD; empirical mode decomposition; EMD; stability analysis

Rohaly, Matthew Joseph. Decomposition of Aromatic Amines in a Jet Fuel Surrogate
Master of Science (M.S.), University of Dayton, 2014, Chemistry
The thermal decomposition of aromatic nitrogen-containing compounds in a jet fuel surrogate was studied. The surrogate fuel's decomposition was compared to the decomposition found in natural jet fuels. High performance liquid chromatography (HPLC) was then used to examine the formation of any polar products from the thermal decomposition of the surrogate fuel. Gas chromatography coupled with mass spectrometry and nuclear magnetic resonance were employed to try to identify the polar products. The large amount of hydrocarbons masking the polar products made fractional collection necessary before any identification could be attempted. After fractional collections were employed, several oxygen-containing polar compounds were found and identified from the thermal decomposition of the surrogate fuel. However, no nitrogen-containing compounds could be found. This is most likely due to the low concentration of the nitrogen-containing compounds within the surrogate. Due to the effectiveness of the surrogate fuel's thermal decomposition, it remains a good candidate for further jet fuel studies that look at reactivity. HPLC was also very effective at observing the formation of polar products within the jet fuel, although it could not identify these products. The fractional collection method that was employed did improve the results of the identification process, but it did not manage enough separation between the polar compounds and the hydrocarbons present in the surrogate. It is likely that a further separation method is needed. GCMS was relatively ineffective at separating and identifying polar products from this reaction. This is due to the bulk hydrocarbons masking the polar product signals. GCMS was able to identify an oxygen-containing compound, but only because the elution point from this compound was far from the elution point of any hydrocarbon. NMR was effective at identifying polar compounds that were present in significant quantities; however, the extremely low concentration of the polar products made this process much less effective as well. Overall, for GCMS or NMR to be considered effective techniques for this analysis, a better separation process must be utilized.

Committee:

David Johnson (Advisor)

Subjects:

Chemistry

Keywords:

jet fuels; jet fuel decomposition; nitrogen contaminants in jet fuels; polar products from jet fuel decomposition; jet fuel separation techniques;

Vyapamakula Sreeramachandra, Sankeerth. Expedient Modal Decomposition of Massive Datasets Using High Performance Computing Clusters
Master of Science, The Ohio State University, 2018, Computer Science and Engineering
High-fidelity observations of non-linear dynamical systems that are of practical interest lead to massive data sets which do not fit on a single computing node. Therefore, modal decomposition techniques must be able to exploit the capability of high-performance computing (HPC) facilities. Proper Orthogonal Decomposition and Sparse Coding are two commonly used modal decomposition techniques for obtaining reduced order models. The goal of the research is to parallelize and implement these algorithms so that they can be used on high-performance computing clusters in order to expedite the process of modal decomposition from massive data sets. However, distributing the computation across many machines is associated with high memory usage and significant communication cost. Moreover, the overall computational cost is sensitive to the type of data set and various parameters of the algorithm. Therefore, several strategies are discussed and implementations are developed to address these constraints to perform expedient modal decomposition. Furthermore, a systematic study is performed over multiple data sets to assess the performance and scalability of the implementations.
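One common way to keep communication small when snapshots are distributed across nodes is the "method of snapshots" route to POD: each node forms its local contribution to the small Gram matrix, the contributions are summed across nodes (e.g., with an MPI all-reduce), and only the small eigenproblem is solved centrally. The sketch below illustrates that generic idea; it is not the thesis's ScaLAPACK-based implementation, and the data partitioning and sizes are hypothetical.

```python
import numpy as np

def local_gram(X_local):
    """Contribution of one node's row block X_local (rows x snapshots) to X^T X."""
    return X_local.T @ X_local

def pod_from_gram(G, X_blocks, r):
    """Recover the leading r POD modes once the global Gram matrix is known."""
    w, V = np.linalg.eigh(G)                  # small m x m eigenproblem (m = snapshots)
    idx = np.argsort(w)[::-1][:r]
    w, V = w[idx], V[:, idx]
    X = np.vstack(X_blocks)                   # stand-in for the distributed data
    return X @ V / np.sqrt(w)                 # spatial modes, one column per mode

# Hypothetical data split across two "nodes": 100 snapshots each.
blocks = [np.random.rand(4000, 100), np.random.rand(6000, 100)]
G = sum(local_gram(b) for b in blocks)        # in practice: an MPI all-reduce of G
modes = pod_from_gram(G, blocks, r=10)
```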

Committee:

Jack McNamara (Committee Member); P. Sadayappan (Advisor)

Subjects:

Aerospace Engineering; Computer Science

Keywords:

high-performance computing clusters; PCA; Proper Orthogonal Decomposition; Sparse Coding; POD; ScaLapack; big data; distributed systems; parallel; Modal Decomposition

Dalton, John Christian. Surface Hardening of Duplex Stainless Steel 2205
Doctor of Philosophy, Case Western Reserve University, 2017, Materials Science and Engineering
Increased use of second-generation duplex stainless steels is expected as demands for tougher, more economical, and corrosion resistant alloys increase. A novel gas-phase carburizing and nitriding procedure operating in the temperature range of 325 °C to 440 °C was utilized with the intent of improving both the tribological and electrochemical performance of the 2205 duplex alloy (22Cr–5Ni–3Mo–0.15N) under so-called paraequilibrium conditions. A suite of state-of-the-art microchemical and structural characterization tools were employed following each treatment, and performance of the treated alloys assessed by chloride-based cyclic polarization and nano-indentation hardness measurements. Particular emphasis was placed on understanding the response of the ferritic phase, which has been a source of speculation and confusion since the development of such treatments. CALPHAD-based thermodynamic modeling was also used to predict phase stability in the 2205 system subjected to gas-phase paraequilibrium nitridation or carburization. Analysis of the interstitially-hardened layer in the austenitic phase of 2205 provides results consistent with similar surface hardening treatments of single-phase austenitic stainless steels: a colossally supersaturated and precipitate-free hardened layer of expanded austenite is formed. The interstitial concentration, case depth, and concomitant mechanical properties can be tailored through control of the temperature, duration, and chemical activity with the gas-phase process. Spatially-resolved chemical and structural analysis within the δ-ferrite of 2205 revealed two competitive transformation behaviors following nitridation, while carburization led to only one response. For both carburization and nitridation, carbon or nitrogen supersaturations in ferrite on the order of 20 at.% and 25 at.%, respectively, were observed (greater than 10^6 times the equilibrium concentration at room temperature), yet remarkably with unmeasurable expansion or distortion of the bcc lattice. These supersaturated ferrite grains appear 'featureless' in the TEM, suggesting the presence of a high density of strain centers. High-resolution TEM revealed that, unlike austenite, ferrite may accommodate colossal concentrations of carbon or nitrogen through the formation of an ultra-high-density dislocation network (on the order of 10^16 m^-2), as opposed to occupation of lattice-expanding interstitial sites. These linear defects, presumably formed during yielding under the large residual stresses of restricted lattice expansion, enhanced the kinetics of spinodal-like decomposition of the ferrite to nm-scale Fe-rich (αFe) and Cr-rich (αCr) ferrite domains within the carburized or nitrided layer, and apparently provide lower-energy sites for carbon and nitrogen than in potential secondary phases. Elsewhere in the case of nitrided 2205, an isothermal, nitrogen-induced ferrite-to-austenite phase transformation occurred due to the potent austenite-stabilizing nature of nitrogen. The austenite transformation product formed in a partitionless manner from the parent ferrite with a high-aspect-ratio plate-like morphology in Nishiyama–Wassermann orientation relationship. These and other microstructural signatures lead to the conclusion that the austenite formed by a shear-like mechanism characteristic of martensitic reactions.
The microstructural evolution of these phase transformations in δ-ferrite accompanying low-temperature nitridation of 2205, and lack of carburization-induced transformation of ferrite to austenite, are discussed in light of the chemical and strain energies involved.

Committee:

Arthur Heuer, Prof. (Advisor); Frank Ernst, Prof. (Committee Member); Matthew Willard, Prof. (Committee Member); Farrel Martin, Prof. (Committee Member)

Subjects:

Materials Science

Keywords:

stainless steel; surface hardening; surface engineering; carburizing; nitriding; CALPHAD; paraequilibrium; thermodynamic modeling; ferrite; martensite; expanded austenite; S-phase; phase transformations; spinodal decomposition; dislocations

Barker, Rachel Elizabeth. Invasive Shrub (Lonicera maackii) Effects on Terrestrial-Aquatic Linkages in Stream Ecosystems: Organic Matter Processing and Macroinvertebrate Community Colonization
Master of Science (M.S.), University of Dayton, 2011, Biology
Lonicera maackii, an invasive terrestrial shrub that is dominant in riparian zones, may have substantial impacts on the structure and function of aquatic systems. We investigated linkages between this terrestrial invader and the aquatic community structure in headwater streams by assessing in situ leaf breakdown and the colonization of leaf packs by macroinvertebrates in three 3rd-order headwater streams of southwestern Ohio: Possum Run, Fecher Park, and Black Oak Park. Leaf breakdown and colonizing macroinvertebrate taxa richness, density, and functional feeding group relative abundance were assessed using three types of in-stream leaf packs: L. maackii, native (Fraxinus spp., Platanus occidentalis), and a mixture of native and invasive leaves. Invasive leaf breakdown rates were up to 4x faster than those of native leaves, and significantly less organic matter remained for invasive compared to other leaf types on all sampling days. There were specific macroinvertebrate taxa unique to particular treatments. For instance, invasive leaf packs were dominated by collector-gatherers for the duration of the study and Chironomidae, in particular, was the most important taxon colonizing invasive leaf packs. In contrast, Oligochaeta colonized native leaf packs in greater densities at Possum Run stream and mixed leaf packs in Fecher Park stream. Collector-gatherers also dominated mixed leaf packs across stream sites and native leaf packs located within Possum Run. In summary, our results demonstrate that L. maackii leaves break down significantly faster in headwater streams compared to native leaves, and that macroinvertebrate densities and functional feeding group colonization are influenced by invasive leaf litter input. These results support the hypothesis that L. maackii invasion in riparian zones can have direct and significant impacts on aquatic ecosystems by influencing organic matter processes.
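Litterbag breakdown in studies like this one is commonly summarized with a single negative-exponential decay coefficient k fitted to the fraction of mass remaining over time (the classic Olson model); the sketch below shows that generic calculation in Python, not the authors' actual analysis, and the litterbag numbers are hypothetical.

```python
import numpy as np

def decay_constant(days, mass_remaining_frac):
    """Fit the single-exponential litterbag model M(t) = exp(-k t) by
    least squares on log-transformed mass remaining; returns k per day."""
    days = np.asarray(days, dtype=float)
    y = np.log(np.asarray(mass_remaining_frac, dtype=float))
    slope, _ = np.polyfit(days, y, 1)          # slope of ln(mass) vs. time
    return -slope

# Hypothetical litterbag data for an invasive and a native leaf type.
days = [0, 14, 28, 56, 112]
invasive = [1.00, 0.55, 0.30, 0.12, 0.03]
native = [1.00, 0.85, 0.72, 0.55, 0.35]
print(decay_constant(days, invasive) / decay_constant(days, native))  # ratio of k values
```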

Committee:

Ryan McEwan, PhD (Advisor); Mark Benbow, PhD (Advisor); Jayne Robinson, PhD (Committee Member); Patrick Williams, PhD (Committee Member)

Subjects:

Aquatic Sciences; Biology; Ecology; Freshwater Ecology

Keywords:

invasive species; Lonicera maackii; decomposition; allochthonous inputs; aquatic macroinvertebrates; organic matter processing

He, Bo. Compatible discretizations for Maxwell equations
Doctor of Philosophy, The Ohio State University, 2006, Electrical Engineering

The main focus of this dissertation is the study and development of numerical techniques to solve Maxwell equations on irregular lattices. This is achieved by means of compatible discretizations that rely on some tools of algebraic topology and a discrete analog of differential forms on a lattice.

Using discrete Hodge decomposition and Euler's formula for a network of polyhedra, we show that the number of dynamic degrees of freedom (DoFs) of the electric field equals the number of dynamic DoFs of the magnetic field on an arbitrary lattice (cell complex). This identity reflects an essential property of discrete Maxwell equations (Hamiltonian structure) that any compatible discretization scheme should observe. We unveil a new duality called Galerkin duality, a transformation between two (discrete) systems: the primal system and the dual system. If the discrete Hodge operators are realized by Galerkin Hodges, we show that the primal system recovers the conventional edge-element FEM and suggests a geometric foundation for it. On the other hand, the dual system suggests a new (dual) type of FEM.
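For background, the two classical identities invoked here are usually written as follows; these are the standard textbook forms of the Hodge decomposition of a field (or cochain) and Euler's formula for a cell complex, quoted for orientation rather than in the dissertation's exact notation.

```latex
% Hodge decomposition into exact, coexact, and harmonic parts:
\[
  u = d\alpha + \delta\beta + h, \qquad dh = 0, \ \ \delta h = 0 .
\]
% Euler's formula for a network of polyhedra with V vertices, E edges,
% F faces, and C 3-cells (chi is the Euler characteristic):
\[
  V - E + F - C = \chi .
\]
```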

We find that inverse Hodge matrices have strong localization properties. Hence we propose two thresholding techniques, viz., algebraic thresholding and topological thresholding, to sparsify inverse Hodge matrices. Based on topological thresholding, we propose a sparse and fully explicit time-domain FEM for Maxwell equations. From a finite-difference viewpoint, topological thresholding provides a general and systematic way to derive stable local finite-difference stencils in irregular grids.

We also propose and implement an E-B mixed FEM scheme to discretize first order Maxwell equations in frequency domain directly. This scheme results in sparse matrices.

In order to tackle low-frequency instabilities in frequency domain FEM and spurious linear growth of time domain FEM solutions, we propose some gauging techniques to regularize the null space of a curl operator.

Committee:

Fernando Teixeira (Advisor)

Keywords:

differential forms; chains and cochains; Whitney forms; de Rham diagram; gauging; compatible discretization; Hodge operator; Hodge decomposition; Euler's formula; FDTD; FEM; Galerkin duality; primal and dual; pure Neumann boundary condition; mixed FEM

Kaufman, Jason R. Digital video watermarking using singular value decomposition and two-dimensional principal component analysis
Master of Science (MS), Ohio University, 2006, Electrical Engineering & Computer Science (Engineering and Technology)

As the state of remote sensing technology improves, the acquisition of three-dimensional images and video will become more common in several different applications. However, the problem of protecting and authenticating three-dimensional data – in particular, three-dimensional video data – has been largely unexplored. An application of the singular value decomposition (SVD) and two-dimensional principal component analysis (2DPCA) to video data with an arbitrary number of channels for the purpose of watermarking is presented.

It will be shown that it is possible to select parameters that preserve the visual quality of the video while effectively embedding the watermark in both the spatial and temporal domains. However, much processing time is required to embed and extract the watermark. Furthermore, it is unclear how robust the presented technique is to attack.
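As background on the SVD side of the approach, a common family of SVD-based watermarking schemes embeds the watermark by perturbing the singular values of a frame; the sketch below illustrates that generic (non-blind) idea in Python and is not the specific SVD/2DPCA video scheme developed in the thesis. The embedding strength `alpha` and the frame/watermark sizes are hypothetical.

```python
import numpy as np

def embed_watermark(frame, watermark, alpha=0.05):
    """Embed `watermark` into `frame` by perturbing the frame's singular values.

    frame, watermark: 2-D arrays of the same shape. Returns the watermarked
    frame plus the side information needed for (non-blind) extraction.
    """
    U, s, Vh = np.linalg.svd(frame, full_matrices=False)
    Uw, sw, Vwh = np.linalg.svd(np.diag(s) + alpha * watermark, full_matrices=False)
    marked = U @ np.diag(sw) @ Vh             # only the singular values are replaced
    return marked, (Uw, Vwh, s)

def extract_watermark(marked, keys, alpha=0.05):
    """Non-blind extraction: recover the watermark using the stored keys."""
    Uw, Vwh, s = keys
    _, sw, _ = np.linalg.svd(marked, full_matrices=False)
    D = Uw @ np.diag(sw) @ Vwh                # rebuild the perturbed diagonal matrix
    return (D - np.diag(s)) / alpha

# Hypothetical 64x64 grayscale frame and binary watermark.
frame = np.random.rand(64, 64)
wm = (np.random.rand(64, 64) > 0.5).astype(float)
marked, keys = embed_watermark(frame, wm)
recovered = extract_watermark(marked, keys)
```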

Committee:

Mehmet Celenk (Advisor)

Keywords:

digital video watermarking; information security; singular value decomposition (SVD); two-dimensional principal component analysis (2DPCA)

Jung, Ji Young. Nitrogen Fertilization Impacts on Soil Organic Carbon and Structural Properties under Switchgrass
Doctor of Philosophy, The Ohio State University, 2010, Environment and Natural Resources
A paradigm shift from maximum to sustainable agricultural production also applies to cultivation of bioenergy crops. Nitrogen (N) fertilization is needed to sustain the biomass yield of switchgrass as a biofuel feedstock and, consequently, may influence the potential for soil quality improvement through soil organic carbon (SOC) sequestration. Because changes in soil quality can feed back to affect the sustainability of biomass production, the impacts of N application on switchgrass biomass production and soil quality need to be evaluated together. Therefore, the overall objective of this study was to assess the effects of N fertilization on switchgrass biomass, changes in SOC concentration and pool, and soil structural properties. This objective was realized by conducting field experiments in Ohio and Tennessee, and a laboratory incubation study in Ohio. The aboveground biomass of switchgrass was more strongly influenced by N fertilization than the belowground biomass. Even when the aboveground biomass was harvested and removed, N fertilization led to an increase in SOC, both in Ohio and Tennessee. The data from the laboratory incubation study showed that N additions could retard the decomposition of organic matter, which may contribute towards higher SOC pools in N fertilized plots. The results from the Tennessee experiments indicated the important role of roots in stabilizing soil structure. Despite higher SOC concentrations in plots receiving a high rate of N fertilization, higher soil structural stability was associated with greater root biomass and longer root length in plots receiving no or low rates of N fertilizer. These data indicated that root growth is a crucial driver of surface soil structure.

Committee:

Rattan Lal, PhD (Advisor); Peter Curtis, PhD (Committee Member); David Barker, PhD (Committee Member); Julie Jastrow, PhD (Committee Member)

Subjects:

Soil Sciences

Keywords:

switchgrass; soil organic carbon (SOC); nitrogen fertilization; biomass production; soil structure; organic matter decomposition; root

Giai, Carla. Fire, Exotic Earthworms and Plant Litter Decomposition in the Landscape Context
Doctor of Philosophy, The Ohio State University, 2009, Evolution, Ecology, and Organismal Biology
Mixed-oak (Quercus spp.) forests in southern Ohio have been significantly affected by fire suppression policies enforced since the early 1920s, possibly altering key ecosystem properties such as soil organic matter quality and quantity, nutrient availability, and microbial activity. The Fire and Fire Surrogate (FFS) program is a multidisciplinary program that included the experimental application of both functional restoration treatments (i.e. prescribed fire at historical intervals) and structural approaches (i.e. mechanical thinning to pre-settlement density and species composition). Within that framework, I focused on three unquantified major aspects of southern Ohio forest ecosystems exposed to a quasi-natural disturbance regime: (1) effects on soil microbial communities, (2) possible belowground effects of non-native earthworm species, and (3) impacts on the rates of litter decomposition and subsequent nutrient release. Direct effects of prescribed fire and thinning on soil nutrient dynamics and microbial communities were analyzed three months after the second application of prescribed fire. This short-term analysis was complementary to a series of in-situ, manipulative experiments that evaluated (a) potential effects of non-native earthworm species on N dynamics, as well as changes in soil organic matter and microbial communities within the FFS experimental design, and (b) litterbag incubations of native leaf mixtures and standard litters over more than three years in our study sites, where I evaluated the influence of restoration treatment, leaf litter quality, and landscape position on leaf litter decomposition processes and nutrient release. The major conclusion of my studies is that I did not observe definitive effects of restoration treatments. Rather, a combination of site characteristics and landscape position was responsible for some of the differences observed when comparing unmanaged controls to any of the alternative restoration treatments. I hypothesize that because my studies were performed a short time after the application of such treatments, and only considered two applications of prescribed fire and one thinning event, it is not possible to separate larger scale factors (e.g. topography and climate) determining nutrient dynamics in the area from any effects that could result from restoration treatments. Consequently, further applications of prescribed fire, thinning, and/or their combination are strongly recommended for the region.

Committee:

Ralph Boerner, PhD (Advisor); Clive Edwards, PhD (Committee Member); Robert Klips, PhD (Committee Member); David Stetson, PhD (Committee Member); Mac Callaham, Jr., PhD (Committee Member)

Subjects:

Ecology; Forestry

Keywords:

fire; exotic earthworms; plant litter decomposition; terrestrial ecosystems; hardwood forests; restoration ecology; nutrient dynamics

Palan, Rohit Chandulal. Pyroelectric Properties of Ferroelectric Lanthanum Bismuth Titanate Thin Films
MS, University of Cincinnati, 2001, Engineering : Materials Science
Ferroelectric and pyroelectric effects in Lanthanum Bismuth Titanate LaxBi4-xTi3O12 (LBIT) solid solution thin films prepared by the Metal Organic Decomposition technique were studied. The films were characterized to understand the effect of La substitution on the texture of the films. X-Ray Diffraction, Raman spectroscopy and electron microscopy were utilized to study the effect of La on the structure of the LBIT thin films. Dielectric, P-E hysteresis, and static and dynamic pyroelectric measurements were carried out to study the effect of La substitution on electrical properties. Experimental results indicate that La content affects the development of c-orientation and induces tetragonality in the structure, resulting in improved polarization and dielectric properties. Higher polarization along with reduced Curie temperature due to La addition improves the pyroelectric coefficient as measured with the static pyroelectric method. Voltage and detectivity figures of merit were found to be high for high La content in the system. For La = 0.6, the figures of merit were found to be comparable to those of widely studied pyroelectric thin-film systems, suggesting the potential of LBIT thin films for application in pyroelectric sensors.
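For context, the voltage and detectivity figures of merit mentioned here are conventionally defined from the pyroelectric coefficient p, the volume specific heat c', and the dielectric properties; the standard textbook forms are reproduced below as background (the thesis may use slightly different symbols or conventions).

```latex
% Standard pyroelectric figures of merit:
%   voltage responsivity  F_V, specific detectivity  F_D
\[
  F_V = \frac{p}{c'\,\varepsilon_0 \varepsilon_r}, \qquad
  F_D = \frac{p}{c'\,\sqrt{\varepsilon_0 \varepsilon_r \tan\delta}} .
\]
```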

Committee:

Dr. Relva C. Buchanan (Advisor)

Subjects:

Engineering, Materials Science

Keywords:

Bismuth Titanate; Ferroelectric Properties; Pyroelectric Properties; Metal-Organic Decomposition; Thin Films

Vardar, Ceren. On the Correlation of Maximum Loss and Maximum Gain of Stock Price Processes
Doctor of Philosophy (Ph.D.), Bowling Green State University, 2008, Mathematics/Probability and Statistics
One of the primary issues in mathematical finance is the ability to construct portfolios that are optimal with respect to risk. The stock price is subject to stochastic variability, so the risk an investor encounters stems from movements in the stock price. A commonly used measure of risk is the expected maximum loss of a stock, in other words, how much one can lose. It can be defined informally as the largest drop from a stock peak to a stock nadir. Over a certain fixed length of time, a reasonably low expected maximum loss is as crucial to the success of any fund as a high maximum gain or maximum profit. The correlation coefficient of the maximum loss and the maximum gain indicates the relation between the gain and the risk using measures which are functions of the Sharpe ratio. The price of one share of the risky asset, the stock, is modeled by geometric Brownian motion. By taking the log of geometric Brownian motion, Brownian motion can be used as the basis of the calculations related to the geometric Brownian motion. In this dissertation work, we present analytical results related to the joint distribution of the maximum loss and maximum gain of a Brownian motion and their correlation, along with a detailed explanation of this theoretical result, which requires a review of standard but difficult literature. We have given an analytical expression for the correlation of the supremum and the infimum of standard Brownian motion up to an independent exponential time, we have shown convexity of the maximum gain and the maximum loss, and we have calculated some bounds for the expected values of maximum gain and maximum loss. We also search for a relation between the Sharpe ratio and the correlation coefficient for Brownian motion with drift and geometric Brownian motion with drift. Using the scaling property, we have shown that the correlation coefficient does not depend on the diffusion coefficient for Brownian motion. Finally, using real-life data, we have presented the correlation of maximum gain and maximum loss and the correlation of the supremum and the infimum of stock prices.
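As a purely numerical companion to the analytical results, the sketch below simulates geometric Brownian motion paths and estimates the sample correlation between maximum loss (largest peak-to-trough drop) and maximum gain (largest trough-to-peak rise); the drift, volatility, and horizon are hypothetical, and this illustrates the quantities studied rather than the dissertation's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, T, n_steps, n_paths = 0.05, 0.2, 1.0, 252, 20_000  # hypothetical parameters
dt = T / n_steps

# Simulate log-price increments and build GBM paths S_t with S_0 = 1.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
logS = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dW, axis=1)
S = np.hstack([np.ones((n_paths, 1)), np.exp(logS)])

# Maximum loss: largest drop from a running peak; maximum gain: largest rise
# from a running trough, both over the whole horizon.
max_loss = np.max(np.maximum.accumulate(S, axis=1) - S, axis=1)
max_gain = np.max(S - np.minimum.accumulate(S, axis=1), axis=1)

print("corr(max loss, max gain) =", np.corrcoef(max_loss, max_gain)[0, 1])
```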

Committee:

Gabor Szekely (Advisor); Craig Zirbel (Advisor); George Bullerjahn (Committee Member); Maria Rizzo (Committee Member); John Chen (Committee Member)

Subjects:

Mathematics

Keywords:

Brownian Motion; Geometric Brownian Motion; Sharpe Ratio; Strong Markov Property; Scaling Property; Bessel Process; Doob's h-transform; Path Decomposition

Shrestha, Suman. High Resolution Polarimetric Imaging Techniques for Space and Medical Applications
Master of Science in Engineering, University of Akron, 2013, Electrical Engineering
In the year 2012, there were approximately 226,160 cases of lung cancer and 160,340 deaths, according to the National Cancer Institute. There are two main types of lung cancer, small cell lung cancer and non-small cell lung cancer, of which 87% are diagnosed as non-small cell. A physical algorithm and a systematic study relating the morphological, chemical and metabolic properties of lung cancer to the physical and optical parameters of the polarimetric detection process are missing. Therefore, one of the purposes of the study is to explore the polarimetric phenomenology of near infrared light interaction with healthy and lung cancer monoline cells by using efficient polarimetric backscattering detection techniques. Preliminary results indicate that enhanced discrimination between healthy and different types of lung cancer cells can be achieved based on their backscattered intensities, Mueller matrix, diattenuation and depolarization properties. Also, various optical parameters like linear depolarization ratio and degree of linear polarization play an important role in discriminating healthy and different lung cancer cells. Specifically, the sizes of the nuclei of the cancer cells and the nucleus-to-cytoplasmic ratios appear to have a potential impact on the detected polarimetric signatures, leading to enhanced discrimination of lung cancer cells. The second work in this thesis has been done with the support of the Air Force Research Laboratory (AFRL). Polarimetric signals have always played an important role in the identification, discrimination and analysis of a material's optical properties. This work presents a novel remote sensing approach based on polarimetric fractal detection principles. Backscattered polarimetric signal contributions from different materials used in space applications have already been detected using a laboratory LADAR testbed, and this thesis presents the implementation of the LADAR testbed and analysis techniques for these backscattered signals based on fractal analysis. Fractal dimension has been chosen as a measure for discriminating these materials. The outcome of this thesis indicates that polarimetric fractal principles may enhance the capabilities of the LADAR for characterization and discrimination of different materials.
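Two of the optical parameters named here have conventional Stokes-vector definitions, reproduced below as background; these are the standard forms, not necessarily the exact expressions used in the thesis.

```latex
% Degree of linear polarization from the Stokes parameters (I, Q, U, V),
% and linear depolarization ratio from co- and cross-polarized
% backscattered intensities:
\[
  \mathrm{DOLP} = \frac{\sqrt{Q^2 + U^2}}{I}, \qquad
  \delta_L = \frac{I_{\perp}}{I_{\parallel}} .
\]
```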

Committee:

George Giakos, Dr. (Advisor); Kye-Shin Lee, Dr. (Committee Member); Nghi Tran, Dr. (Committee Member)

Subjects:

Biomedical Engineering; Electrical Engineering; Experiments; Optics; Remote Sensing

Keywords:

Polarimetry; Monoline Lung Cancer Cells; Mueller Matrix Decomposition; Remote Sensing; Fractal Dimension

Hasan, M. R. Parametric Study and Higher Mode Response Quantification of Steel Self-Centering Concentrically-Braced Frames
Master of Science, University of Akron, 2012, Civil Engineering
Conventional concentrically braced frame (CBF) systems have limited drift capacity prior to structural damage, often leading to brace buckling under moderate earthquake input, which results in residual drift. Self-centering CBF (SC-CBF) systems have been developed to maintain the economy and stiffness of the conventional CBFs while increasing the ductility and drift capacity. SC-CBF systems are designed such that the columns uplift from the foundation at a specified level of lateral loading, initiating a rocking (rigid body rotation) of the frame. Vertically aligned post-tensioning bars resist column uplift and provide a restoring force to return the structure to its initial state (i.e., self-centering the system). Friction elements are used at the lateral-load bearings (where lateral load is transferred from the floor diaphragm to the SC-CBF) to dissipate energy and reduce the peak structural response. Previous research has identified that the frame geometry is a key design parameter for SC-CBFs, as frame geometry relates directly to the energy dissipation capacity of the system. This thesis therefore considered three prototype SC-CBFs with differing frame geometries for carrying out a comparative study. The prototypes were designed using previously developed performance-based design criteria and modeled in OpenSees to carry out nonlinear static and dynamic analyses. The design and analysis results were then thoroughly investigated to study the effect of changing frame geometry on the behavior of SC-CBF systems. The rocking response in SC systems introduces large higher mode effects in the dynamic responses of the structure, which, if not properly addressed during design, can result in seismic demands significantly exceeding the design values and may ultimately lead to a structural failure. To compare higher mode effects on different frames, proper quantification of the modal responses by standard measures is therefore essential. This thesis proposes three normalized quantification measures based on an intensity-based approach, considering the intensity of the modal responses throughout the ground motion duration rather than focusing only on the peak responses. The effectiveness of the three proposed measures and the conventionally used peak-based measure is studied by applying them to dynamic analysis results from several SC-CBFs. These measures are then used to compare higher mode effects on frames with varying geometric and friction properties.

Committee:

David Roke, Dr. (Advisor); Kallol Sett, Dr. (Committee Co-Chair); Qindan Huang, Dr. (Committee Member)

Subjects:

Civil Engineering

Keywords:

Self-centering frames; SC-CBF; Performance-based design; Modal decomposition; Higher mode effects

Rosenberg, Chase C. Foliar Chemical Analysis, Decomposition, and Effects on Nutrient Cycling of American Chestnut and its Hybrids
Master of Science (MS), Ohio University, 2010, Environmental and Plant Biology (Arts and Sciences)
The American chestnut, Castanea dentata (Marsh.) Borkh., was once the dominant species of many hardwood forests in eastern North America. A fungal pathogen introduced in 1904 killed mature chestnuts range-wide. Recent work has resulted in a putatively pathogen-resistant chestnut hybrid, making future reintroduction a distinct possibility. However, information on the chemistry and potential effects on nutrient cycling of the chestnut hybrid is necessary to understand the consequences of reintroduction. I predicted that litter from chestnut taxa would contain more recalcitrant compounds and subsequently decompose more slowly than the extant species Quercus alba L. and Acer saccharum Marshall. Lignin and tannin are secondary compounds which are known to inhibit decomposition, and which I predict will be relatively abundant in chestnut litter. I also predicted that there are no significant differences between the foliar nutrient content of the hybrid and American chestnut. In this experiment, I measured tannins, litter fractions, ten nutrient elements, and both decomposition rates and enzyme activity (in the Oe/A horizon) of leaves of C. dentata, C. mollissima Blume, hybrid chestnut, Quercus alba, Q. prinus L., Acer rubrum L., and A. saccharum. Nutrient resorption was also quantified in order to further understand hybrid chestnut's potential effect on forest nutrient pools. No significant differences in chemistry or decay rate were found among any of the chestnut taxa. Chestnut contained more tannin than non-chestnut species, and retained slightly more mass after one year of decomposition, but all metrics tested were within the ranges of co-occurring species. These data suggest that a reintroduction of hybrid chestnut will not likely alter nutrient cycles in Appalachian hardwood forests.

Committee:

Brian McCarthy (Advisor); Jared DeForest (Advisor); Ahmed Faik (Committee Member)

Subjects:

Ecology

Keywords:

chestnut; castanea dentata; restoration; foliar chemistry; decomposition

Park, Soo-Jin. Catalytic Decomposition of Nitric Oxide and Carbon Monoxide Gases Using Nanofiber Based Filter Media
Doctor of Philosophy, University of Akron, 2008, Chemical Engineering

The main sources of NOx are diesel engines, automobiles, electric utilities, and other industrial, commercial, and residential sources that burn fuels at high temperature. The control and abatement of NOx emissions are important because of their harmful effects on the human body and the environment. The strict regulations of NOx emissions and the growing demand for power compel new designs of catalytic materials for pollution removal. The most common method for car exhaust NOx treatment involves wet impregnation of noble metals on ceramic substrates.

In this work, catalytic nanoparticles doped on a nanofiber-enhanced ceramic fibrous filter medium are developed as an alternative method. Ceramic nanofibers doped with the noble metals palladium, platinum, and rhodium are synthesized using electrospinning and are incorporated into the micro-fibrous filter. We have discovered that ceramic nanofibers containing noble metals also work in liquid-phase catalysis, converting styrene to ethylbenzene at room temperature and atmospheric pressure. The reaction temperature is varied, and the nanofiber-based fibrous filters are tested for decomposition of nitric oxide and carbon monoxide. Carbon dioxide, nitrogen, and nitrous oxide gases were produced. The nitrous oxide produced was consumed by reacting with carbon monoxide. The efficiency of the catalytic fibrous filter was similar to that of a commercial catalytic converter while using a smaller amount of catalyst doped on alumina microfibers. As the amount of catalyst in the fibrous filter media increases, the temperature at which all NO disappears decreases. As the inlet concentration of NO gas decreases, all NO disappears from the outlet at a lower temperature. As the face velocity through the fibrous filter media increases, efficiency becomes lower because the residence time of gases through the media decreases. We also tested catalytic fibrous filter media containing Pd, Pt, and Rh, and the performance is similar to that of a commercial catalytic converter.

Analytical models are developed to study the performance of the filters for isothermal nitric oxide and carbon monoxide reaction. The kinetic parameters of the model were determined using a Genetic Algorithm (GA), and the model predicts species concentrations as a function of position. Model and experimental results showed that the decomposition temperature of nitric oxide decreases as the inlet gas concentration is lowered and the catalyst concentration in the nanofibers is increased. A non-isothermal model was also developed for direct nitric oxide decomposition to predict temperature and concentration profiles along the filter length.
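The abstract does not give the model equations or the GA details. As a rough, hedged illustration of the parameter-estimation step only, the following Python sketch fits a single first-order rate constant k in a simplified isothermal plug-flow description of the filter, C(x) = C_in exp(-k x / v), using a toy genetic algorithm. The one-parameter kinetics, the parameter names (face_velocity, c_in, k_true), and all numerical values are assumptions for illustration, not the thesis's actual model.

```python
# Minimal sketch (not the thesis's model): fit a first-order rate constant k for NO
# disappearance in an isothermal plug-flow filter, C(x) = c_in*exp(-k*x/v), using a
# toy genetic algorithm. All names and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0.0, 0.02, 20)      # positions along filter depth [m] (assumed)
face_velocity = 0.1                 # superficial gas velocity [m/s] (assumed)
c_in = 500.0                        # inlet NO concentration [ppm] (assumed)

def model(k):
    """Predicted NO concentration profile for rate constant k [1/s]."""
    return c_in * np.exp(-k * x / face_velocity)

# Synthetic "measurements" standing in for experimental concentration data.
k_true = 35.0
c_obs = model(k_true) + rng.normal(0.0, 5.0, x.size)

def fitness(k):
    """Negative sum of squared errors; larger is better."""
    return -np.sum((model(k) - c_obs) ** 2)

# Toy GA: real-coded population, tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(1.0, 100.0, size=60)
for generation in range(200):
    scores = np.array([fitness(k) for k in pop])
    idx = rng.integers(0, pop.size, size=(pop.size, 2))
    parents = np.where(scores[idx[:, 0]] > scores[idx[:, 1]], pop[idx[:, 0]], pop[idx[:, 1]])
    mates = rng.permutation(parents)
    alpha = rng.uniform(0.0, 1.0, pop.size)
    children = alpha * parents + (1.0 - alpha) * mates   # blend crossover
    children += rng.normal(0.0, 1.0, pop.size)           # mutation
    pop = np.clip(children, 0.1, 200.0)

best_k = pop[np.argmax([fitness(k) for k in pop])]
print(f"estimated k = {best_k:.1f} 1/s (true value used for the synthetic data: {k_true})")
```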

Committee:

George G. Chase, PhD (Advisor)

Subjects:

Chemical Engineering; Environmental Engineering

Keywords:

Catalytic Decomposition; Nanofiber Based Filter Media; Nitric Oxide; Carbon Monoxide

Hua, Liyan. Shortest Path - Capacitated Maximum Covering Problems
Doctor of Philosophy, The Ohio State University, 2010, Business Administration

I study the shortest path - capacitated maximum covering problem (SP-CMCLP). Current, ReVelle, and Cohon (1985) first studied the uncapacitated version of this problem. The two objectives of the problem are the minimization of the path length from a predetermined starting node to a predetermined terminal node and the maximization of the total demand covered by the facilities located at the nodes on the path. They solved a special case in which a demand can be covered only if it is located on the path. I solve the general model. I also introduce facility capacity constraints, new algorithms, and new demand coverage structures for this problem.

I decompose the problem into a k-shortest path problem (kSP) and a capacitated maximum covering problem (CMCLP). The k-shortest path problem is solved by a path deletion algorithm. The capacitated maximum covering problem is solved by various heuristics and meta-heuristics, including Lagrangian relaxation, two versions of Tabu search, and a simulated annealing method.

To the author's knowledge, the Tabu search and simulated annealing methods introduced here are the first meta-heuristics developed for the capacitated maximum covering problem. In these meta-heuristics, I use four neighborhood structures: 1) one-interchange, which exchanges a selected facility with an unselected facility; 2) client shift, which shifts a satisfied demand from one selected facility to another selected facility; 3) demand swap (or demand reallocation), which swaps one or more assigned demand nodes with one or more unassigned demand nodes within the coverage distance of a selected facility site; and 4) demand addition, which adds one or more unassigned demands to a selected facility. I design an embedded meta-heuristic procedure with inner loops over single neighborhoods and an outer loop that alternates among the inner loops. I design a heuristic method and a penalty method for the demand allocation sub-problem in the embedded Tabu search. In the penalty method, I use surrogate relaxation and add a penalty term to the objective function for violated capacity constraints. An embedded simulated annealing method with temperature vibration is also designed using the heuristic demand allocation.
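As a hedged illustration of one of these ingredients, the sketch below applies the one-interchange neighborhood inside a bare-bones simulated annealing loop for a small capacitated maximum covering instance. The greedy "heaviest demand first" allocation is a simplification standing in for the heuristic and penalty-based allocation schemes described above, and all data, names, and parameter values are synthetic.

```python
# Minimal sketch of capacitated maximum covering with a one-interchange neighborhood
# inside simulated annealing. The greedy demand allocation is a simplification of the
# thesis's allocation heuristics; the instance data are synthetic.
import math
import random

random.seed(1)
n_fac, n_dem, p = 12, 60, 4                       # facilities, demands, facilities to open
capacity = [random.randint(40, 80) for _ in range(n_fac)]
weight = [random.randint(1, 10) for _ in range(n_dem)]
covers = [[random.random() < 0.25 for _ in range(n_dem)] for _ in range(n_fac)]

def covered_demand(open_fac):
    """Greedily assign each demand (heaviest first) to an open facility that covers it."""
    remaining = {f: capacity[f] for f in open_fac}
    total = 0
    for d in sorted(range(n_dem), key=lambda d: -weight[d]):
        for f in open_fac:
            if covers[f][d] and remaining[f] >= weight[d]:
                remaining[f] -= weight[d]
                total += weight[d]
                break
    return total

current = random.sample(range(n_fac), p)
best, best_val = current[:], covered_demand(current)
temperature = 50.0
while temperature > 0.1:
    # One-interchange move: swap a selected facility with an unselected one.
    out_f = random.choice(current)
    in_f = random.choice([f for f in range(n_fac) if f not in current])
    candidate = [f for f in current if f != out_f] + [in_f]
    delta = covered_demand(candidate) - covered_demand(current)
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        current = candidate
    if covered_demand(current) > best_val:
        best, best_val = current[:], covered_demand(current)
    temperature *= 0.95                            # geometric cooling schedule

print("best open facilities:", sorted(best), "covered demand:", best_val)
```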

I solve a new version of the shortest path - capacitated maximum covering problem with a tree coverage structure (SP-CMCLP-TREE). Demand is supplied by sub-paths on a minimum spanning tree constructed from the underlying network. A demand is counted as covered if the total arc length of the path from the demand to a facility site is within the coverage distance, and the demand can be satisfied only if all of the intermediate demand nodes on that path are satisfied.

Computational results for networks selected from the literature show the effectiveness of the heuristics. Tabu search performs best in solution quality, while Lagrangian relaxation and simulated annealing generate solutions of satisfactory quality in less time. Different path-coverage structures are used based on the properties of the networks. The tree demand coverage structure works better than the traditional coverage structure for large partial networks. The impact of different network parameters is also studied.

Committee:

John R. Current, PhD (Advisor); David A. Schilling, PhD (Committee Member); Keely L. Croxton, PhD (Committee Member)

Subjects:

Management; Operations Research

Keywords:

Shortest Path; Capacitated Maximum Covering Problem; Decomposition; Tabu Search; Simulated Annealing; Path Coverage Structure; Tree

Zhang, Yulei. Computer Experiments with Both Quantitative and Qualitative Inputs
Doctor of Philosophy, The Ohio State University, 2014, Statistics
Physical experiments play an important role in agriculture, industry, and medical research. However, physical experiments can sometimes be difficult or even impossible to run. In these situations, computer experiments are becoming desirable surrogates for physical experiments. This dissertation considers designs and predictive models for computer experiments with both quantitative and qualitative input variables. The existing framework for building Gaussian stochastic process (GaSP) models with quantitative and qualitative inputs is to treat each set of values of the qualitative inputs as determining a response surface in the quantitative inputs. A GaSP model is assumed for each of these response surfaces, and the same covariance structure is used for each response surface. A cross-correlation parameter is introduced for each pair of sets of values of the qualitative variables in order to "capture" correlations between response surfaces. To guarantee a legitimate overall covariance structure, certain conditions are imposed on the cross-correlation parameters.

In the first part of this dissertation, we introduce two indicator-based GaSP models that transform the qualitative inputs into quantitative variables and then use traditional correlation functions for quantitative inputs. We also show the equivalence properties between these new models and the existing model.

The second part of this dissertation concerns experimental designs with both quantitative and qualitative inputs. The special data structure requires that a "good" design not only capture the cross-correlation information but also spread observations over the entire quantitative input space. We propose two types of designs, the partial SLHD and the partial CSLHD, which are modifications of existing designs in the literature, and compare their prediction accuracy with all the other existing designs for quantitative and qualitative inputs. By examining several examples, we find that what constitutes a "good" design may vary from case to case. We summarize these findings with a "guideline" for selecting initial designs. Furthermore, when the initial design does not perform well, we also propose a sequential design algorithm to interpolate or extrapolate the target response levels in a GaSP model with mixed inputs.

Inspired by factor analysis, in the last part of this dissertation we build a more general composite covariance structure by converting the GaSP model with several qualitative levels into a linear combination of independent stochastic processes with fewer constraints on the variance and correlation functions. This composite covariance structure can be extended to the case with multiple qualitative inputs. In these cases, we introduce the Kronecker product form of the composite covariance function, which not only reduces the number of parameters but also captures the similarity between qualitative inputs that share identical components. In addition, we propose an ANOVA decomposition form of the Gaussian processes, which imposes a factorial structure on the response outputs. Finally, we extend the sequential design algorithm to the composite GaSP model.
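One common concrete form of the covariance structure described here multiplies a Gaussian correlation in the quantitative inputs by a positive-definite cross-correlation matrix over the qualitative levels. The sketch below shows only that generic form; the specific parameterization, the exchangeable cross-correlation matrix, and the numerical values are illustrative assumptions, not necessarily those used in the dissertation.

```python
# Sketch of a GaSP correlation for mixed inputs: Gaussian correlation in the
# quantitative variables times a cross-correlation matrix over qualitative levels.
# The exchangeable structure and all parameter values are illustrative assumptions.
import numpy as np

theta = np.array([2.0, 0.5])                  # length-scale parameters for the quantitative inputs
# Cross-correlation matrix T for 3 qualitative levels: unit diagonal, positive definite
# (here an exchangeable structure with common cross-correlation 0.6).
T = np.full((3, 3), 0.6) + 0.4 * np.eye(3)

def corr(x1, z1, x2, z2):
    """Correlation between observations (x1, level z1) and (x2, level z2)."""
    quant = np.exp(-np.sum(theta * (np.asarray(x1) - np.asarray(x2)) ** 2))
    return T[z1, z2] * quant

# Same quantitative point, different qualitative levels -> correlation 0.6.
print(corr([0.2, 0.7], 0, [0.2, 0.7], 2))
```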

Committee:

William Notz (Advisor); Peter Craigmile (Committee Member); Matthew Pratola (Committee Member)

Subjects:

Statistics

Keywords:

Computer Experiments; Physical Experiments; Gaussian Stochastic Process Model; Quantitative and Qualitative Inputs; Cross-Correlation Parameters; Experimental Designs; Composite GaSP Model; Factor Analysis; Kronecker Product; ANOVA Decomposition

Al-Saidi, Muslem Muhamed Mahdi. Balanced Disk Separators and Hierarchical Tree Decomposition of Real-Life Networks
MS, Kent State University, 2015, College of Arts and Sciences / Department of Computer Science
Recently, in a few papers, balanced disk separators of graphs were used recursively to construct one spanning tree, or a small set of spanning trees, that sharply approximates distances in a given graph. The best approximation algorithms to date were obtained for multiplicative tree spanners and for collective additive tree spanners of graphs. In this thesis, we analyze a number of real-life networks arising in Internet measurements and in the social and biological sciences, and demonstrate that they all admit small-radius balanced disk separators. As a consequence, the one-to-one distances in those networks can be sharply approximated by tree metrics. Furthermore, all of those networks admit very simple distance and routing labeling schemes.
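For readers unfamiliar with the notion, a balanced disk separator is a ball B(v, r) whose removal leaves connected components of bounded size. The sketch below checks that property for a given center and radius using networkx; the 2/3 balance factor is an assumption (different papers use different constants), and this is only an illustrative check, not the recursive hierarchical decomposition built in the thesis.

```python
# Sketch: check whether the ball B(center, radius) is a balanced disk separator,
# i.e. removing it leaves every connected component with at most balance*n vertices.
# The 2/3 balance factor is an assumption for illustration.
import networkx as nx

def is_balanced_disk_separator(G, center, radius, balance=2 / 3):
    dist = nx.single_source_shortest_path_length(G, center, cutoff=radius)
    ball = set(dist)                                     # vertices inside the disk
    rest = G.subgraph(v for v in G if v not in ball)     # graph with the disk removed
    limit = balance * G.number_of_nodes()
    return all(len(c) <= limit for c in nx.connected_components(rest))

# Example: on a path of 100 vertices, a small disk around the middle vertex separates
# the graph into two pieces of roughly half the vertices each.
G = nx.path_graph(100)
print(is_balanced_disk_separator(G, center=50, radius=2))   # True
```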

Committee:

Feodor Dragan (Advisor); Hassan Peyravi (Committee Member); Mikhail Nesterenko (Committee Member)

Subjects:

Computer Science

Keywords:

Balanced Disk Separators; Hierarchical Tree Decomposition; Real-Life Networks

Chabot, John Alva. VALIDATING STEADY TURBULENT FLOW SIMULATIONS USING STOCHASTIC MODELS
Master of Science, Miami University, 2015, Computational Science and Engineering
Proper Orthogonal Decomposition was heralded as an objective means of extracting coherent structures from turbulent flows. Prior to its introduction, coherent structures were subjectively defined and discussed among experts. Since its introduction, many reduced order models have been developed with inconsistent and often flow-dependent validation procedures. This work sets up a framework for a data-driven approach to validating reduced order models derived from steady turbulent flows. Here it is proposed that the "goodness" of a model can be scored by how similarly experimental and simulated data move through the model space. This is achieved by generating a Markov model for both data sets, using clustering techniques and maximum likelihood estimates. Results show that increasing scores correlate with improved turbulent kinetic energy and modal amplitude for 3 data sets and 14 models. Additionally, the generation of a surrogate Markov model can be used to identify missing dynamics in a simulation.
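As a rough sketch of the validation idea (cluster a low-dimensional representation of each data set into shared states, estimate a Markov transition matrix for each by maximum likelihood, and compare how experiment and simulation move through the shared model space), the code below uses scikit-learn's KMeans and a simple matrix-distance score. The synthetic modal-coefficient signals and the Frobenius-norm score are illustrative assumptions; the thesis's scoring may differ.

```python
# Sketch: cluster modal-coefficient time histories into a shared set of states,
# estimate Markov transition matrices by maximum likelihood (transition counts),
# and score the simulation against the experiment by matrix similarity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0.0, 40.0, 2000)
# Stand-ins for two leading POD modal amplitudes from experiment and simulation.
experiment = np.column_stack([np.sin(t), np.cos(t)]) + 0.1 * rng.normal(size=(t.size, 2))
simulation = np.column_stack([np.sin(t), 0.8 * np.cos(t)]) + 0.1 * rng.normal(size=(t.size, 2))

n_states = 6
km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(np.vstack([experiment, simulation]))

def transition_matrix(data):
    """Maximum-likelihood Markov transition matrix from a labeled time series."""
    labels = km.predict(data)
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    counts += 1e-12                                  # guard against empty rows
    return counts / counts.sum(axis=1, keepdims=True)

P_exp, P_sim = transition_matrix(experiment), transition_matrix(simulation)
score = np.linalg.norm(P_exp - P_sim)                # smaller = more similar dynamics
print(f"transition-matrix mismatch: {score:.3f}")
```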

Committee:

Edgar Caraballo, Dr. (Advisor); Mehdi Safari, Dr. (Committee Member); Andrew Sommers, Dr. (Committee Member)

Subjects:

Fluid Dynamics; Mathematics; Statistics

Keywords:

fluid dynamics; validation; reduced order models; proper orthogonal decomposition; Galerkin method; Markov model; maximum likelihood estimate; clustering

Morrison, Adrian Franklin. An Efficient Method for Computing Excited State Properties of Extended Molecular Aggregates Based on an Ab-Initio Exciton Model
Doctor of Philosophy, The Ohio State University, 2017, Chemistry
In this work, we outline the development, testing, and application of a novel electronic structure method for computing the properties of excited states of molecular aggregates. The method is an ab initio realization of the molecular exciton model proposed long ago by Frenkel and Davydov to describe excited states of molecular crystals, and is called the Ab-Initio Frenkel Davydov Exciton Model (AIFDEM). The AIFDEM ansatz follows the traditional exciton model by expanding the supersystem excited-state wavefunction as a linear combination of excitations that are localized on the component molecules. Our method is a truly ab initio implementation of this model, as the requisite fragment excited states and the exciton Hamiltonian matrix are computed rigorously, including exact Coulomb and Hartree-Fock exchange interactions, without any neglect of overlap, nearest-neighbor, or other common approximations. We have tested this method and found that it can reproduce excitation energies of water clusters, DNA bases, and organic chromophores within ~0.1 eV. A charge embedding scheme is able to reduce the scaling of the method to only quadratic in the number of fragments and provides near-perfect parallel performance without reducing accuracy, significantly outperforming traditional approaches. The method was used to investigate the excitation energy transfer dynamics of a naphthalene-diimide nanotube, where it was found that model systems beyond the scope of traditional methods are necessary for a fully detailed mechanistic picture, including the role of quantum coherence. Analytic derivatives of the AIFDEM Hamiltonian are derived and implemented, and these provide access to non-adiabatic couplings as well as Holstein and Peierls electron-phonon coupling constants. This is applied to the challenging electronic structure of the singlet exciton fission process to identify vibrational modes key to the mechanism. Dynamics simulations, using parameters computed via the AIFDEM, suggest that singlet fission in crystalline tetracene is driven by vibronic coherence despite unfavorable electronic energetics. The AIFDEM approach is shown to be a promising method for excited states due to its excellent parallel scalability, its unambiguous yet analytic description of the wavefunction, and its ability to treat challenging electronic structure.
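To make the exciton-model ansatz concrete, the toy example below builds and diagonalizes a Frenkel exciton Hamiltonian for a chain of identical chromophores with nearest-neighbor coupling. Unlike the AIFDEM, it neglects overlap and uses made-up site energies and couplings, so it illustrates only the general structure of the model (site-local excitation basis, couplings off the diagonal, delocalized eigenstates), not the ab initio matrix elements described in the abstract.

```python
# Toy Frenkel exciton model: site excitation energies on the diagonal, electronic
# couplings off the diagonal; eigenvectors are aggregate excited states expanded in
# the site-local excitation basis. Values are illustrative assumptions, and overlap
# between fragment states (retained in the AIFDEM) is neglected here.
import numpy as np

n_sites = 8
site_energy = 3.1        # eV, excitation energy of an isolated chromophore (assumed)
coupling = -0.05         # eV, nearest-neighbor excitonic coupling (assumed)

H = np.diag(np.full(n_sites, site_energy))
for i in range(n_sites - 1):
    H[i, i + 1] = H[i + 1, i] = coupling

energies, states = np.linalg.eigh(H)
print("lowest aggregate excitation energy (eV):", round(energies[0], 3))
print("weight of the lowest exciton on each site:", np.round(states[:, 0] ** 2, 3))
```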

Committee:

Sherwin Singer (Advisor); Heather Allen (Committee Member); Terry Gustafson (Committee Member)

Subjects:

Physical Chemistry

Keywords:

Quantum Chemistry; excited states; Frenkel exciton; singlet fission; vibronic; excitation energy transfer; TDDFT; parallel computing; GPU algorithms; non-adiabatic coupling; corresponding orbitals transformation; derivatives; singular value decomposition

Hensler, Amanda Lauren. A Comparison Of Rotational And Continuously Stocked Pasture In Appalachian Ohio
Master of Science, The Ohio State University, 2008, Horticulture and Crop Science
Rotational stocking (RS) is an increasingly used practice that can benefit producers. The objective of this study was to compare RS and continuous stocking (CS) (two replicates) with 32 cow-calf pairs on 17.2 ha at the North Appalachian Experimental Watershed, Coshocton, OH. Forage biomass was measured weekly with a rising plate meter. Calculated forage intake by RS cattle, July-September 2006 and 2007, was 13.3 and 9.6 kg DM/cow/day, respectively. Forage decomposition rate was measured three times during the grazing season for green and dead vegetation, with no differences between grazing treatments. Cattle were 10.7 kg heavier on RS pasture in 2006; in 2007, cows on CS gained 12.2 kg more than those on RS. It was concluded that benefits to RS occurred in 2006, a year with better climatic conditions. Research should continue to determine how cumulative effects, such as changes in botanical composition, and variation in climate might affect production.

Committee:

David Barker, PhD (Committee Chair); R. Mark Sulc, PhD (Committee Member); Steven Loerch, PhD (Committee Member)

Subjects:

Agriculture; Agronomy; Animals

Keywords:

grazing; decomposition; forage; pasture growth; biomass; beef cattle
