Search Results (1 - 25 of 93 Results)

Hall, Brenton Taylor. Using the Non-Uniform Dynamic Mode Decomposition to Reduce the Storage Required for PDE Simulations
Master of Mathematical Sciences, The Ohio State University, 2017, Mathematical Sciences
Partial Differential Equation simulations can produce large amounts of data that are very slow to transfer. Many model reduction techniques have been proposed and utilized over the past three decades. Two popular techniques, Proper Orthogonal Decomposition and Dynamic Mode Decomposition, have some hindrances. The Non-Uniform Dynamic Mode Decomposition (NU-DMD), introduced in 2015 by Gueniat et al., overcomes some of these hindrances. In this thesis, the NU-DMD's mathematics are explained in detail, and three versions of the NU-DMD's algorithm are outlined. Furthermore, different numerical experiments were performed on the NU-DMD to ascertain its behavior with respect to errors, memory usage, and computational efficiency. It was shown that the NU-DMD could reduce an advection-diffusion simulation to 6.0075% of its original memory storage size. The NU-DMD was also applied to a computational fluid dynamics simulation of a NASA single-stage compressor rotor, which resulted in a reduced model of the simulation (using only three of the five simulation variables) that used only about 4.67% of the full simulation's storage with an overall average percent error of 8.90%. It was concluded that the NU-DMD, if used appropriately, could reduce a model that uses 400GB of memory to one that uses as little as 18.67GB with less than 9% error. Further conclusions were made about how best to implement the NU-DMD.
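The storage-reduction idea behind decomposition-based model reduction can be sketched in a few lines. This is a minimal illustration of mode truncation via the SVD on synthetic low-rank snapshot data, not the NU-DMD algorithm from the thesis; all sizes and data are invented.

```python
import numpy as np

# Sketch: approximate a snapshot matrix by r modes, so only the modes
# and their temporal coefficients need to be stored (illustrative data,
# not the NU-DMD algorithm or the thesis's simulations).
rng = np.random.default_rng(0)
n, m, r = 1000, 200, 10           # state size, snapshots, retained modes

# Fake snapshot matrix with low-rank structure
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
Ur = U[:, :r]                     # r spatial modes
coeffs = np.diag(s[:r]) @ Vt[:r]  # r temporal coefficient series

full_storage = X.size
reduced_storage = Ur.size + coeffs.size
print(f"reduced model uses {100 * reduced_storage / full_storage:.2f}% "
      f"of the full snapshot storage")
```

The same bookkeeping underlies the reported figures: storing a handful of modes plus coefficients in place of every snapshot is what turns hundreds of gigabytes into tens.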

Committee:

Ching-Shan Chou (Advisor); Jen-Ping Chen (Committee Member)

Subjects:

Aerospace Engineering; Applied Mathematics; Computer Science; Mathematics; Mechanical Engineering

Keywords:

Fluid Dynamics; Fluid Flow; Model Reduction; Partial Differential Equations; reducing memory; Dynamic Mode Decomposition; Decomposition; memory; Non-Uniform Dynamic Mode Decomposition

Rogers, Francis H. The measurement and decomposition of achievement equity - an introduction to its concepts and methods including a multiyear empirical study of sixth grade reading scores
Doctor of Philosophy, The Ohio State University, 2004, Educational Policy and Leadership
The maintenance of democracy requires the adequate education of our youth. Americans have struggled since the founding of the republic to define the scope of this democratic imperative and to fulfill its requirements. Over the course of the twentieth century, the educational minimum was defined in terms of fiscal resources for education. Educational equality, in turn, has been measured in terms of the equality of the distribution of fiscal resources. This paper contends that the shift away from fiscal equity toward achievement equity calls for a shift in our approach to the measurement of educational equality. The study expands the conceptual framework for measuring educational equality by incorporating the ideas of the democratic imperative for education and standards-based reform. These two ideas are brought together in the concept of egalitarian inequality, a condition marked by all students reaching or exceeding the level of educational attainment necessary to function in a democratic society. Drawing on this framework and established measures of equality, the study offers an empirical analysis of achievement equity. Posing the question of whether achievement equity has improved in recent years, as we would expect based on state and federal accountability legislation, we examine changes in equity in the distribution of reading proficiency scores drawn from a large, Midwestern school district for the years 1998 through 2003. Characteristics of the equality measurement techniques are described and compared. Following the federal mandate for disaggregating student subgroup achievement, decomposition techniques are applied to determine within- and between-groups contributions to district equity. Results of decomposition measures are presented and compared. 
Implications for policy makers and equity analysts are discussed with regard to gaining a more systematic understanding of the achievement gap and how, or whether, schools and districts are making progress toward closing those gaps. Recommendations for future work are offered both for broader, state and national studies and for more detailed district, school, and subgroup decompositions.
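The within- and between-groups decomposition mentioned above can be sketched with the Theil index, one of the measures named in the keywords. The subgroup scores below are invented for illustration; the identity that the total index equals the sum of the within and between components is the standard Theil decomposition, not the study's data.

```python
import numpy as np

# Sketch of the within/between-group Theil decomposition used to
# disaggregate equity by student subgroup (illustrative scores only).
scores = {
    "group_a": np.array([210.0, 220.0, 250.0, 260.0]),
    "group_b": np.array([180.0, 200.0, 205.0, 215.0]),
}

all_scores = np.concatenate(list(scores.values()))
mu, n = all_scores.mean(), all_scores.size

def theil(y):
    r = y / y.mean()
    return float(np.mean(r * np.log(r)))

# Each group is weighted by its share of the total score "mass"
within = sum((g.size * g.mean()) / (n * mu) * theil(g) for g in scores.values())
between = sum((g.size * g.mean()) / (n * mu) * np.log(g.mean() / mu)
              for g in scores.values())

# Total Theil index equals within- plus between-group components
assert abs(theil(all_scores) - (within + between)) < 1e-10
print(f"T = {theil(all_scores):.5f}, within = {within:.5f}, between = {between:.5f}")
```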

Committee:

William Loadman (Advisor)

Keywords:

Equity measurement; Inequality measurement; Achievement equity; Equity decomposition; Theil statistic; Theil decomposition; Atkinson index; Atkinson decomposition

Deshmukh, Rohit. Model Order Reduction of Incompressible Turbulent Flows
Doctor of Philosophy, The Ohio State University, 2016, Aero/Astro Engineering
Galerkin projection is a commonly used reduced order modeling approach; however, the stability and accuracy of the resulting reduced order models are highly dependent on the modal decomposition technique used. In particular, deriving stable and accurate reduced order models from highly turbulent flow fields is challenging due to the presence of multi-scale phenomena that cannot be ignored and are not well captured by the ubiquitous Proper Orthogonal Decomposition (POD). A truncated set of proper orthogonal modes is biased towards energy-dominant, large-scale structures and results in over-prediction of kinetic energy by the corresponding reduced order model. The accumulation of energy during time-integration of a reduced order model may even cause instabilities. A modal decomposition technique that captures both the energy-dominant structures and the energy-dissipating small-scale structures is desired in order to achieve a power balance. The goal of this dissertation is to address the stability and accuracy issues by developing and examining alternative basis identification techniques. In particular, two modal decomposition methods are explored, namely sparse coding and Dynamic Mode Decomposition (DMD). Compared to Proper Orthogonal Decomposition, which seeks to truncate the basis spanning an observed data set into a small set of dominant modes, sparse coding is used to identify a compact representation that spans all scales of the observed data. Dynamic mode decomposition seeks to identify bases that capture the underlying dynamics of a full order system. Each of the modal decomposition techniques (POD, sparse coding, and DMD) is demonstrated for two canonical problems: incompressible flow inside a two-dimensional lid-driven cavity and flow past a stationary cylinder. The constructed reduced order models are compared against the high-fidelity solutions.
The sparse coding based reduced order models were found to outperform those developed using the dynamic mode and proper orthogonal decompositions. Furthermore, energy component analyses of high-fidelity and reduced order solutions indicate that the sparse models capture the rate of energy production and dissipation with greater accuracy compared to the dynamic mode and proper orthogonal decomposition based approaches. Significant computational speedups in the fluid flow predictions are obtained using the computed reduced order models as compared to the high-fidelity solvers.
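The energy bias of truncated POD described above can be illustrated directly: POD via the method of snapshots ranks modes by squared singular value, so a strong large-scale structure dominates the retained basis while weak small-scale fluctuations fall into the discarded tail. The snapshot data below are synthetic placeholders.

```python
import numpy as np

# Minimal POD sketch (snapshots -> SVD), showing that a truncated basis
# concentrates on energy-dominant structures (synthetic data only).
rng = np.random.default_rng(1)
n, m = 500, 100
# Snapshots = strong large-scale mode + weak small-scale fluctuations
large = np.outer(np.sin(np.linspace(0, np.pi, n)), rng.standard_normal(m))
small = 0.05 * rng.standard_normal((n, m))
X = 10.0 * large + small

_, s, _ = np.linalg.svd(X, full_matrices=False)
energy = s**2 / np.sum(s**2)   # modal "energy" fractions
r = 1
print(f"first {r} POD mode(s) capture {100 * energy[:r].sum():.1f}% of the energy")
```

A basis chosen this way reproduces the energetic scales but omits the dissipative ones, which is exactly the power-imbalance problem the dissertation's sparse-coding bases are meant to address.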

Committee:

Jack McNamara (Advisor); Datta Gaitonde (Committee Member); Ryan Gosse (Committee Member); Joseph Hollkamp (Committee Member); Mohammad Samimy (Committee Member)

Subjects:

Aerospace Engineering

Keywords:

Turbulent flows; reduced order modeling; Navier-Stokes equations; nonlinear dynamics; Galerkin projection; modal decomposition; proper orthogonal decomposition; dynamic mode decomposition; sparse coding

Herman, John E. Linking Microbial Community Dynamics to Litter and Soil Chemistry: Understanding the Mechanisms of Decomposition
Doctor of Philosophy, University of Toledo, 2010, Biology (Ecology)

The mechanisms of leaf litter decomposition were examined using a combination of modeling and field experimental approaches. The modeling components utilized previously reported data from the literature to test two hypotheses generated by the Guild Decomposition Model (GDM; Moorhead and Sinsabaugh 2006), a novel model of microbially based litter decay. First, reported rates of lignin decay (dC3/dt) and holocellulose decay (dC2/dt) were examined in aboveground leaf litter of predominantly northern conifer forests to test the hypothesis that the rate of lignin decay is a linear function of the lignocellulose index (LCI = lignin/[holocellulose + lignin]). Simulated dynamics of LCI in decaying litter were highly correlated with observed patterns, particularly when water and ethanol soluble litter fractions from model output were pooled with holocellulose fractions (mean R2 = 0.87 ± 0.02, P < 0.01). More detailed analyses of 64 of these studies yielded variable relationships between lignin decay rate and litter LCI; a regression based on pooled data (N = 385; total number of observations) produced a slope and an intercept that were not significantly different from predicted (slope = 2.33, intercept = -0.93). Both site and litter characteristics had significant effects on the proposed LCI threshold for lignin decay (LCI = 0.4), but no effects on slope or intercept, suggesting that the proposed lignocellulose control hypothesis is relatively robust across a range of litter and forest types.
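The LCI calculation and the linear fit of lignin decay rate against LCI described above can be sketched as follows. The lignin and holocellulose masses are invented example values, not the 385 literature observations analyzed in the study.

```python
import numpy as np

# Sketch: lignocellulose index and a linear fit of relative lignin decay
# rate against LCI (synthetic time series, illustrative only).
lignin = np.array([120.0, 110.0, 100.0, 95.0, 92.0])          # mass over time
holocellulose = np.array([300.0, 220.0, 160.0, 120.0, 100.0])

lci = lignin / (holocellulose + lignin)   # LCI = lignin / (holo + lignin)

# Relative lignin decay rate between sampling dates (per time step)
rate = -np.diff(lignin) / lignin[:-1]
lci_mid = 0.5 * (lci[:-1] + lci[1:])      # LCI at interval midpoints

slope, intercept = np.polyfit(lci_mid, rate, 1)
print(f"fitted slope = {slope:.3f}, intercept = {intercept:.3f}")
```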

The next analysis examined reports of microbial biomass associated with decaying plant litter (B:C ratios) in both terrestrial and aquatic systems to test the hypothesis that feedback controls link microbial and litter mass dynamics. Results showed that microbial biomass averaged 2.53% of total system organic matter (microorganisms + litter), but was more than twice as large in aquatic (3.43%, N = 632) as terrestrial (1.05%, N = 384) habitats. Data from a subset of 13 more detailed studies that included multiple observations per experiment showed no difference between habitats. While correlations between observed and simulated values of B:C were highly significant (Spearman’s Rho = 0.316, N = 218, P < 0.01) and Partial Mantel analysis of simulated and pooled observed data found that B:C values were related to litter mass loss, initial lignin content and changing lignin content during litter decay, the model explained only a modest fraction of the total variation in observations (R2 = 0.243, P < 0.01). The available data were insufficient to either conclusively validate or refute the modeling approach used by Moorhead and Sinsabaugh (2006) to mechanistically link the dynamics of microbial biomass and litter decay. Although the most detailed studies showed an increase in B:C values in early decay, followed by a decline in later stages of decay, similar to model behavior (Moorhead and Sinsabaugh, 2006), data were insufficient and too inconsistent between studies to clearly elucidate any pattern. For these reasons, it is suggested that greater insight into mechanistic linkages between decomposer microbial communities and litter decay will require more detailed studies that simultaneously monitor changes in both microbial and litter characteristics. This study found that combining data from disparate studies, which did not examine these factors consistently, provided only limited insight.

My experimental studies examined the general, conceptual model of leaf litter decomposition that predicts increasing litter recalcitrance with mass loss, with nutrient limitations often controlling decay rates in early stages and lignin concentrations dominating the late stages. The activities of extracellular enzymes (EEA) responsible for resource acquisition are predicted to track these changes in litter chemistry, with hydrolytic carbon and nutrient acquiring enzyme activities peaking early in decay and oxidative enzymes responsible for degrading recalcitrant compounds peaking late. As the mechanistic driver of these processes, the microbial community is predicted to track the changes in litter chemistry and EEA, with a shift from a copiotrophic (r-selected) community that utilizes the labile litter substrates early in decay to an oligotrophic (K-selected) community that utilizes the recalcitrant substrates late in decay. However, the conceptual model is based on single species litter experiments whereas natural ecosystems often have mixed species litter. The purpose of the field based study was to examine patterns of changing litter chemistry, EEA and microbial community during decomposition of natural leaf mixtures in two oak dominated forests in northwest Ohio.

Over a two-year period, litter decaying in the urban, Stranahan Arboretum revealed rapid loss of soluble compounds but little evidence of a faster relative loss of holocellulose than lignin contrary to predictions. Similarly, EEA indicated a seasonal pattern more strongly related to litter moisture than litter chemistry. Although the microbial community demonstrated a clear transition between early and late community types, there was an unexpected increase in diversity for the fungal community. In contrast, litter decaying in the nearby but rural, Oak Openings Metropark followed expected patterns of change in litter chemistry, with holocellulose decaying more rapidly than lignin. Additionally, EEA was strongly related to the soluble litter fraction and litter nitrate concentration, consistent with the conceptual model for a nitrogen limited site. Similar to the Arboretum, there was a clear transition between early and late community types, but with a decline in community diversity. Spring flooding of the Arboretum study site, its position in an urban location and a dense population of exotic earthworms (Amynthas agrestis) are postulated to have affected litter decay, EEA and microbial community patterns. Overall, the results suggest that the current conceptual model for leaf litter decomposition based on single species litter experiments is adequate to address patterns of decay in more natural mixes of leaf litter, given site specific factors. Although the experimental data supported the conceptual model for litter decay, no support was found for either the proposed lignocellulose control or microbial to litter mass (B:C) feedback control hypotheses generated by the GDM (data not shown). The experimental data were likely not of sufficient quantity or resolution to provide a conclusive test for simulated predictions.

Committee:

Daryl Moorhead, PhD (Committee Chair); Michael Weintraub, PhD (Committee Member); William Von Sigler, PhD (Committee Member); Margaret Carreiro, PhD (Committee Member); Jared DeForest, PhD (Committee Member)

Subjects:

Ecology; Environmental Science; Soil Sciences

Keywords:

Litter decomposition; extracellular enzyme; lignin; lignocellulose; DGGE; copiotrophic; oligotrophic; litter chemistry; microbial community; litter mixtures; Amynthas agrestis; urban-rural gradient; Guild Decomposition Model

Rohaly, Matthew Joseph. Decomposition of Aromatic Amines in a Jet Fuel Surrogate
Master of Science (M.S.), University of Dayton, 2014, Chemistry
The thermal decomposition of aromatic nitrogen-containing compounds in a jet fuel surrogate was studied. The surrogate fuel's decomposition was compared to the decomposition found in natural jet fuels. High performance liquid chromatography was then used to examine the formation of any polar products from the thermal decomposition of the surrogate fuel. Gas chromatography coupled with mass spectrometry and nuclear magnetic resonance were employed to try to identify the polar products. The large amount of hydrocarbons masking the polar products made fractional collection necessary before any identification could be attempted. After fractional collections were employed, several oxygen-containing polar compounds were found and identified from the thermal decomposition of the surrogate fuel. However, no nitrogen-containing compounds could be found, most likely due to the low concentration of the nitrogen-containing compounds within the surrogate. Because the surrogate fuel's thermal decomposition was effective, it remains a good candidate for further jet fuel studies of reactivity. HPLC was also very effective at observing the formation of polar products within the jet fuel, although it could not identify these products. The fractional collection method that was employed improved the results of the identification process, but it did not achieve enough separation between the polar compounds and the hydrocarbons present in the surrogate; a further separation method is likely needed. GCMS was relatively ineffective at separating and identifying polar products from this reaction because the bulk hydrocarbons masked the polar product signals. GCMS was able to identify an oxygen-containing compound, but only because its elution point was far from that of any hydrocarbon.
NMR was effective at identifying polar compounds that were present in significant quantities; however, the extremely low concentration of the polar products made this process much less effective as well. Overall, for GCMS or NMR to be considered effective techniques for this analysis, a better separation process must be utilized.

Committee:

David Johnson (Advisor)

Subjects:

Chemistry

Keywords:

jet fuels; jet fuel decomposition; nitrogen contaminants in jet fuels; polar products from jet fuel decomposition; jet fuel separation techniques

Zhang, Yulei. Computer Experiments with Both Quantitative and Qualitative Inputs
Doctor of Philosophy, The Ohio State University, 2014, Statistics
Physical experiments play an important role in agriculture, industry, and medical research. However, physical experiments can sometimes be difficult or even impossible to run. In these situations, computer experiments are becoming desirable surrogates for physical experiments. This dissertation considers designs and predictive models for computer experiments with both quantitative and qualitative input variables. The existing framework for building Gaussian stochastic process (GaSP) models with quantitative and qualitative inputs treats each given set of values of the qualitative inputs as determining a response surface over the quantitative inputs. A GaSP model is assumed for each of these response surfaces, and the same covariance structure is used for each response surface. A cross-correlation parameter is introduced for each pair of sets of values of the qualitative variables in order to "capture" correlations between response surfaces. To guarantee a legitimate overall covariance structure, certain conditions are imposed on the cross-correlation parameters. In the first part of this dissertation, we introduce two indicator-based GaSP models that transform the qualitative inputs into quantitative variables and then use traditional correlation functions for quantitative inputs. We also show the equivalence properties between these new models and the existing model. The second part of this dissertation concerns experimental designs with both quantitative and qualitative inputs. The special data structure requires that a "good" design not only capture the cross-correlation information but also spread observations out over the entire quantitative input space. We propose two types of designs, the partial SLHD and partial CSLHD, which are modifications of existing designs in the literature, and compare their prediction accuracy with that of all the other existing designs for quantitative and qualitative inputs.
By examining several examples, we find that what constitutes a "good" design may vary from case to case. We summarize these findings with a "guideline" for selecting initial designs. Furthermore, when the initial design does not perform well, we also propose a sequential design algorithm to interpolate or extrapolate the target response levels in a GaSP model with mixed inputs. Inspired by factor analysis, in the last part of this dissertation we build a more general composite covariance structure by converting the GaSP model with several qualitative levels into a linear combination of independent stochastic processes with fewer constraints on the variance and correlation functions. This composite covariance structure can be extended to the case with multiple qualitative inputs. In these cases, we introduce the Kronecker product form of the composite covariance function, which not only reduces the number of parameters but also captures the similarity between different qualitative inputs with some identical components. In addition, we propose an ANOVA decomposition form of the Gaussian processes, which imposes a factorial structure on the response outputs. Finally, we extend the sequential design algorithm to the composite GaSP model.
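The cross-correlation construction described above can be sketched as follows: the correlation between two runs factors into a cross-correlation between their qualitative levels times a correlation in the quantitative inputs. The Gaussian correlation function, the tau values, and the toy inputs are all illustrative assumptions, not the dissertation's fitted models.

```python
import numpy as np

# Sketch of a GaSP correlation matrix with one quantitative input x and
# one two-level qualitative input (illustrative parameters only).
theta = 5.0                                   # quantitative range parameter
tau = np.array([[1.0, 0.6], [0.6, 1.0]])      # cross-correlations between levels

x = np.array([0.1, 0.4, 0.7, 0.9])            # quantitative input values
q = np.array([0, 0, 1, 1])                    # qualitative level index per run

R = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)  # Gaussian correlation in x
K = tau[q[:, None], q[None, :]] * R                  # overall correlation

# The conditions on the cross-correlations guarantee a legitimate
# (symmetric, positive semidefinite) covariance structure:
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-10
print(np.round(K, 3))
```

Here tau must itself be positive semidefinite; the elementwise product with a valid quantitative correlation matrix then remains valid (Schur product theorem).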

Committee:

William Notz (Advisor); Peter Craigmile (Committee Member); Matthew Pratola (Committee Member)

Subjects:

Statistics

Keywords:

Computer Experiments; Physical Experiments; Gaussian Stochastic Process Model; Quantitative and Qualitative Inputs; Cross-Correlation Parameters; Experimental Designs; Composite GaSP Model; Factor Analysis; Kronecker Product; ANOVA Decomposition

Hua, Liyan. Shortest Path - Capacitated Maximum Covering Problems
Doctor of Philosophy, The Ohio State University, 2010, Business Administration

I study the shortest path - capacitated maximum covering problem (SP-CMCLP). Current, ReVelle and Cohon (1985) first studied the uncapacitated version of this problem. The two objectives of the problem are the minimization of the path length from a predetermined starting node to a predetermined terminal node and the maximization of the total demand covered by the facilities located at the nodes on the path. They solved a special case in which a demand can be covered only if it is located on the path. I solve the general model. I also introduce facility capacity constraints, new algorithms, and new demand coverage structures to this problem.

I decompose the problem into a k-shortest path problem (kSP) and a capacitated maximum covering problem (CMCLP). The k-shortest path problem is solved by a path deletion algorithm. The capacitated maximum covering problem is solved by various heuristics and meta-heuristics, including Lagrangian relaxation, two versions of Tabu search, and a simulated annealing method.
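The kSP half of the decomposition can be sketched on a toy network. The dissertation uses a path deletion algorithm; the minimal pure-Python version below simply enumerates simple s-t paths and keeps the k shortest, which is only viable for small graphs, and the graph and weights here are invented for illustration.

```python
# Sketch of the kSP subproblem on a toy undirected network (adjacency
# dict with edge weights). Exhaustive enumeration stands in for the
# thesis's path deletion algorithm.
graph = {
    "s": {"a": 1, "b": 2},
    "a": {"s": 1, "b": 1, "t": 4},
    "b": {"s": 2, "a": 1, "t": 2},
    "t": {"a": 4, "b": 2},
}

def k_shortest_paths(graph, source, target, k):
    """Return the k shortest simple paths as (length, path) pairs."""
    results = []
    def dfs(node, path, length):
        if node == target:
            results.append((length, path))
            return
        for nxt, w in graph[node].items():
            if nxt not in path:                  # keep the path simple
                dfs(nxt, path + [nxt], length + w)
    dfs(source, [source], 0)
    return sorted(results)[:k]

for length, path in k_shortest_paths(graph, "s", "t", 3):
    print(length, path)
```

Each candidate path returned here would then be scored by solving the covering subproblem over the facilities on that path.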

To the knowledge of the author, the Tabu search and simulated annealing methods introduced here are the first meta-heuristics developed for the capacitated maximum covering problem. In these meta-heuristics, I use four neighborhood structures: 1) one-interchange, which exchanges a selected facility with an unselected facility; 2) client shift, which shifts a satisfied demand from one selected facility to another selected facility; 3) demand swap (or demand reallocation), which swaps one or more assigned demand nodes with one or more unassigned demand nodes within the coverage distance of a selected facility site; and 4) demand addition, which adds one or more unassigned demands to a selected facility. I design an embedded meta-heuristic procedure with inner loops over single neighborhoods and an outer loop that alternates among the inner loops. I design a heuristic method and a penalty method for the demand allocation sub-problem in the embedded Tabu search. In the penalty method, I use surrogate relaxation and add a penalty term to the objective function for the violated capacity constraints. An embedded simulated annealing method with temperature vibration is also designed using heuristic demand allocation.
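The first neighborhood above (one-interchange) can be sketched as follows: swap one selected facility for an unselected one and re-evaluate covered demand under capacity. The demand values, coverage sets, capacity, and greedy assignment rule are all invented for illustration; this is only the neighborhood move, not the full Tabu search with tabu lists and aspiration criteria.

```python
import itertools

# Sketch of the one-interchange move for a toy capacitated maximum
# covering instance (illustrative data and a simple greedy assignment).
demand = {1: 30, 2: 20, 3: 25, 4: 15}
covers = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}}   # facility -> coverable demands
capacity = 40                                      # per-facility capacity

def covered(selected):
    """Greedy demand assignment: largest coverable demands first."""
    total, assigned = 0, set()
    for f in sorted(selected):
        cap = capacity
        for d in sorted(covers[f] - assigned, key=lambda d: -demand[d]):
            if demand[d] <= cap:
                assigned.add(d)
                cap -= demand[d]
                total += demand[d]
    return total

current = {"A", "B"}
best = (covered(current), frozenset(current))
for out_f, in_f in itertools.product(sorted(current), sorted(set(covers) - current)):
    neighbor = (current - {out_f}) | {in_f}        # one-interchange move
    best = max(best, (covered(neighbor), frozenset(neighbor)))
print(best[0], sorted(best[1]))
```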

I solve a new version of the shortest path - capacitated maximum covering problem with tree coverage structure (SP-CMCLP-TREE). Demand is supplied by sub-paths on a minimum spanning tree constructed from an underlying network. A demand is counted as covered if the total arc length of the path from the demand to a facility site is within the coverage distance, and the demand can be satisfied only if all the intermediate demand nodes on the path are satisfied.

Computational results for networks selected from the literature show the effectiveness of the heuristics. Tabu search performs best in solution quality, while Lagrangian relaxation and simulated annealing generate solutions of satisfactory quality using less time. Different path-coverage structures are used based on the properties of the networks. The tree demand coverage structure works better than the traditional coverage structure for large partial networks. The impact of different network parameters is also studied.

Committee:

John R. Current, PhD (Advisor); David A. Schilling, PhD (Committee Member); Keely L. Croxton, PhD (Committee Member)

Subjects:

Management; Operations Research

Keywords:

Shortest Path; Capacitated Maximum Covering Problem; Decomposition; Tabu Search; Simulated Annealing; Path Coverage Structure; Tree

Park, Soo-Jin. Catalytic Decomposition of Nitric Oxide and Carbon Monoxide Gases Using Nanofiber Based Filter Media
Doctor of Philosophy, University of Akron, 2008, Chemical Engineering

The main sources of NOx are diesel engines, automobiles, electric utilities, and other industrial, commercial, and residential sources that burn fuels at high temperature. The control and abatement of NOx emissions are important because of their harmful effects on the human body and the environment. Strict regulation of NOx emissions and the growing demand for power compel new designs of catalytic materials for pollution removal. The most common method for treating NOx in car exhaust involves wet impregnation of noble metals on ceramic substrates.

In this work, a nanofiber-enhanced ceramic fibrous filter medium doped with catalytic nanoparticles is developed as an alternative. Ceramic nanofibers doped with the noble metals palladium, platinum, and rhodium are synthesized using electrospinning and are incorporated into the micro-fibrous filter. We have discovered that ceramic nanofibers containing noble metals also work in liquid-phase catalysis, converting styrene to ethylbenzene at room temperature and atmospheric pressure. The reaction temperature is varied and the filters are tested for decomposition of nitric oxide and carbon monoxide. Carbon dioxide, nitrogen, and nitrous oxide gases were produced; the nitrous oxide produced was consumed by reacting with carbon monoxide. The efficiency of the catalytic fibrous filter was similar to that of a commercial catalytic converter while using a smaller amount of catalyst doped on alumina microfibers. As the amount of catalyst in the fibrous filter media increases, the temperature at which all NO disappears decreases. As the inlet concentration of NO gas decreases, all NO disappears from the outlet at a lower temperature. As the face velocity through the fibrous filter media increases, efficiency becomes lower because the residence time of gases in the media decreases. We also tested a catalytic fibrous filter medium containing Pd, Pt, and Rh together, and its performance is similar to that of a catalytic converter.

Analytical models are developed to study the performance of the filters for isothermal nitric oxide and carbon monoxide gas reaction. The kinetic parameters for the model, which predicts species concentrations as a function of position, were determined using a Genetic Algorithm (GA) computer program. Model and experimental results showed that the decomposition temperature of nitric oxide gas decreases with lower inlet gas concentration and with increasing catalyst concentration in the nanofibers. A non-isothermal model was also developed for direct nitric oxide decomposition to predict temperature and concentration profiles along the filter length.
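The isothermal concentration-versus-position model can be sketched with a first-order plug-flow balance and an Arrhenius rate constant. All parameter values below are illustrative placeholders, not the GA-fitted kinetics from the dissertation.

```python
import math

# Sketch: isothermal plug-flow profile of NO along the filter length,
# C(z) = C_in * exp(-k z / v), with k from an Arrhenius law
# (placeholder parameters, not the fitted model).
A_pre, Ea = 1.0e8, 80_000.0      # pre-exponential (1/s), activation energy (J/mol)
R = 8.314                        # gas constant (J/mol/K)
T = 600.0                        # temperature (K)
v = 0.1                          # face velocity (m/s)
L = 0.01                         # filter thickness (m)
c_in = 500.0                     # inlet NO concentration (ppm)

k = A_pre * math.exp(-Ea / (R * T))          # first-order rate constant (1/s)
for z in [0.0, 0.25 * L, 0.5 * L, L]:
    c = c_in * math.exp(-k * z / v)          # residence time to z is z / v
    print(f"z = {z * 1000:4.1f} mm  NO = {c:6.1f} ppm")
```

The trends reported above fall out of this form: raising T raises k (more conversion at fixed length), while raising the face velocity v shortens residence time and lowers conversion.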

Committee:

George G. Chase, PhD (Advisor)

Subjects:

Chemical Engineering; Environmental Engineering

Keywords:

Catalytic Decomposition; Nanofiber Based Filter Media; Nitric Oxide; Carbon Monoxide

Rosenberg, Chase C. Foliar Chemical Analysis, Decomposition, and Effects on Nutrient Cycling of American Chestnut and its Hybrids
Master of Science (MS), Ohio University, 2010, Environmental and Plant Biology (Arts and Sciences)
The American chestnut, Castanea dentata (Marsh.) Borkh., was once the dominant species of many hardwood forests in eastern North America. A fungal pathogen introduced in 1904 killed mature chestnuts range-wide. Recent work has resulted in a putatively pathogen-resistant chestnut hybrid, making future reintroduction a distinct possibility. However, information on the chemistry and potential effects on nutrient cycling of the chestnut hybrid is necessary to understand the consequences of reintroduction. I predicted that litter from chestnut taxa would contain more recalcitrant compounds and subsequently decompose more slowly than litter of the extant species Quercus alba L. and Acer saccharum Marshall. Lignin and tannin are secondary compounds known to inhibit decomposition, which I predicted would be relatively abundant in chestnut litter. I also predicted that there would be no significant differences between the foliar nutrient content of the hybrid and American chestnut. In this experiment, I measured tannins, litter fractions, ten nutrient elements, and both decomposition rates and enzyme activity (in the Oe/A horizon) of leaves of C. dentata, C. mollissima Blume, hybrid chestnut, Quercus alba, Q. prinus L., Acer rubrum L., and A. saccharum. Nutrient resorption was also quantified in order to further understand hybrid chestnut's potential effect on forest nutrient pools. No significant differences in chemistry or decay rate were found among any of the chestnut taxa. Chestnut contained more tannin than non-chestnut species and retained slightly more mass after one year of decomposition, but all metrics tested were within the ranges of co-occurring species. These data suggest that a reintroduction of hybrid chestnut will not likely alter nutrient cycles in Appalachian hardwood forests.
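Decomposition rates in litterbag studies like this are conventionally compared through a single-exponential decay constant k. The sketch below applies that standard model; the mass-remaining fractions are invented examples, not the study's measurements.

```python
import math

# Sketch of the standard litterbag decay model M(t) = M0 * exp(-k t),
# so k = -ln(M(t) / M0) / t (illustrative mass-remaining fractions).
mass_remaining = {                # fraction of initial mass after 1 year
    "C. dentata": 0.55,
    "hybrid chestnut": 0.56,
    "Q. alba": 0.60,
    "A. saccharum": 0.48,
}

for species, frac in mass_remaining.items():
    k = -math.log(frac) / 1.0     # decay constant (1/yr), t = 1 yr
    print(f"{species:16s} k = {k:.3f} /yr")
```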

Committee:

Brian McCarthy (Advisor); Jared DeForest (Advisor); Ahmed Faik (Committee Member)

Subjects:

Ecology

Keywords:

chestnut; castanea dentata; restoration; foliar chemistry; decomposition

Hasan, M. R. Parametric Study and Higher Mode Response Quantification of Steel Self-Centering Concentrically-Braced Frames
Master of Science, University of Akron, 2012, Civil Engineering
Conventional concentrically braced frame (CBF) systems have limited drift capacity prior to structural damage, often leading to brace buckling under moderate earthquake input, which results in residual drift. Self-centering CBF (SC-CBF) systems have been developed to maintain the economy and stiffness of conventional CBFs while increasing ductility and drift capacity. SC-CBF systems are designed such that the columns uplift from the foundation at a specified level of lateral loading, initiating a rocking (rigid body rotation) of the frame. Vertically aligned post-tensioning bars resist column uplift and provide a restoring force to return the structure to its initial state (i.e., self-centering the system). Friction elements are used at the lateral-load bearings (where lateral load is transferred from the floor diaphragm to the SC-CBF) to dissipate energy and reduce the peak structural response. Previous research has identified frame geometry as a key design parameter for SC-CBFs, as it relates directly to the energy dissipation capacity of the system. This thesis therefore considered three prototype SC-CBFs with differing frame geometries for a comparative study. The prototypes were designed using previously developed performance-based design criteria and modeled in OpenSees to carry out nonlinear static and dynamic analyses. The design and analysis results were then thoroughly investigated to study the effect of changing frame geometry on the behavior of SC-CBF systems. The rocking response of SC systems introduces large higher mode effects into the dynamic response of the structure, which, if not properly addressed during design, can result in seismic demands significantly exceeding the design values and may ultimately lead to structural failure. To compare higher mode effects across different frames, proper quantification of the modal responses by standard measures is therefore essential.
This thesis proposes three normalized quantification measures based on an intensity-based approach, considering the intensity of the modal responses throughout the ground motion duration rather than focusing only on the peak responses. The effectiveness of the three proposed measures and the conventionally used peak-based measure is studied by applying them to dynamic analysis results from several SC-CBFs. These measures are then used to compare higher-mode effects in frames with varying geometric and friction properties.
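The peak-based versus intensity-based distinction can be sketched in a few lines. This is an illustrative comparison only, not the thesis's three measures: the function names, the normalization by the first mode, and the rectangle-rule time integration are assumptions.

```python
import numpy as np

def peak_measure(q, ref=0):
    """Peak-based higher-mode measure: peak of each modal response
    time history, normalized by a reference mode (assumed: mode 1)."""
    peaks = np.max(np.abs(q), axis=1)
    return peaks / peaks[ref]

def intensity_measure(q, dt, ref=0):
    """Intensity-based measure: time-integral of the squared modal
    response over the full record, normalized the same way."""
    inten = np.sum(q**2, axis=1) * dt
    return inten / inten[ref]
```

A short-duration higher-mode spike scores high on the peak measure but low on the intensity measure, which is the motivation the abstract gives for considering the whole ground motion duration.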

Committee:

David Roke, Dr. (Advisor); Kallol Sett, Dr. (Committee Co-Chair); Qindan Huang, Dr. (Committee Member)

Subjects:

Civil Engineering

Keywords:

Self-centering frames; SC-CBF; Performance-based design; Modal decomposition; Higher mode effects

Shrestha, Suman
High Resolution Polarimetric Imaging Techniques for Space and Medical Applications
Master of Science in Engineering, University of Akron, 2013, Electrical Engineering
In the year 2012, there were approximately 226,160 cases of lung cancer and 160,340 deaths from it, as per the National Cancer Institute. There are mainly two types of lung cancer, small cell lung cancer and non-small cell lung cancer, of which 87% are diagnosed as non-small cell. A physical algorithm and a systematic study relating the morphological, chemical and metabolic properties of lung cancer to the physical and optical parameters of the polarimetric detection process are missing. Therefore, one of the purposes of the study is to explore the polarimetric phenomenology of near-infrared light interaction with healthy and lung cancer monoline cells by using efficient polarimetric backscattering detection techniques. Preliminary results indicate that enhanced discrimination between healthy and different types of lung cancer cells can be achieved based on their backscattered intensities, Mueller matrix, diattenuation and depolarization properties. Also, various optical parameters such as the linear depolarization ratio and degree of linear polarization play an important role in discriminating healthy and different lung cancer cells. Specifically, the sizes of the nuclei of the cancer cells and the nucleus-to-cytoplasm ratios appear to have potential impact on the detected polarimetric signatures, leading to enhanced discrimination of lung cancer cells. The second work in this thesis has been done with the support of the Air Force Research Laboratory (AFRL). Polarimetric signals have always played an important role in the identification, discrimination and analysis of a material's optical properties. This work presents a novel remote sensing approach based on polarimetric fractal detection principles.
Backscattered polarimetric signal contributions from different materials used in space applications have already been detected using a laboratory LADAR testbed, and this thesis presents the implementation of the LADAR testbed and fractal-based analysis techniques for these backscattered signals. Fractal dimension has been chosen as the discrimination measure for these materials. The outcome of this thesis indicates that polarimetric fractal principles may enhance the capabilities of LADAR for characterization and discrimination of different materials.
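A fractal dimension of this kind is often estimated by box counting. The following is a generic sketch, not the AFRL testbed's processing chain: the dyadic box sizes, the log-log least-squares fit, and the restriction to 2-D points in the unit square are all assumptions.

```python
import numpy as np

def box_counting_dimension(points, scales=(4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of 2-D points in [0,1]^2 by box
    counting: count occupied eps-boxes at several scales and fit the
    slope of log N(eps) versus log (1/eps)."""
    pts = np.asarray(points, float)
    counts = []
    for s in scales:
        cells = np.floor(pts * s).astype(int)          # box index per point
        counts.append(len({tuple(c) for c in cells}))  # occupied boxes
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope
```

A straight-line point set should score near 1 and a filled region near 2; material signatures would fall somewhere in between, which is what makes the dimension usable as a discriminant.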

Committee:

George Giakos, Dr. (Advisor); Kye-Shin Lee, Dr. (Committee Member); Nghi Tran, Dr. (Committee Member)

Subjects:

Biomedical Engineering; Electrical Engineering; Experiments; Optics; Remote Sensing

Keywords:

Polarimetry; Monoline Lung Cancer Cells; Mueller Matrix Decomposition; Remote Sensing; Fractal Dimension

Dalton, John Christian
Surface Hardening of Duplex Stainless Steel 2205
Doctor of Philosophy, Case Western Reserve University, 2017, Materials Science and Engineering
Increased use of second-generation duplex stainless steels is expected as demand grows for tougher, more economical, corrosion-resistant alloys. A novel gas-phase carburizing and nitriding procedure operating in the temperature range of 325 °C to 440 °C was utilized with the intent of improving both the tribological and electrochemical performance of the 2205 duplex alloy (22Cr–5Ni–3Mo–0.15N) under so-called paraequilibrium conditions. A suite of state-of-the-art microchemical and structural characterization tools was employed following each treatment, and the performance of the treated alloys was assessed by chloride-based cyclic polarization and nano-indentation hardness measurements. Particular emphasis was placed on understanding the response of the ferritic phase, which has been a source of speculation and confusion since the development of such treatments. CALPHAD-based thermodynamic modeling was also used to predict phase stability in the 2205 system subjected to gas-phase paraequilibrium nitridation or carburization. Analysis of the interstitially hardened layer in the austenitic phase of 2205 provides results consistent with similar surface hardening treatments of single-phase austenitic stainless steels: a colossally supersaturated and precipitate-free hardened layer of expanded austenite is formed. The interstitial concentration, case depth, and concomitant mechanical properties can be tailored through control of the temperature, duration, and chemical activity of the gas-phase process. Spatially resolved chemical and structural analysis within the δ-ferrite of 2205 revealed two competitive transformation behaviors following nitridation, while carburization led to only one response.
For both carburization and nitridation, carbon or nitrogen supersaturations in ferrite on the order of 20 at.% and 25 at.%, respectively, were observed, greater than 10^6 times the equilibrium concentration at room temperature, yet remarkably with no measurable expansion or distortion of the bcc lattice. These supersaturated ferrite grains appear 'featureless' in the TEM, suggesting the presence of a high density of strain centers. High-resolution TEM revealed that, unlike austenite, ferrite may accommodate colossal concentrations of carbon or nitrogen through the formation of an ultra-high-density dislocation network (≈10^16 m^-2), as opposed to occupation of lattice-expanding interstitial sites. These linear defects, presumably formed during yielding under the large residual stresses of restricted lattice expansion, enhanced the kinetics of spinodal-like decomposition of the ferrite to nm-scale Fe-rich (αFe) and Cr-rich (αCr) ferrite domains within the carburized or nitrided layer, and apparently provide lower-energy sites for carbon and nitrogen than the potential secondary phases. Elsewhere in the case of nitrided 2205, an isothermal, nitrogen-induced ferrite-to-austenite phase transformation occurred due to the potent austenite-stabilizing nature of nitrogen. The austenite transformation product formed in a partitionless manner from the parent ferrite with a high-aspect-ratio, plate-like morphology in the Nishiyama–Wassermann orientation relationship. These and other microstructural signatures lead to the conclusion that the austenite formed by a shear-like mechanism characteristic of martensitic reactions. The microstructural evolution of these phase transformations in δ-ferrite accompanying low-temperature nitridation of 2205, and the lack of a carburization-induced transformation of ferrite to austenite, are discussed in light of the chemical and strain energies involved.

Committee:

Arthur Heuer, Prof. (Advisor); Frank Ernst, Prof. (Committee Member); Matthew Willard, Prof. (Committee Member); Farrel Martin, Prof. (Committee Member)

Subjects:

Materials Science

Keywords:

stainless steel; surface hardening; surface engineering; carburizing; nitriding; CALPHAD; paraequilibrium; thermodynamic modeling; ferrite; martensite; expanded austenite; S-phase; phase transformations; spinodal decomposition; dislocations

Barker, Rachel Elizabeth
Invasive Shrub (Lonicera maackii) Effects on Terrestrial-Aquatic Linkages in Stream Ecosystems: Organic Matter Processing and Macroinvertebrate Community Colonization
Master of Science (M.S.), University of Dayton, 2011, Biology
Lonicera maackii, an invasive terrestrial shrub that is dominant in riparian zones, may have substantial impacts on the structure and function of aquatic systems. We investigated linkages between this terrestrial invader and aquatic community structure in headwater streams by assessing in situ leaf breakdown and the colonization of leaf packs by macroinvertebrates in three 3rd-order headwater streams of southwestern Ohio: Possum Run, Fecher Park, and Black Oak Park. Leaf breakdown and colonizing macroinvertebrate taxa richness, density, and functional feeding group relative abundance were assessed using three types of in-stream leaf packs: L. maackii, native (Fraxinus spp., Platanus occidentalis), and a mixture of native and invasive leaves. Invasive leaf breakdown rates were up to 4x faster than those of native leaves, and significantly less organic matter remained for invasive packs compared to other leaf types on all sampling days. Certain macroinvertebrate taxa were unique to particular treatments. For instance, invasive leaf packs were dominated by collector-gatherers for the duration of the study, and Chironomidae in particular was the most important taxon colonizing invasive leaf packs. In contrast, Oligochaeta colonized native leaf packs in greater densities in Possum Run and mixed leaf packs in Fecher Park. Collector-gatherers also dominated mixed leaf packs across stream sites and native leaf packs within Possum Run. In summary, our results demonstrate that L. maackii leaves break down significantly faster in headwater streams than native leaves, and that macroinvertebrate densities and functional feeding group colonization are influenced by invasive leaf litter input. These results support the hypothesis that L. maackii invasion of riparian zones can have direct and significant impacts on aquatic ecosystems by influencing organic matter processes.
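Litterbag breakdown of this kind is commonly summarized by a single-exponential decay coefficient k, with M(t) = M0 e^(-kt), so "up to 4x faster" corresponds to a roughly fourfold larger k. A minimal fitting sketch (the function name and the through-the-origin log-linear fit are assumptions, not the authors' analysis):

```python
import math

def breakdown_rate(days, mass_frac):
    """Fit M(t)/M0 = exp(-k*t) to litterbag data: the least-squares
    slope of ln(mass fraction) versus time, constrained through the
    origin, gives the decay coefficient k (per day)."""
    num = sum(t * math.log(m) for t, m in zip(days, mass_frac))
    den = sum(t * t for t in days)
    return -num / den
```

Comparing k fitted to invasive and native leaf packs then gives the breakdown-rate ratio directly.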

Committee:

Ryan McEwan, PhD (Advisor); Mark Benbow, PhD (Advisor); Jayne Robinson, PhD (Committee Member); Patrick Williams, PhD (Committee Member)

Subjects:

Aquatic Sciences; Biology; Ecology; Freshwater Ecology

Keywords:

invasive species; Lonicera maackii; decomposition; allochthonous inputs; aquatic macroinvertebrates; organic matter processing

He, Bo
Compatible discretizations for Maxwell equations
Doctor of Philosophy, The Ohio State University, 2006, Electrical Engineering

The main focus of this dissertation is the study and development of numerical techniques to solve Maxwell equations on irregular lattices. This is achieved by means of compatible discretizations that rely on some tools of algebraic topology and a discrete analog of differential forms on a lattice.

Using the discrete Hodge decomposition and Euler's formula for a network of polyhedra, we show that the number of dynamic degrees of freedom (DoFs) of the electric field equals the number of dynamic DoFs of the magnetic field on an arbitrary lattice (cell complex). This identity reflects an essential property of the discrete Maxwell equations (their Hamiltonian structure) that any compatible discretization scheme should observe. We unveil a new duality, called Galerkin duality, a transformation between two (discrete) systems: a primal system and a dual system. If the discrete Hodge operators are realized by Galerkin Hodges, we show that the primal system recovers the conventional edge-element FEM and suggests a geometric foundation for it. On the other hand, the dual system suggests a new (dual) type of FEM.

We find that inverse Hodge matrices have strong localization properties. Hence we propose two thresholding techniques, viz., algebraic thresholding and topological thresholding, to sparsify inverse Hodge matrices. Based on topological thresholding, we propose a sparse and fully explicit time-domain FEM for Maxwell equations. From a finite-difference viewpoint, topological thresholding provides a general and systematic way to derive stable local finite-difference stencils in irregular grids.

We also propose and implement an E-B mixed FEM scheme to discretize first order Maxwell equations in frequency domain directly. This scheme results in sparse matrices.

In order to tackle low-frequency instabilities in frequency domain FEM and spurious linear growth of time domain FEM solutions, we propose some gauging techniques to regularize the null space of a curl operator.

Committee:

Fernando Teixeira (Advisor)

Keywords:

differential forms; chains and cochains; Whitney forms; de Rham diagram; gauging; compatible discretization; Hodge operator; Hodge decomposition; Euler's formula; FDTD; FEM; Galerkin duality; primal and dual; pure Neumann boundary condition; mixed FEM

Kaufman, Jason R.
Digital video watermarking using singular value decomposition and two-dimensional principal component analysis
Master of Science (MS), Ohio University, 2006, Electrical Engineering & Computer Science (Engineering and Technology)

As the state of remote sensing technology improves, the acquisition of three-dimensional images and video will become more common in several different applications. However, the problem of protecting and authenticating three-dimensional data – in particular, three-dimensional video data – has been largely unexplored. An application of the singular value decomposition (SVD) and two-dimensional principal component analysis (2DPCA) to video data with an arbitrary number of channels for the purpose of watermarking is presented.

It will be shown that it is possible to select parameters that preserve the visual quality of the video while effectively embedding the watermark in both the spatial and temporal domains. However, much processing time is required to embed and extract the watermark. Furthermore, it is unclear how robust the presented technique is to attack.
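The SVD side of such a scheme can be sketched in a minimal single-frame form. This is a generic singular-value embedding for illustration only; the thesis's actual method combines SVD with 2DPCA across multi-channel video frames, which is not reproduced here, and `alpha`, the function names, and the small-perturbation requirement are assumptions.

```python
import numpy as np

def embed_frame(frame, watermark, alpha=0.05):
    """Hypothetical per-frame SVD watermark: perturb the singular
    values of the frame by alpha * watermark and reconstruct."""
    U, s, Vt = np.linalg.svd(frame, full_matrices=False)
    return U @ np.diag(s + alpha * watermark) @ Vt

def extract_frame(marked, original, alpha=0.05):
    """Non-blind extraction: compare singular values of marked and
    original frames.  Assumes alpha * watermark is small enough that
    the descending order of singular values is preserved."""
    s_marked = np.linalg.svd(marked, compute_uv=False)
    s_orig = np.linalg.svd(original, compute_uv=False)
    return (s_marked - s_orig) / alpha
```

Because only the singular values are touched, the left and right singular vectors (most of the visual structure) are preserved, which is the usual argument for the imperceptibility of SVD-domain watermarks.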

Committee:

Mehmet Celenk (Advisor)

Keywords:

digital video watermarking; information security; singular value decomposition (SVD); two-dimensional principal component analysis (2DPCA)

Jung, Ji Young
Nitrogen Fertilization Impacts on Soil Organic Carbon and Structural Properties under Switchgrass
Doctor of Philosophy, The Ohio State University, 2010, Environment and Natural Resources
A paradigm shift from maximum to sustainable agricultural production also applies to cultivation of bioenergy crops. Nitrogen (N) fertilization is needed to sustain the biomass yield of switchgrass as a biofuel feedstock and, consequently, may influence the potential for soil quality improvement through soil organic carbon (SOC) sequestration. Because changes in soil quality can feed back to affect the sustainability of biomass production, the impacts of N application on switchgrass biomass production and soil quality need to be evaluated together. Therefore, the overall objective of this study was to assess the effects of N fertilization on switchgrass biomass, changes in SOC concentration and pool, and soil structural properties. This objective was realized by conducting field experiments in Ohio and Tennessee, and a laboratory incubation study in Ohio. The aboveground biomass of switchgrass was more strongly influenced by N fertilization than the belowground biomass. Even when the aboveground biomass was harvested and removed, N fertilization led to an increase in SOC, both in Ohio and Tennessee. The data from the laboratory incubation study showed that N additions could retard the decomposition of organic matter, which may contribute towards higher SOC pools in N-fertilized plots. The results from the Tennessee experiments indicated the important role of roots in stabilizing soil structure. Despite higher SOC concentrations in plots receiving a high rate of N fertilization, higher soil structural stability was associated with greater root biomass and longer root length in plots receiving no or a low rate of N fertilizer. These data indicated that root growth is a crucial driver of surface soil structure.

Committee:

Rattan Lal, PhD (Advisor); Peter Curtis, PhD (Committee Member); David Barker, PhD (Committee Member); Julie Jastrow, PhD (Committee Member)

Subjects:

Soil Sciences

Keywords:

switchgrass; soil organic carbon (SOC); nitrogen fertilization; biomass production; soil structure; organic matter decomposition; root

Giai, Carla
Fire, Exotic Earthworms and Plant Litter Decomposition in the Landscape Context
Doctor of Philosophy, The Ohio State University, 2009, Evolution, Ecology, and Organismal Biology
Mixed-oak (Quercus spp.) forests in southern Ohio have been significantly affected by fire suppression policies enforced since the early 1920s, possibly altering key ecosystem properties such as soil organic matter quality and quantity, nutrient availability, and microbial activity. The Fire and Fire Surrogate (FFS) program is a multidisciplinary program that included the experimental application of both functional restoration treatments (i.e., prescribed fire at historical intervals) and structural approaches (i.e., mechanical thinning to pre-settlement density and species composition). Within that framework, I focused on three unquantified major aspects of southern Ohio forest ecosystems exposed to a quasi-natural disturbance regime: (1) effects on soil microbial communities, (2) possible belowground effects of non-native earthworm species, and (3) impacts on the rates of litter decomposition and subsequent nutrient release. Direct effects of prescribed fire and thinning on soil nutrient dynamics and microbial communities were analyzed three months after the second application of prescribed fire. This short-term analysis was complementary to a series of in-situ, manipulative experiments that evaluated (a) potential effects of non-native earthworm species on N dynamics, as well as changes in soil organic matter and microbial communities within the FFS experimental design, and (b) litterbag incubations of native leaf mixtures and standard litters over more than three years in our study sites, in which I evaluated the influence of restoration treatment, leaf litter quality, and landscape position on leaf litter decomposition processes and nutrient release. The major conclusion of my studies is that I did not observe definitive effects of the restoration treatments. Rather, a combination of site characteristics and landscape position was responsible for some of the differences observed when comparing unmanaged controls to any of the alternative restoration treatments.
I hypothesize that because my studies were performed a short time after the application of these treatments, and considered only two applications of prescribed fire and one thinning event, it is not possible to separate larger-scale factors (e.g., topography and climate) determining nutrient dynamics in the area from any effects that could result from the restoration treatments. Consequently, further applications of prescribed fire, thinning, and/or their combination are strongly recommended for the region.

Committee:

Ralph Boerner, PhD (Advisor); Clive Edwards, PhD (Committee Member); Robert Klips, PhD (Committee Member); David Stetson, PhD (Committee Member); Mac Callaham, Jr., PhD (Committee Member)

Subjects:

Ecology; Forestry

Keywords:

fire; exotic earthworms; plant litter decomposition; terrestrial ecosystems; hardwood forests; restoration ecology; nutrient dynamics

Palan, Rohit Chandulal
Pyroelectric Properties of Ferroelectric Lanthanum Bismuth Titanate Thin Films
MS, University of Cincinnati, 2001, Engineering : Materials Science
Ferroelectric and pyroelectric effects in Lanthanum Bismuth Titanate LaxBi4-xTi3O12 (LBIT) solid-solution thin films prepared by the Metal Organic Decomposition technique were studied. The films were characterized to understand the effect of La substitution on their texture. X-ray diffraction, Raman spectroscopy and electron microscopy were utilized to study the effect of La on the structure of the LBIT thin films. Dielectric, P-E hysteresis, and static and dynamic pyroelectric measurements were carried out to study the effect of La substitution on the electrical properties. Experimental results indicate that the La content affects the development of c-orientation and induces tetragonality in the structure, resulting in improved polarization and dielectric properties. Higher polarization, along with a reduced Curie temperature due to La addition, improves the pyroelectric coefficient as measured with the static pyroelectric method. The voltage and detectivity figures of merit were found to be high for high La content in the system. For La = 0.6, the figures of merit were comparable to widely studied pyroelectric thin-film systems, suggesting the potential of LBIT thin films for application in pyroelectric sensors.

Committee:

Dr. Relva C. Buchanan (Advisor)

Subjects:

Engineering, Materials Science

Keywords:

Bismuth Titanate; Ferroelectric Properties; Pyroelectric Properties; Metal-Organic Decomposition; Thin Films

Vardar, Ceren
On the Correlation of Maximum Loss and Maximum Gain of Stock Price Processes
Doctor of Philosophy (Ph.D.), Bowling Green State University, 2008, Mathematics/Probability and Statistics
One of the primary issues in mathematical finance is the ability to construct portfolios that are optimal with respect to risk. The stock price is subject to stochastic variability, so the risk an investor encounters is due to the stock prices. A commonly used measure of risk is the expected maximum loss of a stock, in other words, how much one can lose. It can be defined informally as the largest drop from a stock peak to a stock nadir. Over a certain fixed length of time, a reasonably low expected maximum loss is as crucial to the success of any fund as a high maximum gain or maximum profit. The correlation coefficient of the maximum loss and the maximum gain indicates the relation between the gain and the risk using measures which are functions of the Sharpe ratio. The price of one share of the risky asset, the stock, is modeled by geometric Brownian motion. By taking the log of geometric Brownian motion, Brownian motion can be used as the basis of calculations related to the geometric Brownian motion. In this dissertation work, we present analytical results related to the joint distribution of the maximum loss and maximum gain of a Brownian motion and their correlation, together with a detailed explanation of this theoretical result, which requires a review of standard but difficult literature. We have given an analytical expression for the correlation of the supremum and the infimum of standard Brownian motion up to an independent exponential time, we have shown convexity of the maximum gain and the maximum loss, and we have calculated some bounds for the expected values of maximum gain and maximum loss. We also search for a relation between the Sharpe ratio and the correlation coefficient for Brownian motion with drift and geometric Brownian motion with drift. Using the scaling property, we have shown that the correlation coefficient does not depend on the diffusion coefficient for Brownian motion.
Finally, using real-life data, we present the correlation of maximum gain and maximum loss and the correlation of the supremum and the infimum of stock prices.
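The scaling claim can be checked by simulation. The following Monte Carlo sketch (the path counts, step counts, seeds, and the expectation of a negative correlation are assumptions, not results taken from the dissertation) estimates the correlation of maximum gain and maximum loss of standard Brownian motion for two horizons:

```python
import numpy as np

def simulate_bm(n_paths, n_steps, horizon, seed=0):
    """Discretized standard Brownian motion paths, shape (n_paths, n_steps)."""
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    return np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

def max_gain_loss(paths):
    """Maximum gain (largest rise from a running minimum) and maximum
    loss (largest drop from a running maximum) of each path."""
    run_max = np.maximum.accumulate(paths, axis=1)
    run_min = np.minimum.accumulate(paths, axis=1)
    return np.max(paths - run_min, axis=1), np.max(run_max - paths, axis=1)

def gain_loss_corr(n_paths=20000, n_steps=400, horizon=1.0, seed=0):
    """Sample correlation of maximum gain and maximum loss; by the
    scaling property it should not depend on the horizon."""
    g, l = max_gain_loss(simulate_bm(n_paths, n_steps, horizon, seed))
    return np.corrcoef(g, l)[0, 1]
```

A strongly trending path tends to have a large gain and a small loss (or vice versa), so the estimate comes out negative for driftless Brownian motion, while the estimates for different horizons agree up to Monte Carlo noise.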

Committee:

Gabor Szekely (Advisor); Craig Zirbel (Advisor); George Bullerjahn (Committee Member); Maria Rizzo (Committee Member); John Chen (Committee Member)

Subjects:

Mathematics

Keywords:

Brownian Motion; Geometric Brownian Motion; Sharpe Ratio; Strong Markov Property; Scaling Property; Bessel Process; Doob's h-transform; Path Decomposition

Al-Saidi, Muslem Muhamed Mahdi
Balanced Disk Separators and Hierarchical Tree Decomposition of Real-Life Networks
MS, Kent State University, 2015, College of Arts and Sciences / Department of Computer Science
Recently, in a few papers, balanced disk separators of graphs were used recursively to construct one or a small set of spanning trees that sharply approximate distances in a given graph. The best approximation algorithms to date were obtained for multiplicative tree spanners and for collective additive tree spanners of graphs. In this thesis, we analyze a number of real-life networks arising in Internet measurements and in the social and biological sciences, and demonstrate that they all admit small-radius balanced disk separators. As a consequence, the one-to-one distances in those networks can be sharply approximated by tree metrics. Furthermore, all those networks admit very simple distance and routing labeling schemes.
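A disk separator here is the ball of radius r around some vertex whose removal splits the graph into balanced pieces. A brute-force checking sketch for small graphs (the adjacency-dict representation, the `alpha` balance threshold, and the function names are assumptions; real-life networks would need a far more efficient search):

```python
from collections import deque

def ball(adj, center, r):
    """All vertices within hop-distance r of center (plain BFS)."""
    dist = {center: 0}
    queue = deque([center])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def find_disk_separator(adj, r, alpha=0.5):
    """Return (center, disk) for the first radius-r disk whose removal
    leaves every connected component with <= alpha * n vertices."""
    n = len(adj)
    for v in adj:
        disk = ball(adj, v, r)
        remaining = set(adj) - disk
        balanced = True
        while remaining and balanced:
            comp, queue = set(), deque([next(iter(remaining))])
            while queue:                      # flood-fill one component
                u = queue.popleft()
                if u in remaining:
                    remaining.discard(u)
                    comp.add(u)
                    queue.extend(w for w in adj[u] if w in remaining)
            if len(comp) > alpha * n:
                balanced = False
        if balanced:
            return v, disk
    return None
```

On a path graph the middle vertex is a radius-0 disk separator; recursing on the resulting pieces is what yields the hierarchical tree decomposition the title refers to.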

Committee:

Feodor Dragan (Advisor); Hassan Peyravi (Committee Member); Mikhail Nesterenko (Committee Member)

Subjects:

Computer Science

Keywords:

Balanced Disk Separators; Hierarchical Tree Decomposition; Real-Life Networks

Chabot, John Alva
Validating Steady Turbulent Flow Simulations Using Stochastic Models
Master of Science, Miami University, 2015, Computational Science and Engineering
Proper Orthogonal Decomposition was heralded as an objective means of extracting coherent structures from turbulent flows. Prior to its introduction, coherent structures were subjectively defined and discussed among experts. Since its introduction, many reduced order models have been developed with inconsistent and often flow-dependent validation procedures. This work sets up a framework for a data-driven approach to validation of reduced order models derived from steady turbulent flows. Here it is proposed that the 'goodness' of a model can be scored by how similarly experimental and simulated data move through the model space. This is achieved by generating a Markov model for both data sets, using clustering techniques and maximum likelihood estimates. Results show that increasing scores correlate with improved turbulent kinetic energy and modal amplitudes across three data sets and 14 models. Additionally, the generation of a surrogate Markov model can be used to identify missing dynamics in a simulation.
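The Markov-model step can be sketched once cluster labels for the snapshots are taken as given. This is an illustrative sketch only: the maximum-likelihood counting is standard, but the total-variation-style similarity score below is an assumption standing in for the thesis's actual scoring.

```python
import numpy as np

def markov_mle(labels, n_states):
    """Maximum-likelihood transition matrix from a sequence of cluster
    labels: row-normalized counts of observed state-to-state moves."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    totals = counts.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0        # leave never-visited rows at zero
    return counts / totals

def model_similarity(p_exp, p_sim):
    """1 minus the mean row-wise total-variation distance between two
    transition matrices; 1.0 means the chains move identically."""
    return 1.0 - 0.5 * np.abs(p_exp - p_sim).sum(axis=1).mean()
```

Scoring a simulation then amounts to clustering experimental and simulated snapshots into the same states, building one transition matrix each, and comparing them.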

Committee:

Edgar Caraballo, Dr. (Advisor); Mehdi Safari, Dr. (Committee Member); Andrew Sommers, Dr. (Committee Member)

Subjects:

Fluid Dynamics; Mathematics; Statistics

Keywords:

fluid dynamics; validation; reduced order models; proper orthogonal decomposition; galerkin method; Markov model; maximum likelihood estimate; clustering

Hensler, Amanda Lauren
A Comparison of Rotational and Continuously Stocked Pasture in Appalachian Ohio
Master of Science, The Ohio State University, 2008, Horticulture and Crop Science
Rotational stocking (RS) is an increasingly used practice that can benefit producers. The objective of this study was to compare RS and continuous stocking (CS) (two replicates) with 32 cow-calf pairs on 17.2 ha at the North Appalachian Experimental Watershed, Coshocton, OH. Forage biomass was measured weekly with a rising plate meter. Calculated forage intake by RS cattle, July-September 2006 and 2007, was 13.3 and 9.6 kg DM/cow/day, respectively. Forage decomposition rate was measured three times during the grazing season for green and dead vegetation, with no differences between grazing treatments. Cattle weight gain was 10.7 kg greater for RS pasture in 2006; in 2007, cows on CS gained 12.2 kg more than those on RS. It was concluded that benefits to RS occurred in 2006, a year with better climatic conditions. Research should continue to determine how cumulative effects, such as on botanical composition, and variation in climate might affect production.

Committee:

David Barker, PhD (Committee Chair); R. Mark Sulc, PhD (Committee Member); Steven Loerch, PhD (Committee Member)

Subjects:

Agriculture; Agronomy; Animals

Keywords:

grazing; decomposition; forage; pasture growth; biomass; beef cattle

Zhao, Kezhong
A domain decomposition method for solving electrically large electromagnetic problems
Doctor of Philosophy, The Ohio State University, 2007, Electrical Engineering
This dissertation presents a domain decomposition method as an effective and efficient preconditioner for frequency domain FEM solution of geometrically complex and electrically large electromagnetic problems. The method reduces memory requirements by decomposing the original problem domain into several non-overlapping and possibly repeatable sub-domains. At the heart of this research are the Robin-to-Robin map, the "cement" finite element coupling of non-conforming grids, and the concept of duality pairing. The Robin transmission condition is employed on interfaces between adjacent sub-domains to enforce continuity of electromagnetic fields and to ensure the sub-domain problems are well-posed. Through the introduction of cement variables, the meshes at the interface can be non-conformal, which significantly relaxes the meshing procedures. By following the spirit of duality pairing, a symmetric system is obtained to better reflect the physical nature of the problem. These concepts, in conjunction with the so-called finite element tearing and interconnecting algorithm, form the basic modules of the present domain decomposition method. To enhance the convergence of the DDM solver, Krylov solvers are employed and studied instead of classical stationary solvers. In order to account for the radiation condition exactly, thus eliminating spurious reflections, a boundary element formulation is hybridized with the present DD method, also through the aforementioned novel concepts. One of the special cases of the present hybridization is the well-known hybrid finite element and boundary element method. It will be shown that the proposed hybrid offers simultaneously: (1) symmetry, (2) modularity, (3) non-conformity between FEM and BEM domains, (4) freedom from internal resonance, and (5) a natural and effective preconditioning scheme that guarantees a spectral radius less than or equal to one.
Lastly, this dissertation presents a DDM solution scheme for analyzing electromagnetic problems involving multiple separable scatterers. The method first decomposes the original problem into several disjoint sub-regions. In each sub-region, the domain decomposition method is further applied, rendering geometrically complicated and electrically large sub-region problems tractable. The sub-regions communicate through the near-field Green's function. To overcome the vast computational costs required in exchanging information between electrically large sub-regions, the adaptive cross approximation algorithm is adopted to expedite the process.
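The dissertation's Robin-Robin FETI scheme is far beyond a snippet, but the core divide-solve-exchange idea of domain decomposition can be illustrated with a classical alternating Schwarz iteration on a 1-D model problem, -u'' = 1 on [0, 1] with u(0) = u(1) = 0. This is an illustrative stand-in, not the method of the dissertation: the overlapping subdomains, Dirichlet transmission data, and closed-form subdomain solves are all assumptions.

```python
def solve_sub(a, b, ua, ub):
    """Exact solve of -u'' = 1 on [a, b] with u(a)=ua, u(b)=ub.
    The general solution is u(x) = -x^2/2 + c1*x + c0; return (c1, c0)."""
    c1 = (ub - ua + (b * b - a * a) / 2.0) / (b - a)
    c0 = ua + a * a / 2.0 - c1 * a
    return c1, c0

def schwarz(n_iter=30):
    """Alternating Schwarz on overlapping subdomains [0, 0.6] and
    [0.4, 1]: solve each subdomain in turn, exchanging interface values."""
    def eval_u(c, x):
        return -x * x / 2.0 + c[0] * x + c[1]

    u_b_at_06 = 0.0                                     # initial guess
    for _ in range(n_iter):
        ca = solve_sub(0.0, 0.6, 0.0, u_b_at_06)        # left solve
        cb = solve_sub(0.4, 1.0, eval_u(ca, 0.4), 0.0)  # right solve
        u_b_at_06 = eval_u(cb, 0.6)                     # exchange data
    return ca, cb
```

Both subdomain solutions converge geometrically to the exact solution u(x) = x(1 - x)/2; the non-overlapping Robin-Robin variant in the dissertation replaces the Dirichlet interface data with Robin combinations of value and flux.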

Committee:

Jin-Fa Lee (Advisor)

Keywords:

numerical methods; computational electromagnetics; domain decomposition method; finite element method; hybrid finite element and boundary element method; multi-region method

Khatmullin, Renat R.
Iminium Salts and Their Derivatives as Models for Catalytic Water Oxidation
Doctor of Philosophy (Ph.D.), Bowling Green State University, 2013, Photochemical Sciences
Solar energy utilization is one of the most promising strategies for meeting ever-increasing energy demand in a renewable manner. For this reason, several approaches are pursued for solar energy storage, one of which involves the photocatalytic splitting of water. Over recent years, much research has been directed towards the design of transition-metal-based water oxidation catalysts. The major drawback of most of these catalysts is the cost of the transition-metal complexes. For these reasons, the main focus of our research is the design of a fully organic catalyst suitable for water oxidation. Our group recently discovered that a flavinium ion performs electrode-mediated electrocatalytic water oxidation at large overpotentials. It was found that catalysis occurs only in the presence of electrodes that produce active oxides on their surfaces. The mechanism of catalysis by the flavinium ions was proposed to involve the coupling reaction of two oxygen-centered radicals, one of which is derived from the flavin moiety while the other is formed at the electrode surface. The electrochemical oxidation of the resulting peroxide species was then proposed to release the oxygen molecule and recover the catalyst. It is important to note, however, that detailed study of the mechanism is limited by the fact that the electrode participates in the catalytic cycle. For these reasons, it is crucial to develop a fully homogeneous system to study the mechanism of the catalysis. One approach towards fully molecular catalysis involves a system composed of two iminium-ion moieties joined covalently by a suitable linker.
The catalytic mechanism is proposed to involve four individual steps: (i) pseudobase formation via reaction of flavinium ions with water; (ii) proton-coupled oxidation of the pseudobases to generate alkoxyl radicals; (iii) coupling of the alkoxyl radicals to generate a peroxide intermediate; (iv) two-electron oxidation of the peroxide to release molecular oxygen and regenerate the catalyst. We therefore studied each individual step of the proposed mechanism in detail. A series of iminium salts and their pseudobases were synthesized. It was found that the efficiency of pseudobase formation depends on the nature of the heterocyclic ion and of the substituents bound to it. The proton-coupled electrocatalytic oxidation of the pseudobases was studied using cyclic voltammetry. We found that deprotonation of the amine radical cation formed after one-electron oxidation of a pseudobase derivative occurs via two competing pathways: O-H versus C-H deprotonation. To elucidate the site responsible for the C-H deprotonation event, we synthesized methoxy derivatives of the iminium ions, since these compounds do not contain an O-H proton. Additionally, to investigate the general chemistry of the alkoxyl radicals, we prepared 2-ethyl-4-nitroisoquinolinium hydroperoxide. Since hydroperoxides also tend to form alkoxyl radicals upon decomposition, we investigated the thermal behavior of 2-ethyl-4-nitroisoquinolinium hydroperoxide using steady-state UV/Vis and NMR spectroscopy. Finally, to study the two-electron oxidation of the peroxide leading to the formation of oxygen, we report modified procedures for the synthesis of a symmetric xanthene-based peroxide moiety.

Committee:

Ksenija Glusac, Ph.D. (Advisor); Thomas Kinstle, Ph.D. (Committee Member); Marshall Wilson, Ph.D. (Committee Member); Michael Zamkov, Ph.D. (Committee Member)

Subjects:

Chemistry; Energy

Keywords:

Photochemistry, Energy Demand, Iminium Ions, Catalytic Water Oxidation, Decomposition of Organic Peroxides

Parker, Grant Houston. Pyrolytic Decomposition of Synthetic Paraffinic Kerosene Fuel Compared to JP-7 and JP-8 Aviation Fuels
Master of Science (M.S.), University of Dayton, 2013, Chemical Engineering
Each generation of advanced military aircraft flies higher and faster than the previous one. With these leaps in performance, aircraft develop enormous heat loads that can exceed aircraft material limitations. To relieve these heat loads, aircraft can utilize the endothermic heat-sink capacity of jet fuel realized through pyrolytic decomposition. An improved understanding of the effect of fuel chemical composition on supercritical pyrolytic reactivity, under conditions relevant to advanced aircraft operation, can assist the development of viable cooling methodologies. The goal of the current study was to compare the pyrolytic reactivity, primary decomposition products, and global reaction rates of fuels with varying chemical composition. A flowing reactor system was used to explore the pyrolytic chemistry of a Synthetic Paraffinic Kerosene (SPK) and the specification jet fuels JP-8 and JP-7. The SPK was comprised solely of iso- and n-paraffins, with negligible cycloparaffin and aromatic content, while the specification fuels had chemical compositions consistent with typical petroleum-derived fuels. The pyrolytic studies were performed using stainless steel tube reactors 37.5 cm long and 0.5 mm in inside diameter, with inlet flow rates of 0.2 to 0.6 mL/min at a pressure of 3.54 MPa. External reactor wall temperatures ranged from 500 °C to 650 °C. The liquid-to-gas conversion by mass was used as the metric for evaluating pyrolytic reactivity because of the complex multicomponent composition of the test fuels. SPK averaged 45% higher conversion than JP-7 and 75% higher conversion than JP-8 at each respective temperature. All fuels followed similar reactivity trends with respect to the controlling reaction chemistry, such as decomposition of long-chain n-paraffins, olefin formation, cycloparaffin formation, aromatic formation, and gas (e.g., low-molecular-weight compound) production.
Characterization of the relative reactivity of the fuels was performed by assuming the fuels decomposed via a first-order, irreversible reaction pathway with respect to the gravimetric liquid-to-gas conversion. The calculated reaction rates and temperature data were used to develop Arrhenius plots, which yielded the following kinetic parameters: SPK, pre-exponential factor (A) of 2.3 x 10^12 s^-1 and activation energy (Ea) of 223 kcal/mol; JP-7, A of 2.1 x 10^12 s^-1 and Ea of 226 kcal/mol; JP-8, A of 4.6 x 10^12 s^-1 and Ea of 235 kcal/mol. These parameters can be used to estimate the initial reactivity and decomposition of these fuels under endothermic conditions. By the liquid-to-gas conversion metric, SPK fuels are more pyrolytically reactive than JP-7 and JP-8 because of the differences in their neat chemical compositions. The mildly branched paraffins of the SPK, with negligible cycloparaffins and aromatics (which can act as hydrogen donors that reduce the propagation rate), limited the available reaction pathways, resulting in a high liquid-to-gas conversion. JP-7 and JP-8 had a lower liquid-to-gas conversion because their significantly higher initial concentrations of cycloparaffins and aromatics enable these fuels to participate in a greater number of hydrogen-donor reactions, which lowers the extent of propagation reactions. The implications of these results can vary depending on the heat-sink design and endothermic fuel-cooling strategy. The propensity of SPK to react at lower temperature can enable SPK fuels to reach the endothermic heating value more readily. Unfortunately, a fuel with a higher pyrolytic reaction rate can also produce carbon deposition more readily. Further development of a hypersonic heat-exchanger design and determination of the acceptable amount of liquid-to-gas conversion will dictate the optimal endothermic fuel.
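As a rough illustration of the first-order Arrhenius treatment described above, the sketch below evaluates k = A · exp(-Ea / (R·T)) for each fuel using the reported pre-exponential factors. The helper function is hypothetical, and Ea is treated in kJ/mol here purely as an assumption to keep the exponent well-scaled (the abstract states kcal/mol); the point is the functional form, not the absolute magnitudes.

```python
import math

R_KJ = 8.314e-3  # molar gas constant in kJ/(mol*K), matching Ea in kJ/mol

def arrhenius_k(A, Ea, T, R=R_KJ):
    """First-order Arrhenius rate constant k = A * exp(-Ea / (R*T)).

    A  : pre-exponential factor, 1/s
    Ea : activation energy, in units consistent with R
    T  : absolute temperature, K
    """
    return A * math.exp(-Ea / (R * T))

# A (1/s) and Ea values as reported in the abstract; Ea interpreted in
# kJ/mol here for illustration only -- an assumption, not a correction.
fuels = {
    "SPK":  (2.3e12, 223.0),
    "JP-7": (2.1e12, 226.0),
    "JP-8": (4.6e12, 235.0),
}

T = 873.15  # ~600 C, within the 500-650 C wall-temperature range studied
for name, (A, Ea) in fuels.items():
    print(f"{name}: k = {arrhenius_k(A, Ea, T):.3e} 1/s")
```

Under these assumed units, the lower activation energy of SPK outweighs its smaller pre-exponential factor, so its computed rate constant exceeds that of JP-8 at a given wall temperature, consistent with the reactivity ordering reported above.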

Committee:

Matthew DeWitt (Advisor); Kevin Myers (Committee Member); Steven Zabarnick (Committee Member); Richard Striebich (Committee Member)

Subjects:

Aerospace Engineering; Alternative Energy; Chemical Engineering

Keywords:

pyrolysis, endothermic jet fuels, synthetic paraffinic kerosene, hypersonic, endothermic heat sink, pyrolytic decomposition, JP-7, fischer tropsch, FT
