Search Results

(Total results 16)

  • 1. Horiki, Yasutaka A self calibration technique for a DOA array in the presence of mutual coupling and resonant scatterers

    Doctor of Philosophy, The Ohio State University, 2006, Electrical Engineering

    This dissertation describes an iterative self-calibration technique for a Direction-of-Arrival (DOA) array to automatically remove the effects of mutual coupling and near-zone resonant size scatterers. The Inverse Method of Moments (IMM) is also introduced to determine the normalized Method of Moments (MM) impedance matrix and MM voltage vector for a Uniform Linear Array (ULA) given its terminal currents produced by plane waves from known directions. The IMM can be combined with an iterative technique to determine the array DOA angles with mutual coupling effects removed.
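
    The role of the coupling matrix can be illustrated with a small sketch (the array geometry, coupling values, and variable names below are hypothetical, not taken from the thesis): once a coupling matrix C is known, for example from an IMM-like calibration step, inverting it restores the ideal steering vector before DOA estimation.

```python
import numpy as np

def ula_steering(theta_deg, n_elem, d_over_lambda=0.5):
    """Steering vector of an ideal uniform linear array (no coupling)."""
    theta = np.deg2rad(theta_deg)
    k = 2 * np.pi * d_over_lambda * np.sin(theta)
    return np.exp(1j * k * np.arange(n_elem))

# Hypothetical banded mutual-coupling matrix C: each element couples
# to its immediate neighbors.  Measured terminal currents are C @ a(theta).
n = 8
C = np.eye(n) + 0.3 * (np.eye(n, k=1) + np.eye(n, k=-1))

a_true = ula_steering(20.0, n)
i_meas = C @ a_true                      # currents with coupling present

# If C is known, solving the linear system removes the coupling and
# recovers the ideal steering vector for subsequent DOA estimation.
a_rec = np.linalg.solve(C, i_meas)
print(np.allclose(a_rec, a_true))        # True
```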

    Committee: Edward Newman (Advisor) Subjects:
  • 2. Ren, Kai Physics-Based Near-Field Microwave Imaging Algorithms for Dense Layered Media

    Doctor of Philosophy, The Ohio State University, 2017, Electrical and Computer Engineering

    It is important to understand the physics as electromagnetic (EM) waves propagate through stratified media, are scattered back from buried irregularities, and are received by an antenna in the near field. To generate better images, we need to incorporate the physics of these phenomena into the imaging algorithm: multiple reflections, refractions arising from the interfaces, and diffractions from embedded targets. A forward model is developed based on the spectral Green's function associated with layered media, weighted by the antenna gain pattern, satisfying the near-field condition and incorporating all refraction effects. The weak scattering from deeper layers and wide angles is thereby compensated in a model-based imaging algorithm through the refraction coefficients and gain pattern, respectively. To form real-time continuous images of targets embedded in a layered structure, a near-field uniform diffraction tomographic (UDT) imaging algorithm is developed. Conventional diffraction tomography (DT) improperly applies the stationary phase method for stratified environments to evaluate the innermost spectral integral. In DT the large argument is assumed to be the depth, which is not appropriate for near-field imaging; this results in amplitude discontinuities at the interfaces between adjacent layers. The correct dimensionless large argument is the product of the free-space wavenumber and the depth, as used in high-frequency asymptotic solutions. UDT therefore yields uniformly continuous images across the interfaces. Like DT, UDT retains the fast Fourier transform (FFT) relation in the algorithm for generating images very efficiently. Both 2D and 3D cases are investigated to verify the efficacy of the proposed UDT algorithm.
To overcome the singularity problem caused by nulls in the antenna gain pattern in DT and UDT, a fast back-projection (FBP) imaging algorithm is proposed to provide balanced monosta (open full item for complete abstract)
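
    The FFT relation that DT-style algorithms retain can be sketched with a free-space, far-field toy example (this is not the layered-media UDT formulation; all numbers below are invented): frequency-domain backscatter from a point target maps to a range profile through a single inverse FFT.

```python
import numpy as np

c = 3e8
freqs = np.linspace(1e9, 3e9, 256)           # 2 GHz of bandwidth
k = 2 * np.pi * freqs / c

z_target = 1.5                               # point scatterer at 1.5 m
data = np.exp(-2j * k * z_target)            # two-way phase history

profile = np.abs(np.fft.ifft(data))          # range profile via IFFT
dz = c / (2 * (freqs[-1] - freqs[0]))        # range-bin spacing
z_est = np.argmax(profile) * dz
print(abs(z_est - z_target) < dz)            # peak lands on the target
```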

    Committee: Robert Burkholder Dr. (Committee Member); Fernando Teixeira Dr. (Committee Member); Graeme Smith Dr. (Committee Member) Subjects: Electrical Engineering; Electromagnetics
  • 3. Lim, Kheng-hwee Hybrid PO and MOM for solving non-penetrable target

    Master of Science, The Ohio State University, 2008, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 4. Du, Fan Methodology for Estimation and Model Selection in High-Dimensional Regression with Endogeneity

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2023, Statistics

    Since the advent of high-dimensional data structures in areas such as the medical and biological sciences, economics, and marketing research over the past few decades, the need for statistical modeling techniques for such data has grown. In high-dimensional statistical modeling, model selection is an important aspect: its purpose is to select the most appropriate model from all possible high-dimensional statistical models, where the number of explanatory variables is larger than the sample size. In high-dimensional model selection, endogeneity is a challenging issue. Endogeneity arises when a predictor variable (X) in a regression model is correlated with the model error term (ϵ), which results in inconsistent model selection. Fan and Liao (2014) pointed out that because of endogeneity, the exogeneity assumption underlying most statistical methods, namely that a predictor variable (X) is uncorrelated with the model error term (ϵ), cannot be validated in high-dimensional model selection. To avoid the effect of endogeneity, Fan and Liao (2014) proposed the focused generalized method-of-moments (FGMM) approach for consistently selecting significant variables in high-dimensional linear models with endogeneity. We propose the FGMM approach with modifications for high-dimensional linear and nonlinear models with endogeneity to choose all of the significant variables. The theorems in Fan and Liao (2014) show that the FGMM approach consistently chooses the true model as the sample size goes to infinity in both the linear and nonlinear models. In linear models with endogeneity, we modify the penalty term to improve selection performance. In nonlinear models with endogeneity, we adjust the loss function in the FGMM approach to achieve model selection consistency, that is, selection of the true model as the sample size n goes to infinity.
This modified approach adopts inst (open full item for complete abstract)
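
    The endogeneity problem that moment-based methods address can be seen in a textbook example (this is plain instrumental-variables estimation, not Fan and Liao's FGMM; the data-generating process below is invented for illustration): when a regressor is correlated with the error, least squares is inconsistent, while a moment condition built on an instrument recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Endogenous regressor: x is correlated with the error term e, so OLS
# is inconsistent; an instrument z (correlated with x, uncorrelated
# with e) restores consistent estimation.
z = rng.normal(size=n)
e = rng.normal(size=n)
x = 0.8 * z + 0.5 * e + rng.normal(size=n)
beta_true = 2.0
y = beta_true * x + e

beta_ols = (x @ y) / (x @ x)             # biased upward by cov(x, e)
beta_iv = (z @ y) / (z @ x)              # moment condition E[z*e] = 0

print(abs(beta_ols - beta_true) > 0.2)   # OLS visibly off
print(abs(beta_iv - beta_true) < 0.1)    # IV close to the truth
```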

    Committee: Junfeng Shang Ph.D (Committee Chair); Meagan Docherty Ph.D (Other); John Chen Ph.D (Committee Member); Wei Ning Ph.D (Committee Member) Subjects: Statistics
  • 5. Forrester, Andrew Equity Returns and Economic Shocks: A Survey of Macroeconomic Factors and the Co-movement of Asset Returns

    Master of Arts, Miami University, 2017, Economics

    The financial economics literature gives significant attention to the ability of aggregate factors to explain variability in asset returns. Whereas the Capital Asset Pricing Model (CAPM) considers the excess return on the market portfolio to be the dominant source of systematic variability in asset returns, the framework of Arbitrage Pricing Theory (APT) suggests that systematic risk can be further decomposed into numerous common risk factors that underlie co-movement in asset returns. Chen, Roll, and Ross (1986) popularized empirical evaluation of macroeconomic indicators in relation to asset returns, finding that macroeconomic indicators can be useful for pricing assets and carry statistically significant risk premiums in sample. Following the intuition of the Roll (1977) critique, I consider the pricing of risk derived from unexpected shocks, or innovations, to a wider set of macroeconomic and capital market variables. I find that information contained in shocks to common risk factors is significantly priced in the cross-section of asset returns and differs from information contained in the Fama-French-Carhart factors.
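
    The idea of a priced risk factor can be sketched with a two-pass regression on simulated data (a generic Fama-MacBeth-style exercise with made-up numbers, not the estimation used in the thesis): time-series regressions recover each asset's factor loading, and a cross-sectional regression of average returns on those loadings estimates the risk premium.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 600, 25                        # months, test assets

# Simulated factor innovations and asset returns that load on them
# with a known price of risk lam_true (all numbers invented).
factor = rng.normal(size=T)
factor -= factor.mean()               # work with demeaned shocks
betas_true = rng.uniform(0.5, 1.5, size=N)
lam_true = 0.4                        # premium per unit of factor beta
returns = (betas_true[None, :] * (factor[:, None] + lam_true)
           + 0.5 * rng.normal(size=(T, N)))

# Pass 1: time-series betas.  Pass 2: regress average returns on the
# estimated betas; the slope estimates the factor risk premium.
betas_hat = np.array([np.polyfit(factor, returns[:, i], 1)[0]
                      for i in range(N)])
lam_hat = np.polyfit(betas_hat, returns.mean(axis=0), 1)[0]
print(abs(lam_hat - lam_true) < 0.1)
```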

    Committee: Thomas Boulton Ph.D. (Advisor); George Davis Ph.D. (Committee Member); Tyler Henry Ph.D. (Committee Member) Subjects: Economics; Finance
  • 6. Kent, William Plane wave scattering by thin linear dielectric-coated wires and dielectric strips: a moment method approach with physical basis functions

    Doctor of Philosophy, The Ohio State University, 1985, Graduate School

    Committee: Not Provided (Other) Subjects: Engineering
  • 7. Izadian, Jamaledin Two dimensional EM-scattering by buried penetrable non-circular cylinders using the method of moments

    Doctor of Philosophy, The Ohio State University, 1980, Graduate School

    Committee: Not Provided (Other) Subjects: Physics
  • 8. Xu, Xingbai Asymptotic Analysis for Nonlinear Spatial and Network Econometric Models

    Doctor of Philosophy, The Ohio State University, 2016, Economics

    Spatial econometrics has received more and more attention in recent years. The spatial autoregressive (SAR) model is one of the most widely used and studied models in spatial econometrics. So far, most studies have focused on linear SAR models. However, some types of spatial or network data, for example censored data or discrete-choice data, are common and useful but not suitable for study with a linear SAR model. That is why I study an SAR Tobit model and an SAR binary choice model in this dissertation. Chapter 1 studies a Tobit model with spatial autoregressive interactions. We consider the maximum likelihood estimation (MLE) for this model and analyze asymptotic properties of the estimator based on the spatial near-epoch dependence (NED) of the dependent variable process generated from the model structure. We show that the MLE is consistent and asymptotically normally distributed. Monte Carlo experiments are performed to verify finite sample properties of the estimator. Chapter 2 extends the MLE estimation of the SAR Tobit model studied in Chapter 1 to distribution-free estimation. We examine the sieve MLE of the model, where the disturbances are i.i.d. with an unknown distribution. This model can be applied to spatial econometrics and social networks when data are censored. We show that related variables are spatial NED. An important contribution of this chapter is that I develop some exponential inequalities for spatial NED random fields, which are also useful in other semiparametric studies where spatial correlation exists. With these inequalities, we establish the consistency of the estimator. Asymptotic distributions of structural parameters of the model are derived from a functional central limit theorem and projection. Simulations show that the sieve MLE can improve finite sample performance over misspecified normal MLEs, in terms of reduction in bias and standard deviation.
As an empirical application, we examine the school (open full item for complete abstract)
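
    The linear SAR structure underlying these models can be sketched as follows (a toy ring network with invented parameters, not the Tobit or binary-choice variants studied in the dissertation): each outcome depends on its neighbors' outcomes through a spatial weight matrix W, and the reduced form inverts (I - λW).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# Row-normalized weight matrix W for a hypothetical ring network:
# each unit's neighbors are the two adjacent units.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

lam, beta = 0.4, 1.5
X = rng.normal(size=n)
eps = rng.normal(size=n)

# Linear SAR reduced form: y = (I - lam*W)^{-1} (X*beta + eps).
y = np.linalg.solve(np.eye(n) - lam * W, beta * X + eps)

# Each observation depends on its neighbors' outcomes:
print(np.allclose(y, lam * W @ y + beta * X + eps))  # True
```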

    Committee: Lung-fei Lee (Advisor); Jason Blevins (Committee Member); Robert de Jong (Committee Member) Subjects: Economics
  • 9. Chalas, Jeffrey Design and Location Optimization of Electrically Small Antennas Using Modal Techniques

    Doctor of Philosophy, The Ohio State University, 2015, Electrical and Computer Engineering

    In this dissertation, the Theory of Characteristic Modes is used as a framework for the design, optimization, and benchmarking of electrically small radiating systems. The foundation of this work is the theory of Characteristic Modes (CMs), an eigenvalue equation of the Method of Moments impedance matrix [Z] that yields the fundamental radiation modes of arbitrarily shaped bodies. After an overview of small antenna theory, we derive a new method for computing the Q factor of arbitrarily shaped radiating bodies using CMs, requiring only the Method of Moments impedance matrix [Z]. Following this derivation, we present a new method for computing the fundamental limits on Q (and thus bandwidth) for arbitrarily shaped antennas. As a by-product of this method, we extract the optimal current distribution as a function of antenna shape for design guidelines. We further extend this theory to find the Q limits of arbitrarily shaped antennas and antenna-platform systems subject to specific radiation pattern requirements. In the second part of the thesis, we use the Theory of Characteristic Modes to optimize the location and excitation of single and multiple in-situ electrically small antennas (ESAs) mounted on finite, sub-wavelength platforms such as unmanned aerial vehicles (UAVs). By properly analyzing the CMs of the supporting platform, we show that a complex, multivariate optimization problem can be radically simplified using CMs. Based on this capability, we present a new, systematic design methodology for location optimization of small antennas on board finite platforms. The approach is shown to drastically reduce the time, computational cost, and complexity of a multi-element in-situ antenna design, as well as to provide significant performance improvements in comparison to typical single-antenna implementations.
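
    The characteristic-mode eigenvalue equation has the generalized form [X]J_n = λ_n[R]J_n, where [Z] = [R] + j[X] is the MoM impedance matrix. A minimal numerical sketch with a synthetic, randomly generated matrix (not one produced by an actual MoM solver):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n = 6

# Synthetic "impedance matrix" Z = R + jX: R symmetric positive
# definite (radiation part), X symmetric (reactance part).
A = rng.normal(size=(n, n))
R = A @ A.T + n * np.eye(n)
B = rng.normal(size=(n, n))
X = (B + B.T) / 2

# Characteristic modes: X J_n = lambda_n R J_n.  Modes with
# lambda_n near zero are the resonant, efficiently radiating ones.
lam, J = eigh(X, R)

# scipy normalizes the modes to be R-orthogonal: J^T R J = I.
print(np.allclose(J.T @ R @ J, np.eye(n)))        # True
print(np.allclose(X @ J, R @ J @ np.diag(lam)))   # True
```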

    Committee: John Volakis Dr. (Advisor); Kubilay Sertel Dr. (Advisor); Robert Burkholder Dr. (Committee Member) Subjects: Electrical Engineering
  • 10. O'Donnell, Andrew Sparsity and Compressed Sensing for Electromagnetic Scattering and Radiation Applications

    Doctor of Philosophy, The Ohio State University, 2014, Electrical and Computer Engineering

    Real-world electromagnetics problems often involve analysis of electrically large structures. Accurate knowledge of the radar signature of targets is needed for many applications within the defense community, including target recognition and systems analysis. Finding a complete radar signature for a large target requires a large amount of data over frequency and aspect angle to satisfy the Nyquist sampling criterion. Fortunately, the scattering from electrically large targets often comes from a small set of localized target features. This characteristic invites the use of sparsity and Compressed Sensing to reduce the amount of data needed to characterize a target. State-of-the-art models based on high-frequency asymptotic physics use scattering centers that scale as a half-power of frequency, but many scattering features on real-world targets do not follow this model. A more general model is proposed here that combines physical basis functions with a polynomial basis, resulting in a robust representation that is able to compress scattering centers with various types of frequency dependence over very wide bandwidths. This mixed basis is extended to include angular variations, which allows for simultaneous radar signature compression in frequency and angle. Additionally, this model resolves two issues that plague automated scattering center compression algorithms, namely grid mismatch and merged scattering centers caused by resolution limits. Because we have a sparse representation for the radar signature, Compressed Sensing theory can be applied to acquire the scattering center representation with minimal sampled data. It is shown that the mixed basis is able to acquire the scattering center representation through Compressed Sensing with significantly fewer samples than other scattering center models.
In addition to electromagnetic scattering applications, this dissertation investigates sparsity and Compressed Sensing for large ape (open full item for complete abstract)
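
    Recovering a few dominant "scattering centers" from undersampled data can be sketched with a generic Compressed Sensing solver (Orthogonal Matching Pursuit here; the sensing matrix and sparsity pattern are invented, and this is not the mixed physical-polynomial basis of the dissertation):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse recovery of x
    from y = A x.  A generic CS solver for illustration only."""
    resid, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual,
        # then re-fit on the enlarged support.
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(4)
m, n, k = 60, 128, 3                 # 60 measurements, 3 sparse "centers"
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[7, 50, 90]] = [2.0, -1.5, 1.0]
y = A @ x_true

print(np.allclose(omp(A, y, k), x_true, atol=1e-6))
```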

    Committee: Robert Burkholder (Advisor); Joel Johnson (Committee Member); Fernando Teixeira (Committee Member) Subjects: Electrical Engineering
  • 11. Williams, Leslie POST-EMPLACEMENT LEACHING BEHAVIORS OF NANO ZERO VALENT IRON MODIFIED WITH CARBOXYMETHYLCELLULOSE UNDER SIMULATED AQUIFER CONDITIONS

    Master of Science (MS), Wright State University, 2013, Earth and Environmental Sciences

    Nanoscale particles of zero valent iron stabilized with carboxymethylcellulose (CMC-nZVI) have been shown to degrade chlorinated hydrocarbons efficiently in bench-scale investigations. The remediation of contaminated sites by subsurface injection of stabilized nZVI in slurry form has been investigated at pilot and field scale, with conflicting results concerning the transport of stabilized nZVI and its long-term stability for in situ degradation of chlorinated hydrocarbons. Changes in hydraulic conductivity in porous media have also been reported following injection of stabilized nZVI slurry in both large tank experiments and field studies. This study investigated the post-emplacement leaching behavior of CMC-nZVI at a variety of CMC concentrations (1 g/L, 2 g/L, 4 g/L, and 8 g/L) in a sand-filled flow-through reactor (30 cm long x 5 cm diameter). The goal was to identify changes in total iron mass eluted and changes to the hydraulic properties of the column post-emplacement of CMC-nZVI for different CMC concentrations. Experiments were also conducted to determine the amount of unreacted CMC-nZVI that would elute from the column post-emplacement. Prior to injection of the CMC-nZVI, tracer breakthrough studies were analyzed using the Method of Moments to determine velocity, yielding calculation-based estimates of pore volume (PV) and porosity. CXTFIT, a parameter estimation program based on the physical non-equilibrium convection-dispersion equation, was used to determine initial conditions within the column for longitudinal dispersivity, mobile porosity, and the rate of mass transfer from mobile to immobile zones. The CMC-nZVI slurry was emplaced by rapid injection into the sand column through the base at a rate of ~120 mL/min and then flushed with a 10 mM NaCl solution at a velocity of 1 m/d (0.5 mL/min).
After CMC-nZVI emplacement, the effluent samples exiting the flow-through reactor were collected with a fraction collector for 15 min sampling (open full item for complete abstract)
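
    The Method of Moments step for the tracer test can be sketched as follows (a synthetic Gaussian breakthrough curve with made-up timing; only the 30 cm column length comes from the abstract): the first normalized temporal moment of the breakthrough curve gives the mean travel time, and dividing column length by it gives velocity.

```python
import numpy as np

# Synthetic Gaussian breakthrough curve (BTC); numbers are invented.
t = np.linspace(0.0, 10.0, 1001)                # time, hours
t_mean_true, sigma = 4.0, 0.6
c = np.exp(-((t - t_mean_true) ** 2) / (2 * sigma**2))

dt = t[1] - t[0]
m0 = np.sum(c) * dt                             # zeroth temporal moment
m1 = np.sum(t * c) * dt                         # first temporal moment
t_mean = m1 / m0                                # mean arrival time

column_length = 0.30                            # 30 cm column, in meters
velocity = column_length / t_mean               # meters per hour
print(abs(t_mean - t_mean_true) < 1e-3)         # True
print(abs(velocity - 0.075) < 1e-4)             # 0.30 m / 4 h
```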

    Committee: Abinash Agrawal Ph.D. (Advisor); Mark Goltz Ph.D. (Advisor); Robert Ritzi Ph.D. (Committee Member) Subjects: Environmental Geology; Environmental Science; Geochemistry; Hydrologic Sciences; Hydrology; Nanoscience; Nanotechnology
  • 12. Chae, Chun Sik Studies of the Interferometric Phase and Doppler Spectra of Sea Surface Backscattering Using Numerically Simulated Low Grazing Angle Backscatter Data

    Doctor of Philosophy, The Ohio State University, 2012, Electrical and Computer Engineering

    Range-resolved interferometric phase and Doppler spectra are two subjects of interest with regard to the retrieval of sea surface height profiles from coherent marine radar measurements. The studies of this dissertation attempt to improve understanding of the properties and associated measurement errors of these quantities through the use of numerically simulated low-grazing-angle backscatter data. In the first part of the dissertation, studies of the interferometric phase are described. Backscattered fields computed using the method of moments for one dimensional ocean-like surface profiles are used to examine statistical properties of the single-look interferometric phase estimator, in order to investigate the applicability of standard expectations for height retrieval accuracy in this problem. The results show that shadowing and multipath propagation effects cause errors in interferometric phase estimation beyond those caused by speckle effects alone. In addition, the decorrelation between the fields received at two antennas is found to be impacted by shadowing and multipath propagation effects, making standard models for this quantity less applicable as well. These results show that modeling the expected performance of interferometric sea surface height retrieval approaches at low grazing angles is difficult. The second part of the dissertation involves studies of the range-resolved Doppler spectra at low-grazing-angles. Backscattered fields are computed for a single realization of a one-dimensional ocean-like surface profile as the realization evolves in time. Transformation into the range-Doppler domain enables examination of properties of the resulting Doppler spectra (for both HH and VV polarizations) and their relationship to properties of the surface profile. 
In general, a strong correspondence between the long wave orbital velocity of the surface and the Doppler centroid frequency is observed for visible portions of the surface, as well as some evidence (open full item for complete abstract)
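
    The correspondence between orbital velocity and Doppler centroid can be sketched for a single range bin (toy radar parameters, one ideal scatterer; nothing here reproduces the MoM surface simulations): a scatterer with radial velocity v produces a Doppler peak at f_d = 2v/λ.

```python
import numpy as np

wavelength = 0.03                 # roughly X-band, 10 GHz
v = 1.5                           # m/s radial velocity toward the radar
prf = 1000.0                      # pulse repetition frequency, Hz
t = np.arange(512) / prf          # slow time for one range bin

# Complex slow-time signal for the range bin, then its Doppler spectrum.
signal = np.exp(2j * np.pi * (2 * v / wavelength) * t)
spec = np.fft.fftshift(np.fft.fft(signal))
f = np.fft.fftshift(np.fft.fftfreq(t.size, d=1 / prf))

f_peak = f[np.argmax(np.abs(spec))]
print(abs(f_peak - 2 * v / wavelength) < prf / t.size)  # peak near 100 Hz
```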

    Committee: Joel Johnson (Advisor); Robert Burkholder (Committee Member); Fernando Teixeira (Committee Member) Subjects: Electrical Engineering
  • 13. Liu, Xiaodong Econometrics on interactions-based models: methods and applications

    Doctor of Philosophy, The Ohio State University, 2007, Economics

    My dissertation research emphasizes estimation methods in evaluating the extent of social, strategic and spatial interactions among economic agents. My first essay, based on my joint research with Lung-fei Lee and John Kagel, generalizes Heckman's (1981) dynamic discrete-choice panel data models by introducing time-lagged social interactions and proposes simulation based methods to implement the maximum likelihood estimation. We use this generalized model to investigate learning from peers in experimental signaling games. We find that subjects' decisions are significantly influenced by the past decisions of their peers in the experiment. Hence the imitation of peers' strategies is an important component of the learning process of strategic play. My second essay explores the robustness of Guerre, Perrigne and Vuong's (2000) two-step nonparametric estimation procedure in first-price sealed-bid auctions with a large number of risk-averse bidders. With an asymptotic approximation of the intractable equilibrium bidding function of risk-averse bidders, I demonstrate that Guerre et al.'s two-step nonparametric estimator based on the equilibrium bidding behavior of risk-neutral bidders is still uniformly consistent even if bidders are risk-averse as long as the number of players in an auction is sufficiently large and derive the uniform convergence rate of the estimator. Furthermore, I show in Monte Carlo experiments that the two-step nonparametric estimator performs reasonably well with a moderate number of risk-averse bidders like six. In my third essay, which is based on my joint research with Lung-fei Lee and Christopher Bollinger, we consider the GMM estimation of the regression model with spatial autoregressive disturbances and the mixed-regressive spatial autoregressive model. We derive the best GMM estimator within the class of GMM estimators that are based on linear and quadratic moment conditions. 
Our best GMM estimator has the merit of computational simplicity an (open full item for complete abstract)

    Committee: Lung-fei Lee (Advisor) Subjects: Economics, General
  • 14. Seo, Seung Mo A fast IE-FFT algorithm for solving electromagnetic radiation and scattering problems

    Doctor of Philosophy, The Ohio State University, 2006, Electrical Engineering

    In this dissertation a fast integral equation method, termed IE-FFT, is developed, analyzed, and applied to solve electromagnetic (EM) scattering problems. The methodology is developed for the Method of Moments (MoM) solution of the Electric Field Integral Equation (EFIE) on electrically large Perfect Electric Conducting (PEC) structures. Similar to other Fast Fourier Transform (FFT) based algorithms, IE-FFT uses a Cartesian grid to drastically decrease memory storage and speed up the matrix-vector multiplication when used with iterative solvers. The IE-FFT algorithm employs two discretizations: one for the unknown current on an unstructured triangular mesh, and the other on a uniform Cartesian grid for interpolating the Green's function. The uniform interpolation of the Green's function allows the fast computation of well-separated MoM interaction terms with the aid of a global FFT, although the coupling between near-interaction terms must be adequately corrected. The major contribution of this dissertation lies in the Lagrangian interpolation of the Green's function. This not only allows simple and efficient algorithmic implementation, but also naturally suggests a rigorous error analysis and tight error bounds for the algorithm. The efficiency of the method is based on the Toeplitz structure of the interpolated Green's function. Therefore, it is applicable to both asymptotically smooth and oscillatory kernels arising in static and wave propagation problems, respectively. Through electromagnetic wave scattering from a PEC sphere, the complexity of the IE-FFT algorithm is found to scale as O(N^1.5) for memory and O(N^1.5 log N) for CPU time. More accurate integration of weakly singular integrals for the EFIE formulation is also proposed, which greatly improves the robustness and accuracy of the EFIE solution.
Furthermore, the IE-FFT algorithm can be directly applied to Finite Element Method-Boundary Element Method (FEM-BEM) for (open full item for complete abstract)
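
    The Toeplitz property that gives IE-FFT its speed can be illustrated directly (a generic 1-D sketch with a random real Toeplitz matrix, not the interpolated Green's function itself): embedding a Toeplitz matrix in a circulant of twice the size turns the matrix-vector product into a pair of FFTs.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """O(N log N) product of a Toeplitz matrix (first column c, first
    row r, with c[0] == r[0]) and a vector x, via circulant embedding."""
    n = len(x)
    # First column of the 2n-point circulant that embeds the Toeplitz matrix.
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, 2 * n))
    return y[:n].real              # inputs are real in this sketch

rng = np.random.default_rng(5)
n = 64
c = rng.normal(size=n)                                # first column
r = np.concatenate([[c[0]], rng.normal(size=n - 1)])  # first row

T = toeplitz(c, r)                # dense reference matrix
x = rng.normal(size=n)
print(np.allclose(toeplitz_matvec(c, r, x), T @ x))   # True
```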

    Committee: Jin-Fa Lee (Advisor) Subjects:
  • 15. Munasib, Abdul Lifecycle of social networks: A dynamic analysis of social capital accumulation

    Doctor of Philosophy, The Ohio State University, 2005, Economics

    This study investigates the lifecycle of social capital formation at the individual level. A dynamic model is developed that analyzes individuals' decision making about social capital accumulation, incorporating characteristics specific to social capital. The structural parameters of the model are estimated to address a variety of social capital issues.

    Theoretical model. The notion that people build up a network of friends (a stock of social capital) by spending time interacting with others (investment in social capital) is conducive to a neoclassical treatment. The model proposes a two-part return specification in which, as distinct from the usual lagged return from stocks, social capital has an instantaneous return in the form of a direct utility accrued from the investment activity itself. The model allows both the opportunity cost of time and depreciation rates to vary over the lifecycle. When parameterized, the model can generate a variety of time paths of interest and allows for comparative dynamic exercises by perturbing parameter values.

    Econometric model. The structural parameters of the model are estimated using the method of simulated moments, where matching is done with a GMM-type minimum distance estimation procedure. The data set used is from the General Social Survey (1972-2002). Chi-square statistics are calculated to test various restrictions and determine whether the parameter estimates differ among groups.

    Results and findings. This study finds that social capital does depreciate and that this depreciation rate varies over the lifecycle. The stylized fact of existing research that the time path of the stock of social capital has an inverted U-shape is not supported. Net benefits are higher for people with more education, which leads them to invest more in social capital despite higher opportunity costs of investment. This resolves a paradox that previous research could not explain.
When comparative inves (open full item for complete abstract)
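
    The estimation strategy (simulate the model, then minimize a GMM-type distance between simulated and observed moments) can be sketched in stripped-down form with one parameter and one moment; the law of motion, investment path, and all numbers below are invented, not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_mean_stock(delta, T=200):
    """Mean simulated stock under s_{t+1} = (1 - delta) s_t + i_t
    with a fixed unit investment path (an assumed toy law of motion)."""
    invest = np.ones(T)
    s = np.zeros(T + 1)
    for t in range(T):
        s[t + 1] = (1 - delta) * s[t] + invest[t]
    return s[1:].mean()

delta_true = 0.15
target = simulate_mean_stock(delta_true)     # stands in for a data moment

# GMM-type minimum distance: squared gap between the "observed" moment
# and the moment simulated at a candidate depreciation rate.
res = minimize_scalar(lambda d: (simulate_mean_stock(d) - target) ** 2,
                      bounds=(0.01, 0.9), method="bounded")
print(abs(res.x - delta_true) < 1e-3)        # recovers the true rate
```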

    Committee: Donald Haurin (Advisor) Subjects:
  • 16. Zeineddin, Rafik Numerical electromagnetics codes: Problems, solutions and applications

    Master of Science (MS), Ohio University, 1993, Electrical Engineering & Computer Science (Engineering and Technology)


    Committee: Roger Radcliff (Advisor) Subjects: