Search Results (1 - 25 of 40 Results)


Stoffel, Joshua David. Lagrange-Chebyshev Based Single Step Methods for Solving Differential Equations
Master of Science, University of Akron, 2012, Applied Mathematics
Many numerical methods result from replacing a function by its interpolating polynomial; quadrature formulas are one such method. In this research a special class of quadrature formulas is used that incorporates equally spaced points and zeros of Chebyshev polynomials simultaneously. Some properties of these quadrature formulas are investigated, and they are used to develop single step methods for solving ordinary differential equations. Examples are presented to compare the approximate solutions with exact solutions.
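To illustrate the general idea behind such quadrature formulas (a minimal sketch, not the thesis's mixed equally-spaced/Chebyshev construction): an interpolatory rule on any node set is obtained by integrating the Lagrange basis polynomials, here at the zeros of a Chebyshev polynomial on [-1, 1].

```python
import math

def lagrange_coeffs(nodes, k):
    # Coefficients (lowest degree first) of the k-th Lagrange basis polynomial
    coeffs = [1.0]
    for j, xj in enumerate(nodes):
        if j == k:
            continue
        denom = nodes[k] - xj          # multiply current poly by (x - xj)/(xk - xj)
        new = [0.0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += -xj * c / denom
            new[i + 1] += c / denom
        coeffs = new
    return coeffs

def quadrature_weights(nodes):
    # w_k = integral over [-1, 1] of the k-th Lagrange basis polynomial
    ws = []
    for k in range(len(nodes)):
        c = lagrange_coeffs(nodes, k)
        ws.append(sum(ci * (1 - (-1) ** (i + 1)) / (i + 1) for i, ci in enumerate(c)))
    return ws

n = 4
cheb = [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]  # Chebyshev zeros
w = quadrature_weights(cheb)
# The resulting rule is exact for polynomials of degree <= n - 1
approx = sum(wk * xk ** 2 for wk, xk in zip(w, cheb))   # approximates the integral of x^2
```

With four nodes the rule integrates x² over [-1, 1] exactly (the value 2/3), since interpolatory quadrature on n nodes is exact up to degree n - 1.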

Committee:

Ali Hajjafar, Dr. (Advisor); John Heminger, Dr. (Other)

Subjects:

Applied Mathematics

Keywords:

quadrature formula; Lagrange polynomial; Chebyshev polynomial; differential equation

Barnes, Caleb J. An Implicit High-Order Spectral Difference Method for the Compressible Navier-Stokes Equations Using Adaptive Polynomial Refinement
Master of Science in Engineering (MSEgr), Wright State University, 2011, Mechanical Engineering
A high/variable-order numerical simulation procedure for gas dynamics problems was developed to model physical phenomena with steep gradients. Higher-order resolution was achieved using an orthogonal polynomial Gauss-Lobatto grid, adaptive polynomial refinement, and artificial diffusion activated by a pressure switch. The method is designed to be computationally stable, accurate, and capable of resolving discontinuities and steep gradients without the use of one-sided reconstructions or reducing to low order. Solutions to several benchmark gas dynamics problems were produced, including a shock tube and a shock-entropy wave interaction. The scheme's 1st-order solution was validated against a 1st-order Roe scheme solution. Higher-order solutions were shown to approach reference values for each problem. Uniform polynomial refinement was shown to be capable of producing increasingly accurate solutions on a very coarse mesh. Adaptive polynomial refinement was employed to selectively refine the solution near steep gradient structures, and its results were nearly identical to those produced by uniform polynomial refinement. Future work will focus on improvements to the diffusion term, complete extension to the full compressible Navier-Stokes equations, and multi-dimensional formulations.

Committee:

George Huang, PhD (Advisor); George Huang, PhD (Committee Member); Joseph Shang, PhD (Committee Member); Miguel Visbal, PhD (Committee Member)

Subjects:

Fluid Dynamics; Mechanical Engineering

Keywords:

high-order; spectral difference; CFD; computational fluid dynamics; Euler equations; Euler; gas dynamics; polynomial refinement; adaptive polynomial refinement; artificial dissipation

Ryu, Jong Hoon. Permutation polynomial based interleavers for turbo codes over integer rings: theory and applications
Doctor of Philosophy, The Ohio State University, 2007, Electrical Engineering
Turbo codes are a class of high performance error correcting codes (ECC), and the interleaver is a critical component for the channel coding performance of turbo codes. Algebraic constructions of interleavers are of particular interest because they admit analytical designs and simple, practical hardware implementation. Sun and Takeshita have shown that the class of quadratic permutation polynomials over integer rings provides excellent performance for turbo codes. Recently, quadratic permutation polynomial (QPP) based interleavers have been proposed for the 3rd Generation Partnership Project Long Term Evolution (3GPP LTE) draft for their excellent error performance, simple implementation, and algebraic properties which admit parallel processing and regularity. In some applications, such as deep space communications, a simple implementation of the deinterleaver is also important. In this dissertation, a necessary and sufficient condition is proven for the existence of a quadratic inverse polynomial (deinterleaver) for a quadratic permutation polynomial over an integer ring. Further, a simple construction is given for the quadratic inverse. We also consider the inverses of QPPs which do not admit quadratic inverses. It is shown that most 3GPP LTE interleavers admit quadratic inverses. However, even when the 3GPP LTE interleavers do not admit quadratic inverses, the degrees of the inverse polynomials are less than or equal to 4, which allows a simple implementation of the deinterleavers; an explanation for this observation is presented. The minimum distance and its multiplicity (or the first few terms of the weight distribution) of error correcting codes are used to estimate the error performance at high signal-to-noise ratio (SNR). We consider efficient algorithms that find an upper bound (UB) on the minimum distance of turbo codes designed with QPP interleavers.
Permutation polynomials have been extensively studied, but simple coefficient tests for permutation polynomials over integer rings are only known for limited cases. A simple necessary and sufficient coefficient test is proven for cubic permutation polynomials over integer rings. A possible application is in the design of interleavers for turbo codes.
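A QPP interleaver is easy to sketch concretely. The pair (f1, f2) = (3, 10) for block size N = 40 is taken here as an example of an LTE-style parameter pair; the brute-force inverse search below is an illustrative stand-in for the dissertation's analytical construction of the quadratic inverse.

```python
def qpp_interleaver(N, f1, f2):
    # Quadratic permutation polynomial f(x) = (f1*x + f2*x^2) mod N
    return [(f1 * x + f2 * x * x) % N for x in range(N)]

def inverse_qpp(N, f1, f2):
    # Naive search for a quadratic inverse g(y) = (g1*y + g2*y^2) mod N
    # with g(f(x)) = x for all x; returns None if no quadratic inverse exists.
    pi = qpp_interleaver(N, f1, f2)
    for g1 in range(N):
        for g2 in range(N):
            if all((g1 * y + g2 * y * y) % N == x for x, y in enumerate(pi)):
                return g1, g2
    return None

pi = qpp_interleaver(40, 3, 10)
assert sorted(pi) == list(range(40))   # f is a permutation of Z_40
g = inverse_qpp(40, 3, 10)             # quadratic deinterleaver, when one exists
```

Checking the permutation property directly like this is how one confirms that a candidate (f1, f2) pair is admissible for a given block size.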

Committee:

Hesham El Gamal (Advisor)

Keywords:

Turbo codes; interleaver; algebraic; permutation polynomial; quadratic permutation polynomial

Bandreddy, Neel Kamal. Estimation of Unmeasured Radon Concentrations in Ohio Using Quantile Regression Forest
Master of Science, University of Toledo, 2014, College of Engineering
The most stable isotope of radon is radon-222, which is a decay product of radium-226 and an indirect decay product of uranium-238, a natural radioactive element. According to the United States Environmental Protection Agency (USEPA), radon is the primary cause of lung cancer among non-smokers. The USEPA classifies Ohio as a zone 1 state because the average radon screening level is more than 4 picocuries per liter. To perform preventive measures, knowing radon concentration levels in all the zip codes of a geographic area is necessary. However, it is impractical to collect measurements from every zip code, because many locations are inaccessible. Several interpolation techniques have been implemented by researchers to predict radon concentrations in places where radon data is not available. Hence, to improve the prediction accuracy of radon concentrations, a new technique called Quantile Regression Forest (QRF) is proposed in this thesis. Conventional techniques such as Kriging, Local Polynomial Interpolation (LPI), Global Polynomial Interpolation (GPI), and Radial Basis Function (RBF) estimate the output using complex mathematics. Artificial Neural Networks (ANN) have been introduced to overcome this problem. Although ANNs show better prediction accuracy than the more conventional techniques, many issues arise, including local minima and overfitting. To overcome the inadequacies of existing methods, statistical learning techniques such as Support Vector Regression (SVR) and Random Forest Regression (RFR) were implemented. In this thesis, Quantile Regression Forest (QRF) is introduced and compared with SVR, RFR, and other interpolation techniques using available operational performance measures. The study shows that QRF has the least validation error among the compared interpolation techniques.

Committee:

Vijay Devabhaktuni (Committee Chair); Ashok Kumar (Committee Member); Mansoor Alam (Committee Member)

Subjects:

Applied Mathematics; Electrical Engineering; Mathematics

Keywords:

Radon; Kriging; Local Polynomial Interpolation; Global Polynomial Interpolation; Radial Basis Function; Artificial Neural Networks; Random Forest Regression; Quantile Regression Forest; operational performance measures

Sosov, Yuriy. Legendre Polynomial Expansion of the Electron Boltzmann Equation Applied to the Discharge in Argon
Doctor of Philosophy, University of Toledo, 2006, Physics

The main effort of the present dissertation is to establish a framework for constructing the numerical solution of the system of partial differential equations for the coefficients in the N-term expansion of the solution of the Boltzmann equation in Legendre polynomials, also known as the PN approximation of the Boltzmann equation. The key feature of the discussed solution is the presence of multiple waves moving in opposite directions in both the velocity and spatial domains, which requires transformation of the expansion coefficients to characteristic variables and a directional treatment (up/down winding) of their velocity and spatial derivatives. After the presence of oppositely directed waves in the general solution is recognized, the boundary conditions at the origin of velocity space are formulated in terms of the arriving and reflected waves, and the meaning of the characteristic variables is determined; the construction then proceeds employing the standard technique of operator splitting. Special effort is made to ensure numerically exact particle conservation in the treatment of the advection and scattering processes.
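The PN truncation amounts to projecting a function of the angle variable onto the first N Legendre polynomials. A minimal numerical sketch (the test function μ³ and quadrature settings are illustrative, not from the dissertation): coefficients c_l = (2l+1)/2 ∫ f(μ) P_l(μ) dμ are computed with a composite Simpson rule and the Bonnet recurrence.

```python
import math

def legendre(l, x):
    # Bonnet recurrence: (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def expansion_coeff(f, l, m=2000):
    # c_l = (2l+1)/2 * integral over [-1, 1] of f(mu) P_l(mu), composite Simpson rule
    h = 2.0 / m
    s = f(-1.0) * legendre(l, -1.0) + f(1.0) * legendre(l, 1.0)
    for i in range(1, m):
        x = -1.0 + i * h
        s += (4 if i % 2 else 2) * f(x) * legendre(l, x)
    return (2 * l + 1) / 2 * s * h / 3

f = lambda mu: mu ** 3                             # illustrative angular distribution
c = [expansion_coeff(f, l) for l in range(4)]      # 4-term (P3) truncation
```

For this choice the exact expansion is μ³ = (3/5) P₁(μ) + (2/5) P₃(μ), so c ≈ [0, 0.6, 0, 0.4], which the quadrature reproduces to high accuracy.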

The constructed numerical routine has been successfully coupled with a solver for the Poisson equation in a self-consistent model of plasma discharge in argon for a two parallel-plate bare electrode geometry. The results of this numerical experiment were presented at the workshop on "Nonlocal, Collisionless Electron Transport in Plasmas" held at Plasma Physics Laboratory of Princeton University on August 2-4, 2005.

Committee:

Constantine Theodosiou (Advisor)

Subjects:

Physics, Fluid and Plasma

Keywords:

Boltzmann equation; electron Boltzmann equation; electron distribution function; PN approximation; Legendre polynomial expansion of Boltzmann equation; Legendre polynomial expansion of distribution function

Gupta, Ananya. Decentralized Key Generation Scheme for Cellular-Based Heterogeneous Wireless Ad Hoc Networks
MS, University of Cincinnati, 2006, Engineering : Computer Engineering
A majority of group communication applications in cellular-based heterogeneous wireless setups entail secure data exchange. The problem can be effectively tackled if the underlying cellular infrastructure is used to provide an authentication backbone for the security associations. We propose a novel distributed ID-based key exchange mechanism using shared polynomials in which the shares are generated by the communicating groups. Our idea employs a mechanism where the Base Stations (BSs) carry out initial key generation with a polynomial in a distributed manner and then pass the key material on to the Mobile Stations (MSs). The multi-interface MSs can then securely communicate over interfaces other than cellular. The scheme incorporates symmetric polynomials, which are chosen by the BSs acting as polynomial distributors. Simulations measuring performance have shown encouraging results.
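The symmetric-polynomial mechanism described here follows the classic Blundo-style key predistribution idea: a distributor picks a symmetric bivariate polynomial f(x, y) mod p, gives node u the univariate share g_u(y) = f(u, y), and any two nodes then derive the same pairwise key since f(u, v) = f(v, u). A minimal sketch (the prime, degree, and node IDs are illustrative):

```python
import random

def make_share(coeffs, node_id, p):
    # coeffs[i][j] = a_ij with a_ij = a_ji; share for node u is the coefficient
    # list of g_u(y) = f(u, y) = sum_j (sum_i a_ij * u^i) * y^j  (mod p)
    t = len(coeffs)
    return [sum(coeffs[i][j] * pow(node_id, i, p) for i in range(t)) % p
            for j in range(t)]

def pairwise_key(share, other_id, p):
    # Evaluate the node's share polynomial at the peer's ID: f(u, v) mod p
    return sum(c * pow(other_id, j, p) for j, c in enumerate(share)) % p

p = 2 ** 31 - 1                       # illustrative prime modulus
t = 3                                 # illustrative polynomial degree bound
rnd = random.Random(7)
a = [[0] * t for _ in range(t)]
for i in range(t):
    for j in range(i, t):
        a[i][j] = a[j][i] = rnd.randrange(p)   # enforce symmetry a_ij = a_ji

share_u = make_share(a, 17, p)        # distributed to node 17
share_v = make_share(a, 42, p)        # distributed to node 42
# Both nodes compute f(17, 42) = f(42, 17) without further interaction
```

The symmetry of the coefficient matrix is exactly what makes the two independently computed keys agree.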

Committee:

Dr. Dharma Agrawal (Advisor)

Subjects:

Computer Science

Keywords:

Ad hoc networks; Base Station; Cellular networks; Distributed algorithm; Heterogeneous networks; Multi-interface Mobile Station; Pairwise key; Polynomial; Symmetric key

Werner, Nicholas J. Integer-Valued Polynomials over Quaternion Rings
Doctor of Philosophy, The Ohio State University, 2010, Mathematics
When D is an integral domain with field of fractions K, the ring Int(D) of integer-valued polynomials over D is defined to be the set of all polynomials f(x) in K[x] such that f(a) is in D for all a in D. The goal of this dissertation is to extend the integer-valued polynomial construction to certain noncommutative rings. Specifically, for any ring R, we define the R-algebra RQ to be the set of elements of the form a + bi + cj + dk, where i, j, and k are the standard quaternion units satisfying the relations i² = j² = -1 and ij = k = -ji. When this is done with the integers ℤ, we obtain a noncommutative ring ℤQ; when this is done with the rational numbers ℚ, we get a division ring ℚQ. Our main focus is on the construction and study of Int(ℤQ), the set of integer-valued polynomials over ℤQ. We also consider Int(R), where R is an overring of ℤQ in ℚQ. In this treatise, we prove that for such an R, Int(R) has a ring structure, and we investigate elements, generating sets, and prime ideals of Int(R). The final chapter examines the idea of integer-valued polynomials on subsets of ℤQ.
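A minimal commutative example of the defining property (with ℤ in place of ℤQ): the binomial-coefficient polynomial x(x-1)/2 lies in ℚ[x] but not ℤ[x], yet it sends every integer to an integer, so it belongs to Int(ℤ).

```python
from fractions import Fraction

def f(x):
    # x(x-1)/2: rational coefficients (1/2, -1/2), but integer-valued on Z,
    # since one of x, x-1 is always even
    return Fraction(x * (x - 1), 2)

# Every integer input yields an integer output (denominator 1)
integer_valued = all(f(n).denominator == 1 for n in range(-10, 11))
```

The dissertation's point is that setting up the analogous ring Int(ℤQ) over quaternions is delicate, because noncommutativity makes even closure under multiplication nontrivial.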

Committee:

K. Alan Loper, PhD (Advisor); S. Tariq Rizvi, PhD (Committee Member); Daniel Shapiro, PhD (Committee Member)

Subjects:

Mathematics

Keywords:

integer-valued polynomial; quaternion; Hurwitz quaternion; Hurwitz integer

Boopathy, Komahan. Uncertainty Quantification and Optimization Under Uncertainty Using Surrogate Models
Master of Science (M.S.), University of Dayton, 2014, Aerospace Engineering

Surrogate models are widely used as approximations to exact functions that are computationally expensive to evaluate. The choice of model training information and the estimation of the accuracy of surrogate models are major research avenues. In this work, a unified dynamic framework for surrogate model training point selection and error estimation is proposed. Building auxiliary local surrogate models over sub-domains of the global surrogate model forms the basis of the framework. A discrepancy function, defined as the absolute difference between response predictions from global and local surrogate models for randomly chosen test candidates, drives the framework.

The framework preferentially evaluates the expensive exact function at locations where the value of the discrepancy function is high and a distance constraint to the existing training points is satisfied. As a result, the surrogate model is continually refined in regions of higher prediction uncertainty, and a better spread of training points is also achieved. Unlike most training point selection approaches, the framework addresses surrogate training in two disparate contexts: training in the presence and in the absence of derivative information. The local surrogate models use the derivative information when available, affect the framework via the discrepancy function, and help determine the locations that require derivative information. The benefits of the dynamic training approach are demonstrated with analytical test functions and the construction of a two-dimensional aerodynamic database. The results show that the proposed method improves the convergence monotonicity and produces more accurate surrogate models when compared to random and quasi-random training point selection strategies.
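The selection loop can be sketched in one dimension. Everything here is an illustrative stand-in, not the thesis's setup: the "global" surrogate is piecewise-linear, the "local" surrogate is a quadratic through the three nearest points, and the truth model, distance threshold, and iteration counts are arbitrary. The mechanism is the same: evaluate the expensive function only where the global/local discrepancy is largest, subject to a minimum distance from existing training points.

```python
import math, random

def exact(x):
    # Stand-in for the expensive exact function
    return math.sin(3 * x)

def global_pred(pts, x):
    # Global surrogate: piecewise-linear interpolant through all training points
    pts = sorted(pts)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return pts[0][1] if x < pts[0][0] else pts[-1][1]

def local_pred(pts, x):
    # Local surrogate: quadratic Lagrange fit through the three nearest points
    near = sorted(pts, key=lambda p: abs(p[0] - x))[:3]
    y = 0.0
    for i, (xi, yi) in enumerate(near):
        L = 1.0
        for j, (xj, _) in enumerate(near):
            if i != j:
                L *= (x - xj) / (xi - xj)
        y += yi * L
    return y

rng = random.Random(0)
train = [(x, exact(x)) for x in (0.0, 0.5, 1.0)]     # initial design
for _ in range(12):                                  # dynamic refinement loop
    cands = [rng.uniform(0, 1) for _ in range(50)]
    # distance constraint to previously existing training points
    cands = [c for c in cands if min(abs(c - p[0]) for p in train) > 0.02]
    # discrepancy = |global - local| prediction difference at each candidate
    best = max(cands, key=lambda c: abs(global_pred(train, c) - local_pred(train, c)))
    train.append((best, exact(best)))                # exact call only at the winner
```

Points accumulate where the two surrogates disagree, i.e. where the function is most curved, so accuracy improves without sampling the expensive function everywhere.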

The newly introduced discrepancy function is proposed as an approximation to the actual error in the prediction of the surrogate model, leading to the quantities root mean square discrepancy (RMSD) and maximum absolute discrepancy (MAD). The results demonstrate a close agreement of RMSD and MAD with the actual root mean square error (RMSE) and maximum absolute error (MAE), respectively. Therefore, RMSD and MAD are proposed as measures of the accuracy of surrogate models in applications of practical interest. This surrogate validation comes without requiring any additional exact function evaluations, which makes the framework computationally viable.

A multivariate interpolation and regression model is employed to build the local surrogates, whereas kriging and polynomial chaos expansions serve as global surrogate models. This demonstrates that the proposed framework is applicable to any surrogate model, with an open choice of training data selection.

Finally, the dynamically trained surrogate models are applied to uncertainty quantification and optimization under mixed epistemic and aleatory uncertainties (OUU) for structural and aerodynamic test cases. In the OUUs, epistemic uncertainties are propagated via box-constrained optimizations, whereas the aleatory uncertainties are propagated via inexpensive sampling of the surrogate models. The structural test cases include designing a three-bar truss and a cantilever beam, whereas the aerodynamic test case involves the robust optimization (lift-constrained drag minimization) of an airfoil under steady flow conditions.

Committee:

Markus Rumpfkeil, Ph.D (Committee Chair); Raymond Kolonay, Ph.D (Committee Member); Aaron Altman, Ph.D (Committee Member)

Subjects:

Aerospace Engineering; Civil Engineering; Mathematics; Mechanical Engineering; Physics

Keywords:

Uncertainty Quantification; Robust Design Optimization; Surrogate Models; Response Surfaces; Design of Experiments; Validation; Error Estimation; Training; Sampling; Kriging; Polynomial Chaos; Regression; Interpolation; Aerodynamic Database

Molina Aristizabal, Sergio D. Semi-Regular Sequences over F2
PhD, University of Cincinnati, 2015, Arts and Sciences: Mathematical Sciences
The concept of semi-regular sequences was introduced in order to assess the complexity of Gröbner basis algorithms such as F4 for the solution of polynomial equations. Despite the experimental evidence that semi-regular sequences are common, it was unknown whether there existed semi-regular sequences for all n, except in extremely trivial situations. In the present work I prove some results on the existence and non-existence of semi-regular sequences. It was observed by J. Schlather and T. Hodges that if an element of degree d in n variables is semi-regular, then we must have n ≤ 3d. In this thesis, I establish precisely when the elementary symmetric polynomial of degree d is semi-regular. In particular, when d = 2^t and n = 3d, the elementary symmetric polynomial of degree d is semi-regular, establishing that the bound given by J. Schlather and T. Hodges is sharp for infinitely many n. For the general case of existence of semi-regular sequences, Bardet, Faugère and Salvy conjecture that the proportion π(n, m, d1, ..., dm) of semi-regular sequences over F2 in the set E(n, m, d1, ..., dm) of algebraic systems of m equations of degrees d1, ..., dm in n variables tends to 1 as n tends to infinity. In this work, I show that for a fixed choice of (m, d1, ..., dm), lim_{n→∞} π(n, m, d1, ..., dm) = 0, showing that the conjecture is false in this case.

Committee:

Timothy Hodges, Ph.D. (Committee Chair); Donald French, Ph.D. (Committee Member); Tara Smith, Ph.D. (Committee Member)

Subjects:

Mathematics

Keywords:

Abstract Algebra; Semi-Regular Sequences; Symmetric Polynomials; Cryptography; Regular Sequences; Systems of polynomial equations

Prakash, Abhinav. Anonymous and Secure Communication in a Wireless Mesh Network
MS, University of Cincinnati, 2012, Engineering and Applied Science: Computer Science

With the rapid advancement of different types of wireless technologies, the problem arose of combining them to provide improved bandwidth and enhanced throughput. The answer emerged in the form of the Wireless Mesh Network (WMN). A typical WMN is made up of mesh routers and mesh clients, where mesh routers have somewhat limited mobility and form the backbone of the network, whereas mesh clients are allowed to be highly mobile, completely stationary, or anywhere in between. This forms a very versatile network which allows clients with different levels of mobility, interface, and bandwidth requirements to be part of the same network. Communication is achieved either directly with a router within its range or in an ad hoc fashion through several hops. A WMN is mainly designed to be self-configuring and dynamically self-adjusting. This ensures large network coverage with minimum infrastructure requirements, and hence low cost. Although a WMN offers manifold advantages, as a dynamic open medium it is also vulnerable to several security and privacy threats. Different types of clients such as laptops, cell phones, and smart devices can join or leave the network at any time. This opens up issues like fake registrations and packet sniffing.

This work deals with the issues of security and privacy separately, in two parts, by simulating countermeasures for different kinds of attacks in a WMN. The first part deals with creating a perfectly secure network for safe communication by using a bivariate polynomial scheme with low overhead instead of a public-private key mechanism. The second part deals with making any communication in the network anonymous by hiding the node initiating the session through redundancy, at the cost of some associated overhead.

Committee:

Dharma Agrawal, DSc (Committee Chair); Yizong Cheng, PhD (Committee Member); Chia Han, PhD (Committee Member)

Subjects:

Computer Science

Keywords:

Mesh Networks; Security; Onion Routing; Bivariate Polynomial Function; Backbone; Hybrid Networks

Sharma, Anurag. Exploiting Spatial Correlation Using Tree Based Polynomial Regression in a Three Dimensional Wireless Sensor Network
MS, University of Cincinnati, 2007, Engineering : Computer Engineering
A Wireless Sensor Network (WSN) consists of a large number of sensor nodes dispersed over a chosen area for monitoring purposes. Information about an event can be captured by the surrounding sensor nodes, and observations from sensor nodes in close proximity are highly correlated; this is called spatial correlation. In this thesis, we propose a scheme to exploit the spatial correlation of data in a three dimensional sensor network by using a polynomial regression technique. The scheme involves creation of a binary tree in the network, so that the network has two types of nodes: tree nodes and sensing nodes. The sensing nodes sense the physical attribute and report their position coordinates (x, y, z) and the sensed value to the nearest tree node. The tree nodes, in turn, fit a polynomial function to the received values and transmit the coefficients of regression to the parent tree node. The process starts with the leaf tree nodes and stops at the root. At the end of the process, the root has the polynomial function (attribute value as a function of the space coordinates) for the entire sensor network. When the sink queries the root, instead of flooding the entire network, the root can use the polynomial function to compute the attribute value at any location within the boundary, saving a great deal of energy in the sensor network. Simulations have been performed for different tree heights and different sensor node densities. Results presented in graphical form indicate that a tree with a depth of four provides accurate values with minimum error. Concluding remarks and plans for future work are also presented.
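The fitting step at a tree node reduces to an ordinary least-squares fit of the sensed attribute over (x, y, z). A minimal sketch with a first-order basis and synthetic readings (the thesis's polynomial basis and data differ; the field below is a made-up linear attribute so the fit is exact):

```python
import random

def fit_linear3(samples):
    # Least-squares fit v ~ c0 + c1*x + c2*y + c3*z via the normal equations,
    # solved with Gaussian elimination and partial pivoting (stdlib only)
    rows = [[1.0, x, y, z] for x, y, z, _ in samples]
    vs = [v for *_, v in samples]
    n = 4
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * v for r, v in zip(rows, vs)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fct = A[r][col] / A[col][col]
            for cc in range(col, n):
                A[r][cc] -= fct * A[col][cc]
            b[r] -= fct * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][cc] * coef[cc] for cc in range(r + 1, n))) / A[r][r]
    return coef

def field(x, y, z):
    # Synthetic attribute value over space (illustrative)
    return 2 + 0.5 * x - y + 3 * z

rng = random.Random(1)
pts = [(rng.random(), rng.random(), rng.random()) for _ in range(30)]
data = [(x, y, z, field(x, y, z)) for x, y, z in pts]   # sensing-node reports
c = fit_linear3(data)                                   # coefficients held at a tree node
# The root can now answer a sink query at any location from the coefficients alone
pred = c[0] + c[1] * 0.2 + c[2] * 0.4 + c[3] * 0.6
```

Only the four coefficients travel up the tree, which is why the per-link packet size stays constant regardless of how many sensing nodes reported.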

Committee:

Dr. Dharma Agrawal (Advisor)

Subjects:

Computer Science

Keywords:

three dimensional wireless sensor network; data aggregation; polynomial regression

He, Bing. Architecture Design and Performance Optimization of Wireless Mesh Networks
PhD, University of Cincinnati, 2010, Engineering : Computer Science and Engineering

A Wireless Mesh Network (WMN) is a promising wireless technology for providing high-bandwidth Internet access over a specific coverage area, with relatively lower investment cost compared to a traditional access network. In a WMN, a mobile client (MC) can access the Internet through a wireless backbone formed by wireless Mesh Routers (MRs), which are interconnected in a multi-hop fashion, while some MRs known as Internet Gateways (IGWs) act as communication bridges between the wireless backbone and the Internet. The design of the network architecture is a fundamental issue for a WMN; it is critical in determining the network performance and providing Quality of Service (QoS) for end users, and thus should be addressed carefully.

A unique characteristic of a WMN is its IGW-oriented Internet traffic. Thus, the deployment of IGWs is the key problem in the network design and is investigated in this dissertation. Two IGW-oriented network architectures are analyzed, and the corresponding QoS requirements and constraints are evaluated. The IGW deployment problem is then formulated as a multi-objective optimization problem. Besides the linear programming approach, several heuristic algorithms are proposed and evaluated. Extensive simulations show the effectiveness of the proposed solutions.

To improve the performance of a given WMN, load balancing between different IGW domains is also investigated. Fairness between IGW domains improves the network performance and provides better QoS for end users. A fairness index is defined for both homogeneous and heterogeneous WMNs. A distributed load balancing scheme is proposed, and three load balancing algorithms based on the diffusion methodology are introduced within the proposed scheme.

Authenticated key establishment (AKE) schemes enable two entities (e.g., a client and a server) to share common communication keys in an authentic way. Due to the mobility of mesh clients (MCs), a WMN needs a fast and efficient authentication and key establishment scheme to provide adequate security during client handoff while meeting the Quality of Service (QoS) requirements. In this dissertation, we discuss the authentication performance requirements imposed by the unique WMN characteristics. Distributed authenticated key establishment schemes are proposed based on hierarchical multi-variable symmetric functions (HMSF) and identity-based cryptography (IBC), respectively, which enable fast key agreement and mutual authentication between network entities in a WMN. In these schemes, network entities such as MCs and mesh access points (e.g., mesh routers) can authenticate each other and establish pairwise communication keys without any interaction with a centralized authentication center, while substantially reducing the communication overhead and the authentication delay.

Committee:

Dharma Agrawal, DSc (Committee Chair); Kenneth Berman, PhD (Committee Member); Chia Han, PhD (Committee Member); Kelly Cohen, PhD (Committee Member); Ernest Hall, PhD (Committee Member)

Subjects:

Computer Science

Keywords:

Wireless Mesh Networks; Internet Gateway; Load Balancing; Authenticated Key Establishment; Symmetric Polynomial; Identity-based Cryptography

Banerjee, Torsha. Energy Efficient Data Representation and Aggregation with Event Region Detection in Wireless Sensor Networks
PhD, University of Cincinnati, 2008, Engineering : Computer Science

Unlike conventional networks, wireless sensor networks (WSNs) are limited in power, have much smaller memory buffers, and possess relatively slower processing speeds. These characteristics necessitate minimum transfer and storage of information in order to prolong the network lifetime. In this dissertation, we exploit the spatio-temporal nature of sensor data to approximate the current value of a sensor based on its own readings and those obtained from neighboring sensors.

We propose a Tree based polynomial REGression algorithm (TREG) that addresses the problem of data compression in wireless sensor networks. Instead of aggregated data, a polynomial function (P) is computed by the regression function TREG. The coefficients of P are then passed on to achieve the following goals: (i) the sink can get attribute values in regions devoid of sensor nodes, and (ii) readings over any portion of the region can be obtained at one time by querying the root of the tree. As the size of the data packet from each tree node to its parent remains constant, the proposed scheme scales very well with growing network density or increased coverage area.

Since physical attributes exhibit gradual change over time, we propose an iterative scheme, UPDATE_COEFF, which obviates the need to perform the regression repeatedly by using approximations based on previous readings. Extensive simulations are performed on real world data to demonstrate the effectiveness of our proposed aggregation algorithm, TREG. Results reveal that for a network density of 0.0025 nodes/m², a complete binary tree of depth 4 kept the absolute error below 6%. A data compression ratio of about 0.02 is achieved using our proposed algorithm, almost independent of the tree depth. In addition, our proposed updating scheme makes the aggregation process faster while maintaining the desired error bounds.

We also propose a Polynomial-based scheme that addresses the problem of Event Region Detection (PERD) for WSNs. When a single event occurs, a child of the tree sends a Flagged Polynomial (FP) to its parent if the readings it approximates fall outside the data range defining the existing phenomenon. After the aggregation process is over, the root, which holds the two polynomials P and FP, can be queried for FP (approximating the new event region) instead of flooding the whole network. For multiple such events, instead of computing a polynomial corresponding to each new event, areas with the same data range are combined by the corresponding tree nodes and the aggregated coefficients are passed on. Results reveal that a new event can be detected by PERD with a detection error that remains constant and below a threshold of 10%. As the node density increases, the accuracy and delay of event detection remain almost constant, making PERD highly scalable.

Whenever an event occurs in a WSN, data is generated by nearby sensors, and relaying the data to the base station (BS) makes sensors closer to the BS run out of energy at a much faster rate than sensors in other parts of the network. This gives rise to an unequal distribution of residual energy in the network and makes the sensors with lower remaining energy die much faster than others. We propose a scheme for enhancing network lifetime using mobile cluster heads (CHs) in a WSN. To keep the remaining energy more evenly distributed, some energy-rich nodes are designated as CHs, which move in a controlled manner towards sensors rich in energy and data. This eliminates the multihop transmission required by the static sensors and thus increases the overall lifetime of the WSN. We combine the ideas of clustering and mobile CHs to first form clusters of static sensor nodes.

A collaborative strategy among the CHs further increases the lifetime of the network. The time taken to transmit data to the BS is reduced further by making the CHs follow a connectivity strategy that always maintains a connected path to the BS.

Spatial correlation of sensor data can be further exploited for dynamic channel selection in cellular communication. In such a scenario, wireless sensors can be deployed within a licensed band (each sensor tuned to a channel frequency at a particular time) to sense the interference power of the frequency band. In an ideal channel, the interference temperature (IT), which is directly proportional to the interference power, can be assumed to vary spatially with the frequency of the sub-channel. We propose a scheme for fitting the sub-channel frequencies and corresponding ITs to a regression model, so that the IT of a random sub-channel can be calculated for further analysis of channel interference at the base station. Our scheme of sensor-based Dynamic Channel Selection (S-DCS) operates in the extended C-band for assignment to unlicensed secondary users. S-DCS proves economical from the energy consumption point of view and achieves accuracy with an error bound within 6.8%. Moreover, users are assigned empty sub-channels without actually probing them, incurring minimum delay in the process. The overall channel throughput is maximized along with fairness to individual users.

Committee:

Dr. Dharma Agrawal (Advisor)

Subjects:

Remote Sensing

Keywords:

wireless sensor; data aggregation; polynomial regression; energy efficiency; mobile; faulty cognitive radio

Wu, Xiaoming
Approximation using linear fitting neural network: Polynomial approach and Gaussian approach
Master of Science (MS), Ohio University, 1991, Electrical Engineering & Computer Science (Engineering and Technology)
Approximation using linear fitting neural network: Polynomial approach and Gaussian approach

Committee:

Henryk Lozykowski (Advisor)

Keywords:

Approximation; Linear Fitting Neural Network; Polynomial Approach; Gaussian Approach

Harris, Mary Margaret
Unpacking Emotional Dissonance: Examining the Effects of Event-Level Emotional Dissonance on Well-Being Using Polynomial Regression
Doctor of Philosophy, University of Akron, 2014, Psychology-Industrial/Organizational
This research sought to provide a detailed understanding of emotional dissonance, or the discrepancy between feelings and displays, as it relates to emotional exhaustion and psychological vitality. This was the first study to examine dissonance as it is defined, by modeling feelings and displays as separate constructs, and to differentiate between different experiences of dissonance. All hypotheses were tested using a large archival dataset collected with experience sampling methodology, which included 250 call center employees who rated consumer interactions approximately three times per day over the course of 10 work days. This research provides three overarching contributions to the emotional labor literature. First, modeling dissonance using domain-specific (i.e., positive and/or negative) felt and displayed emotions provided an enhanced understanding of the relationship between well-being and dissonance. In particular, the effect of congruent feelings and displays (i.e., no dissonance) on well-being depended on the intensity of those emotions, suggesting that not all situations without dissonance are equally healthy. Second, dissonance predicted well-being above and beyond feelings and displays (each of which also had unique relationships with well-being). Third, this research leveraged polynomial regression and three-dimensional response surface methodology to extend our knowledge of the nature and effects of emotional dissonance in customer service situations.
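The polynomial-regression approach the abstract describes can be made concrete with a minimal sketch: felt and displayed emotions enter as separate predictors along with their quadratic and interaction terms, and response-surface analysis inspects the fitted curvature. The data-generating rule, sample size, and variable names below are assumptions for illustration.

```python
import numpy as np

# Simulated event-level ratings: felt (F) and displayed (D) emotion
# intensity plus a well-being outcome W.  The rule is an assumption
# chosen so well-being drops as the feelings-displays discrepancy grows.
rng = np.random.default_rng(0)
F = rng.uniform(1, 5, 200)
D = rng.uniform(1, 5, 200)
W = 5.0 - (F - D) ** 2  # congruence (F == D) is healthiest in this toy data

# Quadratic polynomial regression surface:
#   W ~ b0 + b1*F + b2*D + b3*F^2 + b4*F*D + b5*D^2
X = np.column_stack([np.ones_like(F), F, D, F**2, F * D, D**2])
b, *_ = np.linalg.lstsq(X, W, rcond=None)

# Response-surface analysis inspects the curvature along the line of
# incongruence (D = -F), given by b3 - b4 + b5; a negative value means
# well-being falls as dissonance increases.
print(round(float(b[3] - b[4] + b[5]), 3))  # → -4.0
```

Modeling feelings and displays as separate predictors, rather than as a single difference score, is what lets the surface distinguish congruence at low intensity from congruence at high intensity.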

Committee:

James Diefendorff, Dr. (Advisor); Paul Levy, Dr. (Committee Member); Andrea Snell, Dr. (Committee Member); Jennifer Stanley, Dr. (Committee Member); Maria Hamdani, Dr. (Committee Member)

Subjects:

Organizational Behavior; Psychology; Quantitative Psychology; Statistics

Keywords:

emotional dissonance; polynomial regression; three-dimensional response surface modeling; experience sampling method; well-being; emotional exhaustion; psychological vitality; emotional displays; feelings; China; call center; emotion regulation

Sarmah, Dipsikha
Evaluation of Spatial Interpolation Techniques Built in the Geostatistical Analyst Using Indoor Radon Data for Ohio, USA
Master of Science in Civil Engineering, University of Toledo, 2012, Civil Engineering

According to the United States Environmental Protection Agency, radon is the number one cause of lung cancer among non-smokers, and it is responsible for about 21,000 lung cancer deaths every year in the United States. In the State of Ohio, 14% of lung cancer deaths are caused by radon. It is essential to have radon concentration data for every location (i.e., zip code) so that necessary preventive measures can be taken. Measuring the radon concentration across the entire State of Ohio would be very expensive and time-consuming. This research focuses on the application of six geographical information system (GIS) based interpolation techniques to estimate the radon concentration in the unmeasured zip codes in the State of Ohio. The radon concentrations in homes have been obtained by University of Toledo researchers from various commercial testing services, university researchers, and county health departments. The data are divided into two sets: the first uses 80% of the data for training the different interpolation schemes, and the second includes the remaining 20% for evaluating the interpolation techniques.
Statistical performance measures such as the coefficient of correlation (r), Spearman correlation coefficient (ρ), slope of the regression line (m), ratio of the intercept of the regression line to the average observed concentration (b/Co), fractional variance (FV), fraction of predictions within a factor of two of the observations (FA2), model comparison measure (MCM2), geometric mean bias (MG), geometric mean variance (VG), normalized mean square error (NMSE), fractional bias (FB), revised index of agreement (IOAr), accuracy for paired peak (Ap), and maximum ratio (Rmax), along with scatter plots, quantile-quantile (QQ) plots, and bootstrap 95% confidence interval estimates based on the extreme-end (i.e., peak-end/low-end) and mid-range concentrations of indoor air quality (IAQ) models, are computed on the predicted data points to evaluate the best interpolation technique.

Considering the statistical indicators for peak-end, low-end, and mid-range estimates, it has been found that cokriging is a suitable technique for peak-end estimates, and the radial basis function (RBF) technique meets all the acceptable criteria for low-end and mid-range estimates. After considering the closeness of the greater number of measures to their respective ideal values and the graphical representations of the scatter plots and QQ plots, the RBF technique surpasses the other five interpolation techniques. However, the summary of the bootstrap confidence interval estimates indicates that the RBF technique is not significantly different from the other five interpolation techniques under all situations. Therefore, the RBF technique may not always be the best technique when applied to similar datasets from other states and countries. The RBF technique is tentatively suggested in this thesis for interpolating the radon concentration in the unmeasured zip codes in the State of Ohio. This technique is used to understand the extent of radon problems in Ohio and provides a complete picture of the radon distribution in the state. From the zip-code-based analysis, the numbers of zip codes exceeding 2.7 pCi/l (the World Health Organization (WHO) recommended limit), 4 pCi/l (the US Environmental Protection Agency (EPA) action limit), 8 pCi/l, and 20 pCi/l are 1300, 693, 28, and 2, respectively, after prediction using the RBF technique.
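Of the techniques this thesis evaluates, Inverse Distance Weighting (IDW, named in the keywords) is the simplest to sketch: the estimate at an unmeasured location is a distance-weighted average of the measured values. The coordinates, radon values, and power parameter below are made up for illustration.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse Distance Weighting: estimate z at each query point as a
    distance-weighted average of the measured values."""
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    out = []
    for q in np.atleast_2d(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):              # query coincides with a sample point
            out.append(z_known[d == 0][0])
            continue
        w = 1.0 / d ** power            # nearer samples get more weight
        out.append(np.sum(w * z_known) / np.sum(w))
    return np.array(out)

# Hypothetical zip-code centroids (x, y) with measured radon (pCi/l).
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
radon = [2.0, 4.0, 4.0, 6.0]
print(idw(pts, radon, [(0.5, 0.5)]))  # symmetric point → mean of the samples
```

RBF interpolation, the technique the thesis tentatively recommends, replaces these inverse-distance weights with a radial kernel fitted to honor the samples exactly, so it can extrapolate above or below the measured range where IDW cannot.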

Committee:

Ashok Kumar, PhD (Committee Chair); Brian W. Randolph, PhD (Committee Member); Matthew Franchetti, PhD (Committee Member)

Subjects:

Environmental Engineering

Keywords:

Radon; GIS; kriging; cokriging; radial basis function (RBF); Inverse Distance Weighting (IDW); Local Polynomial Interpolation (LPI); Global Polynomial Interpolation (GPI); interpolation; spatial interpolation

Wang, Ting
Algorithms for parallel and sequential matrix-chain product problem
Master of Science (MS), Ohio University, 1997, Electrical Engineering & Computer Science (Engineering and Technology)
Algorithms for parallel and sequential matrix-chain product problem

Committee:

David Juedes (Advisor)

Keywords:

Matrix Chain Product; Polynomial Algorithm; Parallel Polygon Triangulation

Aspiras, Theus Herrera
Hierarchical Autoassociative Polynomial Network for Deep Learning of Complex Manifolds
Doctor of Philosophy (Ph.D.), University of Dayton, 2015, Electrical Engineering
Artificial neural networks are an area of research that has been explored extensively. With these networks, models of biological neural networks can be created mathematically for several different purposes. The architecture being explored here is the nonlinear line attractor (NLA) network, which uses a polynomial weighting scheme instead of a linear weighting scheme for specific tasks. We have conducted research on this architecture and found that it converges well towards a specific trained pattern and diverges on untrained patterns. We have also improved the architecture with a Gaussian weighting scheme, which provides modularity in the architecture and reduces redundancy in the network. Testing of the new weighting scheme on different datasets gave better convergence characteristics, quicker training times, and improved recognition rates. The NLA architecture, however, is not able to reduce dimensionality, so a nonlinear dimensionality reduction technique is used. To improve the architecture further, we must be able to decompose the NLA architecture to alleviate problems in the original structure and allow further improvements. We propose a hierarchical autoassociative polynomial network (HAP Net), which reorders the NLA architecture to include different ways of using polynomial weighting. In each layer, orders of each input can be connected by a weight set, which can be trained by a backpropagation algorithm. By combining different architectures based on an understanding of MLP, attractor, and modular networks, we create a multi-purpose architecture that includes all aspects of the previous architectures and is far improved for classification and recognition tasks. Experiments conducted on the standard MNIST dataset show very promising results for the HAP Net framework.
Research is progressing on evaluating the performance of HAP Net on various datasets and on incorporating advanced learning strategies, convolutional neural networks, and extreme learning machines.

Committee:

Vijayan Asari, Ph.D. (Committee Chair); Raul Ordonez, Ph.D (Committee Member); Eric Balster, Ph.D. (Committee Member); Wesam Sakla, Ph.D. (Committee Member)

Subjects:

Computer Engineering; Electrical Engineering

Keywords:

Polynomial Neural Network; Complex Manifolds; Deep Learning; Nonlinear Weighting; Modular; Classification; MNIST; HAP net

Shui, Yuhao
Strategic Trajectory Planning of Highway Lane Change Maneuver with Longitudinal Speed Control
Master of Science, The Ohio State University, 2015, Electrical and Computer Engineering
Even though extensive research has been conducted on vehicle highway longitudinal control under simple driving scenarios, real-life implementation of such a system requires consideration of surrounding traffic situations and movements such as lane changes, weaves, and on-ramp and off-ramp merges. In this thesis, the concept of the driver being responsible for lateral control with automated longitudinal control is deployed in order to realize mobility and safety benefits, and a fundamental framework has been built to investigate highway longitudinal control with a lane change trajectory generated by a geometric high-order polynomial. With the assumption that surrounding vehicles' positions and velocities are available, a highway two-lane situation in which the driver is advised by the lane change module is studied. The system consists of several different modules: a vehicle single-lane following control module, a maneuver generation module, a lane change trajectory generation module, and a Model Predictive Control (MPC) module for lane changing. Three controllers (a PID cruise controller, a transitional trajectory controller, and a vehicle following controller) are used to achieve the functionality of vehicle following, with a Finite State Machine (FSM) designed for controller-switch decision making based on surrounding traffic movements. The following controller is needed to follow a slower preceding vehicle when the driver does not intend to make a lane change. The vehicle maneuver generation module is first designed to inform the system, for example, whether an acceleration or deceleration lane change is needed to lead or follow the vehicle in the adjacent lane. Both Time to Collision and Inter-Vehicular Time are used as collision indicators to ensure safety. All possible cases of the highway two-lane situation with one slow preceding vehicle and one surrounding vehicle in the adjacent lane are integrated into the maneuver generation FSM design.
Based on the suggested maneuver, the lane change trajectory generation module provides a desired trajectory for the longitudinal controller to follow. A purely geometric high-order polynomial trajectory planning method is used to design an obstacle-avoidance lane change trajectory. Finally, an MPC controller is utilized to control the speed of the vehicle while lane changing.
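The geometric polynomial trajectory idea can be sketched with a quintic (fifth-order) lateral polynomial, a common choice for lane changes because six coefficients can match position, velocity, and acceleration at both endpoints of the maneuver. The quintic order, lane width, and duration below are assumptions for illustration, not necessarily the thesis's exact formulation.

```python
import numpy as np

def quintic_lane_change(lane_width, T):
    """Coefficients a_0..a_5 of y(t) = sum(a_k * t**k) with zero lateral
    velocity and acceleration at both ends of the maneuver."""
    rows, rhs = [], []
    for t, (y, dy, ddy) in [(0.0, (0.0, 0.0, 0.0)),
                            (T, (lane_width, 0.0, 0.0))]:
        rows.append([t**k for k in range(6)])                      # position
        rows.append([k * t**(k - 1) if k >= 1 else 0.0
                     for k in range(6)])                           # velocity
        rows.append([k * (k - 1) * t**(k - 2) if k >= 2 else 0.0
                     for k in range(6)])                           # acceleration
        rhs.extend([y, dy, ddy])
    return np.linalg.solve(np.array(rows), np.array(rhs))

a = quintic_lane_change(lane_width=3.7, T=4.0)  # 3.7 m lane, 4 s maneuver
y = np.polynomial.polynomial.polyval
print(round(float(y(4.0, a)), 3), round(float(y(2.0, a)), 3))  # → 3.7 1.85
```

Because velocity and acceleration vanish at both ends, the trajectory splices smoothly onto straight lane-keeping segments, which is what makes it suitable as a reference for the MPC speed controller.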

Committee:

Umit Ozguner (Advisor); Junmin Wang (Advisor); Lisa Fiorentini (Committee Member); Chia-Hsiang Menq (Committee Member)

Subjects:

Automotive Engineering; Electrical Engineering; Mechanical Engineering

Keywords:

Lane change; Adaptive Cruise Control; Model Predictive Control; Polynomial; Finite State Machine

Martell, Patrick Keith
Hierarchical Auto-Associative Polynomial Convolutional Neural Networks
Master of Science (M.S.), University of Dayton, 2017, Electrical Engineering
Convolutional neural networks (CNNs) lack ample methods to improve performance without adding more input data, modifying existing data, or changing network design. This work seeks to add to the methods available that do not require more data or a trial-and-error approach to network design. This thesis seeks to demonstrate that a polynomial layer inserted into a CNN, all other factors being equal, has great potential to improve classification rates. Some existing methods seek to fill this gap; this research investigates an alternative solution. Most other methods in a similar problem space look at ways to improve the performance of existing layers, such as modifying the type of pooling or activation functions. Other methods discussed later, Dropout and DropConnect, zero out nodes or connections, respectively, seeking to improve performance. This research focuses on adding a new type of layer to typical CNNs: the polynomial layer. This layer adds a local connectivity to each of the perceptrons, creating N connections up to the Nth power of the initial value of the perceptron. This is done in either the convolutional portion or the fully connected portion, with the idea that the higher dimensionality allows for a better description of the input space. The idea was tested on two datasets, MNIST and CIFAR10, both classification databases with 10 classes. These datasets contain 28×28 grayscale and 32×32 RGB images, respectively. It was determined that the polynomial layer universally enabled the tested CNN to perform better on the MNIST data, and that convolutional-layer polynomials aid CNNs trained at a lower learning rate on the CIFAR10 dataset. Looking forward, more CNN designs should be analyzed, along with more learning rates, including variable rates. Additionally, tests on a wider range of datasets would enable a broader understanding.
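A minimal sketch of the polynomial-layer idea, assuming the layer simply augments each activation with its powers up to order N so that the following (linear) weights act on a polynomial expansion of each node; the exact wiring in the thesis may differ.

```python
import numpy as np

def polynomial_layer(x, order=3):
    """Expand each activation into [x, x**2, ..., x**order], giving the
    next layer N connections per node, up to the Nth power."""
    return np.concatenate([x ** p for p in range(1, order + 1)], axis=-1)

# A hypothetical batch of 4 activations from a preceding layer.
h = np.array([[0.5, -1.0, 2.0, 0.0]])
z = polynomial_layer(h, order=3)
print(z.shape)  # → (1, 12): 4 nodes × 3 powers
```

A subsequent dense layer over this expansion can then realize an arbitrary cubic function of each activation, which is the extra descriptive power the higher dimensionality provides.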

Committee:

Vijayan Asari, Ph.D. (Advisor); Theus Aspiras, PH.D. (Committee Member); Eric Balster, Ph.D. (Committee Member)

Subjects:

Electrical Engineering

Keywords:

Convolutional Neural Network; Polynomial; CNN; Classification; MNIST

Zhao, Wancheng
A Structural Damage Identification Method Based on Unified Matrix Polynomial Approach and Subspace Analysis
MS, University of Cincinnati, 2008, Engineering : Mechanical Engineering
Vibration-based damage detection of engineering structures has become an important and difficult issue over the last couple of decades. Research in vibration-based structural damage detection has been rapidly expanding from traditional modal parameter estimation based techniques to modern feature-based, online monitoring techniques. However, there is still a need for a universal structural damage detection method that does not depend on modal parameter estimation, a finite element model, or a specific structural type. This research outlines and validates a Unified Matrix Polynomial Approach (UMPA) and subspace analysis based structural damage detection method. UMPA presents a theoretical basis and a fundamental mathematical framework for experimental modal parameter estimation algorithms, while Singular Value Decomposition (SVD) based subspace analysis provides a mechanism to extract and compare characteristic features from this mathematical framework to detect structural damage. Simulations were performed on an analytical 15 degree-of-freedom (DOF) mass-spring-damper system and a lightly damped circular plate finite element model to validate and assess the proposed structural damage detection method. The results show that the proposed method successfully identifies structural damage under all test conditions. The proposed method has a significant resistance to measurement uncertainty, has a good consistency with the severity of the damage, and is applicable to various structural damage locations.

Committee:

Randall J. Allemang, PhD (Committee Chair); Teik C. Lim, PhD (Committee Member); Allyn W. Phillips, PhD (Committee Member)

Subjects:

Mechanical Engineering

Keywords:

Structural Damage Detection; Singular Value Decomposition; Unified Matrix Polynomial Approach

Parker, William David
Speeding Up and Quantifying Approximation Error in Continuum Quantum Monte Carlo Solid-State Calculations
Doctor of Philosophy, The Ohio State University, 2010, Physics

Quantum theory has successfully explained the mechanics of much of the microscopic world. However, Schrödinger's equations are difficult to solve for many-particle systems. Mean-field theories such as Hartree-Fock and density functional theory account for much of the total energy of electronic systems but fail on the crucial correlation energy that predicts solid cohesion and material properties.

Monte Carlo methods solve differential and integral equations with error independent of the number of dimensions in the problem. Variational Monte Carlo (VMC) applies the variational principle to optimize the wave function used in the Monte Carlo integration of Schrödinger's time-independent equation. Diffusion Monte Carlo (DMC) represents the wave function by electron configurations diffusing stochastically in imaginary time to the ground state.

Approximations in VMC and DMC make the problem tractable but introduce error in parameter-controlled and uncontrolled ways. The many-electron wave function consists of single-particle orbitals. The orbitals are combined in a functional form to account for electron exchange and correlation. Plane waves are a convenient basis for the orbitals. However, plane-wave orbitals grow in evaluation cost with basis-set completeness and system size. To speed up the calculation, polynomials approximate the plane-wave sum. Four polynomial methods tested are: Lagrange interpolation, pp-spline interpolation, B-spline interpolation and B-spline approximation. The polynomials all increase speed by an order of the number of particles. B-spline approximation most consistently maintains accuracy in the seven systems tested. However, polynomials increase the memory needed by a factor of two to eight. B-spline approximation with a separate approximation for the Laplacian of the orbitals increases the memory by a factor of four over plane waves.

Polynomial-based orbitals enable larger calculations and careful examination of the error introduced by approximations in VMC and DMC. In silicon bulk and interstitial defects, tens of variational parameters in the wave function converge the VMC energy. A basis-set cutoff ≅1000 eV converges the VMC energy to within 10 meV. Controlling the population of electron configurations representing the DMC wave function does not bias the energy above 24 configurations. An imaginary time step of 10⁻² hartree⁻¹ for the configurations introduces no error above the 10 meV level. Finite-size correction methods on the 16-atom cell differ by up to 2 eV, and a 1 eV discrepancy between the 16- and 64-atom cells indicates that finite-size error is still significant. Pseudopotentials constructed with and without scalar relativistic correction agree in DMC energy differences at the 100 meV level, and mean-field calculations with and without pseudopotentials suggest a correction of 50-100 meV. Using the VMC wave function to evaluate the nonlocal portion of the pseudopotential introduces an error at the 1 meV level. DMC energies using orbitals produced with varying mean-field approximations produce a 1 eV range in the defect formation energies, while applying a backflow transformation to the electron coordinates reduces Monte Carlo fluctuations. The backflow-transformed average also permits an extrapolation to zero fluctuation, which estimates the formation energy unbiased by the starting wave function to be 4.5-5 eV.
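The speed-up strategy, tabulating an expensive orbital once on a grid and then evaluating it by local polynomial interpolation, can be sketched with Lagrange interpolation, the first of the four methods the abstract lists. The toy "orbital" (a short cosine sum standing in for a plane-wave sum) and the grid spacing below are assumptions for illustration.

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = np.prod([(x - xj) / (xi - xj)
                     for j, xj in enumerate(xs) if j != i])
        total += yi * w
    return total

# Tabulate the "orbital" once on a grid (the expensive step) ...
grid = np.linspace(0.0, 1.0, 9)
table = np.cos(2 * np.pi * grid) + 0.5 * np.cos(4 * np.pi * grid)

# ... then evaluate cheaply at an arbitrary point from 4 nearby grid values,
# with cost independent of how many plane waves the sum contains.
x = 0.37
approx = lagrange_eval(grid[1:5], table[1:5], x)
exact = np.cos(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)
print(abs(approx - exact) < 0.05)
```

The memory trade-off the abstract reports arises because the tabulated values (and, for B-spline approximation, a separate table for the Laplacian) must be stored for every orbital.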

Committee:

John W. Wilkins (Advisor); Nandini Trivedi (Committee Member); Jay A. Gupta (Committee Member); Michael G. Poirier (Committee Member); Dhabaleswar Panda (Committee Member)

Subjects:

Materials Science; Physics

Keywords:

quantum Monte Carlo; polynomial approximation; approximation error; Si interstitial defects

Almutairi, Najat Bandar
ON MULTILINEAR POLYNOMIALS EVALUATED ON QUATERNION ALGEBRA
MS, Kent State University, 2016, College of Arts and Sciences / Department of Mathematical Science
The purpose of this thesis is to describe the image of quaternion algebra under multilinear polynomials. In it, we provide a proof that the image of multilinear non-central polynomials contains ai + bj + ck for all real numbers a, b and c.
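The claim can be checked on a small example: the commutator [x, y] = xy - yx is a multilinear non-central polynomial, and its values on the quaternions are purely imaginary, i.e., of the form ai + bj + ck. A minimal sketch (pure Python, quaternions as 4-tuples):

```python
# Quaternions represented as 4-tuples (w, x, y, z) = w + xi + yj + zk.
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def commutator(p, q):
    """The multilinear polynomial [x, y] = xy - yx."""
    return tuple(a - b for a, b in zip(qmul(p, q), qmul(q, p)))

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(commutator(i, j))  # → (0, 0, 0, 2): ij - ji = 2k, a pure quaternion
```

The real (w) component of any commutator vanishes because the real part of pq equals that of qp, which is the one-variable-pair instance of the pattern the thesis proves for general multilinear non-central polynomials.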

Committee:

Mikhail Chebotar, Dr (Advisor); Jenya Soprunova, Dr (Committee Member); Benjamin Jaye, Dr (Committee Member)

Subjects:

Mathematics

Keywords:

quaternion algebra; multilinear polynomials; image of multilinear polynomials; central polynomial

Niranjan, Adityanarayan C
Normalization of Complex Mode Shapes by Truncation of the Alpha-Polynomial
MS, University of Cincinnati, 2015, Engineering and Applied Science: Mechanical Engineering
Finite element (FE) models are created to correctly predict the dynamic characteristics of a system without the need to test the system. This is a viable solution for testing the system in situations and environments which might be either physically unfeasible or exorbitantly expensive. These FE models, however, need to be validated with data obtained from actual tests to affirm that the model's predictions are indeed correct. One parameter needed to validate the analytical model is the mode shape. The mode shapes obtained from FE models are always real-valued due to the proportional nature of the assumed damping. The mode shapes obtained through experimental techniques, however, are more often than not complex-valued (complex modes). In order to validate the FE models, the complex mode shapes obtained from experimental data need to be normalized. In this thesis, two such normalization techniques are proposed. Over the years, several post-processing techniques have been suggested, which are discussed briefly in Chapter 2. Of the two proposed techniques, the first is a pre-processing normalization technique based on truncating the alpha-polynomial obtained from the Unified Matrix Polynomial Approach (UMPA) method of modal parameter estimation. The proposed concept was tested on data obtained from a circular plate using the frequency-domain, high-order Rational Fraction Polynomial (RFP) algorithm. The proposed technique eliminated the damping and normalized the complex modal vectors effectively in low-order polynomial-based algorithms like Polyreference Frequency Domain-2 (PFD-2). The undamped natural frequencies, however, deviated from those of the actual system for polynomials with order higher than two. These digressions are explained with the help of two and four degree-of-freedom (DOF) analytical models.
The second normalization method proposed is a post-processing technique based on the partial-fraction method of residue synthesis. This technique was tested on data obtained from a rectangular plate. It successfully normalized the complex modal vectors, correlating very well with the actual (complex) mode shapes and giving a very high modal assurance criterion (MAC) value.
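For context, complex-to-real normalization can be illustrated with a simple phase-rotation sketch. This is a generic post-processing approach, not the alpha-polynomial truncation or residue-synthesis techniques the thesis proposes.

```python
import numpy as np

def normalize_complex_mode(phi):
    """Rotate a complex modal vector so it best aligns with the real axis,
    then keep the real part as the normalized (real) mode shape."""
    theta = 0.5 * np.angle(np.sum(phi ** 2))  # best-fit overall phase angle
    return np.real(phi * np.exp(-1j * theta))

# A lightly damped mode: a real shape with a small uniform phase rotation,
# as might be identified from measured frequency response functions.
shape = np.array([1.0, -2.0, 3.0])
measured = shape * np.exp(0.3j)
print(np.allclose(normalize_complex_mode(measured), shape))  # → True
```

For genuinely complex modes, where the components' phases are scattered rather than uniformly rotated, such a single rotation cannot recover a real vector exactly, which is why dedicated techniques like those in this thesis are needed.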

Committee:

Randall Allemang, Ph.D. (Committee Chair); David L. Brown, Ph.D. (Committee Member); Allyn Phillips, Ph.D. (Committee Member)

Subjects:

Mechanics

Keywords:

Normalization; Truncation of the alpha-polynomial; Complex mode shapes; Damping distribution

Yin, Zhijun
Security of Unbalanced Oil-Vinegar Signature Scheme
PhD, University of Cincinnati, 2012, Arts and Sciences: Mathematical Sciences

We explore ideas for oil-vinegar signature schemes in multivariate polynomial cryptography.

In the first half, we focus on TTS (Tame Transformation Signature) systems. We find a structure attack that defeats a family of TTS systems, and the related complexity analysis shows that this family can be broken with time complexity O(2⁶¹).

In the second half, we discuss the algebraic attack on randomly built unbalanced oil-vinegar signature systems with different characteristics, and we explore the security of those general oil-vinegar systems under an F4 algorithm attack.

Committee:

Jintai Ding, PhD (Committee Chair); Dieter Schmidt, PhD (Committee Member); Ning Zhong, PhD (Committee Member)

Subjects:

Applied Mathematics

Keywords:

Signature Scheme; Cryptanalysis; Cryptography; TTS; F4; multivariate polynomial
