Search Results

(Total results 22)


  • 1. Selvganesan, Padmini Smart Statistics, Signal Processing and Instrumentation for Improved Diagnosis of Pediatric Sleep Apnea

    Doctor of Philosophy, University of Akron, 2020, Biomedical Engineering

    Obstructive Sleep Apnea (OSA) is a common pediatric breathing disorder in which the airway becomes completely or partially obstructed during sleep. It occurs in pediatric patients due to enlarged tonsils (tonsillitis), obesity, craniofacial abnormalities, and facial muscle disorders that cause hypotonia, all of which can obstruct the airway. Polysomnography (PSG) is the gold-standard overnight study for diagnosing sleep apnea. The quality of the data obtained from the various sensors in an overnight PSG is important for a good diagnosis, but the thermistor data is often corrupted when pediatric patients use a pacifier during the study. The corrupted data is therefore discarded and not used for analysis. This is a problem because the thermistor data captures apnea (complete reduction of airflow) and hypopnea (partial reduction of airflow) events, so discarding it can lead to an incorrect Apnea Hypopnea Index (AHI), the total number of apnea and hypopnea events per hour of the study. In the first objective, we set up a Bayesian framework that computes the probability of an apnea or hypopnea event from the artifact-free portion of the data, taking the other sensors into account, and applies that probability to the artifact-corrupted portion. This estimates the number of events hidden behind the artifacts and yields a revised AHI score, which is usually higher than the original score. This is of great value to the physician, as the AHI score plays a major role in determining the standard of care or the diagnostic procedure given to the patient after the PSG. As an example, for one particular patient a score of 4 was increased to greater than 5, which would have led to a different treatment plan. The second objective of the study was to use signal processing techniques to remove the artifacts from the thermistor data. 
This would pr (open full item for complete abstract)

    Committee: Ajay Mahajan (Advisor); Lawrence Noble Jr. (Committee Member); James Keszenheimer (Committee Member); Jiang Zhe (Committee Member); Sujay Datta (Committee Member) Subjects: Biomedical Engineering
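The effect of crediting events hidden behind artifacts can be shown with a toy calculation (hypothetical numbers; the dissertation's actual method is a Bayesian framework conditioned on the other sensor channels):

```python
def naive_ahi(events, total_hours):
    # Standard AHI: observed events per hour of study, implicitly
    # treating artifact-corrupted intervals as event-free.
    return events / total_hours

def revised_ahi(events, clean_hours):
    # Extrapolate the event rate seen in artifact-free data to the
    # whole night, crediting events hidden behind artifacts.
    return events / clean_hours

# Hypothetical night: 28 scored events, 7 h study, 1.75 h corrupted.
naive = naive_ahi(28, 7.0)             # 4.0 -- below the AHI = 5 threshold
revised = revised_ahi(28, 7.0 - 1.75)  # ~5.33 -- crosses the threshold
```

As in the abstract's example, an AHI of 4 rises above 5 once the artifact-corrupted time is no longer counted as event-free, which can change the treatment plan.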
  • 2. Liu, Qing On Anisotropic Functional Fourier Deconvolution Problem with Unknown Kernel

    Doctor of Philosophy (PhD), Ohio University, 2019, Mathematics (Arts and Sciences)

    We consider the estimation of a periodic bivariate function ƒ(⋅,⋅) based on observations from a noisy convolution model when the kernel (blurring) function g(⋅,⋅) is unknown. However, we are able to observe gδ(⋅,⋅), a noisy version of g, at the same time, which makes the estimation possible. We perform the deconvolution algorithm in the Fourier domain. A preliminary thresholding procedure is applied to the Fourier coefficients of the observations gδ to ensure a stable inversion. We construct a hard-thresholding wavelet estimator of ƒ using band-limited wavelet bases together with compactly supported wavelet bases, so that a fast estimation algorithm exists. To evaluate the performance of our estimator, we derive lower bounds for the mean integrated squared error (L2-risk), assuming that ƒ belongs to a Besov space of mixed smoothness and the kernel g possesses certain smoothness properties. We show that the proposed wavelet estimator is adaptive and asymptotically quasi-optimal within a logarithmic factor (in the minimax sense) over a wide range of Besov balls. Furthermore, we investigate the discrete case of our deconvolution model, as this is the common case in practice. We carry out a limited simulation study and show that our estimator performs well in a finite-sample setting. Finally, we extend the minimax results to the more general Lp-risk (1 ≤ p < ∞) and show that our estimator is asymptotically quasi-optimal within a logarithmic factor in this case as well.

    Committee: Rida Benhaddou (Advisor); Wei Lin (Committee Member); Chang Liu (Committee Member); Vladimir Vinogradov (Committee Member) Subjects: Mathematics; Statistics
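The estimator itself is bivariate and wavelet-based; as a rough one-dimensional illustration of the underlying idea (divide Fourier coefficients of the observations by those of the noisy kernel, but only where the kernel coefficient survives a threshold, so the inversion stays stable), here is a sketch with a naive O(n²) DFT:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n)
                for j in range(n)).real / n for k in range(n)]

def deconvolve(y, g_noisy, thresh):
    # Fourier-domain deconvolution with a preliminary threshold on the
    # (noisy) kernel coefficients: bins where |G| is too small are
    # zeroed rather than inverted, preventing noise blow-up.
    Y, G = dft(y), dft(g_noisy)
    F = [Yj / Gj if abs(Gj) > thresh else 0j for Yj, Gj in zip(Y, G)]
    return idft(F)
```

With a noise-free kernel and a tiny threshold this recovers the original signal exactly; in the noisy setting the threshold trades bias for stability.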
  • 3. Mohan, Arvind Data-Driven Analysis Methodologies for Unsteady Aerodynamics from High Fidelity Simulations

    Doctor of Philosophy, The Ohio State University, 2017, Aero/Astro Engineering

    Numerical methods and computational approaches for studying fluid flow have become increasingly popular and more mature in their capabilities since the 1960s, when the prohibitive experimental costs associated with flight vehicle development programs necessitated an alternative approach. Computational Fluid Dynamics (CFD) approaches for several practical applications were pioneered in the early 1980s, but despite the algorithmic advances, the computing power necessary to compute full three-dimensional flow-fields remained a bottleneck. With the advent of powerful microprocessors and high-performance supercomputers, high-fidelity three-dimensional CFD for cases of practical interest has become feasible over the last couple of decades. Although these high-fidelity simulations contain the desired physics, extracting that information is proving to be a challenge due to their extremely large size. For instance, a 3D Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) of an airfoil can produce terabytes of data. For such large datasets, it becomes difficult to use conventional analysis and visualization techniques without resorting to statistical methods better suited to such "big data" problems. There are several such methods, each of which usually emphasizes a certain aspect of the dataset; used in conjunction, however, they complement one another to generate a comprehensive understanding of the fundamental physical mechanisms prevalent in a flow-field. The central goal of this dissertation is to develop strategies based on novel statistical, model reduction, and signal processing techniques to derive such physical insights from large CFD datasets of practical interest. To demonstrate this, three high-fidelity LES datasets of various unsteady flow-fields arising in Micro Air Vehicle (MAV) flight have been analyzed in depth. 
They are a) Static stall of a NACA 0015 airfoil with plasma control, b) Dynamic stall in a plunging (open full item for complete abstract)

    Committee: Datta Gaitonde (Advisor); Jen Ping-Chen (Committee Member); Sandip Mazumder (Committee Member); Mei Zhuang (Committee Member) Subjects: Aerospace Engineering
  • 4. Aradhye, Hrishikesh Anomaly detection using multiscale methods

    Doctor of Philosophy, The Ohio State University, 2001, Graduate School

    Committee: Not Provided (Other) Subjects: Engineering
  • 5. May, Heather Wavelet-based Image Processing

    MS, University of Cincinnati, 2015, Arts and Sciences: Mathematical Sciences

    Abstract: This thesis guides the reader through the uses of wavelets in image processing, including a review of Fourier transforms, z-transforms, multiresolution analysis, and subband coding. A main advantage of the discrete wavelet transform (DWT) is its ability to decompose images in the space/time-frequency domain. The diverse applications of wavelets can be seen in biometrics, medical imaging, statistical analysis, and even current image technology such as JPEG 2000.

    Committee: Herbert Halpern Ph.D. (Committee Chair); Donald French Ph.D. (Committee Member); Benjamin Vaughan Ph.D. (Committee Member) Subjects: Mathematics
  • 6. Cao, Siyang Radar Sensing Based on Wavelets

    Doctor of Philosophy, The Ohio State University, 2014, Electrical and Computer Engineering

    Radar waveform design has been an active research area for decades. With the advent of advanced digital signal processing, high-speed computing, high-frequency electronics, and solid-state power amplifiers, emerging radar systems (such as UWB radar, multiple-input multiple-output (MIMO) radar, cognitive radar, etc.) are expecting more from their waveforms. Taking advantage of the new techniques, scientists and engineers are able to implement new waveforms that achieve significantly better performance than conventional radar systems at target detection, including range, speed, and shape. One objective of this dissertation is to develop a practical way to build flexible waveforms for modern radar. On the other hand, conventional radar systems detect targets or pixels of an area individually. Each target or pixel generates a set of data in real time, which must be recorded for off-line processing. When the number of elements is increased, a phased-array radar is able to generate narrow beams, which can detect more targets or cover larger areas for data collection in high definition. The disadvantage is the increased sensing time, since narrow beams need more time to cover the same area than wider beams. To address this issue, the sensing mechanism needs to be studied. A second objective of this dissertation is to develop a new sensing mechanism, named transform sensing, to cover wider areas, track more moving objects, and provide high resolution of the target area with a limited number of sensing operations. Because the waveform design and transform sensing in this dissertation are both based on wavelets, the dissertation introduces the wavelet basics. Then the wavelet-based waveform is presented. This waveform is generated by concatenating wavelet packets, and can suppress range sidelobes more effectively than the traditional Linear Frequency Modulated (LFM) waveform. 
In addition, the wavelet based waveform can de-couple its envelope and carrier for range and velocity (open full item for complete abstract)

    Committee: Yuan Zheng (Advisor); Chris Baker (Committee Member); Chi-Chih Chen (Committee Member) Subjects: Electrical Engineering; Remote Sensing
  • 7. Souare, Moussa Sar Image Analysis In Wavelets Domain

    Doctor of Philosophy, Case Western Reserve University, 0, EECS - Computer Engineering

    A scheme for synthetic aperture radar (SAR) image analysis in the wavelet domain is presented. SAR images exhibit noise known as speckle, which fundamentally arises from variation in the radar microwave signals as echoes returned from the transmitted antenna beams travel different distances while the antenna moves. This thesis develops a method to analyze SAR images via image processing and statistical parameters. The analysis was done with the goal of identifying an object using the energy estimated from the image. The proposed solution uses the wavelet transform; the wavelet domain is chosen to overcome the time and frequency limitations that the Fourier transform exhibits, reducing the algorithm execution time from O(n log n) to O(n). The SAR image was subjected to wavelet coefficient decomposition in Matlab. The Hurst parameter was then computed to measure the long-range dependency of the image; evaluating it allowed the optimum level of wavelet coefficient decomposition to be determined. Additionally, the entropy of SAR images in the wavelet domain was evaluated. The spectral entropy signatures appear to be multi-scale and are useful for understanding the complexity and predictability of SAR images. Subsequently, the energy at each frequency level was evaluated to determine the amount of information the image conveys, leading to object identification. Furthermore, the same algorithm was applied to medical images to distinguish healthy from diseased organs or tissues. Finally, the algorithm was implemented on an FPGA Virtex-7, which accelerates the process by 40% compared to previous generations of FPGAs. The experimental results are validated and verified using Sandia National Laboratories and DARPA MSTAR mixed-targets datasets.

    Committee: Christou Papachristou (Advisor) Subjects: Aerospace Engineering; Computer Engineering; Electromagnetics; Engineering
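The energy-per-scale and entropy signatures described above can be illustrated in one dimension with an orthonormal Haar decomposition (a simplified sketch; the thesis works on 2-D SAR images in Matlab, and the level/wavelet choices here are illustrative):

```python
import math

def haar_step(x):
    # One level of the orthonormal Haar transform: pairwise
    # averages (approximation) and differences (detail).
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def subband_energies(x, levels):
    # Energy of the detail coefficients at each decomposition level,
    # plus the final approximation energy; the per-level energy
    # distribution is the kind of signature used for identification.
    energies = []
    for _ in range(levels):
        x, d = haar_step(x)
        energies.append(sum(c * c for c in d))
    energies.append(sum(c * c for c in x))
    return energies

def spectral_entropy(energies):
    # Shannon entropy of the normalized energy distribution.
    total = sum(energies)
    p = [e / total for e in energies if e > 0]
    return -sum(q * math.log2(q) for q in p)
```

Because the Haar transform is orthonormal, the subband energies sum to the signal energy, so the normalized distribution is a well-defined probability vector for the entropy.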
  • 8. Peterson, Michael Evolutionary Methodology for Optimization of Image Transforms Subject to Quantization Noise

    Doctor of Philosophy (PhD), Wright State University, 2008, Computer Science and Engineering PhD

    Lossy image compression algorithms sacrifice perfect image reconstruction in favor of decreased storage requirements. Modern lossy compression schemes, such as JPEG2000, rely upon the discrete wavelet transform (DWT) to achieve high levels of compression while minimizing the loss of information for image reconstruction. Some compression applications require higher levels of compression than those achieved through application of the DWT and entropy coding alone. In such lossy systems, quantization provides high compression rates at the cost of increased distortion. Unfortunately, as the amount of quantization increases, the performance of the DWT for accurate image reconstruction deteriorates. Previous research demonstrates that a genetic algorithm can improve image reconstruction in the presence of quantization error by replacing the wavelet filter coefficients with a set of evolved coefficients. This dissertation develops a methodology for the evolution of digital filters capable of outperforming the DWT for image reconstruction at a given compression rate in the presence of quantization error. This dissertation compares potential fitness measures for evaluating reconstruction error. Experiments compare the usefulness of local versus standard population initialization and mutation operators. In order to perform an efficient yet thorough traversal of the search space, several recombination operators developed specifically for real-valued evolution are evaluated. Additionally, this dissertation presents and develops a novel technique to emphasize the reconstruction of the high spatial frequency areas of an image through the use of edge detection algorithms and focused evolution. An analysis of the ease of traversal through the fitness landscapes defined by various image quality measures supports the development of a framework for evolving robust image transform filters. 
Particular emphasis is placed upon the development of transforms that provide consistently accurate reconstru (open full item for complete abstract)

    Committee: Michael Raymer PhD (Advisor); Gary Lamont PhD (Committee Member); Travis Doom PhD (Committee Member); Mateen Rizki PhD (Committee Member); Zhiqiang Wu PhD (Committee Member) Subjects: Computer Science; Electrical Engineering
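A minimal sketch of the evolutionary loop this line of work builds on: Gaussian mutation of a real-valued coefficient vector with keep-the-best selection. The actual dissertation evaluates image-reconstruction fitness measures, population initialization, and specialized real-valued recombination operators, all omitted here; the sphere-like fitness in the usage note below is purely a hypothetical stand-in.

```python
import random

def evolve(fitness, dim, pop_size=20, gens=50, sigma=0.1, seed=1):
    # Minimal elitist evolutionary loop over real-valued coefficient
    # vectors: propose Gaussian perturbations of the incumbent, keep
    # any candidate with lower (better) fitness.
    rng = random.Random(seed)
    best = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    best_fit = fitness(best)
    for _ in range(gens):
        for _ in range(pop_size):
            cand = [c + rng.gauss(0.0, sigma) for c in best]
            f = fitness(cand)
            if f < best_fit:  # lower fitness = smaller reconstruction error
                best, best_fit = cand, f
    return best
```

Usage with a toy quadratic error (in the real system, `fitness` would measure reconstruction error of images compressed with the candidate filter coefficients under quantization):

```python
target = [0.5, -0.25, 0.75, 0.0]
fit = lambda c: sum((a - b) ** 2 for a, b in zip(c, target))
best = evolve(fit, dim=4)
```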
  • 9. Kuttan Chandrika, Unnikrishnan Development of a New Algorithm for Automatic Detection and Rating of Squeak and Rattle Events in Automobiles

    MS, University of Cincinnati, 2009, Engineering : Mechanical Engineering

    Squeak and rattle (S and R) performance is one of the main factors affecting the perceived NVH performance of a vehicle. Historically, squeaks and rattles were detected and analyzed using subjective methods. In addition to their inherently subjective nature, high cost and long test durations are major drawbacks of these methods. This thesis aims to arrive at a procedure for the automatic detection and rating of S and R events. The proposed algorithm uses the wavelet transform to extract time-frequency information and Zwicker's loudness model to obtain a perceptual squeak and rattle metric. Instantaneous values of the specific loudness distribution obtained from Zwicker's loudness model are used, along with a leaky-integration procedure, to obtain transient specific loudness time histories, from which the perceived transient loudness time history is obtained. The detection threshold of an S and R event was identified by interpreting jury test results using the perceived transient loudness as the metric. The proposed algorithm showed good promise in detecting S and R events, producing results that correlated well with the jury tests. A modified form of the perceived transient loudness was developed and used for quantitative rating of the severity of S and R events. Subjective tests showed that the rating metric correlates fairly well with subjective ratings of S and R events, except for those with energy distributed predominantly in low frequencies. The new algorithm developed in this work will be able to automate the detection and rating of S and R events with good accuracy.

    Committee: Jay Kim PhD (Committee Chair); Randall Allemang PhD (Committee Member); Allyn Phillips PhD (Committee Member) Subjects: Electrical Engineering; Engineering; Mechanical Engineering
  • 10. RAGHUNATHAN, RAGHAVENDRAN DEVELOPMENT AND COMPARATIVE ASSESSMENT OF CWT BASED DAMAGE DETECTION TECHNIQUES ON SIMULATED GEARBOX SIGNALS

    MS, University of Cincinnati, 2006, Engineering : Mechanical Engineering

    The demand for condition monitoring systems has constantly increased, with more and more companies trying to minimize the effect of machine failures and develop more efficient ways of using resources. Over the years, the role of damage detection systems has changed from a post-processing tool to a prognostic tool, expected to deliver information on the current operating condition of the machine and also to predict the remaining service life. The time and frequency localization properties of the continuous wavelet transform (CWT) offer a viable and improved option for analyzing the transient characteristics of defect signals. One of the main limitations in the development of diagnostic applications is the need for large data resources that include all defect scenarios. Mathematical modeling and computer simulation have been widely used to understand the dynamics of a system and to generate the various defect scenarios encountered during actual operation. This thesis presents an extensive literature review of existing dynamic models for gearbox systems and simulates a simplified model of a single-stage gearbox system. The research presented in this thesis attempts to bridge the gap between simulation and diagnostic application development. A common interface for developing and fine-tuning CWT-based diagnostic applications for a single-stage gearbox is presented. An elaborate mathematical background of continuous wavelet theory is presented, along with a brief overview of existing CWT diagnostic techniques. A comparative analysis of existing fault detection algorithms and CWT-based diagnostics is presented. Finally, based on the results and discussion, a roadmap for the development of CWT-based diagnostic applications is presented.

    Committee: Dr. Randall Allemang (Advisor) Subjects: Engineering, Mechanical
  • 11. NIBHANUPUDI, SWATHI SIGNAL DENOISING USING WAVELETS

    MS, University of Cincinnati, 2003, Engineering : Computer Engineering

    In any type of signal processing, it has been demonstrated that it is important to remove noise from the signal before recognizing or classifying patterns; otherwise, the whole process may give wrong results. In this work, the choice of denoising mechanisms for various types of input data and Gaussian noise is explored in order to increase the signal strength. In this thesis, denoising the input signals using the wavelet transform is discussed. It is shown that the performance of a signal classifier improves when these denoising techniques are applied before the classifier. For our experiments, the classifier is a hybrid intelligent system that employs three important techniques of artificial intelligence: genetic algorithms, neural networks, and fuzzy logic. Along with explaining the denoising algorithm clearly, this work shows the importance of selecting a suitable wavelet for the given input data, and thus shows that the efficiency of a signal denoiser depends on three factors: the thresholding technique, the kind of wavelet used in denoising, and the match between the selected wavelet and the input data. This statement is justified with results from experiments on ECG data which employ different kinds of wavelets, such as Haar, Daubechies, Symlet, and Coiflet. The improvement in denoising obtained by applying vector quantization to the wavelet coefficients before thresholding is also discussed.

    Committee: Dr. Carla Purdy (Advisor) Subjects:
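The thresholding step at the heart of such denoisers can be seen in a minimal one-level Haar soft-thresholding sketch (illustrative only; the thesis compares Haar, Daubechies, Symlet, and Coiflet wavelets on ECG data, and the threshold here is chosen by hand):

```python
import math

def haar_fwd(x):
    # One level of the orthonormal Haar transform.
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_inv(a, d):
    # Exact inverse of haar_fwd.
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return x

def soft(c, t):
    # Soft thresholding: shrink a coefficient toward zero by t.
    return math.copysign(max(abs(c) - t, 0.0), c)

def denoise(x, t):
    # Analyze, soft-threshold the detail coefficients (where small
    # coefficients are mostly noise), then reconstruct.
    a, d = haar_fwd(x)
    return haar_inv(a, [soft(c, t) for c in d])
```

With `t = 0` the reconstruction is exact; with a small positive threshold, low-amplitude detail (noise-like wiggle) is removed while the local averages survive.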
  • 12. Aradhye, Hrishikesh Anomaly Detection Using Multiscale Methods

    Doctor of Philosophy, The Ohio State University, 2001, Chemical Engineering

    In an environment where most process maneuvers are automated, algorithms to detect and classify abnormal trends in process measurements are of critical importance. The petrochemical industry in the United States loses billions of dollars annually due to improper abnormal situation management, and a staggering one in 16 plant accidents results in a fatality. Hence, Statistical Process Control and Monitoring (SPC) has been an active area of research for many decades and a variety of statistical and machine learning-based methods have been developed. However, most existing methods for process monitoring learn the signal characteristics at a fixed scale, and are best for detecting changes at that single scale. In contrast, data from most industrial processes are inherently multiscale in nature due to events occurring with different localization in time, space, and frequency. Unfortunately, existing techniques are unable to adapt automatically to the scale of these features. Many existing methods also require the measurements to be uncorrelated, whereas, in practice, autocorrelated measurements are very common in industrial processes. In this work, we have investigated the use of multiscale techniques to improve upon these shortcomings of existing single-scale approaches. Because of fundamental functional relationships such as process chemistry, energy and mass balances, measurements in multivariate processes are correlated. Our approach learns these correlations and clustering behaviors in the wavelet space using machine learning methods such as Adaptive Resonance Theory (ART-2) and Principal Component Analysis (PCA), resulting in higher detection accuracy coupled with noise reduction. The performance of our method, named Multi-Scale Statistical Process Control and Monitoring (MSSPC), is compared with existing methods based on the average detection delays for detecting shifts of different sizes. 
Our ART-2 based MSSPC detector is currently deployed in a large scale petro (open full item for complete abstract)

    Committee: James Davis (Advisor) Subjects: Engineering, Chemical
  • 13. Chakraborty, Pritam WAVELET TRANSFORMATION BASED MULTI-TIME SCALE METHOD FOR FATIGUE CRACK INITIATION IN POLYCRYSTALLINE ALLOYS

    Doctor of Philosophy, The Ohio State University, 2012, Mechanical Engineering

    Fatigue crack nucleation in polycrystalline alloys is strongly influenced by the crystallographic and morphological features of the underlying microstructure. This necessitates the incorporation of microstructural effects into the fatigue crack nucleation models developed for these alloys. Crystal plasticity based finite element (CPFE) simulations of statistically equivalent microstructures, coupled with a physically motivated crack nucleation law, capture the microstructural effects and can accurately predict the fatigue life of polycrystalline alloys. However, fatigue analysis over a large number of cycles by single time scale CPFE simulations is computationally prohibitive. To alleviate this problem, a wavelet transformation based multi-time scale (WATMUS) method is developed in the present work. In the WATMUS method, the wavelet-based transformation of variables decouples the slow monotonic evolution from the fast oscillatory behavior and permits integration in time steps of cycles, providing a significant computational benefit. The method makes no assumption of periodicity in the evolution of variables and hence can be used to decouple the strongly non-linear crystal plasticity equations. The accuracy and efficiency of the WATMUS method are compared with a conventional single time scale integration scheme through cyclic CPFE simulations of Ti-6242. The load- and microstructure-sensitive fatigue behavior of Ti-6242 is investigated using a physically motivated non-local crack initiation law and the proposed WATMUS-based CPFE simulations. The crack nucleation model is based on dislocation pile-up and stress concentration at grain interfaces, caused by inhomogeneous plastic deformation in the microstructure. The dislocation pile-up length and stress concentration increase with load cycles, causing macroscopic crack nucleation. 
The number of cycles to nucleate a crack and the microstructural feature characteristics at the initiation site, predicted by the model shows good a (open full item for complete abstract)

    Committee: Somnath Ghosh Prof. (Advisor); June K. Lee Prof. (Committee Member); Vish V. Subramaniam Prof. (Committee Member); Jack. J. McNamara Prof. (Committee Member) Subjects: Mechanical Engineering
  • 14. Sankari, Ziad EEG-Based Analysis of Cortical Connectivity in Alzheimer's Disease

    Master of Science, The Ohio State University, 2010, Biomedical Engineering

    Problem Statement: Alzheimer's disease (AD) is the most common cause of dementia among the elderly. Patients with AD are expected to live half as long as those without dementia. Early diagnosis of AD can help extend quality of life for probable patients, as effective drugs to slow the progression of the disease are becoming increasingly available. Currently, there is no single clinical test available for the accurate diagnosis of AD. Objectives: The objective of this research is to classify AD and healthy subjects based on EEG analysis of brain cortical connectivity. Methods: EEGs recorded by 19 scalp electrodes are obtained from 20 probable AD patients and 7 healthy (control) subjects. The EEGs are divided into four sub-bands: delta, theta, alpha, and beta. Cortical connectivity is evaluated using two methods: pairwise-electrode conventional coherence and wavelet coherence. Statistically significant features extracted by the two methods are applied to a probabilistic neural network (PNN) model for classification. Results: A one-way Analysis of Variance (ANOVA) test shows a set of statistically significant differences in electrode coherence between AD patients and controls. For conventional coherence, AD patients present a significant pattern of increased left intrahemispheric frontal coherence in the delta, theta, and alpha bands, increased left intrahemispheric temporo-parietal coherence in all bands, and decreased right temporo-parieto-central coherence in all bands. The decrease in coherence is an indication of lower cortical connectivity. The increase in coherence could be attributed to compensatory mechanisms that attempt to make up for the decrease in memory and cognitive functions caused by the progression of AD. For wavelet coherence, AD coherence values are lower in all cortical regions and within all bands. 
Features used in the PNN model show that wavelet coherence is a better classifier of AD in single-band analysis, while conventional coherence (open full item for complete abstract)

    Committee: Hojjat Adeli PhD (Advisor); Bradley Clymer PhD (Advisor); Yi Zhao PhD (Committee Member); Ashok Krishnamurthy PhD (Committee Member) Subjects: Biomedical Research; Electrical Engineering
  • 15. Twa, Michael Structural classification of glaucomatous optic neuropathy

    Doctor of Philosophy, The Ohio State University, 2006, Physiological Optics

    Glaucoma is a leading cause of blindness. Quantitative methods of imaging the optic nerve head (e.g., confocal scanning laser tomography) are increasingly used to diagnose glaucomatous optic neuropathy and monitor its progression, yet there is considerable controversy about how to interpret and make the best use of this structural information. In this research, machine learning methods are proposed and evaluated as alternatives to current methods of disease classification. First, multiple mathematical modeling methods, such as radial polynomials, wavelet analysis, and B-spline fitting, were used to reconstruct topographic descriptions of the optic nerve head and peripapillary region. Next, features derived from these models were extracted and used as classification features for automated decision tree induction. Decision tree classification performance was compared with conventional techniques such as expert grading of stereographic photos, Moorfields Regression Analysis, and visual field-based standards for the cross-sectional identification of glaucomatous optic neuropathy. Pseudo-Zernike polynomial modeling methods provided the most compact and faithful representation of these structural data, albeit at considerably greater computational expense than the wavelet and B-spline modeling methods. The pseudo-Zernike-based classifier had the greatest area under the receiver operating characteristic (ROC) curve, 85%, compared to 73% and 71% for the wavelet and B-spline-based classification models, respectively. These results show that automated analysis of optic nerve head structural features can identify glaucomatous optic neuropathy in very good agreement with expert assessments of stereographic disc photos. Moreover, these quantitative methods can improve the standardization and agreement of these assessments. Extensions of these methods may provide alternative ways to evaluate structural and functional disease relationships in glaucoma.

    Committee: Mark Bullimore (Advisor) Subjects:
  • 16. Jiang, Xiaomo Dynamic fuzzy wavelet neural network for system identification, damage detection and active control of highrise buildings

    Doctor of Philosophy, The Ohio State University, 2005, Civil Engineering

    A multi-paradigm nonparametric model, the dynamic fuzzy wavelet neural network (WNN) model, is developed for structural system identification of three-dimensional highrise buildings. The model integrates chaos theory (nonlinear dynamics theory), a signal processing method (wavelets), and two complementary soft computing methods (fuzzy logic and neural networks). An adaptive Levenberg-Marquardt least-squares learning algorithm is developed for adjusting the parameters of the dynamic fuzzy WNN model. The methodology is applied to one five-story test frame and two highrise moment-resisting building structures. Results demonstrate that the methodology effectively incorporates the imprecision in the sensor data and balances the global and local influences of the training data. It therefore provides more accurate system identification and nonlinear approximation with fast training convergence. A nonparametric system identification-based model is developed for damage detection of highrise building structures subjected to seismic excitations using the dynamic fuzzy WNN model. The model does not require complete measurements of the dynamic responses of the whole structure. A damage evaluation method is proposed based on a power density spectrum method. The multiple signal classification method is employed to compute the pseudospectrum from the structural response time series. The methodology is validated using experimental data obtained from a 38-story concrete test model. It is demonstrated that the WNN model together with the pseudospectrum method is effective for damage detection of highrise buildings based on a small amount of sensed data. A nonlinear control model is developed for active control of highrise three-dimensional building structures, including geometrical and material nonlinearities, coupling action between lateral and torsional motions, and actuator dynamics. 
A dynamic fuzzy wavelet neuroemulator is developed for predicting the structural response in futur (open full item for complete abstract)
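    The MUSIC pseudospectrum step described above can be sketched in a few lines of numpy. This is a minimal illustration of the general technique, not code from the dissertation; the model order, subspace dimension, and the synthetic two-mode test signal are all assumptions for the example.

    ```python
    import numpy as np

    def music_pseudospectrum(x, model_order=4, subspace_dim=12, n_freqs=512):
        """MUSIC pseudospectrum of a 1-D response time series."""
        # Sample autocorrelation matrix built from lagged snapshots of the series.
        n = len(x) - subspace_dim + 1
        snapshots = np.stack([x[i:i + subspace_dim] for i in range(n)])
        R = snapshots.T @ snapshots / n
        # Eigenvectors with the smallest eigenvalues span the noise subspace.
        _, eigvecs = np.linalg.eigh(R)
        noise = eigvecs[:, :subspace_dim - model_order]
        freqs = np.linspace(0.0, 0.5, n_freqs)  # normalized frequency, cycles/sample
        pseudo = np.empty(n_freqs)
        for k, f in enumerate(freqs):
            steering = np.exp(-2j * np.pi * f * np.arange(subspace_dim))
            proj = noise.conj().T @ steering  # projection onto the noise subspace
            # Near-zero projection (candidate matches a signal mode) -> large peak.
            pseudo[k] = 1.0 / np.real(proj.conj() @ proj)
        return freqs, pseudo

    # Synthetic response with two "modes" buried in measurement noise.
    rng = np.random.default_rng(0)
    t = np.arange(2000)
    x = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 0.12 * t)
    x = x + 0.2 * rng.standard_normal(t.size)
    freqs, pseudo = music_pseudospectrum(x)
    print(freqs[np.argmax(pseudo)])  # peak lands near one of the mode frequencies
    ```

    Peaks in the pseudospectrum mark the dominant modal frequencies; a shift in those peaks between undamaged and damaged response records is the kind of signature such a damage evaluation method can exploit.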

    Committee: Hojjat Adeli (Advisor) Subjects: Engineering, Civil
  • 17. Balster, Eric Video compression and rate control methods based on the wavelet transform

    Doctor of Philosophy, The Ohio State University, 2004, Electrical Engineering

    Wavelet-based image and video compression techniques have become popular areas in the research community. In March 2000, the Joint Photographic Experts Group (JPEG) released JPEG2000, a wavelet-based image compression standard predicted to completely replace the original JPEG standard. In the video compression field, a compression technique called 3D wavelet compression shows promise. Thus, wavelet-based compression techniques have received increasing attention from the research community. This dissertation further investigates the wavelet transform in the compression of image and video signals, along with a rate control method for real-time transfer of wavelet-based compressed video. A pre-processing algorithm based on the wavelet transform is developed for the removal of noise in images prior to compression. The intelligent removal of noise reduces the entropy of the original signal, aiding compressibility. The proposed wavelet-based denoising method shows a computational speedup of at least an order of magnitude over previously established image denoising methods, along with a higher peak signal-to-noise ratio (PSNR). A video denoising algorithm is also included which eliminates both intra- and inter-frame noise. The inter-frame noise removal technique estimates the amount of motion in the image sequence. Using motion and noise level estimates, a video denoising technique is established which is robust to various levels of noise corruption and various levels of motion. A virtual-object video compression method is included. Object-based compression methods have come to the forefront of the research community with the adoption of the MPEG-4 (Moving Picture Experts Group) standard. Object-based compression methods promise higher compression ratios without further cost in reconstructed quality. Results show that virtual-object compression outperforms 3D wavelet compression, with an increase in compression ratio and higher PSNR. 
Finally, a rate-control method (open full item for complete abstract)
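    The wavelet denoising idea described above (transform, shrink the detail coefficients that mostly carry noise, invert) can be sketched with a Haar transform and the universal soft threshold. This is a generic VisuShrink-style illustration in numpy, not the dissertation's algorithm; the Haar wavelet, three decomposition levels, and the synthetic test signal are assumptions for the example.

    ```python
    import numpy as np

    def haar_dwt(x):
        """One level of the orthonormal Haar wavelet transform."""
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        return approx, detail

    def haar_idwt(approx, detail):
        """Inverse of one Haar level (perfect reconstruction)."""
        x = np.empty(2 * len(approx))
        x[0::2] = (approx + detail) / np.sqrt(2)
        x[1::2] = (approx - detail) / np.sqrt(2)
        return x

    def denoise(x, levels=3):
        """Soft-threshold the detail coefficients at each level."""
        details, a = [], x
        for _ in range(levels):
            a, d = haar_dwt(a)
            details.append(d)
        # Noise level estimated from the finest details; universal threshold.
        sigma = np.median(np.abs(details[0])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(x)))
        details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
        for d in reversed(details):
            a = haar_idwt(a, d)
        return a

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 1024)
    clean = np.sin(2 * np.pi * 4 * t)
    noisy = clean + 0.3 * rng.standard_normal(t.size)
    out = denoise(noisy)
    # The denoised signal sits closer to the clean one than the noisy input did.
    print(np.mean((noisy - clean) ** 2), np.mean((out - clean) ** 2))
    ```

    Because shrinking the detail coefficients concentrates the surviving energy in fewer, larger coefficients, the denoised signal also has lower entropy, which is exactly the compressibility gain the pre-processing step targets.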

    Committee: Yuan Zheng (Advisor) Subjects:
  • 18. Kim, Hongjin WAVELET-BASED ADAPTIVE CONTROL OF STRUCTURES UNDER SEISMIC AND WIND LOADS

    Doctor of Philosophy, The Ohio State University, 2002, Civil Engineering

    A new control algorithm, the wavelet-hybrid feedback LMS algorithm, is developed to overcome the shortcomings of classical feedback control algorithms and the filtered-x LMS control algorithm. It integrates a feedback control algorithm such as the LQR or LQG algorithm with the filtered-x LMS algorithm and utilizes wavelet multi-resolution analysis for low-pass filtering of external dynamic excitations. Since the control forces determined by the filtered-x LMS algorithm are adapted by updating the FIR filter coefficients at each sampling time until the output error is minimized, the new control algorithm is effective in controlling both steady-state and transient vibrations. It is shown that the algorithm is capable of suppressing vibrations over a range of input excitation frequencies, unlike classical feedback control algorithms, whose control effectiveness decreases considerably when the frequency of the external disturbance differs from the fundamental frequency of the system. Further, results demonstrate that the wavelet transform can be used effectively as a low-pass filter for control of civil structures without any significant additional computational burden. A new hybrid control system, the hybrid damper-TLCD system, is developed through judicious integration of a passive supplementary viscous fluid damping system with a semi-active tuned liquid column damper (TLCD) system, and its performance is evaluated for control of the responses of 3D irregular buildings under various seismic excitations and for control of wind-induced motion of high-rise buildings. The new hybrid control system utilizes the advantages of both passive and semi-active control systems, improving overall performance and eliminating the need for a large power supply, unlike other proposed hybrid control systems in which active and passive systems are combined. 
Simulation results show that the new hybrid control system is effective in significantly reducing the response of structures under seismic excitations (open full item for complete abstract)
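    The adaptive core of the scheme above, updating FIR filter coefficients at each sampling time to drive the output error down, can be illustrated with a plain LMS filter identifying an unknown FIR plant. This is a textbook LMS sketch in numpy, not the wavelet-hybrid controller itself; the tap count, step size, and plant coefficients are assumptions for the example.

    ```python
    import numpy as np

    def lms_identify(x, d, n_taps=8, mu=0.05):
        """Plain LMS: adapt FIR weights each sample to minimize the output error."""
        w = np.zeros(n_taps)
        buf = np.zeros(n_taps)
        errors = np.empty(len(x))
        for n in range(len(x)):
            buf = np.roll(buf, 1)
            buf[0] = x[n]               # newest input sample at the front
            y = w @ buf                 # adaptive filter output
            e = d[n] - y                # error against the desired signal
            w += mu * e * buf           # stochastic-gradient weight update
            errors[n] = e
        return w, errors

    rng = np.random.default_rng(2)
    x = rng.standard_normal(5000)
    h = np.array([0.6, -0.3, 0.1])      # "unknown" plant the filter must learn
    d = np.convolve(x, h)[:len(x)]      # desired signal: plant output
    w, errors = lms_identify(x, d)
    print(w[:3])  # leading taps converge toward h
    ```

    In the wavelet-hybrid algorithm this same per-sample update runs on a low-pass-filtered (wavelet multi-resolution) version of the excitation, which is what lets it track disturbances away from the structure's fundamental frequency.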

    Committee: Hojjat Adeli (Advisor) Subjects: Engineering, Civil
  • 19. Jones, William A unified approach to orthogonally multiplexed communication using wavelet bases and digital filter banks

    Doctor of Philosophy (PhD), Ohio University, 1994, Electrical Engineering & Computer Science (Engineering and Technology)

    Orthogonally multiplexed communication is a bandwidth-efficient modulation format that places independent QAM symbols on orthogonal pulses. With the traditional approach, this provides a uniform decomposition of the bandwidth, allowing for variable compensation techniques within each subband tailored to the channel distortion peculiar to that band. In this dissertation, we propose orthonormal wavelet basis functions for the orthogonal pulses. In particular, dyadic wavelets are used to provide a nonuniform decomposition of the data bandwidth, while M-band wavelets are used to generate a uniform decomposition. The fundamental characterization of these new waveforms is provided, and it is shown that digital filter banks enable an efficient discrete-time implementation. These filter banks are studied in detail, resulting in the identification and solution of a number of practical problems encountered with FIR filter realizations. Most importantly, an even-order FIR filter bank is developed for applications requiring linear phase. We then evaluate candidate pulse designs and make the important connection between the Meyer scaling function and the Nyquist communication pulse. In the dyadic case, we derive an analytical form for the square-root raised-cosine wavelet. Additionally, this allows us to prove several results in extending the Meyer scaling function to the M-band case. We finally use function approximation of the Meyer scaling function to design linear-phase FIR filters for use in the new even-order filter bank. As an application of the newly developed waveforms, we introduce a unified framework for spread spectrum communication. In particular, dimensionality in time is obtained with traditional DSPN having a scaling function chip pulse, dimensionality in time-frequency is obtained with dyadic wavelets, and dimensionality in frequency is achieved with M-band wavelets. 
An optimum filter is developed which operates on the coefficients obtained by projecting the received (open full item for complete abstract)
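    The core idea, independent symbols carried on orthonormal pulses and recovered by projection, can be sketched with a dyadic Haar filter bank in numpy. This is a toy illustration, not the dissertation's Meyer-based design: the Haar basis, three decomposition levels, and +/-1 symbols (a real-valued stand-in for QAM) are assumptions for the example.

    ```python
    import numpy as np

    def haar_synthesis(approx, details):
        """Inverse Haar transform: build the transmit waveform from subband symbols."""
        a = approx
        for d in reversed(details):     # coarsest subband first
            out = np.empty(2 * len(a))
            out[0::2] = (a + d) / np.sqrt(2)
            out[1::2] = (a - d) / np.sqrt(2)
            a = out
        return a

    def haar_analysis(x, levels):
        """Forward Haar transform: project the received waveform onto each subband."""
        details, a = [], x
        for _ in range(levels):
            d = (a[0::2] - a[1::2]) / np.sqrt(2)
            a = (a[0::2] + a[1::2]) / np.sqrt(2)
            details.append(d)           # finest subband first
        return a, details

    levels, n = 3, 32
    rng = np.random.default_rng(3)
    # Independent +/-1 symbols placed on each orthonormal subband pulse.
    approx_syms = rng.choice([-1.0, 1.0], size=n // 2 ** levels)
    detail_syms = [rng.choice([-1.0, 1.0], size=n // 2 ** (i + 1)) for i in range(levels)]
    tx = haar_synthesis(approx_syms, detail_syms)
    # Orthonormality means projection recovers each symbol, even with mild noise.
    rx_approx, rx_details = haar_analysis(tx + 0.01 * rng.standard_normal(n), levels)
    print(np.sign(rx_details[0]))       # matches the finest-band symbols
    ```

    Because the subbands have different lengths, the dyadic decomposition is nonuniform: coarse subbands carry fewer symbols over longer pulses, which is the flexibility the dissertation exploits over the traditional uniform scheme.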

    Committee: Jeffrey Dill (Advisor) Subjects:
  • 20. Renfrew, Mark A Comparison of Signal Processing and Classification Methods for Brain-Computer Interface

    Master of Sciences (Engineering), Case Western Reserve University, 2009, EECS - Computer Engineering

    Non-invasive Brain-Computer Interface (BCI) methods have been investigated for use in physical therapy of stroke patients with motor deficits. This study investigates several methods of feature extraction and classification for suitability for use in such therapy. Electroencephalographic (EEG) data were collected during a motor task from four healthy control subjects and three subjects with motor deficiencies resulting from stroke. The EEG data were filtered using autoregressive (AR), mu-matched, and wavelet decomposition (WD) methods. The filtered data were classified using Support Vector Machines (SVM) and a linear classifier. Wavelet filtering showed a statistically significant (p < 0.05) improvement in classification accuracy over AR filtering for one subject when using the linear classifier. SVMs showed a statistically significant improvement over the linear classifier for all filtering methods for three subjects. No difference in classification accuracy was seen between linear and nonlinear SVMs.
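    The AR filtering step can be illustrated by fitting least-squares AR coefficients per epoch and feeding them to a simple classifier. This is a generic sketch in numpy, not the study's pipeline: the AR order, the simulated first-order processes standing in for EEG epochs, and the nearest-centroid classifier (in place of an SVM) are all assumptions for the example.

    ```python
    import numpy as np

    def ar_features(epoch, order=6):
        """Least-squares AR coefficients of one epoch, used as a feature vector."""
        # Each row of X holds the `order` past samples that predict the next one.
        X = np.stack([epoch[order - k - 1:len(epoch) - k - 1] for k in range(order)],
                     axis=1)
        y = epoch[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def simulate_ar(a, n, rng):
        """First-order AR process x[t] = a*x[t-1] + noise (stand-in for an EEG epoch)."""
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = a * x[t - 1] + rng.standard_normal()
        return x

    rng = np.random.default_rng(4)
    # Two simulated classes with distinct AR dynamics, 20 training epochs each.
    train = [(ar_features(simulate_ar(0.9, 512, rng)), 0) for _ in range(20)]
    train += [(ar_features(simulate_ar(-0.5, 512, rng)), 1) for _ in range(20)]
    centroids = [np.mean([f for f, c in train if c == label], axis=0)
                 for label in (0, 1)]

    def classify(epoch):
        """Nearest-centroid decision in AR-coefficient feature space."""
        f = ar_features(epoch)
        return int(np.argmin([np.linalg.norm(f - m) for m in centroids]))

    print(classify(simulate_ar(0.9, 512, rng)))  # classified as class 0
    ```

    The study's linear classifier and SVMs operate on feature vectors of exactly this kind; swapping the nearest-centroid rule for a trained SVM changes only the decision boundary, not the feature extraction.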

    Committee: M. Cenk Cavusoglu PhD (Committee Chair); Janis Daly PhD, MS (Committee Member); Wyatt Newman PhD (Committee Member); Mark Dohring PhD (Committee Member) Subjects: Biomedical Research; Electrical Engineering