Search Results (1 - 25 of 100 Results)

Mathari Bakthavatsalam, Pagalavan
Hardware Acceleration of a Neighborhood Dependent Component Feature Learning (NDCFL) Super-Resolution Algorithm
Master of Science (M.S.), University of Dayton, 2013, Electrical Engineering
Image processing and computer vision algorithms allow computers to make sense of pictures and video seen through cameras. These have applications in a large variety of "real time" settings such as surveillance, intelligence gathering, robotics, automobile driving, and aviation, where each frame of video must be processed by a computer as soon as it is captured. However, these algorithms are time-intensive because of their compute-bound nature. In this thesis, a single-image super-resolution algorithm based on Neighborhood Dependent Component Feature Learning (NDCFL) is accelerated on multiple GPUs and multiple CPU cores using NVIDIA's Compute Unified Device Architecture (CUDA), OpenCV, and POSIX threads. Given a low-resolution input, this method uses image features to adaptively learn the regression kernel based on local covariance to estimate the high-resolution image. The accelerated implementation runs 51 times faster than the original implementation for a 590x580 frame and achieves a processing rate close to real time.
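As a rough illustration of the data-parallel strategy described above, the sketch below splits a frame into bands and processes them on several CPU cores; the per-band kernel is a nearest-neighbour placeholder, not the NDCFL regression, and the worker count is an assumption for the example.

```python
# Illustrative sketch only: distributes per-band work across CPU cores, standing in
# for the multi-GPU / multi-core NDCFL pipeline described in the abstract.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def upscale_tile(tile, factor=2):
    # Placeholder for the per-tile super-resolution kernel (nearest-neighbour repeat).
    return np.repeat(np.repeat(tile, factor, axis=0), factor, axis=1)

def upscale_frame_parallel(frame, factor=2, workers=4):
    # Split the frame into horizontal bands, process bands in parallel, then stitch.
    bands = np.array_split(frame, workers, axis=0)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        out_bands = list(pool.map(upscale_tile, bands, [factor] * workers))
    return np.vstack(out_bands)

if __name__ == "__main__":
    low_res = np.random.rand(580, 590).astype(np.float32)   # frame size from the abstract
    high_res = upscale_frame_parallel(low_res)
    print(high_res.shape)                                    # (1160, 1180)
```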

Committee:

Tarek Taha, Ph.D. (Committee Chair); Eric Balster, Ph.D. (Committee Member); Vijayan Asari, Ph.D. (Committee Member)

Subjects:

Computer Engineering; Electrical Engineering; Engineering

Keywords:

GPU; CUDA; GPGPU; Super-resolution on GPU; Acceleration of super-resolution; Image Processing; NDCFL; super-resolution; GPU acceleration; Accelerating Image processing algorithm using GPU; Multi-core acceleration;

Kretzler, Madison Elizabeth
AUTOMATED CURVED HAIR DETECTION AND REMOVAL IN SKIN IMAGES TO SUPPORT AUTOMATED MELANOMA DETECTION
Master of Sciences, Case Western Reserve University, 2013, EECS - Electrical Engineering
If detected early, skin cancer has a 95-100% successful treatment rate; therefore, early detection is crucial, and several computer-aided methods have been developed to assist dermatologists. In skin images, removing hairs without altering the lesion is important for effectively applying detection algorithms. This thesis focuses on the use of image processing techniques to remove hairs by identifying hair pixels contained within a binary image mask using the Generalized Radon Transform. The Radon Transform was adapted to find quadratic curves characterized by rotational angle and scaling. The method detects curved hairs in the image mask for removal and replacement through pixel interpolation. Implementing this technique in MATLAB gives the ability to perform tests rapidly on both simulated and actual images. The quadratic Radon transform performs well in curve detection; however, the research points out the need for better algorithms to improve hair masking, peak detection, and interpolation replacement.
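As a minimal sketch of the voting idea behind a generalized Radon transform for quadratic curves, the code below accumulates votes over curves y = a*x^2 + b only (no rotation or scaling, unlike the thesis) for each foreground pixel of a binary hair mask.

```python
# Each foreground mask pixel votes for every (a, b) pair consistent with it;
# the accumulator peak identifies the dominant quadratic curve.
import numpy as np

def quadratic_radon(mask, a_values, b_values):
    acc = np.zeros((len(a_values), len(b_values)))
    ys, xs = np.nonzero(mask)                      # coordinates of hair-mask pixels
    for i, a in enumerate(a_values):
        # b = y - a*x^2 for every foreground pixel; bin the votes
        b_est = ys - a * xs.astype(float) ** 2
        hist, _ = np.histogram(b_est, bins=len(b_values),
                               range=(b_values[0], b_values[-1]))
        acc[i] += hist
    return acc

mask = np.zeros((64, 64), dtype=bool)
xs = np.arange(64)
mask[np.clip((0.02 * xs**2 + 5).astype(int), 0, 63), xs] = True   # synthetic curved "hair"
acc = quadratic_radon(mask, a_values=np.linspace(0, 0.05, 26), b_values=np.arange(0, 32))
print(np.unravel_index(acc.argmax(), acc.shape))   # peak near a ~ 0.02, b ~ 5
```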

Committee:

Marc Buchner, PhD (Advisor); Kenneth Loparo, PhD (Committee Member); Vira Chankong, PhD (Committee Member)

Subjects:

Electrical Engineering

Keywords:

Image Processing; Hair; Curve Detection; Skin Lesion; Cancer; Radon Transform; Quadratic; Automated Melanoma

Tsanakas, Panagiotis D.
Algorithms and data structures for hierarchical image processing
Master of Science (MS), Ohio University, 1985, Electrical Engineering & Computer Science (Engineering and Technology)

This thesis discusses methods for solving problems associated with hierarchical image processing systems. In particular, the problems dealt with here are hierarchical image data structures, progressive image transmission, and hierarchical edge detection. Original methods for alleviating certain problems associated with time and/or space requirements are proposed. Analytical evaluations of the proposed schemes are also presented, along with comparisons with other relevant methods.

Committee:

Henryk Lozokowski (Advisor)

Keywords:

Algorithms; Data Structures; Hierarchical Image Processing

FERRO, ANDREW F
DEFECTIVE PIXEL CORRECTION AND RESTORATION IN STARING REMOTE SENSOR FOCAL PLANE ARRAYS
MS, University of Cincinnati, 2005, Engineering : Electrical Engineering
A staring sensor can be used to obtain sets of images that describe the various features of its intended target. It is its focal plane array (FPA), or the grid of individual sensing elements, that captures the photometric data within the imaging system. In many cases, some of the individual detectors become corrupted, causing the corresponding image pixels to exhibit characteristics different from those of the rest of the image. These defective pixel intensities can be abnormally noisy or completely saturated, having serious consequences on image quality and adversely affecting automated calculations that are performed on the images. This thesis presents the problems associated with defective pixels and discusses how these pixels can be compensated in a remote FPA non-imaging application requiring very strict photometric accuracy. It provides improved alternatives for defective pixel correction and the numerical analysis regarding the algorithms’ applicability under various non-imaging circumstances.
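As a rough sketch (not the thesis's algorithms), the code below flags pixels that deviate strongly from their local median as defective and replaces them with that median; the threshold and the simulated saturated pixel are assumptions for the example.

```python
# Detect abnormally noisy or saturated pixels by comparing each pixel to its local
# median, then interpolate flagged pixels from their neighbourhood.
import numpy as np
from scipy.ndimage import median_filter

def correct_defective_pixels(frame, k=3, threshold=5.0):
    local_med = median_filter(frame, size=k)
    residual = np.abs(frame - local_med)
    sigma = residual.std() + 1e-9
    defective = residual > threshold * sigma       # abnormally noisy or saturated pixels
    corrected = frame.copy()
    corrected[defective] = local_med[defective]    # replace from the local neighbourhood
    return corrected, defective

frame = np.random.normal(100.0, 2.0, (128, 128))
frame[40, 40] = 4095.0                             # simulate a saturated detector element
fixed, bad = correct_defective_pixels(frame)
print(bad.sum(), fixed[40, 40])
```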

Committee:

Dr. William Wee (Advisor)

Keywords:

Bad pixel replacement; non-imaging; artifacts; principal components; image processing

Peterson, Michael Ray
Evolutionary Methodology for Optimization of Image Transforms Subject to Quantization Noise
Doctor of Philosophy (PhD), Wright State University, 2008, Computer Science and Engineering PhD

Lossy image compression algorithms sacrifice perfect image reconstruction in favor of decreased storage requirements. Modern lossy compression schemes, such as JPEG2000, rely upon the discrete wavelet transform (DWT) to achieve high levels of compression while minimizing the loss of information for image reconstruction. Some compression applications require higher levels of compression than those achieved through application of the DWT and entropy coding. In such lossy systems, quantization provides high compression rates at the cost of increased distortion. Unfortunately, as the amount of quantization increases, the performance of the DWT for accurate image reconstruction deteriorates. Previous research demonstrates that a genetic algorithm can improve image reconstruction in the presence of quantization error by replacing the wavelet filter coefficients with a set of evolved coefficients.

This dissertation develops a methodology for the evolution of digital filters capable of outperforming the DWT for image reconstruction at a given compression rate in the presence of quantization error. This dissertation compares potential fitness measures for evaluating reconstruction error. Experiments compare the usefulness of local versus standard population initialization and mutation operators. In order to perform an efficient yet thorough traversal of the search space, several recombination operators developed specifically for real-valued evolution are evaluated. Additionally, this dissertation presents and develops a novel technique to emphasize the reconstruction of the high-spatial-frequency areas of an image through use of edge detection algorithms and focused evolution. An analysis of the ease of traversal through the fitness landscapes defined by various image quality measures supports the development of a framework for evolving robust image transform filters.

Particular emphasis is placed upon the development of transforms that provide consistently accurate reconstruction of quantized satellite and aerial reconnaissance images. The development of transforms that preserve the intelligence that can be gained from highly compressed images transmitted over a limited bandwidth is of defense and security interest. This dissertation assembles a database of publicly available satellite images collected for a wide range of subjects, including aircraft and airfields, naval bases, army bases, cities, and factories. Experiments employing these images are geared toward the development of filters appropriate for civilian and military aerial reconnaissance applications requiring limited bandwidth for image transmission. Because the evolution employs the DWT algorithm, the resulting filters are easy to implement in hardware appropriate for digital signal processing applications.
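The methodology above centers on evolving real-valued filter coefficients with recombination and mutation operators. The sketch below is a minimal stand-in showing the general shape of such a loop; the fitness function (distance to a Daubechies-4-like target vector) is an invented placeholder, since the real objective would score image reconstruction after a quantized DWT built from the candidate coefficients.

```python
# Generic real-valued evolutionary loop: truncation selection, blend recombination,
# Gaussian mutation. The fitness here is a placeholder, not the dissertation's.
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([0.4830, 0.8365, 0.2241, -0.1294])   # Daubechies-4-like coefficients

def fitness(coeffs):
    return -np.sum((coeffs - TARGET) ** 2)              # stand-in objective

def evolve(pop_size=40, n_gen=200, sigma=0.1):
    pop = rng.normal(0.0, 1.0, (pop_size, TARGET.size))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # truncation selection
        # Blend (arithmetic) recombination of random parent pairs plus Gaussian mutation
        idx = rng.integers(0, len(parents), (pop_size, 2))
        alpha = rng.random((pop_size, 1))
        children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
        pop = children + rng.normal(0.0, sigma, children.shape)
    return pop[np.argmax([fitness(ind) for ind in pop])]

print(evolve())
```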

Committee:

Michael Raymer, PhD (Advisor); Gary Lamont, PhD (Committee Member); Travis Doom, PhD (Committee Member); Mateen Rizki, PhD (Committee Member); Zhiqiang Wu, PhD (Committee Member)

Subjects:

Computer Science; Electrical Engineering

Keywords:

evolutionary computation; genetic algorithms; image processing; wavelets; fitness landscapes; quantization; optimization; signal processing

Hwangbo, Ju Won
Integration of Orbital and Ground Imagery for Automation of Rover Localization
Doctor of Philosophy, The Ohio State University, 2010, Geodetic Science and Surveying
Rover localization is essential to the exploration of space. The availability of sub-meter resolution satellite imagery, especially High Resolution Imaging Science Experiment (HiRISE) onboard the Mars Reconnaissance Orbiter (MRO), has opened the possibility of computing rover locations at higher accuracy by making use of detailed features seen in the satellite orbital images. This dissertation describes a new development towards automation of the rover localization process, using orbital and ground image networks. A HiRISE orbital image network on Mars is constructed based on a rigorous sensor model, bundle adjustment of HiRISE stereo images and absolute positioning using Mars Orbiter Laser Altimeter (MOLA) data. The unique HiRISE sensor configuration consists of 14 CCDs fixed to a focal plane. Due to the complexity of its sensor geometry, two technical issues need to be resolved in HiRISE stereo processing for precision topographic mapping. These technical issues are achieving coherence in the exterior orientation parameters between stereos as well as overlapping CCDs, and accurate geopositioning of HiRISE data without ground-control points. In this research, bundle adjustment strategies based on polynomial function models are applied to improve the exterior-orientation parameters. Disagreement between HiRISE CCDs is handled by the bundle adjustment, using inter-CCD tie points. HiRISE DTM was matched with MOLA DTM and points data to obtain the absolute position of the stereo model. Performance analysis of this new experiment will be given. A ground image network is also constructed using matching of Mars Exploration Rover (MER) stereo images. Rocks detected from both orbital and ground imagery serve as tie points for rover localization. From orbital images, rocks are extracted based on brightness values and the shape of dark spots. Rocks in ground images are extracted through dense stereo matching, rock peak and surface point extraction, and rock modeling. To narrow down a precise rover position, terrain matching is performed using DTMs generated from orbital and ground imagery. Finally, distribution pattern matching is implemented for rocks detected from orbital and ground imagery. The rover position is adjusted based on a 2-D affine transformation obtained from rock pattern matching. The proposed method has been tested for the Spirit rover traverse. Selection of optimal parameter values and quality control is discussed. Experimental results show that the orbital/ground rock matching approach has performed successfully for MER rover localization.
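As a small illustration of the final adjustment step mentioned above, the sketch below estimates a 2-D affine transform from matched point pairs by ordinary least squares; the point coordinates are synthetic and the function name is ours, not the dissertation's software.

```python
# Fit a 2-D affine transform to correspondences (e.g. rocks matched between orbital
# and ground imagery) by solving dst ~ A @ [x, y, 1] in the least-squares sense.
import numpy as np

def fit_affine_2d(src, dst):
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                     # (N, 3) design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2); transpose to get 2x3
    return A.T

src = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 3]], dtype=float)
true_A = np.array([[1.02, 0.05, 3.0], [-0.04, 0.98, -1.5]])
dst = (true_A @ np.hstack([src, np.ones((5, 1))]).T).T
print(fit_affine_2d(src, dst))    # recovers true_A up to numerical noise
```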

Committee:

Rongxing Li (Advisor); Alan Saalfeld (Committee Chair); Alper Yilmaz (Committee Member)

Subjects:

Engineering; Robots; Scientific Imaging; Technology

Keywords:

Photogrammetry; image processing; rover localization

Hoffman, Marc
Efficient Software and Hardware Implementations of the H.264 Entropy Encoders
Master of Science (M.S.), University of Dayton, 2011, Engineering
Part 10 of MPEG-4 describes the Advanced Video Coding (AVC) method widely known as H.264. H.264 is the product of a collaborative effort known as the Joint Video Team (JVT). The final draft of the standard was completed in May of 2003, and since then H.264 has become one of the most commonly used formats for compression of high definition video [9]. The entire H.264/AVC encoder is inherently a sequential process, which typically lends itself to a software solution. Within the H.264 Standard, two entropy encoders are discussed. These two lossless encoding methods are known as Context Adaptive Variable Length Coding (CAVLC) and Context Adaptive Binary Arithmetic Coding (CABAC). CAVLC offers the most basic solution, while CABAC provides increased compression rates at a cost in algorithm complexity. For fast encoding of H.264 bit streams, three solutions are presented in this thesis. Two implementations of CAVLC are discussed, including a software and a hardware solution. Finally, a simple implementation of CABAC is proposed.

Committee:

Eric Balster, PhD (Advisor); Frank Scarpino, PhD (Committee Member); Vijayan Asari, PhD (Committee Member)

Subjects:

Computer Engineering; Electrical Engineering; Information Science

Keywords:

H.264; video compression; image processing

Kakarlapudi, Swarna
APPLICATION OF IMAGE ANALYSIS TECHNIQUES IN FORWARD LOOKING SYNTHETIC VISION SYSTEM INTEGRITY MONITORS
Master of Science (MS), Ohio University, 2004, Electrical Engineering & Computer Science (Engineering and Technology)

This thesis illustrates the application of pattern recognition techniques to consistency checking of stored and sensed data for integrity monitoring of Synthetic Vision Systems (SVS). Consistency checking requires a common domain representation of the data. A shadow region has been proposed as the feature that can enable the representation of the data in a common domain. The data has been converted into binary images with the shadow region being the foreground object. Similarity techniques based on the geometrical properties of the shadow regions have been proposed. In particular, geometric properties such as area and center of mass, and shape moments such as Hu moments, have been investigated in fusion with a template matching technique. This similarity has been used as a criterion to detect horizontal biases in the stored databases.
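A small sketch of the shape-similarity idea is shown below: Hu moments of two binary shadow masks are computed with scikit-image and compared. The masks are synthetic placeholders rather than shadow regions derived from stored terrain and sensed radar data.

```python
# Compare two binary shadow-region masks through their Hu moment invariants.
import numpy as np
from skimage.measure import moments_central, moments_hu, moments_normalized

def hu_signature(mask):
    mu = moments_central(mask.astype(float))
    nu = moments_normalized(mu)
    return moments_hu(nu)                       # 7 translation/scale/rotation invariants

stored = np.zeros((64, 64)); stored[20:45, 15:50] = 1      # "stored database" shadow
sensed = np.zeros((64, 64)); sensed[22:47, 17:52] = 1      # shifted "sensed" shadow
distance = np.linalg.norm(hu_signature(stored) - hu_signature(sensed))
print(distance)    # a small value indicates consistent shadow shapes
```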

Committee:

Maarten Uijt de Haag (Advisor)

Keywords:

Terrain Database Integrity Monitor; Weather Radar; Image Processing

Castro Pareja, Carlos Raul
Real-time 3D elastic image registration
Doctor of Philosophy, The Ohio State University, 2004, Electrical Engineering
Real-time elastic image registration is potentially an enabling technology for the effective and efficient use of many image-guided diagnostic and treatment procedures relying on multimodality image fusion or serial image comparison. Mutual information is currently the best-known image similarity measure for multimodality image registration. A well-known problem with elastic registration algorithms is their high computational cost, with common elastic registration times in the order of hours. This complexity is due both to the large number of image similarity calculations required to converge to the optimal transformation and the time required to calculate image similarity. This dissertation presents an algorithm for elastic image registration that is optimized to minimize the execution time, and a hardware architecture for algorithm acceleration. Novel features of the algorithm include the use of a priori information to limit the search space for possible transformations, linear bound-based grid folding prevention, and adaptive optimization algorithm tolerance. The hardware architecture accelerates mutual information calculation, which is a memory-intensive task that does not benefit from cache-based memory architecture in standard software implementations, but can be efficiently implemented in a pipeline using parallel memory access techniques. Its calculation is performed in two steps, namely mutual histogram calculation and entropy accumulation. The main focus of acceleration is on mutual histogram calculation, which corresponds to about 99% of the overall mutual information calculation time. The architecture employs parallel, independent access to the image and mutual histogram memories and includes a mutual histogram partitioning scheme that allows multiple parallel accesses to the mutual histogram memory. Entropy calculation and accumulation is performed using a novel variable segment size piecewise polynomial approximation implemented using look-up tables. A proof-of-concept implementation of the architecture achieved speedups of 30x for linear registration and 100x for elastic registration against a 3.2 GHz Pentium III Xeon workstation, achieving total elastic registration times in the order of minutes. The total speedup can be increased by using several modules in parallel, thus allowing real-time performance (in the order of seconds). The architecture presented in this dissertation will be a significant tool in enabling the use of elastic image registration outside of research environments.
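For readers unfamiliar with the bottleneck the abstract identifies, the sketch below computes mutual information from a joint (mutual) histogram in plain NumPy; it illustrates only the arithmetic that the hardware pipeline accelerates, not the architecture itself.

```python
# Mutual information of two images via their joint histogram.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)          # marginal over columns
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

a = np.random.rand(128, 128)
b = 0.7 * a + 0.3 * np.random.rand(128, 128)     # correlated "second modality"
print(mutual_information(a, b), mutual_information(a, np.random.rand(128, 128)))
```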

Committee:

Jogikal Jagadeesh (Advisor)

Keywords:

Image registration; Mutual information; Digital systems; Entropy calculation; Image processing

Bugyei, Francis Kyei
Polarity and Endocytic Traffic in the Mammalian Cell
Master of Science (MS), Bowling Green State University, 2014, Biological Sciences
Endocytosis involves selective packaging of substances in endosomes for traffic into cells. The cell traffic pathway can be grouped into two functional compartments: Compartment I, which represents the early stages of sorting, and Compartment II, which represents the final stage of trafficking and degradation. The sorting events and fate of molecules following uptake are dependent on a variety of cell signaling pathways. The tumor promoter phorbol 12-myristate 13-acetate (PMA) is known to cause transient loss of filopodia and increased pinocytosis in cells. Filopodia are finger-like plasma membrane protrusions of a number of animal cells containing receptors involved in cell sensing. In neurons, where filopodia have been extensively studied, there is some evidence of increased endocytosis during filopodia retraction following encounters with repulsive cues. We hypothesized that mammalian cells increase endocytic uptake during tumor promotion (with loss of filopodia), and that polarity may be connected to vesicle trafficking. PMA effects its action through activation of the enzyme protein kinase C (PKC). To classify the traffic events in mammalian cells in tumor promotion, we selected conditions that enabled us to visualize the contents of Compartment II and compare them to the contents of Compartment I. To investigate the effects of PMA on trafficking, three fluorescent markers were applied to rat liver cells of the IAR20 line. Using pixel-by-pixel analysis and image processing, we found an increased accumulation of fluorescent marker in Compartments I and II in PMA-treated cells compared to untreated cells. We also found that functional Compartments I and II are trafficked in different directions inside cells. In another study, on epithelial cells of the rat tracheal line 1000W, we found an insignificant difference in fluorescent marker accumulated by endocytosis on the two halves of cells plated on a haptotactic gradient.

Committee:

Carol Heckman, PhD (Advisor); John Wade, PhD (Committee Member); Hans Wildschutte, PhD (Committee Member)

Subjects:

Biology; Biophysics; Cellular Biology; Molecular Biology

Keywords:

endocytosis; pinocytosis; macropinocytosis; haptotactic gradient; fluorescent microscopy; image processing; PMA, protein kinase C; filopodia; Cdc42; Cell Polarity; Cell traffic; Rat liver cells; Rat tracheal epithelial cells; haptotaxis

Shalev, Ronny Y
AUTOMATED MACHINE LEARNING BASED ANALYSIS OF INTRAVASCULAR OPTICAL COHERENCE TOMOGRAPHY IMAGES
Doctor of Philosophy, Case Western Reserve University, 2016, EECS - Electrical Engineering
Coronary artery disease (CAD) is the leading cause of death in the world. Most acute coronary events (e.g. heart attacks) are due to the rupture of atherosclerotic plaques inside the arteries; however, calcified lesions are the most widely treatable, typically by stent implantation via percutaneous coronary intervention (PCI). Intravascular Optical Coherence Tomography (IVOCT) imaging has the resolution, contrast, and penetration depth to characterize coronary artery plaques. Conventional manual evaluation of IVOCT images, based on qualitative interpretation of image features, is tedious and time consuming. The aim of this PhD dissertation was to develop advanced algorithms to fully automate the task of plaque characterization, thereby significantly reducing image analysis time, enabling intervention planning, and increasing IVOCT data usability. We based our algorithms on machine learning combined with advanced image processing techniques. We developed a processing pipeline on a 3D local region of support for estimation of optical properties of atherosclerotic plaques from coronary artery IVOCT pullbacks. Performance was assessed in comparison with observer-defined standards using clinical pullback data. Values (calcium 3.58±1.74mm−¹, lipid 9.93±2.44mm−¹ and fibrous 1.96±1.11mm−¹) were consistent with previous measurements. We then created a method to automatically classify plaque tissues as fibrous, calcified, or lipid-rich. For this multi-class problem, we used one-versus-rest SVM classifiers for each of the three plaque types, rules to exclude many voxels called “other,” and both physics-inspired and local texture features to classify voxels. Experiments on the clinical training data yielded 5-fold, voxel-wise accuracy of 87.7±8.6%, 96.7±4.9% and 97.3±2.4% for calcified, lipid-rich and fibrotic tissues, respectively. Experiments on the independent validation data (ex-vivo image data accurately labeled using registered 3D microscopic cryo-imaging, used as ground truth) yielded overall 87.1% accuracy, indicating generalizability. This was followed by the development of a novel approach for real-time calcium segmentation. The trained algorithm was evaluated on the independent validation data. We achieved 5-fold cross validation calcium classification with an F1 score of 93.7±2.7%, recall of ≥89%, precision of ≥97%, and running time of 2.6 seconds per frame, suggesting possible on-line use. We conclude with an application intended to be complementary to the cardiologist in data analysis, both off-line and on-line.
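A hedged sketch of the one-versus-rest SVM step is shown below using scikit-learn on synthetic feature vectors; the physics-inspired and texture features from IVOCT voxels are not reproduced, so class means and feature dimensions are invented for illustration.

```python
# One-versus-rest SVM classification of three synthetic "tissue" classes, evaluated
# with 5-fold cross-validation (mirroring the evaluation protocol in the abstract).
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Stand-in feature vectors for fibrous, calcified, and lipid-rich voxels
X = np.vstack([rng.normal(m, 1.0, (200, 5)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 200)

clf = OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean(), scores.std())
```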

Committee:

David Wilson, PhD (Advisor); Soumya Ray, PhD (Committee Member); Hiram Bezerra, PhD, MD (Committee Member); Murat Cavusoglu, PhD (Committee Chair); Francis Merat, PhD (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Medical Imaging

Keywords:

Machine learning, support vector machine, texture classification, SVM, Image Processing, optical coherence tomography, plaque classification, calcium classification, intracoronary plaque, intravascular, OCT, CAD

Kohram, Maryam
A Combined Microscopy and Spectroscopy Approach to Study Membrane Biophysics
Master of Science, University of Akron, 2015, Physics
Cellular membranes are complex assemblies, and a clear understanding of the physical interactions during their function is of paramount importance. Here, we perform two separate studies for a better understanding of the interactions between membrane compartments and other biomolecules. In the first study, we developed a coupler to integrate a high-sensitivity spectrometer with an epi-fluorescence microscope to measure fluorescence spectra of small-area samples (400 square micrometers). We applied our measurements to standard samples, performed three corrections on them, and, after a linear demixing process, obtained the percentage of FRET efficiency. The development of this method will be advantageous in future single-cell studies for detecting population heterogeneity. In the second study, we investigated the dynamics of membrane lipids in a supported lipid bilayer. Single particle tracking total internal reflection fluorescence microscopy (TIRF) was used to study the lateral mobility of phosphatidylinositol phosphate (PIP) lipids with and without an adsorbed polycationic polymer, quaternized polyvinylpyridine (QPVP). Diffusion coefficients were determined with Brownian and anomalous models. Our results indicate a decrease in the diffusion coefficient of the lipids in the presence of QPVP compared to its absence, revealing their interaction.
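As a rough illustration of the Brownian analysis mentioned in the abstract, the sketch below estimates a lateral diffusion coefficient by fitting MSD(tau) = 4*D*tau to a simulated 2-D track; the time step and true D are assumptions for the example, not values from the thesis.

```python
# Estimate a 2-D diffusion coefficient from a single-particle track via the MSD slope.
import numpy as np

def msd(track):
    # Mean squared displacement at each lag for a 2-D trajectory of shape (N, 2).
    lags = np.arange(1, len(track) // 4)
    return lags, np.array([np.mean(np.sum((track[l:] - track[:-l]) ** 2, axis=1)) for l in lags])

rng = np.random.default_rng(2)
dt, d_true = 0.05, 0.5                                   # seconds, um^2/s (assumed)
steps = rng.normal(0.0, np.sqrt(2 * d_true * dt), (2000, 2))
track = np.cumsum(steps, axis=0)

lags, m = msd(track)
d_est = np.polyfit(lags * dt, m, 1)[0] / 4.0             # slope / 4 for 2-D Brownian motion
print(d_est)                                              # close to d_true = 0.5
```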

Committee:

Adam Smith (Advisor); Jutta Luettmer-Strathmann (Committee Chair); Sergei Lyuksyutov (Committee Member)

Subjects:

Biophysics; Chemistry; Physical Chemistry; Physics

Keywords:

FRET; fluorescence microscopy; fluorescence spectroscopy; single particle tracking; TIRF; membrane biophysics; single cell imaging; image processing; lipid bilayer; diffusion coefficient; lipid-polymer interactions; Brownian diffusion; lipid mobility

Regmi, Hem Kanta
A Real-Time Computational Decision Support System for Compounded Sterile Preparations using Image Processing and Artificial Neural Networks
Master of Science, University of Toledo, 2016, Electrical Engineering
The purpose of this research is to design a computational decision support system (DSS) for compounded sterile preparations (CSP). Error-free compounding is dependent on the proper selection of components and adherence to procedure during compounding. A material selection system (MSS) based on a graphical user interface (GUI), coupled with a barcode scanner and back-end database, has been developed and tested for proper selection of items involving three different medication orders (MOs). A video processing system (VPS) has been implemented in MATLAB that evaluates the live video feed from the compounding hood to monitor the compounding procedure when compounding the MOs. SURF detection is used to detect and locate compounding items placed in the hood. Various algorithms have been developed and tested to enhance the accuracy and robustness of the VPS. The DSS is further improved with the integration of another digital camera to ensure that the correct volume of medicine is drawn with the appropriate syringe during the whole compounding process. Template matching and SURF object detection applied to the digital image of the syringe, along with a minimum distance classifier and artificial neural networks (ANNs) trained on data collected from several experimental observations, were explored for classification and volume measurement of a syringe. The MSS was tested for all items used in compounding the MOs and performed error-free. The VPS evolved from VPS.01 and VPS.02 to VPS.03. The greatest accuracy and ability for real-time realization were seen in VPS.03. All deliberate mistakes made when compounding the tested medication orders were captured by VPS.03. Luer-lock syringes of different sizes from 1 mL to 30 mL were tested, and an accuracy of over 95% was obtained with very high precision. The new computational decision support system facilitates error-free selection of components and is able to monitor and evaluate the compounding process and correct volume measurement in real time. The platform may be used in CSP compounding rooms to audit techniques and procedures as well as in training or educational settings.

Committee:

Vijay Devabhaktuni, Dr. (Committee Chair); Jerry Nesamony, Dr. (Committee Co-Chair); Devinder Kaur, Dr. (Committee Member); Ezzatollah Salari, Dr. (Committee Member)

Subjects:

Electrical Engineering

Keywords:

Compounding Sterile Preparations, Graphical User Interface, Artificial Neural Networks, Image Processing, Video Processing, Decision Support System, Medication Order, Object Detection, Correlation Calculation, Connected Component Analysis

Khan, Preoyati
Cluster Based Image Processing for ImageJ
MS, Kent State University, 2017, College of Arts and Sciences / Department of Computer Science
ImageJ is a Java-based open-source desktop application that is widely used in the image processing community. It is a combination of various user-authored plugins. The developer API can be used to implement new plugins for specific image processing tasks or analyses. However, ImageJ was not designed to work on a distributed system; it is still typically run on single machines to process large medical images, which can take several hours to complete. In this thesis, we adapt a number of important and widely used ImageJ plugins to work within a clustered architecture. For easy communication among the distributed nodes of the cluster, we used a shared-drive cluster architecture. One of the main challenges of running these plugins on a cluster is generating combined final results with high accuracy from the outputs produced by the original plugin running on different nodes. We implement several ImageJ plugins to distribute tasks and generate combined results. In particular, we consider the 3D object counter plugin for testing the developed cluster-based system. Experimental results on test images show high accuracy and similarity with single-machine results. However, because of the extra overhead of task distribution and result gathering, our system improves execution time only for large images.

Committee:

Cheng Chang Lu (Advisor); Robert J. Clements (Advisor); Arvind K. Bansal (Committee Member); Kambiz Ghazinour (Committee Member)

Subjects:

Computer Science

Keywords:

ImageJ; Fiji; Plugin; Macro; Image Processing; Distributed Computing; Cluster

Chen, Hoover J.
A minicomputer-multiple microprocessor system for gait analysis using television and force plate data
Doctor of Philosophy, The Ohio State University, 1979, Graduate School

Committee:

Not Provided (Other)

Subjects:

Engineering

Keywords:

Image processing; Human locomotion

Li, Yan
Image Processing and Clinical Applications of Anterior Segment Optical Coherence Tomography
Doctor of Philosophy, Case Western Reserve University, 2008, Biomedical Engineering

Optical coherence tomography (OCT) is an emerging imaging modality. It provides in vivo cross-sectional images with micron-level resolution. OCT is attractive for imaging the anterior eye segment because of its high resolution, non-contact operation, and easy anatomical interpretation. The fast data acquisition rate (>2000 axial scans per second) of current anterior segment OCT systems allows the measurement of corneal dimensions without significant motion error. It also generates a huge amount of data. Manual anatomic reconstruction and measurement are time consuming and operator-dependent. Automated image processing and analysis techniques can overcome these limitations and make routine clinical use of OCT possible.

A “dewarping” method was developed to remove the distortion due to refraction and transition of the group index at the interfaces. Image averaging and smooth filtering techniques were used to suppress the speckle noise. Image segmentation algorithms were developed to automatically detect the anterior and posterior corneal boundaries and the LASIK flap interface. The boundaries detected by the computer algorithm were compared with manual segmentation results to evaluate the performance of the algorithm.

Three clinical studies were conducted to investigate anterior segment OCT applications in corneal thickness mapping, keratoconus screening, and LASIK thickness measurements.

The results showed that: 1. The anterior segment OCT could accurately map the corneal thickness over a wide area, both in normal and opacified eyes. The corneal thickness measurements were highly reproducible. 2. An OCT pachymetry-based method could detect abnormal corneal thinning in keratoconic eyes. 3. The anterior segment OCT was capable of measuring corneal, flap, and stromal bed thickness profiles and maps after LASIK.

Committee:

Andrew M. Rollins, PhD (Committee Chair); David Huang, MD, PhD (Advisor); Kenneth D. Singer, PhD (Committee Member); Xin Yu, ScD (Committee Member)

Subjects:

Biomedical Research; Ophthalmology

Keywords:

Optical Coherence Tomography; Anterior eye Segment; Image Processing; Corneal Thickness Map; Refractive Surgery; LASIK

Stone, Meredith Elise
Imaging of Blood Vessels: Parameter Estimation in MRI and Cryo-Imaging Techniques
Master of Sciences (Engineering), Case Western Reserve University, 2008, Biomedical Engineering
Atherosclerotic vulnerable plaque imaging is an important clinical goal. Cryo-imaging can validate vessel tissue classification; a validation study showed overall sensitivity of 89% using naïve raters. However, cryo-imaging cannot be clinically translated. Instead, MRI microcoil imaging shows promise. An important component of MR tissue identification is parameter estimation of T1, T2, and proton density. A new parameter estimation method based on Maximum Likelihood Estimation, accounting for Rician noise bias, with Markov Random Field spatial regularization was developed. A flat-image test case shows decreased noise levels comparable to neighborhood averaging. For the lowest signal-to-noise ratio test case, neighborhood averaging decreased the standard deviation of the error to 34% of that of the single-pixel techniques, while the new method decreased it to 37-43%. Tests using a simulated phantom image also show well-preserved edges, which is not possible with neighborhood averaging. Finally, the algorithm was tested on brain data to fit T2*, showing visually positive results.
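A minimal sketch of maximum-likelihood fitting under Rician noise, the noise model the abstract says the estimator accounts for, is given below; it fits a mono-exponential T2 decay to simulated magnitude data and omits the Markov Random Field spatial regularization. The echo times, noise level, and starting guess are assumptions for the example.

```python
# Maximum-likelihood fit of a T2 decay under a Rician noise model.
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e

def neg_log_likelihood(params, te, m, sigma):
    s0, t2 = params
    if s0 <= 0 or t2 <= 0:
        return np.inf                               # keep the search in the valid domain
    a = s0 * np.exp(-te / t2)                       # noiseless signal model
    z = m * a / sigma**2
    log_i0 = np.log(i0e(z)) + np.abs(z)             # numerically stable log I0(z)
    ll = np.log(m / sigma**2) - (m**2 + a**2) / (2 * sigma**2) + log_i0
    return -np.sum(ll)

rng = np.random.default_rng(3)
te = np.linspace(10, 200, 16)                       # echo times (ms), assumed
true_s0, true_t2, sigma = 1000.0, 60.0, 40.0
clean = true_s0 * np.exp(-te / true_t2)
noisy = np.abs(clean + rng.normal(0, sigma, te.size) + 1j * rng.normal(0, sigma, te.size))

res = minimize(neg_log_likelihood, x0=[noisy[0], 50.0], args=(te, noisy, sigma),
               method="Nelder-Mead")
print(res.x)                                        # approximately [1000, 60]
```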

Committee:

David Wilson (Advisor); Mark Pagel (Committee Member); Jeff Duerk (Committee Member)

Subjects:

Biomedical Research; Engineering; Radiology

Keywords:

MRI; medical imaging; parameter estimation; blood vessels; cryo-imaging; T1; T2; proton density; image processing

Hur, Misun
Neighborhood Satisfaction, Physical and Perceived Characteristics
Doctor of Philosophy, The Ohio State University, 2008, City and Regional Planning

Researchers from many disciplines have examined neighborhood satisfaction. The research has focused more on the personal, social, and psychological dimensions than on the physical environment in relation to neighborhood satisfaction. When studies include physical characteristics, they tend to look at the perceived physical characteristics, possibly overlooking the impact of the physical environmental features on neighborhood satisfaction. Furthermore, while studies have identified physical attributes of environments related to “aesthetic” response, research has not examined the link between physical and judgmental ratings of those attributes, nor has it looked at the link between those measures and neighborhood satisfaction.

This dissertation builds a conceptual model that focuses on the multi-level connections among the physical, perceptual, and evaluative measures of the neighborhood environment as determinants of overall neighborhood satisfaction.

It uses an on-line survey, on-site observations, and combined satellite image processing and GIS spatial analysis. The survey asks residents to rate their perceptions and evaluations of the neighborhood environment, and then to rate their overall neighborhood satisfaction. A total of 382 survey responses were collected in Franklin County, Ohio. Observations assess the conditions of the fixed and semi-fixed physical environmental elements on 2021 blocks. For observations, this study designed, used, and tested the neighborhood physical environment inventory (NPEI) instrument on a handheld Personal Digital Assistant (PDA). The Normalized Differential Vegetation Index (NDVI) method was combined with GIS spatial analysis.
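For reference, the NDVI computation named above reduces to a single expression; the sketch below applies it to synthetic red and near-infrared bands and leaves out the GIS integration described in the dissertation.

```python
# Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
import numpy as np

def ndvi(nir, red, eps=1e-9):
    return (nir - red) / (nir + red + eps)

nir = np.random.rand(100, 100)     # stand-in near-infrared band
red = np.random.rand(100, 100)     # stand-in red band
index = ndvi(nir, red)
print(index.min(), index.max())    # values fall roughly in [-1, 1]
```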

To assess both structural and measurement relations in a single model, this study uses a Structural Regression Model based on the two-step modeling process. The findings reveal the significant direct and indirect effects of physical, perceptual, and evaluative measures on neighborhood satisfaction. The results extend the understanding of the neighborhood environment and residents' quality of life. Methodologically, the study demonstrates advances in data collection through its on-line survey, on-site observation via a personal digital assistant (PDA), and GIS.

Committee:

Jack L. Nasar, PhD (Advisor); Jean-Michel Guldmann, PhD (Committee Member); Hazel Morrow-Jones, PhD (Committee Member)

Subjects:

Area Planning and Development; Urban Planning

Keywords:

Neighborhood Satisfaction; Physical and Perceived Environmental Attributes; On-line Survey; Satellite Image Processing; GIS; Neighborhood Physical Environment Inventory; PDA; NDVI; Structural Regression Model; CFA; Incivilities; Vacant House

Kale, Mehmet Cemil
Multispectral co-occurrence analysis for medical image processing
Doctor of Philosophy, The Ohio State University, 2007, Electrical Engineering
Presented is a new computer aided multispectral image processing method which is used in 3 spatial dimensions and 1 spectral dimension where the parametric dynamic contrast enhanced magnetic resonance breast maps derived from voxelwise model-fitting represent the spectral dimension. The method is based on co-occurrence analysis using a 3-dimensional window of observation which introduces an automated identification of suspicious lesions. The co-occurrence analysis defines 21 different statistical features, a subset of which were inputted to a neural network classifier where the assessments of voxelwise majority of a group of radiologist readings were used as the gold standard. The voxelwise true positive fraction (TPF) and false positive fraction (FPF) results of the computer classifier were statistically indistinguishable from the TPF and FPF results of the readers using a one sample paired t-test. In order to observe the generality of the method, two different groups of studies were used with widely different image acquisition specifications.
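As a simplified illustration of the co-occurrence counting that the statistical features above are built from, the sketch below computes a 2-D, single-offset gray-level co-occurrence matrix in NumPy; the dissertation uses a 3-D window over a multispectral volume and 21 features, none of which are reproduced here.

```python
# Gray-level co-occurrence matrix for one offset, plus a single derived statistic.
import numpy as np

def cooccurrence_matrix(img, levels=8, offset=(0, 1)):
    q = np.floor(img / img.max() * (levels - 1)).astype(int)   # quantize to gray levels
    dy, dx = offset
    glcm = np.zeros((levels, levels))
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)                  # count co-occurring pairs
    return glcm / glcm.sum()

img = np.random.rand(64, 64)
glcm = cooccurrence_matrix(img)
contrast = sum((i - j) ** 2 * glcm[i, j] for i in range(8) for j in range(8))
print(contrast)      # one of many statistics derivable from the co-occurrence matrix
```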

Committee:

Bradley Clymer (Advisor)

Keywords:

Multispectral Image Processing; Statistical Co-occurrence Analysis; DCE-MRI; Breast; Neural Networks

Xinyu, Chang
Neuron Segmentation and Inner Structure Analysis of 3D Electron Microscopy Data
MS, Kent State University, 2013, College of Arts and Sciences / Department of Computer Science
The 3D images generated by serial block-face scanning electron microscopy (SBFSEM) are very large in scale; their data size can easily exceed 100 GB. Each 3D image is composed of hundreds of slices at very high resolution. Due to the immensity of the data, any manual operation or user intervention involved in the analysis is incredibly time consuming. Current methods need the user to set the ROI or to correct the execution result, which is a huge amount of work. In this thesis we introduce a completely automatic workflow to process and analyze the large image data. The whole workflow is implemented as a plugin based on an open-source framework, ImageJ, which is a public-domain, Java-based image processing program. The plugin can automatically identify the seed point to start the whole process, segment single axons from the image, analyze their inner structures, obtain information for each single axon such as volume, number of mitochondria, and number of smooth endoplasmic reticulum (SER), and finally visualize the analysis result.

Committee:

Cheng-Chang Lu, Dr. (Advisor); Ye Zhao, Dr. (Committee Member); Melton Austin, Dr. (Committee Member); Clements Robert , Dr. (Committee Member)

Subjects:

Computer Science

Keywords:

Neuron; 3D Electron Microscopy; region growth; ImageJ; mitochondria; large-scale image processing

Gopalan, Sowmya
Estimating Columnar Grain Size in Steel-Weld Images using Image Processing Techniques
Master of Science, The Ohio State University, 2009, Electrical and Computer Engineering
Image processing for material science is a relatively new research area that has been gaining importance lately. The goal is to automate the image analysis, which traditionally has been, and still is, primarily human-driven. The reasons for automation are manifold, with two major reasons being speed and precision. This thesis presents an image processing algorithm to segment columnar laths in gray-scale steel-weld images. It employs many tools from image processing theory, such as median filtering for noise removal, Otsu's segmentation, and dilation/erosion from morphology. The image processing is primarily a shape-recognition problem where the goal is to identify columnar grain structures from these images. We use "skeleton" approaches to accomplish this. A theoretical analysis of the various image processing techniques is presented in the beginning, followed by results of the algorithm. The correctness of the algorithm has been verified both qualitatively (visually) and quantitatively (intercept measure) against human analysis.
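A hedged sketch of the pipeline stages named in the abstract (median filtering, Otsu segmentation, skeletonization) is shown below on a synthetic image; the shape-recognition step that identifies columnar laths is not included, and the synthetic "lath" is an assumption for the example.

```python
# Median filter -> Otsu threshold -> morphological skeleton, on a synthetic image.
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

rng = np.random.default_rng(4)
image = rng.normal(0.3, 0.05, (128, 128))
image[40:90, 20:110] += 0.4                      # synthetic bright "lath"
image = np.clip(image, 0, 1)

smoothed = median_filter(image, size=3)          # noise removal
binary = smoothed > threshold_otsu(smoothed)     # Otsu segmentation
skeleton = skeletonize(binary)                   # 1-pixel-wide skeleton of each region
print(binary.sum(), skeleton.sum())
```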

Committee:

Ashok Krishnamurthy, PhD (Advisor); Sudarsanam Babu, PhD (Committee Member)

Subjects:

Electrical Engineering

Keywords:

Image Processing; Skeletons; Shape Recognition; Segmentation; Otsu; Steel Welds; Grain Size

KHAN, JAWAD BASIT
iPACE-V1: A PORTABLE ADAPTIVE COMPUTING ENGINE
MS, University of Cincinnati, 2002, Engineering : Computer Engineering
The iPACE-V1 (Image Processing Adaptive Computing Engine) is a portable, reconfigurable hardware platform designed in the Digital Design Environments Laboratory at the University of Cincinnati. iPACE-V1 was specifically designed for real-time, in-field image processing applications. Adaptive computing systems can be broadly defined as systems that can modify their digital hardware to match the requirements of the application at hand. Field Programmable Gate Arrays (FPGAs) are essential building blocks of such systems. iPACE-V1 has three Xilinx Virtex FPGAs: one houses the controller module, another acts as the main user-programmable module, and the data capture module is implemented in the last one. A maximum of 800,000 logic gates are available for computing in the form of FPGAs. Furthermore, 4 Mbytes of ZBT (Zero Bus Turnaround) SRAM is interfaced. In addition, the board supports a maximum of 1 Gbyte of SDRAM. For non-volatile data storage, 4 Mbytes of FLASH ROM are provided. Two serial ports along with a USB port are also provided. A camera is attached to provide video data, and a small LCD is interfaced for image output. Every effort was made to incorporate as many debugging features as possible: a programmable clock, observable memories, partial reconfiguration, and FPGA read-back are some features that top this list. Several controller cores have been written for the various subsystems in iPACE-V1. These cores enable the user to efficiently exploit the available resources. This thesis discusses the hardware architecture of iPACE-V1 along with the VHDL controller cores. We also show the functional correctness and effectiveness of iPACE-V1. We have developed two demonstration examples for iPACE-V1: a frame grabber and a background elimination application. The frame grabber is implemented to demonstrate the functional correctness of the hardware, whereas the background elimination application is more performance-oriented and is used to show the effectiveness of this architecture for real-time image processing.

Committee:

Dr. Ranga Vemuri (Advisor)

Subjects:

Computer Science

Keywords:

reconfigurable computing; adaptive computing; FPGAs; mobile computing; image processing

Yang, Rong
Vehicle Detection and Classification from a LIDAR equipped probe vehicle
Master of Science, The Ohio State University, 2009, Electrical and Computer Engineering
Vehicle detection and classification is important in traffic analysis and management. Various sensing techniques can be used in this field. While most preceding work relies on sensors mounted along the roadway, this study develops a mobile platform using a LIDAR-equipped probe vehicle to collect ambient traffic data while it drives. A vehicle detection method is developed to extract on-road vehicles from the background. The system employs two LIDAR sensors to measure the speed of the detected vehicles and then their length. A vehicle classification scheme is developed using length and height to sort the vehicles into six pre-defined categories. Ground truth data were generated from a developed GUI interface. Both the vehicle detection algorithm and the vehicle classification algorithm are evaluated by comparing the LIDAR measurements with the ground truth data, with good results.
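As a toy illustration of a length/height classification rule of the kind described above, the sketch below sorts measurements into categories; the thresholds and category names are invented for the example and are not the thesis's six pre-defined classes.

```python
# Rule-based vehicle classification from estimated length and height (illustrative values).
def classify_vehicle(length_m, height_m):
    if length_m < 5.5:
        return "motorcycle" if height_m < 1.2 else "passenger car"
    if length_m < 9.0:
        return "van/pickup" if height_m < 2.3 else "single-unit truck"
    return "bus" if height_m > 2.8 and length_m < 14.0 else "combination truck"

for l, h in [(4.5, 1.5), (2.0, 1.0), (12.0, 3.2), (18.0, 4.0)]:
    print((l, h), classify_vehicle(l, h))
```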

Committee:

Benjamin Coifman (Advisor); Charles Toth (Committee Member)

Subjects:

Electrical Engineering

Keywords:

vehicle detection; vehicle classification; image processing; LIDAR sensor; moving observer

Ravishankar, Mahesh
Automatic Parallelization of Loops with Data Dependent Control Flow and Array Access Patterns
Doctor of Philosophy, The Ohio State University, 2014, Computer Science and Engineering
With the era of increasing clock speeds coming to an end, parallel computing architectures have now become mainstream. Given the wide range of architectures available today that can be used to exploit parallelism, from multicore CPUs to GPGPUs to distributed memory machines, adapting applications for efficient execution on all these architectures poses a significant challenge. Scientific computing applications exhibit significant coarse-grained parallelism. Domain experts have exploited this to target distributed memory machines through the use of Message Passing Interface (MPI) libraries. Many such applications have been shown to scale to hundreds of thousands of processors. While powerful, programming in MPI is tedious and error-prone, with a significant portion of the parallelized application dedicated to managing communication and synchronization between the processes. Developing compiler techniques that can automatically generate parallel distributed-memory versions of such codes is challenging due to the presence of data-dependent control flow and data access patterns. In this thesis we develop compiler algorithms for automatic parallelization of a class of computations common to many scientific applications. Under the inspector/executor paradigm, the generated code partitions and executes the computation in a load-balanced manner, while reducing the communication costs. This approach is further enhanced by developing a framework capable of expressing both affine and non-affine parts of the code. This enables the use of polyhedral compilation tools to analyze parts of the computation which can be completely characterized statically. The effectiveness of the developed approach is demonstrated on several benchmarks and real-world applications. Image processing applications, on the other hand, exhibit significant fine-grained parallelism and are well suited for architectures with Single-Instruction Multiple-Data (SIMD) processing units, such as the vector processing units on CPUs or graphics processing units like the NVIDIA Kepler GPU. Such architectures can be targeted either through hardware-specific intrinsics (like SSE/AVX to target vector units) or through language support for writing device-specific code (like CUDA to target NVIDIA GPUs). State-of-the-art compilers like ICC, Pluto, etc. can automatically implement architecture-specific optimizations for programs written in low-level languages like C/C++. While effective, these approaches still require significant effort from the application developers. Programs written this way are less portable and difficult to maintain. Domain Specific Languages offer a convenient abstraction, allowing programmers to specify the computation at a high level, while still allowing the compiler to use the semantics of operations to generate efficient code targeting multiple architectures. In this thesis, we develop Forma, a DSL that provides a convenient syntax to specify many common image processing pipelines in a succinct manner. The compiler backend can generate code to target both multicore CPUs with SIMD units and NVIDIA GPUs, while making use of device-specific features like texture units on GPUs. The ease of programming in Forma is demonstrated by porting complex image processing pipelines like Laplacian Pyramids and Exposure Fusion. The performance of the generated code is compared against a state-of-the-art DSL for image processing, Halide.

Committee:

P Sadayappan, Prof. (Advisor); Atanas Rountev, Prof. (Committee Member); Gagan Agrawal, Prof. (Committee Member)

Subjects:

Computer Engineering

Keywords:

Distributed Memory, Inspector-Executor, DSL, Image Processing, Irregular Computations, Compiler

Jackson, Brian Patrick
Automated Complexity-Sensitive Image Fusion
Doctor of Philosophy (PhD), Wright State University, 2014, Computer Science and Engineering PhD
To construct a complete representation of a scene with environmental obstacles such as fog, smoke, darkness, or textural homogeneity, multisensor video streams captured in different modalities are considered. A computational method for automatically fusing multimodal image streams into a highly informative and unified stream is proposed. The method consists of the following steps:

1. Image registration is performed to align video frames in the visible band over time, adapting to the nonplanarity of the scene by automatically subdividing the image domain into regions approximating planar patches.
2. Wavelet coefficients are computed for each of the input frames in each modality.
3. Corresponding regions and points are compared using spatial and temporal information across various scales.
4. Decision rules based on the results of multimodal image analysis are used to combine the wavelet coefficients from different modalities.
5. The combined wavelet coefficients are inverted to produce an output frame containing useful information gathered from the available modalities.

Experiments show that the proposed system is capable of producing fused output containing the characteristics of color visible-spectrum imagery while adding information exclusive to infrared imagery, with attractive visual and informational properties.
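As a rough sketch of steps 2, 4, and 5 above, the code below computes wavelet coefficients for two modalities with PyWavelets, combines them with a simple max-absolute decision rule, and inverts the result; the dissertation's multiscale, spatiotemporal decision rules are not reproduced, and the wavelet choice and image sizes are assumptions.

```python
# Single-rule wavelet fusion of two modalities: keep the larger-magnitude coefficient.
import numpy as np
import pywt

def fuse(img_a, img_b, wavelet="db2"):
    coeffs_a = pywt.wavedec2(img_a, wavelet, level=2)
    coeffs_b = pywt.wavedec2(img_b, wavelet, level=2)
    # Approximation band: pick the coefficient with the larger absolute value
    fused = [np.where(np.abs(coeffs_a[0]) > np.abs(coeffs_b[0]), coeffs_a[0], coeffs_b[0])]
    # Detail bands at each level: same max-absolute decision rule
    for a_bands, b_bands in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(np.where(np.abs(a) > np.abs(b), a, b)
                           for a, b in zip(a_bands, b_bands)))
    return pywt.waverec2(fused, wavelet)

visible = np.random.rand(128, 128)    # stand-in visible-band frame
infrared = np.random.rand(128, 128)   # stand-in infrared frame
print(fuse(visible, infrared).shape)  # (128, 128)
```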

Committee:

Arthur Goshtasby, Ph.D. (Advisor); Jack Jean, Ph.D. (Committee Member); Thomas Wischgoll, Ph.D. (Committee Member); Lang Hong, Ph.D. (Committee Member); Vincent Schmidt, Ph.D. (Committee Member)

Subjects:

Computer Engineering; Computer Science

Keywords:

image registration; image fusion; stationary wavelet transform; entropy; image quality measures; fusion quality measures; adaptive image registration; multimodal image fusion; multimodal image processing; spatiotemporal analysis
