Search Results

(Total results 157)
  • 1. Alexiou, Michail A Deep Understanding of Structural and Functional Behavior of Tabular and Graphical Modules in Technical Documents

    Doctor of Philosophy (PhD), Wright State University, 2021, Computer Science and Engineering PhD

    The rapid increase in published research papers in recent years has escalated the need for automated ways to process and understand them. Successful recognition of the information contained in technical documents depends on understanding the document's individual modalities. These modalities include tables, graphics, and diagrams, as defined in Bourbakis' pioneering work. However, the depth of understanding is correlated with the efficiency of detection and recognition. In this work, a novel methodology is proposed for the automatic processing and understanding of table and graphics images in technical documents. Previous attempts at table and graphics understanding retrieve only superficial knowledge, such as table contents and axis values. Here, the focus is on capturing the internal associations and relations among the data extracted from each figure. The proposed methodology is divided into the following steps: 1) figure detection, 2) figure recognition, and 3) figure understanding, where by figures we mean tables, graphics, and diagrams. More specifically, we evaluate different heuristic and learning methods for classifying table and graphics images as part of the detection module. Table recognition and deep understanding include the extraction of the knowledge illustrated in a table image along with the deeper associations between the table variables. The graphics recognition module follows a clustering-based approach to recognize middle points. Middle points are 2D points where the direction of a curve changes; they delimit the straight line segments that construct the graphics curves. We use these detected middle points to understand various features of each line segment and the associations between them. 
Additionally, we convert the extracted internal tabular associations and the captured curves' structural and functional behavior into a common and at the same time uni (open full item for complete abstract)

    Committee: Nikolaos G. Bourbakis, Ph.D. (Advisor); Soon M. Chung, Ph.D. (Committee Member); Bin Wang, Ph.D. (Committee Member); Euripides G. M. Petrakis, Ph.D. (Committee Member); George A. Tsihrintzis, Ph.D. (Committee Member) Subjects: Computer Science
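
    The "middle points" described in the abstract above, 2D points where a curve's direction changes, can be illustrated with a minimal sketch. The polyline detector below is a simplification for illustration; the dissertation itself uses a clustering-based detector on curve pixels:

```python
import numpy as np

def middle_points(points, angle_tol=1e-6):
    """Return indices of 'middle points': vertices of a polyline where the
    direction changes (i.e., where adjacent segments are not collinear)."""
    pts = np.asarray(points, dtype=float)
    idx = []
    for i in range(1, len(pts) - 1):
        v1 = pts[i] - pts[i - 1]          # incoming direction
        v2 = pts[i + 1] - pts[i]          # outgoing direction
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        if abs(cross) > angle_tol:        # non-zero cross product => turn
            idx.append(i)
    return idx

# A curve made of two straight segments meeting at (2, 0)
curve = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 2)]
turns = middle_points(curve)
```

    Here `turns` contains only the vertex where the two straight segments meet, matching the abstract's notion of middle points delimiting straight line segments.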
  • 2. Dhinagar, Nikhil Morphological Change Monitoring of Skin Lesions for Early Melanoma Detection

    Doctor of Philosophy (PhD), Ohio University, 2018, Electrical Engineering & Computer Science (Engineering and Technology)

    Changes in the morphology of a skin lesion are indicative of melanoma, a deadly type of skin cancer. This dissertation proposes a temporal analysis method to monitor the vascularity, pigmentation, size, and other critical morphological attributes of a lesion. Digital images of a skin lesion acquired during follow-up imaging sessions are input to the proposed system. The images are pre-processed to normalize variations introduced over time. Vascularity is modeled from the red-channel information of the skin images, and its changes are measured by the Kullback-Leibler (KL) divergence between probability density approximations of the histograms. Pigmentation is quantified by textural energy, changes in that energy, and pigment coverage in the lesion. An optical flow field and a divergence measure indicate the magnitude and direction of global changes in the lesion. Sub-surface change is predicted from the surface skin lesion image with a novel approach. Changes in key morphological features such as the lesion's shape, color, texture, size, and border regularity are computed. Future trends of the skin lesion's features are estimated by an auto-regressive predictor. Finally, features extracted using deep convolutional neural networks and the hand-crafted lesion features are compared using classification metrics. An accuracy of 80.5%, specificity of 98.14%, and sensitivity of 76.9% are achieved with a deep learning neural network. Experimental results show the potential of the proposed method to monitor a skin lesion in real time during routine skin exams.

    Committee: Mehmet Celenk Ph.D. (Advisor); Savas Kaya Ph.D. (Committee Member); Jundong Liu Ph.D. (Committee Member); Razvan Bunescu Ph.D. (Committee Member); Xiaoping Shen Ph.D. (Committee Member); Sergio Lopez-Permouth Ph.D. (Committee Member) Subjects: Computer Science; Electrical Engineering; Medical Imaging; Oncology
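
    The KL-divergence-based change measure mentioned in the abstract above can be sketched as follows. The two histograms are hypothetical stand-ins for red-channel histograms from successive imaging sessions:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions,
    here approximated from histogram counts."""
    p = np.asarray(p, dtype=float) + eps   # avoid log(0)
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()                           # normalize to probabilities
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy red-channel histograms from two visits (hypothetical data)
hist_visit1 = [10, 40, 30, 15, 5]
hist_visit2 = [5, 20, 35, 30, 10]

change = kl_divergence(hist_visit1, hist_visit2)  # larger => more change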
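
    The KL-divergence-based change measure mentioned in the abstract above can be sketched as follows. The two histograms are hypothetical stand-ins for red-channel histograms from successive imaging sessions:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions,
    here approximated from histogram counts."""
    p = np.asarray(p, dtype=float) + eps   # avoid log(0)
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()                           # normalize to probabilities
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy red-channel histograms from two visits (hypothetical data)
hist_visit1 = [10, 40, 30, 15, 5]
hist_visit2 = [5, 20, 35, 30, 10]

change = kl_divergence(hist_visit1, hist_visit2)  # larger => more change
```

    A divergence of zero means the two histograms are identical; growth of the divergence over visits flags vascular change.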
  • 3. Zhu, Zimo Techniques for Finite Element Modeling and Remodeling of Bones with Applications to Pig Skulls

    MS, University of Cincinnati, 2017, Engineering and Applied Science: Aerospace Engineering

    To better understand skull response after surgery in patients born with an alveolar cleft, a finite element method (FEM) model has been designed to predict how bone is strained during normal loading cycles of mastication. To test the effectiveness of treatment utilizing stem cells on resorbable scaffolds, an alveolar cleft was surgically created in a juvenile swine. An FEM model of the pig skull was then built from pre- and post-surgery computed tomography (CT) scans to estimate strain dynamics in the healing bone. Scan resolutions were insufficient to visualize bone at the scale of trabeculae, which is necessary to determine how depositional fields in healing and growing bone respond to loading. This is important because bone deposition is sensitive to both strain and the material properties of depositional substrates. Hence, a more detailed model with more accurate skull structure and mechanical properties is needed. In this thesis, a new semi-automated method is proposed to build a more accurate skull model from microCT (µCT) scans. In addition to its application in alveolar cleft repair, the new method can also be used to evaluate remodeling algorithms incorporated into three-dimensional (3D) FEM scripts. In this way, FEM can be used to predict how mature bone partially remodels under mechanical loading.

    Committee: Gui-Rong Liu Ph.D. (Committee Chair); Yao Fu (Committee Member); Donna Jones Ph.D. (Committee Member) Subjects: Biomechanics
  • 4. McNichols, John Design and Implementation of an Embedded NIOS II System for JPEG2000 Tier II Encoding

    Master of Science (M.S.), University of Dayton, 2012, Electrical Engineering

    Image compression standards continually strive to achieve higher compression ratios while maintaining image quality. In addition to these goals, new applications require expanded features and flexibility compared to existing compression standards. JPEG2000 is the latest in the line of image compression standards, offering higher compression ratios than its predecessor JPEG while maintaining comparable image quality. In addition, JPEG2000 offers an extended range of features including bit-rate control, region-of-interest coding, and file-stream scalability with respect to resolution, image quality, components, and spatial region. However, these additional features come with associated costs, primarily in the form of computational complexity. Due to the increased computational costs, JPEG2000 has not achieved the same widespread usage as JPEG. However, a number of specialized applications, such as medical imaging and wide-area surveillance, demand the extended features offered by JPEG2000. These applications generally deal with high-resolution imagery, resulting in extremely long encoding times on consumer off-the-shelf platforms. As a result, many hardware implementations of the most computationally complex portion of JPEG2000, namely Tier I encoding, have been proposed. This thesis proposes using an embedded soft-core processor on a Field Programmable Gate Array (FPGA) for JPEG2000 code-stream organization, known as Tier II. The chosen soft-core processor, Altera's NIOS II core, is coupled with existing Discrete Wavelet Transform (DWT) and Tier I implementations on a single FPGA to realize a fully embedded JPEG2000 encoder. Results show the feasibility of using an embedded soft-core processor on an FPGA to perform Tier II processing for JPEG2000.

    Committee: Eric Balster PhD (Committee Chair); John Weber PhD (Committee Member); Frank Scarpino PhD (Committee Member) Subjects: Computer Engineering; Electrical Engineering
  • 5. Mathari Bakthavatsalam, Pagalavan Hardware Acceleration of a Neighborhood Dependent Component Feature Learning (NDCFL) Super-Resolution Algorithm

    Master of Science (M.S.), University of Dayton, 2013, Electrical Engineering

    Image processing and computer vision algorithms allow computers to make sense of pictures and video seen through cameras. These have applications in a large variety of “real-time” settings such as surveillance, intelligence gathering, robotics, automobile driving, and aviation, where the picture from the video needs to be processed by a computer as soon as it is taken. However, these algorithms are time-intensive because of their compute-bound nature. In this thesis, a single-image super-resolution algorithm based on Neighborhood Dependent Component Feature Learning (NDCFL) is accelerated on multiple GPUs and multiple CPU cores using NVIDIA's Compute Unified Device Architecture (CUDA), OpenCV, and POSIX threads. Given a low-resolution input, the method uses image features to adaptively learn a regression kernel based on local covariance to estimate the high-resolution image. The accelerated implementation runs 51 times faster than the original implementation for a 590x580 frame, achieving a processing rate close to real time.

    Committee: Tarek Taha Ph.D. (Committee Chair); Eric Balster Ph.D. (Committee Member); Vijayan Asari Ph.D. (Committee Member) Subjects: Computer Engineering; Electrical Engineering; Engineering
  • 6. Vorhies, John Low-complexity Algorithms for Light Field Image Processing

    Master of Science in Engineering, University of Akron, 2020, Electrical Engineering

    Light fields present a new paradigm for capturing scene structure. While conventional 2-D images fail to convey depth, this information is inherent in 4-D light field images. Computer vision systems in the areas of robotics and autonomous vehicles can make use of this depth information for decision-making processes. A camera gantry is designed and used for capturing light field images. Camera calibration techniques are explored, and it is shown that calibration is required for determining the frequency-domain region of support for a given depth. This thesis focuses on the use of light fields for extracting objects in a scene by their depth. A 4-D linear filter, presented in [Dansereau 2003, Dansereau 2007], is used for depth filtering, providing a constant-runtime solution of low computational complexity. A novel method for determining the frequency content associated with a given depth is devised and implemented in digital hardware. This method is capable of tuning a depth filter in real time as an object moves to different depths in a scene. Due to the large amount of data generated by light fields, data compression is required to effectively store and transmit light field images. This thesis explores 4-D light field compression using the Karhunen-Loeve transform (KLT). The 4-D KLT is applied to light field images and compression performance is discussed.

    Committee: Kye-shin Lee (Advisor); Arjuna Madanayake (Committee Member); Nghi Tran (Committee Member); Alexander Hoover (Committee Member) Subjects: Electrical Engineering
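
    The Karhunen-Loeve transform used for compression in the abstract above is, in essence, a projection onto the leading eigenvectors of the data covariance. A minimal 2-D sketch on synthetic data follows; the thesis applies a 4-D version to light field images:

```python
import numpy as np

def klt_compress(X, k):
    """Project rows of X onto the top-k eigenvectors of the sample
    covariance (the Karhunen-Loeve transform), then reconstruct."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)       # sample covariance
    w, V = np.linalg.eigh(cov)           # eigenvalues in ascending order
    basis = V[:, ::-1][:, :k]            # top-k principal directions
    coeffs = Xc @ basis                  # compressed representation
    return coeffs @ basis.T + mean       # reconstruction

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 16))            # 64 samples, 16 dimensions
X_hat = klt_compress(X, k=8)             # keep half the coefficients
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

    Keeping all 16 coefficients reconstructs the data exactly; reconstruction error shrinks monotonically as `k` grows, which is what makes truncating the KLT a compression scheme.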
  • 7. Brookshire, Charles Illumination Recovery For Optical Microscopy

    Master of Science (M.S.), University of Dayton, 2020, Electrical Engineering

    We propose a novel solution for correcting illumination non-uniformity without removing the imaging sample. Calibrating the spatial illumination pattern in microscopy is challenging because the illumination source is colocated with the objective lens and therefore cannot be observed directly. Our proposed methodology overcomes this by collecting three spatially translated images in a strategic way. We prove that "log-illumination pattern" recovery can be reformulated as a deconvolution of the log-ratio of the captured images, and we develop an efficient, noise-robust implementation. Experiments with simulated and reflectance microscopy data verify the effectiveness of the proposed approach.

    Committee: Keigo Hirakawa PhD (Advisor); Bradley Ratliff PhD (Committee Member); Michael Uchic PhD (Committee Member) Subjects: Electrical Engineering; Scientific Imaging
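
    The reformulation in the abstract above can be illustrated in 1-D: under a multiplicative image model (captured intensity = illumination x sample), translating the sample by one pixel makes the log-ratio of two captures a finite difference of the log-illumination, and the deconvolution then reduces to a cumulative sum. This toy sketch is a simplified stand-in for the thesis' three-image, 2-D scheme:

```python
import numpy as np

def recover_log_illumination(log_ratio):
    """Invert the one-pixel difference operator (a deconvolution with the
    kernel [1, -1]) by cumulative summation; recovers the log-illumination
    up to an additive constant."""
    return np.cumsum(log_ratio)

# Toy 1-D log-illumination pattern (hypothetical)
logL = np.log(1.0 + 0.5 * np.sin(np.linspace(0, np.pi, 64)))

# Log-ratio of two captures whose sample is shifted by one pixel:
# r[x] = logL[x] - logL[x-1]
r = np.diff(logL, prepend=logL[0])

logL_hat = recover_log_illumination(r)   # equals logL up to a constant
```

    The recovered pattern matches the true log-illumination up to the unknown constant offset `logL[0]`, which in practice is fixed by a normalization choice.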
  • 8. Raihani, Nilgoun Respiration Pattern Using Amplified Video

    Doctor of Philosophy, Case Western Reserve University, 2018, EECS - Electrical Engineering

    I present both the theoretical background and the experimental results of an algorithm developed to measure human respiratory rate without physical contact. The algorithm is based on motion detection techniques: the respiratory rate is derived by measuring motion changes across successive frames of a video. The algorithm identifies the moving pixels within each frame and computes the direction and speed of a moving object from one frame to the next to discern whether changes are due to respiration or to limited movement (artifact) of the object being monitored. To experimentally validate the proposed algorithm, respiratory rates were determined from whole-body plethysmography and compared with those estimated by the algorithm. Over 20 recordings were performed with 5 different mice moving freely in a whole-body plethysmography chamber set in front of the camera. The first 15 recording data sets were used to determine a suitable environment for recording. The next 5 data sets were used to compare the performance of the proposed system against the gold standard under ideal conditions of lighting, orientation, distance, and camera. The experimental results showed that the system can correctly measure the respiratory rate and is a viable alternative for monitoring the respiratory activity of mice without invasive sensors.

    Committee: Kenneth Loparo (Advisor); Farhad Kaffashi (Committee Member); Frank Jacono (Committee Member); Ted Dick (Committee Member) Subjects: Biomedical Engineering; Biomedical Research; Engineering
  • 9. Carlson, Wayne Techniques for the generation of three dimensional data for use in complex image synthesis

    Doctor of Philosophy, The Ohio State University, 1982, Graduate School

    Committee: Not Provided (Other) Subjects: Computer Science
  • 10. Regmi, Hem A Real-Time Computational Decision Support System for Compounded Sterile Preparations using Image Processing and Artificial Neural Networks

    Master of Science, University of Toledo, 2016, Electrical Engineering

    The purpose of this research is to design a computational decision support system (DSS) for compounded sterile preparations (CSP). Error-free compounding depends on the proper selection of components and adherence to procedure during compounding. A material selection system (MSS) based on a graphical user interface (GUI), coupled with a barcode scanner and a back-end database, has been developed and tested for proper selection of items involving three different medication orders (MOs). A video processing system (VPS) has been implemented in MATLAB that evaluates the live video feed from the compounding hood to monitor the procedure while the MOs are compounded. SURF detection is used to detect and locate compounding items placed in the hood. Various algorithms have been developed and tested to enhance the accuracy and robustness of the VPS. The DSS is further improved with the integration of another digital camera to ensure that the correct volume of medicine is drawn with the appropriate syringe throughout the compounding process. Template matching and SURF object detection applied to digital images of the syringe, along with a minimum-distance classifier and artificial neural networks (ANNs) trained on data from several experimental observations, were explored for classification and volume measurement of a syringe. The MSS was tested for all items used in compounding the MOs and performed error-free. The VPS evolved from VPS.01 and VPS.02 to VPS.03, which showed the greatest accuracy and suitability for real-time use. All deliberate mistakes made when compounding the tested medication orders were captured by VPS.03. Luer-lock syringes of sizes from 1 mL to 30 mL were tested, and an accuracy above 95% was obtained with very high precision. 
The new computational decision support system facilitates error-free selection of components and is able to monitor and evaluate the compounding proce (open full item for complete abstract)

    Committee: Vijay Devabhaktuni Dr. (Committee Chair); Jerry Nesamony Dr. (Committee Co-Chair); Devinder Kaur Dr. (Committee Member); Ezzatollah Salari Dr. (Committee Member) Subjects: Electrical Engineering
  • 11. May, Heather Wavelet-based Image Processing

    MS, University of Cincinnati, 2015, Arts and Sciences: Mathematical Sciences

    Abstract: This thesis guides the reader through the uses of wavelets in image processing. It reviews Fourier transforms, z-transforms, multiresolution analysis, and subband coding. A main advantage of the discrete wavelet transform (DWT) is its ability to decompose images in the space/time-frequency domain. The diverse applications of wavelets can be seen in biometrics, medical imaging, statistical analysis, and even current image technology such as JPEG 2000.

    Committee: Herbert Halpern Ph.D. (Committee Chair); Donald French Ph.D. (Committee Member); Benjamin Vaughan Ph.D. (Committee Member) Subjects: Mathematics
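
    The subband decomposition reviewed in the abstract above can be made concrete with one level of a 2-D Haar DWT. This is a sketch of the simplest wavelet; production codecs such as JPEG 2000 use longer biorthogonal filters:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT; returns the LL, LH, HL, HH subbands.
    Assumes both image dimensions are even."""
    a = img[0::2, :] + img[1::2, :]        # row-wise sums
    d = img[0::2, :] - img[1::2, :]        # row-wise differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 4.0   # smooth approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 4.0   # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 4.0   # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 4.0   # diagonal detail
    return LL, LH, HL, HH

# A flat image has all its energy in the LL (approximation) band
LL, LH, HL, HH = haar_dwt2(np.full((8, 8), 7.0))
```

    For the flat input, the three detail bands are exactly zero and LL reproduces the constant value, which is why smooth image regions compress so well after a wavelet transform.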
  • 12. Peterson, Michael Evolutionary Methodology for Optimization of Image Transforms Subject to Quantization Noise

    Doctor of Philosophy (PhD), Wright State University, 2008, Computer Science and Engineering PhD

    Lossy image compression algorithms sacrifice perfect image reconstruction in favor of decreased storage requirements. Modern lossy compression schemes, such as JPEG2000, rely upon the discrete wavelet transform (DWT) to achieve high levels of compression while minimizing the loss of information for image reconstruction. Some compression applications require higher levels of compression than those achieved through application of the DWT and entropy coding. In such lossy systems, quantization provides high compression rates at the cost of increased distortion. Unfortunately, as the amount of quantization increases, the performance of the DWT for accurate image reconstruction deteriorates. Previous research demonstrates that a genetic algorithm can improve image reconstruction in the presence of quantization error by replacing the wavelet filter coefficients with a set of evolved coefficients. This dissertation develops a methodology for the evolution of digital filters capable of outperforming the DWT for image reconstruction at a given compression rate in the presence of quantization error. It compares potential fitness measures for evaluating reconstruction error. Experiments compare the usefulness of local versus standard population initialization and mutation operators. In order to perform an efficient yet thorough traversal of the search space, several recombination operators developed specifically for real-valued evolution are evaluated. Additionally, this dissertation presents a novel technique to emphasize the reconstruction of the high-spatial-frequency areas of an image through the use of edge detection algorithms and focused evolution. An analysis of the ease of traversal through the fitness landscapes defined by various image quality measures supports the development of a framework for evolving robust image transform filters. 
Particular emphasis is placed upon the development of transforms that provide consistently accurate reconstru (open full item for complete abstract)

    Committee: Michael Raymer PhD (Advisor); Gary Lamont PhD (Committee Member); Travis Doom PhD (Committee Member); Mateen Rizki PhD (Committee Member); Zhiqiang Wu PhD (Committee Member) Subjects: Computer Science; Electrical Engineering
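
    The evolutionary search described in the abstract above can be sketched as a minimal real-valued evolutionary loop with Gaussian mutation and truncation selection. The fitness function and target vector below are hypothetical stand-ins for the dissertation's reconstruction-error measures over filter coefficients:

```python
import numpy as np

def evolve(fitness, dim, pop_size=30, gens=50, sigma=0.1, seed=0):
    """Minimal (mu + lambda)-style real-valued evolution: Gaussian
    mutation plus truncation selection. A sketch, not the dissertation's
    full GA with specialized recombination operators."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))           # random initialization
    for _ in range(gens):
        children = pop + sigma * rng.normal(size=pop.shape)  # mutate
        both = np.vstack([pop, children])
        scores = np.array([fitness(v) for v in both])
        pop = both[np.argsort(scores)[:pop_size]]    # keep the fittest
    return pop[0]

# Hypothetical target coefficient vector standing in for evolved filter taps
target = np.array([0.48, 0.84, 0.84, 0.48])
best = evolve(lambda v: float(np.sum((v - target) ** 2)), dim=4)
```

    Replacing the toy squared-distance fitness with a measure of reconstruction error after quantization yields the kind of search the dissertation studies.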
  • 13. Steyer, Grant IMAGING OF CARDIOVASCULAR CELLULAR THERAPEUTICS WITH A CRYO-IMAGING SYSTEM

    Doctor of Philosophy, Case Western Reserve University, 2010, Biomedical Engineering

    The long-term goal of this research is to use cryo-imaging to detect and spatially map nearly every stem cell in a mouse and to quantify tissue-specific cell-therapy PK. With this enabling technology, one will be able to quantitatively assess specificity of homing, quiescent survival in specific niches, emerging homing factors, delivery processes, cell engraftment dose response, and related questions, analyses which have, at best, been done qualitatively. To this end, we developed a cryo-imaging system which alternates between sectioning (10-40 µm) and imaging bright-field and fluorescence block-face image volumes at micron-scale resolution. For applications requiring single-cell detection of fluorescently labeled cells anywhere in a mouse, we developed an algorithm, next-image processing, for reduction of subsurface fluorescence. Next-image processing greatly improves axial resolution, enabling high-quality 3D volume renderings and improving automated enumeration of single cells by up to 24%. To answer many of the pressing questions in stem cell therapies, automated methods for quantification and detection of stem cells are required. We developed algorithms for the automated detection and quantification of fluorescently labeled cells. Our model-based quantification algorithm was run on low-resolution images of fluorescently labeled stem cells, and the results were compared to visual quantification of stem cells in high-resolution images. In all cases but two, the algorithm was within ±1 cell of the actual number present in a cluster. The total number of cells counted by the expert was 393, compared to 386 by the algorithm, giving an error rate of 1.7%. To estimate homing of stem cells to damaged tissues, quantification of stem cells in infarcted mouse hearts was compared to non-infarcted control hearts. The number of MSCs detected in the heart was significantly higher for infarcted mice. 
The delivery ratio for infarcted mice was 8.0 ± 0.8 times larger than the deliver (open full item for complete abstract)

    Committee: David Wilson PhD (Committee Chair); Marc Penn MD, PhD (Committee Member); Andrew Rollins PhD (Committee Member); James Basilion PhD (Committee Member) Subjects: Biomedical Research
  • 14. Hamsici, Onur Feature extraction: the role of subclass divisions and spherical representations

    Master of Science, The Ohio State University, 2005, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 15. Vemparala Narayana Murthy, Balavignesh Advanced Computational and Deep Learning Techniques for Modeling Materials with Complex Microstructures

    Doctor of Philosophy, The Ohio State University, 2024, Mechanical Engineering

    The mechanical properties of materials are fundamentally governed by their microstructural characteristics, delineating a profound relationship between structure and behavior. Whether manifesting as polycrystalline arrangements composed of grains, particulate dispersions within composites, or the intricacies of Selective Laser Melting (SLM)-induced melt pools, microstructural heterogeneity profoundly influences material response to external loads. Moreover, the presence of defects such as voids, precipitates, and cracks introduces additional complexities, underscoring the critical role of microstructural analysis in elucidating material performance. As such, comprehending and manipulating these microstructural features is of paramount importance in the design and optimization of materials tailored to specific engineering requirements. This introductory exploration sets the stage for a comprehensive investigation into the interplay between microstructure and mechanical behavior in diverse material systems. The first component of this dissertation focuses on modeling polycrystalline materials from imaging data. As mentioned earlier, polycrystalline microstructures are composed of grains, so it is important to accurately capture the grain boundaries when modeling them from microstructure images. Defects such as precipitates, voids, and cracks may also be present and can impact mechanical behavior; we therefore also present an example modeling precipitates in a polycrystalline microstructure, showing that the developed framework can handle them. To do this, we introduce a set of integrated image processing algorithms that process low-resolution images of a polycrystalline microstructure and convert the grain boundaries into a Non-Uniform Rational B-Splines (NURBS) representation. 
Next, the NURBS representation of the material microstructures is used as an input to a non-iterative mesh (open full item for complete abstract)

    Committee: Soheil Soghrati (Advisor); David Talbot (Committee Member); Rebecca Dupaix (Committee Member) Subjects: Artificial Intelligence; Computer Science; Materials Science; Mechanical Engineering
  • 16. Dasgupta, Sandeepan Optical Transmission through Sessile Water Droplets Undergoing Solidification, and the Effect of Hydrophobicity

    MS, University of Cincinnati, 2024, Engineering and Applied Science: Mechanical Engineering

    The growing use of solar panels in cold-weather climates requires a comprehensive understanding of droplet freezing. The freezing dynamics of a water droplet resting on a hydrophobic surface are well understood, but changes in optical transmission during this process have received little attention. As a droplet freezes, its refractive index and shape change simultaneously. Here, an attempt is made to characterize the change in optical transmission during the freezing process using a unique setup that allows bidirectional imaging of a droplet as it freezes on coated glass slides. This allows the phase change dynamics to be decoupled from the change in droplet geometry. There is a 45% reduction in optical transmission during the freezing process, but this is primarily due to reflection and not attenuation. The droplet geometry is observed to influence the optical transmission through changes in both reflection and attenuation. Common hydrophobic coatings are tested and their influence on the optical transmission is characterized. The results of this study provide new insight into the development of optically transparent ice/water-repellent coatings for solar panels.

    Committee: Kishan Bellur Ph.D. (Committee Chair); Sarah Watzman Ph.D. (Committee Member); Milind Jog Ph.D. (Committee Member) Subjects: Mechanical Engineering
  • 17. Balasubramaniam, Sowbaranika Optimized Classification in Camera Trap Images: An Approach with Smart Camera Traps, Machine Learning, and Human Inference

    Master of Science, The Ohio State University, 2024, Computer Science and Engineering

    Motion-activated cameras, commonly known as camera traps, play a crucial role in wildlife monitoring, biodiversity conservation, and basic ecological science, capturing a diverse range of species in their natural habitats. However, the sheer volume of images generated requires manual analysis by ecologists, making this a time-consuming and inefficient process. This is particularly frustrating since many of the images do not contain animals, either because camera traps are triggered by the motion of wind-blown vegetation or because the slow camera response misses the animal. This thesis presents an approach to these challenges through the automation of image classification and the development of smart camera traps that run artificial intelligence models on edge devices. First, we develop a species classification pipeline, consisting of an object detector and a species classifier, for a novel camera trap setting. We train a convolutional neural network on 1.2M images to identify 45 species, achieving an accuracy of around 89% at the class level without manual intervention. This research demonstrates the combination of deep learning methodologies and human oversight, showcasing their ability to facilitate automatic or semi-automatic, precise, and efficient surveys of small animals in the wild, particularly in novel settings or for novel species. The suggested pipeline achieves 95% accuracy in image classification, with 11% of images requiring manual review at the class level, and thus helps annotate images automatically while maintaining accuracy. Because camera traps generate large numbers of images, the delay between data capture and image processing can lead to the loss of critical information; real-time labeling at capture time can mitigate this issue. Following the traditional classifier, we therefore investigate methods for deploying an AI model in a camera trap. 
Smart Camera Traps would result in real (open full item for complete abstract)

    Committee: Tanya Berger-Wolf (Advisor); Christopher Stewart (Committee Member); Wei-Lun Chao (Committee Member) Subjects: Computer Engineering
  • 18. Nair, Srijith Robust Blind Image Denoising via Instance Normalization

    Master of Science, The Ohio State University, 2024, Electrical and Computer Engineering

    Image denoising is a fundamental problem in image processing in which a high-fidelity image is recovered from a noise-corrupted version. Denoising is fundamental because, from the Bayesian perspective, denoisers are believed to encode information about the prior probability distribution of images. This, in turn, makes denoisers a widely applicable tool in many image inverse problems such as compressive sensing, deblurring, in-painting, and super-resolution. As a result, various algorithmic approaches to denoising have been studied over the past decades. However, data-driven denoising methods, which learn to denoise images from large image datasets using deep neural networks, have demonstrated far superior performance compared to classical algorithmic methods while having much faster inference times. Data-driven methods can be broadly classified into two categories: blind and non-blind. While non-blind methods require knowledge of the noise level contained in the image, blind methods, which require no such information, are more practical. However, the performance of many recent state-of-the-art blind denoisers depends heavily on the noise levels used during training. In more recent work, ideas of inducing scale- and normalization-equivariance properties in denoisers have been explored in order to make denoisers more robust to changes in noise level between training and test data. In our work we extend this idea: we introduce a method to make any given denoiser normalization-equivariant using a simple instance-normalization scheme, which improves the noise-level robustness of the denoiser by a significantly large margin with minimal change to the underlying architecture. 
In this thesis, we theoretically formulate our idea from the perspective of minimizers of the Wasserstein-1 distance between empirical distributions of training and test data, and propose a more practically feasible 2-pixel approximation that yi (open full item for complete abstract)

    Committee: Philip Schniter (Advisor); Lee Potter (Committee Member) Subjects: Electrical Engineering
  • 19. Weragoda, Gamage Sugandima Nishadi A Medical Physics Chronicle In Three Acts: Machine Learning In Radiation Oncology; Designing RF Shim Coils In MRI; New Partial Volume Analysis In MRF.

    Doctor of Philosophy, Case Western Reserve University, 2024, Physics

    Adaptive Radiotherapy (ART) facilitates replanning during head and neck radiotherapy. Central to ART is Deformable Image Registration (DIR), which enables quantification of tissue and tumor changes between initial planning and daily images. However, evaluating DIR algorithms for physiological accuracy remains challenging. To address this, we present a proof-of-principle study introducing an automatic DIR evaluation tool that employs deep learning for tissue segmentation and a finite element model for deformation analysis. Our study illustrates the end-to-end training and implementation of the proposed evaluation framework and argues for its clinical feasibility. Efficient automatic liver dome delineation can improve the clinical feasibility of the online breath-hold verification technique for liver SBRT proposed in a previous study by our institution. We present a proof-of-principle study of a deep learning-based pipeline for the automatic delineation of liver domes in kV triggered images. The pipeline uses a deep learning model for liver dome region segmentation and postprocessing for dome delineation. Cross-validation assesses the generalizability of the pipeline. In MRI, the region behind the knee of a supine patient is vulnerable to artifacts. Building on previous work by our group, we design and develop a prototype single half-cylinder shim coil for knee MRI and investigate its limits. In vivo data show that our method, which is based on Turner's target field approach to coil design, could be optimized to mitigate B0 non-uniformity in localized regions of interest. Magnetic Resonance Fingerprinting (MRF) offers promise as a quantitative imaging method. A key challenge in MRF is partial volume (PV) effects. Building on seminal work on PV effects in MRF, we propose a method based on density-based clustering that is semi-automatic and interpretable. 
The brain tissue properties calculated by our method align closely with literature values, promising increased eff (open full item for complete abstract)

    Committee: Michael Martens (Committee Chair); Robert Brown (Committee Member); Bingqi Guo (Committee Member); Ping Xia (Committee Member); Gary Chottiner (Committee Member) Subjects: Artificial Intelligence; Medical Imaging; Physics
  • 20. Scott, Austin Volume Reflection Gratings in Photorefractive Materials

    Doctor of Philosophy (Ph.D.), University of Dayton, 2023, Electro-Optics

    This dissertation is concerned with two distinct applications of volume gratings recorded in photorefractive electro-optic crystals. The first application involves the use of these volume gratings to non-mechanically steer laser beams. A geometric and physical-optics based analysis shows the potential for writing programmable volume gratings in lithium niobate using visible wavelengths in the transmission geometry, and subsequently probing those gratings using infrared wavelengths in a reflection geometry. Through appropriate adjustments to the writing beams, it is shown that both the grating spacing and the grating tilt angle can be controlled such that the grating becomes a rotatable Bragg mirror for the incident probe beam, thus steering it to desired angles. The second application of these volume gratings is in image processing. System transfer functions determining the spatial evolution of the reference (input wave) and signal (diffracted wave) beams as they propagate inside a self-pumped volume reflection grating are derived and solved numerically. The solutions are then used to highlight the spatial filtering properties of self-pumped volume reflection gratings, with the focus being on the transmitted (undiffracted) portion of the reference beam, which is shown to be high-pass spatially filtered. The high-pass spatial filtering manifests as programmable two-dimensional edge enhancement in the transmitted reference beam. Contrast analysis of the edge-enhanced images, carried out through both simulations and experiments, shows a direct proportionality between the strength of edge enhancement seen in the filtered images and the intensity of the writing beam used to record the grating.
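The write-then-probe relationship described above can be illustrated with the standard two-beam interference and Bragg formulas. The sketch below is a generic textbook calculation, not the dissertation's geometry; the wavelengths, half-angle, and refractive index are invented illustrative numbers.

```python
import math

def grating_period(lambda_write_vac, n, half_angle_rad):
    """Period of a grating written by two beams of vacuum wavelength
    lambda_write_vac intersecting at half-angle half_angle_rad inside
    a medium of refractive index n (two-beam interference)."""
    lam_medium = lambda_write_vac / n
    return lam_medium / (2.0 * math.sin(half_angle_rad))

def bragg_angle(lambda_probe_vac, n, period):
    """Bragg angle (measured from the grating planes, inside the
    medium) at which a probe of vacuum wavelength lambda_probe_vac
    is strongly diffracted: lambda/n = 2 * period * sin(theta_B)."""
    s = lambda_probe_vac / (2.0 * n * period)
    if s > 1.0:
        raise ValueError("no Bragg solution: grating period too small")
    return math.asin(s)

# Illustrative numbers only: green writing beams at 532 nm in lithium
# niobate (n ~ 2.3), probed with an IR beam at 1064 nm.
period = grating_period(532e-9, 2.3, math.radians(20.0))
theta_B = bragg_angle(1064e-9, 2.3, period)
print(f"period = {period * 1e9:.0f} nm, Bragg angle = {math.degrees(theta_B):.1f} deg")
```

Changing the half-angle between the writing beams changes the period, and tilting the writing geometry rotates the grating planes; together these set the angle at which the IR probe satisfies the Bragg condition, which is the steering mechanism the abstract describes.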

    Committee: Partha Banerjee (Advisor) Subjects: Optics; Physics