Search Results

(Total results 41)
  • 1. Girish, Deeptha Action Recognition in Still Images and Inference of Object Affordances

    PhD, University of Cincinnati, 2020, Engineering and Applied Science: Electrical Engineering

    Action recognition is an important computer vision task that focuses on identifying the behavior or action performed by humans in images. Action recognition using wearable sensors and videos is a well-studied and well-established topic; this thesis focuses on action recognition in still images, a new and challenging area of research. Understanding motion from static images is difficult because the spatio-temporal features most commonly used for predicting actions are not available. Action recognition in still images has a variety of applications, such as searching for frames in videos by action, querying an image database by action label, surveillance, and robotics; it can also be used to give a more meaningful description of an image. The goal of this thesis is to perform action recognition in still images and infer object affordances by characterizing the interaction between the human and the object. Object affordance refers to determining the use of an object based on its physical properties. The main idea is to learn high-level concepts such as action and object affordance by extracting information about the objects and their interactions in an image.

    Committee: Anca Ralescu Ph.D. (Committee Chair); Kenneth Berman Ph.D. (Committee Member); Rashmi Jha Ph.D. (Committee Member); Wen-Ben Jone Ph.D. (Committee Member); Dan Ralescu Ph.D. (Committee Member) Subjects: Electrical Engineering
  • 2. Shakeel, Amlaan Service robot for the visually impaired: Providing navigational assistance using Deep Learning

    Master of Science, Miami University, 2017, Computational Science and Engineering

    Assistive technology helps improve day-to-day activities for people with disabilities. One method employed by assistive technologists is the use of robots, called service robots. This thesis explores the idea of a service robot for the visually impaired to assist with navigation, inspired by the use of guide dogs. The focus of this thesis is to develop a robot that achieves autonomous indoor navigation, using computer vision to identify image-based goals in an unfamiliar environment. The method presented in this thesis uses a deep learning framework, Faster R-CNN, to train a computer to classify and localize exit signs in real time. A proof of concept is presented using an NVIDIA Jetson and TurtleBot, a robot kit, running the Robot Operating System (ROS) software development framework. The model is trained successfully using Faster R-CNN, validated, and used for real-time object classification on the prototype robot.

    Committee: Yamuna Rajasekhar (Advisor); John Femiani (Committee Member); Donald Ucci (Committee Member) Subjects: Computer Science; Electrical Engineering; Robotics
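Faster R-CNN, like most detectors, scores many overlapping candidate boxes and keeps only the best via non-maximum suppression (NMS). As an illustrative sketch of that filtering step (generic NumPy, not the thesis's code):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        # drop remaining boxes that overlap the kept box too much
        order = order[1:][[iou(boxes[i], boxes[j]) < iou_thresh
                           for j in order[1:]]]
    return keep
```

For an exit-sign detector, the boxes would come from the network's region proposals; here they are arbitrary.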
  • 3. Saich, Stephanie Application of Multiple Data Augmentation Techniques to Improve Training with Synthetic SAR Data in Common CNN

    Master of Science in Electrical Engineering (MSEE), Wright State University, 2024, Electrical Engineering

    To address the issue of limited target data in the Synthetic Aperture Radar Automatic Target Recognition (SAR ATR) problem set, synthetic data is often used to fill the gap. This thesis takes an in-depth look at the use of colorization, dynamic range adjustment, and target extraction as data augmentation techniques to improve the accuracy of deep learning networks trained on synthetic SAR data. Combining multiple data augmentations dramatically improves the accuracy of a common Convolutional Neural Network (CNN) over the use of standard synthetic data. A comparison across increasing fractions of measured data shows that the less measured data is available, the more critical these augmentation techniques are for target recognition.

    Committee: Josh Ash Ph.D. (Committee Co-Chair); Brian Rigling Ph.D. (Committee Co-Chair); Fred Garber Ph.D. (Committee Member) Subjects: Electrical Engineering
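Dynamic range adjustment is one of the augmentations named above; SAR magnitude imagery spans many orders of magnitude, so a common form is log-compression followed by percentile clipping. A hedged sketch (the thesis's exact transform is not specified here; the function name and parameters are illustrative):

```python
import numpy as np

def dynamic_range_adjust(mag, low_pct=2.0, high_pct=98.0):
    """Log-compress SAR magnitude, then clip to robust percentile limits
    and rescale to [0, 1] -- one plausible dynamic-range augmentation."""
    db = 20.0 * np.log10(np.abs(mag) + 1e-10)        # magnitude in dB
    lo, hi = np.percentile(db, [low_pct, high_pct])  # robust display limits
    return np.clip((db - lo) / (hi - lo), 0.0, 1.0)
```

Varying `low_pct`/`high_pct` per training sample yields a family of augmented images from one synthetic chip.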
  • 4. Dhakal, Rabin Towards a Low-Cost Distributed AWOS: Machine Learning for Optical Ceilometry, Cloud Detection, and Classification

    Master of Science (MS), Ohio University, 2024, Computer Science (Engineering and Technology)

    Larger, commercial, towered airports are highly equipped to provide pilots with real-time weather data before flying. For small airports, there is a weather data gap for aircraft flying at lower altitudes. Accurate cloud information (cloud type and cloud height above the ground) is crucial for pilots flying at low altitudes because it affects both visibility and safety. A ceilometer is a device that estimates cloud height from the ground, but it is often costly and lacks portability. This thesis proposes an innovative, cost-effective approach using computer vision and deep learning to address these limitations. One of the primary challenges for these methods is the need for extensive datasets for training and evaluation, as real-world collection of cloud height and type data is time-consuming and resource-intensive. To overcome this, we generated synthetic cloud data using a stereo camera setup with ground-truth height information in a virtual environment. In this thesis, cloud information comprises cloud-base height estimation and classification of cloud type. We propose methods that can predict cloud-base height more efficiently than state-of-the-art methods when applied to real-world datasets in the future. We incorporated synthetic data to evaluate the performance of our method. These synthetic data, created by simulating VDB clouds, enable the testing and validation of cloud detection models and the calibration of height predictions. We rendered the 3D scene and created ground-truth bounding box and cloud-type datasets covering Altocumulus, Altostratus, Cirrocumulus, Cumulonimbus, Cumulus, Cirrostratus, Cirrus, Stratocumulus, and Stratus. We trained the YOLO-v8 model with the cloud detection dataset and employed unseen synthetic data to assess its robustness and accuracy. Once vetted, we generated disparity images from the stereo pairs.
We trained several CNN-based regression models using this di (open full item for complete abstract)

    Committee: Chad Mourning (Advisor); Zhewei Wang (Committee Member); Jundong Liu (Committee Member); Bhaven Naik (Committee Member) Subjects: Computer Science
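The stereo-camera ceilometry above reduces to classical triangulation: for a rectified stereo pair, range follows Z = f * B / d. A minimal sketch (function name and simplifications are illustrative, not the thesis's implementation):

```python
def cloud_base_height(focal_px, baseline_m, disparity_px):
    """Triangulate range from a rectified stereo pair: Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- separation between the two cameras, in meters
    disparity_px -- horizontal pixel shift of the same cloud feature
                    between the left and right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For distant clouds the disparity is tiny, which is why the thesis's synthetic stereo data with known ground truth is useful for calibrating the predictions.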
  • 5. Seltzer, Gregory Measured Phase History Data for Target Recognition Studies

    Master of Science in Electrical Engineering (MSEE), Wright State University, 2024, Electrical Engineering

    Performing automatic target recognition (ATR) on full-size aircraft targets using inverse synthetic aperture radar (ISAR) data is challenging and expensive. The use of scale models and radar systems of such large targets saves time and reduces facility requirements. This study examines the feasibility of performing ATR on 1:144 scale model airplanes at Ka-band. The scale model and Ka-band radar simulate the collection of full-scale targets at VHF-band. The phase history measurement collections were completed in the Sensors and Signals Exploitation Laboratory (SSEL) at Wright State University. To ensure sufficient data for training and testing, the phase history data was augmented through mathematical translation and rotation of the scene. These augmented images were processed using the polar format algorithm and subsequently classified using support vector machines and convolutional neural networks. The resulting ATR models achieved a classification accuracy of over 82 percent for all aircraft types, except for the very similar B747-8 and B747-8F, which exhibited misclassification rates consistent with expectations for such similar targets.

    Committee: Michael A. Saville Ph.D. (Advisor); Cheryl B. Schrader Ph.D. (Committee Member); Michael L. Raymer Ph.D. (Committee Member); Josh Ash Ph.D. (Committee Member) Subjects: Electrical Engineering
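The scale-model principle above keeps the target's size in wavelengths constant, so a 1:N model measured at frequency f emulates the full-size target at f/N. A quick check (35 GHz is an assumed representative Ka-band frequency, not a value taken from the thesis):

```python
def full_scale_frequency(model_freq_hz, scale_denominator):
    """Electromagnetic scale-model relation: a 1:N model measured at
    frequency f has the same electrical size as the full-scale target
    at frequency f / N."""
    return model_freq_hz / scale_denominator

# 1:144 model at a nominal Ka-band frequency of 35 GHz
f_full = full_scale_frequency(35e9, 144)   # ~243 MHz, i.e. VHF-band
```

This is why the Ka-band measurements of 1:144 airliners stand in for VHF-band collections of the full-size aircraft.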
  • 6. Aldyaflah, Izdehar BLOCKCHAIN-BASED SECURE SENSING DATA PROCESSING AND LOGGING

    Doctor of Engineering, Cleveland State University, 2024, Washkewicz College of Engineering

    This dissertation research investigated how to use blockchain technology to secure sensor data processing and logging. The research was done in three phases. First, to ensure that only legitimate sensors log data to the blockchain, a sensor identification and authentication mechanism is used, so that only sensing data from registered sensors are accepted. Second, to minimize the throughput demand on large public blockchains such as Bitcoin and Ethereum, and the financial cost of using blockchain services, only a small amount of raw sensing data is placed on the blockchain through an aggregation process in which a group of raw sensing data is converted into one condensed data item. A Merkle-tree-based mechanism protects the integrity of the off-chain data (the raw sensing data) using the condensed data placed on the blockchain. The system was tested with the IOTA Shimmer test network and the Ethereum test network. The second phase focuses on developing an Ethereum smart contract to manage access control for storing and retrieving condensed data on the blockchain. The smart contract introduces three levels of authorization (read, write, and admin) to regulate data access securely. Gas consumption is optimized through a tag-based secure data-store mechanism embedded in the smart contract design. In the final phase, a deep learning model using Convolutional Neural Networks (CNN) is introduced to detect vulnerabilities in smart contracts. Four input techniques are compared for their effectiveness in identifying six types of vulnerabilities: Word2Vec, FastText, Bag of Words (BoW), and TF-IDF. TF-IDF emerges as the most efficient input technique, consistently achieving high detection rates (90% to 100%) across all vulnerability types. In particular, TF-IDF excels in detecting the Reentrancy vulnerability, achieving performance metrics of 96% to 97%. Word2Vec and FastText performed comparably, with slight differences, while BoW consistently lagged behind (open full item for complete abstract)

    Committee: Wenbing Zhao (Advisor); Timothy V Arndt (Committee Member); Hongkai Yu (Committee Member); Lili Dong (Committee Member); Sun S. Chung (Committee Member) Subjects: Computer Engineering; Computer Science
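The Merkle-tree mechanism described above can be sketched as follows; only the root hash needs to go on-chain, and any off-chain record can later be verified with an O(log n) proof path (a generic construction, not the dissertation's code):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Merkle root over raw sensing records: hash each record, then
    repeatedly hash adjacent pairs until one root remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node if odd
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Tampering with any single off-chain record changes the root, so the one condensed hash on-chain anchors the integrity of the whole batch.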
  • 7. Siddiqui, Nimra Dr. Lego: AI-Driven Assessment Instrument for Analyzing Block-Based Codes

    Master of Computing and Information Systems, Youngstown State University, 2024, Department of Computer Science and Information Systems

    The field of coding education is rapidly evolving, with emerging technologies playing a pivotal role in transforming traditional learning methodologies. This thesis introduces Dr. Lego, an innovative framework designed to revolutionize the assessment and understanding of block-based coding through the integration of sophisticated deep learning models. Dr. Lego combines cutting-edge technologies such as MobileNetV3 (Howard, 2019) for visual recognition, and BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019) for natural language processing, to offer a comprehensive approach to evaluating coding proficiency. The research methodology involves the meticulous curation of a diverse dataset comprising projects from the LEGO SPIKE app (LEGO Education, 2022), ensuring that the models are subjected to a broad range of coding scenarios. Leveraging the dynamic educational environment provided by the LEGO SPIKE app, Dr. Lego empowers users to design and implement various coding projects, fostering hands-on learning experiences. This thesis delves into methodologies aimed at enhancing coding education by exploring model integration, data generation, and fine-tuning of pre-trained models. Dr. Lego not only evaluates coding proficiency but also provides cohesive and insightful feedback, enhancing the learning experience for users. The adaptability of the framework highlights its potential to shape the future of coding education, paving the way for a new era of interactive and engaging learning experiences.

    Committee: Abdu Arslanyilmaz PhD (Advisor); Feng Yu PhD (Committee Member); Carrie Jackson EdD, BCBA (Committee Member) Subjects: Computer Science; Engineering; Information Systems; Robotics; Teaching
  • 8. Steiner, Adam Hyperspectral W-Net: Exploratory Unsupervised Hyperspectral Image Segmentation

    Master of Science in Electrical Engineering, University of Dayton, 2024, Electrical Engineering

    Remote sensing techniques are capable of capturing large scenes of data over several sensing domains. Hyperspectral imagery (HSI), often accompanied by LiDAR and orthoimagery sensors during collection, can provide deeper contextual information for a wide range of applications in many different fields. Complex characteristics across spectral bands, in addition to the high dimensionality of HSI data, present challenges to accurate classification. Generally, dimensionality reduction of the input hyperspectral data cube is performed through multi-phase analytical algorithms as a pre-processing step before further analysis, including machine learning networks. These networks commonly rely on labeled training data for segmentation, and annotating ground-truth aerial data can prove a cumbersome endeavor that may require specific expertise for accurate assessment. This inspires exploratory research into unsupervised feature-learning approaches that can extract essential information from HSI data to map scenes without labeled data, thereby providing a start-to-finish scene segmentation process.

    Committee: Vijayan Asari (Committee Chair); Theus Aspiras (Advisor); Brad Ratliff (Advisor) Subjects: Electrical Engineering; Engineering; Environmental Geology; Environmental Science; Environmental Studies; Geology; Geophysics; Remote Sensing; Urban Planning
  • 9. Santhis, Ishaan Detecting Deepfakes: Fine-tuning VGG16 for Improved Accuracy

    MS, University of Cincinnati, 2024, Engineering and Applied Science: Computer Science

    The continuous threat of deepfakes, cleverly crafted deceptions masquerading as reality, necessitates cutting-edge detection methods. While many methods are available, this project dives into fine-tuning the VGG16 convolutional neural network (CNN) and synergistically integrating Natural Language Processing (NLP) to unveil deepfake images effectively. Using the Keras API and machine learning principles, we empower the model to discern authentic images from their manipulated counterparts, drawing inspiration from real-world cases like the notorious Jennifer Aniston deepfake scam. First, we establish a robust foundation for feature extraction by pre-training the VGG16 architecture on vast image datasets. Subsequently, we meticulously curate a comprehensive deepfake image dataset encompassing diverse manipulation techniques and real-world scenarios. This tailor-made dataset fuels the fine-tuning of specific VGG16 layers, carefully crafting a model with strong generalizability. The project rigorously evaluates the fine-tuned VGG16 model's performance on unseen deepfakes using metrics including accuracy and loss, and compares these results against three baselines: a model created from scratch, the untrained VGG16, and the VGG16 after transfer learning. This project aspires to contribute to the ongoing battle against deepfakes by showcasing the potential of fine-tuning VGG16 to achieve superior detection accuracy. By incorporating real-world examples and harnessing the power of CNNs, we strive to develop a robust and adaptable solution capable of combating the ever-evolving landscape of deepfakes. Ultimately, this endeavor aims to safeguard online safety and trust, mitigating the detrimental effects of deepfakes on (open full item for complete abstract)

    Committee: Yizong Cheng Ph.D. (Committee Chair); William Hawkins Ph.D. (Committee Member); Jun Bai Ph.D. (Committee Member) Subjects: Computer Science
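Fine-tuning as described above means freezing the pretrained layers and training only a small head on the new task. A toy NumPy analogue of that split (`W_frozen` stands in for VGG16's pretrained convolutional base; the data, dimensions, and learning rate are synthetic illustrations, not the project's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# W_frozen plays the role of the pretrained feature extractor and is
# never updated; only the small head (w, b) is trained.
W_frozen = rng.normal(size=(16, 4))            # "pretrained" weights (frozen)
X = rng.normal(size=(200, 16))                 # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy real-vs-fake labels

feats = np.tanh(X @ W_frozen)                  # frozen forward pass
w, b = np.zeros(4), 0.0                        # trainable head
losses = []

for _ in range(500):                           # gradient descent on the head only
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    losses.append(-np.mean(y * np.log(p + 1e-12)
                           + (1 - y) * np.log(1 - p + 1e-12)))
    w -= 0.5 * feats.T @ (p - y) / len(y)      # logistic-loss gradient w.r.t. w
    b -= 0.5 * np.mean(p - y)                  # ... and w.r.t. b
```

The same division of labor applies at full scale: the frozen VGG16 base supplies generic visual features, while only the task-specific layers adapt to the deepfake dataset.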
  • 10. Simra, Tabassum A Novel Knowledge-based Federated Deep Learning Approach for Enhancing Security and Privacy Preservation in IoT Edge Computing Applications

    Master of Science in Computer Engineering (MSCE), Wright State University, 2023, Computer Engineering

    The Internet of Things (IoT) infrastructure encompasses smart devices and real-time sensors connected through the Internet, facilitating the exchange of large datasets among these devices. This interconnected network of IoT sensors generates a significant volume of data for processing and analysis by embedded IoT Edge Computing systems. IoT Edge Computing systems enable efficient real-time analysis and data communications, and IoT Edge Computing emerges to enhance the overall efficiency of IoT applications, making them adept at handling the dynamic demands of AI-based and large data-driven applications. The integration of IoT Edge Computing introduces several unique research challenges. Unfortunately, IoT Edge Computing applications are increasingly deployed in non-secure physical environments, rendering them vulnerable to new cyberattacks. These cybersecurity threats can compromise the security and privacy of sensitive real-time information, potentially leading to life-threatening situations. In addition to the limited computational and storage capabilities of edge devices, which demand resource-efficient algorithms for real-time analysis, the distributed nature of edge devices and the diverse range of data they generate pose challenges regarding data integrity, confidentiality, privacy, and availability. Therefore, detecting anomalies in IoT Edge Computing is crucial for ensuring the integrity, confidentiality, and availability of essential real-time and life-critical applications. In this thesis, we address the above-mentioned challenges by proposing a novel hybrid knowledge-based federated learning approach. This approach enhances the security and efficiency of IoT Edge Computing applications and considers the specific constraints and requirements of the edge environment. By integrating knowledge distillation and federated learning techniques, our model ensures optimal resource utilization while maintaining robust security and privacy protocols.
To execute (open full item for complete abstract)

    Committee: Fathi Amsaad Ph.D. (Advisor); Wen Zhang Ph.D. (Committee Member); Huaining Cheng Ph.D. (Committee Member) Subjects: Computer Engineering
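The federated side of the approach above typically aggregates client updates with FedAvg: a dataset-size-weighted average computed on the server, so raw edge data never leaves the device. A generic sketch (not the thesis's model; the knowledge-distillation component is omitted):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client model parameters weighted by
    local dataset size. Only parameters travel to the server, which is
    the privacy-preserving core of federated learning."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

In practice this runs once per communication round over whichever edge devices participated in that round.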
  • 11. PATEL, GAURANG SECURING ADVERSARIAL MACHINE LEARNING IN MEDICAL IMAGING APPLICATIONS

    Master of Computer and Information Science, Cleveland State University, 2023, Washkewicz College of Engineering

    Deep learning has revolutionized several fields, including medical image processing, over the past decade. Convolutional Neural Networks (CNNs) can now perform many image processing tasks better than humans, and as a result are increasingly used to automate the diagnosis of life-threatening diseases, performing complex image classification tasks with greater accuracy and output quality. However, the recent discovery of adversarial attacks poses a significant threat to the safety and accuracy of CNNs: they are vulnerable to perturbations of the input image that are imperceptible to human eyes yet lead to misclassification of the model output. This research proposes a novel Super Resolution Generative Adversarial Network-based approach to improve the classification robustness of CNNs against adversarial attacks, using an MRI dataset as an example. The robustness of the proposed network model is compared with existing state-of-the-art models in the field. The experimental results demonstrate that the proposed approach improves CNN model robustness by 95% against adversarial attacks when compared to state-of-the-art approaches such as context-aware models and conventional CNNs.

    Committee: SATHISH KUMAR (Committee Chair); HONGKAI YU (Committee Member); JANCHE SANG (Committee Member) Subjects: Artificial Intelligence
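The imperceptible adversarial perturbations discussed above are often illustrated with the Fast Gradient Sign Method (FGSM), shown here on a toy linear classifier (a generic attack sketch for intuition, not the thesis's SRGAN defense or its threat model):

```python
import numpy as np

def fgsm(x, grad_x, eps=0.03):
    """Fast Gradient Sign Method: nudge every input component by +/- eps
    in the direction that increases the loss."""
    return x + eps * np.sign(grad_x)

# Linear classifier score = w . x; for logistic loss on true label y = 1,
# d(loss)/dx = (p - 1) * w, so the attack pushes against w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.4])
p = 1.0 / (1.0 + np.exp(-w @ x))
x_adv = fgsm(x, (p - 1.0) * w, eps=0.1)
```

Even this bounded, tiny perturbation strictly lowers the classifier's score for the true class; deep CNNs exhibit the same fragility at scale.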
  • 12. Ahmed, Jishan Cost-Aware Machine Learning and Deep Learning for Extremely Imbalanced Data

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2023, Data Science

    Many real-world datasets, such as those used for failure and anomaly detection, are severely imbalanced, with a relatively small number of failed instances compared to the number of normal instances. This imbalance often results in bias towards the majority class during learning, making mitigation a serious challenge. To address these issues, this dissertation leverages the Backblaze HDD data and makes several contributions to hard drive failure prediction. It begins with an evaluation of the current state-of-the-art techniques and the identification of existing shortcomings. Multiple facets of machine learning (ML) and deep learning (DL) approaches to these challenges are explored. The synthetic minority over-sampling technique (SMOTE) is investigated by evaluating its performance with different distance metrics and nearest neighbor search algorithms, and a novel approach that integrates SMOTE with Gaussian mixture models (GMM), called GMM SMOTE, is proposed to address various issues. Subsequently, a comprehensive analysis of different cost-aware ML techniques applied to disk failure prediction is provided, emphasizing the challenges in current implementations. The research also expands to explore a variety of cost-aware DL models, from 1D convolutional neural networks (CNN) and long short-term memory (LSTM) models to a hybrid model combining 1D CNN and bidirectional LSTM (BLSTM) approaches to exploit the sequential nature of hard drive sensor data. A modified focal loss function is introduced to address the class imbalance prevalent in the hard drive dataset. The performance of the DL models is compared to traditional ML algorithms, such as random forest (RF) and logistic regression (LR), demonstrating superior results and suggesting the effectiveness of the proposed focal loss function.
In addition to these efforts, this dissertation aims to provide a comprehensive understanding of hard drive longevity and the critical factors contrib (open full item for complete abstract)

    Committee: Robert C. Green II Ph.D. (Committee Chair); Liuling Liu Ph.D. (Other); Umar D Islambekov Ph.D. (Committee Member); Junfeng Shang Ph.D. (Committee Member) Subjects: Computer Science; Statistics
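The focal loss mentioned above down-weights easy majority-class examples so the rare failures dominate the gradient. A generic binary focal loss (the dissertation's exact modification is not reproduced; `alpha` and `gamma` values are illustrative):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.75):
    """Binary focal loss: the (1 - p_t)^gamma factor shrinks the
    contribution of well-classified (easy) examples, while alpha
    re-weights the rare positive class."""
    p_t = np.where(y == 1, p, 1.0 - p)         # prob. assigned to true class
    a_t = np.where(y == 1, alpha, 1.0 - alpha) # class re-weighting
    return -np.mean(a_t * (1.0 - p_t) ** gamma * np.log(p_t + 1e-12))
```

With gamma = 0 and alpha = 0.5 this reduces to ordinary (scaled) cross-entropy; increasing gamma progressively mutes the abundant healthy-drive examples.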
  • 13. Hopkins, Nicholas Data Driven Video Source Camera Identification

    Doctor of Philosophy (Ph.D.), University of Dayton, 2023, Engineering

    Given a set of video imagery from unknown device provenance, video-based source camera identification (V-SCI) refers to a task of identifying which device collected the imagery. V-SCI techniques predominantly leverage photo response non-uniformity (PRNU) patterns extracted from digital video for device identification decisions. PRNU patterns function as device fingerprints and SCI methods using PRNU from digital still imagery (I-SCI) are relatively mature; however, advancements in video processing, namely electronic image stabilization (EIS) algorithms, degrade video extracted PRNU distinctiveness yielding a significant obstacle toward extending I-SCI performance to EIS processed video datasets. We provide a new, more relevant PRNU dataset, UDAYTON23VSCI, for V-SCI benchmarking in contrast to current publicly available datasets. To address the EIS V-SCI challenge, we present a data-driven approach to exploit PRNU signals derived from EIS video via ``device-specific'' neural networks implemented with a novel PRNU image training and transfer learning strategy. Results implementing our device-specific network approach on UDAYTON23VSCI and a leading publicly available dataset confirm the advantages of our approach over state of the art SCI methods. We provide a new PRNU computation approach via Log-noise PRNU estimation which overcomes multiplicative noise constraints inherent to PRNU patterns in imagery. We show our Log-noise PRNU estimation approach outperforms the current widely accepted PRNU estimation approach based on maximum likelihood estimation (MLE) in V-SCI task thus eliminating the need for MLE in computing PRNU. Lastly, by removing MLE PRNU computation requirement, we show our Log-noise PRNU estimation approach is a key contribution toward realizing a fully data driven end-to-end (E2E) network design for tackling EIS V-SCI.

    Committee: Keigo Hirakawa (Advisor); Barath Narayanan (Committee Member); Partha Banerjee (Committee Member); Vijayan Asari (Committee Member) Subjects: Artificial Intelligence; Electrical Engineering
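PRNU fingerprints are conventionally estimated by averaging noise residuals across frames so that scene content cancels while the sensor's fixed pattern accumulates. A simplified sketch (a box-filter denoiser stands in for the wavelet denoiser usual in PRNU work; this is not the dissertation's Log-noise estimator):

```python
import numpy as np

def noise_residual(img, k=3):
    """Noise residual = image minus a denoised version of itself
    (here, a simple k x k box filter with edge padding)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    denoised = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            denoised += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    denoised /= k * k
    return img - denoised

def prnu_fingerprint(frames):
    """Average residuals over many frames: scene detail averages toward
    zero while the sensor's multiplicative pattern persists."""
    return np.mean([noise_residual(f.astype(float)) for f in frames], axis=0)
```

Identification then correlates a query video's residual against each candidate camera's fingerprint; EIS breaks the pixel alignment this correlation relies on, which is the obstacle the dissertation's device-specific networks target.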
  • 14. Siefring, Matthew Polarimetric Imagery for Object Pose Estimation

    Master of Science (M.S.), University of Dayton, 2023, Electrical Engineering

    Polarization imaging is a rich modality that describes the orientation of reflected optical radiance in a scene. Polarization has been shown to be useful for computer vision tasks by improving robustness to low visibility conditions, improving contrast between polarized and non-polarized objects, and providing shape information about polarized objects. However, properly applying polarimetric information to convolutional neural networks (CNNs) is an ongoing area of research. In this work, our goal is to explore new and existing methods of introducing polarimetric imagery to pretrained RGB intensity CNNs for the purpose of object pose estimation. As part of our research, we design and execute a controlled data collection where we measure the linear Stokes parameters at each point in a well-lit image. For each well-lit image, we generate a synthetic low-light image. We then develop a pipeline to generate 3D bounding box parameters for objects of interest in a semi-automated manner. Lastly, we use our dataset to create several deep-learning-based pose estimation models which utilize polarization information in differing ways. We compare the pose estimation performance of each network under varying illumination conditions.

    Committee: Bradley Ratliff (Committee Chair); Jason Kaufman (Committee Member); Eric Balster (Committee Member) Subjects: Electrical Engineering; Optics
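The linear Stokes parameters measured in the collection above map directly to the per-pixel degree and angle of linear polarization, the quantities usually fed to downstream networks:

```python
import numpy as np

def linear_polarization(s0, s1, s2):
    """Degree (DoLP) and angle (AoLP) of linear polarization from the
    linear Stokes parameters S0, S1, S2 measured at each pixel."""
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)   # radians, in (-pi/2, pi/2]
    return dolp, aolp
```

DoLP provides the polarized/unpolarized contrast and AoLP carries the shape cue, which is one common way of presenting polarimetric channels to a pretrained RGB network.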
  • 15. Tomczak, Nathaniel Convolutional Neural Networks for Indexing Transmission Electron Microscopy Patterns: a Proof of Concept

    Master of Engineering, Case Western Reserve University, 2023, Materials Science and Engineering

    Understanding phase transitions and their behaviors is critical in nuclear fuel applications. Transmission electron microscopy (TEM) diffraction is used to further the understanding of plutonium-zirconium alloys in metallic fuels, but indexing the resulting patterns is time-intensive for researchers. Convolutional neural networks (CNNs) have exhibited exceptional performance on classification tasks in other fields, and applying them to selected area electron diffraction (SAED) patterns has yielded better-than-random results. A CNN can classify between three phase groups, at 650 patterns per second, with a total accuracy of up to 83.74%. CNNs can also classify between two phase groups at 82.31% accuracy while discerning a difference between patterns of identical symmetry but distinct structure.

    Committee: Jennifer Carter (Advisor); Sunniva Collins (Committee Member); Laura Bruckman (Committee Member) Subjects: Computer Science; Materials Science
  • 16. Gidwani, Mishka Evaluating Artificial Intelligence Radiology Models for Survival Prediction Following Immunogenic Regimen in Brain Metastases

    Doctor of Philosophy, Case Western Reserve University, 0, Molecular Medicine

    Novel therapeutic regimens which spur the endogenous immune system to kill cancer cells, such as stereotactic radiosurgery (SRS) and immune checkpoint inhibition (ICI), are heterogeneously effective. Understanding causal factors of response is vital to guide risk assessment and treatment decisions. In this thesis, I evaluate the ability of three methods to prognosticate survival for brain metastases patients following SRS and ICI treatment. These include the clinically utilized response assessment in neuro-oncology for brain metastases (RANO-BM) protocol, as well as investigational computational methods such as radiomic feature analysis and convolutional neural network (CNN) image analysis. I find that easing the 10mm RANO-BM diameter threshold for measurable disease allows new lesions to be discovered as proof of progression in ICI-treated metastases. Further, I find that the trajectory of RANO-BM diameter can be more instructive for risk prediction than the ratio-change and that neither volume nor number of metastases, nor RANO-BM diameter can significantly predict survival until a year after treatment. Reproducing common radiomic methodology flaws observed in the published literature, I demonstrate that inconsistent partitioning, or the improper division of radiomic feature data into Training, Validation, Test, and External test sets, can provide a 1.4x performance boost to reported accuracy (AUROC) for predictive models. Additionally, I highlight how spurious correlations with biological variables can overstate the importance of radiomic features. Leveraging the conclusions from my radiomic reproduction study, I assess the ability of radiomic features and convolutional neural networks (CNNs) to predict overall survival in the largest ICI-treated brain metastases cohort assembled to date, comprising 175 patients from three institutions in two countries. 
I find that neither radiomic features nor any architecture of the survival AI model MetsSurv is capable of p (open full item for complete abstract)

    Committee: Jacob Scott (Advisor); Brian Rubin (Committee Chair); Elizabeth Gerstner (Committee Member); Anant Madabhushi (Committee Member); Jayashree Kalpathy-Cramer (Advisor); Nathan Pennell (Committee Member) Subjects: Artificial Intelligence; Computer Science; Immunology; Medical Imaging; Molecular Biology; Neurology; Oncology; Radiology
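The "inconsistent partitioning" flaw described above is avoided by splitting at the patient level, never the image level, so no patient's scans leak from Training into Test. A generic sketch (function and variable names are illustrative, not from the thesis):

```python
import random

def split_by_patient(patient_ids, test_frac=0.3, seed=0):
    """Partition samples at the patient level so no patient contributes
    data to both Training and Test -- the consistent partitioning whose
    violation can inflate reported AUROC."""
    patients = sorted(set(patient_ids))
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    test_patients = set(patients[:n_test])
    train_idx = [i for i, p in enumerate(patient_ids) if p not in test_patients]
    test_idx = [i for i, p in enumerate(patient_ids) if p in test_patients]
    return train_idx, test_idx
```

The same group-wise discipline applies to the Validation and External test sets; sampling rows at random instead is exactly the leak that produced the 1.4x AUROC inflation reported above.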
  • 17. Abhishek, Abhinav Cyberbullying Detection Using Weakly Supervised and Fully Supervised Learning

    Master of Computer and Information Science, Cleveland State University, 2022, Washkewicz College of Engineering

    Machine learning is a very useful tool for solving issues in multiple domains such as sentiment analysis, fake news detection, facial recognition, and cyberbullying detection. In this work, we leverage its ability to understand the nuances of natural language to detect cyberbullying, and further utilize it to detect the subject of cyberbullying, such as age, gender, ethnicity, and religion. We also build another layer to detect cases of misogyny in cyberbullying. In one experiment, we created a three-layered architecture that detects cyberbullying, then determines whether it is gender-based, and finally whether it is a case of misogyny. In each experiment we trained models with support vector machines, RNN-LSTM, BERT, and DistilBERT, and evaluated them using multiple performance metrics, such as accuracy, bias, mean square error, recall, precision, and F1 score, to assess each model more thoroughly in terms of bias and fairness. In addition to fully supervised learning, we used weakly supervised learning techniques to detect cyberbullying and its subject. Finally, we compared the performance of models trained with fully supervised and weakly supervised learning algorithms; this comparison further demonstrated that weak supervision can produce models that handle complex use cases such as cyberbullying. The thesis concludes with lessons learned, recommendations for future work, and concluding remarks.

    Committee: Sathish Kumar, Ph.D. (Committee Chair); Hongkai Yu, Ph.D. (Committee Member); Chansu Yu, Ph.D. (Committee Member) Subjects: Computer Science
  • 18. Sudalairaj, Shivchander Spatio-Temporal Analysis of EEG using Deep Learning

    MS, University of Cincinnati, 2022, Engineering and Applied Science: Computer Science

    In recent times, the field of EEG-BCI has seen tremendous research advances. Meanwhile, deep learning has been pushing the boundaries of what is possible in domains such as natural language and vision; it has removed the roadblocks of domain expertise and feature engineering and opened the door to end-to-end learning (from raw data to downstream task). The EEG-BCI field has recently applied deep learning to decode EEG and extract intent, mostly in the Motor-Imagery paradigm, but most current research takes either a spatial or a temporal approach to decoding EEG. In this study, we explore decoding EEG signals in a spatio-temporal manner. We take an approach akin to how humans would view and decode EEG readings: by treating the signal as a 2D image matrix. Utilizing the power of convolutional networks to capture local spatial features and transformers to capture global, long-term temporal dependencies, we propose an architecture that effectively combines the strengths of these two networks into an end-to-end spatio-temporal model capable of decoding Motor-Imagery intent from raw EEG signals.
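    Treating the EEG as a channels × time "image", convolving it, and then handing the feature map to a transformer can be traced at the shape level. A minimal sketch (the kernel size, filter count, and one-token-per-time-step convention are illustrative assumptions, not the thesis's exact configuration):

```python
def spatiotemporal_shapes(n_channels, n_samples, kernel=(3, 5), n_filters=16):
    """Trace tensor shapes through a CNN-then-transformer EEG decoder.

    The raw EEG is a 2D matrix (channels x time samples). A valid 2D
    convolution extracts local spatial features; the feature map is then
    reshaped into one token per remaining time step, which a transformer
    encoder consumes to model long-term temporal dependencies (the encoder
    preserves shape, so the token shape is also its output shape).
    """
    kh, kw = kernel
    h = n_channels - kh + 1          # spatial extent after valid convolution
    w = n_samples - kw + 1           # temporal extent after valid convolution
    conv_out = (n_filters, h, w)     # local spatial feature map
    tokens = (w, n_filters * h)      # (sequence length, embedding dimension)
    return {"input": (n_channels, n_samples), "conv": conv_out, "tokens": tokens}
```

    For a 22-channel, 1000-sample trial this yields a 996-token sequence, making explicit how the temporal axis survives the spatial convolution.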

    Committee: Anca Ralescu Ph.D. (Committee Member); Dan Ralescu Ph.D. (Committee Member); Kenneth Berman Ph.D. (Committee Member) Subjects: Computer Science
  • 19. Bai, Yongsheng Deep Learning with Vision-based Technologies for Structural Damage Detection and Health Monitoring

    Doctor of Philosophy, The Ohio State University, 2022, Civil Engineering

    This dissertation presents three main lines of research: applying deep learning methods with vision-based technologies to Structural Damage Detection (SDD), to Structural Health Monitoring (SHM), and to a progressive collapse study. Many goals of automation in SDD and SHM have been achieved, although there remains considerable room for further improvement and development. In the progressive collapse study, remote sensing technologies and data fusion are applied to a field experiment on a real building at the Central Campus of The Ohio State University. The major contributions are as follows. Several comprehensive experimental studies address automated SDD in extreme events using deep learning methods for processing 2D images. In the first study, a 152-layer Residual Network (ResNet) is utilized to identify multiple classes in eight SDD tasks, including identification of scene levels, damage levels, material types, etc. The proposed ResNet achieves high accuracy on each task, although the positions of the damage are not identifiable. In the second study, the existing ResNet and a segmentation network (U-Net) are combined into a new pipeline of cascaded networks for categorizing and locating structural damage; the results show that damage-detection accuracy is significantly improved compared to using a segmentation network alone. In the third and fourth studies, end-to-end networks are developed and tested as a new solution for directly detecting cracks and spalling in image collections from recent large earthquakes. One of the proposed networks achieves an accuracy above 67.6% for all tested images at various scales and resolutions, showing its robustness for these human-free detection tasks. Studies are also conducted with a pipeline to automatically track and measure displacements and vibrations of structures or structural components in laboratory and field experiments.
This novel framework (open full item for complete abstract)
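    The classify-then-segment cascade from the second study can be sketched as a two-stage pipeline: a classifier (ResNet-like) categorizes the image first, and only images labeled as damage are passed to the segmentation network (U-Net-like) for localization. The callables and class names below are placeholders, not the dissertation's code:

```python
def cascaded_damage_pipeline(image, classify, segment,
                             damage_classes=("crack", "spalling")):
    """Two-stage cascade sketch: categorize first, segment only damage.

    `classify` maps an image to a class label (a ResNet-like network in
    the study); `segment` maps an image to a pixel mask (a U-Net-like
    network). Skipping segmentation for undamaged images is what lets the
    cascade outperform running a segmentation network on everything.
    """
    label = classify(image)
    if label not in damage_classes:
        return label, None            # no damage found: no mask to compute
    return label, segment(image)      # pixel-level damage localization
```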

    Committee: Halil Sezen Dr. (Advisor); Farhang Pourboghrat Dr. (Committee Member); Rongjun Qin Dr. (Committee Member); Alper Yilmaz Dr. (Advisor) Subjects: Civil Engineering; Computer Science; Mechanics
  • 20. Zhong, Shaochen Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions

    Master of Sciences, Case Western Reserve University, 2022, EECS - Computer and Information Sciences

    Structured pruning methods, which deliver a densely pruned network, are among the most popular techniques in neural network pruning; most such methods prune the original network at the filter or layer level. Although these methods may provide immediate compression and acceleration benefits, we argue that the blanket removal of an entire filter or layer may result in undesired accuracy loss. In this paper, we revisit the idea of kernel pruning (pruning only one or several k × k kernels out of a 3D filter), an approach heavily overlooked in the context of structured pruning because it naturally introduces sparsity into filters within the same convolutional layer, making the remaining network no longer dense. We address this problem with a versatile grouped pruning framework: we first cluster the filters of each convolutional layer into equal-sized groups, prune the grouped kernels we deem unimportant from each filter group, then permute the remaining filters to form a densely grouped convolutional architecture (which also enables parallel computation) for fine-tuning. Specifically, we consult empirical findings from the literature on the Lottery Ticket Hypothesis to determine the optimal clustering scheme per layer, and develop a simple yet cost-efficient greedy approximation algorithm to determine which grouped kernels to keep within each filter group. Extensive experiments demonstrate that our method often outperforms comparable SOTA methods while needing less data augmentation, a smaller fine-tuning budget, and sometimes a much simpler procedure (e.g., one-shot vs. iterative).
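    The per-group greedy selection can be sketched at the level of kernel importance scores. Here each kernel is reduced to a single magnitude, and importance is summed across the group's filters, so every filter in a group drops the same kernel positions and the pruned group stays dense; this magnitude scoring rule is a simplification for illustration, not the paper's exact criterion:

```python
def prune_group_kernels(group, keep):
    """Greedy kernel pruning sketch for one filter group.

    `group` is a list of filters, each a list of per-kernel magnitudes
    (a stand-in for the summed |weights| of a k x k kernel). Scoring each
    kernel position by its total magnitude across the group and keeping
    the top `keep` positions removes the same kernels from every filter,
    so the result is a dense grouped convolution.
    """
    n_kernels = len(group[0])
    score = [sum(abs(f[k]) for f in group) for k in range(n_kernels)]
    ranked = sorted(range(n_kernels), key=lambda k: -score[k])
    keep_idx = sorted(ranked[:keep])                 # preserve kernel order
    pruned = [[f[k] for k in keep_idx] for f in group]
    return pruned, keep_idx
```

    Because the kept positions are shared group-wide, the pruned layer maps directly onto a grouped-convolution primitive rather than requiring sparse kernels.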

    Committee: Shuai Xu (Advisor); Mehmet Koyutürk (Committee Member); Harold Connamacher (Committee Member); Shuai Xu (Committee Chair) Subjects: Artificial Intelligence; Computer Science