Master of Science (MS), Wright State University, 2024, Computer Science
Graph Neural Networks (GNNs) have gained increasing popularity as tools for analyzing graph data in areas such as biology, knowledge graphs, social networks, and recommendation systems. However, their vulnerability to adversarial attacks - small, targeted manipulations of graph structure or node features - raises serious concerns about their reliability in real-world applications.
Existing defense strategies, such as adversarial training, edge filtering, low-rank approximation, and randomization-based methods, often suffer from high computational cost, poor scalability, or reduced clean-data performance. In contrast, the proposed approach integrates multi-hop relationships, applies adaptive regularization, and balances feature-based and structural embeddings, yielding improved performance under both clean and adversarial conditions; the embeddings learned from multi-hop relationships contribute directly to these gains.
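As a rough illustration of these ideas (not the thesis's actual architecture), the sketch below shows a single layer that mixes 1-hop and 2-hop feature propagation with a separate structural embedding through a learned gate, plus a simple L2 regularizer whose coefficient could be adapted during training. All class names, shapes, and the gating scheme are assumptions introduced for illustration only.

```python
# Illustrative sketch only: multi-hop propagation gated against a structural
# embedding, with a regularizer whose weight could be tuned adaptively.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 of a dense adjacency."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


class MultiHopGatedLayer(nn.Module):
    def __init__(self, in_dim: int, struct_dim: int, out_dim: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, out_dim)      # 1-hop feature path
        self.w2 = nn.Linear(in_dim, out_dim)      # 2-hop feature path
        self.ws = nn.Linear(struct_dim, out_dim)  # structural-embedding path
        self.gate = nn.Linear(3 * out_dim, 3)     # learned balance of the three paths

    def forward(self, x, adj_norm, struct_emb):
        h1 = self.w1(adj_norm @ x)                # 1-hop aggregation
        h2 = self.w2(adj_norm @ (adj_norm @ x))   # 2-hop aggregation
        hs = self.ws(struct_emb)                  # structure-only embedding
        alpha = torch.softmax(self.gate(torch.cat([h1, h2, hs], dim=1)), dim=1)
        out = alpha[:, :1] * h1 + alpha[:, 1:2] * h2 + alpha[:, 2:] * hs
        return F.relu(out)


if __name__ == "__main__":
    n, f, s = 6, 8, 4
    x = torch.randn(n, f)                  # node features
    struct_emb = torch.randn(n, s)         # e.g., a precomputed structural embedding
    adj = (torch.rand(n, n) > 0.7).float()
    adj = ((adj + adj.t()) > 0).float()    # make the graph undirected
    layer = MultiHopGatedLayer(f, s, 16)
    out = layer(x, normalize_adj(adj), struct_emb)
    lam = 0.01                             # could be adapted per epoch
    reg = lam * sum(p.pow(2).sum() for p in layer.parameters())
    print(out.shape, reg.item())
```

The learned gate is only one simple way to keep a tunable balance between feature-driven and structure-driven signals; the thesis may realize this balance differently.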
Extensive experiments on benchmark datasets (Cora, Citeseer, PubMed, Texas, Wisconsin) under various attack scenarios - DICE, MetaAttack, and random perturbations - compare the proposed method against baseline models (GCN, GAT, WRGCN, LinkX). The proposed model consistently outperforms the baselines on both clean and perturbed data. By addressing key challenges in adversarial machine learning, such as dataset consistency, hyperparameter tuning, and computational efficiency, this research provides a scalable, adaptable solution for deploying GNNs in sensitive applications such as fraud detection, cybersecurity, and recommendation systems.
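For context on the "random perturbations" setting, one simple way such a perturbation could be emulated is to flip a small random fraction of node pairs in a symmetric adjacency matrix before re-running inference, as sketched below. The function name, the budget parameter, and the dense-adjacency representation are illustrative assumptions and do not reproduce the DICE or MetaAttack procedures.

```python
# Illustrative sketch only: a random structural perturbation of a dense,
# symmetric adjacency matrix, used to probe robustness of a trained model.
import torch


def random_edge_flip(adj: torch.Tensor, budget: float, seed: int = 0) -> torch.Tensor:
    """Flip (add or remove) a random `budget` fraction of node pairs, keeping symmetry."""
    g = torch.Generator().manual_seed(seed)
    n = adj.size(0)
    mask = torch.triu((torch.rand(n, n, generator=g) < budget).float(), diagonal=1)
    mask = mask + mask.t()        # symmetric flip mask with zero diagonal
    return (adj + mask) % 2       # 0 -> 1 and 1 -> 0 wherever mask == 1


if __name__ == "__main__":
    adj = (torch.rand(8, 8) > 0.6).float()
    adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
    perturbed = random_edge_flip(adj, budget=0.05)
    changed = int((perturbed != adj).sum().item()) // 2
    print(f"flipped {changed} undirected edges")
```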
Committee: Lingwei Chen Ph.D. (Advisor); Cogan Shimizu Ph.D. (Committee Member); Michael Raymer Ph.D. (Committee Member)
Subjects: Computer Science