Learning Effective Features With Self-Supervision


2023, PhD, University of Cincinnati, Engineering and Applied Science: Computer Science and Engineering.
Deep learning techniques are increasingly used for decision support in a wide range of applications. However, training robust deep learning models remains challenging because labeled data are often scarce and their annotation is time-consuming and labor-intensive. Self-supervised learning is a feature representation learning paradigm that learns robust features from sparsely annotated datasets. It involves two stages: a pretext task and a downstream task. The model is typically pre-trained on the pretext task in an unsupervised manner, where the data itself provides the supervision, and is then fine-tuned on a real downstream supervised task. Although self-supervised learning can effectively learn robust latent feature representations and reduce human annotation effort, it relies heavily on well-designed pretext tasks. Studying effective pretext tasks is therefore desirable for learning better features and further improving model prediction performance for decision support. Within self-supervised learning, pretext tasks in the style of deep metric and contrastive learning have received increasing attention, as the learned distance representations capture similarity relationships among samples and improve the performance of various supervised and unsupervised learning tasks. In this dissertation, we survey recent state-of-the-art self-supervised learning methods and propose several new deep metric and contrastive learning strategies for learning effective features. First, we propose a new deep metric learning method for image recognition that learns an effective distance metric in both geometric and probabilistic space. Second, we develop a novel contrastive learning method based on the Bregman divergence, extending the contrastive loss function to a more generalized divergence form and improving the quality of the self-supervised learned feature representations.
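The Bregman-divergence extension can be illustrated with a minimal sketch: an InfoNCE-style contrastive loss in which the usual similarity score is replaced by a negated Bregman divergence. The dissertation's exact objective is not reproduced here, so the function names and the choice of the convex generator `phi` are illustrative assumptions.

```python
import numpy as np

def bregman_divergence(x, y, phi, grad_phi):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

def contrastive_loss(anchor, positive, negatives, phi, grad_phi, temperature=1.0):
    """InfoNCE-style loss where similarity = -Bregman divergence.

    A small divergence to the positive and large divergences to the
    negatives drive the loss toward zero.
    """
    logits = [-bregman_divergence(anchor, positive, phi, grad_phi) / temperature]
    logits += [-bregman_divergence(anchor, n, phi, grad_phi) / temperature
               for n in negatives]
    logits = np.array(logits)
    # numerically stable -log-softmax of the positive logit (index 0)
    m = logits.max()
    return float(-(logits[0] - m - np.log(np.exp(logits - m).sum())))

# With phi(v) = ||v||^2, the Bregman divergence reduces to the squared
# Euclidean distance, recovering a standard distance-based contrastive loss.
phi = lambda v: float(np.dot(v, v))
grad_phi = lambda v: 2.0 * v
```

With the squared-norm generator the divergence equals the squared Euclidean distance; other convex generators (e.g. negative entropy, giving a KL-type divergence) yield different geometries, which is the sense in which the loss is "more generalized."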
Additionally, we present a new collaborative self-supervised learning method for real radiology applications. The proposed method collaboratively learns robust latent feature representations from radiomic data in a self-supervised manner, reducing human annotation effort and benefiting disease diagnosis. We also propose a new joint self-supervised and supervised contrastive learning method that learns an enhanced multimodal feature representation by amalgamating complementary information across modalities and capturing information shared among similar subjects. Finally, we discuss future research directions centered on novel self-supervised learning approaches for domain adaptation and large language modeling.
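The supervised half of the joint objective can be sketched with a supervised contrastive term in which samples sharing a label act as positives. This is a generic SupCon-style formulation, not the dissertation's exact multimodal objective, and the fusion of modalities is omitted.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, all other samples
    with the same label are positives; everything else is a negative.
    """
    # L2-normalize so inner products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    per_anchor = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        positives = [j for j in others if labels[j] == labels[i]]
        if not positives:
            continue  # anchors without a same-label partner are skipped
        logits = sim[i, others]
        m = logits.max()
        log_denom = m + np.log(np.exp(logits - m).sum())
        # average -log-softmax over the anchor's positives
        per_anchor.append(-np.mean([sim[i, j] - log_denom for j in positives]))
    return float(np.mean(per_anchor))
```

Embeddings that cluster by label yield a lower loss than embeddings whose labels cut across clusters, which is the sense in which the term captures information shared among similar subjects.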
Anca Ralescu, Ph.D. (Committee Chair)
Kenneth Berman, Ph.D. (Committee Member)
Dan Ralescu, Ph.D. (Committee Member)
Lili He, Ph.D. (Committee Member)
Yizong Cheng, Ph.D. (Committee Member)
200 p.

Recommended Citations


  • Li, Z. (2023). Learning Effective Features With Self-Supervision [Doctoral dissertation, University of Cincinnati]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1703175261923853

    APA Style (7th edition)

  • Li, Zhiyuan. Learning Effective Features With Self-Supervision. 2023. University of Cincinnati, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=ucin1703175261923853.

    MLA Style (8th edition)

  • Li, Zhiyuan. "Learning Effective Features With Self-Supervision." Doctoral dissertation, University of Cincinnati, 2023. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1703175261923853

    Chicago Manual of Style (17th edition)