Files
main.pdf (28.54 MB)
Interpreting and Diagnosing Deep Learning Models: A Visual Analytics Approach
Author
Wang, Junpeng
ORCID® Identifier
http://orcid.org/0000-0002-1130-9914
Permalink:
http://rave.ohiolink.edu/etdc/view?acc_num=osu1555499299957829
Abstract Details
Year and Degree
2019, Doctor of Philosophy, Ohio State University, Computer Science and Engineering.
Abstract
Recently, Deep Neural Networks (DNNs) have demonstrated superhuman capabilities in solving problems across a wide range of applications. The outstanding performance of these models shows great potential for replacing human labor with artificial intelligence in the near future. Behind this promising picture, however, deep learning models often work as black boxes: the people using them have a good sense of the inputs and outcomes, but little knowledge of the complicated working mechanisms in between. Given that DNN models are not infallible, deploying them in real-world applications can put humans in danger, especially in safety-critical applications such as medical diagnosis and autonomous driving. The internal working mechanisms of DNNs are not easily interpretable. A DNN approximates a very complex function through an intricate combination of numerous linear and non-linear functions, organized as multiple computational layers. The more layers a DNN has (i.e., the deeper it is), the more powerful it can be. Although each individual linear or non-linear function is simple, their combination can be exponentially complicated. This is the source of DNNs' expressive power, and also the reason DNNs are hard to interpret: the data transformations in high-dimensional spaces can easily exceed humans' interpretation capability. The emerging field of Explainable Artificial Intelligence (XAI) aims to open the black boxes of DNNs, and many interpretation techniques have been proposed, e.g., sensitivity analysis and guided back-propagation. Focusing on a specific data instance of interest, these techniques interpret a DNN by examining how the model's output results from the corresponding input. Still, domain experts lack the ability to flexibly explore the detailed intermediate outcomes and to combine these piece-by-piece outcomes into insightful conclusions.
These abilities are essential for deepening experts' understanding of what is actually going on and for diagnosing potential deficiencies in their models. We believe combining XAI techniques with visual analytics is a promising solution: graphical interfaces with powerful interactivity construct a human-in-the-loop analysis pipeline and empower deep learning experts to conveniently examine DNNs at multiple levels of detail. This dissertation investigates how visual analytics can assist the process of interpreting, diagnosing, and potentially improving deep learning models. Two general solutions, a black-box approach and a white-box approach, are introduced through three design studies covering supervised, unsupervised, and reinforcement learning models. In the first study, we propose DeepVID, which brings together the power of deep learning and visual analytics to effectively interpret and diagnose supervised DNN classifiers. The second study focuses on a popular unsupervised deep generative model, Generative Adversarial Nets (GANs); we design GANViz to unveil the model's power in image generation. Finally, we present the design study of DQNViz, which demonstrates the training evolution of deep reinforcement learning models and contributes to potential improvements of Deep Q-Networks (DQNs). The dissertation concludes with actionable future research directions and a summary of our contributions.
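The sensitivity analysis mentioned in the abstract can be sketched, in broad strokes, as measuring how much each input feature influences a chosen class score. The toy one-layer model, the random weights, and the finite-difference gradient below are purely illustrative assumptions, not the dissertation's actual method or models:

```python
import numpy as np

# Minimal sketch of sensitivity analysis on a hypothetical toy model.
# Real XAI tools backpropagate through full DNNs instead of using
# finite differences on a single linear layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4))    # toy model: 10 input features -> 4 classes

def model(x):
    """Linear layer followed by softmax: returns class probabilities."""
    z = x @ W
    e = np.exp(z - z.max())
    return e / e.sum()

def saliency(x, cls, eps=1e-4):
    """Central finite-difference gradient of the class score w.r.t. each input."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (model(xp)[cls] - model(xm)[cls]) / (2 * eps)
    return grad

x = rng.normal(size=10)              # one data instance of interest
cls = int(np.argmax(model(x)))       # the class the model predicts
s = saliency(x, cls)                 # one sensitivity value per input feature
```

Features with large-magnitude entries in `s` are those the model's prediction is most sensitive to; visual analytics systems typically render such values as heatmaps over the input.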
Committee
Han-Wei Shen (Advisor)
Huan Sun (Committee Member)
Hyunwoo Park (Committee Member)
Pages
188 p.
Subject Headings
Computer Science
Keywords
Deep Neural Networks; Deep Learning; Visual Analytics; Visualization; Model Interpretation and Diagnosis
Recommended Citations
APA Style (7th edition)
Wang, J. (2019). Interpreting and Diagnosing Deep Learning Models: A Visual Analytics Approach [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555499299957829
MLA Style (8th edition)
Wang, Junpeng. Interpreting and Diagnosing Deep Learning Models: A Visual Analytics Approach. 2019. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1555499299957829.
Chicago Manual of Style (17th edition)
Wang, Junpeng. "Interpreting and Diagnosing Deep Learning Models: A Visual Analytics Approach." Doctoral dissertation, Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555499299957829
Document number:
osu1555499299957829
Download Count:
762
Copyright Info
© 2019, some rights reserved.
Interpreting and Diagnosing Deep Learning Models: A Visual Analytics Approach by Junpeng Wang is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. Based on a work at etd.ohiolink.edu.
This open access ETD is published by The Ohio State University and OhioLINK.