Degree
Doctor of Philosophy (PhD)
Department
Computer Science
Document Type
Dissertation
Abstract
Deep neural networks learn a wide range of features from the input data. These features take many different forms, from structural to textural, and can be highly scale invariant. The complexity of these features also differs from layer to layer. Much like in the human brain, this behavior in deep neural networks can be used to cluster and separate classes. Applicability in deep neural networks is a quantitative measurement of a network's ability to differentiate between clusters in feature space. Applicability can measure the differentiation between clusters of sets of classes, single classes, or even within the same class. In this work we present our metric and methodology for applicability, and compute the applicability for different sets, classes, inputs, and octaves within a class. We also compute the applicability of features learned through adversarial training and show, to the best of our knowledge for the first time, how the features learned in a generator and a discriminator overlap. Additionally, we use applicability to build an unsupervised tree-like neural network, in which applicability guides branching and maximizes reuse of learned features. Lastly, we use progressive training of Generative Adversarial Networks (GANs) to show how specializing and transferring features can lead to more accurate segmentation results.
Date
5-14-2021
Recommended Citation
Collier, Edward, "Quantifying Feature Overlaps in Deep Neural Networks and Their Applications in Unsupervised Learning and Generative Adversarial Networks" (2021). LSU Doctoral Dissertations. 5550.
https://repository.lsu.edu/gradschool_dissertations/5550
Committee Chair
Mukhopadhyay, Supratik
DOI
10.31390/gradschool_dissertations.5550