Using Applicability to Quantifying Octave Resonance in Deep Neural Networks
Document Type
Conference Proceeding
Publication Date
1-1-2020
Abstract
Features in a deep neural network are only as robust as those present in the data provided for training. This robustness applies not just to the types of features and how they apply to various classes, known or unknown, but also to how those features apply to different octaves, or scales. Neural networks trained at one octave have been shown to be invariant to other octaves, while neural networks trained on large, robust datasets operate optimally only at the octaves that resonate best with the learned features. This may still discard features that existed in the data. Not knowing the octave to which a trained neural network is most applicable can lead to sub-optimal results during prediction due to poor preprocessing. Recent work has shown good results in quantifying how the learned features in a neural network apply to objects. In this work, we follow up on work in feature applicability, using it to quantify which octaves the features in a trained neural network resonate best with.
Publication Source (Journal or Book title)
Communications in Computer and Information Science
First Page
229
Last Page
237
Recommended Citation
Collier, E., DiBiano, R., & Mukhopadhyay, S. (2020). Using Applicability to Quantifying Octave Resonance in Deep Neural Networks. Communications in Computer and Information Science, 1333, 229-237. https://doi.org/10.1007/978-3-030-63823-8_28