Title
Deep neural networks for texture classification-A theoretical analysis
Document Type
Article
Publication Date
1-1-2018
Abstract
We investigate the use of Deep Neural Networks for the classification of image datasets in which texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations that help reduce the overall excess error rate. As a corollary to this analysis, we derive, for the first time, upper bounds on the VC dimension of Convolutional Neural Networks as well as of Dropout and Dropconnect networks, along with the relation between the excess error rates of Dropout and Dropconnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional than handwritten-digit or other object recognition datasets and hence more difficult for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity.
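The vanishing Relative Contrast mentioned above can be illustrated empirically. The following minimal Python sketch is not part of the paper; the uniform data distribution, sample size, and query construction are illustrative assumptions. It estimates RC = (D_max - D_min) / D_min for random points of increasing dimensionality and shows the contrast shrinking as the dimension grows.

    import numpy as np

    # Illustrative demo (not from the paper): Relative Contrast for random data.
    rng = np.random.default_rng(0)

    for d in (2, 10, 100, 1000, 10000):
        X = rng.uniform(size=(500, d))          # 500 sample points in [0, 1]^d
        q = rng.uniform(size=d)                 # a random query point
        dists = np.linalg.norm(X - q, axis=1)   # Euclidean distances to the query
        rc = (dists.max() - dists.min()) / dists.min()
        print(f"d = {d:6d}  relative contrast = {rc:.4f}")

Running this prints a Relative Contrast that drops from well above 1 in low dimensions toward values near 0 as d increases, which is the concentration-of-distances effect the abstract refers to.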
Publication Source (Journal or Book title)
Neural networks: the official journal of the International Neural Network Society
First Page
173
Last Page
182
Recommended Citation
Basu, S., Mukhopadhyay, S., Karki, M., DiBiano, R., Ganguly, S., Nemani, R., & Gayaka, S. (2018). Deep neural networks for texture classification-A theoretical analysis. Neural networks: the official journal of the International Neural Network Society, 97, 173-182. https://doi.org/10.1016/j.neunet.2017.10.001