Learning Sparse Feature Representations Using Probabilistic Quadtrees and Deep Belief Nets
Document Type
Article
Publication Date
6-1-2017
Abstract
Learning sparse feature representations is a useful tool for solving unsupervised learning problems. In this paper, we present three labeled handwritten digit datasets, collectively called n-MNIST, created by adding noise to the MNIST dataset, along with three labeled datasets formed by adding noise to the offline Bangla numeral database. We then propose a novel framework for the classification of handwritten digits that learns sparse representations using probabilistic quadtrees and Deep Belief Nets. On the MNIST, n-MNIST, and noisy Bangla datasets, our framework shows promising results and outperforms traditional Deep Belief Networks.
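As a point of reference for how a noisy digit dataset of this kind can be constructed, the sketch below corrupts a batch of grayscale digit images with additive white Gaussian noise. The noise model, parameters, and the stand-in image batch are assumptions for illustration only; the abstract does not specify the exact noise types or settings used to build n-MNIST or the noisy Bangla datasets.

```python
# Minimal sketch, assuming additive white Gaussian noise as the corruption
# model; the paper's actual noise types and parameters are not given here.
import numpy as np

def add_gaussian_noise(images, sigma=0.3, seed=0):
    """Corrupt images (floats in [0, 1]) with zero-mean Gaussian noise."""
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(loc=0.0, scale=sigma, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)

# Stand-in for MNIST-style digits: a batch of 28x28 grayscale images in [0, 1].
clean = np.random.default_rng(1).random((16, 28, 28))
noisy = add_gaussian_noise(clean, sigma=0.3)
print(clean.shape, noisy.shape)  # (16, 28, 28) (16, 28, 28)
```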
Publication Source (Journal or Book title)
Neural Processing Letters
First Page
855
Last Page
867
Recommended Citation
Basu, S., Karki, M., Ganguly, S., DiBiano, R., Mukhopadhyay, S., Gayaka, S., Kannan, R., & Nemani, R. (2017). Learning Sparse Feature Representations Using Probabilistic Quadtrees and Deep Belief Nets. Neural Processing Letters, 45 (3), 855-867. https://doi.org/10.1007/s11063-016-9556-4