Properties of a new adaptive sampling method with applications to scalable learning

Document Type

Article

Publication Date

11-18-2015

Abstract

Scalability has become an increasingly important issue for data mining and knowledge discovery in the big data era. Random sampling can be used to tackle the problem of scalable learning. Adaptive sampling is superior to traditional batch sampling because it typically requires a much smaller sample size, and hence offers better efficiency, while still guaranteeing a specified level of estimation accuracy and confidence. In recent work, a new adaptive sampling method was developed by the author and colleagues, which has since been applied to build an efficient, scalable boosting algorithm. This paper presents new results on the theoretical analysis of the proposed sampling method. Specifically, for controlling absolute error, a bound is established on the probability that the new sampling method stops too early. It is also shown that the criterion function for controlling relative error is in fact a convex function. A new variant of the sampling method is presented as well. Empirical simulation results indicate that both the new variant and the original algorithm often use a significantly smaller sample size (i.e., the number of sampled instances) while maintaining competitive accuracy and confidence compared with batch sampling. An empirical study is also reported, showing efficient classification of Web advertisements using a sampling-based ensemble learner.
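The abstract describes adaptive sampling that stops as soon as a desired absolute error and confidence level are guaranteed, rather than fixing the sample size in advance. The paper's own criterion function is not reproduced here; the Python sketch below illustrates the general idea with a standard empirical-Bernstein stopping rule, in which the confidence half-width shrinks with the observed variance. The function name `adaptive_mean`, the union-bound schedule for the per-step confidence, and all parameter defaults are illustrative assumptions, not the authors' method.

```python
import math


def adaptive_mean(draw, epsilon=0.05, delta=0.05, min_n=30, max_n=10**6):
    """Adaptively sample a [0, 1]-valued source until an empirical-Bernstein
    confidence half-width falls below `epsilon` (absolute-error control).

    Generic sketch of an adaptive stopping rule; NOT the paper's criterion.
    Returns the running mean and the number of samples actually drawn.
    """
    s, s2, n = 0.0, 0.0, 0
    while n < max_n:
        x = draw()
        s += x
        s2 += x * x
        n += 1
        if n < min_n:
            continue  # avoid stopping on too few observations
        mean = s / n
        var = max(s2 / n - mean * mean, 0.0)  # empirical variance
        # Union bound over stopping times: delta_n = 6*delta / (pi^2 * n^2)
        # keeps the total failure probability below delta (sum 1/n^2 = pi^2/6).
        delta_n = 6.0 * delta / (math.pi ** 2 * n * n)
        log_term = math.log(3.0 / delta_n)
        # Empirical-Bernstein half-width for values in [0, 1].
        half = math.sqrt(2.0 * var * log_term / n) + 3.0 * log_term / n
        if half <= epsilon:
            break  # the running mean is within epsilon w.p. >= 1 - delta
    return s / n, n
```

Because the half-width depends on the observed variance, low-variance sources stop early with far fewer samples than a worst-case batch bound would prescribe. For relative-error control, the stopping condition would instead compare the half-width against `epsilon` times the running mean.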

Publication Source (Journal or Book title)

Web Intelligence

First Page

215

Last Page

227
