On complexity issues of online learning algorithms

Yuan Yao*

*Corresponding author for this work

Research output: Contribution to journal · Journal article · peer-reviewed

27 Citations (Scopus)

Abstract

In this paper, some new probabilistic upper bounds are presented for the online learning algorithm proposed in , and more generally for linear stochastic approximations in Hilbert spaces. With these upper bounds, not only does one recover almost sure convergence, but one also relaxes the square-summability condition on the step size that appeared in the earlier work. Furthermore, two probabilistic upper bounds are given for an averaging process, both of which achieve the same rate with respect to sample size as "batch learning" algorithms, and one of which is tight in both the sample size and the regularization parameter.
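The abstract does not reproduce the algorithm itself, but the kind of linear stochastic approximation with iterate averaging it refers to can be sketched as follows. This is a minimal sketch, assuming a linear kernel on R^d and a regularized least-squares loss; the update rule, the polynomially decaying step size `eta0 / t**theta` (which need not be square summable when `theta <= 1/2`), and the parameter values are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def online_ridge(X, y, lam=0.01, eta0=0.5, theta=0.6):
    """Online regularized least squares (a linear stochastic approximation):
        w_{t+1} = w_t - eta_t * ((w_t . x_t - y_t) * x_t + lam * w_t),
    with step size eta_t = eta0 / t**theta and regularization lam.
    Returns both the last iterate and the running average of all iterates
    (the "averaging process").
    """
    n, d = X.shape
    w = np.zeros(d)       # current iterate w_t
    w_avg = np.zeros(d)   # running average (1/t) * sum_{s<=t} w_s
    for t in range(1, n + 1):
        x_t, y_t = X[t - 1], y[t - 1]
        eta_t = eta0 / t**theta
        # Stochastic gradient of the regularized square loss at (x_t, y_t).
        grad = (w @ x_t - y_t) * x_t + lam * w
        w = w - eta_t * grad
        # Incremental update of the running mean of the iterates.
        w_avg += (w - w_avg) / t
    return w, w_avg
```

A typical use pattern is to run the recursion once over the data stream and report the averaged iterate, which is the quantity the abstract's rate results concern:

```python
rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(2000, 3))
y = X @ w_star + 0.1 * rng.normal(size=2000)
w_last, w_avg = online_ridge(X, y)
```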

Original language: English
Article number: 5625646
Pages (from–to): 6470–6481
Number of pages: 12
Journal: IEEE Transactions on Information Theory
Volume: 56
Issue number: 12
DOIs
Publication status: Published – Dec 2010
Externally published: Yes

Keywords

  • Averaging process
  • online learning
  • regularization
  • reproducing kernel Hilbert space
  • stochastic approximation
