Learning gradients via an early stopping gradient descent method

Research output: Journal article › Academic research › peer-review

4 Citations (Scopus)

Abstract

We propose an early stopping algorithm for learning gradients. The motivation is to select "useful" or "relevant" variables by ranking them according to the norms of partial derivatives in suitable function spaces. In the algorithm, we use an early stopping technique, instead of the classical Tikhonov regularization, to avoid over-fitting. After stating dimension-dependent learning rates valid for any dimension of the input space, we present a novel error bound for the case where the dimension is large. The novelty lies in the power index of the learning rates being independent of the dimension of the input space.
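The paper's estimator operates in a reproducing kernel Hilbert space; the following is only a highly simplified sketch of the two ideas the abstract names, using a linear model rather than an RKHS. It runs gradient descent on the squared loss, stops early when held-out error stops improving (in place of Tikhonov regularization), and then ranks variables by the magnitude of each partial derivative of the fitted model. All function and parameter names here are illustrative, not from the paper.

```python
import numpy as np

def early_stopping_gradient_ranking(X, y, step=0.1, max_iter=1000,
                                    val_frac=0.3, patience=10, seed=0):
    """Toy sketch: fit f(x) = x @ w by gradient descent with early stopping,
    then rank variables by |w_j|, the norm of the j-th partial derivative
    of the fitted linear model."""
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.permutation(n)
    n_val = int(val_frac * n)
    val, tr = idx[:n_val], idx[n_val:]
    Xtr, ytr, Xval, yval = X[tr], y[tr], X[val], y[val]

    w = np.zeros(X.shape[1])
    best_w, best_err, bad = w.copy(), np.inf, 0
    for _ in range(max_iter):
        # Gradient of the empirical squared loss on the training split.
        grad = Xtr.T @ (Xtr @ w - ytr) / len(ytr)
        w -= step * grad
        # Early stopping: monitor error on the held-out split.
        err = np.mean((Xval @ w - yval) ** 2)
        if err < best_err:
            best_err, best_w, bad = err, w.copy(), 0
        else:
            bad += 1
            if bad >= patience:   # stop when validation error stalls
                break
    # Variables with large partial-derivative norms are deemed "relevant".
    ranking = np.argsort(-np.abs(best_w))
    return best_w, ranking
```

On data where the response depends on only a few coordinates, the ranking places those coordinates first, which is the variable-selection behaviour the abstract describes.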
Original language: English
Pages (from-to): 1919-1944
Number of pages: 26
Journal: Journal of Approximation Theory
Volume: 162
Issue number: 11
DOIs
Publication status: Published - 1 Nov 2010
Externally published: Yes

Keywords

  • Approximation error
  • Early stopping
  • Gradient learning
  • Reproducing kernel Hilbert spaces

ASJC Scopus subject areas

  • Analysis
  • Numerical Analysis
  • Mathematics(all)
  • Applied Mathematics
