Abstract
We propose an early stopping algorithm for learning gradients. The motivation is to select "useful" or "relevant" variables by ranking them according to the norms of partial derivatives in suitable function spaces. In the algorithm, we use an early stopping technique, instead of the classical Tikhonov regularization, to avoid overfitting. After stating dimension-dependent learning rates valid for any dimension of the input space, we present a novel error bound for the case where the dimension is large. The novelty is that the power index of the learning rates is independent of the dimension of the input space.
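The idea described in the abstract can be illustrated with a minimal finite-dimensional sketch (the paper itself works in reproducing kernel Hilbert spaces, so this is only an analogy): estimate a gradient vector by iterating plain gradient descent on a least-squares objective, stop after a fixed small number of iterations so that early stopping, rather than a Tikhonov penalty, acts as the regularizer, and then rank variables by the magnitudes of the estimated partial derivatives. All names and parameter values below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def rank_variables_early_stopping(X, y, lr=0.1, n_iter=30):
    """Illustrative sketch only: estimate a gradient vector w via
    gradient descent on the least-squares loss, stopping early
    (n_iter small) instead of adding a Tikhonov penalty, then rank
    input variables by |w_j| as a proxy for the norm of the j-th
    partial derivative."""
    n, d = X.shape
    w = np.zeros(d)  # estimated partial derivatives
    for _ in range(n_iter):  # early stopping: do NOT iterate to convergence
        grad = X.T @ (X @ w - y) / n  # gradient of the squared loss
        w -= lr * grad
    order = np.argsort(-np.abs(w))  # variables ranked by derivative magnitude
    return w, order
```

On synthetic data where only the first two coordinates influence the response, the ranking places those two variables first, which is the variable-selection behavior the abstract describes.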
| Original language | English |
| --- | --- |
| Pages (from-to) | 1919-1944 |
| Number of pages | 26 |
| Journal | Journal of Approximation Theory |
| Volume | 162 |
| Issue number | 11 |
| DOIs | |
| Publication status | Published - 1 Nov 2010 |
| Externally published | Yes |
Keywords
- Approximation error
- Early stopping
- Gradient learning
- Reproducing kernel Hilbert spaces
ASJC Scopus subject areas
- Analysis
- Numerical Analysis
- General Mathematics
- Applied Mathematics