Abstract
We describe a broadly applicable conservative error correcting model, N-fold Templated Piped Correction or NTPC ("nitpick"), that consistently improves the accuracy of existing high-accuracy base models. Under circumstances where most obvious approaches actually reduce accuracy more than they improve it, NTPC nevertheless comes with little risk of accidentally degrading performance. NTPC is particularly well suited for natural language applications involving high-dimensional feature spaces, such as bracketing and disambiguation tasks, since its easily customizable template-driven learner allows efficient search over the kind of complex feature combinations that have typically eluded the base models. We show empirically that NTPC yields small but consistent accuracy gains on top of even high-performing models like boosting. We also give evidence that the various extreme design parameters in NTPC are indeed necessary for the intended operating range, even though they diverge from usual practice.
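The record does not include the paper's templates, thresholds, or base models, so the following Python sketch only illustrates the three-stage pipeline the name spells out: N-fold cross-validation to collect realistic base-model errors, templated correction rules learned from those errors, and a correction pass piped after the base model. Everything concrete here (the `MajorityBase` toy model, the single-feature template, the `min_fixed` threshold, the zero-damage acceptance test, and the toy data) is an illustrative assumption in the conservative spirit of the abstract, not the published algorithm.

```python
from collections import Counter, defaultdict


class MajorityBase:
    """Toy base model: predicts the majority label seen for feature "f1"."""

    def fit(self, X, y):
        votes = defaultdict(Counter)
        for x, label in zip(X, y):
            votes[x["f1"]][label] += 1
        self.table = {v: c.most_common(1)[0][0] for v, c in votes.items()}
        self.default = Counter(y).most_common(1)[0][0]

    def predict(self, x):
        return self.table.get(x["f1"], self.default)


def nfold_error_corpus(X, y, n_folds=3):
    """Stage 1 ("N-fold"): train the base model on N-1 folds and tag the
    held-out fold, so the collected errors resemble the errors the base
    model makes on genuinely unseen data."""
    corpus = []
    for k in range(n_folds):
        train = [i for i in range(len(X)) if i % n_folds != k]
        model = MajorityBase()
        model.fit([X[i] for i in train], [y[i] for i in train])
        corpus += [(X[i], model.predict(X[i]), y[i])
                   for i in range(len(X)) if i % n_folds == k]
    return corpus


def learn_rules(corpus, min_fixed=2):
    """Stage 2 ("templated"): a candidate rule reads "if feature feat has
    value val and the base model predicted pred, output new".  A rule
    survives only if it fixes at least min_fixed errors and damages
    nothing in the error corpus -- an assumed, deliberately extreme
    acceptance criterion."""
    candidates = {(feat, val, pred): gold
                  for x, pred, gold in corpus if pred != gold
                  for feat, val in x.items()}
    rules = []
    for (feat, val, pred), new in candidates.items():
        matches = [(p, g) for x, p, g in corpus
                   if x.get(feat) == val and p == pred]
        fixed = sum(1 for p, g in matches if p != g and g == new)
        damaged = sum(1 for p, g in matches if g != new)
        if fixed >= min_fixed and damaged == 0:
            rules.append(((feat, val, pred), new))
    return rules


def pipe(model, rules, x):
    """Stage 3 ("piped"): base model first, then conservative corrections."""
    pred = model.predict(x)
    for (feat, val, cond_pred), new in rules:
        if x.get(feat) == val and pred == cond_pred:
            return new
    return pred


if __name__ == "__main__":
    # "f1" mostly determines the label, but "f2" flags the exceptions the
    # base model cannot see -- the kind of feature combination a
    # correction template can pick up.
    X = ([{"f1": "a", "f2": 0}] * 6 + [{"f1": "a", "f2": 1}] * 2
         + [{"f1": "b", "f2": 0}] * 2)
    y = ["X"] * 6 + ["Y"] * 2 + ["Y"] * 2
    rules = learn_rules(nfold_error_corpus(X, y))
    base = MajorityBase()
    base.fit(X, y)
    print(rules)                                    # [(('f2', 1, 'X'), 'Y')]
    print(pipe(base, rules, {"f1": "a", "f2": 1}))  # 'Y' (base alone: 'X')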
Field | Value
---|---
Original language | English
Pages (from-to) | 476-486
Number of pages | 11
Journal | Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)
Volume | 3248
Publication status | Published - 17 Oct 2005
Event | First International Joint Conference on Natural Language Processing (IJCNLP 2004), Hainan Island, China, 22 Mar 2004 → 24 Mar 2004
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science