Abstract
Transfer learning has been widely used in machine learning when training data is limited. However, class noise accumulated during learning iterations can lead to negative transfer, which adversely affects performance once more training data is used. In this paper, we propose a novel method to identify noisy samples for noise reduction. More importantly, the method can detect the point where negative transfer happens, so that transfer learning can terminate near the point of peak performance. In this method, we use the sum of Rademacher-distributed variables to estimate the class noise rate of the transferred data. Transferred data with a high probability of being wrongly labeled are removed to reduce noise accumulation. This noisy-sample reduction process can be repeated several times during transfer learning until the point where negative transfer occurs is found. Because our method can detect this point, it not only delays the onset of negative transfer, but also stops the transfer learning algorithm at the right place for the best performance gain. Evaluation on a cross-lingual/cross-domain opinion analysis dataset shows that our algorithm achieves state-of-the-art results. Furthermore, our system shows a monotonically increasing performance trend as more training data are used, avoiding the performance degradation that most transfer learning methods suffer once the training data reaches a certain size.
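The sketch below illustrates the general idea described in the abstract: source data are transferred in rounds, samples the current model judges likely to be mislabeled are dropped, and transfer stops once the estimated noise rate rises past a threshold. All names (`estimate_noise_rate`, `noise_aware_transfer`, `noise_threshold`) and the simple disagreement-based estimator are hypothetical assumptions for illustration, not the paper's actual Rademacher-sum formulation.

```python
import numpy as np


def estimate_noise_rate(disagreements):
    """Crude class-noise estimate: the mean of 0/1 disagreement flags.

    Each flag marks a transferred sample whose given label disagrees with the
    current model's prediction; summing these indicators and normalising gives
    a rough noise-rate proxy.  This is a hypothetical stand-in for the paper's
    Rademacher-sum estimator, not its actual formula.
    """
    return float(np.mean(disagreements))


def noise_aware_transfer(model, target_X, target_y, source_X, source_y,
                         rounds=5, noise_threshold=0.3):
    """Hypothetical iterative transfer loop with class-noise filtering.

    `model` is any classifier with sklearn-style fit/predict.  Source data is
    added in `rounds` chunks; transferred samples the current model disagrees
    with are dropped, and the loop stops once the estimated noise rate exceeds
    `noise_threshold` (a proxy for the onset of negative transfer).
    """
    train_X, train_y = target_X, target_y
    chunks = zip(np.array_split(source_X, rounds),
                 np.array_split(source_y, rounds))
    for chunk_X, chunk_y in chunks:
        model.fit(train_X, train_y)
        disagree = (model.predict(chunk_X) != chunk_y).astype(int)
        if estimate_noise_rate(disagree) > noise_threshold:
            break  # estimated class noise too high: likely negative transfer
        keep = disagree == 0  # keep only transfers the model agrees with
        train_X = np.vstack([train_X, chunk_X[keep]])
        train_y = np.concatenate([train_y, chunk_y[keep]])
    return model
```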
| Original language | English |
| --- | --- |
| Pages (from-to) | 185-197 |
| Number of pages | 13 |
| Journal | International Journal of Machine Learning and Cybernetics |
| Volume | 9 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 1 Feb 2018 |
Keywords
- Class noise detection
- Negative transfer
- Transfer learning
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition
- Artificial Intelligence