Abstract
To overcome the difficulty of Takagi–Sugeno–Kang (TSK) fuzzy modeling for large datasets, this study investigates scalable TSK (STSK) fuzzy-model training based on the core-set-based minimal-enclosing-ball (MEB) approximation technique. A specified L2-norm penalty-based ε-insensitive criterion is first proposed for TSK-model training, and it is shown that TSK fuzzy-model training under this criterion can be equivalently expressed as a center-constrained MEB problem. Building on this finding, a scalable fuzzy-model-training algorithm for large or very large datasets, called STSK, is then proposed using the core-set-based MEB-approximation technique. The proposed algorithm has two distinctive advantages over classical TSK fuzzy-model-training algorithms: its maximum space complexity during training does not depend on the size of the training dataset, and its maximum time complexity is linear in that size, as confirmed by extensive experiments on both synthetic and real-world regression datasets.
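To illustrate the core-set idea the abstract relies on, the following is a minimal Python sketch of a generic Bădoiu–Clarkson-style core-set MEB approximation, not the paper's STSK algorithm: the center-constrained reformulation of the ε-insensitive TSK training problem and any kernel-induced feature space are omitted, and the function names and parameters (`eps`, `max_rounds`, `iters`) are illustrative assumptions. The point of the sketch is that the core set grows only when some point falls outside the (1 + ε)-expanded ball, so memory scales with the core-set size rather than the dataset size, while each round touches the data only through a single linear pass.

```python
import numpy as np


def meb_of_coreset(points, iters=200):
    """Approximate the MEB of a small core set with the simple
    Badoiu-Clarkson update c <- c + (p - c) / (t + 1), where p is the
    point currently farthest from the center c."""
    c = points.mean(axis=0)
    for t in range(1, iters + 1):
        d = np.linalg.norm(points - c, axis=1)
        c = c + (points[np.argmax(d)] - c) / (t + 1)
    r = np.max(np.linalg.norm(points - c, axis=1))
    return c, r


def coreset_meb(data, eps=1e-2, max_rounds=50):
    """Core-set-based (1 + eps)-approximation of the MEB of `data`:
    while some point lies outside the (1 + eps)-expanded ball of the
    current core set, add the farthest violator to the core set and
    re-solve the MEB on the core set only."""
    core_idx = [0, len(data) // 2]            # arbitrary initial core set
    for _ in range(max_rounds):
        c, r = meb_of_coreset(data[core_idx])
        dist = np.linalg.norm(data - c, axis=1)   # one linear pass over the data
        worst = int(np.argmax(dist))
        if dist[worst] <= (1.0 + eps) * r:        # all points inside expanded ball
            break
        core_idx.append(worst)                    # farthest violator joins core set
    return c, r, core_idx


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 5))             # large synthetic dataset
    center, radius, core = coreset_meb(X, eps=0.05)
    print(f"core-set size: {len(core)}, radius: {radius:.3f}")
```

In a CVM-style formulation such as the one the paper builds on, the dual variables of the training problem play the role of the ball, so only the points in the core set carry nonzero weight; that is the mechanism behind the dataset-size-independent space bound claimed above.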
| Original language | English |
| --- | --- |
| Article number | 5629439 |
| Pages (from-to) | 210-226 |
| Number of pages | 17 |
| Journal | IEEE Transactions on Fuzzy Systems |
| Volume | 19 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 1 Apr 2011 |
Keywords
- ε-insensitive training
- Core set
- core vector machine (CVM)
- minimal-enclosing-ball (MEB) approximation
- Takagi–Sugeno–Kang (TSK) fuzzy modeling
- very large datasets
ASJC Scopus subject areas
- Control and Systems Engineering
- Computational Theory and Mathematics
- Artificial Intelligence
- Applied Mathematics