Non-negative latent factor (NLF) models can efficiently acquire useful knowledge from high-dimensional and sparse (HiDS) matrices filled with non-negative data. The single latent factor-dependent, non-negative and multiplicative update (SLF-NMU) algorithm efficiently builds an NLF model on an HiDS matrix, yet it suffers from slow convergence. A momentum method, on the other hand, is frequently adopted to accelerate learning algorithms that depend explicitly on gradients, yet it is incompatible with algorithms that depend on gradients only implicitly, such as SLF-NMU. To build a fast NLF model, we first propose a generalized momentum method compatible with SLF-NMU. Based on it, we propose the single latent factor-dependent, non-negative, multiplicative and momentum-integrated update (SLF-NM2U) algorithm, which accelerates the building process of an NLF model and thereby yields a fast non-negative latent factor (FNLF) model. Empirical studies on six HiDS matrices from industrial applications indicate that, with the incorporated momentum effects, an FNLF model outperforms an NLF model in terms of both convergence rate and prediction accuracy for missing data. Hence, compared with an NLF model, an FNLF model is more practical in industrial applications.
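To make the idea concrete, the following is a minimal sketch of how a momentum term can be grafted onto a gradient-free multiplicative update for a non-negative factorization. This is an illustrative assumption, not the paper's SLF-NM2U algorithm: the factor names (`P`, `Q`), the momentum coefficient `gamma`, and the particular momentum form (velocity built from successive multiplicative-update displacements, followed by truncation to preserve non-negativity) are all hypothetical choices made for the sketch.

```python
import numpy as np

def fnlf_sketch(R, mask, k=2, iters=300, gamma=0.3, seed=0):
    """Approximate R ~= P @ Q.T on observed entries (mask == 1),
    keeping P and Q non-negative.

    Illustrative momentum-augmented multiplicative updates; NOT the
    exact SLF-NM2U algorithm from the paper.
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = rng.random((m, k)) + 0.1
    Q = rng.random((n, k)) + 0.1
    vP = np.zeros_like(P)          # velocity (momentum) terms
    vQ = np.zeros_like(Q)
    eps = 1e-12                    # guards division and non-negativity
    Rm = R * mask                  # observed entries only
    for _ in range(iters):
        # Standard multiplicative updates (no explicit gradient):
        # scale each factor by a ratio of non-negative terms.
        Rhat = (P @ Q.T) * mask
        P_mult = P * ((Rm @ Q) / (Rhat @ Q + eps))
        Rhat = (P_mult @ Q.T) * mask
        Q_mult = Q * ((Rm.T @ P_mult) / (Rhat.T @ P_mult + eps))
        # Generalized momentum: accumulate the displacement produced
        # by the multiplicative step, then truncate at eps so the
        # factors stay non-negative.
        vP = gamma * vP + (P_mult - P)
        vQ = gamma * vQ + (Q_mult - Q)
        P = np.maximum(P + vP, eps)
        Q = np.maximum(Q + vQ, eps)
    return P, Q
```

The truncation step is what makes the momentum "compatible" with a multiplicative learner in this sketch: the velocity may push a factor negative, and clipping restores the non-negativity that the multiplicative ratio would otherwise guarantee by construction.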