Recurrent Neural Networks (RNNs) have been used for sequence-related learning tasks, such as language and action learning, in the field of cognitive robotics. The gating mechanisms used in the LSTM and the GRU perform well at remembering long-term dependencies. To better mimic the neural dynamics of cognitive processes, however, the Multiple Time-scale (MT) RNN uses a hierarchical organization of memory updates that resembles human cognition. Since the MT feature is typically combined with either a vanilla RNN or various gating mechanisms, its effect on the state updates and on training is still not fully understood. We therefore conduct a comparative experiment on two MT recurrent neural network models, the Multiple Time-scale Recurrent Neural Network (MTRNN) and the Multiple Time-scale Gated Recurrent Unit (MTGRU), for action sequence learning in robotics. The experiment shows that the MTRNN model is suitable for learning tasks with low requirements on long-term dependencies because of its low computational cost, whereas the MTGRU model is appropriate for learning long-term dependencies. Furthermore, because the MT mechanism and the GRU gates serve overlapping functions, we also propose a simplified MTGRU model, named the Multiple Time-scale Single-Gate Recurrent Unit (MTSRU), which reduces computational cost while achieving performance similar to the original model.
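To make the multiple time-scale idea concrete, the following is a minimal sketch of the leaky-integrator update commonly used in MTRNN formulations; variable names, sizes, and time constants here are illustrative assumptions, not values from this work:

```python
import numpy as np

def mtrnn_step(u, x, W_in, W_rec, tau):
    """One leaky-integrator step of a continuous-time RNN,
    the core of common MTRNN formulations:
        u_t = (1 - 1/tau) * u_{t-1} + (1/tau) * (W_rec @ h_{t-1} + W_in @ x_t)
        h_t = tanh(u_t)
    A larger time constant tau makes a unit update more slowly;
    assigning small tau to one group of units and large tau to
    another yields the fast/slow hierarchy of the MT mechanism.
    All names and values are illustrative."""
    h_prev = np.tanh(u)
    u_new = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W_rec @ h_prev + W_in @ x)
    return u_new, np.tanh(u_new)

# Tiny example: 3 fast units (tau = 2) and 2 slow units (tau = 16)
rng = np.random.default_rng(0)
n_in, n_units = 4, 5
tau = np.array([2.0, 2.0, 2.0, 16.0, 16.0])
W_in = rng.normal(scale=0.1, size=(n_units, n_in))
W_rec = rng.normal(scale=0.1, size=(n_units, n_units))
u = np.zeros(n_units)
for t in range(10):
    x = rng.normal(size=n_in)
    u, h = mtrnn_step(u, x, W_in, W_rec, tau)
```

The gated variants discussed in the paper (MTGRU, MTSRU) add GRU-style gates on top of an update of this leaky-integrator form, which is why the paper notes an overlap between the two mechanisms.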