A digital twin maps a physical plant to a real-time digital representation and facilitates product design and decision-making. In this paper, we propose a novel digital-twin-enabled reinforcement learning approach and apply it to an autonomous driving scenario. To improve the data efficiency of reinforcement learning, which typically requires a large number of agent-environment interactions during training, we develop a digital-twin environment model that predicts the transition dynamics of the physical driving scene. Moreover, we propose a rollout prediction-compatible reinforcement learning framework that further improves training efficiency. The proposed framework is validated on an autonomous driving task focused on lateral motion control. Simulation results show that our method significantly speeds up the learning process and that the resulting driving policy achieves better performance than the conventional reinforcement learning approach, demonstrating the feasibility and effectiveness of the proposed digital-twin-enabled reinforcement learning method.
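
The abstract does not give implementation details, but the underlying idea of using a learned environment model to generate extra training rollouts can be illustrated with a generic Dyna-style sketch. Everything below (the toy chain environment, the tabular Q-learner, the memorised transition model standing in for a digital twin) is an illustrative assumption, not the paper's actual method:

```python
# Illustrative Dyna-style sketch (NOT the paper's method): a learned
# transition model plays the role of a digital twin, supplying extra
# "rollout" updates per real environment step to improve data efficiency.
import random

random.seed(0)

N_STATES = 5          # toy 1-D chain; reaching the last state yields reward 1
ACTIONS = [-1, +1]    # move left / right

def env_step(s, a):
    """Ground-truth ('physical') environment dynamics."""
    s2 = max(0, min(N_STATES - 1, s + a))
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}            # learned 'digital twin': (s, a) -> (s', r)
ALPHA, GAMMA, K = 0.5, 0.9, 10   # K = model rollouts per real step

def q_update(s, a, r, s2):
    best = max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])

s = 0
for _ in range(200):                       # real agent-environment interactions
    a = random.choice(ACTIONS)             # random exploration (off-policy)
    s2, r = env_step(s, a)
    q_update(s, a, r, s2)                  # learn from the real transition
    model[(s, a)] = (s2, r)                # fit the twin (here: memorisation)
    for _ in range(K):                     # extra rollouts in the twin only
        ms, ma = random.choice(list(model))
        ms2, mr = model[(ms, ma)]
        q_update(ms, ma, mr, ms2)
    s = 0 if s2 == N_STATES - 1 else s2    # reset episode at the goal

# Greedy policy extracted after training (goal state excluded).
policy = {st: max(ACTIONS, key=lambda b: Q[(st, b)]) for st in range(N_STATES - 1)}
print(policy)
```

Each real transition here yields K additional model-based updates, so the value function converges with far fewer physical interactions than model-free Q-learning alone; this is the same data-efficiency argument the abstract makes for the digital-twin environment model.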