Visual Simultaneous Localization and Mapping (Visual SLAM) has been studied extensively over the past decades, and many state-of-the-art algorithms achieve satisfactory performance in static scenarios. In dynamic scenarios, however, off-the-shelf Visual SLAM algorithms fail to localize the robot accurately. To address this problem, we propose a novel method that uses optical flow to distinguish dynamic feature points from the extracted features and eliminate them, using RGB images as the only input. The remaining static feature points are fed into the Visual SLAM algorithm for camera pose estimation. We integrate our method into the ORB-SLAM system and validate it on challenging dynamic sequences from the TUM dataset. The entire system runs in real time. Qualitative and quantitative evaluations demonstrate that our method significantly improves the performance of Visual SLAM in dynamic scenarios.
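
For concreteness, the following is a minimal sketch (assuming OpenCV) of one common way to realize the dynamic-point rejection step: features are tracked between consecutive RGB frames with pyramidal Lucas-Kanade optical flow, and tracks inconsistent with a single RANSAC-fitted epipolar geometry are rejected as dynamic. The function name, thresholds, and the epipolar-consistency criterion are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
import cv2

def split_static_dynamic(prev_gray, curr_gray, max_corners=500):
    """Separate tracked feature points into static and dynamic sets.
    Hypothetical criterion: tracks inconsistent with a single
    RANSAC-fitted epipolar geometry are treated as dynamic."""
    # Detect corners in the previous grayscale frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=7)
    if pts_prev is None:
        return np.empty((0, 2)), np.empty((0, 2))

    # Track the corners into the current frame with pyramidal
    # Lucas-Kanade optical flow.
    pts_curr, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts_prev, None)
    ok = status.ravel() == 1
    p0 = pts_prev.reshape(-1, 2)[ok]
    p1 = pts_curr.reshape(-1, 2)[ok]
    if len(p0) < 8:  # too few tracks to fit epipolar geometry
        return p1, np.empty((0, 2))

    # Fit a fundamental matrix with RANSAC: inliers move consistently
    # with the camera's ego-motion; outliers are flagged as dynamic.
    _F, mask = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC,
                                      ransacReprojThreshold=1.0,
                                      confidence=0.99)
    if mask is None:  # estimation failed; conservatively keep all points
        return p1, np.empty((0, 2))
    inlier = mask.ravel().astype(bool)
    return p1[inlier], p1[~inlier]  # (static points, dynamic points)
```

In the pipeline described above, only the returned static points would be passed on to the ORB-SLAM front end for camera pose estimation.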