This paper presents a feature-guided retexturing method that preserves the underlying texture distortion of the original image in the retextured images and videos. Existing image retexturing and synthesis methods handle texture distortion by manipulating inter-pixel distances in image space, and current techniques tend to destroy the underlying texture distortion of the original image, whether through improper distortion caused by manual mesh stretching or through unavoidable texture splitting during synthesis. Their long processing times, dominated by costly bilateral filtering, are also unacceptable. We propose to use corner features to naturally recover the underlying texture distortion present in the original images. Gradient-based depth map reconstruction and feature-guided optimization are then applied to carry out the distortion process. We achieve real-time HDR image/video retexturing with a bilateral grid and feature-guided texture-distortion optimization on the GPU with CUDA. Operating in the gradient domain, our feature-guided optimization produces realistic retexturing of HDR images while preserving fine texture distortion around corner features. Across multiple experiments, our method consistently delivers high-quality image and video retexturing in real time.
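For reference, the bilateral grid underlying the real-time filtering step can be sketched as the usual splat-blur-slice pipeline. The following is a minimal CPU sketch in NumPy/SciPy, not the paper's GPU-CUDA implementation; the function name and parameter defaults are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def bilateral_grid_filter(img, sigma_s=16, sigma_r=0.1):
    """Approximate bilateral filtering of a grayscale image in [0, 1]
    via a bilateral grid: splat -> blur -> slice (illustrative sketch)."""
    h, w = img.shape
    # Grid resolution: spatial axes downsampled by sigma_s, range axis by sigma_r.
    gh = int(np.ceil(h / sigma_s)) + 2
    gw = int(np.ceil(w / sigma_s)) + 2
    gd = int(np.ceil(1.0 / sigma_r)) + 2
    grid_v = np.zeros((gh, gw, gd))  # accumulated intensities
    grid_w = np.zeros((gh, gw, gd))  # accumulated weights

    # Splat: each pixel accumulates into its nearest 3D grid cell.
    ys, xs = np.mgrid[0:h, 0:w]
    gi = (ys / sigma_s + 0.5).astype(int)
    gj = (xs / sigma_s + 0.5).astype(int)
    gk = (img / sigma_r + 0.5).astype(int)
    np.add.at(grid_v, (gi, gj, gk), img)
    np.add.at(grid_w, (gi, gj, gk), 1.0)

    # Blur: a small 3D Gaussian over the coarse grid; this is the step
    # that parallelizes trivially on a GPU.
    grid_v = gaussian_filter(grid_v, sigma=1.0)
    grid_w = gaussian_filter(grid_w, sigma=1.0)

    # Slice: trilinear interpolation at each pixel's grid coordinates,
    # then homogeneous division by the interpolated weight.
    coords = np.stack([ys / sigma_s, xs / sigma_s, img / sigma_r])
    num = map_coordinates(grid_v, coords, order=1)
    den = map_coordinates(grid_w, coords, order=1)
    return np.where(den > 1e-8, num / np.maximum(den, 1e-8), img)
```

Because the grid is coarse in both space and range, the blur and slice touch far fewer cells than a full-resolution bilateral filter would, which is what makes the real-time HDR setting feasible.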