In light field imaging, one challenging task is to fuse the spatial and angular information with spectral information. Existing methods for the multi-modal and multi-spectral correspondence problem are descriptor-based algorithms, which are time-consuming and unable to handle the small-baseline issue in light fields. In recent years, deep-learning-based methods have demonstrated strong performance in many challenging computer vision areas. Inspired by this, we propose a learning-based method to transfer parallax information across channels in a light field. We exploit the spatial and angular information from two reference channels, together with the spatial information from the target channel, to predict the different views and finally reconstruct the target channel. Experimental results demonstrate that, compared with descriptor-based methods, our learning-based method is far less time-consuming and transfers parallax information effectively even when the parallax shift is very small.
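To make the idea of cross-channel parallax transfer concrete, here is a minimal toy sketch (not the paper's learned network): a brute-force shift search plays the role of parallax estimation on the reference channels, and the recovered shift is then applied to the target channel's central view to synthesize a novel view. All function names and the global-shift assumption are illustrative simplifications.

```python
import numpy as np

def estimate_global_shift(ref_view_a, ref_view_b, max_shift=5):
    # Toy stand-in for parallax estimation: search for the horizontal
    # shift that best aligns the two reference-channel views.
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.mean((np.roll(ref_view_a, s, axis=1) - ref_view_b) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def transfer_parallax(target_center, shift):
    # Apply the shift estimated on the reference channels to the
    # target channel's central view to synthesize its shifted view.
    return np.roll(target_center, shift, axis=1)
```

A learned version would replace the exhaustive search with a network that regresses sub-pixel, spatially varying disparity, which is what makes the small-baseline case tractable.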