Link prediction is an important application for social networks because it can infer potential links among network participants. Existing approaches largely build on the homophily principle, i.e., people with similar characteristics tend to befriend each other. As a result, they are ill-suited to inferring negative (hostile) links, which usually arise between people with different characteristics. Moreover, negative links often couple with positive links to form signed networks. In this paper, we therefore study the problem of disentangled link prediction (DLP) for signed networks, which comprises two separate tasks: inferring positive links and inferring negative links. Recently, representation learning methods have been proposed for link prediction because the entire network structure can be encoded in node representations. For the DLP problem, we propose to disentangle each node representation into two representations, using one for positive link prediction and the other for negative link prediction. Experiments on three real-world signed networks demonstrate that the proposed disentangled representation learning (DRL) method significantly outperforms alternatives on the DLP problem.
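The disentangling idea can be sketched as follows. This is a minimal illustration with random embeddings and a dot-product decoder, not the paper's actual DRL model; all sizes and names here are hypothetical assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each node's d-dimensional embedding is split into
# two halves -- one half is used to score positive links, the other
# to score negative links, so the two tasks are handled separately.
num_nodes, dim = 6, 8  # illustrative sizes only
Z = rng.standard_normal((num_nodes, dim))
Z_pos, Z_neg = Z[:, : dim // 2], Z[:, dim // 2 :]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score_positive(u, v):
    """Probability that (u, v) is a positive link (dot-product decoder)."""
    return sigmoid(Z_pos[u] @ Z_pos[v])

def score_negative(u, v):
    """Probability that (u, v) is a negative link, scored from the
    second half of the embedding, independently of the positive score."""
    return sigmoid(Z_neg[u] @ Z_neg[v])

print(score_positive(0, 1), score_negative(0, 1))
```

Because the two halves are separate parameters, a trained model can place a pair of nodes close in the positive subspace while keeping them apart in the negative subspace (or vice versa), which a single entangled embedding under the homophily assumption cannot express.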