Dialogue response generation is one of the most active topics in natural language processing, but there is still a long way to go before systems can generate human-like dialogues. A good evaluation method helps narrow the gap between machines and humans in dialogue generation. Unfortunately, current evaluation methods cannot measure whether a dialogue response generation system produces high-quality, knowledge-related, and informative dialogues. To identify and measure the information contained in dialogues, we propose a novel automatic evaluation metric. Drawing on the knowledge representation used in knowledge bases, we define heuristic rules to extract information triples from dialogue pairs, and we design an information matching method to estimate the probability that information exists in a dialogue. In experiments, our metric demonstrates its effectiveness in dialogue selection and model evaluation on the Reddit (English) and Weibo (Chinese) datasets.
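The two steps sketched above (triple extraction and information matching) can be illustrated with a toy example. The rules and function names below are hypothetical stand-ins for illustration only, not the heuristic rules or matching method actually proposed in this work: content-word windows serve as candidate (head, relation, tail) triples, and the matching score is the fraction of response triples sharing an element with some query triple.

```python
# Toy illustration of (1) heuristic triple extraction from an utterance and
# (2) a simple matching score for a dialogue pair. Hypothetical rules only;
# the paper's actual heuristics and matching method differ.

STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "on"}

def extract_triples(utterance):
    """Naive heuristic: every window of three consecutive content words
    is treated as a candidate (head, relation, tail) triple."""
    words = [w.lower().strip(".,!?") for w in utterance.split()]
    content = [w for w in words if w and w not in STOPWORDS]
    return [tuple(content[i:i + 3]) for i in range(len(content) - 2)]

def information_match(query, response):
    """Score in [0, 1]: fraction of response triples that share at least
    one element with a query triple (a crude proxy for the probability
    that the response carries information related to the query)."""
    q_triples = extract_triples(query)
    r_triples = extract_triples(response)
    if not r_triples:
        return 0.0
    q_elems = {e for t in q_triples for e in t}
    matched = sum(1 for t in r_triples if any(e in q_elems for e in t))
    return matched / len(r_triples)
```

For instance, a response that reuses the query's entities scores high (`information_match("The Eiffel Tower is in Paris", "Paris has the Eiffel Tower")` returns 1.0), while an unrelated response scores 0.0.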