Emerging self-ensembling methods have achieved promising semi-supervised segmentation performance on medical images by enforcing consistent predictions on unannotated data under different perturbations. However, this consistency penalizes only independent pixel-level predictions, so structure-level information in the predictions is not exploited during learning. To address this limitation, we propose a novel structure-aware entropy-regularized mean teacher model. Specifically, we first introduce the entropy minimization principle to the student network, encouraging it to produce high-confidence predictions for unannotated images. On this basis, we design a local structural consistency loss that encourages consistent inter-voxel similarities within the same local regions of the predictions from the teacher and student networks. To further capture global structural dependencies, we enforce global structural consistency by matching the weighted self-information maps of the two networks. In this way, our model minimizes the prediction uncertainty of unannotated images and, more importantly, captures local and global structural information as well as their complementarity. We evaluate the proposed method on a publicly available 3D left atrium MR image dataset. Experimental results demonstrate that our method achieves better segmentation performance than state-of-the-art approaches when annotated images are limited.
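The three unsupervised objectives described above can be sketched roughly as follows. This is a toy NumPy illustration under our own simplifying assumptions, not the authors' implementation: the function names are hypothetical, local regions are approximated by fixed-size blocks of flattened voxels, inter-voxel similarity is taken as a Gram matrix of class-probability vectors, and both structural losses use a simple mean-squared-error matching.

```python
import numpy as np

EPS = 1e-8  # numerical stability for log


def entropy_minimization(prob):
    """Mean voxel-wise entropy of softmax probabilities, shape (C, D, H, W).

    Minimizing this pushes the student toward high-confidence predictions.
    """
    return float(-(prob * np.log(prob + EPS)).sum(axis=0).mean())


def local_structural_consistency(p_student, p_teacher, block=8):
    """MSE between inter-voxel similarity (Gram) matrices computed within
    corresponding local blocks of the student and teacher predictions.

    Here a "local region" is approximated as a block of `block` consecutive
    voxels in the flattened volume (a simplifying assumption).
    """
    C = p_student.shape[0]
    vs = p_student.reshape(C, -1).T  # (N, C) voxel probability vectors
    vt = p_teacher.reshape(C, -1).T
    n = (vs.shape[0] // block) * block
    loss, count = 0.0, 0
    for i in range(0, n, block):
        a, b = vs[i:i + block], vt[i:i + block]
        gram_s = a @ a.T  # pairwise voxel similarities within the block
        gram_t = b @ b.T
        loss += float(((gram_s - gram_t) ** 2).mean())
        count += 1
    return loss / max(count, 1)


def global_structural_consistency(p_student, p_teacher):
    """MSE between the weighted self-information maps (-p * log p) of the
    student and teacher predictions over the whole volume."""
    info_s = -p_student * np.log(p_student + EPS)
    info_t = -p_teacher * np.log(p_teacher + EPS)
    return float(((info_s - info_t) ** 2).mean())
```

In this sketch the total unsupervised loss for an unannotated image would be a weighted sum of the three terms, with the teacher's predictions treated as fixed targets (no gradient flows through them), as is standard in mean teacher training.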