This paper presents a conceptual model for movement rehabilitation after traumatic brain injury (TBI) using virtual environments. This hybrid model integrates principles from ecological systems theory with recent advances in cognitive neuroscience, and supports a multilevel approach to both assessment and treatment. Performance outcomes at any stage of recovery are determined by the interplay of task, individual, and environmental/contextual factors. We argue that any system of rehabilitation should provide enough flexibility for task and context factors to be varied systematically, based on the current neuromotor and biomechanical capabilities of the performer or patient. Thus, designing and implementing treatment modalities requires an understanding of the brain systems that support learning at a given stage of recovery, and of the inherent plasticity of those systems. Virtual reality (VR) systems allow training environments to be presented in a highly automated, reliable, and scalable way. Presentation of these virtual environments (VEs) should permit movement analysis at three fundamental levels of behaviour: (i) the neurocognitive bases of performance (we focus in particular on the development and use of internal models for action, which support adaptive, on-line control); (ii) the movement forms and patterns that describe the patient's movement signature at a given stage of recovery (i.e., kinetic and kinematic markers of movement proficiency); and (iii) the functional outcomes of the movement. Each level of analysis can also map quite seamlessly to a different mode of treatment. At the neurocognitive level, for example, semi-immersive VEs can help retrain internal modelling processes by reinforcing the patient's sense of multimodal space (via augmented feedback), their position within it, and their ability to predict and control actions flexibly (via movement simulation and imagery training).
More specifically, we derive four key therapeutic environment concepts (or Elements) presented using VR technologies: Embodiment (simulation and imagery), Spatial Sense (augmenting position sense), Procedural (automaticity and dual-task control), and Participatory (self-initiated action). The use of tangible media/objects, force transduction, and vision-based tracking systems for the augmentation of gesture and physical presence is discussed in this context.