We present a self-organizing neural network model that acquires a lexicon incrementally, allowing new words to be learned without disrupting previously learned structure. The model consists of three major components. First, a word co-occurrence detector computes word transition probabilities and represents word meanings as context vectors. Second, the word representations are projected into a lower-dimensional space of fixed size. Third, the growing lexical map (GLM) self-organizes on the dimension-reduced word representations. The model is initialized with a subset of the GLM units and a subset of the lexicon, which enables it to capture the regularities of the input space and reduces the risk of catastrophic interference. During growth, new nodes are inserted to reduce the map's quantization error, and insertions occur only at previously unoccupied grid positions, thereby preserving the 2D map topology. We tested GLM on a portion of parental speech extracted from the CHILDES database, starting with 200 words distributed over 800 nodes. The model largely preserves the learned lexical structure when 100 new words are gradually added. Implications of the model for language acquisition in children are discussed.
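The growth mechanism described above (inserting new nodes at unoccupied grid positions near high-quantization-error units, so the 2D topology is never broken) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the grid size, vector dimensionality, learning rate, neighborhood radius, and insertion schedule are all assumed values chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = (6, 6)   # full 2D grid; only some positions start occupied (assumed size)
DIM = 3         # dimensionality of the reduced word vectors (assumed)

# Initialize the map with a small subset of occupied grid positions.
occupied = {(r, c): rng.normal(size=DIM) for r in range(2) for c in range(2)}
qerror = {pos: 0.0 for pos in occupied}   # accumulated quantization error per unit

def best_matching_unit(x):
    """Occupied position whose weight vector is closest to input x."""
    return min(occupied, key=lambda p: np.linalg.norm(occupied[p] - x))

def train_step(x, lr=0.2, radius=1):
    """One SOM update: move the BMU and its grid neighbors toward x."""
    bmu = best_matching_unit(x)
    qerror[bmu] += float(np.linalg.norm(occupied[bmu] - x))
    for pos, w in occupied.items():
        # Neighborhood on the fixed 2D grid (Chebyshev distance).
        if max(abs(pos[0] - bmu[0]), abs(pos[1] - bmu[1])) <= radius:
            occupied[pos] = w + lr * (x - w)

def insert_node():
    """Insert one node next to a high-error unit, but only at a still-
    unoccupied grid position, so the 2D map topology is preserved."""
    for worst in sorted(qerror, key=qerror.get, reverse=True):
        for dr, dc in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
            pos = (worst[0] + dr, worst[1] + dc)
            if (0 <= pos[0] < GRID[0] and 0 <= pos[1] < GRID[1]
                    and pos not in occupied):
                # Initialize the new weight from its occupied neighbors.
                neigh = [occupied[p] for p in occupied
                         if max(abs(p[0] - pos[0]), abs(p[1] - pos[1])) <= 1]
                occupied[pos] = np.mean(neigh, axis=0)
                qerror[pos] = 0.0
                qerror[worst] = 0.0
                return pos
    return None

# Train on random stand-ins for dimension-reduced word vectors,
# growing the map on a fixed schedule.
for t in range(300):
    train_step(rng.normal(size=DIM))
    if t % 30 == 29:
        insert_node()

print(len(occupied))   # 4 initial units + 10 inserted = 14
```

Because insertions are restricted to empty positions on the pre-defined grid, the inserted units always extend the existing neighborhood structure rather than rewiring it, which is one way the risk of interference with already-learned structure can be limited.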