The aim was to investigate a method of developing mobile robot controllers based on ideas about how plastic neural systems adapt to their environment by extracting regularities from the amalgamated behavior of inflexible (non-plastic) innate subsystems interacting with the world. Incremental bootstrapping of neural network controllers was examined. The objective was twofold: first, to develop and evaluate the use of prewired, or innate, robot controllers to bootstrap backpropagation learning for Multi-Layer Perceptron (MLP) controllers; second, to develop and evaluate a new MLP controller trained, in turn, on the output of a previously bootstrapped controller. The experimental hypothesis was that the MLPs would improve on the performance of the controllers used to train them. The performances of the innate and bootstrapped MLP controllers were compared in eight experiments on the tasks of avoiding obstacles and finding goals. Four quantitative measures were employed: the number of sensorimotor loops required to complete a task; the distance traveled; the mean distance from walls and obstacles; and the smoothness of travel. The overall pattern of results from statistical analyses of these measures supported the hypothesis: the MLP controllers completed the tasks faster, traveled more smoothly, and steered further from obstacles and walls than their innate teachers. In particular, a single MLP controller incrementally bootstrapped by an MLP subsumption controller was superior to all the others.
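The core bootstrapping idea, training an MLP by backpropagation to reproduce the sensorimotor mapping of an innate controller, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the innate rule (steer away from the side with the stronger proximity reading), the network size, and all training parameters are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def innate_controller(sensors):
    """Hypothetical innate rule: given left/right proximity readings in
    [0, 1], slow the wheel on the more obstructed side."""
    left, right = sensors
    return np.array([1.0 - left, 1.0 - right])  # (left, right) wheel speeds

# Collect teacher data: sensor readings paired with innate motor commands,
# standing in for sensorimotor loops logged while the innate controller runs.
X = rng.uniform(0.0, 1.0, size=(2000, 2))
Y = np.array([innate_controller(x) for x in X])

# Tiny MLP: 2 inputs -> 8 tanh hidden units -> 2 linear motor outputs.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 2)); b2 = np.zeros(2)
lr = 0.1

for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    P = H @ W2 + b2                   # predicted motor commands
    err = P - Y                       # error vs. the innate teacher
    # Backpropagate the mean-squared imitation error.
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)  # tanh derivative
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

def mlp_controller(sensors):
    """Bootstrapped controller: the trained MLP replaces its teacher."""
    return np.tanh(sensors @ W1 + b1) @ W2 + b2

# After training, the MLP closely imitates the innate controller on
# unseen sensor readings.
test = rng.uniform(0.0, 1.0, size=(200, 2))
mse = np.mean((np.array([mlp_controller(s) for s in test])
               - np.array([innate_controller(s) for s in test])) ** 2)
```

In the incremental variant, the trained `mlp_controller` would itself serve as the teacher for the next round of bootstrapping, so later networks learn from the behavior of earlier ones rather than directly from the innate rules.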