We propose a developmental method that enables a robot to identify, within a cluttered visual image, the visual locations associated with its own body, based on the concept of visuomotor predictors. A set of statistical predictors is trained by linear regression to predict the visual features at each visual location from proprioceptive input. By measuring each predictor's accuracy with the R² statistic, the algorithm determines which visual locations correspond to the robot's body parts. Visual features are extracted using biologically plausible visual motion processing models. We demonstrate that while both orientation-selective and motion-selective visual features can be used for self-identification, motion-selective features are more robust to changes in appearance. © 2012 IEEE.
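The core idea of the abstract can be sketched in a few lines: fit one linear regressor per visual location mapping proprioception to visual features, score each with R², and threshold to flag body locations. The data, dimensions, and threshold below are illustrative assumptions, not the paper's actual setup or feature extraction.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): per-location linear
# visuomotor predictors scored with R^2 to separate body from clutter.
rng = np.random.default_rng(0)

T, P, F, L = 200, 4, 3, 5  # timesteps, proprio dims, feature dims, visual locations
proprio = rng.normal(size=(T, P))

# Synthetic data: locations 0-1 depend linearly on proprioception ("body"),
# the remaining locations are unpredictable background clutter.
W_true = rng.normal(size=(L, P, F))
visual = np.stack(
    [proprio @ W_true[l] if l < 2 else rng.normal(size=(T, F)) for l in range(L)],
    axis=1,
)  # shape (T, L, F)

def r2_score(y, y_hat):
    """Coefficient of determination over all feature dimensions."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

X = np.hstack([proprio, np.ones((T, 1))])  # proprioceptive input plus bias term
scores = []
for l in range(L):
    Y = visual[:, l, :]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least-squares linear regression
    scores.append(r2_score(Y, X @ W))

threshold = 0.5  # assumed cutoff; predictable locations are labeled as body
body_locations = [l for l, s in enumerate(scores) if s > threshold]
print(body_locations)  # → [0, 1]
```

With noise-free linear data the body locations score R² ≈ 1, while clutter locations hover near 0, so any moderate threshold separates them.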