Letter recognition is the foundation of the human reading system, yet it has received little attention in computational modelling of single-word reading. Here we present a model that can be trained to recognise letters across a range of spatial transformations. When presented with degraded stimuli, the model makes letter confusion errors that correlate with human confusability data. Analyses of the model's internal representations suggest that a small set of learned visual feature detectors supports the recognition of both upper-case and lower-case letters in various fonts and transformations. We postulated that a damaged version of the model would behave in a manner similar to patients suffering from pure alexia. A summed error score generated by the model proved a strong predictor of pure alexic patients' reading times, outperforming simple word length and accounting for 47% of the variance. These findings are consistent with the hypothesis that impaired visual processing is key to understanding the strong word-length effects found in pure alexia. © 2012 Elsevier Ltd.