Computer vision has seen great advances recently due to artificial neural networks (ANNs) with multiple hidden layers; in particular, convolutional neural networks (CNNs) have shown great flexibility in the tasks they can perform. These networks are usually trained in a supervised manner using the back-propagation algorithm. Spiking neural networks (SNNs) constitute the third generation of neural network models and have been proven to be computationally more powerful than previous generations. They simulate biological neurons more closely, communicating the state of each unit through discrete pulses (spikes) rather than continuous activations. The event-driven nature of SNNs poses challenges for computer vision tasks: neuromorphic vision sensors, for example, generate data only when changes in their field of view are perceived, whereas standard cameras send full images. Furthermore, SNN training methodologies are not as mature as those of previous generations; critically, biologically plausible supervised learning algorithms are scarce.

In this thesis we explore event-based computer vision using spiking neurons as computation units; the main goal is to build a fully-spiking visual pipeline. First, we develop methods to transform standard images into spike representations. We then process the generated spike trains to extract salient features using biological principles. Finally, we develop supervised learning algorithms for SNNs and apply them in a computer vision context.
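As a concrete illustration of the image-to-spike transformation, one common approach is Poisson rate coding, where each pixel's intensity sets its per-timestep firing probability. The sketch below is a minimal example of this general technique, not necessarily the encoding developed in this thesis; the function name and parameter choices are illustrative assumptions.

```python
import numpy as np

def poisson_encode(image, duration=100, max_rate=0.2, rng=None):
    """Rate-code a grayscale image (values in [0, 1]) into Poisson spike trains.

    Returns a boolean array of shape (duration, *image.shape),
    where True marks a spike at that time step and pixel.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Per-timestep spike probability, proportional to pixel intensity.
    rates = np.clip(image, 0.0, 1.0) * max_rate
    # Draw independent Bernoulli samples at every time step.
    return rng.random((duration,) + image.shape) < rates

# Usage: brighter pixels fire more often; a zero-intensity pixel never fires.
img = np.array([[0.0, 0.5],
                [1.0, 0.1]])
spikes = poisson_encode(img, duration=200, rng=np.random.default_rng(0))
```

The resulting spike trains are event streams rather than dense frames, which is the representation downstream spiking layers consume.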