We are interested in the efficient numerical solution of parameter-dependent saddle point problems arising from discretisations of systems of PDEs with uncertain inputs. Two popular solution strategies for such problems are stochastic collocation mixed finite element methods and stochastic Galerkin mixed finite element methods. The former requires the solution of a long sequence of deterministic linear systems corresponding to different choices of the input parameters, while the latter requires the solution of a single extremely large linear system. Typically, a fine spatial discretisation is required, and this can lead to large discrete problems that are extremely time- and memory-intensive to solve. These are significant bottlenecks in the forward propagation of uncertainty in PDE models. Reduced basis methods (RBMs) seek to overcome these bottlenecks by reducing the size of the so-called high-fidelity problem through a projection onto a lower-dimensional space. The dimension of this space should be large enough that the corresponding reduced solution is sufficiently accurate, but small enough that the reduced problem is cheaper to solve. There are two novel contributions in this thesis. First, we develop an efficient RBM algorithm to reduce the computational costs associated with stochastic collocation mixed finite element methods. For saddle point problems, we need to construct two reduced spaces which must be compatible in order to yield a well-posed reduced problem. This is done in a potentially expensive offline stage. We also use a discrete empirical interpolation method (DEIM) to approximate the uncertain inputs, since these may depend non-affinely on the input parameters. We are then able to solve the sequence of linear systems at a dramatically lower cost. In addition, our algorithm is faster than the standard greedy algorithm for constructing the reduced spaces.
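To make the projection idea concrete, the following is a minimal sketch (not the thesis algorithm itself) of how a saddle point system is reduced once two compatible bases are in hand. The function name and the assumption that the bases `V` and `W` have orthonormal columns and satisfy a discrete inf-sup condition are illustrative choices, not details taken from the thesis.

```python
import numpy as np

def reduced_saddle_point_solve(A, B, f, g, V, W):
    """Project the saddle point system [[A, B^T], [B, 0]] [u; p] = [f; g]
    onto reduced bases V (primal variable) and W (dual variable) and solve
    the resulting small dense system.

    Assumption (illustrative): V and W have orthonormal columns and are
    compatible, i.e. the reduced problem satisfies a discrete inf-sup
    condition so the small system is nonsingular.
    """
    # Galerkin-projected blocks: the reduced operators are small and dense.
    Ar = V.T @ A @ V
    Br = W.T @ B @ V
    nw = W.shape[1]
    K = np.block([[Ar, Br.T],
                  [Br, np.zeros((nw, nw))]])
    rhs = np.concatenate([V.T @ f, W.T @ g])
    # Solve the reduced system, then lift back to the high-fidelity spaces.
    y = np.linalg.solve(K, rhs)
    nv = V.shape[1]
    return V @ y[:nv], W @ y[nv:]
```

The cost of the reduced solve scales with the basis dimensions rather than the high-fidelity dimensions, which is where the savings over repeated full solves come from.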
Second, we develop an efficient algorithm for solving the very large saddle point systems arising from stochastic Galerkin mixed finite element methods in cases where there is not enough memory to use standard Krylov subspace methods. Our approach uses ideas from rational Krylov subspace approximation to iteratively generate a reduced approximation space. This strategy provides a low-rank approximation to the solution of the stochastic Galerkin equations, which can be obtained at much lower cost, and with less memory, than with standard preconditioned iterative methods. Both of the new reduced basis algorithms proposed in this thesis allow for the efficient forward propagation of uncertainty in PDE models. We demonstrate the extent of the savings achieved through comprehensive numerical experiments.
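To illustrate why a low-rank representation saves memory, the following sketch (a generic truncated-SVD compression, not the rational Krylov construction used in the thesis) shows how a matricised stochastic Galerkin solution, with rows indexed by spatial degrees of freedom and columns by stochastic ones, can be stored as two thin factors. The function name and tolerance are illustrative assumptions.

```python
import numpy as np

def lowrank_truncate(X, tol=1e-8):
    """Compress a matricised solution X (spatial d.o.f. x stochastic d.o.f.)
    via a truncated SVD, keeping singular values above tol relative to the
    largest one. Returns thin factors Y, Z with X approximately Y @ Z.T.

    Storing Y and Z costs (rows + cols) * rank entries instead of
    rows * cols, which is the source of the memory savings when the
    solution is (numerically) low rank.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))   # numerical rank at the given tolerance
    return U[:, :r] * s[:r], Vt[:r, :].T
```

Iterative methods such as the rational Krylov approach described above build factors of this kind directly, without ever forming the full solution matrix.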