In this thesis we investigate the implementation of a Stochastic Local Volatility (SLV) model, using the Alternating Direction Implicit (ADI) scheme, on different High Performance Computing (HPC) platforms, namely CUDA and OpenMP. We start by analysing serial and parallel implementations of tridiagonal solvers and the optimisation techniques that can make them faster. These tridiagonal solvers are then used to speed up the ADI scheme and therefore the SLV model. To better analyse the factors affecting the performance of each tridiagonal solver and ADI scheme implemented in CUDA, we used the NVIDIA Visual Profiler. The results show that coalesced global memory access and conflict-free shared memory access are crucial to achieving good speedup. In the final part of the thesis we benchmark the fastest GPU version of the SLV model against a fully multi-threaded CPU implementation. The results show that the CUDA implementation and the OpenMP implementation with 8 threads achieve approximately 8x and 7.5x speedup, respectively, over the single-threaded SLV program. However, we believe that both the CUDA and the OpenMP versions of the SLV code can be optimised further.