The half precision (fp16) floating-point format, defined in the 2008 revision of the IEEE standard for floating-point arithmetic, and the more recently proposed half precision format bfloat16 are increasingly available in GPUs and other accelerators. While support for low precision arithmetic is mainly motivated by machine learning applications, general purpose numerical algorithms can benefit from it too, gaining in speed, energy efficiency, and reduced communication costs. Since the appropriate hardware is not always available, and one may wish to experiment with new arithmetics not yet implemented in hardware, software simulations of low precision arithmetic are needed. We discuss how to simulate low precision arithmetic using arithmetic of higher precision. We examine the correctness of such simulations and explain via rounding error analysis why a natural method of simulation can provide results that are more accurate than actual computations at low precision. We provide a MATLAB function chop that can be used to efficiently simulate fp16, bfloat16, and other low precision arithmetics, with or without the representation of subnormal numbers and with the options of round to nearest, directed rounding, stochastic rounding, and random bit flips in the significand. We demonstrate the advantages of this approach over defining a new MATLAB class and overloading operators.
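
As a rough illustration of the simulation approach, the following MATLAB sketch rounds the result of every elementary operation to the target format. The option fields shown (format, round, subnormal) are assumptions based on the description above, not a definitive specification of chop's interface.

    % Minimal sketch (assumed interface): configure chop for fp16 with
    % round to nearest and subnormal support, then round after each operation.
    options.format = 'h';        % target format: fp16 ('b' would select bfloat16)
    options.round = 1;           % round to nearest
    options.subnormal = 1;       % represent subnormal numbers
    chop([], options)            % store the options for subsequent calls

    a = chop(1/3);               % round the operands to the target format
    b = chop(3/7);
    c = chop(a + b);             % compute in higher precision, then round the result

The key point is that each intermediate quantity is rounded back to the low precision format, so the computation mimics native low precision arithmetic while the underlying operations run in MATLAB's working precision.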