The literature on bounded rationality and learning in macroeconomics has often used recursive algorithms to depict the evolution of agents' beliefs over time. In this thesis we assess this practice from an applied perspective, focusing on the use of such algorithms to compute forecasts of macroeconomic variables. Our analysis develops around three issues that we find to have been neglected in the literature: (i) the initialization of the learning algorithms; (ii) the determination and calibration of the learning gains, which are key parameters in the algorithms' specifications; and (iii) the choice of a representative learning mechanism. To approach these issues we establish an estimation framework under which we unify the two main algorithms considered in this literature, namely the least squares (LS) and the stochastic gradient (SG) algorithms. We then propose an evaluation framework that mimics the real-time process of expectation formation through learning-to-forecast exercises. To analyze the quality of the forecasts associated with the learning approach, we evaluate their forecasting accuracy and their resemblance to surveys, the latter taken as a proxy for agents' expectations. Although we take these two criteria as mutually desirable, it is not clear whether they are compatible with each other: whereas forecasting accuracy represents the goal of optimizing agents, resemblance to surveys is indicative of actual agents' behavior. We carry out these exercises using real-time quarterly data on US inflation and output growth covering a broad post-WWII period.

Our main contribution is to show that a proper assessment of the adaptive learning approach requires going beyond previous views in the literature on these issues. Regarding the initialization of the learning algorithms, we argue that the initial estimates need to be consistent with the learning process that was already in place at the beginning of our sample of data.
We find that previous initialization methods in the literature fail to meet this requirement, and we propose a new smoothing-based method that is not subject to this criticism. Regarding the learning gains, we distinguish between two possible rationales for their determination: as a choice made by agents, or as a primitive parameter of agents' learning-to-forecast behavior. Our results provide strong evidence in favor of the gain-as-primitive approach, hence favoring the use of survey data for their calibration. On the third issue, the choice of a representative algorithm, we challenge the view that learning should be represented by only one of the above algorithms: on the basis of our two evaluation criteria, our results suggest that using a single algorithm represents a misspecification. This motivates us to propose hybrid forms of the LS and SG algorithms, for which we find favorable evidence as representations of how agents learn. Finally, our analysis concludes with an optimistic assessment of the plausibility of adaptive learning, though conditional on an appropriate treatment of the above issues. We hope our results provide some guidance in that respect.
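As a concrete point of reference, the two recursions discussed above can be sketched in their standard constant-gain textbook forms, as commonly written in the adaptive learning literature. The sketch below is illustrative only: the data-generating process, the gain value, and all variable names are assumptions for the example, not the specification used in the thesis.

```python
import numpy as np

def sg_step(theta, x, y, gain):
    """Constant-gain stochastic gradient (SG) update:
    adjust beliefs theta in the direction of the forecast error."""
    forecast_error = y - x @ theta
    return theta + gain * forecast_error * x

def ls_step(theta, R, x, y, gain):
    """Constant-gain recursive least squares (LS) update:
    like SG, but the forecast error is rescaled by a running
    estimate R of the regressors' second-moment matrix."""
    forecast_error = y - x @ theta
    R_new = R + gain * (np.outer(x, x) - R)
    theta_new = theta + gain * np.linalg.solve(R_new, x) * forecast_error
    return theta_new, R_new

# Illustrative simulation (hypothetical parameters): both recursions
# track the coefficients of y_t = x_t' beta + noise.
rng = np.random.default_rng(0)
beta = np.array([1.0, 0.5])     # true coefficients (assumed)
gain = 0.05                     # constant learning gain (assumed)
theta_sg = np.zeros(2)
theta_ls = np.zeros(2)
R = np.eye(2)                   # initial second-moment estimate

for _ in range(5000):
    x = np.array([1.0, rng.normal()])   # constant plus one regressor
    y = x @ beta + 0.1 * rng.normal()   # observed outcome
    theta_sg = sg_step(theta_sg, x, y, gain)
    theta_ls, R = ls_step(theta_ls, R, x, y, gain)
```

With a constant (non-decreasing) gain, the estimates hover around the true coefficients rather than converging exactly; this perpetual responsiveness to new data is precisely what makes the initialization and the calibration of the gain consequential in practice.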