Robots stand at the heart of a techno-scientific revolution that promises to alter the way we conceive our society. Recent advances point towards a future in which artificial agents will become fully integrated into our social structures, becoming important actors in our lives. In this scenario, it will be critical for them to understand us in a human-like fashion and to assist us in our routines. We argue that collaboration between humans and robots is fostered by two cognitive skills: intention reading and trust. The former is the capacity to discern the goal driving another agent's actions, while the latter is the ability to evaluate another agent's trustworthiness. A robot endowed with these skills will be able to understand what kind and degree of assistance its partner needs during a collaboration. This thesis aims to advance the scientific understanding of trust-aware and intention-compliant support in human-machine interaction by presenting a robot learning architecture for collaborative intelligence based on the developmental robotics approach. We use probabilistic reasoning and a novel clustering algorithm that integrates multimodal social cues to infer the intention of the other agent, while estimating the partner's trustworthiness through a Bayesian network and a novel episodic memory system. These two skills are then combined to formulate collaborative action plans. We tested our models in human-robot interaction experiments involving joint manipulation tasks. The data we collected demonstrate the effectiveness of our original methods, the importance of computational robotic models of human trustworthiness and, finally, the superior performance of collaborations involving trust estimation over those based solely on goal prediction.
Our results show that the synergistic implementation of these cognitive skills enables the robot to collaborate in a meaningful way, with the intention reading model enabling correct goal prediction and with the trust component increasing the likelihood of a successful task outcome.
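The abstract does not specify the actual models, but the interplay it describes can be illustrated with a minimal sketch. The sketch below assumes, purely for illustration, (a) binary collaborative outcomes, so trust reduces to a Beta-Bernoulli posterior rather than the thesis's full Bayesian network, and (b) a small discrete goal set, so intention reading reduces to matching observed cues against goal signatures rather than the clustering algorithm over multimodal cues; all names are hypothetical.

```python
# Hedged sketch of the two skills and their combination.
# Assumptions (not from the abstract): binary outcomes, discrete goals,
# cue-set matching in place of multimodal clustering.
from collections import Counter


class Collaborator:
    def __init__(self, goals):
        # goals: mapping goal name -> set of cues typical of that goal
        self.goals = goals
        self.alpha = 1.0  # pseudo-count of successes (uniform trust prior)
        self.beta = 1.0   # pseudo-count of failures

    def record_outcome(self, success: bool) -> None:
        """Bayesian trust update from one collaborative episode."""
        if success:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        """Posterior mean estimate of the partner's reliability."""
        return self.alpha / (self.alpha + self.beta)

    def read_intention(self, cues):
        """Toy intention reading: the goal whose cue signature
        overlaps most with the observed cues."""
        scores = Counter()
        for goal, signature in self.goals.items():
            scores[goal] = len(signature & set(cues))
        return scores.most_common(1)[0][0]

    def plan(self, cues):
        """Combine both skills: assist toward the inferred goal, with
        the assistance level scaled by the current trust estimate."""
        goal = self.read_intention(cues)
        level = "full" if self.trust >= 0.5 else "cautious"
        return goal, level


robot = Collaborator({
    "hand_over_tool": {"reach", "gaze_at_tool"},
    "stack_blocks": {"grasp_block", "gaze_at_tower"},
})
for outcome in (True, True, False):   # two successes, one failure
    robot.record_outcome(outcome)
print(robot.plan(["reach", "gaze_at_tool"]))  # -> ('hand_over_tool', 'full')
```

The point of the sketch is the division of labor the abstract describes: intention reading selects *what* to assist with, while the trust estimate modulates *how much* autonomy the robot grants its partner when formulating the joint plan.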