This thesis presents the development of a valid and reliable tool for assessing the communication skills of undergraduate pharmacy students. The ultimate goal of the tool is to improve students' communication skills; however, addressing that goal is beyond the scope of the findings presented here and is left for future work. Instead, the aim of this thesis is to develop and evaluate the validity and reliability of a pharmacy-specific communication skills assessment tool, to be used in the teaching and assessment of undergraduate pharmacy students at all levels of study in the Manchester programme. Because studies of pharmacy undergraduates are limited, the approach taken to develop the tool, including its content, structure and evaluation, was informed largely by medical education studies. Paper 1 outlines the development of the tool and the process used to improve its validity, with the further intention of improving its inter-rater reliability. A number of assessors were asked to mark eight video-recorded communication skills assessments, before and after consensus methods were used to agree the content of the tool. Cohen's kappa was calculated for the scores awarded before and after the consensus meeting to determine the change in the tool's inter-rater reliability. The validity of the tool was established during the consensus meeting, with the assessors acting as a panel of experts. Following this study, the tool was embedded in the teaching and assessment of students at all levels to support constructive alignment and a spiral curriculum in communication skills. Paper 2 describes the analysis of the tool's inter-rater reliability when used in the year one assessment, again using Cohen's kappa on a sample of students' first and second marks.
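For reference, Cohen's kappa compares the observed agreement between two raters with the agreement expected by chance, given each rater's distribution of scores. A minimal sketch of the calculation in Python follows; the score lists are illustrative only, not data from the studies described here:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical scores on the same items."""
    assert len(rater_a) == len(rater_b), "raters must score the same items"
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal score frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    # Kappa rescales observed agreement to remove the chance component.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail marks from two assessors on eight recordings.
first_marks = [1, 1, 0, 1, 0, 1, 1, 0]
second_marks = [1, 1, 0, 0, 0, 1, 1, 1]
kappa = cohens_kappa(first_marks, second_marks)
```

Values near 1 indicate near-perfect agreement beyond chance, values near 0 indicate agreement no better than chance, and thresholds for acceptability vary with the stakes of the examination.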
The findings presented in paper 1 suggest that consensus methods can successfully improve the validity and reliability of assessment tools. Paper 2 demonstrated a further improvement in inter-rater reliability, although it remained below the levels conventionally accepted for high-stakes examinations. The main conclusions drawn at this point are that achieving reliability in a subjective area such as communication skills is difficult, and that changing the mode of marking from real time to video recording could help to avoid confounding factors during assessment, such as assessor fatigue.