We describe a framework that automatically and simultaneously segments and registers a set of medical images, incrementally constructing a model of the structure and shape deformations of the set. The framework extends existing groupwise registration and modeling approaches by explicitly modeling the fraction of each tissue type in each voxel, rather than the expected intensity in each voxel. This decouples the model from the effects of the imaging sequence, and thus from the imaging modality. When estimating the optimal deformation field between examples in the set, each image is compared to a reconstruction generated from the model's tissue-fraction maps and the current estimate of that image's intensity distribution for each tissue type (i.e. an estimate of how the model would appear under the imaging conditions for that image). We also present a method for determining the optimal number of tissue types, which fully automates the approach, as well as model construction methods that ensure efficient convergence. We describe the algorithm in detail and present results of applying it to a set of MR images of the brain.