Ontologies are complex systems of axioms in which unanticipated consequences of changes are both frequent and difficult for ontology authors to apprehend. The effects of modelling actions range from unintended inferences to outright defects such as incoherency or even inconsistency. One of the central ontology authoring activities is verifying that a particular modelling step has had the intended consequences, often with the help of reasoners. For users of
Protégé, this involves, for example, exploring the inferred class hierarchy.
This paper provides evidence that making entailment set changes explicit to authors significantly improves their understanding of authoring actions, in terms of both correctness and speed. We test this by means of the Inference
Inspector, a Protégé plugin we created that presents authors with specific details about the effects of an authoring action. We empirically validate the effectiveness of the Inference Inspector in two studies. In a first, exploratory study, we determine the feasibility of the Inference Inspector for supporting verification and for isolating authoring actions. In a second, controlled study, we formally evaluate the Inference Inspector and determine that making changes to key entailment sets explicit significantly improves author verification compared with the standard static hierarchy/frame-based approach. We discuss the advantages of the Inference Inspector for different types of verification questions and find that our approach is best suited to verifying added restrictions that introduce no new signature elements, such as class names, yielding a 42% improvement in verification correctness.