Ontology Extraction for Large Ontologies via Modularity and Forgetting

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Authors:
  • Jieying Chen
  • Ghadah Alghamdi
  • Renate Schmidt
  • Dirk Walther
  • Yongsheng Gao

Abstract

We are interested in the computation of ontology extracts based on forgetting from large ontologies in real-world scenarios. Such scenarios require nearly all of the terms in the ontology to be forgotten, which poses a significant challenge to forgetting tools. In this paper we show that modularization and forgetting can be combined beneficially in order to compute ontology extracts. While a module is a subset of the axioms of a given ontology, the result of forgetting (also known as a uniform interpolant) is a compact representation of the ontology restricted to a subset of the signature. The approach introduced in this paper uses an iterative workflow of four stages: (i) extension of the given signature and, if needed, partitioning; (ii) modularization; (iii) forgetting; and (iv) evaluation by a domain expert. For modularization we use three kinds of modules: locality-based, semantic, and minimal subsumption modules. For forgetting, three tools are used: Nui, Lethe and Fame. An evaluation on the SNOMED CT and NCIt ontologies for standard concept name lists showed that precomputing ontology modules reduces the number of terms that need to be forgotten. An advantage of the presented approach is the high precision of the computed ontology extracts.
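The module-then-forget pipeline described in the abstract can be illustrated with a toy sketch. Here axioms are simplified to atomic subsumptions (pairs of concept names), module extraction is approximated by signature reachability, and forgetting reduces to a transitive closure restricted to the target signature; the real tools (Nui, Lethe, Fame, and locality-based module extractors) operate on full description-logic axioms, and all names below are illustrative.

```python
# Toy sketch of the module-then-forget workflow (atomic subsumptions only).
# This is an illustration under strong simplifying assumptions, not the
# algorithm implemented by the tools evaluated in the paper.

def extract_module(axioms, seed):
    """Signature-reachable subset of axioms, loosely mimicking
    locality-based module extraction."""
    sig, module = set(seed), set()
    changed = True
    while changed:
        changed = False
        for sub, sup in axioms:
            if (sub, sup) not in module and {sub, sup} & sig:
                module.add((sub, sup))
                sig |= {sub, sup}   # the module's signature grows
                changed = True
    return module

def forget(axioms, keep):
    """Uniform interpolant for atomic subsumptions: the transitive
    closure of the subsumptions, restricted to the signature `keep`."""
    closure = set(axioms)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return {(a, b) for a, b in closure if a in keep and b in keep}

# Hypothetical mini-ontology: Heart ⊑ Organ, Organ ⊑ BodyPart, Leg ⊑ Limb.
ontology = [("Heart", "Organ"), ("Organ", "BodyPart"), ("Leg", "Limb")]

# Stage (ii): modularization drops the unrelated Leg axiom, so stage (iii)
# has fewer terms left to forget.
module = extract_module(ontology, {"Heart"})

# Stage (iii): forgetting "Organ" yields the extract {Heart ⊑ BodyPart}.
extract = forget(module, {"Heart", "BodyPart"})
```

The point mirrored from the abstract is that precomputing the module shrinks the input to forgetting: here only the intermediate name "Organ" remains to be forgotten, rather than everything outside the target signature.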

Bibliographical metadata

Original language: English
Title of host publication: International Conference on Knowledge Capture (K-CAP)
Publication status: Accepted/In press - 21 Sep 2019
Event: Tenth International Conference on Knowledge Capture - Marina del Rey, United States
Event duration: 19 Nov 2019 - 21 Nov 2019

Conference

Conference: Tenth International Conference on Knowledge Capture
Abbreviated title: K-CAP 2019
Country: United States
City: Marina del Rey
Period: 19/11/19 - 21/11/19