The surface (or foreground) structure of linked data and its associated OWL vocabularies can be complemented by background models that express valid ontological distinctions which may have been obscured by the modeling style chosen by the vocabulary designer. Background models can serve for debugging, visualization, matching, or even pattern-based design of operational ontologies such as linked data vocabularies. A well-known example of a background model language, primarily suited to taxonomic ontologies, is the system of OntoClean meta-properties.
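To give a concrete flavor of what a background-model check looks like, the following is a minimal sketch (not from the talk itself) of one well-known OntoClean constraint: a rigid class must not be subsumed by an anti-rigid class. The class names and the tagging scheme are purely illustrative assumptions.

```python
# Hypothetical sketch of one OntoClean meta-property check:
# a rigid class (+R) must not be subsumed by an anti-rigid class (~R).

# Rigidity tags for some illustrative classes (assumed, not from the talk).
rigidity = {
    "Person": "+R",    # rigid: every person is necessarily a person
    "Student": "~R",   # anti-rigid: being a student is a contingent role
}

# Illustrative subsumption axioms as (subclass, superclass) pairs.
subsumptions = [
    ("Student", "Person"),   # coherent: anti-rigid under rigid is allowed
    ("Person", "Student"),   # incoherent: rigid under anti-rigid violates OntoClean
]

def check_rigidity(subs, tags):
    """Flag subsumptions where a rigid class sits under an anti-rigid one."""
    for sub, sup in subs:
        if tags.get(sub) == "+R" and tags.get(sup) == "~R":
            print(f"Violation: rigid '{sub}' subsumed by anti-rigid '{sup}'")

check_rigidity(subsumptions, rigidity)
```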
We present an alternative background model language, dubbed PURO, which is oriented towards linked data ontologies and relies on the particular-universal and relationship-object dichotomies. With this language, a background model (i.e., an ontologically relevant model) of each vocabulary can be constructed, and the two models can then be mapped onto each other. We then discuss how such a model can be used to better understand the nature of the entities in the foreground model, and how the ontological coherence of the foreground model can be (automatically) checked.
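As a rough illustration of the mapping and the (automatic) coherence checking described above, here is a minimal sketch under assumed names and data: foreground vocabulary terms are annotated with PURO background categories, and one simple condition is checked, namely that a term modeled as a B-object (a particular) is not also used as a class in the foreground data. The specific terms, triples, and the check itself are hypothetical, not the talk's actual procedure.

```python
# Hypothetical sketch: map foreground terms to PURO background categories
# and check one coherence condition. All names and triples are illustrative.

# Background model: a PURO category assigned to each foreground term.
background = {
    "ex:JohnDoe": "B-object",    # a particular
    "ex:Employee": "B-type",     # a universal (a type of objects)
    "ex:worksFor": "B-relation", # a universal relation
}

# Illustrative foreground triples as (subject, predicate, object).
foreground = [
    ("ex:JohnDoe", "rdf:type", "ex:Employee"),  # coherent usage
    ("ex:JaneDoe", "rdf:type", "ex:JohnDoe"),   # incoherent: a particular used as a class
]

def check_coherence(triples, bg):
    """Flag triples whose use of a term clashes with its PURO category."""
    for s, p, o in triples:
        if p == "rdf:type" and bg.get(o) == "B-object":
            print(f"Incoherent: particular '{o}' used as a class in ({s}, {p}, {o})")

check_coherence(foreground, background)
```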
Martin Homola received his PhD from Comenius University in Bratislava, Slovakia, in 2010. Between 2009 and 2012 he held a postdoctoral position at Fondazione Bruno Kessler in Trento, Italy, and since 2012 he has been an assistant professor at Comenius University in Bratislava. His research interests include logical knowledge representation, specifically ontologies and description logics, with an emphasis on distributed and heterogeneous knowledge sources. He also has a background in non-monotonic reasoning. He has recently investigated contextualized knowledge representation for the Semantic Web and the problem of ontological relevance in linked data vocabularies.