Brief analysis of Ontolex

These days I’m trying to face one of my unresolved matters: having a fine-grained look at semantic linguistic models. Of course it starts with the W3C Community Group Report Ontolex, and as I go step by step, I focus on the basic module Ontology-lexicon interface (ontolex). All quotes in this text are taken from https://www.w3.org/2016/05/ontolex.

After reading the detailed documentation provided by the community group, including descriptions, examples and diagrams, I usually try to understand the model in my own way by looking at the ontology code. To do so, I reproduced the ontology code retrieved from the URI http://www.w3.org/ns/lemon/ontolex# on the 25th of March, 2018, as a diagram following an adapted UML_Ont profile for ontologies. In this post, I’ll try to show the benefits of combining both approaches, that is, understanding the model from the documentation and from the code, double-checking definitions and implementation. It should be mentioned that this analysis is by no means intended to be exhaustive.

It should also be mentioned that I display those domains and ranges not defined in the ontology as owl:Thing, so that an instance of any class could be placed there. Only during this analysis did I find out that this is not always the case, as some properties in this particular model aim at linking to classes instead of instances. In any case, taking this decision proved useful for analyzing the model, as it revealed some differences in the property definitions.

The resulting diagram is:

[Diagram: ontolex core module]

While creating the diagram I observed that the inverse properties ontolex:isLexicalizedSenseOf and ontolex:lexicalizedSense are actually defined with the same domain and range. It seems that the domain and range of ontolex:isLexicalizedSenseOf should be interchanged.
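This check can be sketched programmatically. The dictionaries below are a simplified stand-in for the OWL declarations (the property names follow the ontology; the helper function is my own):

```python
# Minimal sketch: for an owl:inverseOf pair, the domain of one property
# should equal the range of the other, and vice versa.

def inverses_consistent(p, q):
    """True if p and q have swapped domain and range, as inverses should."""
    return p["domain"] == q["range"] and p["range"] == q["domain"]

# Declared as in the retrieved ontology code: both properties share the
# same domain and range, which triggers the observation above.
lexicalized_sense = {"domain": "ontolex:LexicalEntry", "range": "ontolex:LexicalSense"}
is_lexicalized_sense_of = {"domain": "ontolex:LexicalEntry", "range": "ontolex:LexicalSense"}

print(inverses_consistent(lexicalized_sense, is_lexicalized_sense_of))  # False
```

Interchanging the domain and range of ontolex:isLexicalizedSenseOf would make the check pass.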

In the case of the property ontolex:isDenotedBy, as I set empty domains to owl:Thing, its domain does not match, in the diagram, the range of its inverse, which is set to rdfs:Resource. Even though this is not a critical issue and there are examples of use in the documentation, it might be a good idea to clarify it, also in the code. According to the documentation provided for ontolex, the expected domain would be rdfs:Resource due to the following explanation:

Note that the target of a denotation does not need to be an individual in the ontology but may also refer to a class, property or datatype property defined by the ontology.

However, one should keep in mind that using, for example, ontology classes as objects of a materialized property could push the model into OWL Full, as a URI would act as an individual and a class at the same time.
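As a toy illustration of that situation (the triples below are invented, not taken from any real lexicon), one can detect such "punning" by intersecting the URIs declared as classes with the URIs used as objects of an assertional property:

```python
# Simplified triples as (subject, predicate, object) strings; the data is
# invented for illustration only.
triples = [
    (":Cat", "rdf:type", "owl:Class"),
    (":cat_noun", "ontolex:denotes", ":Cat"),  # :Cat also used as an individual
]

classes = {s for (s, p, o) in triples if p == "rdf:type" and o == "owl:Class"}
individuals = {o for (s, p, o) in triples if p == "ontolex:denotes"}

punned = classes & individuals
print(punned)  # {':Cat'} — this URI acts as a class and an individual at once
```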

Another issue with the domains appears in the property chain ontolex:sense o ontolex:reference -> ontolex:denotes, as one might expect the range of the first property in the antecedent to be compatible with the domain of the second one. Such a chain is supposed to be used as documented in section 3.4. However, looking at the diagram extracted from the ontology, the domain of ontolex:reference does not quite match what is expected from the examples and the core lemon ontolex diagram provided. According to the OWL code, the domain of ontolex:reference is the union of ontolex:LexicalEntry and synsem:OntoMap. In this case, ontolex:LexicalEntry should be replaced in the domain by ontolex:LexicalSense, according to the HTML documentation:

Reference (Object Property)

The reference property relates a lexical sense to an ontological predicate that represents the denotation of the corresponding lexical entry.

Domain: LexicalSense or synsem:OntoMap
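A minimal sketch of the compatibility check discussed above, with domains and ranges written as sets to model owl:unionOf (the values follow the retrieved OWL code; the chain_compatible helper is my own):

```python
props = {
    "ontolex:sense":     {"domain": {"ontolex:LexicalEntry"}, "range": {"ontolex:LexicalSense"}},
    # Domain as found in the OWL code; the HTML docs say LexicalSense, not LexicalEntry.
    "ontolex:reference": {"domain": {"ontolex:LexicalEntry", "synsem:OntoMap"}, "range": {"owl:Thing"}},
}

def chain_compatible(p1, p2):
    """For a chain p1 o p2, range(p1) should intersect domain(p2)."""
    return bool(props[p1]["range"] & props[p2]["domain"])

print(chain_compatible("ontolex:sense", "ontolex:reference"))  # False with the code's domain
```

Replacing ontolex:LexicalEntry by ontolex:LexicalSense in the domain of ontolex:reference, as the HTML documentation suggests, makes the chain compatible.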

Having seen these issues with the domains and ranges, I realized I hadn’t checked what OOPS! could spot (how odd…). Apart from OOPS!’s usual complaints, there is one interesting issue: “P40. Namespace hijacking” (powered by Triple-Checker). In this case, the elements in which the pitfall is detected are:

The third case seems to be a false positive: even though Triple-Checker reports a difference of 1 character, I can’t actually find it.

Regarding this issue, it is worth noting the following line of the RDF/XML code:

  • Line 620: <owl:Class rdf:about="&rdf;Resource"/>

The element “Resource” is here declared in the rdf namespace instead of rdfs, where it is originally defined as “rdfs:Resource a rdfs:Class”.
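A toy version of the kind of check Triple-Checker performs might look like this (the vocabulary listings are partial and only for illustration):

```python
# Flag terms that use a well-known namespace prefix but are not defined
# in that vocabulary ("namespace hijacking").
RDF_TERMS = {"type", "Property", "Statement", "subject", "predicate", "object"}
RDFS_TERMS = {"Resource", "Class", "subClassOf", "label", "comment", "domain", "range"}

def hijacked(term):
    prefix, _, local = term.partition(":")
    known = {"rdf": RDF_TERMS, "rdfs": RDFS_TERMS}.get(prefix)
    return known is not None and local not in known

print(hijacked("rdf:Resource"))   # True — Resource belongs to rdfs, not rdf
print(hijacked("rdfs:Resource"))  # False
```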

Finally, from a user point of view, I usually find it helpful when the elements from other vocabularies (for example, SKOS) that are intended to be used with the model are included in both the code and the documentation in a consistent way. For instance, skos:definition appears in the HTML documentation but not in the OWL code; however, skos:Concept and skos:ConceptScheme do appear in the code, as subclasses of them are defined. The property skos:definition could also be included, and perhaps a local restriction could be added to the class expected to have such an attribute, ontolex:LexicalConcept, according to the documentation:

“A definition can be added to a lexical concept as a gloss by using the skos:definition property.”

Acknowledgments: I’d like to thank Julia Bosque Gil, first of all, for her help with linguistic models, and for her comments about this post.


After the Ontology Summit 2013 hackathon

In this post I am going to briefly talk about and show the outcomes from the Ontology Summit 2013 Hackathon. As I said in my previous post, OOPS! was involved in the projects “HC-03 Evaluation of OOPS!, OQuaRE and Other Tools for FIBO Ontologies” and “HC-07 Ontohub-OOR-OOPS! Integration”.

During the first project, we scanned a merged version of the FIBO OWL ontologies with OOPS! and analysed and discussed every pitfall detected. After this process, the FIBO development team established that “most of the metrics were ones we would want to apply to the FIBO Business Conceptual Ontologies, not just operational ontologies.” The next steps were to also apply OQuaRE and OntoQA metrics to the FIBO ontologies. Finally, we took another working day to determine how to apply OQuaRE characteristics to the FIBO ontologies and map them to OntoQA metrics and OOPS! pitfalls. This set of slides summarizes the good and intensive work that the HC-03 team carried out during the project, which was awarded the “First IAOA Best OntologySummit Hackathon-Clinic Prize” during the Ontology Summit 2013 Symposium.

During the second project, OOPS! was integrated into the Ontohub web interface and an API for the Ontohub-OOR integration was proposed. The great work, mainly done by the Ontohub development team, is summarised in these slides. The OOPS! team’s work during this project mostly consisted of supporting the Ontohub-OOPS! integration when needed, providing details about the OOPS! RESTful web service.

Finally, here is an example of an ontology analysed with OOPS! within the Ontohub portal. In the first screenshot there is a “Test with OOPS!” button that is active before the ontology is scanned.

Example ontology before being scanned

While OOPS! is scanning the ontology, the Ontohub interface shows the status information “OOPS State: pending” as in this screenshot:

Example of ontology during the scanning process

When the process is done, the number of pitfalls detected, if any, is displayed (“5 responses” in this example) and an explanation of them is provided when clicking on the ontology element affected by the pitfall, as shown in the last screenshot:

Example of results for an object property

Finally, these and other results from the Ontology Summit were presented at the Ontology Summit 2013 Symposium, together with the Ontology Summit 2013 Communiqué.

Getting involved in the Ontology Summit 2013 hackathon

The Ontolog Forum is “an open, international, virtual community of practice devoted to advancing the field of ontology, ontological engineering and semantic technology, and advocating their adoption into mainstream applications and international standards”. The forum was reconstituted in 2002 and has organized an annual series of events, called Ontology Summits, since 2006. An Ontology Summit is “an organized thinking machine that work from January to April every year to brain-storm one of a topic of interest for ontology engineering community” (see source). This year’s summit topic is “Ontology Evaluation Across the Ontology Lifecycle”.

Fortunately, I was invited to give a talk at the “Intrinsic Aspects of Ontology Evaluation: Practice and Theory” session about the work done on OOPS! (OntOlogy Pitfall Scanner!). At the end of the session, the Ontology Summit organizers invited all the participants to get involved in the hackathon they were planning to carry out. At that moment there was little information about it; indeed, it was more of an imprecise plan for an event like a “hackathon”, without a clear idea of when, who, or how… but with the definite aim of creating something ‘real’ and ‘useful’ for ontology evaluation. So we accepted the invitation… or the challenge?

Soon we had some more information. There were three types of projects:

  • Hackathon: its goal is to create some new code, a new API, or a new ontology relevant to this Ontology Summit and/or this year’s “Ontology Evaluation” theme.
  • Ontology Evaluation Clinic (abbrev. “Ontology Clinic”): aims at evaluating ontologies or gold-standard ontologies with the “evaluation tool”, studying the results, diagnosing problems with the ontology, and seeing how the ontology, and the tool, may be improved.
  • Ontology-based Application Evaluation Clinic (abbrev. “Application Clinic”): helps users evaluate whether the ontologies they already had in mind are fit for the intended purpose and whether the quality of those ontologies is satisfactory, and provides appropriate recommendations.

Participants had to write a proposal for the type of project they were interested in. As a result, 8 hackathon projects, 4 ontology clinics and 3 application clinics were proposed. After aligning proposals and schedule restrictions, 7 projects were selected to be carried out over the three selected weekends. Finally, OOPS! got involved in two of them, one ontology clinic and one hackathon project. The first one, “Evaluation of OOPS!, OQuaRE and Other Tools for FIBO Ontologies”, aims to explore the application of ontology quality measures to ontologies produced under the Financial Industry Business Ontology (FIBO) umbrella, while “Ontohub-OOR-OOPS! Integration” aims at integrating OOPS! into the Ontohub and OOR ontology repositories. These two projects will take place on the 13th of April, 2013.

Now it is time to do the real work and get some tangible outcomes. Results… in upcoming posts.