In recent years, the explainability of complex systems such as decision support systems, automatic decision systems, machine learning-based/trained systems, and artificial intelligence in general has been expressed not only as a desired property, but also as a property that is required by law. For example, the General Data Protection Regulation's (GDPR) "right to explanation" demands that the results of ML/AI-based decisions be explained. The explainability of complex systems, especially of ML-based and AI-based systems, becomes increasingly relevant as more and more aspects of our lives are influenced by these systems' actions and decisions.

Several workshops address the problem of explainable AI. However, none of these workshops has a focus on semantic technologies such as ontologies and reasoning. We believe that semantic technologies and explainability coalesce in two ways. First, systems that are based on semantic technologies must be explainable like all other AI systems. Second, semantic technologies seem well suited to help make explainable those systems that are not themselves based on semantic technologies.

Turning a system that already makes use of ontologies into an explainable system could be supported by these ontologies, as ideally they capture some aspects of the users' conceptualizations of a problem domain. However, how can such systems use these ontologies to generate explanations of the actions they performed and the decisions they took? Which criteria must an ontology fulfill so that it supports the generation of explanations? Do we have adequate ontologies that allow us to express explanations and to model and reason about what is understandable or comprehensible for a certain user? What kind of lexicographic information is necessary to generate linguistic utterances? How can a system's understandability be evaluated? How should ontologies be designed for system understandability? What are models of human-machine interaction that allow a user to interact with a system until they understand a certain action or decision? How can explanatory components be reused in systems they were not originally designed for?

Turning systems that are not yet based on ontologies but on sub-symbolic representations/distributed semantics, such as deep learning-based approaches, into explainable systems might likewise be supported by the use of ontologies. Some efforts in this field have been referred to as neural-symbolic integration.

This workshop aims to bring together international experts interested in the application of semantic technologies for explainability of artificial intelligence/machine learning to stimulate research, engineering, and evaluation – towards making machine decisions transparent, re-traceable, comprehensible, interpretable, explainable, and reproducible. Semantic technologies have the potential to play an important role in the field of explainability, since they lend themselves well to the task: they enable modelling users' conceptualizations of the problem domain. However, this field has so far been only rarely explored.

The workshop is held in conjunction with the
18th International Semantic Web Conference (ISWC 2019) in Auckland, New Zealand, 26-30 October, 2019.

We are very happy to announce that we will have Freddy Lecue as invited speaker!

Dr. Freddy Lecue is the Chief Artificial Intelligence (AI) Scientist at CortAIx (Centre of Research & Technology in Artificial Intelligence eXpertise) at Thales in Montreal, Canada. He is also a research associate at INRIA in the WIMMICS team, Sophia Antipolis, France.

His research team works at the frontier of learning and reasoning systems, with a strong interest in Explainable AI, i.e., AI systems, models, and results that can be explained to human and business experts.

Before joining the new R&T lab of Thales dedicated to AI, he was AI R&D Lead at Accenture Labs in Ireland from 2016 to 2018. Prior to joining Accenture, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research from 2011 to 2016, a research fellow at the University of Manchester from 2008 to 2011, and a research engineer at Orange Labs from 2005 to 2008.


Topics of Interest Include, but are not Limited to:

  • Explainability of machine learning models based on semantics/ontologies
  • Exploiting semantics/ontologies for explainable/traceable recommendations
  • Explanations based on semantics/ontologies in the context of decision making/decision support systems
  • Semantic user modelling for personalized explanations
  • Design criteria for explainability-supporting ontologies
  • Dialogue management and natural language generation based on semantics/ontologies
  • Visual explanations based on semantics/ontologies
  • Multi-modal explanations using semantics/ontologies
  • Interactive/incremental explanations based on semantics/ontologies
  • Ontological modeling of explanations and user profiles
  • Real-world applications and use cases of semantics/ontologies for explanation generation
  • Approaches to capturing human expertise/knowledge for use in semantics/ontology-based explanation generation


Organizers

Philipp Cimiano – Bielefeld University
Basil Ell – Bielefeld University, University of Oslo
Agnieszka Lawrynowicz – Poznan University of Technology
Laura Moss – University of Glasgow
Axel-Cyrille Ngonga Ngomo – Paderborn University

Program Committee

Ahmet Soylu – Norwegian University of Science and Technology / SINTEF Digital, Norway
Amrapali Zaveri – Maastricht University, Netherlands
Andreas Harth – Fraunhofer IIS, Germany
Anisa Rula – University of Milano-Bicocca, Italy
Axel-Cyrille Ngonga Ngomo – Paderborn University, Germany
Axel Polleres – Wirtschaftsuniversität Wien, Austria
Basil Ell – Bielefeld University, Germany and University of Oslo, Norway
Benno Stein – Bauhaus-Universität Weimar, Germany
Christos Dimitrakakis – Chalmers University of Technology, Sweden
Ernesto Jimenez-Ruiz – The Alan Turing Institute, UK
Evgenij Thorstensen – University of Oslo, Norway
Francesco Osborne – The Open University, UK
Gong Cheng – Nanjing University, China
Heiner Stuckenschmidt – University of Mannheim, Germany
Jürgen Ziegler – University of Duisburg-Essen, Germany
Mariano Rico – Universidad Politécnica de Madrid, Spain
Maribel Acosta – Karlsruhe Institute of Technology, Germany
Martin G. Skjæveland – University of Oslo, Norway
Mathieu d’Aquin – National University of Ireland Galway, Ireland
Menna El-Assady – University of Konstanz, Germany
Michael Kohlhase – Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
Pascal Hitzler – Wright State University, USA
Philipp Cimiano – Bielefeld University, Germany
Ralf Schenkel – Trier University, Germany
Serena Villata – Université Côte d’Azur, CNRS, Inria, I3S, France
Stefan Schlobach – Vrije Universiteit Amsterdam, The Netherlands
Steffen Staab – University of Koblenz-Landau, Germany


We invite research papers and demonstration papers, either in long (16 pages) or short (8 pages) format.

All papers have to be submitted electronically via EasyChair (https://easychair.org/conferences/?conf=semex2019).

All submissions must be in English and no longer than 16 pages for long papers and 8 pages for short papers (including references).

Submissions must be in PDF, formatted in the style of the Springer Publications format for Lecture Notes in Computer Science (LNCS). For details on the LNCS style, see Springer’s Author Instructions.

Accepted papers will be published as CEUR workshop proceedings. At least one author of each accepted paper must register for the workshop and present the paper there.


Important Dates

– Abstract: June 21, 2019
– Submission: June 28, 2019
– Notification: July 24, 2019
– Camera-ready: August 16, 2019
– Workshop: October 26 or 27, 2019