In recent years, the explainability of complex systems such as decision support systems, automated decision systems, machine learning-based systems, and artificial intelligence in general has been expressed not only as a desirable property, but also as one required by law. For example, the General Data Protection Regulation's (GDPR) "right to explanation" demands that the results of ML/AI-based decisions be explained. The explainability of complex systems, especially of ML- and AI-based systems, becomes increasingly relevant as more and more aspects of our lives are influenced by these systems' actions and decisions.
Several workshops address the problem of explainable AI. However, none of them focuses on semantic technologies such as ontologies and reasoning. We believe that semantic technologies and explainability coalesce in two ways. First, systems that are based on semantic technologies must be explainable like all other AI systems. Second, semantic technologies seem predestined to help render explainable those systems that are not themselves based on them.
Turning a system that already makes use of ontologies into an explainable system could be supported by those ontologies, as ideally they capture aspects of the users' conceptualizations of the problem domain. This raises a number of questions:
- How can such systems use their ontologies to generate explanations of the actions they performed and the decisions they took?
- Which criteria must an ontology fulfill to support the generation of explanations?
- Do we have adequate ontologies for expressing explanations, and for modeling and reasoning about what is understandable or comprehensible for a particular user?
- What kind of lexicographic information is necessary to generate linguistic utterances?
- How can a system's understandability be evaluated?
- How should ontologies be designed for system understandability?
- Which models of human-machine interaction allow the user to interact with the system until they understand a certain action or decision?
- How can explanatory components be reused with systems they were not designed for?
Turning systems that are not yet based on ontologies but on sub-symbolic, distributed representations, such as deep learning-based approaches, into explainable systems might likewise be supported by the use of ontologies. Some efforts in this field have been referred to as neural-symbolic integration.
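As a toy illustration of this idea, consider the following minimal sketch. It assumes a hypothetical hand-written ontology and a hypothetical classifier label; none of these names come from the workshop description or from any particular system. The point is only to show how background knowledge from an ontology could turn an opaque prediction into a human-readable explanation.

```python
# Hypothetical toy example: a sub-symbolic classifier emits an opaque label,
# and a small hand-written ontology supplies the background knowledge needed
# to phrase a human-readable explanation of that label.

# Toy ontology: each class maps to (superclass, distinguishing property).
TOY_ONTOLOGY = {
    "siamese_cat": ("cat", "has short fur and blue eyes"),
    "cat": ("mammal", "is a carnivorous pet"),
    "mammal": ("animal", "is warm-blooded"),
    "animal": (None, None),  # root of the toy class hierarchy
}

def explain(label: str) -> str:
    """Render the ontology path from `label` to the root as an explanation."""
    steps = []
    current = label
    while current is not None:
        parent, prop = TOY_ONTOLOGY[current]
        if parent is not None:
            steps.append(f"a {current} is a {parent} that {prop}")
        current = parent
    return f"The system predicted '{label}' because " + "; ".join(steps) + "."

if __name__ == "__main__":
    # Stand-in for the output of a deep learning model.
    predicted_label = "siamese_cat"
    print(explain(predicted_label))
```

In a realistic setting, the hand-written dictionary would of course be replaced by a proper ontology (e.g., in OWL) and a reasoner, and the mapping from the model's output to ontology classes is itself one of the open research questions raised above.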
This workshop aims to bring together international experts interested in applying semantic technologies to the explainability of artificial intelligence and machine learning, in order to stimulate research, engineering, and evaluation – towards making machine decisions transparent, re-traceable, comprehensible, interpretable, explainable, and reproducible. Semantic technologies have the potential to play an important role in explainability because they enable modeling users' conceptualizations of the problem domain. So far, however, this field has only rarely been explored.
The workshop is held in conjunction with the 13th IEEE International Conference on Semantic Computing (ICSC 2019), Jan 30 – Feb 1, 2019, Newport Beach, California, USA.