The workshop will take place on 27 October 2019 between 12:00 and 17:20:

12:00-13:00 Invited Talk by Freddy Lecue (see below) [room: t.b.a.]
13:00-14:00 Lunch [OGGB Foyer]
14:00-14:15 Introduction and Overview [Case room 3]
14:15-14:45 (25 + 5 min) Paper: Semantic Web Technologies for Explainable Machine Learning Models: A Literature Review [Case room 3]
14:45-15:15 (25 + 5 min) Paper: Towards Explaining Natural Language Arguments with Background Knowledge [Case room 3]
15:15-16:00 Break
16:00-16:30 (25 + 5 min) Paper: Interactive Causal Discovery in Knowledge Graphs [Case room 3]
16:30-17:00 (25 + 5 min) Paper: Persuasive Explanation of Reasoning Inferences on Dietary Data [Case room 3]
17:00-17:20 Discussion and Next Steps [Case room 3]

Invited Talk by Freddy Lecue

Title: On The Role of Knowledge Graphs in Explainable AI

Abstract: The current hype around Artificial Intelligence (AI) mostly refers to the success of machine learning and its sub-domain of deep learning. However, AI also encompasses other areas, such as Knowledge Representation and Reasoning, or Distributed AI, i.e., areas that need to be combined to reach the level of intelligence initially envisioned in the 1950s. Explainable AI (XAI) has now become a core requirement for industry to apply AI in products at scale, particularly for industries operating critical systems. This presentation reviews XAI not only from a Machine Learning perspective, but also from the perspective of other AI research areas, such as AI Planning or Constraint Satisfaction and Search. We present the XAI challenges of these AI fields, their existing approaches and limitations, and the great opportunities for Semantic Web Technologies and Knowledge Graphs to push the boundaries of XAI further.