softXplain

Led by:  Kurt Schneider
Team:  Jakob Droste, Hannah Deters, Martin Obaidi
Year:  2022
Funding:  Deutsche Forschungsgemeinschaft (DFG)
Further information:  https://gepris.dfg.de/gepris/projekt/470146331

Motivation

Software systems are becoming increasingly complex, and at the same time we depend on them in more and more areas of life. Understanding these systems is therefore becoming ever more difficult, and at the same time ever more critical. Since there is usually no contact person available for questions and problems that arise while using a software system, the system must be able to answer such questions itself. This ability of a software system to explain its own behaviour is called explainability. Like usability, security or maintainability, explainability is a non-functional requirement whose exact meaning and degree of implementation must be specified for each software project. If explainability is implemented well, confusion during use can be reduced and acceptance of the system increased.

Research Goals

Since different quality attributes may conflict with one another, it is important to determine which explainability requirements are truly needed. The focus is on which features are useful, what is realistically implementable, and how a balance between competing quality requirements can be achieved. To make this possible, we seek to answer the following key questions:

  • How can the software know what behaviour is expected of it?
  • How can we tell if people really need explanations?
  • What is the correct form and the appropriate timing for these explanations?

In support of this, we want to develop demonstrators and prototypes that enable empirical studies in the context of our research.

The focus of this project is not on making black-box models such as neural networks explainable; we do not target artificial intelligence or machine learning in particular. Rather, we investigate how complex software systems in general can become self-explanatory, examining when explanations are truly needed and to what extent they can be provided.

Mental Models

One possible cause of confusion is a discrepancy between a user's mental model and the actual behaviour of a system. A mental model is the set of ideas a user has built up about the system. Based on this model, users form expectations about the system's behaviour and adapt their own actions accordingly. If these predictions do not match the actual behaviour, confusion arises. Such confusion can be counteracted with explanations: explicit models of the users' expectations can be created and compared against the actual system behaviour to predict deviations. The figure below illustrates this process. Whenever a deviation is detected, an explanation is provided and the confusion is prevented.
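The following Python sketch illustrates this deviation-driven triggering of explanations. It is a minimal illustration under assumed names, not part of the project's tooling: MentalModel, explain_if_deviating and the explanation lookup are hypothetical stand-ins for the idea of comparing a predicted outcome with the observed one and explaining only when they differ.

    # Minimal sketch (assumed names, not project code): show an explanation
    # only when the predicted outcome deviates from the observed behaviour.
    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class MentalModel:
        # the user's assumed predictions: action -> expected outcome
        predictions: Dict[str, str] = field(default_factory=dict)

    def explain_if_deviating(model: MentalModel, action: str, observed: str,
                             explanations: Dict[str, str]) -> Optional[str]:
        predicted = model.predictions.get(action)
        if predicted is None or predicted == observed:
            return None  # behaviour matches the mental model: stay silent
        # deviation detected: look up an explanation for this behaviour
        return explanations.get(
            action, "The system behaved differently than you might expect here.")

    # Example: the user expects "save" to store the file locally,
    # but the system uploads it to the cloud.
    model = MentalModel(predictions={"save": "file stored locally"})
    message = explain_if_deviating(
        model, "save", observed="file uploaded to cloud",
        explanations={"save": "Files are saved to the cloud so they stay in sync."})
    if message:
        print(message)  # shown instead of leaving the user confused

The point of the sketch is the timing: the explanation is offered exactly at the moment of deviation, rather than constantly or not at all.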

Publications

  • Chazette, L., & Schneider, K. (2020). Explainability as a non-functional requirement: Challenges and recommendations. Requirements Engineering, 25(4), 493-514. https://doi.org/10.1007/s00766-020-00333-1
  • Deters, H. L., Droste, J. R. C., & Schneider, K. (2023). A means to what end? Evaluating the explainability of software systems using goal-oriented heuristics. In EASE '23: Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering (pp. 329-338). https://doi.org/10.1145/3593434.3593444
  • Droste, J. R. C., Deters, H. L., Puglisi, J., & Schneider, K. (2023). Designing end-user personas for explainability requirements using mixed methods research. In 2023 IEEE 31st International Requirements Engineering Conference Workshops (REW) (pp. 129-135). https://doi.org/10.1109/REW57809.2023.00028
  • Deters, H. L., Droste, J. R. C., Fechner, M., & Schneider, K. (2023). Explanations on demand: A technique for eliciting the actual need for explanations. In 2023 IEEE 31st International Requirements Engineering Conference Workshops (REW) (pp. 345-351). https://doi.org/10.1109/REW57809.2023.00065
  • Chazette, L., Klünder, J., Balci, M., & Schneider, K. (2022). How can we develop explainable systems? Insights from a literature review and an interview study. In Proceedings of the International Conference on Software and System Processes and International Conference on Global Software Engineering (pp. 1-12). https://doi.org/10.1145/3529320.3529321