Modeling Reliance on XAI Indicating Its Purpose and Attention
Abstract
This study investigated how explanations provided by explainable AI (XAI), specifically displays of the AI's purpose and attention, affect human trust in and use of AI. We generated heatmaps indicating the AI's attention, conducted Experiment 1 to validate the interpretability of the heatmaps, and conducted Experiment 2 to investigate the effects of displaying the purpose and the heatmaps on reliance (depending on the AI) and compliance (accepting the AI's answers). Structural equation modeling analyses showed that (1) displaying the AI's purpose influenced trust positively or negatively depending on the type of AI usage (reliance or compliance) and on task difficulty, (2) merely displaying the heatmaps negatively influenced trust in the more difficult task, and (3) the heatmaps positively influenced trust according to their interpretability in the more difficult task.