Do Saliency-Based Explainable AI Methods Help Us Understand AI's Decisions? The Case of Object Detection AI
Abstract
Saliency-based Explainable AI (XAI) methods are commonly used to explain computer vision models, but whether they actually enhance user understanding at different levels remains unclear. We showed that for object detection AI, presenting users with the AI's output for a given input was sufficient to improve feature-level and some instance-level understanding, particularly for false alarms, and that providing saliency-based explanations offered no additional benefit. This contrasts with previous research on image classification models, where such explanations did enhance understanding. Analyses with human attention maps suggested that, in object detection, humans already attend to the features important for the AI's output and can therefore infer the AI's decision-making process without saliency-based explanations. However, presenting the AI's output did not enhance users' ability to distinguish the AI's misses from its hits, nor did it improve system-level understanding. The effectiveness of saliency-based explanations is therefore task-dependent, and alternative XAI methods are required for object detection models to better enhance understanding.