- Akatsuka, Jun;
- Yamamoto, Yoichiro;
- Sekine, Tetsuro;
- Numata, Yasushi;
- Morikawa, Hiromu;
- Tsutsumi, Kotaro;
- Yanagi, Masato;
- Endo, Yuki;
- Takeda, Hayato;
- Hayashi, Tatsuro;
- Ueki, Masao;
- Tamiya, Gen;
- Maeda, Ichiro;
- Fukumoto, Manabu;
- Shimizu, Akira;
- Tsuzuki, Toyonori;
- Kimura, Go;
- Kondo, Yukihiro
Deep learning algorithms have achieved great success in cancer image classification. However, it is imperative to understand the differences between the deep learning and human approaches. Using an explainable model, we aimed to compare the regions of magnetic resonance (MR) images that deep learning focused on with the cancerous locations identified by radiologists and pathologists. First, 307 prostate MR images were classified using a well-established deep neural network without locational information about the cancers. Subsequently, we assessed whether the deep learning-focused regions overlapped the radiologist-identified targets. Furthermore, pathologists provided histopathological diagnoses on 896 pathological images, and we compared the deep learning-focused regions with the genuine cancer locations through 3D reconstruction of the pathological images. The area under the curve (AUC) for MR image classification was sufficiently high (AUC = 0.90, 95% confidence interval 0.87-0.94). The deep learning-focused regions overlapped radiologist-identified targets by 70.5% and pathologist-identified cancer locations by 72.1%. Lymphocyte aggregation and dilated prostatic ducts were observed in the non-cancerous regions that deep learning focused on. Deep learning algorithms can achieve highly accurate image classification without necessarily identifying radiological targets or cancer locations. Deep learning may find clues that can help a clinical diagnosis even when the cancer is not visible.
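The abstract does not specify how the overlap percentages (70.5% and 72.1%) were computed; a common approach is to binarize the model's attention heatmap and the annotated target region, then measure what fraction of the focus region falls inside the annotation. The following is a minimal sketch under that assumption; the function name and the toy masks are illustrative, not taken from the paper.

```python
import numpy as np

def overlap_fraction(focus_mask, target_mask):
    """Fraction of the deep-learning focus region that lies inside the
    annotated target region. Both inputs are boolean arrays of equal shape.
    (Hypothetical helper; the paper's exact overlap metric is not stated.)"""
    focus = np.asarray(focus_mask, dtype=bool)
    target = np.asarray(target_mask, dtype=bool)
    if not focus.any():
        return 0.0  # no focused pixels: overlap is undefined, report zero
    return float(np.logical_and(focus, target).sum() / focus.sum())

# Toy 4x4 example: the focus region covers 4 pixels,
# 2 of which fall inside the annotated target region.
focus = np.zeros((4, 4), dtype=bool)
focus[1:3, 1:3] = True    # 4 focused pixels
target = np.zeros((4, 4), dtype=bool)
target[0:2, 1:3] = True   # annotated target region
print(overlap_fraction(focus, target))  # 0.5
```

In practice the focus mask would come from thresholding an explanation heatmap (e.g., a class-activation map) and the target mask from the radiologist or pathologist annotations registered to the same image grid.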