Comparative Analysis of Causal Interpretability Methodologies for Enhancing Trust in Deep Computer Vision
DOI: https://doi.org/10.70076/system.v1i1.109

Keywords: Deep Learning, Causal Interpretability, Computer Vision, Graphical Causal Models, Counterfactual Explanations

Abstract
This study systematically compares five causal interpretability methodologies employed in deep computer vision, using validated secondary data obtained from official governmental statistical agencies and peer-reviewed academic repositories. The analysis demonstrates that Graphical Causal Models (GCM) and Causal Generative Models (CGM) offer superior interpretative depth, but their practical application is highly resource-intensive, demanding substantial data and computational capacity. In contrast, counterfactual (CEM) and perturbation-based (PBA) methods provide swift, practical explanations, albeit with inherent limits on causal depth. Given these performance and resource trade-offs, the findings support the development of hybrid methodologies that merge the strengths of both families of approaches, coupled with standardized integration of official data. This strategy may improve model trustworthiness and transparency in critical applications.
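To illustrate why perturbation-based (PBA) methods are considered swift and practical, the sketch below shows a minimal occlusion-style attribution: patches of the input are masked and the resulting drop in the model's score is recorded as that region's importance. This is an illustrative sketch only, not the paper's evaluated implementation; the `toy_score` model and all parameter names are hypothetical.

```python
import numpy as np

def occlusion_attribution(image, score_fn, patch=2, baseline=0.0):
    """Perturbation-based attribution: occlude patches and record score drops."""
    h, w = image.shape
    base = score_fn(image)
    attr = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch] = baseline  # mask one patch
            # A large score drop means the model relied on this region.
            attr[i:i + patch, j:j + patch] = base - score_fn(perturbed)
    return attr

# Hypothetical toy "model": only sensitive to the top-left quadrant.
def toy_score(img):
    return float(img[:2, :2].sum())

img = np.ones((4, 4))
attr = occlusion_attribution(img, toy_score, patch=2)
# attr is non-zero only where the model is actually sensitive.
```

Note that such maps expose associations between regions and scores, not the full causal structure a GCM or CGM would provide, which is the depth-versus-cost trade-off discussed above.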
License
Copyright (c) 2026 Smart Yields in Systems, Technology, Engineering, and Modeling (SYSTEM)

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.