Articles | Open Access | https://doi.org/10.55640/

Granular Transparency: Integrating Multi-Granularity Fuzzy Sets with Explainable AI for Ethical Decision-Making in Critical Systems

Dr. Elias Thorne, Department of Computer Science
Dr. Sarah Jenkins, Institute of Advanced Informatics

Abstract

As Artificial Intelligence (AI) systems increasingly mediate critical decisions in healthcare, finance, and governance, the opacity of complex models—often termed the "black box" problem—poses a significant barrier to trust and regulatory compliance. This article addresses the urgent need for Explainable AI (XAI) by proposing a novel theoretical framework that integrates Granular Computing, specifically Multi-Granularity Decision-Theoretic Rough Sets, with Hesitant Fuzzy Linguistic theory. Drawing upon Zadeh’s foundational concept of information granulation, we argue that human reasoning is inherently granular and fuzzy, rather than binary. Therefore, AI systems designed to interact with human stakeholders must adopt a "Three-Way Decision" methodology (Accept, Reject, Defer) to accurately reflect the ambiguity of real-world data. The study evaluates this framework against current Deep Learning applications in neurodegenerative disease diagnosis (Alzheimer’s and Parkinson’s) and epidemiological modeling (COVID-19). Results indicate that while "crisp" numerical models may achieve marginal gains in raw accuracy, granular fuzzy models offer superior interpretability, allowing clinicians to understand the "boundary regions" of a diagnosis. Furthermore, the article provides an extensive analysis of the legal and ethical landscapes, specifically examining the European Union’s GDPR Recital 58 and the requirement for "Transparency by Design." We conclude that integrating multi-granularity fuzzy models into high-stakes AI pipelines is not merely a technical optimization but an ethical imperative to ensure algorithmic accountability and align machine learning outputs with human cognitive processes.
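The "Three-Way Decision" methodology described above partitions outcomes into positive (Accept), negative (Reject), and boundary (Defer) regions according to a pair of probability thresholds. As a minimal illustrative sketch (the function name, default thresholds, and labels below are our own; in decision-theoretic rough sets the thresholds α and β would be derived from a loss function rather than fixed by hand):

```python
def three_way_decision(p: float, alpha: float = 0.75, beta: float = 0.35) -> str:
    """Map a membership/probability score p to one of three decision regions.

    Accept (positive region)  if p >= alpha,
    Reject (negative region)  if p <= beta,
    Defer  (boundary region)  otherwise -- the case a binary classifier
    would be forced to resolve, hiding its own uncertainty.
    """
    if not 0 <= beta < alpha <= 1:
        raise ValueError("thresholds must satisfy 0 <= beta < alpha <= 1")
    if p >= alpha:
        return "Accept"
    if p <= beta:
        return "Reject"
    return "Defer"


# A score in the boundary region is deferred to a human decision-maker
# rather than forced into Accept/Reject:
print(three_way_decision(0.92))  # Accept
print(three_way_decision(0.50))  # Defer
print(three_way_decision(0.10))  # Reject
```

The Defer region is what makes the boundary of a diagnosis visible to a clinician: instead of collapsing an ambiguous score into a binary label, the model explicitly flags it for human review.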

Keywords

Explainable AI (XAI), Granular Computing, Fuzzy Logic, Three-Way Decisions

References

Mencar, C.; Alonso, J.M. Paving the way to explainable artificial intelligence with fuzzy modeling: Tutorial. In Proceedings of the Fuzzy Logic and Applications: 12th International Workshop (WILF 2018), Genoa, Italy, 6–7 September 2018; Springer International Publishing: Cham, Switzerland, 2019; pp. 215–227.

Zhang, C.; Li, D.; Liang, J. Multi-granularity three-way decisions with adjustable hesitant fuzzy linguistic multigranulation decision-theoretic rough sets over two universes. Inf. Sci. 2020, 507, 665–683.

Zadeh, L.A. Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets Syst. 1997, 90, 111–127.

Zhang, C.; Li, D.; Liang, J.; Wang, B. MAGDM-oriented dual hesitant fuzzy multigranulation probabilistic models based on MULTIMOORA. Int. J. Mach. Learn. Cybern. 2021, 12, 1219–1241.

Zhang, C.; Ding, J.; Zhan, J.; Sangaiah, A.K.; Li, D. Fuzzy Intelligence Learning Based on Bounded Rationality in IoMT Systems: A Case Study in Parkinson’s Disease. IEEE Trans. Comput. Soc. Syst. 2022, 10, 1607–1621.

Solayman, S.; Aumi, S.A.; Mery, C.S.; Mubassir, M.; Khan, R. Automatic COVID-19 prediction using explainable machine learning techniques. Int. J. Cogn. Comput. Eng. 2023, 4, 36–46.

Gao, S.; Lima, D. A review of the application of deep learning in the detection of Alzheimer's disease. Int. J. Cogn. Comput. Eng. 2022, 3, 1–8.

Intersoft Consulting. Recital 58—The Principle of Transparency. Available online: https://gdpr-info.eu/recitals/no-58/ (accessed on 26 March 2023).

Felzmann, H.; Villaronga, E.F.; Lutz, C.; Tamò-Larrieux, A. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data Soc. 2019, 6, 2053951719860542.

Schneeberger, D.; Stöger, K.; Holzinger, A. The European legal framework for medical AI. In Proceedings of the International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Dublin, Ireland, 25–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 209–226.

Dwivedi, Y.K.; Hughes, L.; Ismagilova, E.; Aarts, G.; Coombs, C.; Crick, T.; Duan, Y.; Dwivedi, R.; Edwards, J.; Eirug, A.; Galanos, V. Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 2021, 57, 101994.

Felzmann, H.; Fosch-Villaronga, E.; Lutz, C.; Tamò-Larrieux, A. Towards transparency by design for artificial intelligence. Sci. Eng. Ethics 2020, 26, 3333–3361.

Fichter, K.; Lüdeke-Freund, F.; Schaltegger, S.; Schillebeeckx, S.J. Sustainability impact assessment of new ventures: An emerging field of research. J. Clean. Prod. 2023, 384, 135452.

Pew Research Center. Mobile Fact Sheet. January 2024. Available online: https://www.pewresearch.org/internet/fact-sheet/mobile/?tabId=tab-0ec23460-3241-4a1f-89bc-0c27fb641936 (accessed on 7 June 2024).

Shankheshwaria, Y.V.; Patel, D.B. Explainable AI in Machine Learning: Building Transparent Models for Business Applications. Front. Emerg. Artif. Intell. Mach. Learn. 2025, 2, 8–15.

Kearns, M.; Roth, A. The Ethical Algorithm; Oxford University Press: New York, NY, USA, 2020.


How to Cite

Dr. Elias Thorne, & Dr. Sarah Jenkins. (2025). Granular Transparency: Integrating Multi-Granularity Fuzzy Sets with Explainable AI for Ethical Decision-Making in Critical Systems. International Journal of Data Science and Machine Learning, 5(02), 298–305. https://doi.org/10.55640/