Explainable AI for Decision-Making Systems: Investigate the Development of Explainable AI Techniques for Decision-Making Systems and Evaluate their Effectiveness in Improving the Transparency and Accountability of these Systems

Authors

  • Diwash Kapil Chettri, Skill Instructor, School of Information and Communication Technology, Medhavi Skills University, Bermiok, India

Keywords

Accountability, Artificial Intelligence, Decision-Making Systems, Deep Learning, Ethics, Explainable AI, Interpretable Models, Machine Learning, Transparency, Trustworthiness

Abstract

This paper provides a comprehensive analysis of explainable artificial intelligence (XAI) techniques for decision-making systems. It reviews the state of the art in XAI and highlights the importance of transparency, accountability, and trust in AI-driven decisions. To address the limitations of current techniques, the paper proposes new XAI techniques and evaluates their effectiveness. The results show that the proposed techniques improve the transparency and interpretability of AI-driven decisions, enabling users to understand how a system arrived at its decisions and to identify potential biases in its behavior. The paper concludes with insights and recommendations for future research in XAI. Overall, this work contributes to the fields of AI and decision-making systems by underscoring the importance of XAI and by providing new techniques for improving the transparency and accountability of such systems.
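The abstract does not detail the proposed techniques, so the sketch below illustrates only the general class of method it refers to: a model-agnostic, per-feature attribution that lets a user see which inputs a decision-making model relies on and flag potential biases. It uses permutation feature importance from scikit-learn on a synthetic dataset; the dataset, the feature names, and the random-forest model are illustrative assumptions, not the authors' technique.

    # Illustrative sketch: model-agnostic explanation via permutation importance.
    # This is an example of the kind of XAI method discussed, not the paper's own.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic tabular decision task (e.g., approve/deny) with placeholder feature names.
    X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
    feature_names = [f"feature_{i}" for i in range(X.shape[1])]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An otherwise opaque decision-making model.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Explanation step: shuffle each feature and measure the drop in held-out accuracy.
    # Large drops indicate features the model's decisions depend on.
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

    for idx in np.argsort(result.importances_mean)[::-1]:
        print(f"{feature_names[idx]:>10}: "
              f"{result.importances_mean[idx]:.3f} +/- {result.importances_std[idx]:.3f}")

In a sketch like this, a feature whose shuffling sharply reduces accuracy is one the model's decisions rely on; an unexpectedly large importance for a sensitive attribute would be a signal of potential bias worth auditing.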

Published

22-02-2023

Issue

Vol. 2 No. 2 (2023)

Section

Articles

How to Cite

[1] D. K. Chettri, “Explainable AI for Decision-Making Systems: Investigate the Development of Explainable AI Techniques for Decision-Making Systems and Evaluate their Effectiveness in Improving the Transparency and Accountability of these Systems”, IJMDES, vol. 2, no. 2, pp. 1–6, Feb. 2023, Accessed: Apr. 16, 2024. [Online]. Available: https://journal.ijmdes.com/ijmdes/article/view/109