Srinivasa Rao Kolusu / REST Journal on Data Analytics and Artificial Intelligence 2(3), September 2023, 78-93. Copyright @ REST Publisher. REST Journal on Data Analytics and Artificial Intelligence, Vol: 2(3), September 2023. REST Publisher; ISSN: 2583-5564. Website: http://restpublisher.com/journals/jdaai/ DOI: https://doi.org/10.46632/jdaai/2/3/14

Assessing Explainability in Artificial Intelligence: A TOPSIS Approach to Decision-Making

*Srinivasa Rao Kolusu, Sr. Technical Account Manager, Dallas, Texas, USA. *Corresponding author: srk8082@ieee.org

Abstract: Explainability in Artificial Intelligence (AI) is the ability to comprehend and explain how AI models generate judgments or predictions. As the complexity of AI systems, especially machine learning models, increases, understanding their reasoning process becomes crucial for ensuring trust, fairness, and accountability. Explainable AI (XAI) helps demystify the "black box" character of sophisticated models such as deep neural networks, allowing users to grasp how inputs are transformed into outputs. In many industries, including healthcare, banking, and law, AI system judgments can have a significant impact, making transparency a necessity. Explainability also aids in identifying and mitigating biases, improving model performance, and complying with regulatory requirements. As AI technologies evolve, there is an increasing emphasis on balancing model accuracy with interpretability, ensuring that AI systems remain ethical, transparent, and in line with human values. In artificial intelligence (AI) research, explainability is essential for fostering confidence, guaranteeing responsibility, and enhancing the openness of AI systems. As AI models, especially intricate ones such as deep learning networks, become more widely adopted, understanding their decision-making processes is crucial for validating their outcomes.
The goal of explainable AI (XAI) research is to make models interpretable so that users can comprehend the decision-making process. This is particularly crucial in high-stakes industries such as healthcare, banking, and law, where poor or prejudiced choices can have serious repercussions. Explainability also supports regulatory compliance, model improvement, and ethical AI deployment. TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) is a decision-making approach that evaluates how far an alternative is from the worst-case solution and how close it is to the ideal solution. The worst-case solution takes the lowest values, while the ideal solution takes the best values for the desired criteria. TOPSIS assigns each alternative a similarity score and ranks the alternatives according to how near the ideal solution they are. This method is frequently used to enhance decision-making in a variety of domains, including business, engineering, environmental research, and healthcare. Alternative: LIME (Local Interpretable Model-Agnostic Explanations), SHAP (Shapley Additive Explanations), DeepLIFT (Deep Learning Important Features), Anchor Explanations, ICE (Individual Conditional Expectation), Counterfactual Explanations, Rule-based Explanation Systems, Saliency Maps (for CNNs), Integrated Gradients, XAI for Healthcare. Evaluation preference: Interpretability, Accuracy of Explanations, User Trust, Computational Complexity, Scalability, Flexibility. The results indicate that XAI for Healthcare ranks highest, while Saliency Maps (for CNNs) holds the lowest rank. Keywords: LIME, SHAP, ICE, TOPSIS.

1. INTRODUCTION

Because consumers need to feel secure about the processes and reasoning underlying automated decision-making, explainability in AI has become a renewed focus of current research across various sectors, including autonomous vehicles, healthcare diagnostics, and banking and finance.
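The TOPSIS procedure described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: the decision matrix, weights, and benefit/cost flags shown in the test are hypothetical placeholders, whereas the paper evaluates ten XAI alternatives against six criteria (Interpretability, Accuracy of Explanations, User Trust, Computational Complexity, Scalability, Flexibility).

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : (n_alternatives, n_criteria) array of raw scores
    weights : criterion weights (should sum to 1)
    benefit : boolean array; True where higher is better (benefit
              criterion), False where lower is better (cost criterion)
    Returns the closeness coefficient per alternative (1 = ideal, 0 = worst).
    """
    m = np.asarray(matrix, dtype=float)
    # 1. Vector-normalise each criterion column.
    norm = m / np.linalg.norm(m, axis=0)
    # 2. Apply the criterion weights.
    v = norm * np.asarray(weights, dtype=float)
    # 3. Ideal (best) and worst-case solutions per criterion.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # 4. Euclidean distance of each alternative to both solutions.
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - worst, axis=1)
    # 5. Closeness coefficient: larger means nearer the ideal solution.
    return d_neg / (d_pos + d_neg)
```

Ranking the alternatives is then a matter of sorting by the returned closeness coefficients in descending order.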
Although explainability in artificial intelligence (AI) has gained a lot of attention recently, the field's roots can be traced back to earlier