International Journal of Computer Science and Data Engineering
Journal homepage: www.sciforce.org
ISSN: 3066-6813
Citation: Kakulavaram, S. R. (2024). “Intelligent Healthcare Decisions Leveraging WASPAS for Transparent AI Applications” Journal of Business Intelligence and Data
Analytics, vol. 1 no. 1, pp. 1–7. doi: https://dx.doi.org/10.55124/csdb.v1i1.261
Intelligent Healthcare Decisions Leveraging WASPAS for Transparent AI
Applications
Sridhar Reddy Kakulavaram*
Technical Project Manager, Webilent Technology Inc., United States
Abstract
Introduction: The application of artificial intelligence (AI) in healthcare has changed dramatically since its early exploration in diagnosing acute abdominal pain. Today, AI enhances clinical decision-making, precision medicine, and diagnostics, particularly in visually oriented specialties such as radiology and dermatology. Despite its potential, widespread adoption is hindered by concerns over transparency, especially with black-box models. Explainable AI aims to address this by improving the transparency and traceability of complex machine learning models, thereby maintaining patient trust and supporting evidence-based decision-making.
Research Significance: This research is significant as it explores the way that artificial intelligence (AI) is changing medical practice, emphasizing explainable
AI to enhance transparency and trust. By addressing challenges in complex clinical decision-making and advancing precision medicine, this study contributes to
improved diagnostics and treatment. Additionally, it examines the ethical, educational, and regulatory aspects of AI integration, paving the way for safer and more
effective healthcare applications, ultimately benefiting patient care and outcomes.
Methodology: Alternatives: Incineration, Autoclave, Encapsulation, Distillation, Ozonation.
Evaluation Parameters: Waste residues, Process complexity, Financial profit, Impact on quality of life.
Result: The results show that Autoclave received the highest ranking, whereas Ozonation received the lowest ranking.
Conclusion: According to the WASPAS approach, Autoclave obtains the highest value among the evaluated alternatives for artificial intelligence and medicine.
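The WASPAS (Weighted Aggregated Sum Product Assessment) ranking used in the Methodology can be sketched as follows. This is a minimal illustration of the standard WASPAS procedure, not the paper's actual computation: the decision-matrix values and the equal criterion weights below are hypothetical placeholders, since the raw data are not reproduced in this section. Waste residues and Process complexity are assumed to be cost (non-beneficial) criteria, and Financial profit and Impact on quality of life benefit criteria.

```python
import numpy as np

alternatives = ["Incineration", "Autoclave", "Encapsulation", "Distillation", "Ozonation"]
criteria = ["Waste residues", "Process complexity", "Financial profit", "Impact on quality of life"]
benefit = np.array([False, False, True, True])  # assumed cost/benefit split

# Hypothetical decision matrix (rows = alternatives, columns = criteria)
X = np.array([
    [0.40, 0.70, 0.60, 0.55],
    [0.20, 0.30, 0.80, 0.90],
    [0.50, 0.40, 0.50, 0.60],
    [0.60, 0.60, 0.40, 0.50],
    [0.70, 0.80, 0.30, 0.40],
])
w = np.array([0.25, 0.25, 0.25, 0.25])  # assumed equal weights

# Linear normalization: benefit -> x / column max, cost -> column min / x
N = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)

lam = 0.5                        # joint-criterion parameter, commonly 0.5
wsm = (N * w).sum(axis=1)        # weighted sum model score
wpm = np.prod(N ** w, axis=1)    # weighted product model score
Q = lam * wsm + (1 - lam) * wpm  # combined WASPAS score

ranking = sorted(zip(alternatives, Q), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.4f}")
```

With these illustrative numbers, Autoclave ranks first and Ozonation last, mirroring the reported result, but the ordering depends entirely on the assumed matrix and weights.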
Keywords: Artificial Intelligence (AI), Explainable AI (XAI), Medical Applications, Transparency, Machine Learning.
Research Article
Received date: August 07, 2024; Accepted date: August 18, 2024;
Published date: August 25, 2024
*Corresponding Author: Kakulavaram, S. R., Technical Project Manager, Webilent Technology Inc., United States. E-mail: Kakulavaram@gmail.com
Copyright: © 2024 Kakulavaram, S. R. This is an open-access article distributed
under the terms of the Creative Commons Attribution License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the
original author and source are credited.
Open Access
Introduction
[1] The use of AI technology in surgery was initially explored by
Gunn in 1976, who investigated the potential of using computer analysis
to diagnose acute abdominal pain. Over the past twenty years, interest
in medical AI has grown significantly. Contemporary medicine faces
the challenge of gathering, interpreting, and utilizing vast amounts of
information needed to address complex clinical issues. [2] Explainable
artificial intelligence has drawn considerable attention in the world of medicine.
The difficulty of explainability has been present since the inception of
AI, with traditional AI systems offering transparent and understandable
methods. However, they struggled with managing real-world uncertainties.
The rise of probabilistic learning improved the effectiveness of AI
applications but also made them more opaque. Explainable AI seeks to
enhance the transparency and traceability of complex statistical machine
learning models, especially deep learning (DL). However, to achieve truly
explainable medicine, there is a need to move beyond explainable AI
and embrace causability.[3] Advancements in artificial intelligence (AI)
algorithms and increased availability of training data have recently made
it possible for AI to enhance or even replace certain tasks performed by
physicians. However, despite growing interest from various stakeholders,
the widespread use of AI in medicine remains limited. According to many
experts, a major barrier to adoption is the lack of transparency in some
AI algorithms, particularly black-box models. Clinical medicine, which
depends on evidence-based decision-making, requires transparency.
If AI systems cannot provide medically understandable explanations
and physicians cannot clearly justify their decisions, patient trust may
be compromised. To overcome this transparency challenge, explainable
AI has been developed. [4] Medical practice is on the verge of
transformation due to artificial intelligence. It has been investigated
in a number of healthcare domains, such as natural language processing,
population health, and precision medicine. The application of AI to visual tasks,
or computer vision, is one field that has received a lot of attention. Because
of this, AI is especially pertinent to fields that rely heavily on visual cues,
such as radiology, pathology, ophthalmology, and dermatology. Large-scale
digital datasets are a major driver of AI development because they are
used to train deep learning algorithms to perform tasks such as identifying
lesions in medical images. [5] Medical artificial intelligence focuses
on developing AI systems that assist with diagnosing conditions and
suggesting treatment options. In contrast to medical applications that rely
solely on statistical or probabilistic methods, medical AI uses symbolic
models that represent disease entities and their connections to patient
characteristics and clinical symptoms.[6] Advancements in machine
learning techniques within artificial intelligence (AI) are transforming
medical practice.