Effect of Criteria Range on the Similarity of Results
in the COMET Method
Andrii Shekhovtsov, Jakub Więckowski, Bartłomiej Kizielewicz and Wojciech Sałabun
Research Team on Intelligent Decision Support Systems,
Department of Artificial Intelligence Methods and Applied Mathematics,
Faculty of Computer Science and Information Technology
West Pomeranian University of Technology in Szczecin
ul. Żołnierska 49, 71-210 Szczecin, Poland
Email: {andrii-shekhovtsov, jakub-wieckowski, bartlomiej-kizielewicz, wojciech.salabun}@zut.edu.pl
Abstract—Defining input values in the decision-making process
can be done with appropriate methods or based on expert
knowledge. It is essential to ensure that the values are adequate
for the problem to be solved in both cases. There may be
situations where values are overestimated, and it should be
checked whether this affects the final results.
In this paper, the Characteristic Objects Method (COMET) was used to investigate the effect of overestimation on the final rankings. Decision matrices with different numbers of alternatives and criteria were assessed. The obtained results were compared using the WS similarity coefficient and the weighted Spearman correlation coefficient. The study showed that overestimation has a significant effect on the rankings. A larger number of criteria has a positive effect on the correlation strength of the compared rankings. In contrast, a large overestimation of characteristic values has a negative effect on the similarity of the results.
I. INTRODUCTION
In decision-making, expert knowledge is an important element influencing the results obtained [1]. It plays an essential role in specifying the importance of criteria and the weight of each criterion in the process of evaluating alternatives [2], [3]. These decisions directly translate into the preference values produced by the selected multi-criteria methods [4], [5], [6].
For some Multi-Criteria Decision-Making (MCDM) methods to solve decision-making problems, the expert must define the algorithm's input parameters based on their experience and knowledge [7], [8]. Some methods allow the use of techniques that determine criteria weights for a defined problem [9], [10]. In other cases, the data required for the method's operation must be specified solely on the basis of expert knowledge [11], [12]. MCDM methods are widely used to solve problems where many factors contribute to the final assessment [13]. The development of new techniques attracts the attention of a growing audience, who use them to solve medical problems [14], [15], [16], [17], for resource planning [18], [19], [20], or for the selection of sustainable means of transport [21], [22], [23].
One of the multi-criteria methods is the Characteristic Objects Method (COMET), which uses a rule-based approach when evaluating the quality of alternatives [24]. The task of the expert using this method is to determine the characteristic values, which are then used to assess the preference of alternatives in subsequent steps [25], [26]. The advantage of this method is its resistance to the rank reversal phenomenon when the number of alternatives in the analyzed set changes [8].
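To make the rule-based procedure concrete, the following is a minimal sketch of the standard COMET pipeline (characteristic objects from the Cartesian product of characteristic values, an SJ vector from pairwise comparisons, a rescaled preference vector P, and fuzzy inference for new alternatives). The `expert_fn` scoring function is a synthetic stand-in, introduced here purely for illustration, for the pairwise judgments that the method obtains from a human expert.

```python
import itertools

def triangular(x, a, m, b):
    """Triangular membership function with vertex m and support [a, b]."""
    if x == m:
        return 1.0
    if a < x < m:
        return (x - a) / (m - a)
    if m < x < b:
        return (b - x) / (b - m)
    return 0.0

def comet(cvalues, expert_fn, alternatives):
    """Compact COMET sketch. cvalues: one sorted list of characteristic
    values per criterion; expert_fn: synthetic scorer standing in for the
    expert's pairwise comparisons of characteristic objects."""
    # Step 1: characteristic objects = Cartesian product of characteristic values
    cos = list(itertools.product(*cvalues))
    # Step 2: SJ vector -- number of pairwise "wins" of each characteristic object
    sj = [sum(expert_fn(a) > expert_fn(b) for b in cos) for a in cos]
    # Step 3: preference vector P, rescaled to [0, 1]
    levels = sorted(set(sj))
    p = {co: levels.index(s) / (len(levels) - 1) for co, s in zip(cos, sj)}
    # Step 4: evaluate alternatives by fuzzy inference over the rule base
    results = []
    for alt in alternatives:
        pref = 0.0
        for co in cos:
            mu = 1.0
            for cv, m, x in zip(cvalues, co, alt):
                i = cv.index(m)
                a = cv[i - 1] if i > 0 else m            # left neighbour (or vertex)
                b = cv[i + 1] if i < len(cv) - 1 else m  # right neighbour (or vertex)
                mu *= triangular(x, a, m, b)
            pref += mu * p[co]
        results.append(pref)
    return results
```

For example, with characteristic values {0, 5, 10} on two criteria and the sum of values as the stand-in expert function, the alternatives (0, 0), (5, 5), and (10, 10) receive preferences 0.0, 0.5, and 1.0, and an intermediate alternative such as (2.5, 2.5) is interpolated between the surrounding characteristic objects.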
In this paper, based on the operation of the COMET method, an attempt has been made to determine the effect of overestimation of characteristic values on the results, depending on the number of alternatives and criteria. Different levels of overestimation were used to examine and compare the obtained results. The results were then compared using the WS similarity coefficient and the weighted Spearman correlation coefficient to analyze the correlation of the resulting rankings.
The rest of the paper is organized as follows. Section 2 presents the preliminaries and main assumptions of the COMET method. Section 3 includes the study case description, where the influence of the overestimation of characteristic values on the obtained results was examined. Finally, Section 4 draws the summary and conclusions from the research.
II. PRELIMINARIES
A. Weighted Spearman’s Rank Coefficient
Weighted Spearman's rank coefficient is defined as (1), where $N$ is the sample size and $x_i$ and $y_i$ denote the rank values of the $i$-th alternative in the two compared rankings. In this approach, the positions at the top of both rankings are the most important, and a significance weight is calculated for each alternative. This weight is the element that distinguishes the coefficient from Spearman's rank correlation coefficient, which examines whether the differences appeared and not where they appeared [27].
$$ r_w = 1 - \frac{6 \sum_{i=1}^{N} (x_i - y_i)^2 \left( (N - x_i + 1) + (N - y_i + 1) \right)}{N^4 + N^3 - N^2 - N} \qquad (1) $$
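Equation (1) can be computed directly; the following is a minimal sketch in Python (the function name `weighted_spearman` is ours):

```python
def weighted_spearman(x, y):
    """Weighted Spearman's rank coefficient r_w from Eq. (1).

    x, y -- rank values of the same alternatives in the two compared
    rankings (ranks from 1 to N); higher-ranked positions carry more weight.
    """
    n = len(x)
    num = 6 * sum(
        (xi - yi) ** 2 * ((n - xi + 1) + (n - yi + 1))
        for xi, yi in zip(x, y)
    )
    return 1 - num / (n ** 4 + n ** 3 - n ** 2 - n)
```

Identical rankings yield 1.0 and fully reversed rankings yield -1.0; a disagreement near the top of the rankings lowers the coefficient more than the same disagreement near the bottom.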
B. WS Rank Similarity Coefficient
The Rank Similarity Coefficient WS is defined as (2). Unlike $r_w$, it is an asymmetric measure. The weight of a given comparison is determined based on the significance of the
Proceedings of the 16th Conference on Computer Science and Intelligence Systems, pp. 453–457. DOI: 10.15439/2021F44. ISSN 2300-5963, ACSIS, Vol. 25. IEEE Catalog Number: CFP2185N-ART. ©2021, PTI. p. 453