IEEE TRANSACTIONS ON ELECTROMAGNETIC COMPATIBILITY, VOL. 50, NO. 2, MAY 2008 413
Short Papers
Offset Difference Measure Enhancement
for the Feature-Selective Validation Method
Alistair Duffy, Antonio Orlandi, and Hugh Sasse
Abstract—The feature-selective validation (FSV) method is proving itself to be a robust and helpful technique for quantifying the comparison of visually complex data sets, such as those resulting from computational electromagnetic validation exercises or experimental repeatability studies. This paper reports on an enhancement to the technique that includes data related to the level of dc difference (i.e., offset) between two sets of results, hitherto disregarded within the method. This offset difference measure (ODM) contributes to the amplitude difference measure (ADM) and ensures that the ADM and global difference measure values reflect the level of disagreement between the two traces even if this is the only difference between them. The paper describes the background to this development and provides details of the selection and implementation of the ODM.
Index Terms—Feature-selective validation (FSV), numerical modeling,
repeatability, validation.
I. INTRODUCTION
The feature-selective validation (FSV) method was developed to address the growing need to quantify the level of similarity or difference in data sets resulting from computational electromagnetic validation exercises [1]. Typically, this will involve one or more model implementations being compared with experimental results and the modelers asking the simple question “which is better?” However, this simple question often has no simple answer, with some aspects of a simulation result being better and other aspects being worse than another simulation. The basis of the FSV method was to attempt to mirror the overall visual assessment of a group of engineers with the general background of those likely to be performing the comparison. It has been shown that it does this remarkably well [2]. However, the formulation currently used [3] ignores the data that give the dc information in both of the traces to be compared. This was originally done for the following two reasons.
1) The vast majority of the data being compared were already colocated on the “y-axis,” and it was assumed that where large offsets between the two were apparent, they would preclude comparison using the FSV method. Essentially, it was thought that if there was a large difference, the FSV method would not be used.
2) Without removing these data from the FSV method as formulated, small dc differences that would be ignored by eye would have a disproportionately large effect on the overall FSV results.
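The handling of dc data underlying point 2) can be illustrated with a minimal sketch (the traces are hypothetical and the dc-removal step is a simplification, not the published FSV implementation): if the dc bin of a Fourier decomposition is discarded before the difference measures are formed, two traces that differ only by a small offset become indistinguishable.

```python
import numpy as np

# Two hypothetical traces that differ only by a small dc offset.
x = np.linspace(0.0, 1.0, 256)
trace_a = np.sin(2 * np.pi * 5 * x)
trace_b = trace_a + 0.05          # identical shape, small vertical offset

def strip_dc(trace):
    """Discard the dc information by zeroing bin 0 of the spectrum."""
    spectrum = np.fft.rfft(trace)
    spectrum[0] = 0.0
    return np.fft.irfft(spectrum, n=len(trace))

# With the dc bin removed, the two traces coincide to machine precision,
# so a difference measure built from the remaining bins reports no
# disagreement -- the behavior the ODM enhancement is designed to fix.
residual = np.max(np.abs(strip_dc(trace_a) - strip_dc(trace_b)))
print(residual)
```

Conversely, retaining the dc bin unchanged would let even this visually negligible 0.05 offset dominate the comparison, which is why the original formulation removed it entirely.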
Since these original developments, it has become clear that point 1) is not generally true, as FSV has been called on to quantify improvements to a model. Hence, an enhancement to the method is required that overcomes the limitation captured by point 2). The result is an FSV method that is more robust.
Manuscript received July 12, 2007. This work was supported in part by the Italian Ministry of University (MIUR) under a program for the Development of Research of National Interest [Project of Relevant National Interest (PRIN)] under Grant 2006095890.
A. Duffy and H. Sasse are with the Applied Electromagnetics Group, De Montfort University, Leicester LE2 7DR, U.K. (e-mail: apd@dmu.ac.uk; hgs@dmu.ac.uk).
A. Orlandi is with the UAq Electromagnetic Compatibility Laboratory, University of L’Aquila, L’Aquila 67040, Italy (e-mail: orlandi@ing.univaq.it).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TEMC.2008.919000
This paper first reprises the FSV method, its background, and its implementation, and then describes the selection and implementation of the offset difference measure (ODM), which successfully overcomes the limitation outlined in point 2).
II. FEATURE-SELECTIVE VALIDATION METHOD
One of the most fundamental problems with the validation of computational electromagnetics has been that anyone can say “my model is good” with very little fear of contradiction, because there has been no objective way to demonstrate otherwise. Similarly, with models that give data that are visually nontrivial, it has been difficult to identify whether one design iteration has resulted in a model that is better or worse than the previous iteration. Given a visually complicated trace, different engineers will regard it differently, some looking at the alignment of peaks, others at the overall “grassiness” of the traces. Often, there will be poor agreement between the experts as to what constitutes a significant difference.
Further, if simulations are required to be used as part of an optimization exercise, perhaps using something like genetic algorithms, only simple fitness functions based on the results of the simulations can be utilized, because the results must be comparable (i.e., ordered). Clearly, there is a need to be able to quantify comparisons, particularly for validation purposes. This need has also been raised previously [4], [5] and was the driving force behind the development of the (currently draft) IEEE standard on validation of computational electromagnetics [6].
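The ordering requirement can be seen in a short sketch (illustrative only: the rms figure of merit and the candidate data below are invented stand-ins for a scalar FSV output such as the GDM): any fitness used inside a genetic algorithm must reduce each comparison to a single number so that candidate designs can be ranked.

```python
import numpy as np

def rms_difference(candidate, reference):
    # Stand-in scalar figure of merit (hypothetical); the point is that
    # a GA fitness must collapse each comparison to one ordered number,
    # which is exactly what a scalar similarity measure provides.
    return float(np.sqrt(np.mean((candidate - reference) ** 2)))

reference = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
candidates = {
    "design_1": reference * 0.9,   # invented candidate results
    "design_2": reference + 0.3,
    "design_3": reference * 1.02,
}

# Selection step of a GA: rank candidates by fitness (lower = closer).
ranked = sorted(candidates,
                key=lambda k: rms_difference(candidates[k], reference))
print(ranked)  # design_3 first: it is closest to the reference
```

Because the fitness values are scalars, any two candidates can be compared directly; a multi-faceted visual judgment ("peaks align but the grassiness differs") cannot be ordered this way.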
One of the most important aspects in the development of FSV has
been the observation that statistical techniques have applicability only
to limited domains within validation of computational electromagnetics
[7], reinforcing the need for an algorithmic approach that mirrors the
decision-making process of a group of engineers.
The FSV method was developed to provide this quantification and
has become the central technique used in the validation standard [6].
It is a heuristic approach to satisfying the six criteria of quantifying
methods as described in [3]. They are as follows.
1) Implementation of the validation technique should be simple.
2) The technique should be computationally straightforward.
3) The technique should mirror human perceptions and should be
largely intuitive.
4) The method should not be limited to data from a single applica-
tion area.
5) The technique should provide tiered diagnostic information.
6) The comparison should be commutative.
The results of this can be seen by comparing the data of Fig. 1. These data were shown to approximately 50 electromagnetic compatibility (EMC) engineers, who were asked to rate the comparison as “excellent,” “very good,” “good,” “fair,” “poor,” or “very poor.” The FSV method was then used to compare the data: the overall measure of similarity, the global difference measure (GDM), was computed on a point-by-point basis, the proportions falling into the previous categories were found (using the equivalence described in [2]), and the result was presented as a histogram and compared with the visual assessment. This comparison is presented in Fig. 2 (note that the titles “graph 4” and “graph 6” from [2] have been left in to help in the comparison of results).
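A sketch of this categorization step, assuming the commonly quoted FSV interpretation thresholds (0.1, 0.2, 0.4, 0.8, and 1.6, used here as illustrative rather than normative values), shows how point-by-point GDM values become histogram proportions:

```python
import numpy as np

# Natural-language interpretation scale; the upper bounds are the
# commonly quoted FSV thresholds, assumed here for illustration.
CATEGORIES = [
    ("excellent", 0.1),
    ("very good", 0.2),
    ("good",      0.4),
    ("fair",      0.8),
    ("poor",      1.6),
    ("very poor", np.inf),
]

def gdm_histogram(gdm_point_by_point):
    """Proportion of point-by-point GDM values in each category."""
    values = np.asarray(gdm_point_by_point, dtype=float)
    proportions = {}
    lower = 0.0
    for label, upper in CATEGORIES:
        in_bin = np.count_nonzero((values >= lower) & (values < upper))
        proportions[label] = in_bin / values.size
        lower = upper
    return proportions

# Hypothetical point-by-point GDM values for one comparison:
histogram = gdm_histogram([0.05, 0.15, 0.15, 0.3, 0.9, 2.0])
print(histogram)
```

The resulting proportions are what would be plotted alongside the engineers' visual ratings, giving a like-for-like comparison of the two assessments.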
0018-9375/$25.00 © 2008 IEEE