Robustness Analysis of Likelihood Ratio Score Fusion Rule for Multimodal Biometric Systems under Spoof Attacks

Zahid Akhtar, Giorgio Fumera, Gian Luca Marcialis and Fabio Roli
Dept. of Electrical and Electronic Engineering, University of Cagliari
Piazza d'Armi, 09123 Cagliari, Italy
Email: {z.momin,fumera,marcialis,roli}@diee.unica.it

Abstract—Recent works have shown that, contrary to a common belief, multi-modal biometric systems may be "forced" by an impostor who submits a spoofed replica of a genuine user's biometric to only one of the matchers. Although those results were obtained under a worst-case scenario, in which the attacker is able to replicate the exact appearance of the true biometric, they raise the issue of investigating more thoroughly the robustness of multi-modal systems against spoof attacks, and of devising new methods for designing systems robust to them. To this aim, in this paper we propose a robustness evaluation method which also takes into account scenarios more realistic than the worst-case one. Our method is based on an analytical model of the score distribution of fake traits, which is assumed to lie between the genuine and impostor score distributions, and is parametrised by a measure of the relative distance to the impostor score distribution, which we name "fake strength". Varying the value of this parameter allows one to simulate the different factors which can affect the distribution of fake scores, such as the ability of the attacker to replicate a given biometric. Preliminary experimental results on real bi-modal biometric data sets made up of faces and fingerprints show that the widely used LLR rule can be highly vulnerable to spoof attacks against only one matcher, even when the attack has a low fake strength.

I. INTRODUCTION

With the rapid growth in the use of biometric systems, concerns about their robustness and security against external attacks are also growing.
Several researchers are investigating the vulnerabilities of biometric systems, the potential attacks and the related countermeasures. Among these, the attack of greatest interest in the biometric community consists in submitting a counterfeit, or fake, biometric to the system [1]; it is known as a "spoof attack" or "direct attack", since the true biometric is replaced by a fake one. Several authors have shown that biometrics such as fingerprints, iris and faces can be stealthily procured and used to generate synthetic biometric traits to attack biometric sensors. Although several potential countermeasures have been proposed so far, no effective one exists yet.

Besides ad hoc countermeasures, it is commonly believed that multi-modal systems are intrinsically more robust against spoof attacks, since evading them would require spoofing all biometric traits simultaneously [2]. However, this belief is not based on theoretical or empirical evidence, but only on intuitive and qualitative arguments, which rely mainly on the higher performance of multi-modal systems with respect to mono-modal ones. Indeed, this belief has been questioned very recently in [3]–[5], where it has been shown that multi-modal systems can be cracked by faking only one of the biometric traits. Those results were obtained under the stringent, worst-case scenario in which the attacker is capable of producing an exact replica of the targeted client's biometric. Nevertheless, they raise the need for further investigation of the robustness of multi-modal systems under spoof attacks, and for the development of effective countermeasures.

In this work, we address this issue by proposing a method to evaluate the robustness of a multi-modal system against spoof attacks. Our goal is to avoid the straightforward but cumbersome solution of constructing spoofed biometric traits to test the system.
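The single-matcher vulnerability reported in [3]–[5] can be illustrated with a small numerical sketch (a toy example of ours, not taken from the cited works). With likelihood ratio fusion of two conditionally independent matchers, replacing the spoofed matcher's impostor score with a genuine-like score sharply raises the fused log-likelihood ratio; the Gaussian score densities and all parameter values below are purely illustrative assumptions.

```python
import math

def gauss_pdf(x, mu, sigma):
    # Gaussian density; an illustrative stand-in for estimated score densities.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fused_log_lr(scores, gen_params, imp_params):
    # Fused log-likelihood ratio: sum of per-matcher log LRs,
    # assuming conditionally independent matchers.
    return sum(
        math.log(gauss_pdf(s, *g) / gauss_pdf(s, *i))
        for s, g, i in zip(scores, gen_params, imp_params)
    )

# Illustrative (mean, std) of genuine and impostor score densities per matcher.
gen = [(0.8, 0.1), (0.8, 0.1)]
imp = [(0.2, 0.1), (0.2, 0.1)]

zero_effort = fused_log_lr([0.2, 0.2], gen, imp)  # impostor on both matchers
one_spoofed = fused_log_lr([0.8, 0.2], gen, imp)  # exact-replica spoof on matcher 1 only
```

In this toy setting, spoofing a single matcher raises the fused log LR from a deep rejection to the vicinity of the decision boundary, which is the qualitative effect the cited works observed.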
Since no multi-modal data set containing spoof attacks has been made available so far, our method is based on simulating the effects of a spoof attack on the distribution of the corresponding matching scores, as in [3]–[5]. However, unlike those works, our aim is to also take into account more realistic, non-worst-case scenarios, in which the fake score distribution can differ from the genuine one. The distribution of fake scores may be affected by several factors, such as the particular spoofed biometric, the sensor, the matching algorithm, the technique used to construct fake biometrics, the skills of the attacker, etc. However, at the state of the art their effect is unknown. We thus propose to model this distribution by assuming that, as an effect of the above factors, it can exhibit different shapes: in particular, it can be identical either to the impostor or to the genuine score distribution, or lie between them. To model distributions lying between the genuine and impostor ones, we introduce a single parameter that controls their relative similarity to the genuine distribution (or, equivalently, the relative distance from the impostor one), which we name "fake strength": the higher the similarity, the higher the "strength" of the spoof attack. For instance, this can reflect the different ability of attackers to replicate the targeted genuine biometric, being equal all the other factors mentioned
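One simple way to realise a fake score distribution "lying between" the impostor and genuine ones is mixture sampling, sketched below. This is only one plausible instantiation for illustration, not necessarily the analytical model used in this paper; the function name and its parameters are our own.

```python
import random

def simulate_fake_scores(genuine_scores, impostor_scores, alpha, rng=None):
    """Draw simulated fake-trait scores for a given fake strength alpha in [0, 1].

    alpha = 0 reproduces the impostor score distribution (weakest attack);
    alpha = 1 reproduces the genuine score distribution (worst case, as in [3]-[5]);
    intermediate values yield a distribution lying between the two.
    """
    rng = rng or random.Random()
    fakes = []
    for _ in range(len(impostor_scores)):
        # With probability alpha, draw from the genuine pool, else from the impostor pool.
        pool = genuine_scores if rng.random() < alpha else impostor_scores
        fakes.append(rng.choice(pool))
    return fakes
```

Sweeping alpha from 0 to 1 and measuring the acceptance rate of the fused system at a fixed operating threshold would then yield a robustness curve without fabricating any physical spoofed trait.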