Fuzzy Sets and Systems ••• (••••) ••••••
www.elsevier.com/locate/fss

Fuzzy classification boundaries against adversarial network attacks

Félix Iglesias *, Jelena Milosevic, Tanja Zseby

TU Wien, Gusshausstraße 25/E389, 1040 Vienna, Austria

Received 28 May 2018; received in revised form 21 September 2018; accepted 1 November 2018

Abstract

Adversarial machine learning addresses the development of methods that prevent machine learning algorithms from being misled by malicious users. The field is especially relevant for applications in which machine learning lies at the core of security systems. In network security, adversarial samples are in practice novel network attacks, or old attacks with tuned properties. This paper proposes to blur classification boundaries in order to enhance machine learning robustness and improve the detection of adversarial samples that exploit learning weaknesses. We test this concept in an experimental setup with network traffic in which linear decision trees are wrapped by a one-class-membership scoring algorithm. We benchmark our proposal against plain linear decision trees and fuzzy decision trees. Results show that evasive attacks (i.e., false negatives) tend to receive low class-membership scores, meaning that they are located in zones close to classification thresholds. In addition, classification performance improves when membership scores are added as new features. Using fuzzy class boundaries is highly consistent with the interpretation of many network traffic features used for malware detection; moreover, it prevents network attackers from exploiting classification boundaries as attack objectives.
© 2018 Elsevier B.V. All rights reserved.

Keywords: Learning; Fuzzy system models; Data analysis; Adversarial machine learning; Network security

1. Introduction

Huang et al. [1] present the concept of adversarial machine learning as “the study of effective machine learning techniques against an adversarial opponent”. This definition introduces a scenario in which an artificial intelligence (AI) is in charge of a decision-making process and a third-party actor aims to deceive it or render its operation useless. A second AI on the attacker’s side is usually involved.

The described scheme is pertinent to the field of network security, where cyber-criminals take the role of the mentioned third party and look for the best ways to perpetrate intrusions and attacks. As discussed by Kennedy et al. [2],

* Corresponding author.
E-mail addresses: felix.iglesias@nt.tuwien.ac.at (F. Iglesias), jelena.milosevic@tuwien.ac.at (J. Milosevic), tanja.zseby@tuwien.ac.at (T. Zseby).
URLs: https://www.nt.tuwien.ac.at/about-us/staff/felix-iglesias/ (F. Iglesias), https://www.nt.tuwien.ac.at/about-us/staff/jelena-milosevic/ (J. Milosevic), https://www.nt.tuwien.ac.at/about-us/staff/tanja-zseby/ (T. Zseby).

https://doi.org/10.1016/j.fss.2018.11.004
0165-0114/© 2018 Elsevier B.V. All rights reserved.
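As a rough illustration of the idea summarized in the abstract, the following Python sketch wraps a crisp decision tree with a simple class-membership score and flags test samples that fall in low-membership zones near the decision boundary. It is not the scoring algorithm proposed in this paper: the synthetic data, the nearest-neighbour membership heuristic, and the 0.5 threshold are placeholders chosen only to make the example self-contained.

```python
# Illustrative sketch only: wrap a crisp decision tree with a soft
# class-membership score and flag low-membership ("boundary") samples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for network-flow features (0 = benign, 1 = attack).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Crisp classifier: a plain decision tree with axis-parallel (linear) splits.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
pred = tree.predict(X_te)

def membership(X_train, y_train, X_new, y_pred):
    """Soft membership of each sample to its predicted class: distance to the
    nearest training sample of that class, mapped into (0, 1] so that samples
    far from the class 'core' (i.e., near fuzzy boundaries) get low scores."""
    scores = np.zeros(len(X_new))
    for label in np.unique(y_train):
        mask = y_pred == label
        if not mask.any():
            continue
        nn = NearestNeighbors(n_neighbors=1).fit(X_train[y_train == label])
        dist, _ = nn.kneighbors(X_new[mask])
        scores[mask] = np.exp(-dist.ravel())
    return scores

m = membership(X_tr, y_tr, X_te, pred)
suspicious = m < 0.5  # low membership -> candidate evasive/adversarial sample
print(f"{suspicious.sum()} of {len(X_te)} test samples lie in low-membership zones")

# Membership scores can also be appended as extra features for a second-stage classifier.
X_te_aug = np.column_stack([X_te, m])
```

The last line mirrors the second use of the scores mentioned in the abstract: besides ranking samples by closeness to classification thresholds, membership values can be fed back as additional features to improve classification performance.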