(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 11, No. 6, 2020

Analyzing the Performance of Web-Services during Traffic Anomalies

Avneet Dhingra (1), Research Scholar, Department of Computer Science and Engineering, I.K. Gujral Punjab Technical University, Kapurthala-144603, India
Monika Sachdeva (2), Associate Professor, Department of Computer Science and Engineering, I.K. Gujral Punjab Technical University, Kapurthala-144603, India

Abstract—Whether intentional or unintentional, denial of service leads to substantial economic and reputational losses for both users and the web-service provider. Proper countermeasures, however, can be taken only if the impact of such anomalies on the victim is understood and quantified. This paper discusses and evaluates essential performance metrics that distinguish transmission issues from application issues. Legitimate and attack traffic was generated synthetically in a hybrid testbed using open-source software tools. The experiment covers two scenarios, representing DDoS attacks and flash events, with varying attack strengths, to analyze the impact of anomalies on the server and the network. It is demonstrated that as traffic surges, response time increases and the performance of the target web-server degrades. Server and network performance is measured using various network-level, application-level, and aggregate-level metrics, including throughput, average response time, number of legitimate active connections, and percentage of failed transactions.

Keywords—Denial of service; DDoS attack; flash event; performance metrics; throughput; response time

I. INTRODUCTION

In the event of a network traffic anomaly, users face either a drastic slowdown of the service or a complete outage. Recent years have witnessed a rise in the frequency and strength of illegitimate anomalies known as DDoS attacks.
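The aggregate metrics named in the abstract can be made concrete with a small sketch. The snippet below computes throughput, average response time, and percentage of failed transactions from a list of completed transactions; the record fields and function names are illustrative assumptions, not taken from the paper's testbed tooling.

```python
# Illustrative sketch (not the paper's tooling): compute throughput,
# average response time, and percentage of failed transactions
# from a list of observed transactions.
from dataclasses import dataclass


@dataclass
class Transaction:
    start: float    # seconds since epoch when the request was sent
    latency: float  # response time in seconds (0.0 if the request failed)
    ok: bool        # True if the transaction completed successfully


def summarize(transactions, window_seconds):
    """Return (throughput_tps, avg_response_time_s, failed_pct)."""
    total = len(transactions)
    successes = [t for t in transactions if t.ok]
    throughput = len(successes) / window_seconds  # successful transactions/s
    avg_rt = (sum(t.latency for t in successes) / len(successes)
              if successes else float("inf"))
    failed_pct = 100.0 * (total - len(successes)) / total if total else 0.0
    return throughput, avg_rt, failed_pct


# Example: four transactions observed over a 2-second window, one failed.
txs = [
    Transaction(0.0, 0.10, True),
    Transaction(0.5, 0.20, True),
    Transaction(1.0, 0.30, True),
    Transaction(1.5, 0.00, False),
]
tput, avg_rt, failed = summarize(txs, window_seconds=2.0)
print(tput, round(avg_rt, 3), failed)  # 1.5 0.2 25.0
```

Under load, one expects `avg_rt` to rise and `failed_pct` to grow as the server saturates, which is exactly the trend the experiments in this paper examine.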
These attacks compromise the availability of the victim server's web-services. The motive behind such activity ranges from personal to political. Whatever the cause, these attacks can be very troublesome and costly for the target. For instance, the online encyclopedia Wikipedia suffered a DDoS attack on September 6, 2019, that lasted about three days [1]. Users in the Middle East, Europe, the United Kingdom, and the United States faced intermittent outages and performance degradation. Many such incidents occur daily across the globe, owing to the exponential growth in the use of Internet-based applications. According to a Kaspersky report, the number of attacks in the first three months of 2020 increased by 80% compared with the attacks observed in 2019 [2].

The need hence arises to develop realistic techniques to evaluate performance and measure the impact of anomalies (legitimate or illegitimate) on the services of a web-server. Measuring server performance under such anomalies helps determine the preventive techniques that need to be installed, along with the type of potential defenses. The importance of performance testing is also realized when multiple users generate concurrent traffic, creating a heavy load on the network similar to that created during a DDoS attack. A network responding to anomalies needs to be tested repeatedly with short-duration attack traffic to evaluate the overall performance of the server and the cost of installing the required security measures. The metrics thus calculated characterize the network traffic at the point of saturation.

The literature reviewed for impact analysis [3,4,5] highlights the use of simulators for generating traffic and analyzing network performance. The experiment presented in this paper, in contrast, uses emulation to generate synthetic traffic.
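The concurrent-load scenario described above can be sketched in a few lines: many simultaneous clients hit one server, and each records its own response time. In the sketch below, a local stub HTTP server stands in for the victim web-server; the stub, the worker count, and the URL are assumptions of this illustration, not the paper's testbed setup.

```python
# Illustrative sketch: concurrent clients measuring per-request
# response time against a local stub server (an assumption of this
# sketch, standing in for the victim web-server).
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer


class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass


server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"


def timed_get(_):
    """One client request; returns its response time in seconds."""
    t0 = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - t0


# 20 concurrent clients, mimicking a small flash crowd.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(timed_get, range(20)))

server.shutdown()
print(f"avg response time: {sum(latencies) / len(latencies):.4f}s")
```

Scaling up the worker count (or the request rate per worker) is the knob that turns this benign load test into the kind of short-duration stress run the paper uses to probe server saturation.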
Emulation has the advantage of combining a real operating system and real applications with simulated elements such as virtual nodes and soft network links.

The paper also presents an exhaustive review undertaken to comprehend the concept of performance and to quantify the impact of anomalies on web-services. Various application-level, network-level, and server-level performance metrics have been identified and evaluated using traffic generated synthetically in the DDoSTB hybrid testbed [6]. The results of the study are presented as graphs showing the effect of traffic surges and their impact on performance. The background traffic is composed mainly of TCP. The attack traffic is composed of UDP, with a varying number of packets per second. The HTTP traffic, generated with a varying number of requests per second, represents a flash event (a legitimate anomaly). The paper defines performance metrics that quantify the quality of service (QoS) of the web-server under normal conditions and under increased traffic load. The experimental set-up and the procedure for evaluating the performance metrics of the designed network are discussed.

The paper is organized as follows. Related literature is reviewed in Section 2. Section 3 gives an overview of performance metrics and their importance in the detection of anomalies. Section 4 describes the model of the experimental network, using a realistic topology and software tools to generate legitimate and attack traffic. Section 5 discusses the metrics selected for analysis and presents the results as graphs for better understanding. Section 6 concludes with the observations of the experiment, and Section 7 outlines the scope for future work in the same field.
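The UDP attack traffic described above is parameterized by packets per second. A minimal, rate-controlled sender can be sketched as follows; the destination address, port, and payload size are illustrative assumptions, the sketch targets only the local loopback interface, and it is intended purely for closed testbed experiments, not as the paper's traffic-generation tool.

```python
# Illustrative sketch: rate-controlled UDP sender (loopback only),
# parameterized by packets per second, in the spirit of the UDP
# attack traffic described in the paper. Destination and payload
# are assumptions of this sketch.
import socket
import time


def send_udp_burst(pps, duration_s, dst=("127.0.0.1", 9999),
                   payload=b"x" * 64):
    """Send `pps` UDP packets per second to `dst` for `duration_s`
    seconds. Returns the number of packets sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = 1.0 / pps
    total = int(pps * duration_s)
    sent = 0
    next_send = time.perf_counter()
    for _ in range(total):
        sock.sendto(payload, dst)
        sent += 1
        # Simple pacing: sleep until the next scheduled send time.
        next_send += interval
        delay = next_send - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
    sock.close()
    return sent


sent = send_udp_burst(pps=200, duration_s=0.5)
print(sent)  # 100 packets at 200 pps over half a second
```

Sweeping the `pps` parameter across runs corresponds to the "varying attack strengths" of the experimental scenarios, while an analogous HTTP requester with a varying request rate would model the flash-event traffic.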