Multi-Layered CAPTCHA: A New Approach to Tackle Web Robots

Abstract - CAPTCHA, which stands for Completely Automated Public Turing test to tell Computers and Humans Apart [1], is a security mechanism designed to distinguish human users from online bots. It is a challenge-response test used in computing to determine whether the user is human, and it serves as a defence against malicious bot programs. As the Internet has become vital to everyday life, many web applications face the threat of web bots (robots). A web bot, or robot, is an automated script that runs over the Internet, occupying web space and increasing network traffic [6]-[7]. The complication with currently used CAPTCHAs is that text-based and graphic-based CAPTCHAs are often troublesome to read even for humans, while image-based and audio-based CAPTCHAs have been broken many times. A multi-layered CAPTCHA is a type of hybrid CAPTCHA consisting of two layers: the first layer is a face-recognition CAPTCHA, and the second layer is an image-based, text-based, audio-based, or graphic-based CAPTCHA. This paper discusses existing CAPTCHAs and the multi-layered CAPTCHA.

Keywords:- CAPTCHA, Text-Based/Graphic-Based CAPTCHA, Image-Based/Audio-Based CAPTCHA, Face-Recognition CAPTCHA, Multi-Layered CAPTCHA

I. INTRODUCTION

CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart; it is a security mechanism designed to distinguish human users from online bots. On a website, if a person wants to sign up for a free e-mail service, then before submitting the web form he or she must first pass a test. The test is very easy and simple for a human, but for online bots it is practically impossible to solve. It is a challenge-response test used in computing to determine whether the user is human. It is designed to prevent automated attacks by requiring users to perform tasks that are relatively easy for humans but challenging for web bots.
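The challenge-response flow described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`issue_challenge`, `verify_response`) and the in-memory store are hypothetical, and a real CAPTCHA would render the answer as a distorted image rather than return it as plain text.

```python
import secrets
import string

# Hypothetical in-memory challenge store; a real deployment would use
# a server-side session store with an expiry time.
_challenges = {}

def issue_challenge(session_id: str, length: int = 6) -> str:
    """Generate a random answer string for this session.

    In a real CAPTCHA the answer would be rendered as a distorted
    image shown to the user; here the plain text is returned purely
    for illustration."""
    answer = "".join(secrets.choice(string.ascii_uppercase + string.digits)
                     for _ in range(length))
    _challenges[session_id] = answer
    return answer

def verify_response(session_id: str, response: str) -> bool:
    """Check the user's response; each challenge is single-use."""
    answer = _challenges.pop(session_id, None)
    return answer is not None and secrets.compare_digest(
        answer, response.strip().upper())
```

The single-use check (popping the stored answer) matters: if a challenge could be replayed, a bot could brute-force it offline and submit the answer later.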
CAPTCHAs provide an additional layer of security and are frequently paired with account login systems to prevent brute-force password attacks. The term CAPTCHA was coined in 2000 by Luis von Ahn, Manuel Blum, and Nicholas J. Hopper (all of Carnegie Mellon University), and John Langford (then of IBM) [6][5]. CAPTCHAs automatically generate and evaluate tests that are difficult for computers or online bots but easy for humans. If the success rate of humans in solving a CAPTCHA is 90% or higher while computer programs or bots achieve a success rate of less than 1%, the CAPTCHA can be considered secure [5].

There are some properties defined for the development of a CAPTCHA [20]:
Automated:- A computer program should be able to generate the tests.
Open:- The underlying database(s) and algorithm(s) used to generate and grade the tests should be public. This is in accordance with Kerckhoffs's principle.
Usable:- The effect of any user's language, physical location, and perceptual abilities should be minimal.
Secure:- The generated test should be difficult for a machine to solve using any algorithm.

Currently, existing CAPTCHA implementations generally belong to one of these categories:
Text-Based CAPTCHA
Graphic-Based CAPTCHA
Image-Based CAPTCHA
Audio-Based CAPTCHA

The most common is the text-based CAPTCHA, in which the user has to enter a string of characters that appears in distorted form on the screen. Researchers have recently claimed that their simple generic attack has broken a wide range of text-based CAPTCHAs. The robustness of a text-based CAPTCHA should rely on the difficulty of finding where each character is (segmentation), rather than on recognising what each character is. Strong CAPTCHAs therefore have to be designed and built so that spammers cannot undermine web security.
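The multi-layered design summarized in the abstract, where a face-recognition layer is followed by a conventional CAPTCHA layer, can be sketched as a sequential gate that only passes when every layer accepts its response. This is a hedged illustration: the layer predicates below are stand-ins, not the paper's actual face-recognition or text-CAPTCHA components.

```python
from typing import Callable, List

# A layer checker takes the user's response for that layer and returns
# True on success. In the paper's design the first layer is face
# recognition and the second is a text/image/audio/graphic CAPTCHA;
# here both are hypothetical stand-in predicates.
Layer = Callable[[str], bool]

def multi_layered_verify(layers: List[Layer], responses: List[str]) -> bool:
    """Pass only if every layer accepts its corresponding response.

    Layers are checked in order and the test fails fast (all() with a
    generator short-circuits), so a bot must defeat the first layer
    before the second challenge even matters."""
    if len(responses) != len(layers):
        return False
    return all(layer(resp) for layer, resp in zip(layers, responses))

# Illustrative stand-ins (not real recognisers):
face_layer: Layer = lambda r: r == "face-ok"        # pretend face match
text_layer: Layer = lambda r: r.upper() == "7XK2Q"  # pretend text answer
```

The design point is that the layers are independent: breaking the text layer with an OCR or segmentation attack is not enough, because the face-recognition layer still has to be passed.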
This paper mitigates the shortcomings of existing approaches and proposes a new CAPTCHA, termed the Multi-Layered CAPTCHA, which is user friendly and adds an additional layer of security to existing CAPTCHAs.

II. BACKGROUND DETAIL

The need for CAPTCHAs arises from the abuse of websites and search engines by bots. In 1997, the AltaVista team, comprising Lillibridge, Abadi, and Bharat, began work on a system to prevent Internet bots from adding URLs to AltaVista, the search-engine platform. To do this, the AltaVista team worked to defeat OCR (Optical Character Recognition) attacks by building puzzles and images that would cause such attacks to fail. The team created a system of varied typefaces, backgrounds, type styles, and sizes that would fool OCR readers [12].

In November 1999, slashdot.com released a poll to vote for the best CS college in the US. Students from Carnegie Mellon University and the Massachusetts Institute of Technology created bots that repeatedly voted for their respective colleges. This incident created the urge to use CAPTCHAs.

International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, Published by www.ijert.org, ICCCS - 2017 Conference Proceedings, Volume 5, Issue 10, Special Issue - 2017

Dayanand, Research Scholar, Department of Computer Science and Information Technology, Sam Higginbottom University of Agriculture, Technology and Sciences, Allahabad, Uttar Pradesh, India
Megha Saloni, Student, Dept. of Information Technology, HMR Institute of Technology and Management, Hamidpur, New Delhi, India
Wilson Jeberson, Professor, Department of Computer Science and Information Technology, Sam Higginbottom University of Agriculture, Technology and Sciences, Allahabad, Uttar Pradesh, India