International Journal of Science and Research (IJSR)
ISSN (Online): 2319-7064
Index Copernicus Value (2013): 6.14 | Impact Factor (2014): 5.611
Volume 4, Issue 10, October 2015
www.ijsr.net
Licensed Under Creative Commons Attribution CC BY

Threshold Based Filtering System for OSN - Online Social Networks

Shareen S. Anthony (1), Nitin A. Shelke (2)

1 M.E. Final Year, CSE, GHRCEM, Amravati, India
2 Assistant Professor, Department of Computer Science and Engineering, GHRCEM, Amravati, India

Abstract: Online Social Networks (OSNs) are today one of the most popular interactive media for sharing, communicating, and distributing a significant amount of information about human life. In OSNs, information filtering can also be used for a different, more sensitive, function. This is owing to the fact that OSNs allow users to post on, or comment on posts in, particular public/private regions, generally called walls. Information filtering can therefore be used to give users the ability to automatically control the messages written on their own walls by filtering out unwanted messages. OSNs currently provide very little support to prevent undesired messages on user walls. We propose and experimentally evaluate an automated system, called Filtered Wall (FW), able to filter unwanted messages from OSN user walls. The proposed work includes preprocessing steps that reduce the size of the database containing abusive words.

Keywords: Online Social Networks, Machine Learning, Filtering Rules, Content-based Filtering

1. Introduction

Information and communication technology plays a significant role in today's networked society. It has shaped online interaction between users, who are increasingly aware of security applications and their implications for personal privacy. There is a need to develop stronger security mechanisms for different communication technologies, particularly online social networks.
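The preprocessing of the abusive-word database mentioned in the abstract can be sketched as follows. This is a minimal illustration, assuming a normalization scheme (lowercasing, stripping non-alphanumeric characters, collapsing repeated letters) and a sample word list that are not taken from the paper itself; the same normalization would be applied to incoming messages so that variants still match the shrunken database.

```python
import re

def normalize(word: str) -> str:
    """Map a word to a canonical form so spelling variants collapse together.
    The specific rules here are illustrative assumptions."""
    word = word.lower().strip()
    word = re.sub(r"[^a-z0-9]", "", word)   # drop punctuation and symbols
    word = re.sub(r"(.)\1+", r"\1", word)   # collapse repeats: "spaaam" -> "spam"
    return word

def compress(words: list) -> list:
    """Deduplicate a raw abusive-word list after normalization,
    reducing the size of the stored database."""
    return sorted({normalize(w) for w in words if normalize(w)})

# Hypothetical raw entries: five variants reduce to two canonical words.
raw = ["Idiot", "IDIOT!!", "id-iot", "Spam", "spaaam"]
print(compress(raw))  # -> ['idiot', 'spam']
```

Because normalization is applied both when building the database and when checking a message, the smaller database loses no matching power for the variants it was built from.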
OSNs provide very little support to prevent unwanted messages on user walls. Lacking classification or filtering tools, a user receives every message posted by the users he follows; in most cases this is a noisy stream of updates. In this paper, an information filtering system is introduced. The system focuses on one kind of feed: lists, which are manually selected groups of users on an OSN. List feeds tend to be focused on specific topics; however, they are still noisy due to irrelevant messages. Therefore, we propose an online filtering system which extracts the topics in a list and filters out irrelevant messages [1]. The conceptual architecture of the filtering system is shown below.

Figure 1: Filtering System Conceptual Architecture

In OSNs, information filtering can also be used for a different, more sensitive, purpose. This is due to the fact that OSNs allow users to post on, or comment on posts in, particular public/private areas, generally called walls. In the proposed system, information filtering is therefore used to give users the ability to automatically control the messages written on their own walls by filtering out unwanted messages. The aim of the present work is to propose and experimentally evaluate an automated system, called Filtered Wall (FW), able to filter unwanted messages from OSN user walls. We exploit Machine Learning (ML) text categorization techniques [2] to automatically assign to each short text message a set of categories based on its content. The major effort in building a robust short text classifier is concentrated on the extraction and selection of a set of characterizing and discriminating features.

2. Proposed Work

Our goal is to design an online message filtering system that is deployed at the OSN service provider side.
Once deployed, it inspects every message before rendering it to the intended recipients and makes an immediate decision on whether or not the message under inspection should be dropped.

2.1 Working Modules

2.1.1 Filtering Rules

The system provides a powerful rule layer exploiting a flexible language to specify Filtering Rules (FRs), by which users are able to state what contents should not be displayed on their walls.

Paper ID: SUB158815
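A minimal sketch of how such threshold-based filtering rules might be represented and evaluated is given below. The rule fields, category names, and matching logic are illustrative assumptions, not the paper's actual FR language; the sketch assumes each message has already been assigned per-category confidence scores by an ML short-text classifier.

```python
from dataclasses import dataclass

@dataclass
class FilteringRule:
    """One rule declared by a wall owner. Field names are hypothetical."""
    blocked_category: str  # content category the owner does not want
    threshold: float       # minimum classifier confidence needed to block

def should_drop(message_scores: dict, rules: list) -> bool:
    """Drop the message if any rule's blocked category scores at or
    above that rule's threshold."""
    return any(
        message_scores.get(rule.blocked_category, 0.0) >= rule.threshold
        for rule in rules
    )

# Hypothetical wall-owner rules and classifier output for two messages.
rules = [FilteringRule("vulgar", 0.6), FilteringRule("violence", 0.8)]
print(should_drop({"vulgar": 0.7, "neutral": 0.3}, rules))   # True  (dropped)
print(should_drop({"vulgar": 0.4, "violence": 0.5}, rules))  # False (rendered)
```

Because the check runs per message before rendering, this matches the deployment model above: the provider-side filter makes an immediate drop/render decision for each wall.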