A Study of Friend Abuse Perception in Facebook
SAJEDUL TALUKDER, Edinboro University, USA
BOGDAN CARBUNAR, Florida Int’l University, USA
Social networks like Facebook provide functionality that can expose users to abuse perpetrated by their
contacts. For instance, Facebook users can often access sensitive profile information and timeline posts of
their friends, and also post abuse on the timeline and news feed of their friends. In this article we introduce
AbuSnif, a system to identify Facebook friends perceived to be abusive or strangers, and protect the user by
restricting the access to information for such friends. We develop a questionnaire to detect perceived strangers
and friend abuse. We train supervised learning algorithms to predict questionnaire responses using features
extracted from the mutual activities with Facebook friends. In our experiments, participants recruited from a
crowdsourcing site agreed with 78% of the defense actions suggested by AbuSnif, without having to answer
any questions about their friends. When compared to a control app, AbuSnif significantly increased the
willingness of participants to take a defensive action against friends. AbuSnif also increased the participant
self-reported willingness to reject friend invitations from strangers and abusers, their awareness of friend
abuse implications and their perceived protection from friend abuse.
CCS Concepts: • Security and privacy → Privacy protections; Social aspects of security and privacy;
Spoofing attacks;
Additional Key Words and Phrases: Social network friend abuse, friend spam, supervised detection
ACM Reference Format:
Sajedul Talukder and Bogdan Carbunar. 2020. A Study of Friend Abuse Perception in Facebook. ACM Trans.
Soc. Comput. 1, 1, Article 1 (January 2020), 33 pages. https://doi.org/10.1145/3408040
1 INTRODUCTION
Influential social networks like Facebook encourage casual friendship relations. Social network
users often have significantly more than 150 friends¹, which is the number of meaningful friend
relations that a person can manage [26]. Past work has shown that adversaries, including bot-operated
user accounts [71]², can establish friend relations with unsuspecting social network users,
then expose them to vulnerabilities and abuse that include the collection and misuse of private
information [24, 35, 59, 83, 84], identity theft [49] and spear phishing [28] attacks, the distribution
of offensive, misleading, false or malicious information [2, 4, 19, 74], and cyber abuse that includes
cyberstalking [25], doxing [24, 59], sextortion [84] and cyberbullying [36, 37, 56]. High-profile
cases of abuse perpetrated through Facebook include Cambridge Analytica’s use of data collected
from 87 million Facebook users [40] to identify “deep-seated underlying fears, concerns” [39] and
to inject content to change user perception [50] and influence the outcome of elections [9, 10].
¹ The average number of friends per Facebook user is 338, while the median is 200 [58].
² Facebook estimated that 13% (i.e., 270 million) of their user accounts are either bots or clones [32].
Authors’ addresses: Sajedul Talukder, Edinboro University, USA, stalukder@edinboro.edu; Bogdan Carbunar, Florida Int’l
University, USA, carbunar@gmail.com.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and
the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored.
Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires
prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2020 Association for Computing Machinery.
2469-7818/2020/1-ART1 $15.00
https://doi.org/10.1145/3408040
ACM Trans. Soc. Comput., Vol. 1, No. 1, Article 1. Publication date: January 2020.