QnQ: A Reputation Model to Secure Mobile Crowdsourcing Applications from Incentive Losses

Shameek Bhattacharjee, Nirnay Ghosh, Vijay K. Shah, and Sajal K. Das
Department of Computer Science, Missouri University of Science and Technology, Rolla, USA
{shameek, ghoshn, vksr38, sdas}@mst.edu

Abstract—A major limitation of mobile Crowd Sourcing (CS) applications is the generation of false (or spam) contributions due to selfish and malicious behaviors of users, or wrong perception of an event. Such false contributions induce loss of revenue through disbursement of undue incentives and also negatively affect the application's operational reliability. In this work, we propose a reputation model, called QnQ, to segregate different user classes such as honest, selfish, or malicious based on their reputation scores. The resultant score is then used as an indicator to decide the incentive for a user. Unlike existing works, QnQ ensures fairness to different user behaviors by unifying 'quantity' (degree of participation) and 'quality' (accuracy of contribution). Specifically, QnQ utilizes evidence from a rating feedback mechanism to propose an event-specific expected truthfulness metric that considers the total feedback volume, the probability mass of positive evidence, and the discounted probability mass of uncertain evidence. To classify an event as true or not, a generalized linear model is used to transform its truthfulness into quality of information (QoI). Finally, the QoIs of the various events in which a user participates are aggregated to compute the user's reputation score. To evaluate QnQ through an experimental study, we consider a vehicular crowdsourcing application. The QoI performance of our model is compared with Jøsang's belief model, while reputation and incentive leakage are compared with a Dempster-Shafer based reputation model.
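The pipeline outlined in the abstract (per-event truthfulness from rating evidence, a generalized-linear transformation into QoI, and aggregation into a user reputation) can be sketched as follows. This is only an illustrative sketch: the function names, the logistic choice of link function, the discount factor on uncertain evidence, and the mean-based aggregation are assumptions for exposition, not QnQ's exact formulation.

```python
import math

def expected_truthfulness(pos, neg, unc, discount=0.5):
    """Illustrative event truthfulness: positive evidence mass plus a
    discounted mass of uncertain evidence, over the total feedback
    volume. The discount value 0.5 is an assumed placeholder."""
    total = pos + neg + unc
    if total == 0:
        return 0.0
    return (pos + discount * unc) / total

def qoi(truthfulness, steepness=10.0, threshold=0.5):
    """Map truthfulness to a QoI score in (0, 1) via a logistic link,
    standing in for the paper's generalized linear model."""
    return 1.0 / (1.0 + math.exp(-steepness * (truthfulness - threshold)))

def reputation(event_qois):
    """Aggregate the QoIs of events a user contributed to into a single
    reputation score (a simple mean here; QnQ's aggregation may differ)."""
    return sum(event_qois) / len(event_qois) if event_qois else 0.0
```

For example, an event with 8 positive, 1 negative, and 1 uncertain feedback yields a truthfulness of 0.85 under these assumptions, which the logistic link pushes toward a high QoI; a user whose events mostly score low QoI ends up with a low reputation and, correspondingly, a reduced incentive.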
Experimental results demonstrate that QnQ better captures subtle differences in user behaviors by unifying both quality and quantity, and significantly reduces undue incentives in the presence of rogue contributions.

Index Terms—Participatory Sensing, Trust, Reputation, Secure Crowdsourcing, Security Economics, Vehicular Crowd Sensing.

I. INTRODUCTION

Sophistication in mobile devices (e.g., smartphones, tablets) and their widespread adoption have given rise to a novel interactive sensing paradigm known as Participatory Sensing (PS) or Crowd Sourcing (CS) [5]. In CS systems, a crowd of citizens voluntarily submits observations, termed contributions (viz., reports, images, audio), about their environment to a CS application server, which then fuses such contributions to derive a summary statistic (or information) and publishes it to support improved decision making. An important category of CS applications is vehicular traffic management and monitoring [3]. In such applications, a user's contributions are equivalent to 'reports' about various road conditions that they may have observed. Based on certain correlations among such reports, the CS application decides whether a particular traffic 'event' has occurred, and publishes this 'information' as a broadcast notification on the smartphone application. Such information improves driving experiences through dynamic route planning and re-routing of traffic in busy cities. Two notable examples of real vehicular CS applications are Google's Waze and Nericell [15]. Other practical examples of CS applications are FourSquare and Yelp, which help users find the best destinations in their geographical proximity for food, entertainment, and other attractions or events of interest. The real benefit of CS applications is that fine-grained and precise sensory observations can be obtained quickly without depending on the deployment of expensive, dedicated infrastructure [17].
However, the major drawback is their "open" nature (accessible to all), which may expose such applications to false contributions [9], [21]. Most CS applications need to use various incentive mechanisms to motivate users to keep contributing regularly, and thus preserve their viability [13]. It has been noted that in most of these mechanisms, the deciding factor for an incentive is the user's degree of participation (i.e., "quantity", or how much they contribute). However, selfish users may exploit this loophole and intermittently generate false contributions to boost their participation and gain undue incentives [17], incurring revenue losses to the CS system. Furthermore, there may be malicious users who attempt to cripple CS applications by generating a large number of bogus contributions in collusion [21]. Recently, such a colluding attack was launched against Waze in Israel, in which fake traffic jam reports were created to orchestrate traffic re-routing and unnecessary roadblocks [19]. Occasionally, false contributions may also be generated owing to wrong perception. Regardless of the motive, false contributions incur loss of revenue due to unnecessary disbursement of incentives and also tarnish the operational reliability of the CS application. In our preliminary work, we studied a real data set from Waze [3] and established that the 'quantity' rather than the 'quality' of contributions decides incentives (details presented in Section II-B). We argue that besides quantity, there is also a simultaneous need to assess the quality of information (QoI) generated from user contributions. This QoI is essentially a measure of the trustworthiness of the summary statistic and is equivalent to its trust score. Additionally, user reputation based on his level of truthful cooperation is required to determine:

978-1-5386-0683-4/17/$31.00 © 2017 IEEE