Lie on the Fly: Strategic Voting in an Iterative Preference Elicitation Process

Lihi Naamani-Dery (Ariel University, Ariel, Israel; lihid@ariel.ac.il)
Svetlana Obraztsova (Nanyang Technological University, Singapore; lana@ntu.edu.sg)
Zinovi Rabinovich (Nanyang Technological University, Singapore; zinovi@ntu.edu.sg)
Meir Kalech (Ben-Gurion University, Beer Sheva, Israel; kalech@bgu.ac.il)

Abstract

A voting center is in charge of collecting and aggregating voter preferences. In an iterative process, the center sends comparison queries to voters, requesting them to submit their preference between two items. Voters might discuss the candidates among themselves, figuring out during the elicitation process which candidates stand a chance of winning and which do not. Consequently, strategic voters might deviate from their true preferences and submit a different response in an attempt to maximize their own gain. We provide a practical algorithm for strategic voters that computes the best manipulative vote and maximizes the voter's selfish outcome when such a vote exists. We also provide a careful voting center which is aware of the possible manipulations and avoids manipulative queries when possible. In an empirical study on four real-world domains, we show that in practice manipulation occurs in a low percentage of settings and has a low impact on the final outcome. The careful voting center reduces manipulation even further, thus allowing for a non-distorted group decision process to take place. We thus provide a core technology study of a voting process that can be adopted in opinion or information aggregation systems and in crowdsourcing applications, e.g., peer grading in Massive Open Online Courses (MOOCs).
Keywords: Iterative voting, Preference elicitation, Group decisions, Crowdsourcing

1 Introduction

Voting procedures are used for combining voters' individual preferences over a set of alternatives, enabling them to reach a joint decision. However, sometimes the full set of preferences is unavailable. Take, for example, a recruiting committee that convenes to decide on the appropriate candidate to fill a position. Ideally, each committee member ranks all applicants, and a joint decision is then reached based on all opinions (see e.g. [13]). However, as their time is limited, each committee member is reluctant to describe and disclose a complete list of ranked preferences (see e.g. the discussion in [69]). As another example, consider peer grading in Massive Open Online Courses (MOOCs). Since students are not professional educators, they are not trained to provide grades in absolute terms. Rather, students provide comparative information by answering binary comparison queries (see e.g. [12]). Even when a voter is acquainted with all of the candidates, it is easier to answer relative comparison queries than to rank all of the alternatives [2]. Furthermore, voters are more accurate when making relative comparisons than when (a) ranking all items [45] or (b) presenting precise numerical values [18].
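To make the iterative elicitation setting concrete, the following is a toy sketch of a generic pairwise-comparison elicitation loop. It is illustrative only and is not the authors' algorithm: the function name, the win-counting aggregation rule, and the early-stopping test (return once one candidate's tally can no longer be overtaken by any other) are all assumptions introduced here for exposition.

```python
from itertools import combinations

def elicit_winner(candidates, voters, true_prefs):
    """Iteratively query one pairwise comparison at a time and stop as
    soon as one candidate's win count cannot be overtaken (illustrative
    sketch only; true_prefs[v] is voter v's full ranking, best first,
    which the center accesses solely through pairwise queries)."""
    wins = {c: 0 for c in candidates}     # pairwise wins tallied so far
    pending = {c: 0 for c in candidates}  # unasked queries each candidate appears in
    queue = [(v, a, b) for v in voters
             for a, b in combinations(candidates, 2)]
    for _, a, b in queue:
        pending[a] += 1
        pending[b] += 1
    for v, a, b in queue:
        ranking = true_prefs[v]
        preferred = a if ranking.index(a) < ranking.index(b) else b
        wins[preferred] += 1
        pending[a] -= 1
        pending[b] -= 1
        leader = max(wins, key=wins.get)
        # Stop early if no other candidate can still catch the leader.
        if all(wins[c] + pending[c] < wins[leader]
               for c in candidates if c != leader):
            return leader
    return max(wins, key=wins.get)
```

For instance, if three voters all rank candidate `a` first, the loop terminates before asking every comparison, since no rival can accumulate enough remaining wins to overtake `a`. Strategic voters, as studied in this paper, would answer some of these queries untruthfully to steer the outcome.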