Please cite as: Brincker, Maria. "Disoriented and Alone in the “Experience Machine” – On Netflix, Shared World Deceptions and the Consequences of Deepening Algorithmic Personalization." SATS, vol. 22, no. 1, 2021, pp. 75–96. https://doi.org/10.1515/sats-2021-0005

Disoriented and Alone in the “Experience Machine” – On Netflix, Shared World Deceptions and the Consequences of Deepening Algorithmic Personalization

By Maria Brincker

Abstract: Most online platforms are becoming increasingly algorithmically personalized. The question is whether these practices simply satisfy users’ preferences or whether something is lost in the process. This article focuses on how to reconcile personalization with the importance of being able to share cultural objects – including fiction – with others. In analyzing two concrete personalization examples from the streaming giant Netflix, several tendencies are observed. One is to isolate users and sometimes entirely eliminate shared world aspects. Another tendency is to blur the boundary between shared cultural objects and personalized content, which can be misleading and disorienting. A further tendency is for personalization algorithms to be optimized to deceptively prey on desires for content that mirrors one’s own lived experience. Some specific – often minority-targeting – “clickbait” practices have received public blowback. These practices show disregard both for honest labeling and for our desires to have access and representation in a shared world. The article concludes that personalization tendencies are moving towards increasingly isolating and disorienting interfaces, but that platforms could be redesigned to support better social world orientation.
Keywords: algorithmic personalization, shared cultural objects, racial profiling, data surveillance, social epistemology

1 Introduction

More and more, our informational, cultural, and social experiences are mediated by algorithmically personalized platforms and other “smart” tools and applications. As Reviglio and Agosti write in a recent paper: “Online personalization is our interface” (2020: 1). Algorithmic personalization is championed as a necessary means to navigate the cluttered digital sphere and deal with information and option overload. By way of personal data-driven algorithms, our options can be filtered, sorted, and presented in a curated way that optimizes our interfaces and thus serves experiences according to the preferences we have expressed through the data trail of our prior choices. What is not to love? As many scholars have pointed out, there are some quite significant downsides and ethical worries around these powerful algorithmic tools.1 Some consistently highlighted concerns:

1. Monetization: Conflict of interest, as data harvesting and predictive “optimization” are controlled by for-profit companies and their financial imperatives (Zuboff 2019).

2. Manipulation: Personalized “choice architectures” as “hypernudging” (Yeung 2018) and imposing hidden coercive influences (Susser 2019).

3. Lack of transparency: Algorithms as legally protected as private and proprietary (Cohen 2013) and operating like “black boxes” (Pasquale 2015).

4. Bias: “Smart” tech as perpetrating “algorithmic bias” and discrimination (Benjamin 2019), e.g., via “social sorting” (Lyon 2003).

5. Filter bubbles: Personalization as trapping users in past preferences, “filter bubbles” rife with polarization and misinformation (Hendricks and Vestergaard 2019; Pariser 2011).
Relating to these five concerns, this article focuses on the value of being oriented in our broader social world, and analyzes deceptive and disorienting features of current personalization practices, which increasingly and imperceptibly mingle individualized platform content with content originating beyond the platform. While other scholars have raised worries around

1 See also Yeung (2018) for a recent overview.