Representation in the Prediction Error Minimization Framework

Alex Kiefer 1 and Jakob Hohwy 2

1 City University of New York Graduate Center, New York, USA, akiefer@gmail.com
2 Cognition & Philosophy Lab, Monash University, Melbourne, Australia, Jakob.Hohwy@monash.edu

[Draft submitted for the Routledge Handbook to the Philosophy of Psychology, (eds.) John Symons, Paco Calvo, & Sarah Robins]

Introduction

The Prediction Error Minimization (PEM) framework in cognitive science is an approach to cognition and perception centered on a simple idea: organisms represent the world by constantly predicting their own internal states. Predictions consist of efferent signals traveling via “top-down” synaptic connections from higher (e.g. frontal and temporal) cortical regions to lower-level sensory and motor cortices. Cascades of predictions are matched against incoming sensory signals, which act as negative feedback to correct a generative model encoded in the top-down and lateral connections. Comparisons between predictions and bottom-up signals occur at each stage of hierarchical cortical processing, and only the “error signal”, the unpredicted portion of the bottom-up input, feeds forward to the next stage. As a process theory, this hypothesized mechanism is known as “predictive coding”; see e.g. (Rao and Sejnowski 2002, Friston 2005, Clark 2013, Hohwy 2013, Clark 2016) for details and discussion. In this chapter, we focus on what is novel in the perspective that the PEM framework affords on the cognitive-scientific project of explaining intelligence by appeal to internal representations. 1 The core representational structure posited by such theories is the hierarchical generative model.
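The core computational idea behind predictive coding can be illustrated with a toy sketch. The following is a minimal, illustrative example of our own construction (not a model drawn from the works cited above): a single generative mapping predicts a sensory input from a higher-level representation, and only the prediction error drives the adjustment of that representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step predictive coding loop (illustrative only).
# A higher-level representation r generates a top-down prediction W @ r of
# the sensory input x. Only the prediction error (x - W @ r) is passed on,
# and perceptual inference consists in adjusting r to reduce that error.

W = rng.normal(scale=0.1, size=(4, 2))   # generative (top-down) weights
x = np.array([1.0, 0.5, -0.5, 0.2])      # fixed sensory input

r = np.zeros(2)                          # initial higher-level hypothesis
for _ in range(200):                     # settle on the best hypothesis
    error = x - W @ r                    # bottom-up prediction error
    r += 0.1 * (W.T @ error)             # gradient step reducing the error

print(np.linalg.norm(x - W @ r))         # residual error after inference
```

The residual error shrinks as the hypothesis r settles; in the hierarchical case, each level would play the role of "input" for the level below and "hypothesis" for the level above.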
Generative models have long been informally hypothesized to play a role in perception (Helmholtz 1860), and have been proposed as a unifying framework for understanding unsupervised learning within neural networks (Hinton and Sejnowski 1999). More recently, the PEM framework and predictive coding theories have drawn long-deserved attention, within wider cognitive-scientific circles, to generative statistical modelling as a powerful overarching theoretical approach to mental and neural representation. Generative models are a philosophically interesting class of representation in part because they can be understood both in terms of Bayesian updating of subjective probabilities in light of evidence, and thus as part of a hypothesis-testing model of cognition (Fodor 1975), and also in terms of simulation or (exploitable) structural resemblance to modelled sets of causes (Cummins 1994, Gładziejewski 2016, Gładziejewski and Miłkowski 2017). By the close of the chapter we aim to have shown how truth-conditional and resemblance-based approaches to representation in generative models may be integrated.

1 Here, we assume representationalism as a background approach to cognitive science, and do not explicitly address anti-representationalist arguments. That said, the somewhat novel way in which representations are treated within the PEM framework, as outlined here, may provide a basis for responding to such arguments, which are often based on certain philosophical assumptions about representation; see (Gładziejewski and Miłkowski 2017).
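The Bayesian updating of subjective probabilities mentioned above can be made concrete with a toy example of our own (the hypotheses and numbers are arbitrary, chosen purely for illustration): a prior over two candidate causes is revised in light of an observation via Bayes' theorem.

```python
# Toy Bayesian update over two hypothesized causes of an observation.
# Priors and likelihoods are arbitrary illustrative numbers.

prior = {"cat": 0.5, "dog": 0.5}        # subjective prior over causes
likelihood = {"cat": 0.9, "dog": 0.2}   # P(observation | cause)

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
evidence = sum(unnormalized.values())   # P(observation)
posterior = {h: p / evidence for h, p in unnormalized.items()}

print(posterior["cat"])  # ≈ 0.818 (i.e. 9/11)
```

On the hypothesis-testing reading, perceptual inference amounts to many such updates, with the generative model supplying the likelihoods.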