Mark Garrison, AI’s Potentially Contagious Psychosis, 27

Educational Abundance: Journal of the New York State Foundations of Education Association, Volume 5 (2025)

Is AI Psychotic?

Mark Garrison
West Texas A&M University

Abstract: I explore the thesis that Artificial Intelligence (AI) exhibits characteristics analogous to human psychosis, and that AI hype constitutes a “double bind,” a communication dilemma associated with schizophrenia. The article explains how neural networks mirror distortions of time and boundaries found in psychotic conditions. To break free of the double bind, I argue that distinguishing intelligence from consciousness is key. While AI is focused on “nexting” (predicting immediate future events based on past data), unique to humans is the ability to imagine futures. Rather than fixating on concerns that AIs may become conscious, the article warns that the pervasive integration of AI could lead to a “psychotic socialization” of humans, fostering a “cybernetic personality” that prioritizes automated responses and limits imaginative capacity.

What we thought we were doing (and I think we succeeded fairly well) was treating the brain as a Turing machine; that is, as a device which could perform the kind of functions which a brain must perform if it is only to go wrong and have a psychosis.
— Warren McCulloch (psychiatrist/AI developer)¹

With generative AI (Artificial Intelligence) we are both encouraged and warned. AI can be a powerful but error-prone assistant: “Oftentimes, the answers produced by AI will be a mixture of truth and fiction…. Sometimes, rather than simply being wrong, an AI will invent information that does not exist. Some people call this a ‘hallucination’” (Research Guides, 2023).²
What is not widely known, however, is that the architects of so-called neural networks (an important methodological foundation of the technologies branded as artificial intelligence) believed their invention to be both “psychotic” and “rational” (Halpern 2014). My purpose in this article is to explore the psychological analogues of intelligent machines and McCulloch’s thesis, namely, that machine intelligence can be advanced by modeling “the kind of functions which a brain must perform if it is only to go wrong and have a psychosis.” Why precisely did McCulloch and his colleagues believe their inventions were both “psychotic” and “rational”? What is the significance of such a diagnosis, and what are its implications?

What Are We Talking about When We Talk about AI?

As Bender and Hanna (2025) note, the phrase “artificial intelligence” has become marketing hype, introducing much confusion into discussions of technologies used to automate decision-making, personalize recommendations, and translate languages. Confusion especially abounds with “generative” forms of automation such as ChatGPT and DALL-E (the latter are termed “synthetic media machines” by Bender and Hanna). While I harness some of this confusion as a symptom of the problems with AI discussed here, clarification of terminology is useful.

¹ As quoted in Halpern 2014, 223.
² If it doesn’t exist, it’s not information. It is also important to note that “hallucination” is a term developed by those working in the AI industry, not by its critics (McQuillan, Jarke, and Pargman 2024, 365).

I go one step further than Pasquinelli, arguing not only that AI has “inaugurated the age of statistical science fiction” (2017, emphasis in original), but that it has done so in a manner analogous to psychosis, and, worse, that such “myth making” can induce, and indeed is inducing, psychosis in humans.
Another view is offered by Stetar (2025), who argues: “[In] the systems we’ve constructed, there’s an undeniable fracture happening — one that goes far deeper than the misapplication of terms. The term ‘hallucination,’ used to describe errors in language models, is part of this collapse.”