The Risks of Human Overreliance on Large Language Models for Critical Thinking

A Preprint

Tom Duenas*
Political Scientist & AI Researcher
tom@superailab.org

Diana Ruiz*
UX Researcher and Professor
diana@coderina.com

November 12, 2024

Abstract

This research investigates the ethical considerations and educational implications of increasing human reliance on Large Language Models (LLMs) for critical thinking. We examine the ethical challenges of delegating cognitive tasks to AI systems and explore strategies for preserving human agency in AI-augmented decision-making. The study analyzes how educational frameworks may need to evolve in response to LLM integration, discussing opportunities and challenges in AI-assisted learning environments. We present theoretical models for human-LLM interaction, emphasizing socio-technical perspectives on AI integration in human cognitive processes. The paper also addresses the potential long-term effects on human reasoning skills and proposes approaches for fostering critical thinking in an AI-augmented world. We conclude by calling for interdisciplinary research on human-AI cognitive symbiosis and the establishment of ethical guidelines for responsible LLM deployment in critical thinking contexts.

Keywords: Artificial Super-intelligence · AI Safety · AI Alignment

1 Introduction

Large Language Models (LLMs) have revolutionized the field of artificial intelligence, demonstrating unprecedented capabilities in natural language processing and generation. As exemplified by Achiam et al. [2023] in their work on GPT-4, these models have achieved remarkable performance across a wide range of linguistic tasks, from text completion to complex reasoning. The rapid advancement of LLMs has led to their increasing integration into various aspects of daily life, from virtual assistants to content creation tools.
The current state-of-the-art models, including GPT-4o1, Claude 3.5 Sonnet, Llama 3.2, and OpenAI's advanced voice mode, represent the cutting edge of this technology. These models showcase significant improvements in language understanding, generation, and even multimodal capabilities, pushing the boundaries of what artificial intelligence can achieve in human-like communication and problem-solving.

However, this growing reliance on LLMs for cognitive tasks raises important questions about their impact on human thinking and decision-making processes. Lin and Chang [2023] discuss the trend of humans increasingly turning to AI systems for information processing and problem-solving, highlighting the potential risks and benefits of this shift. While reliance on LLMs raises valid concerns about cognitive atrophy, especially in areas requiring critical thinking, the shift to an AI-augmented cognitive landscape may also prompt adaptive changes in human cognitive roles. Historical examples, such as the introduction of GPS for navigation, show that while certain cognitive skills may diminish, others—such as spatial awareness and decision-making in unfamiliar environments—evolve alongside technology. By studying this shift, we can identify ways in which LLMs might both complement and challenge critical thinking skills, helping users retain independent judgment even in an AI-enhanced world. This adaptive approach

* The researchers are integrating state-of-the-art LLMs into their daily workflow, including paper preparation and revision. They are also working towards developing autonomous AI Agents that conduct independent AI research.