Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond
Alt Title: Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI
Elan Moritz
Philadelphia, Pa.
https://orcid.org/0000-0002-0167-4336
ARTICLE INFO

Keywords: Leibniz, Cantor, Turing, Gödel, Cohen, IJ Good, Humans, Brain, Mind, Cognition, Understanding, Knowledge, AI, AGI, Artificial Intelligence, Superintelligence, UltraIntelligent Machines, ChatGPT, GPT-4, Llama-3, LLM, Large Language Models, Groq, Cohere, Life, Emergence, MetaSystems Transitions, Prompts, Metaprompts, Forcing Methods, Universe, Methodology.

2024-04-21

ABSTRACT
This paper continues my earlier Chat with OpenAI's ChatGPT through a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM)-based explorations of selected areas or concepts. The approach is to craft initial guiding prompts and then follow up with user prompts shaped by the LLM's responses. The goals include improving understanding of LLM capabilities and limitations, culminating in optimized prompts. The specific subjects explored are a) diagonalization techniques as practiced by Cantor, Turing, and Gödel, together with later advances such as the forcing techniques introduced by Paul Cohen and subsequent investigators; b) knowledge hierarchies and mapping exercises; and c) discussions of I. J. Good's Speculations Concerning the First Ultraintelligent Machine, AGI, and superintelligence. Results suggest variability among major models such as ChatGPT-4, Llama-3, Cohere, Sonnet, and Opus. Results also point to a strong dependence on the user's preexisting knowledge and skills. The paper should be viewed as 'raw data' rather than as a polished, authoritative reference.
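To make the FLEX workflow concrete, the Python sketch below illustrates the loop described above: a guiding prompt seeds the conversation, and each user follow-up is issued against the accumulated history. This is a minimal sketch, not the paper's actual protocol; query_llm is a hypothetical stub, and the prompts shown are illustrative placeholders standing in for whichever model API (ChatGPT-4, Llama-3, Cohere, etc.) a real session would use.

    # A minimal sketch of one FLEX (Focused LLM Experiment) session loop,
    # assuming a chat-style message API. `query_llm` is a hypothetical stub;
    # in practice it would wrap a concrete client (ChatGPT-4, Llama-3, Cohere, ...).

    def query_llm(messages):
        """Stub standing in for a real LLM call; echoes the last user prompt."""
        return f"[model reply to: {messages[-1]['content']}]"

    def flex_session(guiding_prompt, follow_ups):
        """Seed the conversation with a guiding prompt, then issue follow-up
        user prompts, keeping the full history so each reply sees the context."""
        messages = [{"role": "system", "content": guiding_prompt}]
        transcript = []
        for user_prompt in follow_ups:
            messages.append({"role": "user", "content": user_prompt})
            reply = query_llm(messages)
            messages.append({"role": "assistant", "content": reply})
            transcript.append((user_prompt, reply))
        return transcript

    # Illustrative session on the paper's first subject area. In a live FLEX
    # run the user composes each follow-up after reading the prior reply.
    for prompt, reply in flex_session(
        guiding_prompt="Act as a set theorist; explain diagonalization and forcing carefully.",
        follow_ups=[
            "Summarize Cantor's diagonal argument.",
            "How did Turing and Gödel adapt diagonalization?",
            "Outline Paul Cohen's forcing technique.",
        ],
    ):
        print(prompt, "->", reply)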
©2024 Elan Moritz. This is an open access article distributed under the non-commercial, non-derivative use terms of the CC BY-NC-ND 4.0 license (https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode).