Gödel, Kripke, Wolfe
Gödel, Escher, Bach is for Amateurs... and also Wittgenstein, Wolfe, and Wolfram wouldn't have made that allusion work
If I had a philosophy department office door back in the 1990s or 2000s, I'd use it to troll the in-group/out-group 'jokes' professors would post on their office doors. These were not utilitarian jokes the way The Far Side was repurposed by scientists. These were custom-made garbage. A logic professor with a love of Wittgenstein had some bizarre screed on her door comparing two poorly drawn cartoon dogs, juxtaposed like a Slylock Fox comic (https://www.slylockfox.com/09-30-2023-slylock-fox/). Her commentary was something Magritte-ish, like “This is a dog. Dog-ness is the apprehension of the experience of the game Dog. Just as game-ness itself is both a foundational and coherent semantic intention...” My cartoon would also juxtapose the two cartoon dogs, but my caption would read, “This is an in-group, out-group test.” Apparently, professors think it's funny to test each other. Must be a conditioned response. This particular professor was interested in semantic intention, and especially in the ways Kripke semantics responded to many problems Wittgenstein posed. So if you aren't familiar with Wittgenstein's philosophy (https://en.wikipedia.org/wiki/Tractatus_Logico-Philosophicus), you won't understand that this dog screed is actually a 'joke.'

What was extremely obscure twenty-five years ago is now cutting edge: is axiomatic, algorithmic intentionality (i.e., machine learning graduating to intelligence) possible? Is it even possible when we can't define words without some sort of coherent frame of reference? Machine learning sidesteps the problem by weighting definitions with tokens, using context to define meaning. But this is a pure coherence model: the only foundation is the predicate. If the meanings of words were gutted and replaced with entirely different meanings, the model would continue to function, even as the information the words previously mapped was destroyed.
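A toy sketch (my own illustration, not any production system) makes the "pure coherence" point concrete: a model that predicts the next word purely from co-occurrence keeps working, prediction for prediction, after every word is swapped for an arbitrary different one, because nothing grounds the tokens outside their relations to each other.

```python
# A minimal "coherence only" language model: next-word prediction from
# bigram counts. Nothing here knows what any word refers to.
from collections import Counter, defaultdict

corpus = "the dog chased the cat the cat chased the dog".split()

def bigram_model(tokens):
    model = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        model[a][b] += 1
    return model

def predict(model, word):
    # Most frequent successor of `word` in the training text.
    return model[word].most_common(1)[0][0]

# A bijection that guts every meaning: 'dog' no longer maps to dogs.
swap = {"the": "blue", "dog": "seven", "chased": "of", "cat": "sky"}
swapped = [swap[w] for w in corpus]

m1 = bigram_model(corpus)
m2 = bigram_model(swapped)

# The predictive structure survives the relabeling exactly: the model
# "continues to function" even though the words' referents are destroyed.
for w in m1:
    assert swap[predict(m1, w)] == predict(m2, swap[w])
```

The swapped model is, structurally, the same model; only an observer who cared what 'dog' once meant would notice anything was lost.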
Language is being axiomatized by commercial AI, even though it exists in a liminal, intersubjective space where, so long as it works (continues the effective 'game' of language we observe), we are to look past its faulty epistemic premise: context without foundation (https://philpapers.org/rec/HAAEAI).
By contrast, the great Gene Wolfe envisioned a fictional society (https://tvtropes.org/pmwiki/pmwiki.php/Literature/BookOfTheNewSun) whose foundational text was as sacred as the Bible, Constitution, Dictionary, and Encyclopedia combined (https://en.wikipedia.org/wiki/Ascian_language): foundation without context. All other words were banned; only the great tome could be recited, so even ordinary kitchen conversation had to quote scenes from the great Ascian codex of permitted thought. “Pass me the milk, please,” would be impossible to say unless it was actually written in that codex. Instead, you'd say something like, “The rulers in their benevolence provided for the people,” and the listener would have to know from context that you wanted milk. Smurfy! It was Newspeak taken to a whole other level (https://en.wikipedia.org/wiki/Newspeak). This is the opposite of our existing AI model: a rigidly designated text serving as the source of all metaphor.
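The Ascian scheme can also be sketched as code (my own riff on Wolfe, with invented codex lines): the set of legal utterances is fixed in advance, and all actual intent has to ride on unwritten convention.

```python
# Foundation without context: the only legal utterances are exact
# sentences from a fixed codex. (These sample sentences are invented
# for illustration; only the first appears in the essay above.)
CODEX = {
    "The rulers in their benevolence provided for the people.",
    "Those who do not work shall not eat.",
}

def speak(utterance):
    # Any sentence not in the codex simply cannot be said.
    if utterance not in CODEX:
        raise ValueError("unsanctioned speech")
    return utterance

# "Pass me the milk, please" is unspeakable; the nearest sanctioned line
# stands in, and the hearer must infer the milk from shared context.
heard = speak("The rulers in their benevolence provided for the people.")
```

The inversion is exact: the bigram sketch earlier has unlimited combinations and no fixed referents; the codex has fixed referents and no combinations.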
What we mean and what is understood is as old as that schoolyard argument over whether my color 'red' is your color 'red.' Throwing language—this liminal qubit, this game—into an automatic (machine), axiomatic (assumption-based), algorithmic (rules-based), closed system (one that references only itself) collides with the hard limit set by Kurt Gödel with his incompleteness theorems (https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems). If our intentions are bigger than our language, then our ability to think is greater than our tools' ability to map and contain our thoughts. Therefore the incorrigible ambitions of machine learning advocates are poorly conceived. Raymond Carver's literary examination of Love in What We Talk About When We Talk About Love (https://en.wikipedia.org/wiki/What_We_Talk_About_When_We_Talk_About_Love), G. E. Moore's examination of Good in Principia Ethica (https://en.wikipedia.org/wiki/Principia_Ethica), Pope Urban II's definition of God, which excluded the Muslim Allah, Potter Stewart's definition of pornography in Jacobellis v. Ohio (https://en.wikipedia.org/wiki/I_know_it_when_I_see_it), or simply what we mean today when we talk about Progress—these liminal concepts are impossible for machines to comprehend. Analytic philosophers would rather just say these are illusory ideas because they cannot be defined, but Gödel's work invalidated the premise behind Analytic Philosophy—the guiding principle behind machine learning and AI. The mechanized blurring of distinctions is not accidental, nor is the haste with which it is being implemented. Rather, the haste is to complete the swindle before the mark (us) gets wise to the scam.
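For reference, the limit being invoked, in my paraphrase of its standard (Rosser-strengthened) form, where Q is Robinson arithmetic:

```latex
% First incompleteness theorem (Gödel 1931; Rosser 1936): no consistent,
% effectively axiomatized theory containing basic arithmetic can decide
% every sentence of its own language.
\[
  T \text{ consistent, effectively axiomatized, } T \supseteq \mathsf{Q}
  \;\Longrightarrow\;
  \exists\, G_T :\; T \nvdash G_T \ \text{ and }\ T \nvdash \neg G_T.
\]
```

The theorem constrains formal systems that can encode arithmetic; whether it constrains machine intelligence as such is exactly the contested inference this essay is making.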
We can know when our own comprehension of a subject exceeds the ability of language to precisely convey that comprehension, but we cannot know whether that kind of condition exists in a self-taught computer A.I.
We should acknowledge this gap in our "knowing".
:-)