Sunday, 15 February 2026

The church might leave the village, but not yet.


Comment to Carsten Herrmann-Pillath:

  The limits of knowledge in AI: Why AI can never achieve human intelligence


“Die Kirche im Dorf lassen” (“keep the church in the village”) is a German proverb praising the status quo.

The proverb reminds us that we deliberate (only) about ‘non-humanoid AI’, as CHP emphasises when discussing general AI (GAI). Still, it remains open what kind of body a non-humanoid intelligent entity might have. Consider, for example, an AI that ‘handles’ the operation of a power grid peppered with sensors tracking the condition of its installations. What kind of body would that AI sense?

[Image, AI-described: a stone wall with water seeping through it; the seepage has caused a bright orange discolouration on both the stones and the cobblestones below.]


The evolution of biological intelligence, including its human form, is closely tied to sensing and modelling functions applied to both the external environment and the body [*]. These increasingly complex ‘intelligence systems’ evolved from ‘bacterial’ intelligence onward. The biological ‘intelligence systems’ (body, senses, nervous system, and brain) exhibit an increasingly complex interplay between processing received sensory inputs and modelling expected sensory inputs. Combining the processing and modelling of inputs leads to (more or less) intelligent practice, that is, an action of the body upon the environment or upon the body itself (scratching, or picking off lice).

Compared to this ‘wealth of sensing and modelling’, an LLM is poor. However, the LLM accesses a vast variety of descriptions of (human) ‘intelligent practice’. These descriptions (for example, books or other texts) are ‘external representations’ of those practices [1]. These external representations vary, and they are limited and biased. Nevertheless, they represent different realisations of the same or similar intelligent practices of humans, including the impact of ‘the bodily’.

The LLM derives (more or less) typical patterns [2] from these external representations of ‘intelligent practice’. Subsequently, it presents the user with an artefact, i.e., a replication of ‘the typical description’, though with some variation. The human user can apply this replication to local, punctual practices (or not). The user might experience success (or not) and tune their practices, including their use of the LLM. ‘Unhappily’, the LLM does not benefit (directly) from the user’s practice; that is, the outcome is not fed back to tune the LLM process towards deriving (better) ‘typical patterns’.

Hence, when discussing LLMs, the proverb reminds us that we deliberate (only) about using text/language-based patterns [3] that describe the status quo of intelligent human practices. However, the related use cases are wide-ranging, given the ample use of text/language in human cultures. Thus, metaphorically speaking, ‘wir lassen die Kirche im Dorf’ (we keep the church in the village), and a noticeable evolution of the LLM is not happening.

Summarising: the strength of the (current) LLM process, seen from a human perspective, is that an LLM uses a very large corpus of inputs, i.e. ‘general social knowledge’ [**], which is now within reach of (individual) humans (for good or ill). Of similar interest is that the functional principle underlying the LLM is (just) ‘replication with variation’, which is the essence of (any) evolution. Yet the weakness of the (current) LLM process is that its built-in evolution option is underdeveloped: a ‘replication leading to a successful human practice’ is not fed back into the LLM process (directly). Such feedback might occur (at the level of individual users) through the design of a prompt that references previous ‘successful replications’. If that way of working takes hold, a process might start that moves the LLM-human couple towards a GAI (including human bodily practice), and ‘die Kirche könnte das Dorf verlassen’ (the church might leave the village).
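The user-side feedback channel described above (the human selects successful replications and references them in later prompts) can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: llm() is a deterministic dummy, not a real model API, and user_finds_successful() stands for the human trying the artefact in a local, bodily practice.

```python
from itertools import cycle

# Toy stand-in for an LLM call: it replicates the prompt's pattern with
# some variation. A real system would call a model API here.
_variants = cycle(["variant-a", "variant-b", "variant-c"])

def llm(prompt: str) -> str:
    return f"{prompt} -> {next(_variants)}"

def user_finds_successful(artefact: str) -> bool:
    # Stand-in for the human half of the couple: trying the artefact
    # in a local practice and judging the outcome.
    return artefact.endswith("variant-a")

def llm_human_loop(task: str, rounds: int = 6) -> list[str]:
    successful: list[str] = []
    for _ in range(rounds):
        # The only feedback channel available: the next prompt
        # references previously successful replications.
        context = "; ".join(successful[-3:])
        prompt = f"{task} | previously successful: [{context}]" if context else task
        artefact = llm(prompt)
        if user_finds_successful(artefact):
            successful.append(artefact)
    return successful
```

The selection pressure lives entirely on the human side of the loop; the model itself is never updated, which is exactly the underdevelopment noted above.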

[*] See: Bennett, Max. 2023. A Brief History of Intelligence: Evolution, AI and the Five Breakthroughs That Made Our Brains. New York: Mariner Books; Barrett, Lisa Feldman. 2018. How Emotions Are Made: The Secret Life of the Brain. Mariner Books.

[**] See the discussion of ‘general social knowledge’: https://www.rosalux.de/news/id/50774/unser-wissen-in-einem-topf; https://technosphere.blog/2026/01/31/marxs-technosphere-and-the-ai-powered-transition-to-postcapitalism-re-reading-the-fragment-on-machines/

[1] The expression “external representation” is taken from J. Renn’s work [Jürgen Renn (2020) The Evolution of Knowledge. Princeton University Press, p. 242], meaning: “External representation: Any aspect of the material culture or environment of a society that may serve as encoding of knowledge (landmarks, tools, artifacts, models, rituals, sound, gestures, language, music, images, signs, writing, symbol systems etc.). External representations can be used to share, store, transmit, appropriate, or control knowledge, but also to transform it. The handling of external representations is guided by cognitive structures. The use of external representations may also be constrained by semiotic rules characteristic of their material properties and their employment in a given social or cultural context, such as orthographic rules or stylistic conventions in the case of writing.”

[2] For many users, the functioning of an LLM is a ‘black box’ (as are many modern technologies): https://link.springer.com/content/pdf/bbm:978-3-031-97445-8/1

[3] Including image processing, because it is often mediated via textual descriptions.