Say a human has knowledge gained in the proper way described in your article, and is then confined to a room with no further access to reality. Do they lose the status of having knowledge? Probably not. But then their knowledge and understanding are encoded in a neuronal network that has no contact with reality. Isn't this like the chatbots? Their understanding may well encode what you wish to delineate as knowledge.
The argument depends on the external world only diachronically: the cognitive architecture is shaped by evolutionary pressures, and its contents by developmental learning. Even in a sensory-deprivation setting, perception continues to be driven top-down by imagination and internal monologue.
But why can't such a state be encoded in a language model? Your definition relies on retrospective diagnosis rather than on inherent properties.
It can be; the question is whether it is.
In that case what is the point of the article? You seem to have made claims, not just introduced definitions and questions.
While I state the claim confidently for clarity, it is only a conjecture, something that occurred to me. The conjecture is that standard autoregressive transformer LLM training does not provide conditions conducive to the emergence of phenomenal consciousness, or of knowledge in the narrow sense.
But I don't understand this narrow notion of knowledge. It is defined by a historical path rather than by intrinsic properties, and I don't like such a concept. Or perhaps I have missed the real definition of knowledge as opposed to understanding.