Another point is the term "phenomenal consciousness". I always thought that the adjective phenomenal served to make the phrase "phenomenal consciousness" unambiguous as opposed to just consciousness. I expect "phenomenal consciousness" to be about qualia; in materialism they are physically instantiated objects in the brain. But consciousness as a cognitive process need not involve qualia. It can be, in theory, implemented as software on hardware. Please disambiguate what you mean.
Phenomenal consciousness as opposed to access consciousness or creature consciousness. It is used to collectively refer to phenomenally conscious states, colloquially called experiences. This avoids the ambiguity in qualia, whether qualia are states (experiences) or properties.
Then it is an ontological question whether an abstract computational process is physically implemented in a way that creates an entity called an experience. But it is not a cognitive category, because the same processes, up to isomorphism, can be experiences or not. Right?
On my account, isomorphic processes are ontologically equivalent.
No way. A pen and pencil instantiation is ontologically equivalent to a computer program?
An instantiation is not a program. Pen and pencil are not equivalent to a silicon chip computer, but there is just one program.
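The program/instantiation distinction can be sketched in code: one abstract function (the "program") realized by two concretely different processes (the "instantiations"). The function and both realizations below are illustrative choices, not anything from the discussion itself.

```python
# A minimal sketch: two different concrete realizations of one and the
# same abstract program (here, the successor function on integers).

def successor_table(n: int) -> int:
    # Instantiation 1: a lookup-table realization, analogous to working
    # through the mapping with pen and pencil.
    table = {i: i + 1 for i in range(n + 1)}
    return table[n]

def successor_arith(n: int) -> int:
    # Instantiation 2: direct arithmetic, analogous to a silicon adder.
    return n + 1

# At the level of input-output behavior the two processes are isomorphic;
# whether they are equivalent in any further (e.g. phenomenal) respect is
# exactly the question under dispute.
assert all(successor_table(n) == successor_arith(n) for n in range(100))
```

The point of the sketch is only that "one program" names the shared abstract structure, while each function body is a distinct physical-level story about how that structure gets realized.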
I mean that the chatbot and the human brain may be running the same algorithm at a certain level of abstraction; cognitively it is then the same event, though not physically. One may be an experience while the other is not. Therefore, talking about phenomenal consciousness is about physics rather than cognition.
Say a human has knowledge gained in the proper way described in your article, and is then confined to a room with no further access to reality. Do they lose the status of having knowledge? Probably not. Then their knowledge/understanding is encoded in their neuronal network, which has no contact with reality. Isn't this like the chatbots? Their understanding may well encode what you wish to delineate as knowledge.
The argument depends on the external world only diachronically: the cognitive architecture is shaped by evolutionary pressures, and its contents are shaped by developmental learning. In a sensory-deprivation setting, perception is driven top-down by imagination and internal monologue.
But why can't such a state be encoded in a language model? Your definition relies on retrospective diagnosis rather than on inherent properties.
It can be; the question is whether it is.
In that case what is the point of the article? You seem to have made claims, not just introduced definitions and questions.
While I make the claim confidently for clarity, it is just a conjecture, a guess, something that occurred to me. The conjecture is that standard autoregressive transformer LLM training does not provide conditions conducive to the emergence of phenomenal consciousness or of knowledge in the narrow sense.
But I don't understand this narrow notion of knowledge. It's defined by a historical path rather than properties. I don't like such a concept. Or perhaps I have missed the true definition of knowledge versus understanding.