While I make the claim confidently for clarity, it's just a conjecture, a guess, something that occurred to me. The conjecture is that, for standard autoregressive transformer LLM training, the conditions are not conducive to the emergence of phenomenal consciousness or of knowledge in the narrow sense.
But I don't understand this narrow notion of knowledge. It's defined by a historical path rather than by intrinsic properties, and I don't like such a concept. Or perhaps I have missed the intended distinction between knowledge and understanding.
The argument in this post is analogous to an evolutionary debunking argument: the training incentives do not sufficiently privilege truth-tracking for it to emerge as a core aspect of the resulting architecture.
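For concreteness, the incentive at issue is the standard next-token cross-entropy objective of autoregressive pretraining (a minimal sketch; the notation is mine, not from the post):

$$\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})$$

The loss rewards predicting which token comes next given the preceding context, not whether the resulting text is true, which is what gives the debunking analogy its force.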
But why can't such a state be encoded in a language model? Your definition relies on retrospective diagnosis rather than on inherent properties.
It can be; the question is whether it is.
In that case, what is the point of the article? You seem to have made claims, not just introduced definitions and questions.
Fair enough. This is expanded on in future articles.