LLMs aren't intelligent; they're 'just' very good at repeating information back in a different format. But in the way that a student becomes a teacher and a trainee becomes a boss, LLMs will eventually (through your asking questions and providing them with information) be capable of the same.
In information-theory terms they don't add anything to the system; they can only reduce information.
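The formal version of that intuition is the data processing inequality: if the output depends on the source only through what the model was given, no amount of processing can increase the information about the source. As a loose analogy (treating source → prompt/training data → output as a Markov chain):

$$X \to Y \to Z \implies I(X;Z) \le I(X;Y),$$

where $I(\cdot\,;\cdot)$ is mutual information. To be clear, mapping an LLM onto this chain is an analogy, not a formal result about any particular model.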
They're fantastic tools, but they really don't deserve the term 'intelligence'. If you keep feeding them the output of other LLMs, they progressively get less and less useful, an effect researchers have dubbed 'model collapse'. Like a digital version of 'Idiocracy'.
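A minimal sketch of that feedback loop, using nothing but a toy Gaussian 'model' refit on its own samples each generation (everything here is illustrative, not any real training pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of recursive-training degradation ("model collapse"):
# each generation, fit a Gaussian to samples drawn from the previous fit.
# Sampling noise compounds, so the estimate drifts and its spread decays,
# losing information about the original distribution over generations.
mu, sigma = 0.0, 1.0   # the "real data" distribution
n = 100                # samples per generation

for gen in range(10):
    samples = rng.normal(mu, sigma, size=n)  # "train" on the last model's output
    mu, sigma = samples.mean(), samples.std()
    print(f"gen {gen}: mu={mu:+.3f}, sigma={sigma:.3f}")
```

Run it for enough generations and the fitted spread shrinks toward zero: each round keeps less of the tails of the original data, which is the digital 'Idiocracy' effect in miniature.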
In other words, they're closer to HAL than to Skynet or Data.

