I share your skepticism of LLMs’ output, but I don’t think it’s fair to say they know nothing about semantics. It’s still an open question to what degree LLMs encode a coherent world model. Also, you can just ask ChatGPT about objects and their relationships, and it gets the answer right far more often than you’d expect by chance, so it has some understanding of the world. Not good enough for me to trust it, though.