I'd counsel you to work with LLMs daily; if you do, you'll likely agree that we're nowhere close to LLMs that work properly and consistently outside of toy use cases, where examples can be scraped from the internet. If we can agree on that, we can agree that general intelligence is not the same thing as a sometimes seemingly random guess at the next word...