OpenAI has specifically said LLMs aren’t a path to AGI, though they think LLMs have utility in understanding how society can and should interact with a potential AGI, especially from a policy perspective.
Your other examples are giant companies with many areas of focus that can trivially pay lip service to fundamental research without spending any particular effort. Take your link:
“Benioff outlined four waves of enterprise AI, the first two of which are currently real, available, and shipping:
Predictive
Generative
Autonomous and agents
Artificial general intelligence”
That’s a long-term mission statement, not actual effort toward AGI. So if you’re backing down from actual work on AGI to “trying to get to AGI,” stretched to include such aspirational statements, then sure, I’m also working on AGI and immortality.
I exclude things like increasing processing power/infrastructure, as slow AGI is still AGI even if it’s not useful. Yes, AGI needs energy; no, building energy infrastructure doesn’t qualify as actually working on AGI. You’re also going to need money, but making money isn’t inherently progress.
IMO, the fundamental requirements for AGI are, at minimum: a system which operates continuously, improves in operation, and can set goals for itself. If you know the work you’re doing isn’t going to result in that, then working towards AGI implies abandoning that approach and trying something new.
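To make that concrete, here’s a deliberately minimal toy sketch in Python of what those three properties look like as a loop. Every name here is hypothetical; it illustrates the shape of the requirement, not a design:

    # Toy sketch only: illustrates the three minimum properties above
    # (continuous operation, improvement during operation, self-set goals).
    # Every name is hypothetical; nothing here is a real AGI design.
    import random

    class ToyAgent:
        def __init__(self):
            self.skill = 0.0           # "improves in operation"
            self.goals = ["explore"]   # "can set goals for itself"

        def pick_goal(self):
            # The agent, not an operator, chooses what to pursue next.
            return random.choice(self.goals)

        def act(self, goal):
            # Acting yields feedback to learn from (randomness as a stand-in).
            return random.random()

        def learn(self, feedback):
            # Learning happens during operation, not in a separate training run.
            self.skill += 0.01 * feedback
            if self.skill > len(self.goals):
                self.goals.append(f"goal_{len(self.goals)}")  # invents a new goal

    agent = ToyAgent()
    while True:  # "operates continuously": no fixed prompt/response boundary
        agent.learn(agent.act(agent.pick_goal()))

Note the contrast with an LLM, which only runs between a prompt and a response and whose weights don’t change while it operates.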
Basically, researching new algorithms or types of computation could qualify, but iterative improvement on well-studied methods doesn’t. So some research into biological neurons/brains qualifies, but optimizing A* doesn’t, even if it’s useful for what you’re working on. There’s a huge number of spin-offs from AI research that are really useful and worth developing, but also inherently limited.
I’m somewhat torn as to the minimum threshold for progress. Tossing a billion dollars’ worth of computational power at genetic algorithms wouldn’t produce AGI, but there are theoretical levels of processing power where such an approach could actually work, even if we’re nowhere close to building such systems. It’s the kind of moonshot with a 99.99…% chance of failure, but maybe…
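For reference, the brute-force approach I mean is just the standard evolve loop. A toy version (Python; the fitness function is a hypothetical stand-in) shows why raw evaluations are the bottleneck:

    # Toy genetic algorithm: the cost per generation is population size times
    # fitness evaluations, which is why raw compute is the only real scaling
    # lever. The fitness function is a hypothetical stand-in; an actual
    # moonshot would have to evaluate entire candidate agents.
    import random

    def fitness(genome):
        return -sum((g - 0.5) ** 2 for g in genome)  # peak at all-0.5 genomes

    def evolve(pop_size=100, genome_len=10, generations=500):
        pop = [[random.random() for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]          # selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(genome_len)
                child = a[:cut] + b[cut:]            # crossover
                i = random.randrange(genome_len)
                child[i] += random.gauss(0, 0.1)     # mutation
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    best = evolve()

Swap the toy fitness function for “evaluate a whole candidate agent” and the compute bill explodes, which is the entire point of the moonshot framing.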
So, it may seem like moving the goalposts, but I think the initial work on LLMs could qualify while subsequent refinement doesn’t.