
Training LLMs is not the only thing people are trying. They dominate the public attention right now but there are people everywhere trying all kinds of approaches. Here's one from IBM: https://research.ibm.com/topics/neuro-symbolic-ai

First sentence: "We see Neuro-symbolic AI as a pathway to achieve artificial general intelligence"



I agree some people are doing novel work, but that’s a long way from “Everyone”.


Everyone is trying to get to AGI, and yes mostly through LLMs for now.

You said you don't believe LLMs are capable of ever getting there, so I offered a link showing people are trying other things as well. My point was never "Everyone is doing novel, non-LLM work towards AGI".

But everyone is in fact trying to get to AGI:

Google: https://www.fastcompany.com/91233846/noam-shazeer-back-at-go... https://deepmind.google/research/publications/66938/

Microsoft: https://www.microsoft.com/en-us/bing/do-more-with-ai/artific...

Meta: https://www.theverge.com/2024/1/18/24042354/mark-zuckerberg-...

Salesforce: https://www.forbes.com/sites/johnkoetsier/2023/09/12/salesfo...

Not to mention obvious suspects (OpenAI, Anthropic etc). Just because you think it won't work doesn't mean they're not trying. Everyone is trying to get to AGI.


OpenAI has specifically said LLMs aren’t a path to AGI, though they think LLMs have utility in understanding how society can and should interact with a potential AGI, especially from a policy perspective.

Your other examples are giant companies with many areas of focus that can trivially pay lip service to fundamental research without spending any particular effort. Take your link:

“Benioff outlined four waves of enterprise AI, the first two of which are currently real, available, and shipping:

  Predictive
  Generative
  Autonomous and agents
  Artificial general intelligence”
That’s a long-term mission statement, not actual effort toward AGI. So if you’re widening “trying to get to AGI” from actual work to include such aspirational statements, then sure, I’m also working on AGI and immortality.


Please, before we discuss this further, and I would like to, provide some idea of what would qualify as an "actual effort into AGI" for you.


I exclude things like increasing processing power/infrastructure, as slow AGI is still AGI even if it’s not useful. Yes, AGI needs energy; no, building energy infrastructure doesn’t qualify as actually working on AGI. You’re also going to need money, but making money isn’t inherently progress.

IMO, AGI fundamentally requires, at minimum: a system that operates continuously, improves in operation, and can set goals for itself. If you know the work you’re doing isn’t going to result in that, then working towards AGI implies abandoning that approach and trying something new.

Basically, researching new algorithms or types of computation could qualify, but iterative improvement on well-studied methods doesn’t. So some research into biological neurons/brains qualifies, but optimizing A* doesn’t, even if it’s useful for what you’re working on. There’s a huge number of spin-offs from AI research that are really useful and worth developing, but also inherently limited.
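(For anyone who hasn’t seen it, the A* being contrasted here is a textbook graph-search algorithm, roughly the opposite of open-ended research. A minimal sketch on a 4-connected grid; the grid and names are illustrative, not anyone’s production code:)

```python
import heapq

# Minimal A* on a 4-connected grid -- the kind of well-studied,
# endlessly optimizable method the comment contrasts with new research.
def astar(grid, start, goal):
    """grid: list of strings, '#' = wall. Returns shortest path length or None."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic (admissible on this grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries: (f = g + h, g, position)
    best_g = {start: 0}
    while open_heap:
        f, g, pos = heapq.heappop(open_heap)
        if pos == goal:
            return g
        if g > best_g.get(pos, float("inf")):
            continue  # stale heap entry; a shorter route was found already
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = ["....",
        ".##.",
        "...."]
print(astar(grid, (0, 0), (2, 3)))  # shortest path length: 5
```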

I’m somewhat torn as to the minimum threshold for progress. Tossing $1 billion worth of computational power at genetic algorithms wouldn’t produce AGI, but there are theoretical levels of processing power where such an approach could actually work, even if we’re nowhere close to building such systems. It’s the kind of moonshot that 99.99…% wouldn’t work, but maybe…
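(To make concrete what “tossing compute at genetic algorithms” means: the core loop is just selection, crossover, and mutation against a fitness score. A toy sketch, with all parameters illustrative; the hard part for AGI isn’t the loop, it’s that there’s no fixed fitness function for general intelligence:)

```python
import random

random.seed(0)  # deterministic run for illustration

# Toy genetic algorithm: evolve a bit string toward all ones.
# "Fitness" is just the count of 1s -- a stand-in for any scalar
# objective. Open-ended intelligence has no such fixed objective,
# which is why raw compute alone doesn't get you there.
TARGET_LEN = 20
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01

def fitness(genome):
    return sum(genome)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    point = random.randrange(1, len(a))  # single-point crossover
    return a[:point] + b[point:]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            break
        parents = pop[: POP_SIZE // 2]  # elitist truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```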

So, it may seem like moving the goalposts, but I think the initial work on LLMs could qualify, while subsequent refinement doesn’t.

Edited with some minor clarification.



