
I don't think he's given up on it.

How many decades did it take for neural nets to take off?

The reason we're even talking about LeCun today is because he was early in seeing the promise of neural nets and stuck with it through the whole AI winter when most people thought it was a waste of time.





But neural nets were always popular; they just went through phases of hype depending on the hardware capacity of the day. The main limitation back then was the computational power needed to scale them up. AI winters came when other techniques became available that required less compute. Once GPGPU became available, all of that earlier work became immediately viable.

No similar limitations exist today for JEPA, to my knowledge.


Depends on how far back you go. There was the whole 1969 Minsky and Papert Perceptrons flap, where they argued that ANNs (i.e. single-layer perceptrons) were useless because they can't learn XOR (and no one at the time knew how to train multi-layer ANNs), which stifled ANN research and funding for a while. It would then take almost 20 years until the 1986 PDP handbook published Rumelhart, Hinton, and Williams's rediscovery of backpropagation as a way to train multi-layer ANNs (LeCun arrived at it independently around the same time), thereby making them practical.
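
To make the XOR point concrete, here's a minimal sketch (pure NumPy, purely illustrative, not anyone's original code): a single linear-threshold unit can't separate XOR, but a tiny one-hidden-layer network trained with backpropagation learns it easily.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # 2 inputs -> 4 hidden units -> 1 output (sizes are arbitrary choices)
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))
    lr = 1.0

    for _ in range(10000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: squared-error loss, chain rule through the sigmoids
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # gradient-descent updates
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]

The point is just that the hidden layer is what makes XOR representable, and backprop is what makes the hidden layer trainable; that second part is what the field lacked between 1969 and 1986.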

The JEPA parallel is just that it's not a popular/mainstream approach (at least in terms of well-funded research), but it may eventually win out over LLMs in the long term. Modern GPUs provide plenty of power for almost any artificial-brain type of approach, but they are of course expensive at scale, so lack of funding can be a barrier in and of itself.



