Hacker News

Why wouldn’t they provide a hosted version? Seems like a no-brainer… they have the money, the hardware, the bandwidth, and the people to build support for it. They could design the experience and gather more learning data about usage in the initial stages, while putting a dent in ChatGPT’s commercial prospects, all while still letting others host and use it elsewhere. I don’t get it. Maybe it was just the fastest option?


Probably the researchers at Meta are only interested in research, and productionizing this would be up to other teams.


But Yann LeCun seems to think the safety problems of eventual AGI will be solved somehow.

Nobody is saying this model is AGI, obviously.

But this would be an entry point into researching one small sliver of the alignment problem. If you follow my thinking, it’s odd that he professes confidence that AI safety is a non-issue, yet from this he seems to want no part in understanding it.

I realize their research interest may just be the optimization / mathy research… that’s their prerogative but it’s odd imho.


It’s not that odd, and I think you’re overestimating the importance of user-submitted data for alignment research, particularly since it would mean more liability for them in trying to be responsible for the outputs. Really, though, this way they get a bunch of free work from volunteers in the open source/ML communities.


Yes, that sounds like a reasonable explanation, thanks.



