It doesn't really affect the other frontier labs too much because OpenAI and Anthropic rely on multiple data vendors for their models so that no outside company is aware of how they train their proprietary models. Forbes reported the other day that OpenAI had been winding down their usage of Scale data: https://www.forbes.com/sites/richardnieva/2025/06/12/scale-a...
Yeah, but they know how to get quality human-labeled data at scale better than anyone, and they know what Anthropic and OpenAI wanted: what made it quality.
Where are you getting this information? What basis do you have for making this claim? OpenAI, despite its public drama, is still a massive brand, and if this were exposed, it would tank the company's reputation. I think making baseless claims like this is dangerous for HN.
I think Gell-Mann amnesia happens here too, where you can see how wrong HN comments are on a topic you know deeply, but then forget about that when reading the comments on another topic.
I doubt they'll restrict it to their own models. The amount of business intel they'd get on the coding performance of competing models would be invaluable.
The "video to learning app" feature is a cool concept (see it in AI Studio). I just passed in two separate Stanford lectures to see if it could come up with an interesting interactive app. The apps it generated weren't too useful, but I can see with more focus and development, it'd be a game changer for education.