The point was that LLMs are not well set up to find new insights unless they are already somehow contained in the knowledge they have been trained on.
This can mean "contained indirectly," which still makes them useful for scientific purposes.
The fact that the author may have underestimated how much knowledge about the article's topic is already contained within an LLM does not invalidate this point.
> The point was that LLMs are not well set up to find new insights unless they are already somehow contained in the knowledge they have been trained on.
The author is, to use his phrase, "deeply uninformed" on this point.
LLMs generalize very well and they have successfully pushed the frontier of at least one open problem in every scientific field I've bothered to look up.
Yes, but only after this post appeared in Science and on HN. As has been mentioned above, one of the links it offers is this very post.
So, AI will look online and synthesize the latest relevant blog posts. In Gemini's case, it will use trends to figure out what you are probably asking. And since this post caught traction, suddenly the long tail of related links is gaining traction as well.
But had Derek asked the question before writing the article, his links would not have matched. And his point remains relevant: it isn't the AI that figured out that something had changed.
OT, I really enjoy his posts. As AI takes over, will we even read blog posts [enough for authors like him to keep writing], or just get the AI cliff notes - until there is no one writing novel stuff?