Assuming the existing corpus was already consistent with what experts hold true (as far as I know, they used every available book and common-knowledge resource), why would any amount of additional corrective statements make a difference to a retrained model? It's not as though our written knowledge was wrong all along and we simply tolerated it until mid-2022.
I don't really understand how it works, how its iterations differ, or what the roadmap is. But what I've managed to learn (or rather, feel) about LLMs isn't very consistent with such linear predictions.
Well, maybe it will use downvotes as anti-prompts? The existing sources must have had votes too, though probably only a subset. Or maybe the current iteration didn't rank by votes at all, and the next one will really shine? Guess we'll see soon.
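To make the "anti-prompt" idea concrete, here's a toy sketch of what weighting training examples by community votes might look like in PyTorch. This is pure speculation on my part, not anything from a real training pipeline; `vote_weighted_loss` and the tanh squashing are inventions for illustration only:

```python
import torch
import torch.nn.functional as F

def vote_weighted_loss(logits, targets, votes):
    """Hypothetical vote-weighted language-model loss.

    votes > 0 upweight an example (learn it); votes < 0 flip it
    into a penalty, pushing probability mass away from it -- a
    crude "anti-prompt". Illustrative only.

    logits:  (batch, seq_len, vocab)
    targets: (batch, seq_len) token ids
    votes:   (batch,) raw vote counts, can be negative
    """
    # Per-token cross-entropy, kept unreduced so we can
    # average per example before weighting.
    per_token = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none"
    )                                   # (batch, seq_len)
    per_example = per_token.mean(dim=1)  # (batch,)

    # Squash raw counts into [-1, 1] so one viral post
    # can't dominate the batch.
    weights = torch.tanh(votes / 10.0)

    # Minimizing a negatively weighted loss maximizes the loss
    # on downvoted examples; real systems would need to bound
    # this, since unbounded "unlearning" is unstable.
    return (weights * per_example).mean()
```

Whether anything like this is actually used, I have no idea; RLHF-style reward models are the better-documented way such feedback gets folded in.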