A) how often people want factual data from LLMs - the more often they do, the more often they run into the gibberish generator. B) the amount of effort it takes to correct LLM output - some people get output that is 80% ready, spend some time rewriting it until it is correct, and then report on forums that the LLM practically did most of the work. Other people in the same situation will say they got gibberish and had to spend time rewriting it, so LLMs are crap at that task. So we are not only seeing LLM bias, but human reporting bias on top of it.