
> The gimmick of the LLM is that it outputs text sequentially, as if it is talking to us. That's what makes them feel "alive" and "intelligent" to us.

Yes, I got that that was the original claim. I still disagree with it. What makes them feel alive and intelligent is that they produce human-like language output, not that the process by which they construct that output is sequential. Non-autoregressive LLMs of equal output quality would (and do) appear just as alive and intelligent as autoregressive ones. Likewise, an autoregressive LLM behind a non-streaming request/response interface, where the token-by-token sequencing of the response is never exposed to the user, still seems just as intelligent as one whose output is streamed.
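
To make the streaming vs. non-streaming distinction concrete, here's a rough sketch using the OpenAI Python SDK's stream flag (the client calls are real, but the model name is a placeholder and any chat-completion API with a streaming option would illustrate the same point):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    messages = [{"role": "user", "content": "Explain autoregressive decoding in one sentence."}]

    # Streaming: the user watches tokens arrive one at a time.
    stream = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()

    # Non-streaming: the same autoregressive model, but the user only
    # ever sees the finished response, with no token-by-token sequencing.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    print(resp.choices[0].message.content)

In both cases the model generated the text token by token; only the presentation differs, which is exactly the point.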


