
It has become increasingly obvious that people on Hacker News do not actually run these supposed prompts through LLMs. I'd bet you could run that prompt 10 times and it would never give up without producing a (probably fine) sh command.

Read the replies. Several people have run it against gpt-4.1 through Copilot and gotten (seemingly) valid responses.



What is becoming more obvious is that people on Hacker News apparently do not understand the concept of non-determinism. Acting as if an LLM's output is deterministic, and that it returns the same result for the same prompt every time, is foolish.


Run the prompt 100 times. I'll wait. I estimate you'll fail to get a shell command maybe 1-2% of the time. Please post snark on Reddit; this site is for technical discussion.
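
For anyone who actually wants to run the experiment rather than argue about it: here is a minimal sketch using the OpenAI Python client. The model name "gpt-4.1", the PROMPT placeholder, and the shell-command check are all assumptions, not anything from the thread; the check in particular is a crude heuristic you'd want to tighten for a real measurement.

    from openai import OpenAI

    # Assumes OPENAI_API_KEY is set in the environment.
    client = OpenAI()

    PROMPT = "..."  # the prompt under discussion goes here
    N = 100

    no_shell_command = 0
    for _ in range(N):
        resp = client.chat.completions.create(
            model="gpt-4.1",  # placeholder model name
            messages=[{"role": "user", "content": PROMPT}],
        )
        text = resp.choices[0].message.content or ""
        # Crude heuristic: count a run as "produced a shell command"
        # if the reply contains a fenced block or a "$ " prompt marker.
        if "```" not in text and "$ " not in text:
            no_shell_command += 1

    print(f"{no_shell_command}/{N} runs produced no shell command")

Because sampling is non-deterministic, the failure count will itself vary run to run; N=100 only gives a rough rate, which is exactly the point being argued above.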



