
I naively assumed (an uninformed guess) that the non-determinism (multiple results possible, even with temperature=0 and a fixed seed) stems from floating point rounding errors propagating through the calculations. How wrong am I?


You may be interested in https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldm... .

> The non-determinism at temperature zero, we guess, is caused by floating point errors during forward propagation. Possibly the “not knowing what to do” leads to maximum uncertainty, so that logits for multiple completions are maximally close and hence these errors (which, despite a lack of documentation, GPT insiders inform us are a known, but rare, phenomenon) are more reliably produced.
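For intuition, here is an illustrative Python sketch with made-up logits (not anything measured from GPT): when two completions' logits are nearly tied, a perturbation on the order of rounding error is enough to flip the greedy pick.

    # Two candidate tokens whose logits are nearly tied (made-up numbers).
    # The two runs differ by about one float32 ulp at this magnitude --
    # enough to flip the argmax, i.e. the token chosen at temperature zero.
    run_1 = [10.000001, 10.000000]   # token 0 has the larger logit
    run_2 = [10.000000, 10.000001]   # token 1 has the larger logit

    argmax = lambda xs: max(range(len(xs)), key=xs.__getitem__)
    print(argmax(run_1), argmax(run_2))   # 0 then 1 -> different completions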


Also uninformed, but I can't see how that would be true: floating point rounding errors are entirely deterministic.
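To this point, a quick sanity check (plain Python, single-threaded, same machine and interpreter assumed) that a fixed sequence of operations reproduces its rounding error exactly:

    import random

    # Fixed seed and a fixed, sequential order of additions: the rounding
    # error is reproduced bit-for-bit on every run of this script
    # (single-threaded CPU, same interpreter/platform assumed).
    random.seed(0)
    xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
    total = 0.0
    for x in xs:
        total += x
    print(total.hex())   # identical output across repeated runs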


Not if your scheduler causes accumulation in a different order.


Are you talking about a DAG of FP calculations, where parallel steps might finish in a different order across executions? That's getting outside my area of knowledge, but I'd believe it's possible.


Well, a very simple example: if you run a parallel reduce using atomics, the result will depend on which workers acquire the accumulator first.
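You can see the underlying issue without GPUs or atomics: float addition isn't associative, so a reduction whose grouping depends on which worker gets there first can legitimately produce different bits. A minimal Python sketch with made-up values:

    # Non-associativity in one line:
    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False

    # Same multiset of values, two different accumulation orders
    # (standing in for whichever workers hit the accumulator first):
    vals = [1e16, 1.0, -1e16, 1.0] * 1000
    print(sum(vals))           # one total
    print(sum(sorted(vals)))   # a different total from the same inputs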


They're gonna round the same each time you're running it on the same hardware.


But they're not: the requests are scheduled on some infrastructure in the cloud, so the code version might be slightly different, the compiler (settings) might differ, and the actual hardware might differ.


With a fixed seed there will be the same floating point rounding errors.

A fixed seed is enough for determinism; you don't need to set temperature=0. Setting temperature=0 also means that you aren't sampling, which means you're doing greedy one-step probability maximization, and the text can end up strange for that reason.
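A rough sketch of that distinction (illustrative Python, not any particular provider's API): temperature=0 collapses decoding to a one-step argmax, while a fixed seed keeps sampling but makes the sampled path repeatable.

    import math, random

    def pick_token(logits, temperature, rng=None):
        # temperature == 0: greedy, one-step argmax -- no randomness involved.
        if temperature == 0:
            return max(range(len(logits)), key=logits.__getitem__)
        # temperature > 0: sample from the softmax; a seeded rng makes the
        # sampled sequence reproducible from run to run.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        weights = [math.exp(s - m) for s in scaled]
        return rng.choices(range(len(logits)), weights=weights)[0]

    logits = [2.0, 1.9, 0.5]                  # made-up values
    print(pick_token(logits, 0))              # always token 0
    rng = random.Random(42)                   # fixed seed
    print(pick_token(logits, 0.8, rng))       # same pick every run with this seed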



