Yes, if you eschew determinism for the sake of raw performance then the result will be non-deterministic. But you don't have to do this, nor is it inherently untenable to solve these problems in a deterministic way.
Sure, it may require some performance overhead and increase development time, but it's no different from writing deterministic code elsewhere. It's disingenuous to hand-wave away the solution by appealing to some unspecified cost or overhead we're unwilling to entertain. None of the parent posts ever mention performance tradeoffs.
In particular, there is no indication that the problem being discussed couldn't be solved deterministically in a comparable amount of time. You're making my point: GPUs are deterministic; software may decide not to be.
FWIW, I took “GPUs are deterministic” to mean they are deterministic in all of their intended use cases. This is not strictly true, since the whole point of using them is massive parallelism, which brings along non-determinism, for reasons that others have noted. Of course it’s possible to choose to forgo that, but what is the point of a GPU in that case?
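For a concrete illustration of what I mean (a rough CUDA sketch of my own, not taken from any particular codebase): every instruction below executes exactly as the hardware specifies, but nothing pins down the order in which threads reach the atomicAdd, and float addition isn't associative, so the rounded total can differ in the low bits from one run to the next.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Sum n floats into a single accumulator with atomics. The arrival order at
// atomicAdd depends on scheduling, and float addition is not associative,
// so the rounded result can vary between otherwise identical runs.
__global__ void sum_atomic(const float* x, int n, float* out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        atomicAdd(out, x[i]);
    }
}

int main() {
    const int n = 1 << 20;
    std::vector<float> h(n);
    for (int i = 0; i < n; ++i) h[i] = 1.0f / float(i + 1);  // varied magnitudes

    float *d_x, *d_out;
    cudaMalloc((void**)&d_x, n * sizeof(float));
    cudaMalloc((void**)&d_out, sizeof(float));
    cudaMemcpy(d_x, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    for (int run = 0; run < 3; ++run) {
        cudaMemset(d_out, 0, sizeof(float));
        sum_atomic<<<(n + 255) / 256, 256>>>(d_x, n, d_out);
        float result;
        cudaMemcpy(&result, d_out, sizeof(float), cudaMemcpyDeviceToHost);
        printf("run %d: %.8f\n", run, result);  // may disagree in the low bits
    }
    cudaFree(d_x);
    cudaFree(d_out);
    return 0;
}
```

Nothing in there is the hardware being flaky; the program simply leaves the accumulation order unspecified, and that's where the run-to-run variation comes from.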
This is a false dichotomy. You can have massive parallelism and determinism.
You can trade determinism away for convenience, but that doesn't make things easier: now you have to deal with the non-determinism.
But to suggest that massive parallelism somehow implies non-determinism strikes me as disingenuous.
We have mutexes and lock-free ring buffers and stable sorts and all sorts of bells and whistles to make parallelism safe elsewhere. We also already have tools to solve this for GPUs.
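To make that concrete, here's a rough sketch (mine, plain CUDA, no particular library assumed) of a sum done deterministically: each block reduces its slice with a fixed-shape shared-memory tree and writes a partial to its own slot, then the partials are combined in index order. Still massively parallel, and bit-identical on every run.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

constexpr int BLOCK = 256;

// Each block reduces its slice with a fixed-shape tree: the same pairs are
// added in the same order every run, regardless of warp scheduling.
__global__ void block_partial_sums(const float* x, int n, float* partials) {
    __shared__ float s[BLOCK];
    int i = blockIdx.x * BLOCK + threadIdx.x;
    s[threadIdx.x] = (i < n) ? x[i] : 0.0f;
    __syncthreads();
    for (int stride = BLOCK / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride) s[threadIdx.x] += s[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0) partials[blockIdx.x] = s[0];
}

int main() {
    const int n = 1 << 20;
    const int blocks = (n + BLOCK - 1) / BLOCK;
    std::vector<float> h(n);
    for (int i = 0; i < n; ++i) h[i] = 1.0f / float(i + 1);

    float *d_x, *d_partials;
    cudaMalloc((void**)&d_x, n * sizeof(float));
    cudaMalloc((void**)&d_partials, blocks * sizeof(float));
    cudaMemcpy(d_x, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    block_partial_sums<<<blocks, BLOCK>>>(d_x, n, d_partials);

    // Combine the per-block partials in ascending index order: a fixed order,
    // so the final value is bit-identical across runs.
    std::vector<float> p(blocks);
    cudaMemcpy(p.data(), d_partials, blocks * sizeof(float), cudaMemcpyDeviceToHost);
    float total = 0.0f;
    for (int b = 0; b < blocks; ++b) total += p[b];
    printf("deterministic sum: %.8f\n", total);

    cudaFree(d_x);
    cudaFree(d_partials);
    return 0;
}
```

The fixed combine order costs a little compared to letting atomics race for a single accumulator, but that's exactly the kind of tradeoff being discussed, and the parallelism is still there.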