> They're almost certainly not storing a static list of numbers.
Probably. But I wouldn't bet on it. I once borrowed a car that would glitch if you pressed the cruise control buttons too fast. Normally the + and - buttons increase and decrease the speed by 1 km/h. But if you press them too fast, it sometimes eats the entry and starts skipping one position. E.g. it would increase from 105 to 107, and decrease from 107 to 105. It persisted until cruise control was turned entirely off and on again. Eh? Making that bug must have taken more effort than doing it correctly. I guess it must be populating linked lists of possible speeds, and then screwing up the links when clicking too fast? (That was a Jeep Renegade.)
A lot of the digital cruise controls that I've used in cars increment by 1 for each press, but increment by jumps (3-5 IME) if you hold it down. I wonder if that bug is a state machine problem. Pressing fast enough puts it into the "holding" mode, but because you're not actually holding, it also doesn't register that you've "stopped" holding.
>However the Microsoft WaitForSingleObject and WaitForMultipleObjects did not have an efficient implementation, which is why they had to add WaitOnAddress
WaitForSingle/MultipleObjects wait for kernel objects, similar to poll. WaitOnAddress is lightweight synchronization, equivalent to futex. Windows doesn't have something like WaitForMultipleAddresses. futex_waitv was used by Wine patches because they implement NT kernel objects in userspace, and there were some semantic differences that made it hard to represent them as eventfds.
PS: But using futexes to emulate kernel objects breaks security boundaries of a process. That's why it was never merged into upstream Wine, and NTSYNC was developed.
A benaphore is a kernel synchronization object behind an atomic counter for the uncontended path. But the kernel object always needs to be there, initialized and destroyed at the appropriate times.
A futex doesn't need any kernel initialization, and from the kernel's perspective it doesn't exist at all when there are no waiters.
(see also CRITICAL_SECTION, which originally always had a kernel object created with it, but was later changed to create them on demand, falling back to a keyed event on failure. SRWLock only uses keyed events. A keyed event only needs one object per process, and otherwise consumes kernel resources only if there are any waiters.)
> Many people won’t worry about crashed threads, as they often will crash the whole program. However, you can catch the signal a crash generates and keep the overall process from terminating.
That doesn't help if the entire process dies for any reason and you want to clean up the locks. The solution to that is called "robust" locks. You can register a list of held futexes with the kernel using sys_set_robust_list, and when the thread dies the kernel will, for each entry, set a specific bit and wake a waiter if there is one.
> You can register list of held futexes with the kernel using sys_set_robust_list, and when the thread dies kernel for each entry will set a specific bit and wake waiter if there's one.
My biggest worry with that kind of thing is that the lock was guarding something which is now in an inconsistent state.
Without thoroughly understanding how/why the particular thread crashed, there's no guarantee that the data is in any sort of valid or recoverable state. In that case, crashing the whole app is absolutely a better thing to do.
It's really cool that the capabilities exist to do cleanup/recovery after a single thread crashed. But I think (off-the-cuff guess) that 95% of engineers won't know how to properly utilize robust locks with robust data structures, 4% won't have the time to engineer (including documentation) that kind of solution, and the last 1% are really really well-paid (or, should be) and would find better ways to prevent the crash from happening in the first place.
The concern about state consistency applies to all error conditions, not just those that occur while holding a mutex lock.
It doesn’t matter if multiple threads are running or just one - the process could be in the middle of updating a memory-mapped file, or communicating with another process via shared memory, or a thousand other things.
Ensuring consistency is excruciatingly hard. If it truly matters, the best we have is databases.
One option is to use a non-blocking algorithm that by definition maintains consistency at every instruction boundary. In fact you won't even need robust mutexes this way.
But of course consistency is only maintained if the algorithm is implemented correctly; if you are trying to robustly protect against a process randomly crashing, you might not want to rely on the correctness of that process (or that process randomly writing to shared memory on its way to a segfault).
That also assumes that your data only needs consistency within a machine word (i.e., data types where the CPU can support atomic instructions). If you're just trying to ensure that 64 bits of data are consistent, that's fine, but that usually wasn't so hard anyway.
You need atomic operations at the synchronization points, but you can build arbitrarily complex data structures on top of them. For example, a lock-free tree will stay consistent even if a process dies in the middle of an update (you might need to garbage-collect orphaned unpublished nodes).
If you're using futexes across processes (or any other cross-process state), one generic approach is for a watchdog process to keep a SOCK_STREAM or SOCK_SEQPACKET Unix domain socket open for each process so it can reliably detect when a process crashes and clean up its per-process state.
Neither CRITICAL_SECTION nor SRWLock enters the kernel when uncontended. (SRWLock is based on keyed events, CRITICAL_SECTION nowadays creates kernel object on-demand but falls back to keyed event on failure)
What _is_ the big difference between CRITICAL_SECTION and a futex, really? I always assumed that futexes were meant to be pretty similar to CRITICAL_SECTION (mostly-userspace locks that didn't have fatal spinning issues on contention).
A critical_section is a mutex, a futex is a general synchronization primitive (a critical_section might be implemented on a more general primitive of course, I'm not a Windows expert).
CRITICAL_SECTION was IIRC built on top of Windows manual/auto-reset events, which are a different primitive, useful for more than just a mutex but without the userspace coordination aspect (the 32-bit value) of futexes.
Well, technically both WaitOnAddress and SRWLOCK use the same "wait/wake by thread ID" primitive. WaitOnAddress uses a hash table to store the thread ID to wake for an address, whereas SRWLOCK can just store that in the SRWLOCK itself (well, in an object on the waiter's stack, pointed to by the SRWLOCK).
>If that manufacturer key is known, it only takes two samples from an authenticator to determine the sequence key.
Not if a seed of appropriate length is used. Though I don't know how common that is; back in 2008 the authors noted that "We would like to mention that none of the real-world KeeLoq systems we analyzed used any seed" (https://www.iacr.org/archive/crypto2008/51570204/51570204.pd..., section 4.3)
Modern monitors have more than enough technology in them already that adding more is most certainly not my cup of tea. Made worse, ironically, by modern rendering techniques...
Though my understanding is that it helps hide shakier framerates in console land. Which sounds like it could be a thing...
Your vision has motion blur. Staring at your screen at a fixed distance with no movement is highly unrealistic and lets you see crisp 4k images no matter the content. The result is a cartoonish experience, because it mimics nothing in real life.
Now you do have the normal problem that the designers of the game/movie can't know for sure what part of the image you are focusing on (my pet peeve with 3D movies) since that affects where and how you would perceive the blur.
There's also the problem of overuse, of using it to mask other issues, or of it being just an artistic choice.
But it makes total sense to invest in a high refresh display with quick pixel transitions to reduce blur, and then selectively add motion blur back artificially.
Turning it off is akin to cranking up the brightness to 400% because otherwise you can't make out details in the dark parts of the game... that's the point.
But if you prefer it off then go ahead, games are meant to be enjoyed!
Your eyes do not have built-in motion blur. If they are accurately tracking a moving object, it will not be seen as blurry. Artificially adding motion blur breaks this.
Sure they do, the moving object in focus will not have motion blur but the surroundings will. Motion blur is not indiscriminately adding blur everywhere.
> Motion blur is not indiscriminately adding blur everywhere.
Motion blur in games is inaccurate and exaggerated and isn’t close to presenting any kind of “realism.”
My surroundings might have blur, but I don’t move my vision in the same way a 3d camera is controlled in game, so in the “same” circumstances I do not see the blur you do when moving a camera in 3d space in a game. My eyes jump from point to point, meaning the image I see is clear and blur free. When I’m tracking a single point, that point remains perfectly clear whilst sure, outside of that the surroundings blur.
However, motion blur in games literally cannot replicate either of these realities; it just adds a smear on top of a smear on top of a smear.
So given both are unrealistic, I’d appreciate the one that’s far closer to how I actually see which is the one without yet another layer of blur. Modern displays add blur, modern rendering techniques add more, I don't need EVEN more added on top with in-game blur on top of that.
> Take something like Rocket League for example. Definitely doesn't have velocity buffers.
How did you reach this conclusion? Rocket League looks like a game that definitely has velocity buffers to me. (Many fast-moving scenarios + motion blur.)