
Moving the mouse is the worst-case scenario for a deeply multicore processor because (1) it is a single-threaded, latency-sensitive task, and (2) there are 23 other cores that can grab a lock and prevent the one core that matters from doing its job in a timely manner.


With regard to latency sensitivity, with so many cores available, a single core could be dedicated to processing mouse input. Of course, if there's a held lock in the way, that's still a problem.


For a modern system like Wayland or DWM, the display compositor gets rectangles of pixels from the applications and copies them into one big rectangle that it sends to the rasterizer. Just before it does that, it draws on a mouse cursor (roughly sketched below). It also has to communicate with the applications about what image to draw, so you have to deal with multiple threads no matter what.

(Unless you go back to the late 8-bit era, where the cursor might be just a hardware sprite that can be moved around by writing a few bytes.)
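Roughly, the order of operations looks like this. This is an illustrative C sketch only, not actual Wayland or DWM code; every name in it is made up, and clipping and synchronization are left out:

    /* Illustrative compositor frame, per the description above: composite the
     * client rectangles first, then draw the cursor on top just before
     * handing the finished frame to the rasterizer. No clipping for brevity. */
    #include <stddef.h>
    #include <stdint.h>

    struct rect { int x, y, w, h; const uint32_t *pixels; };

    static void blit(uint32_t *fb, int fb_w, const struct rect *r)
    {
        for (int row = 0; row < r->h; row++)
            for (int col = 0; col < r->w; col++)
                fb[(r->y + row) * fb_w + (r->x + col)] = r->pixels[row * r->w + col];
    }

    void compose_frame(uint32_t *fb, int fb_w,
                       const struct rect *windows, size_t n_windows,
                       const struct rect *cursor)
    {
        for (size_t i = 0; i < n_windows; i++)   /* client surfaces, back to front */
            blit(fb, fb_w, &windows[i]);
        blit(fb, fb_w, cursor);                  /* cursor goes on last, always on top */
        /* present(fb); -- hypothetical handoff to the rasterizer/scanout */
    }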


I believe the mouse cursor is still often a hardware sprite today.


Almost 100% guaranteed, but it is still possible for the OS to be delayed in sending it the coordinates. Of course, that means the OS is badly written.


By these standards, the only well-written operating systems I've ever seen are BeOS and QNX/Photon. Probably iOS (at least, older versions of it) deserves to be on the list, despite mouse use being atypical.

Which... yeah, that actually might be true.


iOS is the odd one out, because user input has first priority. IIRC, it cannot even do network I/O if you are holding your finger on the screen; all interrupts except those from the touchscreen are disabled for the duration.


I can say for sure this is true on sway (Wayland), because while messing around to see if the proprietary NVIDIA driver finally works yet (when will I learn... for anyone curious, NVIDIA is still NVIDIA), one of the things I had to do was manually disable hardware cursors.


sure, no problem, negotiate with the application, but also act as a responsible shepherd of the desktop: if some timeout is exceeded, draw the "application is busy" cursor.

this already happens, just after some laughably long chain of unprocessed input events


Or if it needs code that's paged out, or all the cores are thermally limited, or the GPU hung. CPU time usually isn't the limiting factor.


So the 23 cores are there only to starve the 24th? Sounds almost human.


Where are the mouse coprocessor equipped computers?


Hardware sprites (and presumably interrupts) made the Amiga mouse pointer move fluidly and responsively even when the CPU was busy. No complex graphics pipelines or laggy LCD screens to add further latency back then either.


Indeed, it's no deep magic: "all" you have to do is update the sprite's position from within the very same interrupt that reads the mouse counters. Oh, and make sure that nothing involved in that process can be paged out, ever. (The Amiga made sure of that by not supporting virtual memory!)
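In rough C, the shape of that handler is something like the following. The mouse-counter and sprite-position accessors are hypothetical placeholders for memory-mapped hardware registers, not the real Amiga custom-chip layout:

    /* Sketch of an interrupt-driven hardware cursor, as described above.
     * READ_MOUSE_COUNTERS() and SET_SPRITE_POS() are stand-ins for
     * memory-mapped register accesses; they are not the actual Amiga registers. */
    #include <stdint.h>

    extern void READ_MOUSE_COUNTERS(uint8_t *x, uint8_t *y);  /* placeholder */
    extern void SET_SPRITE_POS(int16_t x, int16_t y);         /* placeholder */

    static uint8_t last_x, last_y;
    static int16_t cursor_x, cursor_y;

    void mouse_interrupt_handler(void)
    {
        uint8_t x, y;
        READ_MOUSE_COUNTERS(&x, &y);

        /* The counters wrap around, so only the deltas are meaningful. */
        cursor_x += (int8_t)(x - last_x);
        cursor_y += (int8_t)(y - last_y);
        last_x = x;
        last_y = y;

        /* Move the sprite from inside the same interrupt: no scheduler,
         * no paging, nothing else gets a chance to delay the update. */
        SET_SPRITE_POS(cursor_x, cursor_y);
    }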


I do miss aiming a particle accelerator at my brain as an HCI method.


It was just a form of 2D acceleration.

We even see it now, sometimes called a "hardware cursor" in various games' settings, although the pipeline is much longer.

It's just that old, small hardware had little to no memory protection and very tight integration, so stuff like that could be done directly instead of going through many layers of abstraction.


Not sure if you missed the point or I'm missing yours, but the post you replied to points out that CRT displays sound insane in a certain light...


You can do it with very little. Prime example: https://www.folklore.org/StoryView.py?project=Macintosh&stor... drawing a mouse pointer on an almost stock Apple II, flicker-free because drawing is synced to the vertical blank interval, even though the hardware gives you no way to see when that happens. (That part I don't quite understand; I can see them detecting 'end of screen' on an almost blank screen and programming the timer to generate a periodic interrupt at about the screen redraw frequency, but wouldn't the VBL and the 6522 interrupts drift away from each other over time?)


Mouse cursors are mostly handled in hardware. GPUs composite the cursor during scanout, so all the OS has to do is calculate the new coordinates of the cursor and tell the GPU about them.
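On Linux this is exposed fairly directly through DRM/KMS. A minimal sketch using libdrm's legacy cursor calls, assuming you have already obtained a DRM file descriptor, a CRTC id, and a buffer-object handle holding a 64x64 ARGB cursor image (all of that setup is omitted):

    /* Sketch only: build against libdrm (xf86drmMode.h). Error handling and
     * the DRM/GEM setup that produces drm_fd, crtc_id and cursor_bo are omitted. */
    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Upload the cursor image to the cursor plane once... */
    int set_cursor(int drm_fd, uint32_t crtc_id, uint32_t cursor_bo)
    {
        return drmModeSetCursor(drm_fd, crtc_id, cursor_bo, 64, 64);
    }

    /* ...after that, moving it is just telling the GPU the new coordinates.
     * Nothing is re-rendered; the cursor is composited during scanout. */
    int move_cursor(int drm_fd, uint32_t crtc_id, int x, int y)
    {
        return drmModeMoveCursor(drm_fd, crtc_id, x, y);
    }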


> Mouse cursors are mostly handled in hardware. GPUs composite the cursor during scanout, so all the OS has to do is calculate the new coordinates of the cursor and tell the GPU about them.

Is this true on modern Linux DEs (e.g. on KDE Plasma)?

Is it also true on Windows and macOS?


I don't know about other platforms, but on Windows it was already true more than 20 years ago, before the desktop was even composited. Some fullscreen games also relied on this functionality, and sometimes you had an option in the game settings to switch between a hardware-accelerated cursor and a manually drawn one (because the former could be buggy with some video drivers).


This is fairly basic functionality. Windows and wlroots-based compositors have it at least. I'm reasonably certain that other major compositors have it too.


Did you check?

I tell my QA people all the time: If you come to me with "I think" or "I believe", it sounds like you've got some reading to do.


I didn't write "I think" or "I believe".

If you want to know my epistemic status: I know about Windows based on observable surface behavior and bugs related to accelerated cursors, but I haven't looked at the source. I know about wlroots because I saw the pull requests related to it and a flag to disable it in sway. The last statement regarding other platforms was an educated guess based on graphics card history: accelerated overlays are an ancient feature present in a lot of hardware, not some newfangled niche feature.

And I'm not one of your QA people.


You can actually test this pretty easily by monitoring GPU usage. When nothing on the screen changes, GPU usage should be pretty much zero, even at the lowest P-state, because the compositor isn't rendering new frames unless something changed. You can move the cursor around and GPU usage will stay at the same level, but do something that actually requires new frames to be rendered, like dragging a selection rectangle on the desktop, and you'll see GPU usage go up a bit. (And if you do the same in applications using GPU-accelerated frameworks, you'll often see the GPU rev up a few P-states; for example, scrolling in the Steam library list is enough to cause a transition to P2 for me, and likewise many websites kick it up to P2 when you move the mouse or scroll.)

Another way to see that the cursor is not using the regular rendering pipeline is to move a window around: you'll invariably see the window lag behind the cursor by 1-3 frames. It also becomes noticeable when the graphics stack breaks down but the cursor still works.
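If you want a number to watch while doing this, something like the following works on amdgpu, which exposes gpu_busy_percent in sysfs; the card0 path is an assumption about your setup, and other drivers need other tools (nvidia-smi, intel_gpu_top):

    /* Poll the GPU busy percentage once a second while you move the cursor.
     * amdgpu-specific; the sysfs path may differ on your system. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            FILE *f = fopen("/sys/class/drm/card0/device/gpu_busy_percent", "r");
            if (!f) {
                perror("gpu_busy_percent");   /* not amdgpu, or a different cardN */
                return 1;
            }
            int busy = 0;
            if (fscanf(f, "%d", &busy) == 1)
                printf("GPU busy: %d%%\n", busy);
            fclose(f);
            sleep(1);   /* move the cursor around and watch the number stay flat */
        }
    }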


Just an FYI-- I tried to get ChatGPT to generate a question as condescending as what you just responded to. It now generally refuses to write anything impolite.

The most I could get was to ask it to take an avuncular tone, at which point it did ask you to "spill the beans, kiddo" about these compositors. :)


Optical mice themselves can have quite a complicated processor onboard. Luckily the image processing isn't done on the CPU/GPU.

I wonder if anyone ever sold a "software optical mouse".


> I wonder if anyone ever sold a "software optical mouse".

You could if you really want to...

https://8051enthusiast.github.io/2020/04/14/003-Stream_Video...



