
The main argument I can think of for separating their memory is that, of course, GDDR can be optimized for bandwidth and regular RAM can be optimized for latency, or somewhere in between.

But, memory latency continues to make poor progress compared to CPU speeds. So, since we’re already going to need a complicated system of caches on the CPU side, maybe it is not such a big deal if CPU memory acts more like GDDR.
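The bandwidth-vs-latency tradeoff above can be made concrete with a back-of-envelope model: total fetch time is roughly a fixed access latency plus size divided by bandwidth. All numbers below are illustrative assumptions, not datasheet values; they just show that a latency-tuned part wins on small accesses and a bandwidth-tuned part wins on streaming reads.

```python
def transfer_time_ns(size_bytes, latency_ns, bandwidth_gbps):
    """Time to fetch `size_bytes` given access latency (ns) and bandwidth (GB/s)."""
    return latency_ns + size_bytes / bandwidth_gbps  # 1 GB/s == 1 byte/ns

# Assumed, order-of-magnitude figures only:
DDR  = dict(latency_ns=80,  bandwidth_gbps=50)    # latency-optimized
GDDR = dict(latency_ns=200, bandwidth_gbps=500)   # bandwidth-optimized

# A single 64-byte cache line: latency dominates, the DDR-style part wins.
print(transfer_time_ns(64, **DDR))    # ~81 ns
print(transfer_time_ns(64, **GDDR))   # ~200 ns

# A 1 MB streaming read: bandwidth dominates, the GDDR-style part wins.
print(transfer_time_ns(1 << 20, **DDR))   # ~21,000 ns
print(transfer_time_ns(1 << 20, **GDDR))  # ~2,300 ns
```

The crossover point depends entirely on the assumed numbers, but the shape of the tradeoff is why a deep cache hierarchy can paper over GDDR-like latency for CPU workloads.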



It doesn't explain why GPUs integrated into a CPU (like AMD's APUs), which share the same physical memory, can't use all of it.

If they could use all of system memory, you could cheaply run neural networks with 64 GB of RAM without buying a professional GPU. No wonder manufacturers don't want you to be able to do that.


Both discrete and integrated GPUs can use as much system RAM as you allow them in your BIOS or operating system.

The problem is that system RAM is slow compared to the onboard VRAM on discrete GPUs. Size isn't everything; bandwidth and latency also have to be considered.
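To put rough numbers on "size isn't everything" for the neural-network case: generating a token typically requires reading every weight once, so a memory-bandwidth-bound model can't produce tokens faster than bandwidth divided by model size. The bandwidth figures below are assumptions for illustration, not measurements of any particular hardware.

```python
def tokens_per_second(model_size_gb, bandwidth_gbps):
    """Upper bound on tokens/sec when inference is memory-bandwidth-bound."""
    return bandwidth_gbps / model_size_gb

MODEL_GB = 64  # hypothetical model that fills 64 GB of memory

print(tokens_per_second(MODEL_GB, 1000))  # discrete-GPU VRAM, ~1 TB/s assumed
print(tokens_per_second(MODEL_GB, 100))   # shared system RAM, ~100 GB/s assumed
print(tokens_per_second(MODEL_GB, 32))    # streaming over PCIe 4.0 x16, ~32 GB/s
```

So even if an integrated GPU could address all 64 GB, the same model would run an order of magnitude slower than on a card whose VRAM actually fits it.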



