Yes, CPU inference. For llama.cpp on Apple M1/M2, GPU inference (via Metal) is about 5x faster than CPU for text generation and about the same speed for prompt processing. Not insignificant, but not huge either.
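If you want to sanity-check that gap yourself, something like the following works (a rough sketch using the llama-cpp-python bindings rather than the llama.cpp CLI; the model path is a placeholder, and it assumes the package was built with Metal support):

    import time
    from llama_cpp import Llama

    MODEL = "./models/model.Q4_K_M.gguf"  # placeholder path, use your own GGUF file
    PROMPT = "Explain the difference between prompt processing and text generation."

    def generation_speed(n_gpu_layers: int) -> float:
        # n_gpu_layers=0 keeps everything on the CPU; -1 offloads all layers to the GPU (Metal).
        llm = Llama(model_path=MODEL, n_gpu_layers=n_gpu_layers, verbose=False)
        start = time.perf_counter()
        out = llm(PROMPT, max_tokens=128)
        elapsed = time.perf_counter() - start
        return out["usage"]["completion_tokens"] / elapsed  # tokens per second

    cpu_tps = generation_speed(0)
    gpu_tps = generation_speed(-1)
    print(f"CPU: {cpu_tps:.1f} tok/s, Metal: {gpu_tps:.1f} tok/s ({gpu_tps / cpu_tps:.1f}x)")

The exact ratio will depend on the model size, quantization, and which M-series chip you're on.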

You generally can't hook up large storage drives via NVMe. Those are all tiny flash storage devices. I'm not sure why you brought it up.



> You generally can't hook up large storage drives via NVMe. Those are all tiny flash storage devices.

What’s your definition of large?

2TB and 4TB NVMe drives are not tiny. You can even buy 8TB NVMe drives, though those are more expensive and IMHO not worth it for this use case.

2TB NVMe drives are $60-$100 right now.

You can attach several of them via Thunderbolt/USB4 enclosures that provide 2500-3000 MB/s.
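For rough context, here's the back-of-envelope math (prices and throughput are the figures quoted above; the model size is just an illustrative assumption):

    # Quick arithmetic using the numbers in this thread; model size is an assumption.
    drive_tb = 2
    drive_cost_usd = 80          # midpoint of the $60-$100 range above
    enclosure_mb_per_s = 2750    # midpoint of the 2500-3000 MB/s range above

    model_gb = 40                # e.g. a large model quantized to ~4 bits (assumption)
    load_seconds = model_gb * 1000 / enclosure_mb_per_s

    print(f"Cost per TB: ${drive_cost_usd / drive_tb:.0f}")
    print(f"Time to read a {model_gb} GB model once: {load_seconds:.0f} s")

That works out to roughly $40/TB and about 15 seconds to stream a 40 GB file over one of those enclosures.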



