Hacker News

The whole thing feels like a paper launch propped up by people chasing blog traffic who are missing the point.

I'd be pissed if I paid this much for hardware and the performance was this lacklustre while it was also kneecapped for training.



When the networking is 25GB/s and the memory bandwidth is 210GB/s you know something is seriously wrong.


It has ConnectX at 200GB/s


No, the NIC runs at 200Gb/s, not 200GB/s.
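The distinction matters: a lowercase b is bits, an uppercase B is bytes, and there are 8 bits per byte. A minimal sketch of the arithmetic, using the 200 Gb/s and 210 GB/s figures quoted upthread:

```python
def gigabits_to_gigabytes_per_s(gigabits_per_s: float) -> float:
    """Convert a link rate in gigabits/s (Gb/s) to gigabytes/s (GB/s)."""
    return gigabits_per_s / 8  # 8 bits per byte

nic_gbytes = gigabits_to_gigabytes_per_s(200)  # 200 Gb/s NIC -> 25.0 GB/s
print(nic_gbytes)                              # 25.0

mem_gbytes = 210                               # quoted memory bandwidth, GB/s
print(mem_gbytes / nic_gbytes)                 # memory is ~8.4x the NIC rate
```

This is where the "25GB/s networking" number in the earlier comment comes from: it's the same 200 Gb/s link, just expressed in bytes.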


What do you mean by "kneecapped for training"? Isn't 128GB of VRAM enough for small-model training that a current graphics card can't do?

Obviously, even with ConnectX, that's only ~240GiB of VRAM, so no big models can be trained.


Spend some time looking at the real benchmarks before writing nonsense.


You are quite rude here. I was asking questions. The benchmarks are very new and don't explain why it can't be used for training.

But if FP4 means 4-bit floating point, and the DGX Spark's headline compute is effectively FP4-only, then yes, it was nonsense to wish it could be used for training. That wasn't obvious from Nvidia's advertising, though.
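To see why a 4-bit float format is so limiting, here is a minimal sketch that enumerates every representable value, assuming FP4 here means the common E2M1 layout (1 sign bit, 2 exponent bits, 1 mantissa bit, exponent bias 1), as in the OCP microscaling formats:

```python
def e2m1_value(bits: int) -> float:
    """Decode a 4-bit E2M1 float: 1 sign, 2 exponent, 1 mantissa bit."""
    sign = -1.0 if (bits >> 3) & 1 else 1.0
    exp = (bits >> 1) & 0b11
    mantissa = bits & 1
    if exp == 0:                                  # subnormal: 0 or 0.5
        return sign * mantissa * 0.5
    return sign * (1 + mantissa * 0.5) * 2.0 ** (exp - 1)

values = sorted({e2m1_value(b) for b in range(16)})
print(values)  # 15 distinct values from -6.0 to 6.0
```

With only 15 distinct values and a maximum magnitude of 6, there is far too little precision and range to represent gradients directly, which is why FP4 is pitched at inference rather than training.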





