To be fair, the PCI Express bus on the current Pi was kind of an afterthought, only meant to work with a very limited set of devices, so I'm pretty sure nobody at Broadcom, and few in the design stages at Raspberry Pi, had ever tested more complex devices with it.
It works fine in _most_ cases for simple devices (USB controllers, SATA, NVMe, WiFi, and the like), but really falls apart for more advanced devices (hardware RAID, GPU, TPU, etc.).
And ARM systems often have to deal with cache coherence issues (which aren't a problem on x86), meaning some drivers (notably, AMD's GPU drivers still) need changes for the different architecture (some patches exist, but they're not perfect yet, and not in mainline Linux).
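To give a rough idea of what that means for driver code: here's a minimal sketch (hypothetical function and buffer, not taken from amdgpu or any real driver) of the streaming-DMA pattern the Linux DMA API expects. On x86 the sync calls are essentially no-ops because PCIe DMA is cache-coherent; on a non-coherent platform they turn into real cache clean/invalidate operations, which is why a driver that assumes coherence can appear fine on x86 and corrupt data elsewhere.

```c
/* Hypothetical sketch of streaming DMA with explicit sync points.
 * buf is assumed to be a kmalloc'd buffer owned by the caller. */
#include <linux/dma-mapping.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/string.h>

static int send_twice(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma_handle;

	/* Map for device access; on a non-coherent platform this cleans
	 * the CPU caches covering buf so the device sees current data. */
	dma_handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_handle))
		return -ENOMEM;

	/* ... hand dma_handle to the device, wait for the transfer ... */

	/* Give the buffer back to the CPU before modifying it. */
	dma_sync_single_for_cpu(dev, dma_handle, len, DMA_TO_DEVICE);
	memset(buf, 0, len);		/* CPU writes new contents */

	/* Hand it back to the device: caches get cleaned again here. */
	dma_sync_single_for_device(dev, dma_handle, len, DMA_TO_DEVICE);

	/* ... second transfer ... */

	dma_unmap_single(dev, dma_handle, len, DMA_TO_DEVICE);
	return 0;
}
```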
> To be fair, the PCI Express bus on the current Pi was kind of an afterthought, only meant to work with a very limited set of devices, so I'm pretty sure nobody at Broadcom, and few in the design stages at Raspberry Pi, had ever tested more complex devices with it.
The first Raspberry Pi was sold over a decade ago, and I 'member people actually using them as embedded boards for whatever they were working on eight years ago (especially once the GPU became powerful enough to run digital signage). Sorry, but the fact that a company like Broadcom can't be arsed to develop a standards-compliant PCIe interface is a joke, and with this kind of attitude the ARM world complains that nobody buys their chips?!
(Edit: Oh, just noticed whom I replied to - the person who wrote the article I referred to. HN is a small world indeed, and I guess we share at least some of our frustrations)
> meaning some drivers (notably, AMD's GPU drivers still) need changes for the different architecture (some patches exist, but they're not perfect yet, and not in mainline Linux).
Drivers... oh don't get me started on that front. Everyone in the x86 space seems to have learned over the last two decades that it is a good idea to submit drivers to the Linux kernel early. Intel and AMD both do that for CPUs and also for a lot of their other stuff. In contrast, the entire embedded world still locks away drivers behind years-old kernel forks, ridiculous NDAs, absurdly expensive dev boards, completely whack u-boot forks and even more whack BSPs.
On x86, it's two well-organized behemoths designing the chips (including the base designs for most motherboards).
On ARM, some SoC designers slap together IP blocks without fully understanding the implications. Not all SoC designers, of course, but there are a lot of them out there, and not all hold themselves to standards as high as Intel's and AMD's.
IIRC, the PCIe implementation on the SolidRun Honeycomb is quite good and can be used with GPUs.