Why does Intel cripple ECC and I/O virtualization [0] in high-end consumer desktop systems? To force people to buy Xeons, even when the performance is otherwise equivalent or better for the use case. Why does Nvidia restrict virtualization? To force people to buy Quadro and Tesla cards.
> nVidia
FYI, a few years ago nVidia officially changed their name to "Nvidia"...
[0] The IOMMU is not only a virtualization feature; it's also an important security feature that protects the host from DMA attacks by malicious peripherals (e.g. FireWire/1394, ExpressCard, Thunderbolt, USB4). Fortunately Intel has stopped crippling the IOMMU (VT-d) on consumer parts since Skylake, but ECC is another story.
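For reference, it's easy to check whether either feature is actually active on a given Linux box. Here's a rough Python sketch (mine, not from any linked material) that reads the usual sysfs locations. The paths are standard kernel interfaces, but whether they're populated depends on kernel config, firmware settings, and the hardware itself, so treat the output as a heuristic rather than a guarantee.

    #!/usr/bin/env python3
    """Rough check for active IOMMU and ECC reporting on a Linux host."""
    from pathlib import Path

    def _entries(path: str, pattern: str = "*") -> list:
        # Return matching entries, or an empty list if the directory is absent.
        p = Path(path)
        return list(p.glob(pattern)) if p.is_dir() else []

    def iommu_active() -> bool:
        # /sys/class/iommu gains entries (e.g. dmar0 for Intel VT-d,
        # ivhd* for AMD-Vi) only when the kernel binds an IOMMU driver.
        return bool(_entries("/sys/class/iommu"))

    def iommu_group_count() -> int:
        # Each directory under /sys/kernel/iommu_groups is a DMA
        # isolation group; zero groups usually means remapping is off.
        return len(_entries("/sys/kernel/iommu_groups"))

    def ecc_reporting() -> bool:
        # The EDAC subsystem exposes memory controllers as
        # /sys/devices/system/edac/mc/mc* when ECC reporting is available.
        return bool(_entries("/sys/devices/system/edac/mc", "mc*"))

    if __name__ == "__main__":
        print(f"IOMMU active:       {iommu_active()}")
        print(f"IOMMU groups:       {iommu_group_count()}")
        print(f"ECC (EDAC) exposed: {ecc_reporting()}")

On a Skylake-or-later desktop with VT-d enabled in firmware you'd expect the first two checks to come back positive, while the EDAC check will typically report False on consumer parts with non-ECC DIMMs.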
I stand corrected. I also found a clarification from Wikipedia on the situation.
> "From the mid 90s to early-mid 2000s, stylized as nVIDIA with a large italicized lowercase "n" on products. Now officially written as NVIDIA, and stylized in the logo as nVIDIA with the lowercase "n" the same height as the uppercase "VIDIA".