
I/O to peripherals can generally be done with either memory-mapped I/O or a separate I/O address space (port-mapped I/O) accessed with different instructions than memory accesses. x86 can do either, but some architectures don't have a separate I/O space. PCI allows devices to require I/O-space access, so on systems without it, it has to be emulated by the controller somehow. On x86, the I/O space is only 16-bit, vs at least 32-bit for MMIO (and many devices allow a 64-bit MMIO address). For devices like GPUs with large amounts of RAM, you can feasibly map the whole memory with MMIO and access it directly, whereas with I/O ports, you'd probably need to write a destination address to one port and then read/write the data through another port (roughly as in the sketch below). If you wanted concurrency, you'd need multiple pairs of access ports or locking, rather than letting the memory subsystem arbitrate access.
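
Roughly, in C (the port numbers, register layout, and mapped BAR pointer here are made up for illustration; on x86 Linux the port functions also need ioperm()/iopl() privileges):

    #include <stdint.h>
    #include <sys/io.h>   /* outl()/inl(); x86 Linux only */

    #define DEV_ADDR_PORT 0x300   /* hypothetical "index" port */
    #define DEV_DATA_PORT 0x304   /* hypothetical "data" port  */

    /* Port-mapped: write the target offset to one port, then read the data
       from the other. Two transactions per access, and concurrent users
       have to serialize around the shared index register. */
    static uint32_t pio_read32(uint32_t offset)
    {
        outl(offset, DEV_ADDR_PORT);
        return inl(DEV_DATA_PORT);
    }

    /* Memory-mapped: once the BAR is mapped into the address space (e.g.
       via mmap of the PCI resource file), device memory is just a pointer
       and a single load or store. */
    static uint32_t mmio_read32(volatile uint32_t *bar, uint32_t offset)
    {
        return bar[offset / 4];
    }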

As others have said, you'd normally configure the MMIO space for uncached access, or else you'd need to be careful to force the memory ordering you need. The device-specific interfacing requirements would be the guide there. Devices can indicate whether their MMIO ranges are prefetchable, which should tell you whether stray reads can cause side effects.
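
As an illustration of why the ordering matters (the device, register layout, and fence choice here are hypothetical): a common pattern is to build a descriptor in normal RAM and then poke an MMIO "doorbell" register, and the doorbell write must not become visible before the descriptor writes.

    #include <stdint.h>

    /* Hypothetical device registers, mapped uncached via its BAR. */
    struct dev_regs {
        uint32_t status;
        uint32_t doorbell;   /* writing here tells the device to start */
    };

    static void submit(volatile struct dev_regs *regs,
                       uint32_t *desc, uint32_t desc_bus_addr)
    {
        desc[0] = 0x1;   /* build the work descriptor in ordinary RAM */

        /* Make the descriptor visible before ringing the doorbell; kernel
           code would use a write barrier here (wmb() on Linux). On x86 the
           ordering mostly comes for free, on weaker architectures it doesn't. */
        __atomic_thread_fence(__ATOMIC_RELEASE);

        regs->doorbell = desc_bus_addr;   /* uncached MMIO store */
    }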

One bonus of MMIO is that a device doing DMA can target another device's MMIO range (peer-to-peer), whereas I don't think devices are allowed to drive the I/O port space like that.


