Or iGPU perf, which of course needs the above-mentioned bandwidth.
Or ML inference performance (5 tokens/sec with LLaMA 65B).
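To see why bandwidth dominates here, a rough back-of-envelope sketch (assuming 4-bit quantized weights and that generating each token streams the full weight set from memory; neither assumption comes from the original comment, they're just common for running LLaMA 65B locally):

```python
# Back-of-envelope: memory-bandwidth-bound LLM token generation.
# Assumed, not stated above: ~4 bits per weight, full weight read per token.

params = 65e9            # LLaMA 65B parameter count
bytes_per_weight = 0.5   # assumed 4-bit quantization
tokens_per_sec = 5       # throughput figure from the comment

weights_bytes = params * bytes_per_weight            # ~32.5 GB read per token
required_bw = weights_bytes * tokens_per_sec / 1e9   # GB/s of memory bandwidth

print(f"Weights read per token: {weights_bytes / 1e9:.1f} GB")
print(f"Bandwidth needed for {tokens_per_sec} tok/s: ~{required_bw:.0f} GB/s")
```

Under those assumptions the math comes out to roughly 160 GB/s of sustained memory bandwidth just to hit 5 tokens/sec, which is why iGPU and CPU inference alike live or die by memory bandwidth rather than raw compute.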