That's my interpretation, but I'd want a more detailed paper, because everything in the announcement is quite vague. Based on what I've read, the claim isn't that impressive (yet). Everyone is gunning to be the historic first here.
"Hey, we have a new fluid-flow simulation processor that achieves CFD supremacy over classical computers... we pour water across the surface of things, and the results are indistinguishable from reality, whereas a classical computer would take days, weeks, or months to simulate the same flow and still wouldn't reach the same accuracy."
It's not quite as bad as an analog water simulator made of water. They have built a programmable machine that can run any quantum problem it can load into its qubits. The problem is, it has so few qubits that it can only load tiny problems. The claim is that it can still beat classical computers on some subset of these tiny problems.
Is it correct that the claim that it "can still beat classical computers on some subset of these tiny problems" is based on classical computers simulating quantum circuitry to compute those problems?
I’m wondering if there is some other way the problem could be computed without having to simulate qubits.
A lot of work has gone into finding classical algorithms for these sampling problems since the mid-2000s, and some speedups have been found, but everything still suggests the problems are genuinely exponential on classical hardware. The classical algorithms don't model qubits directly, but they do involve summing over exponentially many states and averaging the results, which amounts to much the same thing.
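For a sense of where the exponential comes from: the brute-force classical approach is to track the full quantum state vector, which has 2^n amplitudes for n qubits. Here is a minimal, illustrative sketch in Python/NumPy of classically sampling from a random circuit this way; the circuit structure, function names, and parameters are my own assumptions for illustration, not taken from any particular paper.

```python
# Naive classical simulation of random-circuit sampling by storing the full
# state vector. Memory and time scale as 2^n in the number of qubits n,
# which is the exponential wall referred to above.
import numpy as np

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_2q(state, gate, q0, q1, n):
    """Apply a 4x4 gate to qubits (q0, q1) of an n-qubit state vector."""
    psi = np.moveaxis(state.reshape([2] * n), (q0, q1), (0, 1))
    psi = np.tensordot(gate.reshape(2, 2, 2, 2), psi, axes=([2, 3], [0, 1]))
    return np.moveaxis(psi, (0, 1), (q0, q1)).reshape(-1)

def sample_random_circuit(n=10, depth=8, shots=5, seed=0):
    rng = np.random.default_rng(seed)
    dim = 2 ** n                              # full state vector: 2^n complex amplitudes
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0                            # start in |00...0>
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    for layer in range(depth):
        for q in range(n):                    # random single-qubit rotations
            m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
            u, _ = np.linalg.qr(m)            # unitary via QR of a random matrix
            state = apply_1q(state, u, q, n)
        for q in range(layer % 2, n - 1, 2):  # entangle neighbouring qubits
            state = apply_2q(state, cnot, q, q + 1, n)
    probs = np.abs(state) ** 2
    probs /= probs.sum()                      # guard against float rounding
    return rng.choice(dim, size=shots, p=probs)  # sample bitstrings like the hardware would

print(sample_random_circuit())
```

At around 50 qubits the state vector alone needs petabytes of memory, so even "tiny" problems on the hardware side are beyond brute force; the cleverer classical algorithms avoid storing the whole vector but, as far as anyone can tell, still pay an exponential cost in time.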
"Hey, we have a new fluid flow simulation processor that reaches CFD supremacy over classical computers... we pour water across the surface of things and the results are indistinguishable to reality compared to the days, weeks, months it would take to simulate and still wouldn't reach the same resolution of accuracy on a classical computer."