If you truly wanted to maximize performance per watt, you'd pick a very different design more reminiscent of GPUs. But then single-thread and low-multi-thread performance would really suck. So it will always be a carefully tuned tradeoff.
Nope. Not even. Again, as the grandparent post stated, everything is being sold with the default configuration redlined.
You absolutely can rein it back in to sanity, get better performance per watt than the previous gen, and still be noticeably faster than the previous gen.
-----
With AMD, apparently we're going to see BIOS updates from board manufacturers to make doing that simple. Set a lower power limit in a few keystrokes and still have something blazing fast (faster than the previous gen), but drawing 145W instead of 250W. Or go even lower and still be a bit faster than the previous gen while using around 88W on a 7950X instead of the 118W a 5950X did.
Intel -- who has been redlining their CPUs for years now -- even noted Raptor Lake's efficiency at lower power levels. Again, cut power consumption by 40-50%, for only a tiny performance hit. They actually made entire slides for their recent presentation highlighting this!
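For the Linux folks who don't want to reboot into the BIOS every time: the same kind of cap can be set at runtime through the powercap interface. A minimal sketch, assuming the standard intel-rapl sysfs layout (the domain index and whether the limit is writable vary by board and firmware; needs root):

    # Cap the long-term (PL1) package power limit via Linux powercap/RAPL.
    # Paths follow the usual intel-rapl layout; check your own /sys tree first.
    PKG = "/sys/class/powercap/intel-rapl:0"

    def read_watts(path):
        with open(path) as f:
            return int(f.read().strip()) / 1_000_000  # sysfs reports microwatts

    def set_pl1(watts):
        # constraint_0 is normally the long-term package limit.
        with open(f"{PKG}/constraint_0_power_limit_uw", "w") as f:
            f.write(str(int(watts * 1_000_000)))

    print("current PL1:", read_watts(f"{PKG}/constraint_0_power_limit_uw"), "W")
    set_pl1(125)  # e.g. hold the package to 125W instead of the stock redline

Same idea as the BIOS knob: one number, and the chip simply boosts less aggressively.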
NVIDIA is no different, and has been for years. Ampere stock voltages were well outside their sweet spot. Undervolt, cut power consumption by 20-25%, and performance is UNCHANGED.
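If you'd rather script it than click around in Afterburner, here's a rough Python sketch using NVML (pip install nvidia-ml-py). Caveat: this sets a board power cap, which is what nvidia-smi -pl does, not a true voltage/frequency-curve undervolt, but on Ampere it captures most of the same efficiency win. The 25% cut is just the illustrative number from above:

    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)  # milliwatts
    cur = pynvml.nvmlDeviceGetPowerManagementLimit(gpu)
    print(f"allowed: {lo/1000:.0f}-{hi/1000:.0f} W, current: {cur/1000:.0f} W")

    # Drop the limit ~25% below the current setting (needs root/admin).
    pynvml.nvmlDeviceSetPowerManagementLimit(gpu, int(cur * 0.75))

    pynvml.nvmlShutdown()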
-----
Sure, there's more efficient stuff. Take last generation's 8-core Ryzen 7 PRO 5750GE. About 80% of the performance of an Intel Core i5-12600K, but only uses 38W flat out instead of 145W.
You don't even really need to rein it back, modern processors will throttle back automatically depending on how effective the cooling is. Anyway, the issue with manual undervolting is that it may adversely impact reliability if you ended up with a slightly substandard chip that would still work fine at stock settings. That's why it can't just be the default.
>You don't even really need to rein it back, modern processors will throttle back automatically depending on how effective the cooling is
This isn't about thermals. This is about power consumption.
I'm not suggesting reining in a CPU's power limits because it's "too hot".
I'm suggesting getting 95% of the performance for 59% of the power consumption. It's not worth spending roughly 70% more on electricity for 5% more performance. Again, even the manufacturers themselves know this and are admitting it. Look at this slide from Intel: https://cdn.arstechnica.net/wp-content/uploads/2022/09/13th-... Purportedly identical performance to the previous gen at 25% of the power consumption. They KNOW the default configuration (which you can change in a few keystrokes) is total garbage in terms of power efficiency.
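To spell out the arithmetic with those same rough numbers (illustrative ratios, not measurements):

    # Stock vs. power-limited, using the ~95% perf / ~59% power figures above.
    extra_perf  = 1.00 / 0.95 - 1   # ~0.05 -> ~5% more performance at stock
    extra_power = 1.00 / 0.59 - 1   # ~0.69 -> ~70% more power at stock
    print(f"stock buys {extra_perf:.0%} more perf for {extra_power:.0%} more power")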
-----
I guarantee you server CPUs aren't going to be configured this idiotically out of the box. Because in the datacenter, perf/W matters and drives purchasing decisions.