
How much would that run for? $5K?


Probably. Or AMD pulls another rabbit out of the hat and calls it $2500 in mid-2021.


I doubt it. This is the new AMD, with ridiculous prices where they face no competition. I'd say $6k is more likely than $2.5k.


I don't really see how the pricing is ridiculous though. They're still way cheaper than what Intel charged for the same core counts a year ago, and they have more cores than Intel can even offer on their best workstation chips. If you compare against 28-core Xeons, the new Threadrippers are a downright bargain.


They increased pricing by $100/$200 for the 24/32-core CPUs since last year. Maybe it's not ridiculous, but they're not undercutting anymore.


They are undercutting: on perf/$. The difference in sticker price you pointed out is only there because there is no comparable Intel part (under $3000, at least).


I meant in comparison to what we were used to. Now a semi-decent TRX40 board is $700 and entry-level TR3 is $1300, while a top-end X399 board is $550 and entry-level TR is $250. There was a huge jump in prices compared to the previous generation of HEDT.


The price jump was +8% for 24 cores and +11% for 32 cores. It wasn't really a "huge" jump. The reason it seems so much more expensive is because the cheaper SKUs were simply removed instead of replaced.

And entry-level TR was never $250. That's the EOL "we need to dump old inventory" fire sale price.


Yeah, but everybody expected a price drop with new gen; instead we got overpriced TRX40 (compared to high-end x399) and the same number of cores for more money. I get that those cores are much more powerful, but it's still an untypical situation. 2990WX will probably stay at the same level as there is no other choice for x399 owners anyway.


> 2990WX will probably stay at the same level as there is no other choice for x399 owners anyway.

The 2990WX is pretty bad for most use cases. It's only good for rendering.

The 24-core 3960X is probably better due to lower latency and better-balanced I/O.


I agree, but on X399 the 2990WX will be the end of the line, and therefore there's no reason to lower the price, as was the case with the 4790K. The 2990WX loses many benchmarks to the 3900X; the 3960X should demolish it most of the time.

Reinforcement learning might be a good use case for 2990WX.


It seems like people benefit from the higher clocks and lower latencies of the 16-core 2950X. I pretty much consider the 2950X to be the end of the line for typical use cases... with the 2990WX only really useful for render boxes.

> Reinforcement learning might be a good use case for 2990WX.

Hmm, the 2990WX is better than the 2950X for that task, but the 3960X has 256-bit AVX2 units. Since the 2990WX only executes AVX2 at 128 bits wide, I would place my bets on the cheaper, 24-core 3960X instead.

Doubling the SIMD width in such a compute-heavy problem would be more important than the +8 cores that the 2990WX offers.
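To make the width argument concrete, here's a rough sketch (my own illustration, not a benchmark) of the same sum written at both SIMD widths. Zen 2 (3960X) executes the 256-bit version natively, while Zen+ (2990WX) cracks each 256-bit op into two 128-bit µops, so only the newer cores actually retire half the instructions per element:

    // Hypothetical illustration: one reduction written at 128-bit and 256-bit
    // width. Remainder elements are ignored for brevity.
    #include <immintrin.h>
    #include <cstddef>

    float sum128(const float* x, size_t n) {    // 4 floats per add
        __m128 acc = _mm_setzero_ps();
        for (size_t i = 0; i + 4 <= n; i += 4)
            acc = _mm_add_ps(acc, _mm_loadu_ps(x + i));
        float lanes[4];
        _mm_storeu_ps(lanes, acc);
        return lanes[0] + lanes[1] + lanes[2] + lanes[3];
    }

    float sum256(const float* x, size_t n) {    // 8 floats per add
        __m256 acc = _mm256_setzero_ps();
        for (size_t i = 0; i + 8 <= n; i += 8)
            acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));
        float lanes[8], s = 0.0f;
        _mm256_storeu_ps(lanes, acc);
        for (float v : lanes) s += v;
        return s;
    }

Half the instructions per element on a core with real 256-bit datapaths is roughly why 24 Zen 2 cores can out-muscle 32 Zen+ cores on vector-heavy work.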

EDIT: The 3960X also fixes the latency issues that the 2990WX has, so it's acceptable to use the 3960X in general-use scenarios (aka video games). The latency issue made the 2990WX terrible at playing video games.

Yeah, no one is buying these HEDTs for "purely" gaming tasks, but any "creative" individual who does video rendering during the week and plays Counter-Strike on the weekend needs a compromise machine that handles both high core counts AND high clock speeds for the different workloads.


> but everybody expected a price drop with new gen

No they didn't. I certainly didn't. There was no reason at all to believe TR3 would be a price drop. Ryzen 3000 wasn't, and neither was X570. If the mainstream platform parts didn't get a price drop, why would the HEDT halo products? Particularly since new generations are almost never price drops, especially without any competition.

> instead we got overpriced TRX40 (compared to high-end x399)

X399 boards at launch ranged from $340 to $550. TRX40 boards at launch range from $450 to $700. Yes, there was a bump, but there is also overlap in pricing. You are getting PCIe 4.0 along with a substantially higher-spec'd chipset. You're also getting, in general, a higher class of board quality and construction. Similar to the X570 vs. X470 comparison.

> but it's still an untypical situation

Untypical in that they are actually a lot faster generation over generation, sure. Untypical in that they are priced similarly or slightly higher? Not really. That's been the status quo for the last decade or so. The company with the halo product sets the price. The company in 2nd place price-cuts in response. AMD has the halo; they were never going to price-cut it.


Top-end TRX40 is around $1000 (Zenith II). That's almost double the X399 Zenith Extreme, and X399 boards had 16-phase VRMs as well in later releases. PCIe 4's usefulness is questionable (basically just for 100 Gigabit networking right now).

For x399 users TRX40 is underwhelming as it just feels like "pay for the same stuff again" if you want to use new CPUs.


> Top-end TRX40 is around $1000 (Zenith II)

Halo boards are always stupidly overpriced. X570 tops out at $1000, too. That's a terrible way to judge a platform's costs.

> PCIe 4's usefulness is questionable (basically just for 100 Gigabit networking right now).

Not true at all. It's more bandwidth to the chipset, meaning you can run twice as much PCIe 3.0 gear off of that chipset as you could before without hitting a bottleneck (well, actually 4x, since the number of lanes to the chipset also doubled...). That means more SATA ports. More M.2 drives. More USB 3.2 Gen 2x2.
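Back-of-the-envelope (my numbers, assuming the commonly quoted ~0.985 GB/s per PCIe 3.0 lane and ~1.97 GB/s per PCIe 4.0 lane, plus an assumed x4 3.0 uplink on X399 vs. an x8 4.0 uplink on TRX40):

    // Rough chipset-uplink math; lane counts and per-lane rates here are
    // assumptions for illustration, not vendor-published figures.
    #include <cstdio>

    int main() {
        const double gen3_per_lane = 0.985;   // GB/s per lane, one direction
        const double gen4_per_lane = 1.969;   // GB/s per lane, one direction

        double x399_uplink  = 4 * gen3_per_lane;   // assumed PCIe 3.0 x4 link
        double trx40_uplink = 8 * gen4_per_lane;   // assumed PCIe 4.0 x8 link

        std::printf("X399 uplink:  ~%.1f GB/s\n", x399_uplink);    // ~3.9
        std::printf("TRX40 uplink: ~%.1f GB/s\n", trx40_uplink);   // ~15.8, ~4x
        return 0;
    }

That ~4x of headroom is what lets the chipset hang a lot more 3.0-speed devices off itself before the uplink becomes the choke point.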

> For x399 users TRX40 is underwhelming as it just feels like "pay for the same stuff again" if you want to use new CPUs.

Not disagreeing on that, but that's very different from TRX40 being "overpriced vs. X399." Just because it's not worth upgrading to the new platform doesn't make the new platform overpriced vs. the old one.


> It's more bandwidth to the chipset, meaning you can run twice as much PCIe 3.0 gear off of that chipset as you could before without hitting a bottleneck

Not necessarily the case in practice, since that would require some sort of chipset or active converter on the motherboard to mux 3.0 lanes onto bifurcated 4.0 lanes. A 3.0 x4 device still needs those four lanes to get full speed, so in a PCIe 4.0 setting you'll actually be using up four of the PCIe 4.0 lanes, but inefficiently.


This comment doesn't make sense. You're getting more cores than the previous generation of HEDT, and the equivalent Intel processors aren't cheaper at all.

This is a new market segment. If you want a fast CPU in that price range, the Ryzen 7 and 9 series are completely fine!

The exact same price range you're used to still exists.

On the other hand, people have been used to paying exorbitant prices for Xeon processors, like $2000-5000 per CPU, so this is a breath of fresh air.


You are getting the same number of (better) cores for a higher price. But backwards/forwards compatibility is gone, so you either pay the full price up front, or you're stuck with outdated chips forever.


Board cost is mainly due to PCIe 4.0 support.


The bigger point is that there was no reason to kill X399 support at all. It's the same physical socket, the socket is capable of supporting much more than Threadripper did with it (Epyc uses the same socket for 8 memory channels), and the power consumption has not increased significantly compared to the TR 2000 series.

There was no reason to kill TR4. It could have been a "legacy" board with PCIe 3.0 support, like X470 is for the desktop socket.

AMD just killed TR4 because they wanted everyone to buy new boards. The classic Intel move.

(Meanwhile Intel put a new generation of chips on X299, while also putting out compatible X299X boards that increase the lane count. Intel doing it right for once, AMD doing it wrong for once.)


Which is kinda unnecessary, as there is no single GPU on the market capable of saturating PCIe 3.0, and situations where one needs sustained transfers between multiple M.2 SSDs fast enough to saturate PCIe 4.0 are very rare. Only 100Gbps+ LAN is probably practical, and only for a handful of pro users.


Actually, it's pretty easy to get bandwidth-bottlenecked in GPU compute.

I know video games don't really get bandwidth bottlenecked, but all you gotta do is perform a "Scan" or "Reduce" on the GPU and bam, you're PCIe bottlenecked. (I recommend NVidia CUB or AMD ROCprim for these kinds of operations)

CUB Device-reduce is extremely fast if the data is already on the GPU: https://nvlabs.github.io/cub/structcub_1_1_device_reduce.htm.... However, if the data is CPU / DDR4 RAM side, then the slow PCIe connection hampers you severely.

I pushed 1GB of data to a device-side reduce the other day (just playing with ROCprim), and it took ~100ms to hipMemcpy the 1GB of data to the GPU, but only ~5ms to actually execute the reduce. That's a PCIe bottleneck for sure. (Numbers from memory... I don't remember them exactly, but those were roughly the magnitudes we're talking about.) That was over PCIe 3.0 x16, which seems to only push ~10GB/s one-way in practice (~15GB/s in theory, but practice is always lower than the specs).
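If anyone wants to reproduce that kind of measurement, here's a minimal sketch of the same experiment written against CUB on the CUDA side (the run above used ROCprim/hipMemcpy, but the structure is identical); the buffer size and the timings in the comments are illustrative, not the exact numbers above:

    // Sketch: time a 1 GiB host-to-device copy vs. an on-device sum.
    #include <cub/cub.cuh>
    #include <cuda_runtime.h>
    #include <vector>
    #include <cstdio>

    int main() {
        const int n = 256 * 1024 * 1024;              // 1 GiB of floats
        std::vector<float> host(n, 1.0f);

        float *d_in = nullptr, *d_out = nullptr;
        cudaMalloc(&d_in, n * sizeof(float));
        cudaMalloc(&d_out, sizeof(float));

        // Size and allocate CUB's temporary storage (first call only queries).
        void*  d_temp = nullptr;
        size_t temp_bytes = 0;
        cub::DeviceReduce::Sum(d_temp, temp_bytes, d_in, d_out, n);
        cudaMalloc(&d_temp, temp_bytes);

        cudaEvent_t t0, t1, t2;
        cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

        cudaEventRecord(t0);
        cudaMemcpy(d_in, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaEventRecord(t1);
        cub::DeviceReduce::Sum(d_temp, temp_bytes, d_in, d_out, n);
        cudaEventRecord(t2);
        cudaEventSynchronize(t2);

        float copy_ms = 0, reduce_ms = 0;
        cudaEventElapsedTime(&copy_ms, t0, t1);
        cudaEventElapsedTime(&reduce_ms, t1, t2);
        // Over PCIe 3.0 x16 the copy dominates (on the order of 100 ms for
        // 1 GiB), while the on-device reduce finishes in a few milliseconds.
        std::printf("H2D copy: %.1f ms, device reduce: %.1f ms\n",
                    copy_ms, reduce_ms);
        return 0;
    }

Using pinned host memory (cudaMallocHost) typically gets the copy closer to the link's practical ceiling, but the transfer still dwarfs the on-device reduce time.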

Yeah, I know CPU/GPU transfers have like 10µs of latency, but you can easily write a "server" kind of CPU-master / GPU-slave scheduling algorithm to send these jobs down to the GPU. So you can write software to work around the latency problem in many cases.

Software can't solve the bandwidth problem, however. You just gotta buy a bigger pipe.


Yup. What's crazy is that a $6k 3990X is still a better deal in terms of $/core than even a dual-cpu Xeon W-3175 system (which would cost about $6k for only 56 cores).


Probably closer to $3500-4000. Maybe $5k if they're feeling greedy. But I think they want to undercut Intel, so they'll keep their margins reasonable so they can crush the Xeon line.



