AMD's New Threadripper Chips Have a Hidden Fuse That Blows When Overclocking (extremetech.com)
82 points by webmaven on Dec 14, 2023 | 85 comments


For those who just read the title: the "fuse" doesn't stop the CPU from working; it's just a flag that lets AMD support know the chip has been overclocked, which might invalidate the warranty.


On a related note, AMD's EPYC CPUs have eFuses that enforce a vendor lock. This means you can't transfer an EPYC CPU from, say, a Dell server to a Lenovo server.

https://www.servethehome.com/amd-psb-vendor-locks-epyc-cpus-...


this isn't just limited to epyc either, it's in the consumer platform as well, and it applies to non-pro cpus that have been used in branded systems.

https://www.youtube.com/watch?v=bFNJVaO9E-o&t=215

not only is this removing a cheap source of upgrades for consumers and sending it to e-waste instead, but it's going to pollute the consumer secondhand market in the same way, once AMD starts getting more traction in OEM sales.

just like with epyc, you will need to wade through piles of "HP ryzen" and "Dell ryzen" and find the one that didn't list it as locked. Oops, still locked anyway, that seller just didn't list it properly.

people are gonna wake up one day and realize the secondhand market is completely fuckered. OEMs sell a lot more PCs to businesses than consumers, so a disproportionate amount of volume is going to be locked when things start getting parted out. Again, "fortunately" AMD didn't have much traction with OEMs during the AM4 years, but AM5 is going to be a real mess.


I'm sure this is illegal and AMD should be prosecuted for it. The people who decided to implement these "security" measures should lose their jobs and/or face legal consequences for it.

If they did such a thing with car components, it would be shut down in no time. No different for computer parts.

Also a CPU that can be rendered permanently inoperable due to a software issue could be considered not fit for purpose and thus a defective product. Especially if it was intentionally designed this way - such that the CPU can cripple itself.

Maybe a legal firm would want to take up a class action against AMD for this potential product defect? Viruses and malware should not be able to physically destroy a CPU. There should be physical hardware protections against such destructive behaviour occurring due to a software induced logic error.

The PSP is not truly secure, someone should write a PSP eFuse bricker tool (Ryzenkill?) that can be run as a proof of concept to show the CPUs are defective, and then we can have a recall or free replacement of the CPU by AMD. I'll look into it myself - I need to ensure it's legal for me to write it from the country that I'm residing in right now. I personally have extensive experience hacking older AMD processors and GPUs through JTAG.

I'm sure it's still legal to provide a list of addresses and data writes that you need to perform to permanently brick the CPU? I don't think that would be classed as a "hacking tool". I can get t-shirts printed with it, together with the words "AMD Ryzen CPU Self-Destruct Sequence". I don't think selling those would be illegal?


This is also true of the Threadripper Pro 3000/5000 CPUs (likely the 7000 as well, but I can't state that with certainty). Perhaps not surprising since they are basically EPYCs, but worth noting.

I was an early adopter and got bitten by it via a Lenovo.


Smh this is ridiculous. You literally bought the hardware...


This practice should be made illegal, if it's not already.


"You will own nothing" and I thought that was about houses, cars, tools, and entertainment media


For you to own _nothing_, it must apply to _everything_.


First thought: Are those CPUs available (in Dell-sized quantities) with vendor-specific features? If so, Dell has an excellent reason to keep possibly-incompatible CPUs out of their systems.


If they have Dell-specific features, then I agree.


Come on, these are server chips. No sensible organization is going to be moving chips between vendor boards, at least until they are written off.


It mostly just cripples the secondary market for used server hardware.


I don't see the economics of splitting servers for parts. Aside from pulling drives for shredding, servers are mostly sold as pulled (whole) on the aftermarket.


Not every company is willing to end of life perfectly good servers every five years either. Our fleet is mainly Frankensteined secondary servers with a pile of spares for repair.


Regardless of the easily quantified economics, it's just another sad little papercut on quality of life in general when e.g. my kid can't go out and trivially assemble boxes out of random dumpstered parts, which was how I got into computing in the first place.


there is an entire hobby for adults putting together random dumpstered parts/whatever they could scrounge up cheap too, google 'homelab' and thank me later ;)

(I get your point though, I agree; it's very important for kids, especially in a world of increasingly walled gardens. A ton of the poorer world also makes do with whatever - X79 and X99 poverty builds are a real thing even today, because they're still a viable platform and China has cranked out cheap off-brand boards with resoldered chipsets. Good workstation/server boards are <$200, you can get a decent CPU for $50 or so, and memory for $20 a 32GB stick; it's still a relatively high-value thing for the capabilities it offers, if you need more than a normal consumer system.)


"I don't see" != "Does not exists"

Refurb server parts market is more than alive, especially when a 5 year old server CPU is almost as good as a new one, but for $100 instead of $5000.


> I dont see the economics of splitting servers for parts.

You can see that economy on ebay. Those parts extend the life of existing boxes by years.

I'm typing this on a machine powered by parts from 'split' servers.


The AMD market allows more splitting since they keep a single socket for a DDR generation. Meaning you can get 2-4 generations of processors onto an older server.


However shocking you may find it, typing "epyc cpu" into eBay will nonetheless produce quite a few results.

This used to be even bigger in the Xeon days. Being able to drop a $50 upgrade into X99 and get the second- or third-best chip for the socket actually owns. X99 isn't bad if you've got an 18-core chip running at 3.8 GHz all-core (with all-core turbo enhancement), and you can address 256GB of memory in a consumer board with dirt-cheap 32GB RDIMM sticks (they are basically under $20 now), since many X99 boards also support RDIMM (including on consumer CPUs).

Like let's not cheer literal e-waste here. Secondary users keep hardware out of landfills; a lot of those customers have existing systems they want to upgrade, or want to build using whitebox brands like Supermicro/ASRock Rack. The EU absolutely needs to clamp down on this practice.

The most silly part is, how does locking it to the brand do anything anyway? If you locked the CPU to the board then sure, that makes sense. But the threat model here is an employee stealing your CPUs and swapping them out with something cheaper, and this makes them... find another Lenovo- or Dell-branded CPU to match it with? And with the PSP allowing secure manipulation of non-volatile storage, there is zero reason to make this a permanent fuse; why shouldn't you allow this to also be unlocked when it's time for disposal? If you want to prevent the "evil maid" attack, you would want it to be both more tightly locked to the board and also unlockable so it can be serviced.

(Even if you don't want to trust non-volatile storage, you can also make "pseudo-non-volatile" state out of plain old e-fuses: even number of fuses blown = unlocked, odd = locked. Being able to lock and unlock, say, 128 times is plenty in practice.)
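A rough sketch of that parity scheme in C, with a made-up fuse-bank layout (nothing here corresponds to real AMD hardware): the lock state is just the parity of the number of blown fuses, so blowing the next unblown fuse toggles it.

    #include <stdint.h>
    #include <stdbool.h>

    #define LOCK_FUSE_BITS 128   /* 128 one-time-programmable bits */

    /* Count how many fuses in the bank have been blown so far. */
    static unsigned count_blown(const uint8_t bank[LOCK_FUSE_BITS / 8]) {
        unsigned n = 0;
        for (unsigned i = 0; i < LOCK_FUSE_BITS / 8; i++)
            for (unsigned b = 0; b < 8; b++)
                n += (bank[i] >> b) & 1u;
        return n;
    }

    /* Even number of blown fuses = unlocked, odd = locked. */
    static bool is_locked(const uint8_t bank[LOCK_FUSE_BITS / 8]) {
        return (count_blown(bank) & 1u) != 0;
    }

    /* Toggling the state just blows the next unblown fuse; the bank is
     * good for 128 lock/unlock transitions before it is exhausted. */
    static bool toggle_lock(uint8_t bank[LOCK_FUSE_BITS / 8]) {
        unsigned n = count_blown(bank);
        if (n >= LOCK_FUSE_BITS)
            return false;                        /* out of fuses */
        bank[n / 8] |= (uint8_t)(1u << (n % 8)); /* real HW would blow fuse n here */
        return true;
    }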

And ironically the e-waste this produces (brand-locked chips) actually makes the cost of these evil-maid swapping attacks even cheaper, because nobody wants them on the secondary market; the cheapest platform-locked Naples chips are probably already absolutely worthless because nobody wants Naples to begin with, let alone brand-locked Naples.

It so very obviously is narrowly targeted at making sure the secondary market is a confusing mess that buyers want to avoid, creating e-waste to sell more chips. Again, the EU really needs to step in; it is noxious, and AMD should even be forced to release a firmware update unlocking existing chips (AMD is the root of trust and can do this).

Sadly Intel has also really cracked down on Xeon secondary sales as well - starting with Broadwell, Xeons can't be used in consumer Z97 boards, and they did the same thing on X299, as well as cracking down on unlocked multipliers on certain SKUs (the 1660 v3 has it, among others) and all-core turbo enhancement with Broadwell-EP. It would be nice if you could drop an EPYC into your Threadripper board, too; that's basically the modern equivalent of ye olde Xeon eBay upgrade. On the other hand, there are no surprises either - socket-compatible CPUs are socket compatible; your board may only work with consumer chips, but it does work with all of them.

And frankly people are also underplaying the fact that AMD does this on the consumer socket as well - consumers would benefit from having a bunch of cheap Zen 2/Zen 3 chips from old OEM systems hit the market in a year or two; that would push prices down even further.

There is also a general danger that if the systems don't have any resale value anymore they will be scrapped entirely instead of resold as either parts or a complete server. If the value of the server is nothing, nobody will bother re-using it. You've literally turned working parts into e-waste: nobody wants them anymore, and it stops being economically viable to bother for the handful of people who do. Everyone benefits from reduced friction in a market.


This will ruin the AMD brand. If Intel doesn't do such a thing, then people will start going with Intel instead.


the fact that it probably will not is actually the problem


I'm fairly sure you will be able to pick up "works with any chip" mobos when the time comes.


To me this is actually a reasonable thing for AMD to do. It still leaves me free to do whatever I want with my property.


[flagged]


I kinda assumed that all the above companies would already be doing something similar. I want to live in a world with warranties, so if there's a non-privacy-violating way for companies to protect themselves a little bit from fraudulent warranty claims, why shouldn't they do it?


They should, I was just speaking about hiding the fact from the customer. (Every time NV/Apple/Intel do something stupid, people react with "fuck them" vibes; every time AMD does something stupid, people on Hacker News defend them like AMD is some sort of small startup.)


Maybe it was not advertised pre-sale as well as it should be but do you really count this as hiding anything from the customer: https://twitter.com/hjc4869/status/1734421088606851462


If Nvidia/Apple/Intel used hardware fuses? Hahaha!


I know Nvidia does the same thing in their Tegra chips. It stops you from downgrading to previous firmware versions on the Nintendo Switch.


Apple has (had?) a water sensor in the iPhones to detect that the phone was in water. Didn't break anything, just signaled that there was water contact.


I tripped mine when I was in Vietnam... just because of the humidity.


The equivalent of those "warranty void if removed" stickers. Not a problem in and of itself.


The real problem is marketing overclocking as if it's just an everyday common thing to do with little to no thought.


The complaint in the article is the inverse: the BIOS displays a big scary warning about overclocking voiding the warranty when in fact overclocking does not necessarily void the warranty. It's more of an "everyday common thing" than the message in the BIOS implies.


Clearly they just want data on the failure rate of their process; this affects their real bottom line, servers and consoles, where reliability is everything. Gaming computer users are a minority and custom PC owners are steadily declining.


"If not blown, then not overclocked" was my reading, as well.


I remembered the Xbox 360 processor also had eFuses:

https://free60.org/Hardware/Fusesets/


You could write protect them by removing a resistor (R6T3) from the console's motherboard.

https://consolemods.org/wiki/Xbox_360:Disabling_the_eFuse_Bu... (see Method 2)


Nintendo Switch too IIRC, to guard against users downgrading to a prior firmware with vulnerabilities.


The Switch has fuses that blow when you upgrade your firmware to a significant version. At boot it checks how many fuses are blown to validate what version the firmware should be at.
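A rough C sketch of how such a check could look (hypothetical names and fuse counts, not the actual Switch bootloader code): the boot code refuses any firmware whose expected fuse count is lower than the number actually blown, which is what blocks downgrades.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative mapping of firmware major versions to the number of
     * fuses that version expects to have been burned. */
    static const struct { uint32_t fw_major; uint32_t expected_fuses; } fuse_table[] = {
        {1, 1}, {2, 2}, {3, 3}, {4, 4}, {5, 5},
    };

    static bool firmware_allowed(uint32_t fw_major, uint32_t fuses_blown) {
        for (size_t i = 0; i < sizeof fuse_table / sizeof fuse_table[0]; i++) {
            if (fuse_table[i].fw_major == fw_major) {
                /* More fuses blown than this firmware expects means the
                 * console already ran a newer version: refuse the downgrade.
                 * Fewer blown means an upgrade; the new firmware will burn
                 * the missing fuses on first boot. */
                return fuses_blown <= fuse_table[i].expected_fuses;
            }
        }
        return false; /* unknown firmware version */
    }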


I don't know if overclocking pays off much these days. The chips are already taken close to limits with factory settings, so there's not much overclocking potential unless you use extreme cooling.

I fondly remember the Core 2 Duo days when I overclocked my CPU by more than 50% using the stock cooler. It's nice to get 50% more performance for free, particularly when you are poor.

I even remember overclocking an old laptop with a K6-II (in those days laptop CPUs were socketed) by around 15%.


One argument used to be that there's always manufacturing tolerances that you can exploit to your advantage. However, those tolerances were in many cases optimized out of the process by better binning -- if a particular CPU reliably worked at a higher clock speed, it would simply be sold at a higher speed.

But also, whether a chip is actually stable at settings X or Y has become a lot harder to determine due to complexity: multicore architectures, frequency scaling and power management in general make it hard to predict how a chip will behave under real conditions. It has become unrealistic for a user to test all cores under all possible combinations of workloads. Consequently, and from anecdotal experience at least, many overclocked systems seem a lot less stable than their owners claim.


>But also, whether a chip is actually stable at settings X or Y has become a lot harder to determine due to complexity

There are many good software tools for stress testing CPUs and GPUs. Just run AIDA64 or OCCT stress tests for a long time and you are good to go if no problems arise. If there are problems, you redo the settings and run the tests again.
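For what it's worth, the basic idea behind such tools can be sketched in a few lines of C: run the same deterministic workload over and over and flag any run whose result differs from the reference. Real stress tests like the ones above exercise far more of the chip; this is only an illustration.

    #include <stdio.h>
    #include <stdint.h>

    /* Deterministic integer workload: a long xorshift/accumulate chain. */
    static uint64_t workload(void) {
        uint64_t x = 0x9E3779B97F4A7C15ull, acc = 0;
        for (uint64_t i = 0; i < 50000000ull; i++) {
            x ^= x << 13; x ^= x >> 7; x ^= x << 17;
            acc += x;
        }
        return acc;
    }

    int main(void) {
        uint64_t reference = workload();
        for (int run = 1; run <= 100; run++) {
            if (workload() != reference) {
                printf("Mismatch on run %d: unstable at current settings\n", run);
                return 1;
            }
        }
        puts("All runs matched (no guarantee of stability, but a decent sign)");
        return 0;
    }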


I think that's naive, or let's say oversimplified. There are too many cases where systems pass these tests and then fail in real-world scenarios, sometimes quickly. There's simply no way that a specific set of software routines can guarantee that a processor behaves in spec.


Undervolting (removing the safety margin put in place for binning) and removing power limits is where the gains are today. Ryzen 5000 could get some pretty nice multicore gains by undervolting using PBO2.


The most basic "overclocking" these days is actually undervolting. The clock speed you can leave to the chip.

The undervolting makes the chip run cooler, which in turn means it can stay at higher frequencies longer.


You need higher voltage to have a chip (CPU, GPU, RAM) stable at a higher frequency. Higher voltage means more heat, so you need better cooling.


This actually depends on silicon quality. The mainboard manufacturers generally configure the OOB CPU voltages higher than they need to be in anticipation of handling crappy silicon.

If you have a higher binned CPU then you can reduce that voltage and still be stable at a given clock, the chip will then be able to boost higher because there will be more thermal headroom.


The days of creating a resistor bridge with a pencil to enable extra cores or alter the multiplier are long gone. I still remember pushing my Duron with a massive aluminium block on top of it, getting 100+ MHz more out of it.


I remember when you could enable cores in a GPU just by flashing a different firmware.


Wasn't there a time when you could flash a GeForce to a Quadro and you'd lose some performance in gaming but you gain floating point precision for rendering? Or was this a rumor?


Overclocking these modern CPUs is usually just removing the power limit. They can clock themselves pretty close to the thermal limit anyway, and little is gained in raw performance above that. But the performance per watt drops significantly when overclocking.


I have a 10850K with a huge Be Quiet heatsink. Any heavy CPU workload will draw 300+ W without any overclocking; the cpu will downclock, but not throttle around 100 Celsius. Overclocking has never been more dead than it is today. The days of the old Celeron 300 or Core 2 Duo with 50%+ overclocks are long gone. Today you will most likely just hit max temperature and CPU throttling.


> the cpu will downclock, but not throttle around 100 Celsius.

Downclocking is literally throttling. But there's like another 10% performance that can be unlocked if you'd go full custom watercooling.


>But there's like another 10% performance that can be unlocked if you'd go full custom watercooling.

It's actually less than 10%. I'd say closer to around 5% - as a custom watercooling enthusiast. 2-3% gains going from a great dual tower air cooler to an All-in-One closed loop (but extra noise from the pump) and another 2-3% gains for a great CPU waterblock and thicker, denser radiators of custom watercooling (and least noise of all, assuming you choose an appropriate pump speed).

5% performance gains, and 500% price increase <:o)


Fine with that. For 99% of users it's great that factory settings allow the chip to run basically at max potential without having to be an expert in the complex web of acronyms/timings/voltages for overclocking.


Frankly this is aimed more at XMP, where vendors routinely bump IMC voltage to get those swanky binned gaming kits onto the QVL. A DDR4-4800 kit obviously isn't going to work at stock voltage with a memory controller rated for 3200; the gap is covered by increasing VCCIO and VCCSA a lot, and this results in quite a few chips with failed memory controllers over time.

Also, on AMD there is an extremely widespread culture of overclocking the fabric, which again can result in degradation over time (progressively lower clocks at a given voltage, or instability at a fixed clock). People very much still believe in the fairy tale of "24/7 safe" overclocking - obviously the manufacturer has every incentive to certify the chip as fast as it'll go and has already set the limit at the 24/7-safe point, but the gains are so potent that people tell themselves whatever they need to hear.

https://www.amd.com/en/legal/claims/gaming-details.html

AMD and Intel have been very, very clear for a very long time that this is not covered by warranty (see AMD GD-106/GD-112), but people have quietly continued to do it and warranty the CPUs when they fail, because there wasn't a way for AMD and Intel to tell. And to be fair, AMD and Intel have often muddied the waters by using these numbers in their benchmarks etc. But there remains a general sense of "if you didn't turn on XMP you didn't do the build right" and "XMP can't hurt anything as long as you don't increase the voltages", despite the voltage increase already being done automatically by the board.

XMP absolutely does cause damage, and I killed a 9900K with no other overclocking simply by enabling XMP on a fast kit. But consumers still don't get the memo; they think it is only a problem if you instantly blow up the chip like the 7000 series at launch. XMP is widely viewed as safe and even something you're supposed to be doing.

Electromigration happens very quickly at 7nm and 5nm nodes, and people are not used to it; it's quietly also a thing on 14nm- and 10nm-class nodes too. But it is already something that CPUs have to be designed around and designed to detect, monitor, and tolerate as part of the boost algorithm (that stability margin is also headroom the boost algorithm wants to exploit); they devote an enormous amount of effort to electromigration already. And the mismatch between the public understanding of the problem and the reality is causing a lot of warranty claims. You can see from enterprise fleets that CPUs don't normally fail, yet every enthusiast knows someone who had a chip die "even without overclocking! I just enabled XMP!". Of the failed CPUs getting returned for warranty, probably a plurality are memory-overclocking failures. And this is only going to keep getting worse over time: 2.5D and 3D packaging are both progressively more sensitive due to thermals, and have tighter requirements for voltage stability since they need to talk to the other dies at very low voltages (especially across heterogeneous dies with different metal stacks/totally different processes, etc.).

https://semiengineering.com/3d-ic-reliability-degrades-with-...

https://semiengineering.com/on-chip-power-distribution-model...


Speaking as an engineer, this is an essential feature.

From AMD's language, and common sense, the thoughts on this impacting warranty are misguided. I suspect this is a normal feature designed for debugging.

If I am AMD, and I e.g. receive 20,000 defective / failed chips returned from a bad run, or am sampling failures to just improve reliability, I want to know how those devices failed.

If what's failing are overclocked chips, I probably do want to lock down chips, void the warranty, or similar. If what's failing is a distribution of chips representing real-world use, I want to address the defect. In either case, if doing an RCA to understand the defect, I want to know how the chip was used. Reporting of maximum temperature, voltage, overclocking, etc. (all the data HDDs keep in SMART) is super-helpful for that sort of analysis.
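Something like the following C struct is roughly the kind of lifetime record being described, analogous to SMART counters; the fields are entirely hypothetical and not an actual AMD data structure.

    #include <stdint.h>
    #include <stdbool.h>

    /* Small wear/usage record that management firmware could keep in
     * non-volatile storage for later failure analysis. */
    struct lifetime_record {
        uint16_t max_temp_c;        /* hottest temperature observed, in deg C */
        uint16_t max_vcore_mv;      /* highest core voltage observed, in mV */
        uint32_t hours_powered;     /* total powered-on hours */
        bool     ever_overclocked;  /* the "hidden fuse" from the article */
    };

    /* Called periodically by firmware with current sensor readings. */
    static void update_record(struct lifetime_record *r,
                              uint16_t temp_c, uint16_t vcore_mv, bool oc_active) {
        if (temp_c > r->max_temp_c)     r->max_temp_c = temp_c;
        if (vcore_mv > r->max_vcore_mv) r->max_vcore_mv = vcore_mv;
        if (oc_active)                  r->ever_overclocked = true;
    }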

Also: Voiding the warranty for overclockers would have been dumb. I've never overclocked, but for anyone here running a business: Vocal thought leaders are important to your brand. Overclocking may be out-of-spec, but if you want the word on the street to be that your chips are awesome, they had darned well better be singing your praises. I've never overclocked, but I've cost both Google and AMD millions of dollars of business:

* I used to do business with Google before they broke my personal account, wiped out data for a startup I was working with, and broke / discontinued several services I rely on. I'm well enough known that me going around saying "Don't build a business on Google" (and ditto for many other folks on HN) means that GCP is starting a 100 meter dash with Azure and AWS about a hundred extra meters back.

* I bought an AMD GPU for machine learning since I liked open-source. It didn't work, and AMD discontinued support a few months after I bought it. AMD wouldn't provide warranty support for lack of suitability to advertised purpose. I learned my lesson, and tools I'm building (which will likely have broad-based adoption) don't support AMD.

There are customers you want to treat like a restaurant treats a food critic. It's not about them being profitable directly, but they're super-important.


I see no problem with this. If you overclock and cause damage, you obviously shouldn't be able to claim warranty. The only thing that isn't as black and white is how they know whether an overclocked faulty chip is faulty because of the overclock or because of something else.


I really don't like the idea of having field-programmable OTP memories in microprocessors. This shouldn't be possible; there should be a physical VPP pin (as in MediaTek SoCs), and if you don't supply power to it then no fuses are blowable. Otherwise we're going to have ransomware that physically bricks CPUs if you don't pay up on time.

There's even worse possibilities, I think some hardware has ~1KB of executable OTP which can be programmed in that manner - so it may just be possible to implement some kind of backdoor that resides in the physical CPU. Maybe something only a state-level attacker could do.

For example, it could prevent CPU side-channel mitigations from ever working, somehow, e.g. by repeatedly writing to a private internal register to stealthily keep the CPU vulnerable. And few would ever know about it, because it's hidden in an on-chip security processor.

Personally I utterly hate all AMD chips which have a Platform Security Processor. Locking down your own property to prevent you from accessing it should be illegal.

There is a full Trustronic Trusted Execution Environment running in there, on both GPU and CPUs, I believe. For example the TEE firmware blob for a radeon GPU on Linux is "/lib/firmware/amdgpu/psp_13_0_7_sos.bin", with "sos" meaning Secure Operating System. And some of these firmware files are encrypted, so you can't reverse engineer them.

The moment some ARM SoC company such as Rockchip comes up with a chip that's within an order of magnitude in performance, my AMD chip (EPYC) is going in the bin. After being smashed to pieces with a hammer, live on YouTube, with an explanation why. That might get AMD marketing to pay attention.

Update: Ampere Altra CPUs have separate power pins for blowing eFuses; you can find them in the public datasheet here[1], on page 55. The supply pins are EFUSE_MFG_VDDQ1P8 and EFUSE_PCP_VDDQ1P8, and it says to tie them to GND if unneeded.

Also they have a public datasheet for the chip. I wonder if we can buy unfused CPUs without secure boot enabled? Or is that the default state of the chip?

Could someone design a simple and cheap open-source motherboard for one of these processors? It's only a matter of time before the chips start turning up on eBay. We will need more documentation from Ampere than just the datasheet, of course.

1. https://uawartifacts.blob.core.windows.net/upload-files/Altr...


I'm guessing that vulnerabilities tied to ongoing manufacturer control (CPUs, cars, Polish locomotives, etc.) will continue until a widespread attack turns public opinion.

And I fear that even then, lobbyists and/or intelligence agencies will neuter any response.

Shoot, now I'm just making myself depressed.


That's why we need open-source chips; the same restrictions we had with proprietary software are now coming to hardware. There are plenty of interesting things to do with FPGAs, and the community is even in the early stages of developing open-source FPGAs now.

It was AMD with their PSP that pushed me into FPGAs and creating open source hardware, now that everything is getting locked down.


Of all the CPUs I have had, not a single one broke from use before it was replaced due to obsolescence, and I definitely have been pushing them. How common is it for CPUs to be permanently damaged by reasonable overclocking anyway, especially modern ones that have integrated sensors and clock themselves down automatically?


CPUs are still pretty durable but they can be a bit temperamental.

With the smaller process sizes there's less voltage tolerance before degradation occurs. The X3D chips from AMD supposedly pack the cache so tightly that it's easy to get into trouble.

A lot of chips burned out with something relatively mild like 1.3V on the SOC!

That protection is fairly limited. There aren't enough sensors for everything that's heat sensitive. Reducing voltages/clock speeds also may not be enough to save all of those.

Very long way to say: not that common... but decently common.

edit: As Paul mentions in a peer comment, this is largely due to the memory controller being moved onto the CPU.


Anecdotally, probably at least 25% of enthusiasts have had a CPU that "randomly died" or "started getting unstable over time" even though they supposedly never overclocked it, and the leading cause there is memory-controller degradation caused by XMP.

I had a 9900K fail due to XMP, no other overclocking.


That's similar to the e-fuses cellphone manufacturers use to know when a phone was rooted and void warranty regardless of what caused the malfunction, right?


apparently it does not void the warranty: https://www.tomshardware.com/pc-components/cpus/amd-says-ove...


It does not void the warranty, but the overclocking flag is permanent, and any damage that happens while overclocked isn't covered by the warranty (if I read everything right).


AMD also says it will in one of their hidden footnotes at the bottom of a marketing page:

>Overclocking and/or undervolting AMD processors and memory, including without limitation, altering clock frequencies / multipliers or memory timing / voltage, to operate outside of AMD’s published specifications will void any applicable AMD product warranty, even when enabled via AMD hardware and/or software.

From: https://www.amd.com/en/technologies/expo

The moral of this story is overclocking is not for the naive nor the ignorant, and you must be willing to write off any hardware subjected to overclocking.


Does it mean that CoreCtrl can blow the fuse when underclocking? I'm doing this to keep my CPU from pushing the fans when doing casual browsing.


Of course it does not void the warranty, since I would not be surprised if it blew even when people sincerely did not engage in overclocking.


Similar e-fuses are common for preventing downgrading firmware, especially on consoles. Nintendo was famously running out of e-fuses to blow on the Switch because of the many vulnerabilities the console had.

Point being that these e-fuses don't blow up randomly by themselves.


That already happened on Xilinx Zynq FPGAs, where an incorrect power supply sequence causes random writes to one of the internal buses, which overwrites various eFuses, even after they have been locked.

https://support.xilinx.com/s/question/0D52E00006hpKdKSAU/zyn...

I understand burning eFuses is necessary during production, but having them blowable when the device is operating normally could be compared to building disposable devices. One logic or software error and the device can be bricked permanently. That is a ridiculous state of affairs. What about single-event upsets or some EMI-induced glitch causing eFuse corruption?

The power for burning the important fuses should be off at all times during normal operation. And we can have a test pad to supply the VPP during production.

In the case of the AMD processors, a separate bank of fuses can be used to store diagnostic data for warranty returns, if necessary. If those get corrupted somehow, the device won't malfunction. But the important ones should not be possible to program after manufacturing, no matter what happens to the logical state of the hardware, period.
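A small C sketch of that split, with hypothetical names throughout: the diagnostic bank stays writable at runtime, while the security bank refuses to program unless the external programming rail (supplied only via a production test pad) is physically present.

    #include <stdint.h>
    #include <stdbool.h>

    enum fuse_bank { BANK_DIAGNOSTIC, BANK_SECURITY };

    /* Stand-ins for hardware access; real firmware would read a VPP sense
     * line and drive the fuse controller. */
    static bool vpp_pad_powered(void) { return false; }  /* pad unpopulated in the field */
    static void fuse_program(enum fuse_bank bank, uint32_t bit) { (void)bank; (void)bit; }

    static int request_fuse_write(enum fuse_bank bank, uint32_t bit) {
        if (bank == BANK_SECURITY && !vpp_pad_powered()) {
            /* No programming voltage present: a runaway firmware write or an
             * SEU-corrupted request cannot touch the security fuses. */
            return -1;
        }
        fuse_program(bank, bit);  /* diagnostic data, or factory provisioning */
        return 0;
    }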


Indeed, but assuming there is only one cause and that it is always down to deliberate user action is just abusive.


The use cases for overclocked CPUs are dead to me.

There are so many other dominating factors that it is almost pointless to worry about per core frequency.

Memory bandwidth is probably a bigger constraint for many practical applications. Faster core clocks don't do anything to resolve that.


Opening of a sci-fi novel: Chinese manufacturers embed a Trojan horse mechanism into chips and batteries. The mechanism, triggered remotely by predetermined signal, disables chips and causes batteries to short-circuit and explode.


Expensive.


Why don't they just put some thermal feedback system in it, making the chip inherently safe?


Thermal throttling has been a thing for a long time.


Also only partially effective! A chip that's:

    * on
    * idling
    * at room temperature
... is slowly degrading. This is only expedited by raising voltages when overclocking.

Current passing through conductors wears them down. More current or heat means faster wear.

What's more, there aren't enough sensors (or room to put them) for every heat-sensitive component.


I mean... is it really necessary to overclock an AMD Threadripper?

For the price of this new Threadripper ($9,999), I wouldn't take the risk of overclocking it.


It depends. If I'm an IT manager at Pixar and this is going in the workstation of my best and most productive animator, the price might not matter. If reducing the time s/he spends waiting on the computer means I can produce movies faster, I'll take risks to give them the best machine possible.

If I'm a home freelancer staking my livelihood on this workstation, I'm not taking the risk.


> I mean... is it really necessary to overclock an AMD Threadripper

Yes? The main point of the Threadripper (especially non-PRO) is that you have a many-core CPU that can still run at high frequencies. Otherwise EPYC (many cores) or Ryzen (high speeds) would be enough.


Ah, I see, thanks. But still, for this generation of Threadripper, I would avoid overclocking if it means having no warranty.

Also a shit move from AMD, because even if you overclock with their own software, the warranty still becomes void.



