Intel to Create RISC-V Dev Platform with SiFive P550 Cores on 7nm in 2022 (anandtech.com)
121 points by rbanffy on June 26, 2021 | hide | past | favorite | 42 comments



I would say Atom is Intel's best bet at an ARM competitor, not at the same power but as a slightly beefier alternative.

From what I read, they are canceling that line and instead offering low-power regular desktop processors.

That's why I bought 4x 8-core Atom servers at 25 W: no GPUs in my low-electricity servers, and a high-performance alternative to the Raspberry Pi 4.

RISC-V is probably going to take a decade to reach the kind of stability the latest Atom line has in terms of support and performance, if ever; especially on the server side.

The Xeon line CPUs consume way too much power to be an alternative for home hosting with lead-acid backup.
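The runtime tradeoff behind that point is easy to sketch: on battery backup, runtime falls linearly with server draw. A rough calculation, where the battery capacity, usable depth of discharge, and inverter efficiency are all hypothetical illustrative figures:

```python
# Hypothetical lead-acid UPS runtime vs. server power draw.
# Assumed battery: 12 V, 100 Ah, usable down to 50% depth of discharge.
battery_wh = 12 * 100 * 0.5   # usable energy in watt-hours
inverter_eff = 0.85           # assumed inverter efficiency

def runtime_hours(load_w: float) -> float:
    """Approximate backup runtime for a constant load."""
    return battery_wh * inverter_eff / load_w

print(f"25 W Atom box:  {runtime_hours(25):.1f} h")
print(f"150 W Xeon box: {runtime_hours(150):.1f} h")
```

With these made-up numbers a 25 W box runs roughly six times longer than a 150 W one on the same battery, which is the whole argument for low-power boards in a home-hosting setup.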


Intel didn’t so much kill the Atom line as merge and evolve it.

They are following in ARM’s footsteps by having heterogeneous multi-core CPUs with high-performance and low-power cores (like the M1, and ARM’s big.LITTLE before that).

Intel’s Alder Lake will have a mix of Golden Cove cores for performance and Gracemont cores for power efficiency. Gracemont cores are improved Tremont cores, previously found in their Atom line.


Atom is an amazingly power inefficient design.

x86 is a definite dead weight on its neck here. Atom's front-end is a few times bigger than its back-end.


Somehow I doubt this sweeping claim; are you sure? Atom is a really big family by now; starting with Silvermont it went fully out-of-order with all the bells and whistles. I find it hard to believe the x86 instruction decoder is anywhere near the size of the OoO back-end.


A complicated ISA like x86, with layer upon layer of additions, costs a lot of silicon that needs power to run.


Let's go see some figures!

"Empirical Study of Power Consumption of x86-64 Instruction Decoder"

https://www.usenix.org/system/files/conference/cooldc16/cool...

From the conclusion:

"The result demonstrates that the decoders consume between 3% and 10% of the total processor package power in our benchmarks. The power consumed by the decoders is small compared with other components such as the L2 cache, which consumed 22% of package power in benchmark #1. We conclude that switching to a different instruction set would save only a small amount of power since the instruction decoder cannot be eliminated completely in modern processors."
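To put those percentages in perspective, a back-of-envelope bound on the best-case savings from eliminating the decoder can be computed directly. The 3%–10% decoder share and 22% L2 share are from the paper; the total package power is a made-up illustrative figure:

```python
# Back-of-envelope bound on power saved by removing the x86 decoder,
# using the 3%-10% decoder share reported in the USENIX paper.
package_power_w = 10.0  # hypothetical total package power

decoder_share_low, decoder_share_high = 0.03, 0.10
l2_share = 0.22  # L2 cache share in benchmark #1, for comparison

savings_low = package_power_w * decoder_share_low
savings_high = package_power_w * decoder_share_high

print(f"Decoder power: {savings_low:.1f}-{savings_high:.1f} W "
      f"of {package_power_w:.0f} W package")
print(f"L2 cache, by contrast: {package_power_w * l2_share:.1f} W")
```

Even taking the high end, dropping the decoder entirely saves around a tenth of package power, which is why the paper concludes the ISA switch alone buys little.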


1. They are physically huge, and that means more leakage, when there could be none.

2. An inefficient front-end takes a toll on everything behind it, leaving the rest of the chip doing nothing except dissipating power.

3. While cores on modern SoCs usually take much less space than everything else, front-ends still eat into the die area.

4. The front-end dictates how the rest of the core behind it is designed. I believe that x86 technical debt absolutely must take its toll there, and on the overall transistor count in the back-end.


It's big, but not overwhelmingly so. To the eye it looks like about 10% of the area, so it jibes with the quote provided by minipci1321, which claims ~10% of power.

Here is an annotated die shot of Zen2 that shows it. https://forums.anandtech.com/threads/annotated-hi-res-core-d...


Scaling RISC-V will be orders of magnitude easier than anything of the x86 lineage. It’s a new, clean, modern design with much less complexity and legacy baggage. Intel could build crazy fast RISC-V designs within a few years at most if they wanted to.

A very interesting path back to relevance for Intel would be to design proprietary absolutely crazy fast RISC-V cores and market them in the data center and HPC markets first. The tooling on Linux is about there. Eventually they could get an initially small but very loyal developer laptop market going too. A good RISC-V design could easily compete with the Apple M1.

Get enough momentum and a Windows port could happen. Then you could reboot the PC market.

The automotive and drone niches would be interesting too. There is a market there for low power high performance chips for things like driving assist and autonomy. A new instruction set wouldn’t matter much since the whole software stack is more or less custom anyway.

At the very least it would be a way to hedge in case x86 really does start to enter a death spiral.


If it's so easy to create fast RISC-V cores, then why hasn't it been done?


It's not exactly easy -- more of "we know how to do it, given funding".

Designing any fast modern chip -- such as the Apple M1 -- probably still costs around a billion dollars.

In the RISC-V world the company with the most funding so far is SiFive and they've had $190 million.

Intel getting involved could change that very quickly.


Nobody’s done it yet, probably due to lack of perceived demand.

Don’t get me wrong. It’s not so easy that a few college kids could just whip one out. A company like Intel or AMD could definitely do it.


> That's why I bought 4x 8-core Atom servers at 25 W: no GPUs in my low-electricity servers, and a high-performance alternative to the Raspberry Pi 4.

How much did you pay? Last time I checked these were still a couple of times more expensive than Raspberry Pis.


~$500 per board, but then you need to add a PicoPSU, passively cooled case, SATA drive, 12 V PSU, and 2x 16 GB RAM, so about $1100 in total.

Same Gflops/W but about 4x the wattage, so for things that cannot be distributed (like a shared MMO world in my case) it's probably as good as it gets...
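The tradeoff described there works out to roughly 4x single-node throughput. A quick sketch, where the ~25 W board, ~6 W Pi, and shared efficiency figure are all assumed illustrative numbers, not measurements:

```python
# Rough comparison of one Atom board vs. a Raspberry Pi 4 at equal
# performance-per-watt: a board drawing ~4x the power delivers ~4x
# the throughput of a single, non-distributed node.
atom_power_w = 25.0   # assumed Atom board power draw
pi_power_w = 6.0      # assumed Raspberry Pi 4 power draw
gflops_per_w = 1.0    # illustrative shared efficiency figure

atom_gflops = atom_power_w * gflops_per_w
pi_gflops = pi_power_w * gflops_per_w

print(f"Single-node throughput ratio: {atom_gflops / pi_gflops:.1f}x")
```

The point being that for a workload which can't be sharded across several Pis, the one bigger node wins even though the efficiency is the same.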


Those boards have proprietary form factors. Are you running them in Supermicro chassis?


A2SDi-8C-HLN4F are Mini-ITX so they fit in these: https://streacom.com/products/db1-fanless-mini-itx-case/

But you need to buy this GPU adapter: https://streacom.com/products/db4-gpu-cooling-kit/ (for the small CPU)

And buy M3 female-female spacers that are 50 mm long and some M3 threaded rod that you cut to the appropriate length.

So it's a bit fiddly until they release proper small CPU adapters for these.

Also, to remove the heatsink that comes on the motherboard you need to run the board without a fan until it heats up enough to soften the thermal paste. Be careful not to rip the board when removing it; a wooden ice cream stick to pry with might come in useful.


Which servers/boards did you buy? I’m interested in one for local dev work, as over a year it’s probably cheaper than renting a suitably large VM.


SuperMicro


Thanks will look into it.


Generally for Intel I see the benefits as:

- Part of a roadmap to RISC-V to eventually lure Arm customers uncomfortable with the Nvidia bid for Arm

- Generally hitting Arm at the lower end by giving RISC-V more credibility in the short term

- Open technology like RISC-V is less affected by the US rules for technology discussion/transfer with China


> Open technology like RISC-V is less affected by the US rules for technology discussion/transfer with China

I understand people see that as a benefit of RISC-V, but I'm not seeing it quite here; how does that benefit Intel?


They might get approval to continue selling to China (companies on the export ban list)?


I wonder whether we will someday see RISC-V as a "replacement" (either official, by Intel/AMD, or unofficial because of being better) on desktop PCs.


Why not. On the server side of things the architecture doesn't really matter, as recompiling isn't that big of a deal. For Linux it is no issue whatsoever; Windows has technically been multi-arch for decades, and it seems MS wants that to happen. Also, Apple showed quite recently that static recompilation is a viable option as well.


Does it come with out-of-band management systems? I have a difficult time seeing how Intel will benefit from this outside of the chip fab process. Granted, this is substantially a branch in a different direction... I expect very little for non-NDA customers on their side.


Maybe to have an out from the coming death of x86 that's not ARM, which could soon be controlled by Nvidia.


I thought the UK blocked the sale on national security grounds.

What is the compelling reason for going with RISC-V compared to ARM or x86 when it comes to general purpose computing?

I understand Intel wants to communicate that it is taking a direction, but it seems to me that we are a long time away from seeing RISC-V as the main computing architecture in a consumer laptop or a phone.


ARM is potentially soon owned by an enemy. RISC-V is owned by no one.

Arm64 and riscv64 are pretty close to equivalent in a technical sense for general purpose computing, especially once consumer ARM chips ship with SVE2 and RISC-V with its Vector extension (both of which are going to be out at pretty much the same time).

Someone with a lot of money, such as Intel, investing in RISC-V will vastly accelerate RISC-V getting to being able to be used in a high end phone or laptop. Intel has the ability to "do an M1" with RISC-V, leapfrogging ARM.


> Intel has the ability to "do an M1" with RISC-V, leapfrogging ARM.

What about Intel’s execution over the last 5+ years gives you this confidence?


2006


How fast do you believe Intel could come up with such an M1-level RISC-V CPU?

Doing so, wouldn't they also help create a market where the barrier to entry is much lower than x86, thus inviting more competitors than they have now? It'd be interesting to see Intel on such a warpath against ARM...


Intel won't be able to do an M1-style SoC any time soon. Not with any architecture.

Also, why would this lower the barrier to entry? Intel loves their margins. This will never change.

This effort is all about Intel fiddling with their 7nm process, not about moving to another architecture.


> Intel has the ability to "do an M1"

They might be able to match it on speed, efficiency, and price, but not on vertical integration. But I'm curious to see how a RISC-V ecosystem might develop once someone starts pumping heavy dollars into it.


Who has the incentive to put heavy dollars into it?


The manufacturer with the US government (plus political support) behind it, since they don’t want to see their country lag behind in processor manufacturing, for strategic reasons.

I believe this will tend to put the right incentives in place when needed.


Nationalism and subsidies do not make a sustainable business.

Why would intel do something that breaks x86/windows dominance? Why would Google/Qualcomm break arm/android?


> Nationalism and subsidies does not make a sustainable business.

This is not true. Airbus is an early example, and of course we can't forget Huawei, Baidu, or Alibaba.

I'd say the defense industry as well, but since their customers are governments, one may argue that their revenue dries up when military spending is down.


Nationalism/subsidies/protectionism are a valid way to nurture a startup.

But you can definitely grow to a point where that's no longer necessary. That's why people got so terrified of Huawei: it wasn't that they were subsisting only on forced 'buy-domestic' orders from China, it was that they were able to produce products that were competitive enough for overseas customers to willingly buy them.


It's not that they want to, but politics might drive them that way.

I'm pretty sure a lot of Chinese big tech companies are willing to invest in semiconductor designs to keep more of the profit internal rather than send it to the US, and to reduce risk in their supply chains. You also see more and more Chinese big tech companies branching out of China, so it's not certain that Windows and Android can easily keep their monopoly.


Small chips... I'm wondering about 7nm yield issues at Intel...


Same kind of news over and over again, with nothing to show for it.



