
European HPC Processor "Rhea1" Tapes Out, Launch Delayed to 2026

AleksandarK

News Editor
Staff member
The European Processor Initiative (EPI) is nearing completion of its first goal. SiPearl, the leading developer behind the Rhea1 processor, has finally reached the tapeout stage after a string of delays, but the chip will not be ready for delivery until 2026 at the earliest. When the project launched in 2020, SiPearl planned to begin production in 2023; however, the 61 billion-transistor chip only entered tapeout this summer. The design, built on TSMC's N6 process, features 80 Arm Neoverse V1 cores alongside 64 GB of HBM2E memory and a DDR5 interface. While these specifications once looked cutting‑edge, the industry has already moved on, and Rhea1's raw performance may seem dated by the time samples are available. SiPearl initially explored a RISC‑V architecture back in 2019 but abandoned it after early feedback highlighted the instruction set's immaturity for exascale computing.

Development was further interrupted by shifting core‑count debates, with teams alternately considering 72 cores, then 64, before finally settling on 80 cores by 2022. Those back‑and‑forth decisions, combined with evolving performance expectations, helped push the timeline back by years. Despite missing its original schedule, Rhea1 remains vital to European ambitions for high‑performance computing sovereignty and serves as the intended CPU for the Jupiter supercomputer. Thanks to Jupiter's modular design, the system was not left idle; its GPU booster module, running NVIDIA Grace Hopper accelerators, is already operational and approximately 80 percent complete. With the CPU clusters slated for mid-2026 deployment, full system readiness is expected by the end of 2026. To support this effort, SiPearl has recently secured €130 million in new financing from the French government, industry partners, and Taiwan's Cathay Venture. With Rhea1 now taped out, work on Rhea2 is already underway, and we can expect more updates on it in a year or two.



 
and Rhea1's raw performance may seem dated by the time samples are available
At this point I think owning your own stuff far outweighs raw performance. And you have to start building before you can learn how to build better. So as a first step this is just fine.
 
At this point I think owning your own stuff far outweighs raw performance. And you have to start building before you can learn how to build better. So as a first step this is just fine.
For the EU, dangers from running US-made processors are non-existent. And for the green energy agenda, it is much more dangerous to use an outdated processor that does less work per kWh than something much more modern that does more work per kWh. The efficiency tradeoff is an actual problem. Sovereign infrastructure is only a part of the solution. For China, it may make sense; for the EU, not so much.
 
At this point I think owning your own stuff far outweighs raw performance. And you have to start building before you can learn how to build better. So as a first step this is just fine.
Exactly this - along with fabs within one's sphere of influence, ideally in one's country - so that no other country can arbitrarily restrict supply on a whim.
 
For the EU, dangers from running US-made processors are non-existent.
what about the danger of economic non-viability? *TARIFFS INTENSIFIES*
And for the green energy agenda, it is much more dangerous to use an outdated processor that does less work per kWh than something much more modern that does more work per kWh. The efficiency tradeoff is an actual problem. Sovereign infrastructure is only a part of the solution. For China, it may make sense; for the EU, not so much.
I disagree. Silicon designers - all of which are US based - have zero competition but themselves (Chinese designers don't factor here, as they design for national needs with no exports), and it happens to be a very lucrative industry. Plus, the silicon engraving tools are designed and produced in the EU, which still makes me wonder how tf we went for so long without a local silicon manufacturing industry (I know we have one, I'm talking about high end/bleeding edge nodes for high end/high volume compute, not microcontrollers or automotive silicon).
We're decades late to the game, but any start is a good one. At the rate technologies are being developed nowadays, I'm sure the catch-up to acceptable compute levels will be reasonably paced and that in 10-20 years we'll have solid enough competitors on our hands *inhales hopium*
 
Yet more ARM E-Waste to be released late to a market that won't want it by the time it is available.
 
I didn't see perf/watt numbers in the article. If you have data, please compare against the most modern Western processors.
Jülich Supercomputing Center (JSC) plans to install 1,300 nodes, each containing two Rhea1 CPUs. That is about 5 PetaFLOPS of FP64, resulting in roughly 1.9 TeraFLOPS of FP64 per CPU. TDP numbers are unknown, meaning that we don't know perf/watt. We could use any Arm Neoverse V1 design to compare, but it depends on SiPearl's frequency tuning.
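Spelling that arithmetic out in a quick Python sketch (the node count, CPUs per node, and ~5 PFLOPS aggregate are the figures above; nothing else is assumed):

# Back-of-the-envelope FP64 throughput per Rhea1 CPU at JSC
nodes = 1300              # planned Rhea1 nodes
cpus_per_node = 2         # two Rhea1 CPUs per node
total_fp64_pflops = 5.0   # quoted aggregate FP64 figure

cpus = nodes * cpus_per_node                      # 2,600 CPUs
per_cpu_tflops = total_fp64_pflops * 1000 / cpus  # PFLOPS -> TFLOPS
print(f"{cpus} CPUs, ~{per_cpu_tflops:.2f} TFLOPS FP64 each")  # ~1.92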
 
Exactly, that's what I mean. We have no data on power consumption, and I think the CPU performance data is also too preliminary.
 
It’s a good start; few will care if it’s not the most cutting-edge and/or the best price/performance. Those things will take time and iterations.

It has to start; having a « sovereign » design is too important.
 
And for the green energy agenda, it is much more dangerous to use an outdated processor that does less work per kWh than something much more modern that does more work per kWh.
I find this disagreeable. While, yes, more modern processors are more efficient in an absolute sense, within the context of 'green' operations, processors tend to worsen. It's more productive to push the hardware harder and harder to eke out more performance, get work done faster and faster and faster... and oft, any potential power consumption gains are nullified by an increase in power draw toward that end. No matter the per-watt performance of the processor, more watts is more watts, and more watts means more petroleum burned to keep up with demand.
 
I find this disagreeable. While, yes, more modern processors are more efficient in an absolute sense, within the context of 'green' operations, processors tend to worsen. It's more productive to push the hardware harder and harder to eke out more performance, get work done faster and faster and faster... and oft, any potential power consumption gains are nullified by an increase in power draw toward that end. No matter the per-watt performance of the processor, more watts is more watts, and more watts means more petroleum burned to keep up with demand.
The way I see it: if processor X takes 10 seconds to do the job, consuming 10 W of power, and processor Y takes 6 seconds, consuming 12 W of power, Y is more power-efficient even while consuming more power. That is not a nullified performance increase, but a massive efficiency increase. While consumer CPUs are not the case here, server CPUs are measured on TCO, cost of infrastructure, and everything else. Hence, these processors are better off with a performance increase at a slight power increase than with the baseline expectation.
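A minimal sketch of that comparison (the 10 W/10 s and 12 W/6 s numbers are the hypothetical figures above):

# Energy per job (joules) = power (W) * time (s); lower is more efficient
def energy_per_job(power_w, time_s):
    return power_w * time_s

x = energy_per_job(10, 10)  # processor X: 100 J per job
y = energy_per_job(12, 6)   # processor Y:  72 J per job
print(f"X: {x} J/job, Y: {y} J/job")  # Y draws more power yet uses ~28% less energy per job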
 
A friend of mine shared a thought with me: it's a lot of meaningless work, in the sense that people work and train ever bigger models at the cost of ever more gigawatts, and ultimately with diminishing, even negative returns. Yes, I know Nvidia and TSMC made big money, but the average Joe is still left poor.
 
No matter the per-watt performance of the processor, more watts is more watts, and more watts means more petroleum burned to keep up with demand.
I assure you, it's not "petroleum" that's powering datacenters. 99% of them are hooked as directly as possible to a nuclear reactor: more power available on immediate demand, much stabler, less risky, ENORMOUSLY CHEAPER. Why do you think Microsoft wants to kickstart Three Mile Island up again, and on the EU side, why do you think new datacenters are primarily being set up here in France, which is 85% nuclear powered?

Offer a few Megawatts via nuclear or gas to corporations, show them how much each would cost, see which one gets picked...

The way I see it: if processor X takes 10 seconds to do the job, consuming 10 W of power, and processor Y takes 6 seconds, consuming 12 W of power, Y is more power-efficient even while consuming more power. That is not a nullified performance increase, but a massive efficiency increase. While consumer CPUs are not the case here, server CPUs are measured on TCO, cost of infrastructure, and everything else. Hence, these processors are better off with a performance increase at a slight power increase than with the baseline expectation.
Bingo, this is precisely what companies are after, the lowest TCO possibly achievable. Getting 300W CPUs instead of 200W ones makes sense if the 300W ones finish the job in a time that shrinks by more than the power draw goes up.
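A rough sketch of that break-even condition (the 200 W and 300 W figures are from the post; the job times are made-up placeholders):

# Worth it only if energy per job drops: new_power * new_time < old_power * old_time
old_power_w, old_time_s = 200, 90  # 200 W CPU, hypothetical 90 s per job
new_power_w = 300                  # 300 W CPU
break_even_s = old_power_w * old_time_s / new_power_w  # 60 s
new_time_s = 55                    # hypothetical measured runtime on the 300 W part
better = new_power_w * new_time_s < old_power_w * old_time_s
print(f"break-even at {break_even_s:.0f} s; faster part worth it: {better}")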
 

Bingo, this is precisely what companies are after, the lowest TCO possibly achievable. Getting 300W CPUs instead of 200W ones makes sense if the 300W ones finish the job in a time that shrinks by more than the power draw goes up.
During the great crypto mining craze and then calculations for LLM training, up to the moment I write, so many calculations have been performed that if these calculations were useful, our civilization would have advanced scientifically and technically by a thousand years, just in these 10 or so calendar years. Is such progress of civilization a fact? It is not a fact, on the contrary, there is almost no progress, except in the wealth of a small part of humanity.
 
During the great crypto mining craze and then calculations for LLM training, up to the moment I write, so many calculations have been performed that if these calculations were useful, our civilization would have advanced scientifically and technically by a thousand years, just in these 10 or so calendar years. Is such progress of civilization a fact? It is not a fact, on the contrary, there is almost no progress, except in the wealth of a small part of humanity.
If scientific research brought in money, these same organizations would be funding clusters of supercomputers, why do you think these are almost all funded and operated by states ?
 
If scientific research brought in money, these same organizations would be funding clusters of supercomputers, why do you think these are almost all funded and operated by states ?
Apparently billionaires have an allergy to science. They prefer the iterative approach in which data is obtained by RUD (Rapid Unscheduled Disassembly). Moreover, serious science uses complex mathematics that is unfamiliar to them and they are afraid of it. :D
 
The way I see it: if processor X takes 10 seconds to do the job, consuming 10 W of power, and processor Y takes 6 seconds, consuming 12 W of power, Y is more power-efficient even while consuming more power. That is not a nullified performance increase, but a massive efficiency increase. While consumer CPUs are not the case here, server CPUs are measured on TCO, cost of infrastructure, and everything else. Hence, these processors are better off with a performance increase at a slight power increase than with the baseline expectation.
And I do not disagree. In absolute terms, yes, processor Y is more efficient and the obvious choice for a business concerned with productivity. My problem comes in that the power consumption has increased, regardless of other contextual factors. Work done has increased per watt-hour, but the issue is that the work only stops for these processors when they are obsolete, and there is no guarantee that power consumption will be reduced for future processors. More watts is more watts.

I assure you, it's not "petroleum" that's powering datacenters. 99% of them are hooked as directly as possible to a nuclear reactor: more power available on immediate demand, much stabler, less risky, ENORMOUSLY CHEAPER. Why do you think Microsoft wants to kickstart Three Mile Island up again, and on the EU side, why do you think new datacenters are primarily being set up here in France, which is 85% nuclear powered?

Offer a few Megawatts via nuclear or gas to corporations, show them how much each would cost, see which one gets picked...
This is expressly false. There are three plans for nuclear power supply to three major American companies, but those will not be operational until the turn of the decade, whether because of refurbishment and repairs (Three Mile Island) or because the technology isn't yet available commercially (SMR projects). For everybody else and for current datacenters, the most common form of on-site power is petroleum (which, disclaimer, I do include gas-fired in my use of the term)... if it is on-site.

In terms of grid power, the USA and Europe primarily rely on petroleum plants with coal plants as secondary sources; nuclear, wind, solar, and hydro make up the difference. They simply haven't become that popular OR that cheap. In this regard, France is a bit of an outlier in terms of domestic production. At the same time, however, it is not markedly popular in terms of planned datacenters. That distinction goes to the UK, Germany, and the Netherlands, which I would attribute to stronger internet infrastructure/regional demand. I might concede that in terms of proportion to extant datacenters, France may be one of the fastest growers.

Per megawatt-hour, gas is cheap and coal is dirt cheap, even after carbon taxes. Nuclear is the only 'zero-emission' source whose output isn't otherwise limited by environmental factors, but its primary limitation is installation cost and time to build. A midsize plant costs BILLIONS (whether that cost is justified or not) and takes years to build, nevermind how long it takes to break even in terms of revenue. SMRs solve this problem indirectly, but again, they're not quite there yet. A gas-fired plant is pocket change in comparison and ROI comes much more quickly.
 
For the EU, dangers from running US-made processors are non-existent. And for the green energy agenda, it is much more dangerous to use an outdated processor that does less work per kWh than something much more modern that does more work per kWh. The efficiency tradeoff is an actual problem. Sovereign infrastructure is only a part of the solution. For China, it may make sense; for the EU, not so much.
No worries as long as the EU keeps its head down and does what it's told. But there might come a time when the EU will want to step out of line, and pricing or export controls might just bite it in the ass. The US has shown how easy it is to become an unreliable or outright hostile partner. And the "per kWh"/efficiency metric becomes less relevant as we move to renewables. Good to have, but not having it is no reason to just stop. The EU doesn't have to go around replacing every CPU within its borders, just work on a decent alternative. Alternatives are always good, for sovereignty, for competition, you name it.

Making sure the EU has access to tech regardless of what hostile or insane person might rule the other side of the ocean is essential and there's no reason to not even try. Not being the best doesn't mean "why even bother". Anything else is "Neville Chamberlain in 1938" level of shortsightedness.
 
This is expressly false. There are three plans for nuclear power supply to three major American companies, but those will not be operational until the turn of the decade, whether because of refurbishment and repairs (Three Mile Island) or because the technology isn't yet available commercially (SMR projects). For everybody else and for current datacenters, the most common form of on-site power is petroleum (which, disclaimer, I do include gas-fired in my use of the term)... if it is on-site.
I didn't mention on-site.
In terms of grid power, the USA and Europe primarily rely on petroleum plants with coal plants as secondary sources; nuclear, wind, solar, and hydro make up the difference. They simply haven't become that popular OR that cheap. In this regard, France is a bit of an outlier in terms of domestic production. At the same time, however, it is not markedly popular in terms of planned datacenters. That distinction goes to the UK, Germany, and the Netherlands, which I would attribute to stronger internet infrastructure/regional demand. I might concede that in terms of proportion to extant datacenters, France may be one of the fastest growers.
And that's why I explicitly mentioned France, and not Europe as a whole. There have been billions' worth of investments announced earlier this year, and we already have solid preexisting infrastructure (in power, network nodes -thank *fuck* for the solid competitive ISP market here- and current datacenters) and a solid nuclear supply and industry, which is getting ramped up again. Also, it probably helps that our electricity is the cheapest in Europe, especially at large scale.
Per megawatt-hour, gas is cheap and coal is dirt cheap, even after carbon taxes. Nuclear is the only 'zero-emission' source whose output isn't otherwise limited by environmental factors, but its primary limitation is installation cost and time to build. A midsize plant costs BILLIONS (whether that cost is justified or not) and takes years to build, nevermind how long it takes to break even in terms of revenue. SMRs solve this problem indirectly, but again, they're not quite there yet. A gas-fired plant is pocket change in comparison and ROI comes much more quickly.
The three countries that produce any significant amount of energy from coal are Germany, Poland and Turkey, and the curve falls so sharply after the third that the fourth and beyond aren't relevant to mention. Coal also has all the problems gas has, but ten times worse: ramping power up and down is *extremely* unresponsive compared to nuclear. Add to this the unreliability of gas supply because of geopolitics *coughrussiacoughormuzcough*, and it once again shows that datacenter building locations of interest systematically go back to countries with significant nuclear power.

Or Scandinavian countries with all their dams, but then you have to run fiber through the Baltics, and I heard there were some issues with underwater cables at this time of the decade...
 
For the EU, dangers from running US-made processors are non-existent. And for the green energy agenda, it is much more dangerous to use an outdated processor that does less work per kWh than something much more modern that does more work per kWh. The efficiency tradeoff is an actual problem. Sovereign infrastructure is only a part of the solution. For China, it may make sense; for the EU, not so much.
That's... a very interesting statement, and I do wonder what you base this on.

The take in the EU is not exactly what you are describing here; what you are describing is a 'das war einmal' (that's a thing of the past) situation.

The US has already shown its true face to the EU recently, for example by cutting off Microsoft services to the ICC because it investigates Israel. And that's just the tip of the iceberg. But acts like that are a reality check and you can rest assured we'll want to guard critical systems preferably without any ties to the US.

Governments are investigating how to decouple ASAP.
Non-US based companies are doing the exact same thing.
Decoupling means strategic and technical autonomy. That includes the hardware, even if not today, then eventually. Intel is the first to go, not least because it is also no longer an economically sound choice when replacing hardware. But there is zero doubt we will replace it for all mission-critical systems.

The trust pact between the EU and US is irreparably damaged, and it will erode the power of the US everywhere. It'll take a while though, because we're slow AF.

As for processing power and efficiency... that is a matter of perspective and our pockets are pretty deep.
 