Tuesday, December 22nd 2020

NVIDIA to Introduce an Architecture Named After Ada Lovelace, Hopper Delayed?

NVIDIA launched its GeForce RTX 3000 series of graphics cards based on the Ampere architecture three months ago. However, we are already getting information about the next generation the company plans to introduce. In the past, rumors suggested that the architecture coming after Ampere would be called Hopper, and that it would bring multi-chip module (MCM) packaging technology. However, thanks to @kopite7kimi on Twitter, a reliable source of information, we now have word that NVIDIA is reportedly working on a monolithic GPU architecture whose internal codenames follow the "ADxxx" pattern.

The new monolithically-designed Lovelace architecture is going to make its debut on the 5 nm semiconductor manufacturing process, a whole year earlier than Hopper. It is unknown which foundry will manufacture the GPUs; however, both of NVIDIA's partners, TSMC and Samsung, are capable of producing them. Hopper is expected to arrive sometime in 2023-2024 and utilize MCM technology, while the Lovelace architecture will appear in 2021-2022. We are not sure if the Hopper architecture will be exclusive to data centers or extend to the gaming segment as well. The Ada Lovelace architecture is supposedly going to be a gaming GPU family. Ada Lovelace, a British mathematician, appeared on NVIDIA's 2018 GTC "Company of Heroes" t-shirt, so NVIDIA may have been using the ADxxx codenames internally for quite some time.
Sources: kopite7kimi, via VideoCardz

35 Comments on NVIDIA to Introduce an Architecture Named After Ada Lovelace, Hopper Delayed?

#26
Anymal
She was forced, Linda. Ada? Was she forced into anything?
Posted on Reply
#27
zlobby
R-T-B: Go directly to horny jail.
Bonk'd! :D
Posted on Reply
#28
cueman
NVIDIA Lovelace 5 nm
AMD RDNA3 5 nm


I really hope for this; then pure engineering work on both sides will show clearly which one has the fastest and most efficient GPUs. I guess they'll be out Q1-Q2 2022.

A few things are sure.

First, AMD can't do the same thing it did with the RX 6000 series, i.e. take the 'old' RX 5000 series GPUs and squeeze them into one GPU (roughly 2x 5700 XT); 3x 5700 XT isn't possible. So AMD must create a brand new design. Let's see their skills.

NVIDIA... well, Lovelace is something new that nobody knows about except NVIDIA's engineers and leaders, but what is sure is that it's on a 5 nm process. Whether it's Samsung or TSMC we will see, but not long ago we heard that Samsung and NVIDIA made a new deal. Does that mean Samsung's engineers have convinced NVIDIA's people that their process is better, or is it cheaper, or both?

We will see.

I hope we see AMD RDNA3 and NVIDIA Lovelace in early 2022, Q1-Q2. I can't believe it will come any earlier.

Before that we'll see NVIDIA Super models and of course the RTX 3080 Ti and other variants from NVIDIA as well. How about RTX 3070 SLI? I say: no.

Also, where is Intel's GPU? It would be nice to see a third GPU vendor join the battle line.


I'm just wondering: sure, both of those will be tremendously fast, and we can forget about going back under 300 W. The question is, where do we need so much power, because only a few percent of gamers use a 4K monitor.

Hmm, soon a single GPU will draw as much power as a whole PC rig did before... not good.
Posted on Reply
#29
bug
Hardware Geek: That's a legitimate point, but it seems like they do it much more often with far smaller performance increases between each variant. I'm sure better binning of the chips has an impact on it, but it's getting kind of ridiculous. Look at the xt series on ryzen 2. 100mhz increases when you're already running at 4.6ghz is around a 2% difference. That's not an upgrade, that's a margin of error. And I fully expect my next upgrade to be a zen 3 apu, but that kind of marketing stunt is off-putting.
Yeah well, what can you do, marketing is like that.
I don't look at each and every generation as a must-have upgrade (because, like you noted, improvements can be pretty minute). I just look at it like "$200 bought you 100% performance last year, this year it buys you 105%".
Posted on Reply
#30
medi01
In a little more than a year, Lisa Su's AMD has managed to force NV's lineup into ridiculous last-minute measures, with laughable memory configurations and power consumption, with new power sockets, and all that at an unexpectedly low price.

All the lame excuses now about "but node advantage" are just an attempt to address that acute green pain in the lower back of the respective green zealots.

While the node advantage is indeed there, AMD also happens to use SMALLER chips that consume LESS power. That is exactly what one would expect a node shrink to do. Compare transistor counts: AMD has matched the green chips at perf/transistor, beaten them at perf/watt, AND is using slower VRAM.

What remains is the lame "but RT" argument. Two years into ungodly RT goodness, we have a "whopping" two dozen games, most of which exhibit barely any visual difference, with the most notable "feature" RT brings being a sharp drop in your framerate.

Oh, but to "compensate" there is "but if you run it at a lower resolution... from far above you wouldn't notice... especially if we apply that glorified TAA derivative we call 'DLSS 2'".


At this pace, RDNA3 will at the very least exchange punches with the most obnoxiously priced crap team green will roll out, but we can also get into this:

Posted on Reply
#31
Prima.Vera
I thought the architecture's name was Linda Lovelace.... That would have been deep.
Only for connoisseurs. ;)
Posted on Reply
#32
R-T-B
Prima.Vera: I thought the architecture's name was Linda Lovelace.... That would have been deep.
Only for connoisseurs. ;)
R-T-B: Go directly to horny jail.
Posted on Reply
#33
Hardware Geek
bug: Yeah well, what can you do, marketing is like that.
I don't look at each and every generation as a must-have upgrade (because, like you noted, improvements can be pretty minute). I just look at it like "$200 bought you 100% performance last year, this year it buys you 105%".
Agreed. I used to be on the new processor every gen bandwagon but when I was a kid, you could still push processors and increase performance by 50% or more. I didn't need a benchmark to "prove" it was faster and subsequent generations were still making 20-30% gen on gen gains. It's only now that processors are getting high core counts that I'm actually considering an upgrade, although I'll probably wait for zen 3+ with ddr5.
Posted on Reply
#34
bug
Hardware Geek: Agreed. I used to be on the new processor every gen bandwagon but when I was a kid, you could still push processors and increase performance by 50% or more. I didn't need a benchmark to "prove" it was faster and subsequent generations were still making 20-30% gen on gen gains. It's only now that processors are getting high core counts that I'm actually considering an upgrade, although I'll probably wait for zen 3+ with ddr5.
Yes, the moment I realized I needed benchmarks to see the improvements was when I also stopped fretting over each and every release. Great minds think alike, right? ;)
Posted on Reply
#35
mike dar
A Japanese firm has figured out how to replace silicon with antimony... thinner, and noticeably better at holding a charge. So we (not me, too old) will see 1 nm in 10-20 years. Finer than brain material; coding will be available then... Rise of the Machines will happen. I always wanted an AI of my own.
Posted on Reply