
NVIDIA GeForce RTX 40 Series "AD104" Could Match RTX 3090 Ti Performance

Joined
Jan 15, 2021
Messages
337 (0.29/day)
400 Watts and only 7K CUDA cores... also ppl arguing that the 980 Ti could OC need to realize that 99.99% of ppl don't OC their GPU.
 

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
12,944 (2.61/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV 1950Mhz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | Asus 24" IPS (portrait mode)
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Razer Huntsman Tournament Edition
Software Windows 11 Pro 64-Bit
Don't tell us to shut up about it if you can't handle proper criticism.
It would help if the criticism also came from people who knew a thing or two about what's going on in the semiconductor industry right now. But no, it's coming from people who have zero clue about the current limitations being hit by chip designers, and the never-ending demand for higher performance.
 
Last edited:
Joined
Nov 24, 2017
Messages
853 (0.37/day)
Location
Asia
Processor Intel Core i5 4590
Motherboard Gigabyte Z97x Gaming 3
Cooling Intel Stock Cooler
Memory 8GiB(2x4GiB) DDR3-1600 [800MHz]
Video Card(s) XFX RX 560D 4GiB
Storage Transcend SSD370S 128GB; Toshiba DT01ACA100 1TB HDD
Display(s) Samsung S20D300 20" 768p TN
Case Cooler Master MasterBox E501L
Audio Device(s) Realtek ALC1150
Power Supply Corsair VS450
Mouse A4Tech N-70FX
Software Windows 10 Pro
Benchmark Scores BaseMark GPU : 250 Point in HD 4600
Not quite. Ampere has a high TDP because it is doing hardware real-time raytracing, which is a VERY complex and compute-heavy type of task. When RTRT is not being performed, Ampere GPUs are good on power. Turing is/was no different. AMD's RTRT functionality is no different; turn on raytracing and power usage takes a big bump.
Doing hardware raytracing while running non-raytraced games!!! :roll:
So you are telling me that TPU was able to run the same raytraced test to measure power on a GTX 1630?? You need to read the TPU review methodology properly. If you're too lazy to read it, it says: Gaming: Cyberpunk 2077 is running at 2560x1440 with Ultra settings and ray tracing disabled. We ensure the card is heated up properly, which ensures a steady-state result instead of short-term numbers that won't hold up in long-term usage.

No matter how Nvidia fanboys try to spin it, Ampere is inefficient. Let me give you an example:
Card Name (GPU) | Manufacturing Node | FP32 TFLOPS | Power (TBP)
MI50 (Vega 20) | TSMC 7nm | 13.3 | 300W
MI100 (Arcturus) | TSMC 7nm | 23.1 | 300W
Tesla V100 (GV100) | TSMC 12nm | 14.13 | 250W (up to 300W version)
A100 (GA100) | TSMC 7nm | 19.5 | 300W (up to 500W version)
Here we can see Vega 20 to Arcturus on the same node, with a new hardware matrix unit: 1.73x the FP32 performance at the same TBP. Whereas from GV100 to GA100, going from TSMC's 12nm to 7nm (a shrink that should bring roughly 60% power reduction), there is only 1.38x the FP32 performance and 50W more power. If Ampere were efficient, the TBP would have been lower or the same. But that did not happen. The TBP rose, which means one thing: Ampere is not efficient.
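If anyone wants to sanity-check that comparison, here is a quick perf-per-watt calculation from the numbers in the table above (a rough sketch only, since TBP is a board power limit rather than measured draw):

```python
# Perf-per-watt from the table above. TBP is a power limit, not measured draw,
# so the ratios are only illustrative.
cards = {
    "MI50 (Vega 20)":     (13.3,  300),   # (FP32 TFLOPS, TBP in watts)
    "MI100 (Arcturus)":   (23.1,  300),
    "Tesla V100 (GV100)": (14.13, 250),
    "A100 (GA100)":       (19.5,  300),
}

eff = {name: tflops / watts for name, (tflops, watts) in cards.items()}
for name, e in eff.items():
    print(f"{name}: {e * 1000:.1f} GFLOPS per watt")

print("Vega 20 -> Arcturus:", round(eff["MI100 (Arcturus)"] / eff["MI50 (Vega 20)"], 2), "x perf/W")
print("GV100 -> GA100:", round(eff["A100 (GA100)"] / eff["Tesla V100 (GV100)"], 2), "x perf/W")
```

Roughly 1.74x better perf/W generation-on-generation for AMD's compute cards versus roughly 1.15x for Nvidia's, despite the node shrink.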

It's not that simple. Just look at the A4000. A big part of efficiency just depends on the performance you target in terms of clock speed and the yields you target. Higher clock speeds mean less efficiency. It's possible bad yields resulted in the 3000 series being relatively inefficient.

Just look at how efficient the A4000 is: less power than a 6600 XT while performance is equal to a 3060 Ti. Just a result of a bigger chip with lower clocks, possibly binned for low voltages.

A highly binned A4000 will be efficient in the same way the Vega Pro 64 is more efficient than the consumer version. The original Vega can run at 150-180W when undervolted. So can we say that Vega was efficient and that all the Nvidia fans who dump on Vega are dishonest propagandists?

Edit: Looks like @Beertintedgoggles already pointed out the power consumption testing part.
 
Joined
Sep 13, 2021
Messages
86 (0.09/day)
I will wait until cards are available, with a second revision free of early-adopter problems and optimized drivers, to see the power consumption in my preferred applications/games under real conditions.
400W for 3090 Ti performance would be fine for me, with a bit of undervolting and a boost clock limit for fine-tuning. 12 GB of VRAM is OK for the next 4 years; I disliked the 8GB 3070 and 10GB 3080.
 
Joined
Dec 12, 2016
Messages
1,189 (0.45/day)
Doing hardware raytracing while running non-raytraced games!!! :roll:
So you are telling me that TPU was able to run the same raytraced test to measure power on a GTX 1630?? You need to read the TPU review methodology properly. If you're too lazy to read it, it says: Gaming: Cyberpunk 2077 is running at 2560x1440 with Ultra settings and ray tracing disabled. We ensure the card is heated up properly, which ensures a steady-state result instead of short-term numbers that won't hold up in long-term usage.

No matter how Nvidia fanboys try to spin it, Ampere is inefficient. Let me give you an example:
Card Name (GPU) | Manufacturing Node | FP32 TFLOPS | Power (TBP)
MI50 (Vega 20) | TSMC 7nm | 13.3 | 300W
MI100 (Arcturus) | TSMC 7nm | 23.1 | 300W
Tesla V100 (GV100) | TSMC 12nm | 14.13 | 250W (up to 300W version)
A100 (GA100) | TSMC 7nm | 19.5 | 300W (up to 500W version)
Here we can see Vega 20 to Arcturus on the same node, with a new hardware matrix unit: 1.73x the FP32 performance at the same TBP. Whereas from GV100 to GA100, going from TSMC's 12nm to 7nm (a shrink that should bring roughly 60% power reduction), there is only 1.38x the FP32 performance and 50W more power. If Ampere were efficient, the TBP would have been lower or the same. But that did not happen. The TBP rose, which means one thing: Ampere is not efficient.


A highly binned A4000 will be efficient in the same way the Vega Pro 64 is more efficient than the consumer version. The original Vega can run at 150-180W when undervolted. So can we say that Vega was efficient and that all the Nvidia fans who dump on Vega are dishonest propagandists?

Edit: Looks like @Beertintedgoggles already pointed out the power consumption testing part.
Lexluthermeister is trying to say that with RT enabled on both Ampere and RDNA2-based cards, Nvidia is more efficient because of the much higher performance. The problem is that neither of these architectures' power numbers are measured with RT enabled. He/she is conjecturing without measurements. So in lieu of actual measurements with RT enabled, we have TPU's Perf/W numbers. From the latest 6950 XT review:
(attached image: Performance per Watt chart from TPU's RX 6950 XT review)

The numbers are top-loaded with RDNA2 cards.
 
Joined
Dec 5, 2013
Messages
600 (0.16/day)
Location
UK
There is clearly a barrier neither Nvidia nor AMD can pass through; they cannot increase performance in a meaningful way in the 2-year cycle without going crazy on power draw, so there's no point in beating the dead horse.
For the top end, maybe, but there's really no technological limitation stopping nVidia from making low-to-mid-range cards "wider but slower" (as they are already doing with mobile chips). AMD certainly weren't held back when they made the RX 6600, an almost-RTX 3060 (180W) with the TDP of a GTX 1660 (120W), then priced it like a 130W RTX 3050; much of that is down to hitting the sweet spot rather than going beyond it.

All 4 of the past nVidia GPUs I've owned undervolted to 0.85-0.90V (from 1.05V) while retaining stock frequency. Reduce that a little to 1700-1800MHz and voltage (and TDP) just falls away. Make the chips a little wider and you've gained back what you lost in frequency, without adding anywhere near as much TDP or voltage back on. They do this all the time with mobile chips.

The "limitation" stopping them from doing more of this for desktop is entirely 'political', i.e. they simply don't want to tell overly-complacent stockholders that the "low hanging fruit has all been picked", and that they may have to accept 5% lower margins in order to use 5-10% larger die sizes to make a +50% better product, in case those stockholders start voting against the executive bonuses...
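To put rough numbers on the "wider but slower" idea, here is a back-of-the-envelope sketch using the usual dynamic-power approximation (power roughly proportional to unit count x frequency x voltage squared). The baseline clock, voltage and scaling factors below are illustrative assumptions, not measurements of any real chip:

```python
# Back-of-the-envelope "wider but slower" estimate using P ~ units * f * V^2.
# All baseline figures are made-up illustrations; real silicon adds static
# power, memory power and non-linear V/F curves.
def relative_power(units, freq_mhz, volts, base=(1.0, 1900, 1.05)):
    b_units, b_freq, b_volts = base
    return (units / b_units) * (freq_mhz / b_freq) * (volts / b_volts) ** 2

def relative_perf(units, freq_mhz, base=(1.0, 1900)):
    b_units, b_freq = base
    return (units / b_units) * (freq_mhz / b_freq)  # assumes perfect scaling with width

# A stock-ish config vs. a ~10% wider chip clocked lower and undervolted
configs = {
    "stock":        dict(units=1.0, freq_mhz=1900, volts=1.05),
    "wider+slower": dict(units=1.1, freq_mhz=1750, volts=0.90),
}

for name, cfg in configs.items():
    print(f"{name}: perf x{relative_perf(cfg['units'], cfg['freq_mhz']):.2f}, "
          f"power x{relative_power(**cfg):.2f}")
```

Under those assumptions, a slightly wider chip at lower clocks and lower voltage lands at roughly the same performance for about three quarters of the power, which is the whole argument in a nutshell.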
 
Last edited:
Joined
Dec 5, 2020
Messages
159 (0.13/day)
@AleksandarK
Rumors before the 30 series launch underestimated the CUDA count by a large margin.
It's likely they're overestimating performance this time to reach an equilibrium.

The only impressive leak was the cooler which wasn't from kopite7kimi anyway.

look at his numbers:
View attachment 256640
he predicted half the CUDA cores for the 3090: a leaked 5248 vs the real 10496
his 4352 for the 3080 (which is the same core count the 2080 Ti had) is less than half the actual 8960
his leak is exactly half for the 3070 and 3070 Ti


And here are his crazy 20GB rumors about the 3080 Ti

View attachment 256643
You're trying to dismiss his leaks with those tweets, but that first tweet is actually pretty accurate. The only thing he didn't know was the SM layout and that the CUDA cores per SM doubled. He predicted the raw specs of Ampere more than a year before launch.

He accurately leaked the raw layout of all Ampere chips, and that they were made at Samsung, more than 1.5 years before launch. That was the most impressive leak imo.



He also leaked TSE scores back then for GA102 just like he did this time.

I'm pretty sure those SKUs were being tested. They just didn't release. Companies test many potential products that never release
 
Joined
Jun 23, 2017
Messages
5 (0.00/day)


The GTX 1080 was 25% faster when comparing OC versions, and 20% for the OC-vs-OC comparison. And it did that with 10% fewer transistors (7.2 vs 8 billion), with the die shrinking from ~600mm² to ~300mm², and an impressive 59% improvement between the FE cards and the OC-vs-OC comparison.

Now the 4070 Ti has more transistors, more L2 and more ROPs, but is cut to a 192-bit bus using the same G6X memory speed. No improvement there; bandwidth is cut to less than half.
I got a 980 Ti from Asus with 1500 MHz on the core, then I bought a GTX 1080 at 2078 MHz, and there was only a 5% difference, but a huge difference in watts.
 
Joined
Jun 10, 2014
Messages
2,890 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Personally, I usually draw the line at 250W for GPUs, as that's roughly the point where a card can be reasonably cooled with air without too much noise. 400W+ is beyond what I would recommend, at least for people who don't really "need" it for a good reason.
But let's keep in mind that we won't see the real world power draw until the final products are here, and engineering samples commonly have higher power limits.

Rumors before the 30 series launch underestimated the CUDA count by a large margin.
It's likely they're overestimating performance this time to reach an equilibrium.
That's because the vast majority of "leaks" are not leaks at all, just random nobodies on Reddit, Twitter or YouTube pulling numbers out of thin air.
The core count of the chips is set >1 year in advance (before tape-out); the only things that change once the chips are done are how many cores are enabled, clock speeds, TDP, price, etc. If you apply this knowledge, you can expose most of the fake ones. Just look at the track record of your favorite leakers; if they lied in the past, then they are full of s*** and they shouldn't be trusted again.
 

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
12,944 (2.61/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV 1950Mhz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | Asus 24" IPS (portrait mode)
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Razer Huntsman Tournament Edition
Software Windows 11 Pro 64-Bit
That better not be true. 256bit mem bus or NVidia can suck a d... duck, yes, suck a duck.
It may not be as bad as you think, if the L2 cache increased by as much as leaks indicate, or more.
 
Joined
Aug 21, 2013
Messages
1,669 (0.43/day)
Samsung's 8nm is quite a good node. Much better than Nvidia fanboys like to tell everyone. Ampere has a high TDP because Ampere is inefficient.
Nvidia fanboys bashing Samsung's 8nm? I was under the impression that they claim it's the best thing since sliced bread, because how could they say anything otherwise? The fact is that this node was never meant for high-power, complex chips like Ampere. It was meant for low-power smartphone SoCs like the one I have in my S10e, which is based on this same 8nm process. Samsung did scale it up, but it was never going to beat TSMC's 7nm in terms of efficiency.

Also, part of what makes Ampere look bad is the absurd power consumption of Micron's G6X memory. Had they used regular G6, even on the top-end models, the power consumption would have been more favorable.
 
Joined
May 8, 2018
Messages
1,495 (0.69/day)
Location
London, UK
3070 = 256-bit, 4070 = 192-bit; likely the next-gen 5070 will be 128-bit. Disappointing.
 
Joined
Jun 10, 2014
Messages
2,890 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Focusing just on the bus width is pure ignorance.
In the 80s and early 90s, bits were all the rage for gaming consoles (CPU register width at the time); 16 bits of power!
Still today, buyers focus on arbitrary specs, because at the end of the day, we all know specs are what make someone cool, not actual performance.
(sarcasm)
 
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Everyone always assumes Samsung's process sucked just because Nvidia OCed their parts too high to dominate said graphs out of the box. Below 2 GHz the 8nm node is great.

What I wouldn't give to find one of those A2000s to upgrade my SFF box.
Even sub-2GHz there's a lot of efficiency on the table. This PC in my living room has a 3060 in it.

Unigine Superposition 1080p High:
  • 60% TDP = 10060 points, 105W, 1675MHz
  • 75% TDP = 10920 points, 125W, 1785MHz
  • 85% TDP = 11300 points, 145W, 1830MHz
  • 100% TDP = 11530 points, 170W, 1890MHz
  • 110% TDP = 11740 points, 180W, 1935MHz (PerfCap = VRel)
Comparing 60% against stock, it's 87% of the performance for only 62% of the total board power.
Comparing 60% against an OC, it's 86% of the performance for only 58% of the total board power.
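Those percentages fall straight out of the run data above; a tiny script makes the comparison explicit (these numbers are from this particular card, so yours will vary):

```python
# Quick check of the percentages quoted above, using the Superposition runs
# from this specific 3060.
runs = {
    "60% TDP":       (10060, 105),   # (score, board power in watts)
    "stock":         (11530, 170),
    "110% TDP (OC)": (11740, 180),
}

def compare(a, b):
    (score_a, watts_a), (score_b, watts_b) = runs[a], runs[b]
    return score_a / score_b, watts_a / watts_b

for ref in ("stock", "110% TDP (OC)"):
    perf, power = compare("60% TDP", ref)
    print(f"60% TDP vs {ref}: {perf:.0%} of the performance at {power:.0%} of the board power")
```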

I haven't even bothered undervolting yet. It's inaudible under load set to 75% in afterburner and I'm only losing 5% of the stock performance for that privilege.
What I wouldn't give to find one of those A2000s to upgrade my SFF box.
Are you limited to low-profile? If not, just buy a dirt-cheap 3060 and run it at the lowest power limit you can. This Palit goes down to 55% which ends up drawing a fraction over 90W board power. With the patience to undervolt and tune you can probably get >1700MHz boost clocks from under 100W.
 
Joined
Jun 10, 2014
Messages
2,890 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Even sub-2GHz there's a lot of efficiency on the table.
<snip>
Comparing 60% against stock, it's 87% of the performance for only 62% of the total board power.
Comparing 60% against an OC, it's 86% of the performance for only 58% of the total board power.
I like the idea, I might try it out when I build an HTPC.
(Now with supplies getting better, there could be opportunities to get a good card at discount.)

But I do wonder though, how does this affect the frame rate consistency?

I haven't even bothered undervolting yet. It's inaudible under load set to 75% in afterburner and I'm only losing 5% of the stock performance for that privilege.
Undervolting is a little more "scary" though. Is it really worth the risk of crashing during gameplay or movies?
I would think that with 25% of the TDP shaved off, the cooler should be easily capable of cooling a card fairly silently, even if it was a higher TDP card than this.
 
Joined
Jul 9, 2015
Messages
3,413 (1.07/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Here in mainland Europe electricity expenses went up 200 to 300%
Where is that "mainland Europe" where electricity price trippled please? Average EU price was somewhere in 25-30 cents area, in which country would you pay 75-90 cents for a kilowatt - hour???

Rumors before the 30 series launch underestimated the CUDA count by a large margin.
NV simply claimed the cards had double the number of shaders they really had, just because those shaders could do FP+FP.
 
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
I like the idea, I might try it out when I build an HTPC.
(Now with supplies getting better, there could be opportunities to get a good card at discount.)

But I do wonder though, how does this affect the frame rate consistency?


Undervolting is a little more "scary" though. Is it really worth the risk of crashing during gameplay or movies?
I would think that with 25% of the TDP shaved off, the cooler should be easily capable of cooling a card fairly silently, even if it was a higher TDP card than this.

I mentioned my 3060 because it's the same silicon as the RTX A2000 @TheinsanegamerN was lusting over.
The A2000 is 75W, slot-powered, and likely binned silicon picked for efficiency.

If you're truly after efficiency in the sub-100W range, you can actually go AMD, because raytracing is kind of irrelevant on these lower-end cards, so AMD's disadvantage also becomes irrelevant.
An RX 6600 seems to have a lot of voltage margin out of the factory; it's pretty likely that you can reduce the voltage by 150mV without any consequences, leading to a ~30% drop in TDP at the default clocks. If you give up a couple hundred MHz, then Reddit undervolting adventurers suggest a 75W RX 6600 should be able to run at 2.3-2.4GHz, which likely puts it quite a bit ahead of a 3060 running at the same 75W.
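As a rough sanity check on that kind of saving: at a fixed clock, core power scales roughly with voltage squared. The 1.15V stock figure below is an assumed value for illustration, and memory and static power are ignored, so real TDP savings will differ:

```python
# Rough fixed-clock undervolt estimate using P ~ V^2 for the core only.
# The 1.15V stock voltage is an assumption for illustration, not a measured value.
stock_v = 1.15
undervolt_v = stock_v - 0.150

core_power_ratio = (undervolt_v / stock_v) ** 2
print(f"Core power at {undervolt_v:.2f}V: about {core_power_ratio:.0%} of stock")
print(f"Estimated core power saving: about {1 - core_power_ratio:.0%}")
```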
 
Joined
Aug 21, 2013
Messages
1,669 (0.43/day)
Where is that "mainland Europe" where electricity price trippled please? Average EU price was somewhere in 25-30 cents area, in which country would you pay 75-90 cents for a kilowatt - hour???
30 sents. If it used to be 10 sents that that has indeed tripled. 30 was pretty high before. Now it's the norm.
I can for example say that i now pay same or more during the summer that use used to pay during the winter in the coldest months.
 
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
But I do wonder though, how does this affect the frame rate consistency?
It doesn't, at least not in a negative way.

The boost algorithm is primarily power-limited anyway. For every Ampere card I've ever tested, PWR and VRel are the only reasons I ever see, so the clocks of your GPU under normal use are always going to be bumping up against a software-defined power ceiling or voltage ceiling.

If you lower the power ceiling by setting a reduced power limit, the behaviour is exactly the same. The only possible side effect is that you're less likely to get voltage limits influencing the boost algorithm, which (if anything) is likely to make the boost clock more stable, not less stable. I suspect the difference is so tiny that it'd be lost in run-to-run variance, but if anything, I'd expect an underclocked card to be more consistent, not less.
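If anyone wants to watch this behaviour for themselves, a small logging script is enough. This is a minimal sketch assuming the pynvml bindings are installed (pip install nvidia-ml-py); it polls the first GPU once per second while you run a benchmark at different power limits:

```python
# Log GPU core clock against power draw so you can see the boost algorithm
# riding the power limit. Assumes pynvml (nvidia-ml-py) and an NVIDIA driver.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000   # mW -> W

try:
    while True:
        power_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000   # mW -> W
        clock_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
        print(f"{clock_mhz:4d} MHz  {power_w:6.1f} W  (limit {limit_w:.0f} W)")
        time.sleep(1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```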
 
Joined
Aug 6, 2020
Messages
729 (0.55/day)
3070 = 256-bit, 4070 = 192-bit; likely the next-gen 5070 will be 128-bit. Disappointing.


Well, they almost matched a 2060 with the 128-bit 3050 (clocked at the same 14 Gbps).

If mainstream Ada cards jump up to 18-20 Gbps GDDR6, then they can easily handle 3070-level performance at 192 bits. The double density means the 4070 gets a bump to 12GB, and the rest of the lineup jumps to 16GB or more.

I expect that they will want to maintain the current 3070 TDP of 225W for the 4070, and with that massive rumored performance bump (3090-level), you're going to have to cut power in unique ways (cutting the number of memory devices by nearly half is a good start!).
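For a rough idea of why faster GDDR6 can offset a narrower bus: peak bandwidth is just the bus width in bytes times the data rate. The 192-bit configurations below are the rumored ones being discussed here, not confirmed specs:

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps.
# The 192-bit entries are rumored configurations, not official specs.
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

configs = {
    "3070: 256-bit @ 14 Gbps":          (256, 14),
    "rumored: 192-bit @ 18 Gbps GDDR6": (192, 18),
    "rumored: 192-bit @ 20 Gbps GDDR6": (192, 20),
}

for name, (bits, rate) in configs.items():
    print(f"{name}: {bandwidth_gbs(bits, rate):.0f} GB/s")
```

So 192-bit at 18-20 Gbps lands in the same region as the 3070's 448 GB/s, before any help from a larger L2 cache.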
 
Last edited:
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
30 cents. If it used to be 10 cents, then that has indeed tripled. 30 was pretty high before; now it's the norm.
I can, for example, say that I now pay the same or more during the summer as I used to pay during the winter in the coldest months.
I was on a fixed tariff of 16.6p per kWh during lockdown. Currently I'm paying 33p per kWh, and it's about to go up again by 20% in October, so in the space of a couple of years I'll have seen a 2.4x increase.

In 1990, when I was learning about electricity in school, we had a particularly stable period lasting most of a decade where electricity in the UK cost 5p/kWh. Prices have roughly doubled with inflation since 1992, but energy costs have increased almost 7x and will soon be 9x higher. That's basically a 4.5x increase in real cost over the last 30 years, adjusting for inflation, and the bulk of that increase has been very recent.

Source:

Admittedly, this silly country did a big messy brexit-shaped shit on the carpet, so things might not be quite so bad elsewhere in Europe.
 
Joined
Jul 5, 2013
Messages
25,559 (6.52/day)
You got called out on an inaccurate statement.
No I didn't. You are not reading the material correctly or understanding the context.
Reviews here on TPU show the power measurement numbers with ray tracing disabled.
On that one review, for that specific game. For everything else, cards are tested with RTRT on. Why? Because Cyberpunk 2077 is the new Crysis. It will bring any system it runs on to its knees, and with the way W1zzard usually conducts testing, it would bring the frame rates to a crawl, which would interfere with the power usage results. So for that ONE game, RTRT is turned off.

Do we understand the context now?

Doing hardware raytracing while running non-raytraced games!!! :roll:
So you are telling me that TPU was able to run the same raytraced test to measure power on a GTX 1630?? You need to read the TPU review methodology properly. If you're too lazy to read it, it says: Gaming: Cyberpunk 2077 is running at 2560x1440 with Ultra settings and ray tracing disabled. We ensure the card is heated up properly, which ensures a steady-state result instead of short-term numbers that won't hold up in long-term usage.
No matter how Nvidia fanboys try to spin it, Ampere is inefficient. Let me give you an example:
Card Name (GPU) | Manufacturing Node | FP32 TFLOPS | Power (TBP)
MI50 (Vega 20) | TSMC 7nm | 13.3 | 300W
MI100 (Arcturus) | TSMC 7nm | 23.1 | 300W
Tesla V100 (GV100) | TSMC 12nm | 14.13 | 250W (up to 300W version)
A100 (GA100) | TSMC 7nm | 19.5 | 300W (up to 500W version)
Here we can see Vega 20 to Arcturus on the same node, with a new hardware matrix unit: 1.73x the FP32 performance at the same TBP. Whereas from GV100 to GA100, going from TSMC's 12nm to 7nm (a shrink that should bring roughly 60% power reduction), there is only 1.38x the FP32 performance and 50W more power. If Ampere were efficient, the TBP would have been lower or the same. But that did not happen. The TBP rose, which means one thing: Ampere is not efficient.

A highly binned A4000 will be efficient in the same way the Vega Pro 64 is more efficient than the consumer version. The original Vega can run at 150-180W when undervolted. So can we say that Vega was efficient and that all the Nvidia fans who dump on Vega are dishonest propagandists?

Edit: Looks like @Beertintedgoggles already pointed out the power consumption testing part.
Wow. Just wow.

You know the old saying: Ignorance is bliss..

It may not be as bad as you think if the l2 cache increased by as much as leaks indicate or more
I don't care? 256bit or they can take a flying leap..

Focusing just on the bus width is pure ignorance.
Rendering an opinion without historical context and technological understanding is unadulterated ignorance.
 
Last edited: