
NVIDIA Could Ready HD 4670 Competitor

DarkMatter

@Darkmatter

- Only GT200 can dual-issue MADD and MUL ops all the time. G8x/G9x generation chips can't do it all the time; there are only a select few scenarios where you can dual-issue MADD and MUL ops.

- I didn't: 1375 MHz * 2 FLOPs * 32 shaders = 88 GFLOPS

- You are wrong about it being SIMD. ATI's shader involves a MIMD 5-way vector unit, MIMD meaning (contrary to SIMD) that several different instructions can be processed in parallel. The compiler is going to try to assemble simple operations in order to fill the 5-wide MIMD unit, but these 5 instructions cannot be dependent on each other. So even one shader can process different instructions at a time, let alone one cluster!
I assumed that only 3 instructions per shader can be issued on average in my real-life calculation, because of less-than-optimal code and inefficiencies.
So basically your conclusion is wrong!

Using my real-life calculation (it's just an estimate; a rough code sketch of the arithmetic follows the list):
9800GTX: 432 GFLOPS
HD3870: 248 GFLOPS
HD4670: 240 GFLOPS
9600GT: 208 GFLOPS
9500GT: 88 GFLOPS
HD3650: 87 GFLOPS
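Here is a rough Python sketch of the arithmetic behind these estimates. The clocks, unit counts and the "2 MADD + 1 MUL per VLIW unit" utilization assumption are the ones used in this thread, not official figures, so treat the output as illustration only.

```python
# Back-of-the-envelope shader throughput, using this thread's assumptions:
# - NVIDIA: each scalar shader does 2 FLOPs per clock (1 MADD, no extra MUL).
# - ATI: each 5-wide VLIW unit averages 2 MADD + 1 MUL = 5 FLOPs per clock
#   (i.e. only 3 of the 5 ALUs kept busy by imperfect code).

def nvidia_gflops(shader_clock_mhz, scalar_shaders, flops_per_shader=2):
    return shader_clock_mhz * scalar_shaders * flops_per_shader / 1000.0

def ati_gflops(core_clock_mhz, vliw_units, flops_per_unit=5):
    return core_clock_mhz * vliw_units * flops_per_unit / 1000.0

cards = [
    ("9800GTX", nvidia_gflops(1688, 128)),
    ("HD3870",  ati_gflops(775, 64)),   # 320 SPs = 64 five-wide units
    ("HD4670",  ati_gflops(750, 64)),   # 320 SPs = 64 five-wide units
    ("9600GT",  nvidia_gflops(1625, 64)),
    ("9500GT",  nvidia_gflops(1375, 32)),
    ("HD3650",  ati_gflops(725, 24)),   # 120 SPs = 24 five-wide units
]

for name, gflops in cards:
    print(f"{name}: ~{gflops:.0f} GFLOPS")
```

Run as-is, it reproduces the figures listed above (432, 248, 240, 208, 88, 87), so the whole disagreement is really about how many of the 5 ALU slots get filled in practice.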

If you check out the Crysis scores I posted previously, things start to make sense.
Now I know the HD4670 won't beat the 9600GT in Crysis because of many factors, but what ATI has done is basically slap the HD3870 shader engine into it, plus the RV7xx-generation architectural improvements.
nVidia, on the contrary, has made a die shrink of the G84 and clocked it higher.

(pls read my previous posts before you reply)

Sorry, but you are wrong. Well, in some way you could say it's MIMD, because R600/700 is composed of SIMD arrays of 5-wide superscalar shader processors controlled through VLIWs. BUT the MULTIPLE instruction part is INSIDE each shader, meaning that each ALU within the shader can process different instructions, BUT every SP in the SIMD array has to share the same VLIW instruction. My claim still remains true.

http://www.techreport.com/articles.x/12458/2
http://www.techreport.com/articles.x/14990/4

These stream processor blocks are arranged in arrays of 16 on the chip, for a SIMD (single instruction multiple data) arrangement, and are controlled via VLIW (very long instruction word) commands. At a basic level, that means as many as six instructions, five math and one for the branch unit, are grouped into a single instruction word. This one instruction word then controls all 16 execution blocks, which operate in parallel on similar data, be it pixels, vertices, or what have you.

And then there still remains the question of whether the drivers can take the usually linear code of games (linear in the sense that, AFAIK, they calculate different data types at different times, instead of everything being calculated concurrently) and effectively blend different types of instructions into one VLIW instruction in real time. "Real time" being the key. R600/700 was developed with GPGPU in mind, and there it can be used effectively; the inclusion of VLIW then makes sense. But IMO that is fundamentally impossible for the most part in real-time calculations. Probably if the shaders are doing vertex calculations the other 2 ALUs remain unused, and it's even worse if the operation requires fewer ALUs.
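To make the packing problem concrete, here is a toy Python sketch of greedy VLIW scheduling: up to 5 independent operations can share one instruction word, but an operation that depends on an earlier result has to wait for a later word, leaving ALU slots empty. The ops, the dependency model and the greedy scheme are invented purely for illustration; this is not how ATI's actual compiler works.

```python
# Toy VLIW packing: each op is (name, set of ops whose results it needs).
shader_ops = [
    ("a", set()),        # independent
    ("b", set()),        # independent
    ("c", {"a"}),        # needs a
    ("d", {"a", "b"}),   # needs a and b
    ("e", {"c"}),        # needs c
    ("f", set()),        # independent
]

WIDTH = 5                # ALU slots per VLIW word
words = []               # each word is a list of op names issued together
issued_in = {}           # op name -> index of the word it was issued in

for name, deps in shader_ops:
    # earliest word this op may go into: one past its latest dependency
    earliest = max((issued_in[d] + 1 for d in deps), default=0)
    # greedy: first word from 'earliest' onward with a free slot, else a new word
    slot = next((i for i in range(earliest, len(words)) if len(words[i]) < WIDTH), None)
    if slot is None:
        words.append([])
        slot = len(words) - 1
    words[slot].append(name)
    issued_in[name] = slot

for i, word in enumerate(words):
    print(f"VLIW word {i}: {word} -> {len(word)}/{WIDTH} ALUs busy")
```

With this toy input the first word issues only 3 ops and the last word only 1, which is exactly the kind of under-utilization being argued about here.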

On the MADD+MUL you are probably right, but Nvidia DID claim they had fixed it on the 9 series.

88 GFlops: I thought you were talking about the 9600GT, for some reason. Probably because candle mentioned it. But TBH, arguing about shader power to compare graphics card performance is pointless. The card could be capable of 10 TFlops, but if it maintained only the same 8 render back-ends, it would still perform similarly to any other card with 8 ROPs and similar clocks.

Ah, about Crysis. Nonsense. The HD3870 is not faster than the 9600 GT, let alone a massively crippled one (if you insist on comparing the HD3870 with the HD4670).
 
Location
Antwerp, Belgium
@ Darkmatter

I went back to school on this and found out that you are right: each cluster is SIMD. That will cause some inefficiency.
http://pc.watch.impress.co.jp/docs/2008/0626/kaigai_3.pdf

This is my source on Crysis: http://www.computerbase.de/artikel/...st_ati_radeon_hd_4870_x2/20/#abschnitt_crysis
They use DX10 - very high - 1280x1024.

We'll talk about this again when benchmarks appear, which I guess will be soon.
But here is a nice little preview for you:
http://bp3.blogger.com/_4qvKWy79Suw/R5pzm6JY-BI/AAAAAAAAAPg/YUofEVeF82U/s1600-h/hd3690.gif => one chart
http://www.pcpop.com/doc/0/265/265454_5.shtml => full article (chinese)

I don't know if you remember the Radeon HD3690, intended for the Chinese market only?
This is what it is: http://www.itocp.com/attachments/month_0801/20080117_5aca84ad09a931a1be6fzI5hDbRNoulx.jpg
Basically an HD3850 with a 128-bit bus.
I know it's 16 vs 8 ROPs, but both will have 16 texture units... time will tell.
 

newtekie1


DarkMatter

@ Darkmatter

I went back to school on this and found out that you are right: each cluster is SIMD. That will cause some inefficiency.
http://pc.watch.impress.co.jp/docs/2008/0626/kaigai_3.pdf

This is my source on Crysis: http://www.computerbase.de/artikel/...st_ati_radeon_hd_4870_x2/20/#abschnitt_crysis
They use DX10 - very high - 1280x1024.

We'll talk about this again when benchmarks appear, which I guess will be soon.
But here is a nice little preview for you:
http://bp3.blogger.com/_4qvKWy79Suw/R5pzm6JY-BI/AAAAAAAAAPg/YUofEVeF82U/s1600-h/hd3690.gif => one chart
http://www.pcpop.com/doc/0/265/265454_5.shtml => full article (chinese)

I don't know if you remember the Radeon HD3690, intended for the Chinese market only?
This is what it is: http://www.itocp.com/attachments/month_0801/20080117_5aca84ad09a931a1be6fzI5hDbRNoulx.jpg
Basically an HD3850 with a 128-bit bus.
I know it's 16 vs 8 ROPs, but both will have 16 texture units... time will tell.

Yeah, time will tell. I never claimed that this card would be faster than the HD anyway. I do think that at reasonable settings for this kind of card both will be pretty close. You can't take some benchmark and say one card is better than the other because at some settings it gets 7 fps and the other card only 4 fps. Neither of the two is playable; you have to look at what they do at playable settings, because that's what they were designed for.

The HD3870 is only faster when AF/AA is disabled and/or when both cards are well below playable frame rates. You can't seriously prove your point based on those criteria, because of course if you disable AA and AF, taking the burden off the ROPs and TMUs, all the load will be on the shaders. But at more common settings the card that is more balanced usually wins. The HD3000 series was unbalanced and the HD46xx will be even more so. The HD4xxx's ROPs and TMUs are more efficient, so it will do better than the HD3xxx no matter what, but IMO not to the point of leaving the competition far behind.
 

DarkMatter

Location
Antwerp, Belgium
@Darkmatter
Don't treat me like a noob. The Crysis example was brought up to explain to newtekie1 that shader power does matter.

Great find, newtekie1. But the HD4650 GDDR2 is already beating the 9500GT GDDR3. The 9550GT needs to be 1GHz core, 2GHz shader & 2GHz memory to get close to the HD4670. I think those frequencies are out of reach. I think the 9550GT is meant to compete with the HD4650 GDDR3.
 
Location
Hurst, Texas
So ... just to recap for you:
ATI: 5 units can do MADD (or ADD or MUL)
The 5th (and complex) unit is a special unit. It can also do transcendentals like SIN, COS, LOG, EXP. That's it.
1 MADD (= Multiply-Add) = 2 FLOPs
1 ADD or MUL = 1 FLOP
And these are all usable. The developer doesn't need to program this; the compiler takes care of it. A real-life scenario with some bad code could be something like 2 MADD + 1 MUL per unit per clock, i.e. 5 FLOPs; over 64 units at 750MHz that gives 240 GFLOPS.

nVidia: basically each scalar unit can do 2 FLOPs per clock. That would result in real-life performance of around 90 GFLOPS.

So on shader performance ATI will win hands down.

Considering how close the HD4870 performs to the GTX 280 and how much more texel fillrate and bandwidth the GTX has, it seems to me that shader performance is darn important these days.

800 SPs vs 240 SPs and it still can't catch it; I think ATI has a problem there.
 

DarkMatter

@Darkmatter
Don't treat me like a noob. The Crysis example was brought up to explain to newtekie1 that shader power does matter.

Great find, newtekie1. But the HD4650 GDDR2 is already beating the 9500GT GDDR3. The 9550GT needs to be 1GHz core, 2GHz shader & 2GHz memory to get close to the HD4670. I think those frequencies are out of reach. I think the 9550GT is meant to compete with the HD4650 GDDR3.

I'm willing to hear where I treated you like a noob. Your Crysis point still doesn't hold. Shader power does matter; no one, not even newtekie, said it doesn't. We just questioned the HD cards' real shader power IN GAMES.

And that also goes for your second paragraph: one card beating the other in 3DMark means nothing. It never did. 3DMark is only useful for testing OCs and such things; it's not useful for comparing different cards' or systems' real performance. Ati cards, especially ever since the R600, have had a tremendous advantage in benchmarks, because it's a lot easier to obtain much higher efficiency (as discussed above) in a fixed benchmark than in real gameplay. The lack of texture power is also mitigated in a benchmark, as everything behind the camera will never have to be rendered unexpectedly. It doesn't even matter whether the benchmark is something like 3DMark or the Crysis GPU benchmark. HardOCP already demonstrated that.
 

Kursah

Who cares about these specifics, folks? Sure, it's somewhat nice to know, but damn! This has been an interesting read and rehash of technologies. ATI has disappointed me with how they advertise shaders; personally I would've counted each cluster as a shader core, instead of bragging about 320, 640, 800 or however many zillion "shaders" they fit on their GPU. Also, their strategy is improving with every generation, not just in how many shaders, but in overall performance. Both sides are doing well; to me there is no clear winner, as I couldn't care less... what I DO care about is what is going to get me what I want for the budget I have to work with... sometimes that includes temps, stability, drivers, OC-ability, etc. See my sys specs to see the winner I chose! Couldn't be happier! :D

As far as these low-low-end cards go, I may pick a couple up to put in my sisters' and parents' rigs. They do little to nothing stressful beyond 2D... it just depends on whether replacing what they already have is worth it or not. As newtekie stated earlier... I really see no point in a strong market for these cards... we don't need multiple models in the low-end segment IMO, nor do I care about their 3D or benchmark performance... if I were to get one of these, it would be for an internet/HTPC rig that would probably never game.

:toast:
 
Location
Antwerp, Belgium


Well, this seems to me a very accurate representation of real-life game performance. Everything is where it should be. Actually the HD4870 should be above the GTX260, which would mean that it doesn't give an advantage to ATI.
And HardOCP... please...
Most websites already concluded that the 3DMark Vantage GPU score is very representative.
 
Location
Antwerp, Belgium
http://gpucafe.com/2008/08/nvidia-preparing-to-counter-attack-in-the-sub-150-segment/

GPU Café has found out that the 9550GT is going to be based on the G94b: 64 shaders & a 192-bit bus. It kind of confirms that the G96 couldn't catch up with the HD4670. If this is true, then the 9550GT will be very competitive. It seems that nVidia is prepared to cut profit margins in order to stay competitive, since this G94b-based product will be much more expensive to produce.
 

DarkMatter

http://gpucafe.com/2008/08/nvidia-preparing-to-counter-attack-in-the-sub-150-segment/

GPU Café has found out that the 9550GT is going to be based on the G94b: 64 shaders & a 192-bit bus. It kind of confirms that the G96 couldn't catch up with the HD4670. If this is true, then the 9550GT will be very competitive. It seems that nVidia is prepared to cut profit margins in order to stay competitive, since this G94b-based product will be much more expensive to produce.

Good to know. As for the margins, I don't think they will be much smaller than what Ati has with the HD4670. And that chip, if true, is bound to be significantly faster.
 
Location
Antwerp, Belgium
PCB will be much more expensive (more layers because of the 192-bit bus) and bigger.
G94b will be around 200mm² and RV730 is around 150mm².
Power consumption will be an issue too, since a 9600GT uses around 100W and needs an additional PCI-E power plug. If they want that gone, they need to take it below 75W. 55nm will bring them at most ~10W lower consumption at the same clock, so that's not enough.
(HD4670 has a 59W power envelope)

When we're talking about end-user prices of around $100, these things matter a lot.
 

DarkMatter

PCB will be much more expensive (more layers because of the 192-bit bus) and bigger.
G94b will be around 200mm² and RV730 is around 150mm².
Power consumption will be an issue too, since a 9600GT uses around 100W and needs an additional PCI-E power plug. If they want that gone, they need to take it below 75W. 55nm will bring them at most ~10W lower consumption at the same clock, so that's not enough.
(HD4670 has a 59W power envelope)

When we're talking about end-user prices of around $100, these things matter a lot.

OMG, I know all that. But it won't be that much, and it should perform enough faster to be able to sell for a bit more. Also, the G94b will be for the GT, and the chips that don't qualify will become the 9550. Those chips are going to waste right now, so it will actually increase their current margins IMO.
 
Location
Hurst, Texas
PCB will be much more expensive (more layers because of the 192-bit bus) and bigger.
G94b will be around 200mm² and RV730 is around 150mm².
Power consumption will be an issue too, since a 9600GT uses around 100W and needs an additional PCI-E power plug. If they want that gone, they need to take it below 75W. 55nm will bring them at most ~10W lower consumption at the same clock, so that's not enough.
(HD4670 has a 59W power envelope)

When we're talking about end-user prices of around $100, these things matter a lot.

Most users have a free Molex anyway; think about it. Many bought a 5200 Ultra or FX5600 and both needed external power. It comes down to what's cheaper.
 
Location
Antwerp, Belgium
@Darkmatter

How can a 9550GT use broken G94b's if it keeps all 64 shaders? A broken memory bus?
And I'm sticking with the production cost issue. I double-checked everything again and the 9550GT should be around 35-45% more expensive to produce. nVidia can do two things: put 384MB on the card (instead of 768MB) or really use broken G94s (48 shaders?).
Overview of materials:
HD4670: 6-layer PCB, ~380 chips per wafer, 128-bit chip packaging
9550GT: 8-layer PCB, ~290 chips per wafer, 256-bit chip packaging

Did you ever see wafer prices? PCB and chip packaging costs aren't anything to scoff at either.
Even if the 9550GT is only a bit more expensive but also a bit faster, ATI is bound to make a huge profit on the HD4650 & HD4670. Not only will the RV730 be a hit in its class, but the RV710 is going to destroy the 9400GT.
No matter how many disadvantages the SIMD-based VLIW shader engine has, it really takes much less die space than the scalar-based approach nVidia uses.

BTW a review:
http://publish.it168.com/2008/0901/20080901043806.shtml
http://en.expreview.com/2008/09/02/rv730-reviewed-prforms-close-to-3850/
 

DarkMatter

@Darkmatter

How can a 9550GT use broken G94b's if it keeps all 64 shaders? A broken memory bus?
And I'm sticking with the production cost issue. I double-checked everything again and the 9550GT should be around 35-45% more expensive to produce. nVidia can do two things: put 384MB on the card (instead of 768MB) or really use broken G94s (48 shaders?).
Overview of materials:
HD4670: 6-layer PCB, ~380 chips per wafer, 128-bit chip packaging
9550GT: 8-layer PCB, ~290 chips per wafer, 256-bit chip packaging

Did you ever see wafer prices? PCB and chip packaging costs aren't anything to scoff at either.
Even if the 9550GT is only a bit more expensive but also a bit faster, ATI is bound to make a huge profit on the HD4650 & HD4670. Not only will the RV730 be a hit in its class, but the RV710 is going to destroy the 9400GT.
No matter how many disadvantages the SIMD-based VLIW shader engine has, it really takes much less die space than the scalar-based approach nVidia uses.

BTW a review:
http://publish.it168.com/2008/0901/20080901043806.shtml
http://en.expreview.com/2008/09/02/rv730-reviewed-prforms-close-to-3850/

So many things... Well

1- Nvidia uses a cluster approach, so they can disable both SP/TMU clusters AND ROP/MC clusters.

2- Any sources on it using 8 layers? If the 8800 GT could be made on a 6-layer PCB, as Nvidia wanted partners to adopt, this one can be on 6 layers a lot more easily. I don't actually know if it will have 8, so I'm just assuming. 192-bit is NOT 256-bit, last time I checked, anyway.

3- What are your sources for die size?

:roll::roll: 290 * 8-layers / 6-layers = ~380 :roll::roll: I really hope you have sources for die size and that the calculation was not made the way it seems to have been... PCB layers have nothing to do with chips per wafer. NO COMMENT!!

4- Of course they could put 384 MB on them and it could still perform a lot better. Isn't the HD3850 faster with only 256 MB, after all?

5- SIMD + VLIW does not necessarily take less space for the same performance. G80/92 vs. R600/670 proved that. R7xx is better, but don't compare it to previous 55nm chips, as Nvidia has yet to show a real 55nm chip. Also, just looking at die photos you can clearly see that Ati puts all their units very close to each other, while Nvidia puts some "blank" space between them so the chip does not get so hot. HINT: Nvidia @ 65nm is cooler than Ati @ 55nm.

Now, I'm not saying which card will be faster, but IMO neither will be a lot better than the other, as you seem to believe and want to tell everybody. It simply won't happen. Yeah, in your link we can see the HD4670 very close to the HD3850. The thing is that, judging by the specs, the 9550GT could be close to the 9600GT/HD3870 (shaders FTW, isn't it, or have you suddenly changed your mind?), especially at lower resolutions, which is where both of these cards are supposed to be aimed.
 
Location
Antwerp, Belgium
1- I know. Do you really think that they have enough otherwise-perfect chips (all 64 shaders) with just one memory controller/ROP cluster broken? I don't think so, because the G94b has been in production for only about 2 months now; they will have to use good chips too.
Let's not forget that the 9550GT will have 12 ROPs because of this.

2- True, there are some variants that use a 6-layer PCB, but forget about high frequencies then, even with a 192-bit bus.

3- What the hell are you talking about? What do PCB layers have to do with chips per wafer? Can't you read the commas, or are you just making fun of me now? I'm talking about three different things: PCB, chips, packaging!
You want the calculation? Here you go: a wafer = ~70,000mm², so that's (70000 / 150) * 0.82 (a code sketch of this arithmetic follows after point 5).
The 0.82 stands for the yield (I had to guess that one, but I took the same for both).
All reports are saying that the RV730 will be ~150mm².
G94 = 240mm²; normally 65nm to 55nm gives at most ~18%, so 240mm² minus 18% = 196.8mm².

4- No, the HD3850 256MB is slower.

5- FYI, even the RV770 is smaller than the G92b and, as far as I can remember, it's much faster. lol
RV670 -> 14.36 mm x 13.37 mm = 192 mm²
RV770 -> 15.65 mm x 15.65 mm = 245 mm²
G92b ---> 16.4 mm x 16.4 mm = 268 mm² >> 55nm
G92 ----> 18 mm x 18 mm = 324 mm²
G200 --> 24 mm x 24 mm = 576 mm²
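As promised in point 3, here is a small Python sketch of the dies-per-wafer / silicon-cost comparison. The ~70,000mm² usable wafer area, the guessed 82% yield, the rumoured ~150mm² RV730 and the assumed 18% shrink of the G94 are all this thread's assumptions, not confirmed data, so the output is illustrative only.

```python
# Back-of-the-envelope good-dies-per-wafer, using this post's assumptions.
WAFER_AREA_MM2 = 70_000   # roughly the usable area of one wafer, as quoted above
YIELD = 0.82              # guessed in this thread, applied equally to both chips

def good_dies_per_wafer(die_area_mm2):
    return int(WAFER_AREA_MM2 / die_area_mm2 * YIELD)

rv730 = good_dies_per_wafer(150)               # rumoured ~150mm2 -> ~380 dies
g94b  = good_dies_per_wafer(240 * (1 - 0.18))  # 240mm2 G94 minus an assumed 18% shrink -> ~290 dies

print(f"RV730 (~150mm2): ~{rv730} good dies per wafer")
print(f"G94b (~197mm2):  ~{g94b} good dies per wafer")

# With a fixed wafer price, per-die silicon cost scales inversely with that count:
print(f"G94b silicon cost premium: ~{(rv730 / g94b - 1) * 100:.0f}%")
```

Under these assumptions the silicon alone comes out roughly 30% more expensive per die; the 35-45% figure above additionally includes the 8-layer PCB and the larger package.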

You show me one post where I said that the 9550GT will be slower after we found out that it will be G94b-based! Actually, I found out myself that it will be G94b-based and corrected myself.
I said the 9550GT will be very competitive, but it will cost nVidia money.
I do believe that they will perform comparably. I'm just saying that the 9550GT will cost ~35% more to produce compared to the HD4670 and it will have less memory at the same price point.
I don't know why I even bother replying. This is the last thing I'll put here. You can reply whatever you want; I won't reply anymore.
 

DarkMatter

1- I know. Do you really think that they have enough otherwise-perfect chips (all 64 shaders) with just one memory controller/ROP cluster broken? I don't think so, because the G94b has been in production for only about 2 months now; they will have to use good chips too.
Let's not forget that the 9550GT will have 12 ROPs because of this.

2- True, there are some variants that use a 6-layer PCB, but forget about high frequencies then, even with a 192-bit bus.

3- What the hell are you talking about? What do PCB layers have to do with chips per wafer? Can't you read the commas, or are you just making fun of me now? I'm talking about three different things: PCB, chips, packaging!
You want the calculation? Here you go: a wafer = ~70,000mm², so that's (70000 / 150) * 0.82.
The 0.82 stands for the yield (I had to guess that one, but I took the same for both).
All reports are saying that the RV730 will be ~150mm².
G94 = 240mm²; normally 65nm to 55nm gives at most ~18%, so 240mm² minus 18% = 196.8mm².

4- No, the HD3850 256MB is slower.

5- FYI, even the RV770 is smaller than the G92b and, as far as I can remember, it's much faster. lol
RV670 -> 14.36 mm x 13.37 mm = 192 mm²
RV770 -> 15.65 mm x 15.65 mm = 245 mm²
G92b ---> 16.4 mm x 16.4 mm = 268 mm² >> 55nm
G92 ----> 18 mm x 18 mm = 324 mm²
G200 --> 24 mm x 24 mm = 576 mm²

You show me one post where I said that the 9550GT will be slower after we found out that it will be G94b-based! Actually, I found out myself that it will be G94b-based and corrected myself.
I said the 9550GT will be very competitive, but it will cost nVidia money.
I do believe that they will perform comparably. I'm just saying that the 9550GT will cost ~35% more to produce compared to the HD4670 and it will have less memory at the same price point.
I don't know why I even bother replying. This is the last thing I'll put here. You can reply whatever you want; I won't reply anymore.

You have a short memory or something, as all the discussion between us has been based on you praising the HD card to no end while saying Nvidia will have a tough time competing, when you don't actually know shit. It was me who was saying BOTH would be OK. You are trying to say Ati will pwn all the time. Because you can't use the performance argument you are just being creative, something that I can admire TBH, but it's nothing more than fairy tales coming out of your head. Enjoyable to a point, but anyone can get tired easily after some posts.

LOL. You gotta love fanboyism.

Besides that:

-HD3850 256 is almost as fast as the 512MB variant. Within a 5% difference.

-Perform comparably? LOL. We already know how the HD4670 performs; the 9550GT will be VERY close to both the 9600GT and the 8800GS, because its specs are exactly that, a mix of the two. Depending on the game it will be close to one or the other (probably the slower of the two); either way it will be way faster than the HD4670 unless they clock it absurdly low, because where the GT will be slower (the same games as the 8800GS) is where the HD will be slower too, maybe even slower, because of 12 vs. 8 ROPs.

-G92b is not a true 55nm chip. Neither are these ones, probably. Anyway, apart from the RV770, which I DID exclude from my claim, all other 55nm Ati chips are close to Nvidia's 65nm chips when it comes to performance/die size, DESPITE the process difference!!!!

-I love how you categorically affirm that the GT will be 35% more expensive to produce, that it will need to have less memory because of that, that it won't clock high enough if it has 6 layers, that it will be xxx mm², etc., when you actually don't have a clue about the chip, like any other mortal on Earth. It's funny, really.

-Also, you seem to forget that the production cost of the card, in that segment, is less than all the money that intermediaries take plus the packaging, so a 35% difference in production cost can easily end up being 10% at retail. The GT can easily be more than 10% faster than the HD card.

All in all, we can't affirm anything. I have not affirmed anything, YOU HAVE, presenting all your assumptions as facts. And that, my friend, is when DarkMatter always comes in.

Now I would love you to respond to the post, since this is a conversation (even a discussion is a conversation) between civil people and it's not polite to end conversations the way you did. I didn't insult you, so I have the right to get a response. Say whatever you want in the post, though I would like you to reply to the content. Even better, PM me, but do it.

EDIT: I first thought to let this one pass, but I have decided to attack you from all fronts, since you like to fight on all of them too. lol.

G92b is actually significantly smaller than RV770. Not enough to justify the performance difference, but it's a new chip against an old one. As I said G92b is NOT a true 55nm chip.

http://www.pcper.com/article.php?aid=580
http://www.pcper.com/article.php?aid=581&type=expert

G92 - 324 mm2
G92b - 231 mm2
RV770 - 260 (256 is probably more accurate)

- 231/256 * 100 = 90%, so the G92b is about 10% smaller than the RV770. A quick look at Wizzard's reviews reveals that, surprisingly, the HD4870 is around 10-15% faster than the 9800GTX+! :eek: Surprise! (Actually it was a surprise for me. I'm talking about higher resolutions and settings, FYI.)

- 231/324 * 100 = 71.3%, almost a 29% reduction. It seems Ati is not the only one that can do that kind of thing, after all...

Let's extrapolate that 29% to the G94b please:

- 240 * 0.713 = 171

Higher than the Radeon's estimated 150 mm², but much better than your picture, isn't it? And that's for the full G94b, the new 9600GT; you can't actually compare them directly. You would have to compare the new 9600GT to the Radeon to do any fair perf./size comparison.* Nvidia does things differently than Ati. Where Ati tries to do a single chip and get yields as high as possible on that chip, Nvidia makes the chips bigger (faster) so that they don't have to care about defective cores. They can just use them for the second card, because even crippled they are going to be able to compete (8800GT, G80 GTS, GTX260, 8800GS... the list is long). The consequence of this is that Nvidia has to throw away far fewer chips, and I could even go as far as to say that it might counteract the expense of lower die-per-wafer numbers and yields.

*Let's not leave loose ends and let's continue that comparison:

- According to Wizzard's reviews, the HD3850 is 20% slower than the 9600GT.
- I'm going to make an estimate and say that, according to your links, the HD4670 is 10% slower than the HD3850 (sometimes less, sometimes more); let's be gentle and translate it into a cumulative 5%, for a total of 25% slower than the 9600GT.

- 150/171 * 100 = 87.7% ...

OK. Let's play with your numbers...
150/196.8 * 100 = 76.2%. Even your (probably very wrong) estimates fall short.
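The same back-of-the-envelope math in Python, using only the die sizes quoted in this post; the 29% scaling factor and the estimated G94b size are extrapolations from those figures, not measured data, so the output is illustrative only.

```python
# Die-area ratios from the figures quoted above (all in mm2).
G92, G92B, RV770 = 324.0, 231.0, 256.0   # pcper.com figures quoted above
G94, RV730 = 240.0, 150.0                # 65nm G94 and the rumoured RV730

scaling = G92B / G92                      # G92 -> G92b on the same design, ~0.713
print(f"G92b vs RV770 die area:        {G92B / RV770 * 100:.0f}%")      # ~90%
print(f"G92 -> G92b area scaling:      {scaling * 100:.1f}%")           # ~71.3%

g94b_est = G94 * scaling                  # extrapolate the same scaling to a 55nm G94b
print(f"Estimated G94b die area:       ~{g94b_est:.0f} mm2")            # ~171 mm2
print(f"RV730 vs estimated G94b:       {RV730 / g94b_est * 100:.1f}%")  # ~87.7%
print(f"RV730 vs the 196.8 mm2 figure: {RV730 / 196.8 * 100:.1f}%")     # ~76.2%
```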

I'm willing to hear a response to this.
 
Location
Antwerp, Belgium
Darkmatter, I've waited till the numbers came in:
http://www.anandtech.com/video/showdoc.aspx?i=3405&p=7

To be honest, I stopped reading your post above halfway through because it's full of mistakes.

So the HD4670 is as fast as or faster than a 9600GSO. A 9600GSO is a G92 @ 192-bit.
Now explain to me how a G94 @ 192-bit can come close to this?
(and pls don't make up stuff)
 
Location
Hurst, Texas
The same reason the 9600GT is faster than the GSO, MrMilli. The 9600GSO, aka the 8800GS, has to be OCed to beat the 9600GT; everyone and their grandmother knows that.
 
Location
Antwerp, Belgium
The same reason the 9600GT is faster than the GSO, MrMilli. The 9600GSO, aka the 8800GS, has to be OCed to beat the 9600GT; everyone and their grandmother knows that.

Well, you should also know that:
The 9600GT (G94) is slower than the 9800GTX (G92).
The 9600GSO (G92 @ 192-bit) is almost as fast as, but slower than, the 9600GT.
So a G94 @ 192-bit will be even slower than a 9600GSO.

... even my grandmother knows that ... pffff ... did you even read this thread?
 

DarkMatter

Darkmatter, I've waited till the numbers came in:
http://www.anandtech.com/video/showdoc.aspx?i=3405&p=7

To be honest, I stopped reading your post above halfway through because it's full of mistakes.

So the HD4670 is as fast as or faster than a 9600GSO. A 9600GSO is a G92 @ 192-bit.
Now explain to me how a G94 @ 192-bit can come close to this?
(and pls don't make up stuff)

Well, you should also know that:
The 9600GT (G94) is slower than the 9800GTX (G92).
The 9600GSO (G92 @ 192-bit) is almost as fast as, but slower than, the 9600GT.
So a G94 @ 192-bit will be even slower than a 9600GSO.

... even my grandmother knows that ... pffff ... did you even read this thread?

Every time you post, it is only to show your ignorance.

First of all, there are no mistakes there and I didn't make up anything. It's verified info. Search a bit. :laugh: The fact that you stopped reading only shows you are not able or willing to read something you know is against your beliefs and completely true. You don't want to learn the truth, and your brain just screams: ALARM, ALARM! STOP READING! EXTERNAL INFLUENCE DETECTED!

Second, the chip doesn't matter one bit; the actual specs of the chip do. The GS has more shaders, but it is crippled by the low ROP count and 192-bit bus AND the fact that it runs at 550MHz. The GT at 650MHz is running 18% faster, and a quick look at any of Wizzard's reviews will show you that (surprise, surprise...) the GT is around 18% faster on average. At lower resolutions the difference is smaller (ROP advantage gone, SPs FTW) and at higher ones it's bigger, because the ROP count matters there.

The 9550GT, if required, could easily be clocked at 750MHz.

- Because it's 55nm, it could be clocked above 700MHz.
- Because it has less stuff than the 9600GT, it could be clocked higher.
- Because Nvidia chips are nowhere near their limit; if really needed, they could clock it higher.

You have to realise how the market has been until now. Nvidia has been owning all segments, so they didn't have to stress the cards too much to compete (when I say that, I mean not reaching a point where the failure rate could eventually become a problem, RV770 anyone?). They left that work to partners instead, knowing they would do it (that's Nvidia's way of keeping them happy). Proof of that is how every single card based on G92 and newer chips can easily be overclocked 20% without making the card sweat (with stock cooling and volts), and up to 30% is also possible at stock; Ati chips simply can't do that (a 20% OC applied to 775MHz is 930MHz, and 750MHz --> 900MHz). That's also the reason you can find a lot of factory-OCed Nvidia cards and only a few Ati ones, and those few are usually OCed just a bit.

The bottom line is that, in order to compete, Nvidia chips still have a lot of headroom. The HD4670, once again, does a modest 10% OC in Wizzard's review, which just shows that Ati systematically clocks its cards further up the curve. Now Nvidia will have to clock the new cards higher, and that's all. The GS, BTW, is the Nvidia card that holds the record for stock overclocking AFAIK, primarily because it has less stuff inside; so, as I said, just one more factor in the 9550GT's favor against the GS and the 9600GT, and ultimately against the HD4670.

It's going to be a tough fight, but IMO it's in Nvidia's hands. The 9550GT can be a lot faster than the 9600GSO, very close to the GT except at 1920x1200 4xAA and above, but no one will or should buy an $85 card to play at those settings anyway, and the HD4670 isn't a good performer there either. We have yet to see if Nvidia WANTS it to be faster.
 
Location
Antwerp, Belgium
-65nm to 55nm brings a theoretical shrink of 19%. That's 19% max.
You are saying: G92 = 324 mm², G92b = 231 mm².
Did nVidia make a shrink of 40%? Did it ever occur to you that pcper.com might be wrong?

-G92b is not a true 55nm chip?? WTH! What is it then? 60nm?
Seriously, where did you read that? The chip shrank 18%, that means an almost perfect transition from 65nm to 55nm. Don't let anybody fool you, it's 55nm.

-So you are basically saying that:
Take a 9600GT, cut off ~1/4 of the chip, and now clock it really high so it's close to 9600GT performance at $80. Wow, this makes a lot of business sense. *sarcasm*
nVidia will never clock it higher than 650MHz. You can be pretty sure of that.

HD4670: http://www.newegg.com/Product/Product.aspx?Item=N82E16814500061
9500GT: http://www.newegg.com/Product/Product.aspx?Item=N82E16814500061

Those are the cheapest prices, $80. The first thing nVidia needs to do before it can even release a 9550GT is drop the 9500GT price to ~$65.
And like I have said before, the HD4670 is a true low-end product. It's cheap to make and ATI can make it even cheaper.
Just look at that simple design: http://www.computerbase.de/bild/article/866/17
Very small PCB and very simple power circuitry, comparable to the much slower 9500GT.

So you have called me:
- "LOL. You gotta love fanboyism."
- "Every time you post, it is only to show your ignorance."
That's really nice of you! I have been on topic all the time and never called you names, but you still need to say this stuff like a kid. Maybe you are a kid, I don't know.
The only reason we are even having this discussion is because you are ignorant.
You look at matters with your limited knowledge of business and electronics, and always conclude that I'm wrong. Well, I waited for the HD4670 to be released. Now I'll wait for the 9550GT to be released.
 