Friday, August 29th 2008

NVIDIA Could Ready HD 4670 Competitor

GPU Café published information on future competition lineups, which shows the entry of a "GeForce 9550 GT" stacked up against the Radeon HD 4670. Sources in the media have pointed to the possibility that the RV730-based HD 4670 from ATI outperforms NVIDIA's current lineup in the segment where the GeForce 9500 GT sits: the HD 4650 could trade blows with the GeForce 9500 GT at equal or better levels of performance, while the HD 4670 surpasses it.

The entry of a GeForce 9550 GT suggests the 9500 GT cannot compete with the HD 4650. The chart shows a new price point of ~$129 and indicates that the HD 4650's lead over the 9500 GT is significant enough that ATI could comfortably ask $20 more than the 9500 GT commands in that range. GPU Café reports that the 9550 GT would be a toned-down (and shrunk) G94, namely the 55 nm G94b, featuring 64 shader processors and a 192-bit memory bus (and presumably memory configurations such as 384 MB or 768 MB of GDDR3).
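As a brief aside, the odd 384 MB and 768 MB figures follow directly from a 192-bit bus, assuming the usual layout of one 32-bit GDDR3 chip per channel (the 9550 GT's actual board design is not confirmed). A minimal sketch in Python:

# A 192-bit bus built from 32-bit GDDR3 channels, one chip per channel
chips = 192 // 32                 # 6 memory chips
for density_mb in (64, 128):      # common GDDR3 chip densities of the era
    print(f"{chips} x {density_mb} MB = {chips * density_mb} MB")
# -> 6 x 64 MB = 384 MB, 6 x 128 MB = 768 MB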
Source: GPU Café

58 Comments on NVIDIA Could Ready HD 4670 Competitor

#26
MrMilli
@ Darkmatter

I went back to school and found out that you are right that each cluster is SIMD. That will cause some inefficiency.
pc.watch.impress.co.jp/docs/2008/0626/kaigai_3.pdf

This is my source on Crysis: www.computerbase.de/artikel/hardware/grafikkarten/2008/test_ati_radeon_hd_4870_x2/20/#abschnitt_crysis
They use DX10 - very high - 1280x1024.

We'll talk about this again when benchmarks appear, which I guess will be soon.
But here is a nice little preview for you:
bp3.blogger.com/_4qvKWy79Suw/R5pzm6JY-BI/AAAAAAAAAPg/YUofEVeF82U/s1600-h/hd3690.gif => one chart
www.pcpop.com/doc/0/265/265454_5.shtml => full article (Chinese)

I don't know if you remember the Radeon HD3690, intended for the Chinese market only?
This is what it is: www.itocp.com/attachments/month_0801/20080117_5aca84ad09a931a1be6fzI5hDbRNoulx.jpg
Basically an HD3850 with a 128bit bus.
I know it's 16 vs. 8 ROPs, but both will have 16 texture units ... time will tell.
Posted on Reply
#28
DarkMatter
MrMilli said:
@ Darkmatter

I went back to school and found out that you are right that each cluster is SIMD. That will cause some inefficiency.
pc.watch.impress.co.jp/docs/2008/0626/kaigai_3.pdf

This is my source on Crysis: www.computerbase.de/artikel/hardware/grafikkarten/2008/test_ati_radeon_hd_4870_x2/20/#abschnitt_crysis
They use DX10 - very high - 1280x1024.

We'll talk about this again when benchmarks appear, which I guess will be soon.
But here is a nice little preview for you:
bp3.blogger.com/_4qvKWy79Suw/R5pzm6JY-BI/AAAAAAAAAPg/YUofEVeF82U/s1600-h/hd3690.gif => one chart
www.pcpop.com/doc/0/265/265454_5.shtml => full article (Chinese)

I don't know if you remember the Radeon HD3690, intended for the Chinese market only?
This is what it is: www.itocp.com/attachments/month_0801/20080117_5aca84ad09a931a1be6fzI5hDbRNoulx.jpg
Basically an HD3850 with a 128bit bus.
I know it's 16 vs. 8 ROPs, but both will have 16 texture units ... time will tell.
Yeah, time will tell. I never claimed this card would be faster than the HD anyway. I do think that at reasonable settings for this kind of card both will be pretty close. You can't take some benchmark and say one card is better than the other because at some settings it gets 7 fps and the other card only 4 fps. Neither is playable; you have to look at what they do at playable settings, because those are what they were designed for.

The HD3870 is only faster when AF/AA is disabled and/or when both cards are well below playable frame rates. You can't seriously prove your point based on that criterion, because of course if you disable AA and AF, taking the burden off the ROPs and TMUs, all the load falls on the shaders. But at more common settings the more balanced card usually wins. The HD3000 series was unbalanced, and the HD46xx will be even more so. The HD4xxx's ROPs and TMUs are more efficient, so it will do better than the HD3xxx no matter what, but IMO not to the point of leaving the competition far behind.
Posted on Reply
#30
MrMilli
@Darkmatter
Don't treat me like a noob. The Crysis example was brought up to explain to newtekie1 that shader power does matter.

Great find, newtekie1. But the HD4650 GDDR2 is already beating the 9500GT GDDR3. The 9550GT would need a 1GHz core, 2GHz shaders and 2GHz memory to get close to the HD4670. I think those frequencies are out of reach. I think the 9550GT is meant to compete with the HD4650 GDDR3.
Posted on Reply
#31
candle_86
MrMilli said:
So ... just to recap for you:
ATI: 5 units can do MADD (or ADD or MUL)
The 5th (and complex) unit is a special unit. It can also do transcendentals like SIN, COS, LOG, EXP. That's it.
1 MADD (Multiply-Add) = 2 FLOPs
1 ADD or MUL = 1 FLOP
And these are all usable. The developer doesn't need to program for this; the compiler takes care of it. A real-life scenario with some bad code could be something like 2 MADD + 1 MUL. Averaged over the 64 units, that would give 240 GFLOPS.

nVidia: basically each scalar unit can do 2 FLOPs per clock. That would result in real-life performance of around 90 GFLOPS.

So on shader performance ATI will win hands down.

Considering how close the HD4870 performs to the GTX 280 and how much more texel fillrate and bandwidth the GTX has, then it seems to me that shader performance is darn important these days.
800 SPs vs. 240 SPs and it still can't catch it; I think ATI has a problem there.
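For reference, MrMilli's recap works out like this as a quick Python script. The 750 MHz core clock (HD 4670) and 32 SPs at a 1400 MHz shader clock (9500 GT) are assumptions, not figures from the thread; the per-clock throughputs are the ones quoted above.

# ATI RV730: 64 VLIW units, "bad code" case of 2 MADD + 1 MUL per unit per clock
ati_flops_per_clock = 2 * 2 + 1                   # MADD = 2 FLOPs, MUL = 1 FLOP
print(f"ATI:    ~{64 * ati_flops_per_clock * 0.750:.0f} GFLOPS")  # -> ~240

# NVIDIA G96 (9500 GT): scalar units at ~2 FLOPs per clock each
print(f"NVIDIA: ~{32 * 2 * 1.400:.0f} GFLOPS")                    # -> ~90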
Posted on Reply
#32
GPUCafe
GPU Cafe Representative
candle_86 said:
800 SPs vs. 240 SPs and it still can't catch it; I think ATI has a problem there.
Big problem for sure. They are the ones giving $100-150 price drops, right? ;)
Posted on Reply
#33
DarkMatter
MrMilli said:
@Darkmatter
Don't treat me like a noob. The Crysis example was brought up to explain to newtekie1 that shader power does matter.

Great find, newtekie1. But the HD4650 GDDR2 is already beating the 9500GT GDDR3. The 9550GT would need a 1GHz core, 2GHz shaders and 2GHz memory to get close to the HD4670. I think those frequencies are out of reach. I think the 9550GT is meant to compete with the HD4650 GDDR3.
I'd like to hear where I treated you like a noob. Your Crysis point still doesn't hold. Shader power does matter; no one, not even newtekie, said it doesn't. We just questioned the HD cards' real shader power IN GAMES.

And that also goes for your second paragraph: one card beating the other in 3DMark means nothing. It never did. 3DMark is only useful for testing OCs and the like; it's not useful for comparing different cards' or systems' real performance. ATI cards, especially since the R600, have a tremendous advantage in benchmarks, because it's a lot easier to achieve much higher efficiency (as discussed above) in a fixed benchmark than in real gameplay. The lack of texture power is also mitigated in a benchmark, as nothing behind the camera ever has to be rendered unexpectedly. It doesn't even matter whether the benchmark is 3DMark or the Crysis GPU benchmark. HardOCP already demonstrated that.
Posted on Reply
#34
Kursah
Who cares about these specifics, folks? Sure, it's somewhat nice to know, but damn! This has been an interesting read and rehash of technologies. ATI has disappointed me with how they advertise shaders; personally I would've counted each cluster as a shader core, instead of bragging about 320, 640, 800 or however many zillion "shaders" they fit on their GPU. Their strategy is also improving with every generation, not just in how many shaders, but in overall performance. Both sides are doing well; to me there is no clear winner, as I couldn't care less... what I DO care about is what is going to get me what I want for the budget I have to work with... sometimes that includes temps, stability, drivers, OC-ability, etc. See my sys specs to see the winner I chose! Couldn't be happier! :D

As far as these low-end cards go, I may pick a couple up to put in my sister's and parents' rigs. They do little to nothing stressful beyond 2D... it just depends on whether replacing what they already have is worth it or not. As newtekie stated earlier... I really see no point in a strong market for these cards... we don't need multiple models in the low-end segment IMO, nor do I care about their 3D or benchmark performance... if I were to get one of these, it would be for an internet/HTPC rig that would probably never game.

:toast:
Posted on Reply
#35
MrMilli
[chart: 3DMark Vantage GPU scores]
Well, this seems to me a very accurate representation of real-life game performance. Everything is where it should be. Actually, the HD4870 should be above the GTX260, which would mean it doesn't give ATI an advantage.
And HardOCP ... please ...
Most websites have already concluded that the 3DMark Vantage GPU score is very representative.
Posted on Reply
#36
MrMilli
gpucafe.com/2008/08/nvidia-preparing-to-counter-attack-in-the-sub-150-segment/

GPU Café has found out that the 9550GT is going to be based on G94b. 64 shaders & 192bit bus. It kinda confirms that the G96 couldn't catch up with the HD4670. If this is true then the 9550GT will be very competitive. It seems that nVidia is prepared to cut profit margins in order to stay competitive since this G94b based product will be much more expensive to produce.
Posted on Reply
#37
DarkMatter
MrMilli said:
gpucafe.com/2008/08/nvidia-preparing-to-counter-attack-in-the-sub-150-segment/

GPU Café has found out that the 9550GT is going to be based on G94b. 64 shaders & 192bit bus. It kinda confirms that the G96 couldn't catch up with the HD4670. If this is true then the 9550GT will be very competitive. It seems that nVidia is prepared to cut profit margins in order to stay competitive since this G94b based product will be much more expensive to produce.
Good to know. As for the margins, I don't think they will be much smaller than what ATI has with the HD4670. And that chip, if the report is true, is bound to be significantly faster.
Posted on Reply
#38
MrMilli
PCB will be much more expensive (more layers because of the 192bit bus) and bigger.
G94b will be around 200mm² and RV730 is around 150mm².
Power consumption will be an issue too, since a 9600GT uses around 100W and needs an additional PCI-E power plug. If they want that gone, they need to take it below 75W. 55nm will bring them at most 10W lower consumption at the same clock, so that's not enough.
(HD4670 has a 59W power envelope)

When we're talking about end-user prices of around $100, this stuff matters a lot.
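The power math, spelled out with the rough figures above (a sketch, not measured numbers):

# Can a plain 55nm shrink get a ~100 W 9600GT-class board under the 75 W slot limit?
board_w = 100        # approx. 9600GT board power
shrink_savings = 10  # "tops" savings from 55nm at the same clock
slot_limit = 75      # max deliverable without an external PCI-E plug

after_shrink = board_w - shrink_savings
print(f"~{after_shrink} W after shrink vs {slot_limit} W limit "
      f"-> still ~{after_shrink - slot_limit} W over")  # -> still ~15 W over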
Posted on Reply
#39
DarkMatter
MrMilli said:
PCB will be much more expensive (more layers because of the 192bit bus) and bigger.
G94b will be around 200mm² and RV730 is around 150mm².
Power consumption will be an issue too, since a 9600GT uses around 100W and needs an additional PCI-E power plug. If they want that gone, they need to take it below 75W. 55nm will bring them at most 10W lower consumption at the same clock, so that's not enough.
(HD4670 has a 59W power envelope)

When we're talking about end-user prices of around $100, this stuff matters a lot.
OMG, I know all that. But it won't be that much, and it should perform enough faster to sell for a bit more. Also, G94b will be used for the GT, and the chips that don't qualify will become the 9550. Those chips are going to waste right now, so it will actually increase their current margins IMO.
Posted on Reply
#40
candle_86
MrMilli said:
PCB will be much more expensive (more layers because of the 192bit bus) and bigger.
G94b will be around 200mm² and RV730 is around 150mm².
Power consumption will be an issue too, since a 9600GT uses around 100W and needs an additional PCI-E power plug. If they want that gone, they need to take it below 75W. 55nm will bring them at most 10W lower consumption at the same clock, so that's not enough.
(HD4670 has a 59W power envelope)

When we're talking about end-user prices of around $100, this stuff matters a lot.
Most users have a free Molex anyway. Think about it: many bought a 5200 Ultra or FX5600, and both needed external power. It comes down to what's cheaper.
Posted on Reply
#41
MrMilli
@Darkmatter

How can a 9550GT use broken G94b's if it keeps all 64 shaders? A broken memory bus?
And I'm sticking with the production cost issue. I double-checked everything again, and the 9550GT should be around 35-45% more expensive to produce. nVidia can do two things: put 384MB on the card (instead of 768MB) or really use broken G94's (48 shaders?).
Overview of material:
HD4670: 6-layer PCB, ~380 chips per wafer, 128bit chip packaging
9550GT: 8-layer PCB, ~290 chips per wafer, 256bit chip packaging

Have you ever seen wafer prices? PCB and chip packaging costs aren't anything to scoff at either.
Even if the 9550GT is only a bit more expensive but also a bit faster, ATI is bound to make a huge profit on the HD4650 & HD4670. Not only will the RV730 be a hit in its class, but the RV710 is going to destroy the 9400GT.
No matter how many disadvantages the SIMD-based VLIW shader engine has, it really takes much less die space than the scalar approach nVidia uses.

BTW a review:
publish.it168.com/2008/0901/20080901043806.shtml
en.expreview.com/2008/09/02/rv730-reviewed-prforms-close-to-3850/
Posted on Reply
#42
DarkMatter
MrMilli said:
@Darkmatter

How can a 9550GT use broken G94b's if it keeps all 64 shaders? A broken memory bus?
And I'm sticking with the production cost issue. I double-checked everything again, and the 9550GT should be around 35-45% more expensive to produce. nVidia can do two things: put 384MB on the card (instead of 768MB) or really use broken G94's (48 shaders?).
Overview of material:
HD4670: 6-layer PCB, ~380 chips per wafer, 128bit chip packaging
9550GT: 8-layer PCB, ~290 chips per wafer, 256bit chip packaging

Have you ever seen wafer prices? PCB and chip packaging costs aren't anything to scoff at either.
Even if the 9550GT is only a bit more expensive but also a bit faster, ATI is bound to make a huge profit on the HD4650 & HD4670. Not only will the RV730 be a hit in its class, but the RV710 is going to destroy the 9400GT.
No matter how many disadvantages the SIMD-based VLIW shader engine has, it really takes much less die space than the scalar approach nVidia uses.

BTW a review:
publish.it168.com/2008/0901/20080901043806.shtml
en.expreview.com/2008/09/02/rv730-reviewed-prforms-close-to-3850/
So many things... Well

1- Nvidia uses a cluster approach, so they can disable both SP/TMU clusters AND ROP/MC clusters.

2- Any sources saying it will use 8 layers? If the 8800 GT could be made on a 6-layer PCB, as Nvidia wanted partners to adopt, this one can be done on 6 layers a lot more easily. I don't actually know whether it will have 8, so I'm just assuming. 192bit is NOT 256bit, last time I checked, anyway.

3- Which are your sources for die size?

:roll::roll: 290*8-layers / 6-layers = ~380 :roll::roll: I really hope you have sources for the die sizes and that the calculation wasn't made the way it appears... PCB layers have nothing to do with chips per wafer. NO COMMENT!!

4- Of course they could put 384 MB on them, and it could still perform a lot better. Isn't the HD3850 faster with only 256 MB, after all?

5- SIMD + VLIW does not necessarily take less space for the same performance. G80/92 vs. R600/670 proved that. R7xx is better, but don't compare it to previous 55nm chips, as Nvidia has yet to show a real 55nm chip. Also, just looking at die photos you can clearly see that ATI puts all their units very close to each other, while Nvidia leaves some "blank" space between them so the chip doesn't get so hot. HINT: Nvidia @65nm is cooler than ATI @55nm.

Now, I'm not saying which card will be faster, but IMO neither will be a lot better than the other, as you seem to believe and want to tell everybody. It simply won't happen. Yeah, in your link we can see the HD4670 very close to the HD3850. The thing is that, judging by the specs, the 9550GT could be close to the 9600GT/HD3870 (shaders FTW, isn't it, or have you suddenly changed your mind?), especially at lower resolutions, which is where both cards are supposed to be aimed.
Posted on Reply
#43
MrMilli
1- I know. Do you really think they have enough perfect chips (all 64 shaders) with just one memory controller/ROP cluster broken? I don't think so; the G94b has been in production for like 2 months now, so they will use good chips too.
Let's not forget that the 9550GT will have 12 ROPs because of this.

2- True, there are some variants that use a 6-layer PCB, but then forget about high frequencies, even with a 192bit bus.

3- What the hell are you talking about? What do PCB layers have to do with chips per wafer? Can't you read the commas, or are you just making fun of me now? I'm talking about three different things: PCB, chips, packaging!
You want the calculation? Here you go: wafer = ~70000 mm², so that's (70000/150)*0.82 (also sketched as a script right after point 5 below).
The 0.82 stands for the yield (I had to guess that one, but I used the same for both).
All reports are saying that the RV730 will be ~150 mm².
G94 = 240 mm² -- 65nm to 55nm normally shrinks ~18% -- 240 - 18% = 196.8 mm²

4- No, the HD3850 256MB is slower.

5- FYI, even the RV770 is smaller than the G92b and, as far as I can remember, it's much faster. lol
RV670 -> 14.36 mm x 13.37 mm = 192 mm²
RV770 -> 15.65 mm x 15.65 mm = 245 mm²
G92b ---> 16.4 mm x 16.4 mm = 268 mm² >> 55nm
G92 ----> 18 mm x 18 mm = 324 mm²
G200 --> 24 mm x 24 mm = 576 mm²
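And since you wanted the calculation, here is point 3 as a script (same flat-area estimate, same guessed 82% yield):

# Dies per wafer; the G94b area is the 65nm -> 55nm shrink estimate above
WAFER_MM2 = 70_000    # ~300 mm wafer
YIELD = 0.82          # guessed, applied to both chips

for chip, area in (("RV730 (HD4670)", 150.0),
                   ("G94b (9550GT)", 240.0 * (1 - 0.18))):  # 240 mm² - ~18%
    print(f"{chip}: ~{WAFER_MM2 / area * YIELD:.0f} good dies per wafer")
# -> RV730: ~383, G94b: ~292 (the ~380 vs ~290 figures above)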

Show me one post where I said the 9550GT would be slower after we found out it will be G94b-based! Actually, I found out myself that it will be G94b-based and corrected myself.
I said the 9550GT will be very competitive, but it will cost nVidia money.
I do believe they will perform comparably. I'm just saying that the 9550GT will cost ~35% more to produce than the HD4670 and will have less memory at the same price point.
I don't know why I even bother replying. This is the last thing I'll put here. You can reply whatever you want; I won't reply anymore.
Posted on Reply
#44
DarkMatter
MrMilli said:
1- I know. Do you really think they have enough perfect chips (all 64 shaders) with just one memory controller/ROP cluster broken? I don't think so; the G94b has been in production for like 2 months now, so they will use good chips too.
Let's not forget that the 9550GT will have 12 ROPs because of this.

2- True, there are some variants that use a 6-layer PCB, but then forget about high frequencies, even with a 192bit bus.

3- What the hell are you talking about? What do PCB layers have to do with chips per wafer? Can't you read the commas, or are you just making fun of me now? I'm talking about three different things: PCB, chips, packaging!
You want the calculation? Here you go: wafer = ~70000 mm², so that's (70000/150)*0.82.
The 0.82 stands for the yield (I had to guess that one, but I used the same for both).
All reports are saying that the RV730 will be ~150 mm².
G94 = 240 mm² -- 65nm to 55nm normally shrinks ~18% -- 240 - 18% = 196.8 mm²

4- No, the HD3850 256MB is slower.

5- FYI, even the RV770 is smaller than the G92b and, as far as I can remember, it's much faster. lol
RV670 -> 14.36 mm x 13.37 mm = 192 mm²
RV770 -> 15.65 mm x 15.65 mm = 245 mm²
G92b ---> 16.4 mm x 16.4 mm = 268 mm² >> 55nm
G92 ----> 18 mm x 18 mm = 324 mm²
G200 --> 24 mm x 24 mm = 576 mm²

Show me one post where I said the 9550GT would be slower after we found out it will be G94b-based! Actually, I found out myself that it will be G94b-based and corrected myself.
I said the 9550GT will be very competitive, but it will cost nVidia money.
I do believe they will perform comparably. I'm just saying that the 9550GT will cost ~35% more to produce than the HD4670 and will have less memory at the same price point.
I don't know why I even bother replying. This is the last thing I'll put here. You can reply whatever you want; I won't reply anymore.
You have a short memory or something, as the whole discussion between us has consisted of you praising the HD card to no end while saying Nvidia will have a tough time competing, when you don't actually know shit. It was me who was saying BOTH would be OK. You keep trying to say ATI will pwn. Because you can't use the performance argument, you are just being creative, something I can admire TBH, but it's nothing more than fairy tales coming out of your head. Enjoyable to a point, but anyone can tire of it after a few posts.

LOL. You gotta love fanboyism.

Besides that:

-HD3850 256 is almost as fast as the 512MB variant. Within a 5% difference.

-Perform comparably? LOL. We already know how the HD4670 performs; the 9550GT will be VERY close to both the 9600GT and the 8800GS, because its specs are exactly that, a mix of the two. Depending on the game it will be closer to one or the other, probably the slower of the two; either way it will be way faster than the HD4670 unless they clock it absurdly low, because where the GT will be slower (the same games as the 8800GS) is where the HD will be slower too, maybe even more so because of 12 vs. 8 ROPs.

-G92b is not a true 55nm chip. Neither are these ones probably. Anyway, apart from RV770 which I DID exclude from my claim, all other 55nm Ati chips are close to Nvidia's 65nm chips when it comes to performance/die size, DESPITE the process difference!!!!

-I love how you categorically affirm that the GT will be 35% more expensive to produce, that it will have to carry less memory because of it, that it won't clock high enough if it has 6 layers, that it will be xxx mm², etc., when you actually don't have a clue about the chip, like any other mortal on Earth. It's funny, really.

-Also, you seem to forget that in that segment the production cost of the card is less than all the money the intermediaries take plus the packaging, so a 35% difference in production cost can easily end up being 10% at retail. The GT can easily be more than 10% faster than the HD card.

All in all, we can't affirm anything. I have not affirmed anything; YOU HAVE, presenting all your assumptions as facts. And that, my friend, is when DarkMatter always comes in.

Now I would love you to respond to this post, since this is a conversation (even a discussion is a conversation) between civil people, and it's not polite to end conversations the way you did. I didn't insult you, so I have the right to get a response. Say whatever you want in the post, though I would like you to reply to the content. Even better, PM me, but do it.

EDIT: I first thought to let this one pass, but I have decided to attack you from all fronts, since you like to fight on all of them too. lol.

G92b is actually significantly smaller than RV770. Not enough to justify the performance difference, but it's a new chip against an old one. As I said G92b is NOT a true 55nm chip.

www.pcper.com/article.php?aid=580
www.pcper.com/article.php?aid=581&type=expert

G92 - 324 mm2
G92b - 231 mm2
RV770 - 260 (256 is probably more accurate)

- 231/256*100 = 90% - So G92b is 10% smaller than RV770. A quick look at Wizzard's reviews reveals that, surprisingly, the HD4870 is around 10-15% faster than the 9800GTX+! :eek: Surprise! (Actually it was a surprise for me. I'm talking about higher resolutions and settings, FYI.)

- 231/324*100 = 71.3% - Almost a 29% reduction. It seems ATI isn't the only one that can do that kind of thing, after all...

Let's extrapolate that 29% to the G94b please:

- 240*0.713 = 171

Higher than the Radeon's estimated 150, but much better than your picture, isn't it? And that's for the full G94b, the new 9600GT; you can't actually compare them directly. You would have to compare the new 9600GT to the Radeon to do any fair perf./size comparison*. Nvidia does things differently than ATI. Where ATI tries to make a single chip and get yields on that chip as high as possible, Nvidia makes the chips bigger (faster) so they don't have to care about defective cores. They can just use those for the second card, because even crippled they are going to be able to compete (8800GT, G80 GTS, GTX260, 8800GS... the list is long). The consequence is that Nvidia has to throw away far fewer chips, and I could even go as far as to say that it might counteract the expense of lower die-per-wafer numbers and yields.

*Let's not leave loose ends and let's continue that comparison:

- According to Wizzard's reviews, the HD3850 is 20% slower than the 9600GT.
- I'm going to estimate, based on your links, that the HD4670 is 10% slower than the HD3850 (sometimes less, sometimes more); let's be gentle and call it an accumulated 5%, for a total of 25% slower than the 9600GT.

- 150/171*100 = 87.7% ...

OK. Let's play with your numbers...
150/196.8*100 = 76.2%. Even your (probably very wrong) estimates fall short.
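All of that in one quick script, if you want to check the arithmetic; every input is a figure asserted above, so the outputs are the same speculation, just double-checked:

g92, g92b, rv770, rv730, g94 = 324.0, 231.0, 256.0, 150.0, 240.0

shrink = g92b / g92                           # ~0.713 -> almost a 29% reduction
g94b_est = g94 * shrink                       # ~171 mm² estimated G94b
print(f"G92b vs RV770:       {g92b / rv770:.1%}")      # -> ~90.2%
print(f"estimated G94b size: {g94b_est:.0f} mm²")      # -> ~171
print(f"RV730 vs G94b:       {rv730 / g94b_est:.1%}")  # -> ~87.7%
print(f"RV730 vs 196.8 mm²:  {rv730 / 196.8:.1%}")     # -> ~76.2%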

I'm willing to hear a response to this.
Posted on Reply
#45
MrMilli
Darkmatter, I've waited till the numbers came in:
www.anandtech.com/video/showdoc.aspx?i=3405&p=7

To be honest, I stopped reading your post above halfway through because it's full of mistakes.

So an HD4670 is as fast as or faster than a 9600GSO. A 9600GSO is a G92 @ 192bit.
Now explain to me how a G94 @ 192bit can come close to this?
(and pls don't make up stuff)
Posted on Reply
#46
candle_86
The same reason the 9600GT is faster than the GSO, MrMilli. The 9600GSO, aka 8800GS, has to be OCed to beat the 9600GT; everyone and their grandmother knows that.
Posted on Reply
#47
MrMilli
candle_86 said:
The same reason the 9600GT is faster than the GSO, MrMilli. The 9600GSO, aka 8800GS, has to be OCed to beat the 9600GT; everyone and their grandmother knows that.
Well you should also know that:
9600GT (G94) is slower than 9800GTX (G92).
The 9600GSO (G92 192bit) is almost as fast as, but still slower than, the 9600GT.
So a G94 @ 192bit will be even slower than a 9600GSO.

... even my grandmother knows that ... pffff ... did you even read this thread?
Posted on Reply
#48
DarkMatter
MrMilli said:
Darkmatter, I've waited till the numbers came in:
www.anandtech.com/video/showdoc.aspx?i=3405&p=7

To be honest, I stopped reading your post above halfway through because it's full of mistakes.

So an HD4670 is as fast as or faster than a 9600GSO. A 9600GSO is a G92 @ 192bit.
Now explain to me how a G94 @ 192bit can come close to this?
(and pls don't make up stuff)

MrMilli said:
Well you should also know that:
9600GT (G94) is slower than 9800GTX (G92).
The 9600GSO (G92 192bit) is almost as fast as, but still slower than, the 9600GT.
So a G94 @ 192bit will be even slower than a 9600GSO.

... even my grandmother knows that ... pffff ... did you even read this thread?
Every time you post, it's only to show your ignorance.

First of all, there are no mistakes there, and I didn't make up anything. It's cross-checked info. Search a bit. :laugh: The fact that you stopped reading only shows you are unable or unwilling to read something you know goes against your beliefs and is completely true. You don't want to learn the hard truth, and your brain just screams: ALARM ALARM! STOP READING! EXTERNAL INFLUENCE DETECTED!

Second, the chip doesn't matter one bit; the actual specs of the chip do. The GS has more shaders, but it's crippled by the low ROP count, the 192bit bus AND the fact that it runs at 550MHz. The GT at 650MHz is clocked 18% higher, and a quick look at any of Wizzard's reviews will show you that (surprise, surprise...) the GT is around 18% faster on average. At lower resolutions the difference is smaller (ROP advantage gone, SPs FTW) and at higher ones it's bigger, because ROP count matters there.
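That 18% is nothing more than the clock ratio:

# 9600GT vs 9600GSO core clocks, from the paragraph above
print(f"650 MHz / 550 MHz -> {650 / 550 - 1:.0%} higher clock")  # -> 18%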

The 9550GT, if required, could easily be clocked at 750MHz:

- Because it's 55nm, it could be clocked above 700MHz.
- Because it has less stuff than the 9600GT, it could be clocked higher.
- Because Nvidia chips are nowhere near their limit; if really needed, they could clock it higher.

You have to realise how the market has been until now. Nvidia has been owning all segments, so they didn't have to stress the cards too much to compete (by that I mean not reaching a point where failure rates could eventually become a problem, RV770 anyone?). They left that work to partners instead, knowing they would do it (that's Nvidia's way of keeping them happy). Proof of that is how every single Nvidia chip based on G92 and newer can easily be overclocked 20% without making the card sweat (with stock cooling and volts), and up to 30% is also possible at stock; ATI chips simply can't do that (a 20% OC applied to 775MHz is 930MHz, 750MHz --> 900MHz). That's also the reason you can find a lot of factory-OCed Nvidia cards and only a few ATI ones, and those few are usually OCed just a bit.

The bottom line is that, in order to compete, Nvidia chips still have a lot of headroom. The HD4670, once again, does a modest 10% OC in Wizzard's review, which just shows ATI systematically clocks its cards higher up the curve. Now Nvidia will have to clock the new cards higher, and that's all. The GS, BTW, is the Nvidia card that holds the stock-overclocking record AFAIK, primarily because it has less stuff inside; so, as I said, one more factor in the 9550GT's favor against the GS and 9600GT, and ultimately against the HD4670.

It's going to be a tough fight, but IMO it's in Nvidia's hands. The 9550GT can be a lot faster than the 9600GSO and very close to the GT except at 1920x1200 4xAA and above, but no one will or should buy an $85 card to play at those settings anyway, and the HD4670 isn't a good performer there either. We have yet to see whether Nvidia WANTS it to be faster.
Posted on Reply
#49
MrMilli
-65nm to 55nm brings a theoretical shrink of 19%. That's 19% max.
You are saying: G92 - 324 mm², G92b - 231 mm².
Did nVidia make a 40% shrink? Did it ever occur to you that pcper.com might be wrong?

-G92b is not a true 55nm chip?? WTH! What is it then? 60nm?
Seriously, where did you read that? The chip shrank 18%, which means an almost perfect transition from 65nm to 55nm. Don't let anybody fool you, it's 55nm.

-So you are basically saying that:
Take a 9600GT, cut off ~1/4 of the chip, then clock it really high so it's close to 9600GT performance at $80. Wow, this makes a lot of business sense. *sarcasm*
nVidia will never clock it higher than 650MHz. You can be pretty sure of that.

HD4670: www.newegg.com/Product/Product.aspx?Item=N82E16814500061
9500GT: www.newegg.com/Product/Product.aspx?Item=N82E16814500061

Those are the cheapest prices, $80. The first thing nVidia needs to do before it can even release a 9550GT is drop the 9500GT price to ~$65.
And as I have said before, the HD4670 is a true low-end product. It's cheap to make, and ATI can make it even cheaper.
Just look at that simple design: www.computerbase.de/bild/article/866/17
Very small PCB and very simple power circuitry, comparable to the much slower 9500GT.

So you have called me:
- LOL. You gotta love fanboyism.
- Every time you post, it's only to show your ignorance.
That's really nice of you! I have been on topic the whole time and never called you names, but you still need to say this stuff like a kid. Maybe you are a kid, I don't know.
The only reason we are having this discussion is because you are ignorant.
You look at matters with your limited knowledge of business and electronics, and always conclude that I'm wrong. Well, I waited for the HD4670 to be released. Now I'll wait for the 9550GT to be released.
Posted on Reply
#50
DarkMatter
MrMilli said:
-65nm to 55nm brings a theoretical shrink of 19%. That's 19% max.
You are saying: G92 - 324 mm², G92b - 231 mm².
Did nVidia make a 40% shrink? Did it ever occur to you that pcper.com might be wrong?

-G92b is not a true 55nm chip?? WTH! What is it then? 60nm?
Seriously, where did you read that? The chip shrank 18%, which means an almost perfect transition from 65nm to 55nm. Don't let anybody fool you, it's 55nm.

-So you are basically saying that:
Take a 9600GT, cut off ~1/4 of the chip, then clock it really high so it's close to 9600GT performance at $80. Wow, this makes a lot of business sense. *sarcasm*
nVidia will never clock it higher than 650MHz. You can be pretty sure of that.

HD4670: www.newegg.com/Product/Product.aspx?Item=N82E16814500061
9500GT: www.newegg.com/Product/Product.aspx?Item=N82E16814500061

Those are the cheapest prices, $80. The first thing nVidia needs to do before it can even release a 9550GT is drop the 9500GT price to ~$65.
And as I have said before, the HD4670 is a true low-end product. It's cheap to make, and ATI can make it even cheaper.
Just look at that simple design: www.computerbase.de/bild/article/866/17
Very small PCB and very simple power circuitry, comparable to the much slower 9500GT.

So you have called me:
- LOL. You gotta love fanboyism.
- Every time you post, it's only to show your ignorance.
That's really nice of you! I have been on topic the whole time and never called you names, but you still need to say this stuff like a kid. Maybe you are a kid, I don't know.
The only reason we are having this discussion is because you are ignorant.
You look at matters with your limited knowledge of business and electronics, and always conclude that I'm wrong. Well, I waited for the HD4670 to be released. Now I'll wait for the 9550GT to be released.
LOL. I say you are ignorant because you effectively are, mate. And you show it every time you write. You have a hard time understanding things, so I will go part by part again:

- 65 to 55 nm is effectively a ~40% difference in die area (checked in the two-line script near the end of this post). See, it happens that chips are planar, and the fab process number is the minimum distance attainable between two transistors in an array of transistors. Because transistors are laid out in a two-dimensional array: 65² / 55² = 1.39. Now, if you know how to read that number, it means 65 nm is 40% bigger than 55 nm. And to think I have to explain this kind of thing... :shadedshu

- In order to be a true 55nm chip you have to redesign it. For instance, within the chip (any chip), they have to add many redundant transistors whose only job is to refresh/amplify the signal between the ones that do the math (many others are there just to serve as resistors, but more on that later). Those have to be placed at a distance that depends on the resistance of the medium they sit in, just like radio repeaters. This means the optimum distance is constant for the same silicon medium. The transition from G92 to G92b was only optical, meaning the exact same structure was used; but where at 65nm, say, 5 repeaters were needed, at 55nm only 4 would be required. You end up with one that's only occupying space.

The same principle applies to the "resistors". Basically, there are no real resistors on chips; engineers take advantage of parasitic resistance (a bunch of adjacent transistors act like a resistor to other transistors) to set the transistors to the desired output. They try to lay out the whole chip so that all the working transistors (the ones that have a function, that take part in the ALUs etc.) act like resistors to each other, but that is impossible to do 100%, so redundant transistors are required. Because at 55nm you need less voltage, resistor values can be smaller, so you need fewer of them. Again, because G92b is just the same chip made smaller, it has more of them than a true 55nm chip would require.

- They are not cutting off 1/4 of the chip, by any means. See, again you show ignorance.
And what doesn't make sense? Competing doesn't make sense? They are probably releasing a new 9600GT (9650GT?) based on G94b with higher clocks, so it wouldn't compete with it anyway; we really don't know.

- One more thing: the price difference between 128bit and 192bit, or 256bit for that matter, is not that big anymore. There are already lots and lots of 256bit cards at and below $80. The HD3850, for instance, is around there, and it's comparatively A LOT more expensive to produce than the 9550GT will be. The 9600GT is already well below $100, so yeah, Nvidia wouldn't have any problem substituting that cheaper card for the 9600GT if it performed similarly. It wouldn't be the first time Nvidia has done this, and ATI has done it hundreds of times too. It's common business.

Now THINK before posting another ignorant response; I'm getting tired of explaining everything.
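And here is the node math from my first point as a two-line check:

area_factor = 65**2 / 55**2   # feature size scales linearly, area with its square
print(f"65nm vs 55nm: {area_factor:.2f}x area ({1 - 1/area_factor:.0%} smaller)")
# -> 1.40x area (28% smaller), in line with the ~29% G92 -> G92b reduction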

EDIT: OH, and BTW nice try with those newegg links. :roll:
www.newegg.com/Product/Product.aspx?Item=N82E16814500062

9600GT at $80 after MIR:

www.newegg.com/Product/Product.aspx?Item=N82E16814125099
Posted on Reply