Originally Posted by DarkMatter
I have heard and read a lot of people claiming big performance boosts from shader overclocking alone, so I'll stand by my view.
The fact that the new Catalyst 7.8 makes the 2900 perform better only corroborates my point: shader power is what performance demands. And if there's one thing Radeons have, it's shader power. Better said, they have as much as 2x the theoretical peak shader power of their Nvidia counterparts.
I didn't know how the shaders behave when you overclock the core; I assumed their clock rose too, but I doubted it was in a linear fashion. Now I know it's not linear. Nevertheless, you have to agree that 1860 (or 1836, for that matter) is higher than 1761, don't you?
Now, if you want, you can join me in the "more shader power, more performance" club, which, BTW, is backed up by the 8800GT and many 3DMark records, lekamies' case for example.
I agree completely about the shader power; I think you are misunderstanding what I'm trying to say regarding the 1761. The 1761 is not independently overclocked. Unlike the R600 series (where the shader clock is locked and fixed at a lower speed than Nvidia's), on the G80 the shader clock automatically increases as you raise the core clock. So when a user sets his 8800GTS core speed to 863 MHz, he already has a shader clock of 1761. He can then go into RivaTuner and overclock the shader clock separately without touching the core (he has probably already hit his core max at 863 MHz). So my point was that the chances are his shaders are well beyond 1761 if he knows what he is doing, and achieving that score on an 8800GTS suggests he does... make sense? I'm saying 1761 is the minimum it can be, and it's likely to be much higher. Of course we don't know for sure. What we do know, though, is that you would expect it to be a lot LOWER, because the difference in CPU speed and core clocks is large, and the faster card (perhaps not the best terminology) is actually the slower one.
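To make the linked-clock point concrete, here's a minimal sketch using only the numbers quoted above (863 MHz core coming with a 1761 MHz shader clock on that particular 8800GTS). The linear ratio is a simplification I'm assuming for illustration; real G80 hardware quantizes the shader domain into discrete steps, so actual linked values won't track the core perfectly.

```python
# Sketch of the G80 linked core/shader clock relation described above.
# Numbers come from the thread: 863 MHz core -> 1761 MHz shader on this card.
# Assumption: a simple linear ratio; real hardware steps the shader clock.

CORE_MHZ = 863            # core clock the user reported
SHADER_MHZ = 1761         # shader clock that comes linked with it
RATIO = SHADER_MHZ / CORE_MHZ   # ~2.04 on this card

def linked_shader_clock(core_mhz: float) -> int:
    """Shader clock implied by the core clock while the domains are linked."""
    return round(core_mhz * RATIO)

# At 863 MHz core the linked shader clock is the 1761 MHz baseline;
# RivaTuner can then push the shader domain independently above that.
print(linked_shader_clock(863))   # 1761
```

The point of the sketch: 1761 is what the card reaches *automatically* at 863 MHz core, which is why it's a floor, not a ceiling, for the shader clock.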