
NVIDIA's shady trick to boost the GeForce 9600GT

Hello,

In the article, was the 9600 GT card that was used during the test plugged into a PCIe 2.0 slot or was it using an x16 PCIe slot?

Chris
 
I've searched and searched and searched, and I see nothing supporting these articles.

The benchmark tests in the TechPowerUp article use an NVIDIA nForce 590 SLI motherboard, which only has x16 PCIe 1.x slots, not PCIe 2.0.

I'd like to see someone with a motherboard that has a PCIe 2.0 slot test a 9600 GT and see whether overclocking the PCIe bus makes any difference with the hardware lock.

It just looks to me like you have a 9600 that supports PCIe 2.0, which can handle twice the data rate of PCIe 1.x. So an x16/2.0 card running in an x16 slot will be able to handle additional data from an overclocked PCIe bus, but only because it already supports twice the data rate, as listed above.

It's not actually overclocking, it's just passing more data, isn't it?
Chris
 
How does that matter? Does the 9600 GT utilise the full bandwidth of PCI-E 1.1 in the first place? Other than the bi-directional data rate and power delivery, there's no difference between the architectures.

I wonder how many reviewers used a PCI-E 2.0 board to test this in the first place. The TPU evaluation methodology is very realistic: we don't use an nForce 780i SLI board with a QX9650 to test a 9600 GT, because people with such CPU/platform configurations wouldn't even use a 9600 GT. In the same way, a rather moderate system is used to evaluate the card. AMD 7xx chipsets don't run Intel chips, X38 would seem high-end, and nForce 7xx with its mainstream chipsets is not widespread yet. PCI-E 1.1 is sufficient for even an 8800 GT.
 
Does anybody know if its the same with the 8800gs cards?

No, the only card affected at the moment is the 9600 GT.

PCIe 1.1 or 2.0 does not matter at all for this. The PCIe base frequency does, which is 100 MHz by default on both.
 
Wow, I never knew this would cause such an outcry. Even the developer of RivaTuner has stepped in!
 
Wow, I never knew this would cause such an outcry. Even the developer of RivaTuner has stepped in!

Well, there should be. NVIDIA misled reviewers into thinking that the performance they got on LinkBoost-capable boards applied to all motherboards.

Therefore, if I think the 9600 GT outperforms the HD 3870, use this information to buy one for my P35 board, and then find that my stock performance is worse than the HD 3870, I will be annoyed.
 
Yeah, I guess you are correct, and this should be clearly shown on all specifications. But did they honestly think that no one would realise? If they thought that no one like W1zzard would be curious enough to find this, then they are living in their own bubble.
 
Well, there should be. NVIDIA misled reviewers into thinking that the performance they got on LinkBoost-capable boards applied to all motherboards.

Therefore, if I think the 9600 GT outperforms the HD 3870, use this information to buy one for my P35 board, and then find that my stock performance is worse than the HD 3870, I will be annoyed.


and that is one of the few burning issues.
 
I couldn't follow most of the article, but from what I understood, the PCIe bus determines the clock of this card and brings it well above stock, resulting in a performance boost. Could someone put it in plain English?
W1z, can we expect a review of the 9600 GT at various clocks?
 
A 27 MHz crystal is already on the board for the memory clock, so why not use that, like we did for the last 25 or so years?

I have to agree with using the crystal control method.

There is also another twist here. A 100 MHz PCIe bus frequency is no longer the across-the-board standard. The reference NVIDIA 780i motherboards run a 125 MHz PCIe bus frequency on the PCIe 2.0 graphics card slots by default, and it is not adjustable in the system BIOS either.

Viper
 
I couldn't follow most of the article, but from what I understood, the PCIe bus determines the clock of this card and brings it well above stock, resulting in a performance boost. Could someone put it in plain English?
W1z, can we expect a review of the 9600 GT at various clocks?

Only if the PCI Express frequency is above 100 MHz. It's like how CPUs are clocked now, except it uses the PCI Express bus speed instead of the front side bus. The multiplier is 26 for the 9600 GT.

So 100 / 4 = 25 MHz

25 × 26 = 650 MHz

With a 110 MHz PCI Express bus speed:

110 / 4 = 27.5

27.5 × 26 = 715 MHz

With 125 MHz: 812.5 MHz

So the last one applies to LinkBoost.
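As a quick sanity check of that arithmetic, here is a minimal sketch assuming the divide-by-4 reference and the ×26 multiplier quoted above (those values come from this thread, not from any official NVIDIA documentation):

```python
# Hypothetical sketch: derive the 9600 GT core clock from the PCIe bus
# frequency, using the /4 reference divider and x26 multiplier quoted above.

def core_clock_mhz(pcie_bus_mhz: float, multiplier: int = 26) -> float:
    """Core clock = (PCIe bus frequency / 4) * multiplier."""
    return pcie_bus_mhz / 4.0 * multiplier

for bus in (100, 110, 125):
    print(f"PCIe bus {bus} MHz -> core {core_clock_mhz(bus):.1f} MHz")

# PCIe bus 100 MHz -> core 650.0 MHz
# PCIe bus 110 MHz -> core 715.0 MHz
# PCIe bus 125 MHz -> core 812.5 MHz
```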
 
You're mistaken. All the tools you've mentioned, including nTune, GPU-Z and ExpertTool, show just the target clocks, which you "ask" to be set. So you'll always see "correct" clocks there regardless of the real PLL state, thermal throttling conditions, etc.
The real clocks generated by the hardware normally differ from the target ones. And there are only two tools that allow you to monitor the real PLL clocks: RivaTuner and Everest. The rest will give you target clocks only.

Now there is a useful piece of info!
 
I've searched and searched and searched, and I see nothing supporting these articles.

The benchmark tests in the TechPowerUp article use an NVIDIA nForce 590 SLI motherboard, which only has x16 PCIe 1.x slots, not PCIe 2.0.

I'd like to see someone with a motherboard that has a PCIe 2.0 slot test a 9600 GT and see whether overclocking the PCIe bus makes any difference with the hardware lock.

It just looks to me like you have a 9600 that supports PCIe 2.0, which can handle twice the data rate of PCIe 1.x. So an x16/2.0 card running in an x16 slot will be able to handle additional data from an overclocked PCIe bus, but only because it already supports twice the data rate, as listed above.

It's not actually overclocking, it's just passing more data, isn't it?
Chris

As Wile E suggests, run a benchmark such as 3DMark06 with identical system settings and identical GFX core/shader and memory speeds; just ensure that on one run your PCI-E bus speed is at 105 or 110 MHz, and on the other run keep it at the default 100 MHz. You will see the difference for yourself; the proof of the pudding will be in the eating, so to speak.

I understand your points (and frustrations), but liken this to the following: you have a car whose top speed on the flat is 150 miles an hour, and no matter what you try, you cannot get it to go faster. Then I come along with a mod for your engine and fit it, but you cannot find the mod, and looking through all the manuals for the engine you can find no difference, so you don't believe it exists. However, you get into your car, drive it flat out on the flat, and find it does 175 miles an hour. Do you still not believe I fitted the mod?

Now I know that's a fairly random, dreamt-up story, but give it a try (the GFX card, that is, not the car!) :D

One other thing: a 9600 could not possibly come close to using the bandwidth that PCI-E 2.0 has to offer (5 Gbit/s per lane), so that cannot be a factor; it won't even saturate PCI-E 1.1 (2.5 Gbit/s per lane). It's not a case of "handling" it, the throughput is not there in the first place to utilise the bandwidth; even an 8800 Ultra cannot use the full PCI-E 1.1 bandwidth, let alone PCI-E 2.0's.
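For context, a rough sketch of the theoretical per-direction bandwidth of an x16 link on each generation, assuming the standard per-lane signalling rates (2.5 and 5 Gbit/s) and 8b/10b encoding:

```python
# Back-of-the-envelope PCIe bandwidth, per direction, for an x16 slot.
# PCIe 1.x signals at 2.5 Gbit/s per lane and PCIe 2.0 at 5 Gbit/s, with
# 8b/10b encoding (10 bits on the wire per byte of payload).

def x16_bandwidth_gb_s(gbit_per_lane: float, lanes: int = 16) -> float:
    bytes_per_lane_gb_s = gbit_per_lane / 10.0  # after 8b/10b overhead
    return bytes_per_lane_gb_s * lanes

print(f"PCIe 1.x x16: {x16_bandwidth_gb_s(2.5):.1f} GB/s per direction")  # 4.0
print(f"PCIe 2.0 x16: {x16_bandwidth_gb_s(5.0):.1f} GB/s per direction")  # 8.0
```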
 
Oh boy

OK, I just read the report and also the complete thread in this forum.
And I'm wondering about something: how many of you have read the complete report? The questions being asked are just ridiculous. Guys like 'cbunting' must be pulling our leg; it can't be that he's that stupid. I hope those of you asking these crazy questions did notice that the report is FOUR pages. Everything being asked here is answered in the report (if you had read the article, you wouldn't have the questions in the first place). W1zzard, I really wonder why you even bother answering things that can be found in the report. Even a 14-year-old kid has harder stuff to learn in school than this report.
 
True. But there are avid supporters who believe the large companies of their choice can do no wrong, or who want to know in detail the steps taken to test.



Either way, W1zz is reputable and has not misled or lied before. He uses some of the best test mixes and top-notch setups to test hardware. Some might actually believe that the PCI-e 2.0 standard will provide instant benefits, from reading the whitepapers and taking them at face value. However, users with more experience know that the current PCI-e bus is nowhere near saturation, and increasing the bus speed does little to nothing other than eliminate a fractional latency in reads and writes. Now that we have an add-in board basing its timing off the PCI-e bus, we have a real reason to experiment, but on boards that have stability issues at higher frequencies, the card in question will underperform relative to what the manufacturer has stated.
 
OK, I just read the report and also the complete thread in this forum.
And I'm wondering about something: how many of you have read the complete report? The questions being asked are just ridiculous. Guys like 'cbunting' must be pulling our leg; it can't be that he's that stupid. I hope those of you asking these crazy questions did notice that the report is FOUR pages. Everything being asked here is answered in the report (if you had read the article, you wouldn't have the questions in the first place). W1zzard, I really wonder why you even bother answering things that can be found in the report. Even a 14-year-old kid has harder stuff to learn in school than this report.

I have to agree. Bringing up the PCIe 1.x/2.0 spec bandwidth difference, and any performance difference due to that, is irrelevant in the scope of the report.

What is relevant is that NV is apparently using the PCIe bus frequency divided by 4 as the core clock generator's master/base frequency for the 9600 GT, instead of a 27 MHz crystal as on all other cards.

Simply plugging the card into a motherboard that runs a higher default PCIe bus frequency, such as a reference 780i (125 MHz default on the PCIe 2.0 slots), will cause the 9600 GT to run higher core clock speeds and give higher benchmark scores than plugging it into a motherboard that runs a 100 MHz default PCIe bus frequency. The user/tester doesn't have to do anything else, and may not even notice the core clock speed difference, just the higher benchmark scores.

I cannot help but think this artificial performance increase the 9600 GT gets when plugged into a 780i, with its higher default PCIe bus frequency, was somehow accidentally overlooked by NV!!!
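A quick sketch of the size of that silent uplift, reusing the divide-by-4 reference and ×26 multiplier quoted earlier in the thread (values taken from this thread, not from official documentation):

```python
# Relative core clock uplift a reviewer would see on a reference 780i board
# (125 MHz PCIe slots) versus a plain 100 MHz board, without touching anything.

def core_clock_mhz(pcie_bus_mhz: float, multiplier: int = 26) -> float:
    return pcie_bus_mhz / 4.0 * multiplier

baseline = core_clock_mhz(100)   # 650.0 MHz on a 100 MHz board
on_780i = core_clock_mhz(125)    # 812.5 MHz on a reference 780i
print(f"{on_780i / baseline - 1:.0%} higher core clock out of the box")  # 25%
```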

Viper
 
I doubt the 780i PCI-E bus speed was overlooked; hell, they made the chipset.
 
So what do you guys suggest as a good method of overclocking my 9600 GT with this knowledge? Should I set the bus speed to 100 first, OC the core, mem and shader to the highest stable settings, then slowly bring the bus speed up? Or do the opposite: get the bus speed up as high as I can while stable, then start OCing the core, mem and shaders?
 
Hello All,

I've not been trying to start a debate over the article. But I do see that none of the software used in the article is accurate. The 27 MHz frequency is nothing new; I listed an article where this was covered in another review, which the author of RivaTuner also discussed when it was found, what, three years ago?

In regards to what I have been trying to say about all of this unsupported software: I was simply trying to see how this article came about and what PROOF there was to support the theory. But the whole report is based on results shown in screenshots of GPU-Z, RivaTuner and 3DMark06, all programs that are fairly outdated compared to today's hardware.

---------

Going back over all of my posts, this is all I can find to give/take away from what I got from the article. But as in my original reply, I do not see anything to support this article or the hundreds of others. If these cards could change clock frequencies based on the PCIe bus frequency, we would have some very unstable cards, as they already come OC'd from the factory and some cannot be pushed much more without getting hot, causing black screens, monitors shutting off, etc.

Stock 9600 GT / Bandwidth

675/1800/1700 - Bandwidth: 57.6 GB/s

OC'd 9600 GT / Bandwidth

750/2000/1918 - Bandwidth: 64.0 GB/s
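Those bandwidth figures are simply the card's memory bandwidth: effective memory clock times the 9600 GT's 256-bit bus width, divided by eight. A quick sketch of that arithmetic:

```python
# Memory bandwidth (GB/s) = effective memory clock (MHz) * bus width (bits) / 8 / 1000.
# The 9600 GT has a 256-bit memory interface.

def mem_bandwidth_gb_s(effective_mem_clock_mhz: float, bus_width_bits: int = 256) -> float:
    return effective_mem_clock_mhz * bus_width_bits / 8 / 1000

print(f"Stock (1800 MHz): {mem_bandwidth_gb_s(1800):.1f} GB/s")  # 57.6
print(f"OC'd  (2000 MHz): {mem_bandwidth_gb_s(2000):.1f} GB/s")  # 64.0
```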



3DMark06 reports the differences with the overclocked settings as normal. However, if you set the card back to the factory clocks as listed above, the bandwidth also changes back to normal. So while the card is set back to the stock clocks, if I OC the PCIe bus, I am increasing the bandwidth not by OC'ing the card but by OC'ing the bus itself. But the problem is that 3DMark06 seems to think this increase in bandwidth is coming from an OC'ed card. So the results/card readings appear as if the card is overclocked, and 3DMark shows a change in the core/mem/shader clocks based on the increased bandwidth of the OC'ed PCIe bus.

What does it mean? That 3DMark06 isn't accurate, much like RivaTuner. NVIDIA won't comment because they can't. Nothing has changed on these cards other than the fact that no software out right now actually supports them. We don't even have decent drivers at this point, so all in all, no software is giving factual readings of these cards.

These are just my own opinions from my own research. Again, I am not saying that I am right. I am saying, however, that I find it a bit odd that only one review has been posted, whereas no other 9600 GT owner can produce similar results.

Chris

BTW:
If someone knows where 3DMark06 takes changes to the PCIe bus frequency into account and calculates these increased bandwidth figures there, instead of showing them as OC'd settings within the card, please let me know, because I have not found this anywhere in 3DMark06.
 
Please Note:

I missed something from the main article. From all of my tests and research, I am doing all of this on a custom-built computer. Looking back at the original article, in which I too asked about the nForce 590 motherboard, it seems that the only people who may gain anything from OC'ing the PCIe bus are those with a 590 motherboard or better.

The marchitecture is currently code-named Trinity and denotes a combination of Trinity-enabled motherboard (the 590s) and Trinity-enabled graphics cards. Once that combination is set in motion, the available bandwidth for the cards will go from 4GB/s to 5.2 GB/s, essentially – a legal overclock of the PCIe bus, going from x16 to "x20" or "x22", depending on the final stability tests the company conducted recently.

http://www.theinquirer.net/en/inquirer/news/2006/04/27/nvidia-set-to-overclock-the-pcie-bus

I do not have an nForce board; therefore, I also don't have LinkBoost technology when I OC my PCIe bus. So for those of us who don't have an nForce board, you do need to be careful OC'ing the PCIe bus, as you can burn up your video card or corrupt data on your hard drive, among other harmful things. On most motherboards, overclocking the bus can cause instability, so unless you have an nForce board, just be careful.

Again, this was something I missed and didn't understand, because the article pointed at the 9600 card, specifically in the title, but it has everything to do with nForce boards.

Chris

BTW
The reason this article is confusing is the title!

NVIDIA's shady trick to boost the GeForce 9600GT

That is wrong! If anything, the article should have been called:

nVidia's advancement of Technology in the nForce Trinity-enabled motherboards

There isn't anything shady about NVIDIA creating video cards that support the latest advancements of their nForce boards. NVIDIA is the creator of both, so I don't see how anyone considers that shady.
 
The 'shady' part is that they've not communicated this 'advancement' to the reviewers. Reviewers take the card and evaluate it against other competing hardware. That's the gripe; the article's conclusion gives the reasoning for that.
 
...

3DMark06 reports the differences with the overclocked settings as normal. However, if you set the card back to the factory clocks as listed above, the bandwidth also changes back to normal. So while the card is set back to the stock clocks, if I OC the PCIe bus, I am increasing the bandwidth not by OC'ing the card but by OC'ing the bus itself. But the problem is that 3DMark06 seems to think this increase in bandwidth is coming from an OC'ed card. So the results/card readings appear as if the card is overclocked, and 3DMark shows a change in the core/mem/shader clocks based on the increased bandwidth of the OC'ed PCIe bus.

...

First of all, you are pointing out the main problem with this feature here:
that the 9600 GT gets a higher core speed simply by increasing the PCI-E bus. That gives a great boost in fill rates and such, as proven on the second page: http://www.techpowerup.com/reviews/NVIDIA/Shady_9600_GT/2.html

That gives motherboards with automatic PCI-E overclocking, or a higher PCI-E clock as standard, a performance boost of between 5 and 25%. But that doesn't apply to every motherboard out there, and therefore people buying the card don't know what performance to expect on their Intel, AMD or NVIDIA chipset.

And yes, the extra bandwidth comes from an overclocked card.

...

BTW
The reason this article is confusing is the title!

NVIDIA's shady trick to boost the GeForce 9600GT

That is wrong! If anything, the article should have been called:

nVidia's advancement of Technology in the nForce Trinity-enabled motherboards

...

There you are wrong again, as the same performance boost would apply to a manually overclocked Intel board as well. This article isn't about whether NVIDIA invented a good thing or not; it's about why they kept quiet about it, even when asked directly.
It's a shady way to increase performance in reviews, so it might not match the performance regular buyers would get from the card in their almost identical systems at home.
 
I think NVIDIA attaches great importance to early reviews. If they have deliberately kept quiet about this "change", it is to make the odd review make the card seem better than it really is. Deliberate or incompetent, neither can be considered a merit mark.

It's also quite clear that many contributors to this thread are posting nonsense because they haven't properly read the article or the thread.

trog
 
Hello,

I've talked to a friend who stopped by. He said that the main reason those who overclock for high scores in 3DMark06 and so forth overclock the PCIe bus is that it "does" raise the core clock on the cards.

What I find confusing about the article, as well as all of the replies to my own, is that there are tons of articles dating back to 2006 stating that the nForce boards have LinkBoost and/or allow changes to the PCIe frequency. So if these features have been available for over two years, and the G80+ chipsets have also supported this for the past year or more, why would it only now be noticed with a 9600 GT card, when there are reviews and tech sites that say it's been available for over a year, maybe two?

I don't know when NVIDIA first offered the nForce 590 board or the G80 chipset, but there seems to be enough info stating that this shady feature has been around for quite some time.

Other sites first mentioned it in 2006, but some people are only now finding out while testing all of these new cards, since there seems to be so much hype around them.

Chris
 
This will be my last post on this subject.

There are many reasons why I don't understand the whole concept behind the article and what exactly it has to do with the 9600 GT card. The article is four pages long and talks about two-year-old features that are now defunct for the most part.

Here is why an article written in February 2008 doesn't make sense to me.

According to NVIDIA's website, LinkBoost has been removed from the 590 and 680i series chipsets. Only older boards with the 590 still support it.

That happened some time ago.

Were there cards other than the 9600 that supported LinkBoost and PCIe overclocking?

The GeForce 79xx series and above featured these additions.

So all in all, everything listed in the article has been around since 2006. But like I said in the reply above, I don't see how a 2008 review can call NVIDIA shady because of the 9600 when it's obvious that the article was written about features that have been around since at least 2006.

Chris
 