
NVIDIA's shady trick to boost the GeForce 9600GT

cbunting

New Member
Joined
Mar 1, 2008
Messages
26 (0.00/day)
Hello,

In the article, was the 9600 GT card that was used during the test plugged into a PCIe 2.0 slot or was it using an x16 PCIe slot?

Chris
 

cbunting

New Member
Joined
Mar 1, 2008
Messages
26 (0.00/day)
I've searched and searched and searched, and I see nothing supporting these articles.

The benchmark tests in the TechPowerUp article use the NVIDIA nForce 590 SLI motherboard, which only has x16 PCIe slots, not PCIe 2.0.

I'd like to see someone who has a motherboard with a PCIe 2.0 slot try a 9600 GT and see if overclocking the PCIe bus makes any difference with the hardware lock.

It just looks to me like you have a 9600 that supports PCIe 2.0, which can handle twice the data rate of PCIe 1.x. So an x16/2.0 card running in an x16 slot will be able to handle additional data from an overclocked PCIe bus, but only because it already supports twice the data rate, as listed above.

It's not actually overclocking, it's just passing more data, isn't it?
Chris
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,343 (7.68/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
How does that matter? Does the 9600 GT utilize the full bandwidth of PCI-E 1.1 in the first place? Other than the bi-directional data rate and the power delivery, there's no difference between the architectures.

I wonder how many reviewers used a PCI-E 2.0 board to test this in the first place. The TPU evaluation methodology is very realistic: we don't use an nForce 780i SLI board with a QX9650 to test a 9600 GT, because people with such CPU/platform configurations wouldn't even use a 9600 GT. In the same way, a rather moderate system is used to evaluate the board. AMD 7xx chipsets don't run Intel chips, X38 would seem high-end, and nForce 7xx is not yet widespread with its mainstream chipsets. PCI-E 1.1 is sufficient for even an 8800 GT.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,028 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
Does anybody know if its the same with the 8800gs cards?

no, the only cards affected at the moment are 9600 gt

pcie 1.1 or 2.0 does not matter at all for this. the pcie base frequency does, which is 100 mhz by default on both
 
Joined
Jul 30, 2007
Messages
6,560 (1.07/day)
System Name Vintage
Processor i7 - 3770K @ Stock
Cooling Scythe Zipang II
Memory 2x4GB Crucial DDR3
Video Card(s) MSI GTX970
Storage M4 124GB SSD// WD Black 640GB// WD Black 1TB//Samsung F3 1.5TB
Display(s) Samsung SM223BW 21.6"
Case Generic
Power Supply Corsair HX 520W
Software Windows 7
Wow, I never knew this would cause such an outcry. Even the developer of RivaTuner has stepped in!
 
Joined
Jun 3, 2006
Messages
1,328 (0.20/day)
Location
London
Wow, I never knew this would cause such an outcry. Even the developer of RivaTuner has stepped in!

Well, there should be. Nvidia misled reviewers into thinking that the performance they got on LinkBoost-capable boards applied to all motherboards.

Therefore, if I think the 9600GT outperforms the HD 3870, use this information to buy one for my P35 board, and then find that my stock performance is worse than the HD 3870's, I will be annoyed.
 
Joined
Jul 30, 2007
Messages
6,560 (1.07/day)
System Name Vintage
Processor i7 - 3770K @ Stock
Cooling Scythe Zipang II
Memory 2x4GB Crucial DDR3
Video Card(s) MSI GTX970
Storage M4 124GB SSD// WD Black 640GB// WD Black 1TB//Samsung F3 1.5TB
Display(s) Samsung SM223BW 21.6"
Case Generic
Power Supply Corsair HX 520W
Software Windows 7
Yeah, I guess you are correct, and this should be clearly shown on all specifications. But did they honestly think that no one would realise? If they thought that no one like W1zzard would be curious enough to find this, then they are living in their own bubble.
 
Joined
May 19, 2007
Messages
7,662 (1.24/day)
Location
c:\programs\kitteh.exe
Processor C2Q6600 @ 1.6 GHz
Motherboard Anus PQ5
Cooling ACFPro
Memory GEiL2 x 1 GB PC2 6400
Video Card(s) MSi 4830 (RIP)
Storage Seagate Barracuda 7200.10 320 GB Perpendicular Recording
Display(s) Dell 17'
Case El Cheepo
Audio Device(s) 7.1 Onboard
Power Supply Corsair TX750
Software MCE2K5
Well, there should be. Nvidia misled reviewers to think that the performance they got on linkboost capable boards applied to all motherboards.

Therefore, if i think the 9600gt outperforms the hd 3870 and i use this information to buy one of those on my p35 board and then find that my stock performance is worse than the hd3870, i will be annoyed.


and that is one of the few burning issues.
 
Joined
Nov 22, 2007
Messages
1,398 (0.23/day)
Location
Hyderabad,India
System Name MSI apache ge62 2qd
Processor intel i7 5700HQ
Memory 12 Gb
Video Card(s) GTX960m
Storage 1TB
Display(s) Dell 24'
I couldn't get most of the article, but from what I understood, the PCIe bus determines the clock of this card, and raising it brings the clock of this card much above stock, resulting in a performance boost. Could someone put it in plain English?
W1z, can we expect a review of the 9600GT at various clocks?
 

ViperJohn

Radeon Mod God
Joined
Aug 26, 2004
Messages
149 (0.02/day)
a 27 mhz crystal is already on the board for the memory clock, so why not use that like we did for the last 25 or so years?

I have to agree with using the crystal control method.

There is also another twist here. A 100 MHz PCIe bus frequency is no longer the across-the-board standard. The reference NV 780i motherboards run a 125 MHz PCIe bus frequency on the PCIe 2.0 GFX card slots as the default, and it is not adjustable in the system BIOS either.

Viper
 
Joined
Jun 3, 2006
Messages
1,328 (0.20/day)
Location
London
I couldn't get most of the article, but from what I understood, the PCIe bus determines the clock of this card, and raising it brings the clock of this card much above stock, resulting in a performance boost. Could someone put it in plain English?
W1z, can we expect a review of the 9600GT at various clocks?

Only if the PCI-Express frequency is above 100 MHz. It's like how CPUs are clocked now: it uses the PCI-Express bus speed instead of the front side bus. The multiplier is 26 for the 9600GT.

So 100/4 = 25 MHz

25 x 26 = 650 MHz

With a 110 MHz PCI-Express bus speed:

110/4 = 27.5

27.5 x 26 = 715 MHz

With 125 MHz: 812.5 MHz

So the last one applies for LinkBoost.
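The arithmetic above can be wrapped in a tiny Python helper (the divide-by-4 and the fixed multiplier of 26 are as described in the article; the function name is mine):

```python
def core_clock_mhz(pcie_bus_mhz, multiplier=26):
    """Approximate 9600 GT core clock derived from the PCIe bus frequency.

    Per the article: the card's core PLL runs off the PCIe bus clock,
    base = bus / 4, core = base * multiplier (26 at stock).
    """
    return pcie_bus_mhz / 4 * multiplier

print(core_clock_mhz(100))  # default 100 MHz bus -> 650.0 (stock core)
print(core_clock_mhz(110))  # -> 715.0
print(core_clock_mhz(125))  # 780i/LinkBoost default -> 812.5
```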
 

Tatty_Two

Gone Fishing
Joined
Jan 18, 2006
Messages
25,801 (3.87/day)
Location
Worcestershire, UK
Processor Rocket Lake Core i5 11600K @ 5 Ghz with PL tweaks
Motherboard MSI MAG Z490 TOMAHAWK
Cooling Thermalright Peerless Assassin 120SE + 4 Phanteks 140mm case fans
Memory 32GB (4 x 8GB SR) Patriot Viper Steel 4133Mhz DDR4 @ 3600Mhz CL14@1.45v Gear 1
Video Card(s) Asus Dual RTX 4070 OC
Storage WD Blue SN550 1TB M.2 NVME//Crucial MX500 500GB SSD (OS)
Display(s) AOC Q2781PQ 27 inch Ultra Slim 2560 x 1440 IPS
Case Phanteks Enthoo Pro M Windowed - Gunmetal
Audio Device(s) Onboard Realtek ALC1200/SPDIF to Sony AVR @ 5.1
Power Supply Seasonic CORE GM650w Gold Semi modular
Mouse Coolermaster Storm Octane wired
Keyboard Element Gaming Carbon Mk2 Tournament Mech
Software Win 10 Home x64
You're mistaken. All the tools you've mentioned, including nTune, GPU-Z and ExpertTool, show just the target clocks, which you "ask" to set. So you'll always see "correct" clocks there regardless of the real PLL state, thermal throttling conditions, etc.
The real clocks generated by hardware must be, and normally are, different from the target ones. And there are only two tools allowing you to monitor real PLL clocks: RivaTuner and Everest. The rest will give you target clocks only.

Now there is a useful piece of info!
 

Tatty_Two

Gone Fishing
Joined
Jan 18, 2006
Messages
25,801 (3.87/day)
Location
Worcestershire, UK
Processor Rocket Lake Core i5 11600K @ 5 Ghz with PL tweaks
Motherboard MSI MAG Z490 TOMAHAWK
Cooling Thermalright Peerless Assassin 120SE + 4 Phanteks 140mm case fans
Memory 32GB (4 x 8GB SR) Patriot Viper Steel 4133Mhz DDR4 @ 3600Mhz CL14@1.45v Gear 1
Video Card(s) Asus Dual RTX 4070 OC
Storage WD Blue SN550 1TB M.2 NVME//Crucial MX500 500GB SSD (OS)
Display(s) AOC Q2781PQ 27 inch Ultra Slim 2560 x 1440 IPS
Case Phanteks Enthoo Pro M Windowed - Gunmetal
Audio Device(s) Onboard Realtek ALC1200/SPDIF to Sony AVR @ 5.1
Power Supply Seasonic CORE GM650w Gold Semi modular
Mouse Coolermaster Storm Octane wired
Keyboard Element Gaming Carbon Mk2 Tournament Mech
Software Win 10 Home x64
I've searched and searched and searched, and I see nothing supporting these articles.

The benchmark tests in the TechPowerUp article use the NVIDIA nForce 590 SLI motherboard, which only has x16 PCIe slots, not PCIe 2.0.

I'd like to see someone who has a motherboard with a PCIe 2.0 slot try a 9600 GT and see if overclocking the PCIe bus makes any difference with the hardware lock.

It just looks to me like you have a 9600 that supports PCIe 2.0, which can handle twice the data rate of PCIe 1.x. So an x16/2.0 card running in an x16 slot will be able to handle additional data from an overclocked PCIe bus, but only because it already supports twice the data rate, as listed above.

It's not actually overclocking, it's just passing more data, isn't it?
Chris

As Wile E suggests, run a bench such as 3DMark06 with identical system settings and identical GFX core/shader and memory speeds; just ensure that on one run your PCI-E bus speed is at 105 or 110 MHz, and on the other run keep it at the default 100 MHz. You will see the difference for yourself! The proof of the pudding will be in the eating, so to speak.

I understand your points (and frustrations), but liken this to the fact that you have a car whose top speed on the flat is 150 miles an hour. No matter what you try, you cannot get it to go faster. Then I come along with a mod for your engine and fit it, but you cannot find the mod, and looking through all the manuals for the engine you can find no difference, so you don't believe it exists. However, you get into your car and drive it flat out on the flat, and you find it does 175 miles an hour. Do you still not believe I fitted the mod?

Now I know that's a fairly random dreamt-up story, but give it a try (the gfx card that is, not the car!)........:D

One other thing: a 9600 could not possibly use all the bandwidth that PCI-E 2.0 has to offer (5 Gbit/s per lane), so that cannot be a factor; it won't even saturate 2.5 Gbit/s (PCI-E 1.1). It's not a case of "handling" it; the throughput is not there in the first place to utilise the bandwidth. Even an 8800 Ultra cannot saturate the 2.5 Gbit/s bandwidth, let alone PCI-E 2.0's 5 Gbit/s.
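For anyone wanting to check those numbers: the 2.5 Gbit/s and 5 Gbit/s figures are per-lane signalling rates, and both generations use 8b/10b encoding (8 payload bits per 10 bits on the wire). A rough sketch of the resulting per-direction x16 bandwidth (the helper name is mine):

```python
def pcie_x16_bandwidth_gbs(gen):
    """Per-direction bandwidth of a PCIe x16 link, in GB/s.

    Gen 1.x signals at 2.5 GT/s per lane, Gen 2.0 at 5 GT/s; both use
    8b/10b encoding, so only 8 of every 10 bits carry payload.
    """
    gts_per_lane = {1: 2.5, 2: 5.0}[gen]
    payload_gbits = gts_per_lane * 16 * 8 / 10  # 16 lanes, 8b/10b overhead
    return payload_gbits / 8                    # bits -> bytes

print(pcie_x16_bandwidth_gbs(1))  # -> 4.0 GB/s (PCIe 1.x x16)
print(pcie_x16_bandwidth_gbs(2))  # -> 8.0 GB/s (PCIe 2.0 x16)
```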
 
Joined
Mar 1, 2008
Messages
282 (0.05/day)
Location
Antwerp, Belgium
Oh boy

OK, I just read the report and also the complete thread in this forum.
And I'm wondering about something: how many of you have actually read the complete report? The questions being asked are just ridiculous. Guys like 'cbunting' must be pulling our leg... it can't be that he's that stupid. I hope those of you asking these crazy questions did see that the report is FOUR pages. Everything being asked here is answered in the report (if you had read the article, you wouldn't have the questions in the first place). W1zzard, I really wonder why you even bother answering something that can be found in the report. Even a 14-year-old kid has harder stuff to learn in school than this report.
 
Joined
Nov 4, 2005
Messages
11,674 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
True. But there are avid supporters who believe large companies of their choice can do no wrong, or they want to know in detail the steps taken to test.

Either way, W1zz is reputable and has not misled or lied before. He uses some of the best mixes to test hardware, and top-notch setups. Some might actually believe that the PCI-e 2.0 standard will provide instant benefits, having read the whitepapers and taken them at face value. However, users with more experience know that the current PCI-e bus is nowhere near saturation, and increasing the bus speed does little to nothing other than eliminate a fractional latency in reads and writes. Now that we have an add-in board basing its timing off the PCI-e bus, we have a real reason to experiment; but on boards that have stability issues with higher frequencies, the card in question will underperform what the manufacturer has stated.
 

ViperJohn

Radeon Mod God
Joined
Aug 26, 2004
Messages
149 (0.02/day)
OK, I just read the report and also the complete thread in this forum.
And I'm wondering about something: how many of you have actually read the complete report? The questions being asked are just ridiculous. Guys like 'cbunting' must be pulling our leg... it can't be that he's that stupid. I hope those of you asking these crazy questions did see that the report is FOUR pages. Everything being asked here is answered in the report (if you had read the article, you wouldn't have the questions in the first place). W1zzard, I really wonder why you even bother answering something that can be found in the report. Even a 14-year-old kid has harder stuff to learn in school than this report.

I have to agree. Bringing up the PCIe 1.x/2.0 spec bandwidth difference, and any performance difference due to it, is irrelevant in the scope of the report.

What is relevant is that NV is apparently using the PCIe bus frequency divided by 4 as the core clock generator's master/base frequency for the 9600GT, instead of a 27 MHz crystal as on all other cards.

Simply plugging the card into a motherboard that runs a higher default PCIe bus frequency, such as a reference 780i (125 MHz default on the PCIe 2.0 slots), will cause the 9600GT to run higher core clock speeds and give higher benchmark scores than plugging it into a motherboard that runs a 100 MHz default PCIe bus frequency slot. The user/tester doesn't have to do anything else and may not even notice the core clock speed difference... just the higher benchmark scores.

I can't help but think this artificial performance increase the 9600GT gets when plugged into a 780i, with its higher default PCIe bus frequency, was somehow accidentally overlooked by NV!!!

Viper
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.27/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
I doubt the 780i PCI-E bus speed was overlooked; hell, they made the chipset.
 
Joined
Jan 25, 2008
Messages
137 (0.02/day)
Location
PA,USA
Processor AMD 5000+ Black Edition
Motherboard Asus M2N32-SLI Deluxe Wireless Edition
Cooling ZEROtherm BTF90
Memory 2x 1GB OCZ Platinum Revision 2 DDR2 800
Video Card(s) Gigabyte 9600GT 512MB
Storage Seagate Barracuda 320GB
Display(s) Acer 20" LCD
Case Antec Sonata III
Audio Device(s) Razer Barracuda AC 1
Power Supply Antec Earthwatts 500w
So what do you guys suggest as a good method of overclocking my 9600GT with this knowledge? Should I set the bus speed to 100 first, OC the core, mem and shader to the highest stable settings, then slowly bring the bus speed up? Or do the opposite: get the bus speed up as high as I can while stable, then start OC'ing the core, mem and shaders?
 

cbunting

New Member
Joined
Mar 1, 2008
Messages
26 (0.00/day)
Hello All,

I've not been trying to start a debate over the article. But I do see that none of the software used in the article is accurate. The 27 MHz frequency is nothing new, as I listed an article where this was covered in another review, which the author of RivaTuner also discussed when it was found, what, 3 years ago?

In regards to what I have been trying to say about all of this unsupported software: I was simply trying to see how this article came about and what PROOF there was to support the theory. But the whole report is based on results showing screenshots of GPU-Z, RivaTuner and 3DMark06, all programs that are fairly outdated compared to today's hardware.

---------

Going back over all of my posts, this is all I can find to give/take away from what I got from the article. But as in my original reply, I do not see anything to support this article or the hundreds of others. If these cards could change clock frequencies based on the PCIe bus frequency, we would have some very unstable cards, as they already come OC'd from the factory and some cannot be pushed much more without getting hot, causing black screens, monitors shutting off, etc.

Stock 9600 GT / Bandwidth

675/1800/1700 - Bandwidth: 57.6GB/s

OC'd 9600 GT / Bandwidth

750/2000/1918 - Bandwidth: 64.0 GB/s
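Those bandwidth figures follow directly from the 9600 GT's 256-bit memory interface: bandwidth = effective memory data rate × bus width. A quick sketch (the helper name is mine):

```python
def mem_bandwidth_gbs(effective_mhz, bus_width_bits=256):
    """Memory bandwidth in GB/s for a given effective (DDR) memory clock.

    The 9600 GT has a 256-bit memory interface, so each transfer
    moves 256 / 8 = 32 bytes.
    """
    return effective_mhz * 1e6 * bus_width_bits / 8 / 1e9

print(mem_bandwidth_gbs(1800))  # stock 1800 MHz effective -> 57.6 GB/s
print(mem_bandwidth_gbs(2000))  # OC'd 2000 MHz effective  -> 64.0 GB/s
```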



3DMark06 reports the differences with the overclock settings as normal. However, if you set the card back to the factory clocks as listed above, the bandwidth also changes back to normal. So while the card is set back to the stock clocks, if I OC the PCIe bus, I am increasing the bandwidth not by OC'ing the card but by OC'ing the bus itself. But the problem is that 3DMark06 seems to think this increase in bandwidth is coming from an OC'd card. So the results/card readings appear as if the card is overclocked, and 3DMark shows a change in the core/mem/shader clocks based on the increased bandwidth of the OC'd PCIe bus.

What does it mean? That 3DMark06 isn't accurate, much like RivaTuner. NVIDIA won't comment because they can't. Nothing has changed on these cards other than the fact that no software out right now actually supports them. We don't even have decent drivers at this point, so all in all, no software is giving factual readings of these cards.

These are just my own opinions from my own research. Again, I am not saying that I am right. I am saying, however, that I find it a bit odd that only one review has been posted, whereas no other 9600 GT owner can produce similar results.

Chris

BTW:
If someone knows where 3DMark06 takes into account changes to the PCIe bus frequency and calculates these increased bandwidth changes there, instead of showing them as OC'd settings within the card, please let me know, because I have not found this anywhere in 3DMark06.
 

cbunting

New Member
Joined
Mar 1, 2008
Messages
26 (0.00/day)
Please Note:

I missed something from the main article. From all of my tests and research, I am doing all of this on a custom-built computer. In looking back at the original article, in which I too asked about the nForce 590 MB, it seems that the only people who may gain anything from OC'ing the PCIe bus are those with a 590 MB or possibly better.

The marchitecture is currently code-named Trinity and denotes a combination of Trinity-enabled motherboard (the 590s) and Trinity-enabled graphics cards. Once that combination is set in motion, the available bandwidth for the cards will go from 4GB/s to 5.2 GB/s, essentially – a legal overclock of the PCIe bus, going from x16 to "x20" or "x22", depending on the final stability tests the company conducted recently.

http://www.theinquirer.net/en/inquirer/news/2006/04/27/nvidia-set-to-overclock-the-pcie-bus

I do not have an nForce board; therefore, I also don't have LinkBoost technology when I OC my PCIe bus. So for those of us who don't have an nForce board, you do need to be careful OC'ing the PCIe bus, as you can burn up your video card or cause corrupted data on your HD, among other harmful things. On most MBs, overclocking the bus can cause instability, so unless you have an nForce board, just be careful.

Again, this was something I missed and didn't understand, because the article pointed out something with the 9600 card, specifically in the title. But it has everything to do with nForce boards.

Chris

BTW
The reason this article is confusing is because of the title!

NVIDIA's shady trick to boost the GeForce 9600GT

That is wrong! If anything, the article should have been called:

nVidia's advancement of Technology in the nForce Trinity-enabled motherboards

There isn't anything shady about NVIDIA creating video cards that support the latest advancements of their nForce boards. NVIDIA is the creator of both, so I don't see how anyone considers that shady.
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,343 (7.68/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
The 'shady' part is that they've not communicated this 'advancement' to reviewers. Reviewers take the card and evaluate it on par with other competing hardware. That's the gripe; the article's conclusion gives the reasoning for that.
 
Joined
Nov 21, 2004
Messages
64 (0.01/day)
System Name Dothan
Processor Intel Pentium M 770 @ 160*16
Motherboard MSI Speedster FA-4
Cooling Zalman CNPS9500 LED
Memory Adata DDR2 @ 240
Video Card(s) Sapphire HD3870
Storage Maxtor DiamondMax 10 300GB | Hitachi 160GB | Seagate 250GB
Display(s) Samsung 920T + LG Flatron 22"
Case TT Tsunami Xaser Black
Audio Device(s) Creative Audigy 2 ZS
Power Supply Hiper type-R 480W
Software ATITool & SysTool
...

3DMark06 reports the differences with the overclock settings as normal. However, if you set the card back to the factory clocks as listed above, the bandwidth also changes back to normal. So while the card is set back to the stock clocks, if I OC the PCIe bus, I am increasing the bandwidth not by OC'ing the card but by OC'ing the bus itself. But the problem is that 3DMark06 seems to think this increase in bandwidth is coming from an OC'd card. So the results/card readings appear as if the card is overclocked, and 3DMark shows a change in the core/mem/shader clocks based on the increased bandwidth of the OC'd PCIe bus.

...

First of all, you are here pointing out the main problem with this feature:
that the 9600GT gets a higher core speed simply by increasing the PCI-E bus. That gives a great boost in fill rates and such, as shown on the second page: http://www.techpowerup.com/reviews/NVIDIA/Shady_9600_GT/2.html

That gives motherboards with an automatic PCI-E overclock, or a higher PCI-E clock as standard, a performance boost of between 5 and 25%. But that doesn't apply to every motherboard out there, and therefore people buying the card don't know what performance to expect on their Intel, AMD or NVIDIA chipset.

And yes, the extra bandwidth comes from an overclocked card.

...

BTW
The reason this article is confusing is because of the title!

NVIDIA's shady trick to boost the GeForce 9600GT

That is wrong! If anything, the article should have been called.

nVidia's advancement of Technology in the nForce Trinity-enabled motherboards

...

There you are wrong again, as the same performance boost would apply to a manually overclocked Intel board as well. This article isn't about whether NVIDIA invented a good thing or not; it's about why they kept quiet about it, even when asked direct questions about it.
It's a shady way to increase performance in reviews, and it can mismatch the performance regular buyers would get from the card in their almost identical systems at home.
 
Joined
Dec 18, 2005
Messages
8,253 (1.23/day)
System Name money pit..
Processor Intel 9900K 4.8 at 1.152 core voltage minus 0.120 offset
Motherboard Asus rog Strix Z370-F Gaming
Cooling Dark Rock TF air cooler.. Stock vga air coolers with case side fans to help cooling..
Memory 32 gb corsair vengeance 3200
Video Card(s) Palit Gaming Pro OC 2080TI
Storage 150 nvme boot drive partition.. 1T Sandisk sata.. 1T Transend sata.. 1T 970 evo nvme m 2..
Display(s) 27" Asus PG279Q ROG Swift 165Hrz Nvidia G-Sync, IPS.. 2560x1440..
Case Gigabyte mid-tower.. cheap and nothing special..
Audio Device(s) onboard sounds with stereo amp..
Power Supply EVGA 850 watt..
Mouse Logitech G700s
Keyboard Logitech K270
Software Win 10 pro..
Benchmark Scores Firestike 29500.. timepsy 14000..
i think nvidia attaches great importance to early reviews.. if they have deliberately kept quiet about this "change", it's to make the odd review make the card seem better than it really is.. deliberate or incompetent.. neither can be considered a merit mark..

it's also quite clear that many contributors to this thread are posting nonsense because they haven't properly read the article or the thread..

trog
 

cbunting

New Member
Joined
Mar 1, 2008
Messages
26 (0.00/day)
Hello,

I've talked to a friend who stopped by. He said that the main reason those who overclock for high scores in 3DMark06 and so forth do overclock the PCIe bus is that it DOES raise the core clock on the cards.

What I find confusing about the article, as well as all of the replies to my own, is that there are tons of articles dating back to 2006 stating that the nForce boards have LinkBoost and/or allow changes to the PCIe frequency. So if these features have been available for over 2 years, and the G80+ chipsets have also supported this for the past year or more, why would it just now be noticed with a 9600 GT card, when there are reviews and tech sites saying it's been available for over a year, maybe two?

I don't know when NVIDIA first offered the nForce 590 board or the G80 chipset. But there seems to be enough info stating that this shady feature has been around for quite some time.

Other sites were first to mention it in 2006. But some people are just now finding out while testing all of these new cards since there seem to be so much hype around them.

Chris
 

cbunting

New Member
Joined
Mar 1, 2008
Messages
26 (0.00/day)
This will be my last post on this subject.

There are many reasons why I don't understand the whole concept behind the article and what exactly it has to do with the 9600 GT card. The article is 4 pages long and talks about 2-year-old features that are now defunct for the most part.

Here is why an article written in Feb 2008 doesn't make sense to me.

According to NVIDIA's website, LinkBoost has been removed from the 590 and 680i series chipsets. Only older boards with the 590 still support it.

That happened some time ago.

Were there cards other than the 9600 that supported LinkBoost and PCIe overclocking?

The GeForce 79xx series and above featured these additions.

So all in all, everything listed in the article has been around since 2006. But like I said in the reply above, I don't see how a 2008 review can call NVIDIA shady because of the 9600, when it's obvious that the article was written about features that have been around since at least 2006.

Chris
 