so has anyone had the cojones to play around with the memory timings on their card?
maybe it's like OCing a CPU: loosen the timings a touch and you can clock the mem higher
curious to see if anyone is trying it
What counts in the end is not the clockspeed the memory is running at and getting said figure as high as possible, but the actual PERFORMANCE of the video card.
If you "loosen the timings", as you call it, the memory's access time will go up even if you clock it faster and thereby increase the available bandwidth. Whether that results in more performance is a complicated issue, especially with graphics cards, where the connection between memory latency and performance is a huge blank to anyone outside companies like Nvidia and ATi.
When it comes to CPUs, it's easy to determine what tweaking the memory will do for a particular application. If your program does a lot of random accesses, tighter timings at the cost of some bandwidth are preferable, while a program that reads lots of linear data and does computations on it prefers raw bandwidth instead. There is no middle ground where some magic settings make both kinds of programs perform optimally; users have to pick the tradeoff themselves.
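To make the tradeoff concrete, here's a toy back-of-the-envelope model (all numbers invented for illustration, not real hardware figures): a random access pays the full latency on every burst, while a linear stream is mostly bandwidth-bound.

```python
def time_per_access(latency_ns, bandwidth_gbs, burst_bytes, random):
    """Crude model: a random access pays full latency per burst;
    a linear stream amortizes latency and is bandwidth-bound.
    bytes / (GB/s) conveniently comes out in nanoseconds."""
    transfer_ns = burst_bytes / bandwidth_gbs
    if random:
        return latency_ns + transfer_ns   # latency dominates
    return transfer_ns                    # bandwidth dominates

# "Tight timings" config: lower latency, lower clock/bandwidth.
tight = dict(latency_ns=10.0, bandwidth_gbs=8.0)
# "Loose timings" config: higher latency, higher clock/bandwidth.
loose = dict(latency_ns=14.0, bandwidth_gbs=10.0)

burst = 64  # bytes per burst, roughly a cache-line-sized transfer

# Random-access workload prefers the tight timings...
print(time_per_access(**tight, burst_bytes=burst, random=True))   # 18.0 ns
print(time_per_access(**loose, burst_bytes=burst, random=True))   # 20.4 ns
# ...while a streaming workload prefers the extra bandwidth.
print(time_per_access(**tight, burst_bytes=burst, random=False))  # 8.0 ns
print(time_per_access(**loose, burst_bytes=burst, random=False))  # 6.4 ns
```

Same two memory configs, opposite winners depending on the access pattern; that's the "no magic middle ground".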
Video cards, on the other hand, are a little different. They have traditionally been of the "linear, bandwidth-eating" type, pipelined so deeply that literally hundreds of clock cycles may pass between the moment a command to draw a triangle is read and the first pixel being written. This, along with multiple caches in strategic places, gives the GPU time to hide memory access latency. Fairly recently, however, pixel shaders entered the scene, and one of the additional features they brought is dependent texture reads. Basically, you read a texel from one texture, then take that value and add/subtract/whatever it into the lookup from ANOTHER texture, the real one, to determine the color of the final pixel.
Things that can be achieved this way include water/glass refractions, heat haze, etc. It also totally screws with texture caches: where a GPU is normally designed to read texel values that are very close to each other and easily cacheable, dependent texture reads can jump basically anywhere in a texture map from one pixel to the next. This means memory accesses can't be hidden nearly as easily, and a game might well stumble into territory where the GPU's memory configuration is ideal for neither bandwidth nor access speed.
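A toy sketch of why dependent reads defeat the cache (textures and names are made up for illustration): the first fetch walks texels in order and is cache-friendly, but its result becomes the coordinates of the second fetch, which can land anywhere.

```python
W = 4  # tiny 4x4 "textures" just to show the access pattern

# Indirection map, e.g. a distortion/offset texture: each entry
# holds (x, y) coordinates into the real texture.
offset_tex = [[(3, 0), (0, 3), (2, 2), (1, 1)],
              [(0, 0), (3, 3), (1, 0), (2, 1)],
              [(2, 3), (1, 2), (0, 1), (3, 2)],
              [(1, 3), (2, 0), (3, 1), (0, 2)]]

# The "real" color texture, laid out linearly in memory.
color_tex = [[y * W + x for x in range(W)] for y in range(W)]

def sample_dependent(x, y):
    """Dependent texture read: the first fetch decides where the
    second fetch goes."""
    ox, oy = offset_tex[y][x]   # fetch 1: neighbors, cache-friendly
    return color_tex[oy][ox]    # fetch 2: can jump anywhere

# Two ADJACENT screen pixels, yet their second fetches hit opposite
# corners of the color texture -- useless for a locality-based cache.
print(sample_dependent(0, 0))  # reads color_tex[0][3]
print(sample_dependent(1, 0))  # reads color_tex[3][0]
```

On real hardware each of those scattered second fetches risks a cache miss and a full trip to video memory, which is exactly the access pattern the deep pipeline was never built to hide.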
Start messing with memory timings to tweak the clock speed up another 10, 20, 30 MHz, and access time will go up while game performance might actually decrease. Ironic, huh?
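The arithmetic behind that irony, as a rough sketch (clock and timing numbers invented): timings are specified in clock CYCLES, so loosening them to reach a higher clock can still make each access take longer in wall-clock time.

```python
def access_time_ns(clock_mhz, latency_cycles):
    """Convert a cycle-based timing into nanoseconds at a given clock.
    cycles / MHz * 1000 = ns."""
    return latency_cycles / clock_mhz * 1000.0

base  = access_time_ns(500, 5)   # stock clock, tight timings
tweak = access_time_ns(530, 7)   # +30 MHz, but loosened timings

print(round(base, 2))   # 10.0 ns per access
print(round(tweak, 2))  # 13.21 ns per access -- slower, despite the OC
```

More bandwidth on paper, yet every individual access got over 3 ns slower; a latency-sensitive workload (like the dependent-read case above) can easily lose more than the extra clock gains.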