
3-way SLI Action with NVIDIA GeForce GTX 280 and 3DMark Vantage

yes, i would agree with you, were it another game. but one thing i've always noticed about crysis is that even at the 20 FPS average i play it at, it seems solid, which is weird as shit compared to CSS, where if i get 20 FPS it's really, really bad. i never noticed the lag in crysis until i got down to 15 or so. the lowest i ever hit was, i think, 12 FPS, so put that card in my rig and even at 30 FPS i doubt i'd ever see the 15 FPS where i notice lag. i don't know what crytek did, but maybe it's the coding that helps with the lag, because i've never played another game where i would consider 20 FPS stable or acceptable.

and it's not just me, check this site:
http://forums.pureoverclock.com/showthread.php?p=14934#post14934

It's because of many things, but the most important is that everything in Crysis has a separate thread in CPU and memory: renderer, physics, AI, controls. And the engine will try to balance them according to the situation. That means that, unlike every other game (that I can think of, at least), in Crysis "lag" in any of those systems doesn't affect the others. Usually graphics, physics or AI lag is translated into mouse or net lag (extremely true for CSS), but this doesn't happen in Crysis, or is mitigated a lot, which means almost the same in practice.

EDIT: That system is supposedly smart enough to change the physics and AI "resolution" or "framerate" on the fly if it sees too much workload on the CPU and thinks it's not going to be able to handle it. What I mean by framerate and resolution there is that, instead of calculating physics and AI for every frame, it will do it at a lower rate than the renderer (framerate). Or it can calculate fewer interactions per frame (resolution). Anything to find the balance it needs.
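The decoupling described in those two posts is, in essence, the classic fixed-timestep game loop: the renderer runs as fast as it can while physics/AI advance on their own clock, so a slow frame doesn't stall the simulation. A minimal sketch in Python (hypothetical names, not actual CryEngine code; the real scheduler is far more elaborate):

```python
PHYSICS_DT = 1.0 / 30.0  # fixed physics/AI timestep, lower than the render rate

def simulate(frame_times):
    """Count fixed physics/AI steps taken over a sequence of render frames.

    frame_times: how long each rendered frame took, in seconds. Elapsed time
    is banked in an accumulator; physics/AI step whenever a full PHYSICS_DT
    has accrued, independent of how fast or slow the renderer is running.
    """
    accumulator = 0.0
    steps = 0
    for frame_dt in frame_times:
        accumulator += frame_dt
        # catch up: take as many fixed steps as fit in the elapsed time
        while accumulator >= PHYSICS_DT:
            steps += 1            # step_physics_and_ai(PHYSICS_DT) would go here
            accumulator -= PHYSICS_DT
        # render_frame() would interpolate between the last two physics states
    return steps
```

Sixty frames rendered at 60 FPS yield 30 physics steps; one long 100 ms frame yields three catch-up steps in a row, but input sampling and rendering never wait on physics. The adaptive part the post describes would simply lower the physics rate, or do less work per step, when the accumulator keeps falling behind.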
 
it'd be nice to see the GT200b come out sooner rather than later for some much-improved clock speeds and yields. apparently 60 out of 100 GT200 chips fail on the production line. and let's face it, 602/1296 isn't that fast for the core.

i think what we need to see is a 55nm part with the full 256 (maybe more) shaders enabled, running at ~700/1750/2400

that alone should increase performance about 15% over GT200, not to mention the added OC headroom.
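As a back-of-the-envelope check on that ~15% figure (plain arithmetic on the numbers in this thread, assuming performance scales linearly with clocks, which is optimistic): the 15% roughly matches core-clock scaling alone, while the speculated shader count and shader clock together would imply a much larger jump in raw shader throughput.

```python
# Stock GT200 clocks from the thread: 602 MHz core, 1296 MHz shader, 240 SPs.
# Speculated 55nm part: 700 MHz core, 1750 MHz shader, 256 SPs.
core_gain = 700 / 602 - 1                      # core-clock scaling only
shader_gain = (256 * 1750) / (240 * 1296) - 1  # SP count x shader clock

print(f"core clock gain:        {core_gain:.1%}")    # ~16.3%
print(f"shader throughput gain: {shader_gain:.1%}")  # ~44.0%
```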
 
Look at my avatar.
And I would also like to see how much of a difference there would be if you used an AMD chip.
i did, so what? my question still remains valid. let me rephrase: at the current time, what good is a bench with an inferior chip for 3-way SLI, except to bottleneck it?

p.s. i have 2 amd chips and 2 intel chips, each from a different job. i get whatever fits my needs. fanboyism does bad things to my money. :toast:
 
oy, bottleneck this, bottleneck that. dude, trust me, bottlenecks are mythical.
 
I wonder how much that OC had to do with the final score, though

QX9650 OCed to 4GHz, DDR3 OCed at 1GHz . . .

CPU doesn't make a huge difference in Vantage. I think 400MHz on my quad netted me another 100pts, iirc.
 
CPU doesn't make a huge difference in Vantage. I think 400MHz on my quad netted me another 100pts, iirc.

this is true.
vantage score has a lot more to do with the GPU than the CPU.
 
it'd be nice to see the GT200b come out sooner rather than later for some much-improved clock speeds and yields. apparently 60 out of 100 GT200 chips fail on the production line. and let's face it, 602/1296 isn't that fast for the core.

i think what we need to see is a 55nm part with the full 256 (maybe more) shaders enabled, running at ~700/1750/2400

that alone should increase performance about 15% over GT200, not to mention the added OC headroom.

I think GT200 is 10 clusters of 24 processors, not 15 clusters of 16, and clusters of 16 would be what's needed for the card to have 256 processors. At least that's what it is according to leaked specs and a die shot floating around. Anyway, we don't really know which specs are true, and the possibility of 16-SP clusters is interesting: could the card be 16 clusters of 16 processors, with one cluster disabled for the sake of improving yields? I don't think so, because yields are supposedly too low for a method like this to be in use. What do you guys think about this?
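For the cluster math (simple arithmetic on the numbers quoted in the thread; the layout labels are my own): both rumored layouts land on the same 240 SPs, and the yield-salvage idea only works if the die physically carries a sixteenth cluster.

```python
# Rumored GT200 shader layouts from the thread; all hit the same 240 SPs.
layouts = {
    "10 clusters x 24 SPs (likely actual)": 10 * 24,
    "15 clusters x 16 SPs (quoted spec)": 15 * 16,
    "16 clusters x 16 SPs, 1 fused off (yield salvage)": 16 * 16 - 16,
}
assert all(total == 240 for total in layouts.values())

# Only a fully enabled 16 x 16 die would give the speculated 256 SPs:
assert 16 * 16 == 256
```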

Hell, I love speculation. :D
 
Very nice. That GPU score is roughly equal to the world record just set in Vantage by 2 GX2's. So they got a one-core advantage on very early drivers. Looks like it won't be until Nehalem that a proc can give it its full power, though, and the price for a system like that is just plain silly, so I really don't care what real game benchies are like. Who's gonna buy a system like that just to play Crysis? :confused: I'd like to see what Kingpin and Co. can do w/ this...
 
Very nice. That GPU score is roughly equal to the world record just set in Vantage by 2 GX2's. So they got a one-core advantage on very early drivers. Looks like it won't be until Nehalem that a proc can give it its full power, though, and the price for a system like that is just plain silly, so I really don't care what real game benchies are like. Who's gonna buy a system like that just to play Crysis? :confused: I'd like to see what Kingpin and Co. can do w/ this...

it's more than a one-core advantage: the GTX 280's are at stock while the 9800 GX2's are severely OCed. and since the shader clock is so low on the GTX 280, I imagine OCing it would give a bigger percentage gain than normal. not to mention tri-SLI drivers for the GTX 280 can't be optimized yet, which means after a few months the same config could score in the 25k+ range.
 
but like some people are saying, that IS overkill no matter what game you have. these days games are not designed for 3-way SLI... but in time they will be !!

and 3-way SLI just started some time ago, and we have to consider the drivers that come with it, which will allow it to reach its full potential !!

so if i was any one of you, no matter AMD OR INTEL, i would keep one graphics card for now, then later, once they release heaps of drivers, go time !!
 
so if the tests are really GPU bound, how does 3x 9800GTX OC do? (tri-SLI)

and i'm thinking most 9800GTX's will do 775-800MHz core, 1900-2100 shader and 2350-2500 memory... tri-SLI on that would be niiiiiice
 
so if the tests are really GPU bound, how does 2x 9800GTX OC do?

About 13K-15K in the graphics portion if OC'd very well.
 
but like some people are saying, that IS overkill no matter what game you have. these days games are not designed for 3-way SLI... but in time they will be !!

and 3-way SLI just started some time ago, and we have to consider the drivers that come with it, which will allow it to reach its full potential !!

so if i was any one of you, no matter AMD OR INTEL, i would keep one graphics card for now, then later, once they release heaps of drivers, go time !!

What's with you people and talking about overkill? Doesn't everyone remember when Doom 3 came out and stomped all the systems? And then after that Oblivion came out, followed by Crysis. There's really no such thing as overkill when it comes to computer parts, because the software will eventually catch up. Using a cannon to go hunting is overkill because the animals don't get any stronger. If you have the money, buying 3 high-end video cards is not overkill because the games will eventually bring the system to its knees anyway.
 