
Preliminary Tests on GeForce GTX 295 Run, Leads Radeon HD 4870 X2

Theoretically you are right... in practice, the 9800GX2 ran cooler than the 3870X2, I believe :)

Yeah, the 3870X2 and 4870X2 do run very hot. Even this 3850 I'm running now runs damn hot, at 80°C under load :shadedshu. But at least with all these cards you can put a better cooler on them to bring their temps down, and therefore overclock them higher for better performance without the risk of overheating.

When I buy a GPU I always look at the cooling on the card, as I like a nice cool-running system, so the heat doesn't also build up inside the case and make other components run hotter as well.
 
Wow, that's all the GTX 285 is? Well, at least they did a decent job sticking to their naming scheme, so basically an OC'd GTX 280 will perform the same as a GTX 285. I wonder when they're going to come out with the GTX 265.

yeah but look at the power consumption and die size.

This card should be easier to cool, with more OC headroom. Yes, it's not much better than the original 280, but just think of it as a newer core stepping.
 
My 8600 GTS gets hot as hell, sadly.:shadedshu
 
Well, at GTX 285 speeds a GTX 280 would be consuming considerably more than 236W. I think it's a testament to the 55nm shrink; they've done well. So far :p

The power difference is 22.5% with both cards at stock; at the same speeds, I bet it's over 25%.

Not bad for a ~15% die shrink, right? They must have done some core steppings/revisions along the way to squeeze that sort of power savings out of it.

And +1, this card should overclock very nicely. Anyone here want GTX 280 specs running at 800 core? :P I know I do.
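For what it's worth, the shrink and power percentages above roughly check out. Quick sketch below; the 236W GTX 280 TDP is from this thread, but the 183W GTX 285 TDP is my own assumption, so take it as a ballpark:

```python
# Rough arithmetic behind the shrink/power numbers in the posts above.
# Assumed TDPs: GTX 280 = 236 W (mentioned in thread), GTX 285 = 183 W (assumed).
old_node, new_node = 65.0, 55.0
linear_shrink = 1 - new_node / old_node        # ~0.154 -> the "~15%" figure
area_shrink = 1 - (new_node / old_node) ** 2   # ~0.284 shrink in die area

tdp_280, tdp_285 = 236.0, 183.0
power_saving = (tdp_280 - tdp_285) / tdp_280   # ~0.225 -> the "22.5%" figure

print(f"linear shrink: {linear_shrink:.1%}")
print(f"area shrink:   {area_shrink:.1%}")
print(f"power saving:  {power_saving:.1%}")
```

Note the area shrink is nearly double the linear one, which makes the ~22.5% power drop look less magical and more like the process shrink plus a stepping or two.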
 
Dual-GPU setups may be a thing of the past soon; there is speculation that the dual RV8xx series card is based on this. I like the NVIDIA design for their dual cards, though it's obviously not as cost-effective as ATI's design. Plus, if NVIDIA were to try putting two massive GTX chips on one PCB, the card would have to be huge.

Dual-core and Quad-core CPUs could never put multi-socket to rest, although that's what was talked about when Pentium D was launched :)

Today we have multi-socket setups comprising those dual/quad-core chips. This is what could happen. The GTX 280 wasn't quite a cost-effective design either (although credit goes to the great way in which NVIDIA dealt with its partners to bring the price down).
 
I like what I'm seeing. Let's hope Wiz gets a review sample as soon as they are available!
 
Damn dual cards. *growl* Sure, they put up ridiculous numbers, but you pay out the ass for them, and in some games they perform worse than a single card. Power, sure, but where's the refinement and efficiency? Anybody can slap a couple of cards together and charge double the price (more, actually, sometimes). I wanna see some innovative engineering; this is becoming more mundane than tick-tock...
 
+1, I agree, and that's why I want to see what ATI is doing with RV870, or their gen after that. There were rumors that the RV870 would actually be dual-chip like a Pentium D, but I think that was shot down. Hopefully next gen it will happen; I'd like to see the performance of two GPUs with a ring bus.
 
Ring Bus was the Flaw of the R670.
 
Flaw, yes, but flaws can be fixed, as can all problems one way or another. I don't know much about this ring bus; I just know (or think) that they'd have to use one for two GPUs to share the same memory pool or whatever. There was also the idea that the 512-bit memory bus was a fail, but NVIDIA is doing just fine with one. Though there is that non-active Sideport on the HD 4870X2 that would allow direct communication between the two chips, which supposedly would negate micro-stuttering, or at least minimize it greatly. I wonder if they ever got it working, and if it'll make a difference, and whether they've been waiting to activate it in case NVIDIA did do a GX2 card.
 
AMD has Architectural Charts of the R670 vs the R770.
 
Looks good, but we can't depend on these first tests; early tests are often unclear and inaccurate. But if this card beats 2x GTX 260, that means it will surely beat the 4870X2; if not, this result will be forgotten. And I'm sure ATI will respond somehow. I think they will release a driver that activates the bridge between the two GPUs in the 4870X2; surely that driver would only support the 4870X2.
 
AGREED!!!!!

Nobody is talking about the HD 4870X2's secret "Sideport" weapon, which can only be enabled through a driver update. Right now the HD 4870X2 is king, and if this new NVIDIA card is a little faster, it will only drive the HD 4870X2 down in price, which is a good thing.

Read the review on this site about the added "Sideport", an extra 5 GB/s both ways.
http://www.techpowerup.com/reviews/Sapphire/HD_4870_X2/

Can this be a secret weapon to increase speed in games at high res?
 
Can this be a secret weapon to increase speed in games at high res?

Short answer: NO.
 
I see only pure advertising here.

NVIDIA PhysX support -> 2nd benchmark (which isn't needed, and doesn't really show any true performance advantage over ATI cards).

But I'm gonna buy one if it's under ~$350, lols. Maybe 4-5 months after it launches. :D

Ouch! Yeah, this thing will be EOL after 3 months. What a waste of resources that NVIDIA could be using toward their next architecture. Also, if this thing isn't engineered perfectly, it could turn out to be a HUGE disaster that not even NVIDIA's PR dept will be able to spin. Let's hope it doesn't idle at 80°C.
 
Nobody is talking about the HD 4870x2's secret "Sideport" weapon which can only be enabled through a driver update. Can this be a secret weapon to increase speed in games at high res?

I had already mentioned this on an earlier page :p, and I hope it improves performance. I know if it worked as it should, it would improve performance, but I don't think they got it working. Probably will on the HD 5870X2.

eidairaman1 said:
AMD has Architectural Charts of the R670 vs the R770.

This means very little; it just means they changed the architecture of the GPU because they couldn't get it working. It doesn't mean it won't work. Did you know the Radeon X1000 series used a ring bus as well? For some reason the ring bus didn't work out with the 2900XT. Did you know Intel plans to use a ring bus on Larrabee? They're not a stupid company, so it must still have its uses.
 
I call BS on those benches. There's no chance that the GTX295 is pulling 150FPS+ at 2560X1600 with 4AA considering current benchmarks show GTX280 SLI struggling to get 120FPS at 1680x1050 with 4AA.

Those benchmarks are totally misleading, especially the PhysX one. Why would you compare a game run on two different computers, where one is doing physics on the CPU while the other is on the GPU, when everyone knows the GPU is faster?

That, and given that ATI cards don't officially do PhysX on the GPU, there's no point in benching it and using it as a comparison.

Actually a single GTX 260+ gets 132FPS at 2560x1600 4xAA 16AF.

http://www.pcper.com/article.php?aid=645&type=expert&pid=3
 
Yeah, but that's in DX9; big difference.
 
Preliminary tests are just that: preliminary; they don't represent the truth. They don't have any driver information and run only NVIDIA-favorable games, Fallout being the exception, which shows hardly any real gain. But then again it should be better, because this is effectively two GTX 280s, or it would be a major fail.

And it's great to see people are not taking this BS.


What do you mean by "this is 2 GTX 280's"? Are you saying that it has the performance of two 280s, or are you saying that they used two 280s for this benchmark?
 
Technically the GTX 260 and GTX 280 are the same chip, remember. So these two chips on the GTX 295 are their own variant: GTX 280 shader specs (240 SPs), but the GTX 260 memory design (448-bit bus, 896MB of memory per chip). So it should perform below GTX 280 SLI but above GTX 260 SLI.
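A rough sketch of what that 448-bit bus means for per-GPU bandwidth. The data rates below are assumptions picked to match GTX 260/280-class GDDR3, not confirmed GTX 295 clocks:

```python
# Back-of-envelope memory bandwidth: bytes per transfer x transfers per second.
# Data rates (MT/s) are assumed GTX 260/280-class figures, for illustration only.
def bandwidth_gbps(bus_bits: int, data_rate_mtps: float) -> float:
    """Bus width in bits -> bytes per transfer, times effective data rate, in GB/s."""
    return (bus_bits / 8) * data_rate_mtps / 1000

per_gpu = bandwidth_gbps(448, 1998)  # GTX 260-style bus on each GTX 295 GPU
gtx280 = bandwidth_gbps(512, 2214)   # full GTX 280 bus, for comparison
print(f"{per_gpu:.1f} GB/s per GPU vs {gtx280:.1f} GB/s on a GTX 280")
```

So each GPU would give up roughly 20% of a GTX 280's bandwidth, which fits the "between GTX 260 SLI and GTX 280 SLI" expectation.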
 
I believe ATI was going to go with the Ring Bus memory once again with the HD 4870s. Thanks to AMD and the good old "crossbar switch", they've taken the performance crown away from NVIDIA.

Thanks to Microsoft heavily investing in ATI's R&D for the Xbox 360, ATI had the ability to design the Ring Bus memory controller for the Xbox 360. With little extra effort, they applied the same design to their graphics cards.

In theory the Ring Bus should have performed much more efficiently than it currently does, but IMO they need more time to perfect the technology.

This is why AMD went with the crossbar switch, which is quite similar to what is found in the Athlon 64, I believe.
 
The problem with a ring bus is that you need at least two similarly fast, bandwidth-hungry units attached to it to make sense, or failing that, more than one fast memory pool. In GPUs that's not the case: you have one GPU and one memory pool, so building a costly pathway between them doesn't make sense perf/price- or perf/watt-wise. The ring bus is similar to the PCI bus in that the different units attached to it have to be arbitrated and given some "time". So the ring bus spent a lot of its time feeding non-performance-related units that didn't need all the bus power, cannibalizing bandwidth that could have been used by the performance parts. It's like building a 12-lane highway just because sometimes 12 trucks are going to go through it, and then allocating the use of that highway to a lot of lonely cars while the trucks have to wait.

I might not be correct on this one, but in the Xbox 360 the ring bus connected the GPU, the CPU, the main memory and the embedded memory, and of course many other "minor" things with DMA access. That's four performance units instead of two, plus many more units that required access to memory. So there it does make more sense: it makes sense to build one shared big highway instead of many dedicated small pathways.
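The highway analogy can be sketched as a toy round-robin arbiter. The unit names and workloads here are completely made up for illustration; real bus arbitration is far more sophisticated:

```python
from collections import deque

def round_robin(queues: dict[str, deque], cycles: int) -> dict[str, int]:
    """Grant the bus to each unit in turn, one slot per cycle; a slot given
    to an idle unit is wasted. Returns serviced-request counts per unit."""
    served = {name: 0 for name in queues}
    names = list(queues)
    for t in range(cycles):
        owner = names[t % len(names)]
        if queues[owner]:            # slot is useful only if that unit has work
            queues[owner].popleft()
            served[owner] += 1
    return served

# One bandwidth-hungry GPU sharing the ring with mostly-idle clients
# (hypothetical units, invented request counts):
queues = {
    "gpu": deque(range(100)),   # always has a request pending
    "display": deque(range(3)),
    "dma": deque(range(2)),
}
served = round_robin(queues, 30)
print(served)  # the GPU gets only 10 of the 30 slots despite constant demand
```

With naive equal slots, the "truck" gets a third of the highway while most of the "car" slots go empty, which is the cannibalization the post describes.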
 
The GTX 295 will contain two 55nm 240SP GPUs, so the same shader count as two current GTX 280s, but with the memory and bus of the 260... although because of the 55nm process they should be clocked much higher... I think that's the plan.
 
I call BS on those benches. There's no chance that the GTX295 is pulling 150FPS+ at 2560X1600 with 4AA.

It might be possible to achieve those frame rates if they are using the Lucid Hydra, which has been rumored to achieve 100% scaling with multiple GPUs.
Considering Lucid has kept its lips sealed about who it's working with, it's quite possible.
Or perhaps those smarties at NVIDIA have come up with their own version...
 
Interesting, I hadn't heard of this till you linked it. And no, I don't think NVIDIA got or made a chip like that; it's just regular SLI working on the GTX 295. I see the performance, but now I want to know the price.
 