Thursday, July 10th 2008

R700 up to 80 % Faster than GeForce GTX 280

Pre-release performance evaluations suggest that the Radeon HD 4870 X2 2GB GDDR5 model will on average be 50% faster than the GeForce GTX 280, and in some tests up to 80% faster. A second model, the HD 4850 X2 (2GB GDDR3 memory, 2x RV770 Pro), will also convincingly outperform the GeForce GTX 280. The R700 series will be brought to market from late July through August. Source: Hardspell
Add your own comment

149 Comments on R700 up to 80 % Faster than GeForce GTX 280

#1
TheGuruStud
*waits for tweaked 280 with 800 core*

:cool:
Posted on Reply
#2
purecain
waits for tweaked bios for 4870 with 900core lol....
Posted on Reply
#3
TheGuruStud
by: purecain
waits for tweaked bios for 4870 with 900core lol....
THEN YOUR HOUSE CATCHES ON FIRE!!!! :D
Posted on Reply
#4
Live OR Die
sounds sweet, just hope it's not hyped up like the new nvidia cards were
Posted on Reply
#5
purecain
by: TheGuruStud
THEN YOUR HOUSE CATCHES ON FIRE!!!! :D
hardly, the core never goes above 58c even at 840mhz...

the fan issues have given this card bad press....:nutkick:
Posted on Reply
#6
imperialreign
by: swaaye
How about we compare R700 (2xRV770) to GTX 280 SLI? :) Price is a bit different though I guess lol.

BTW, some of you really need to understand GPU architecture a bit better. R700 is not more efficient than a design based on a single giant GPU would be. At least, not for pure 3D performance. The fastest GPU design, due to how parallel 3D rendering is, is always a single GPU. Dual GPUs waste RAM and have to deal with inefficiencies caused by trying to split the tasks and communicate via a pathetically slow bridge chip. There is extra hardware and hardware performing redundant tasks. And the drivers have to be specially set up for every game basically (this is conveniently ignored by most people for some reason.)

The problem is that manufacturing technology can not cope with mega huge GPUs. That is why GT200 can't clock as high as G92. The bigger the chip gets with more transistors, the hotter it is and the more complex it becomes to make it stable at higher clock speeds. If you look back at how early GPUs barely needed fans to today's ridiculous furnaces, you see that manufacturing is way behind what competition has pushed GPUs to become.

R700 and 9800GX2 are designed to overcome manufacturing inadequacies in the only way possible. They also conveniently allow an entire lineup to be based mainly on a single GPU design. It's just important to realize that this is not the optimum way to go for performance.

Also, realize that there is potential for a refreshed GT200 to be vastly faster. If they shrink it down and tweak it, and this allows it to clock up decently higher, a dual GT250 (or whatever) could be a lot faster than R700.
I would tend to agree, but we must also keep in mind that there are reports that ATI is binning the RV770 GPUs, holding onto the better cores for higher-end models . . . I think we can expect to see a second 4870x2 release with better components should nVidia come close to countering its performance.

Also, although it might be 2 GPUs on one PCB, each GPU has its own DRAM - AFAIK, neither GPU is sharing memory, which eliminates a lot of the inefficiencies of the 3870x2. Include the higher bandwidth of the GDDR5, and a lot of the microstutter associated with multi-GPU setups would be practically eliminated (that was a big issue with the 3870x2 versus crossfired 3870s - although the 70x2 had higher frame rates, its minimum frame rates were lower than two separate cards).


The big point of this whole debate at this point . . . the 4870x2 is a single-card, dual-slot solution . . . to best it with nVidia hardware ATM would require 2 cards, totaling 4 hardware slots you would have to sacrifice, and paying nearly twice as much out of pocket compared to the 4870x2's price . . . if we want to look at it like that, for the same price you could purchase 2 4870x2s, sacrifice the same 4 slots, and PWN THA LIVIN HELL out of any nVidia setup currently on the market. Sadly, due to nVidia's GPU design, they can't release anything similar to compete with it like the 9800GX2 just yet.


Just because it takes ATI 2 cores to best 1 nVidia core doesn't mean squat anymore - ATI brought the cake with the RV770, proving they've still got game. Hell, sometimes it takes more than one challenger to beat down another competitor . . . I don't recall anyone crying foul when the UK, USSR, and the USA teamed up to pwn some Nazis back in the day :p
Posted on Reply
#7
brian.ca
by: swaaye
How about we compare R700 (2xRV770) to GTX 280 SLI? :) Price is a bit different though I guess lol.

BTW, some of you really need to understand GPU architecture a bit better. R700 is not more efficient than a design based on a single giant GPU would be. At least, not for pure 3D performance. The fastest GPU design, due to how parallel 3D rendering is, is always a single GPU. Dual GPUs waste RAM and have to deal with inefficiencies caused by trying to split the tasks and communicate via a pathetically slow bridge chip. There is extra hardware and hardware performing redundant tasks. And the drivers have to be specially set up for every game basically (this is conveniently ignored by most people for some reason.)

The problem is that manufacturing technology can not cope with mega huge GPUs. That is why GT200 can't clock as high as G92. The bigger the chip gets with more transistors, the hotter it is and the more complex it becomes to make it stable at higher clock speeds. If you look back at how early GPUs barely needed fans to today's ridiculous furnaces, you see that manufacturing is way behind what competition has pushed GPUs to become.

Also, one thing I never get... between the g92 / r670 & now it took ATI & Nvidia about the same time to release their new chips... why are people always talking about a die shrink & revisions for the gt200 like ATI will be sitting around with their thumbs up their arses the whole time Nv works on that revision, with nothing to counter with or show when Nv pushes out their new offerings?

R700 and 9800GX2 are designed to overcome manufacturing inadequacies in the only way possible. They also conveniently allow an entire lineup to be based mainly on a single GPU design. It's just important to realize that this is not the optimum way to go for performance.

Also, realize that there is potential for a refreshed GT200 to be vastly faster. If they shrink it down and tweak it, and this allows it to clock up decently higher, a dual GT250 (or whatever) could be a lot faster than R700.
When you say "communicate via a pathetically slow bridge chip," are you talking about the PLX chip? It was my understanding that for the 4870x2 the PLX chip will not handle communication between the two GPUs - that's supposed to be done through the sideport or some other means, which should fix some of the issues people had with the 3870x2. There is still a PLX chip, but from what I read it is needed for another function (splitting the PCIe lanes in two to divide them between the two chips, and merging them again in the opposite direction).

About the other stuff, I really think most people don't care too much about how performance is achieved (i.e. its efficiency, how many chips to a card, etc.) so much as that it is achieved and how much it costs. As long as there are no substantial issues that result from the method, I'd have to agree that it shouldn't really matter. I haven't owned any X2 cards to experience it first hand, but that micro-stuttering issue I've heard about sounds like it would be such an issue. If that's fixed now without causing a new issue, then great. Otherwise you have to consider that you get what you pay for and need to decide what's important to you. Ex: Would a single 280 give you easier performance? Sure. Will it get you as much? Apparently not. If you're dead set on the extra performance, is the cost in inconvenience/issues worth the money you save vs. other solutions? Etc.

Also, one other thing I don't get... Nv's 8800GT and ATI's 3870 weren't too far apart in release times, and now with both companies releasing new cards the release dates are again close (if not closer). So why do people talk about the potential for a die shrink of the GT200 as if ATI doesn't have the capability to do the same, or anything else to counter with, over that time frame?
Posted on Reply
#8
yogurt_21
the 4870x2 should be a monster, but based on the 9800gx2 I see no reason why a gtx260x2 isn't possible. It's not like they have to worry about putting 2 cores on the same PCB - just use 2 PCBs like on the 9800gx2. So core size wouldn't matter, and heat, eh, that can be dealt with.

a gtx280x2 could happen heat- and power-wise (considering the power and heat of the gtx280 vs the 4870), but the cost of the card would be high, and since the extremist market is small, I don't see too many selling. A dual gtx260 could probably hit that $500 mark and be competitive. Then you just drop the price of the gtx280 to $400 and be set. Then ATI would have to release a 4850x2 to counter the gtx280's price point. It'd be a nice little competitive market.
Posted on Reply
#9
OzzmanFloyd120
by: Darkrealms
So then what would the result of a 280 x2 be?
$1500 USD... or one soul.
Posted on Reply
#10
imperialreign
by: OzzmanFloyd120
$1500 USD... or one soul.
exactly . . . and for about $1000 - you could have 4 RV770 GPUs sitting on 2 PCBs :toast:
Posted on Reply
#11
tkpenalty
A GTX280X2 is impossible. Why? HUUUUUUUUUGE heat output, and the fact that the BGA solder balls on the packages are fragile. The only reason the GTX280 has "low temps" is the cooler used on it; it's basically better than anything else out there in terms of cooling at the moment. Well... good luck fitting all that into TWO slots.

An HD4870X4 would require some smart engineering, but I'd think an HD4870X3 would be enough anyway. They would have to reposition the memory banks etc.... not very easy. But it's possible.
Posted on Reply
#12
mlupple
by: magibeg
I recommend just rooting for the people who have the best products for the cost :)
If everyone thought like you, we'd all have to root for Wal-Mart and Chinese-made products. Then they'd have an absolute monopoly since nobody would shop anywhere else, and then they'd fuck us. Supporting the underdog to keep competition alive is what makes this country roll!
Posted on Reply
#13
GPUCafe
GPU Cafe Representative
Came here since I got a linkback from here...

We actually posted this two weeks earlier.
  • Performance claims are 30% faster than the GTX280 overall (best-case scenario: 70% faster).
  • Target driver for reviews is Catalyst 8.7. Claimed Quad CrossFireX gains of up to 300% over a single 4870.
http://gpucafe.com/?p=12
Needless to say, our Chinese friends are faster. :P

And I've got some GT200b info that I've been sitting on for a couple of days. Not going live with it because it looks flaky. :shadedshu
Posted on Reply
#14
magibeg
by: mlupple
If everyone thought like you, we'd all have to root for Wal-Mart and Chinese-made products. Then they'd have an absolute monopoly since nobody would shop anywhere else, and then they'd fuck us. Supporting the underdog to keep competition alive is what makes this country roll!
I thought it was capitalism that makes the US roll?
Posted on Reply
#15
imperialreign
by: magibeg
I thought it was capitalism that makes the US roll?
no . . .

a roll is a roll

and a toll is a toll

and if the US don't get no tolls

then we don't get no rolls
Posted on Reply
#18
SK-1
by: imperialreign
no . . .

a roll is a roll

and a toll is a toll

and if the US don't get no tolls

then we don't get no rolls
:roll: Indica or Sativa, imperialreign? :roll:
Posted on Reply
#21
indybird
If nvidia takes the GX2 route again with the GTX 280, then (assuming the GX2 performs equal to two GTX 280s in SLI) it will be approximately equal to the HD4870X2.

Here's my reasoning:
-In SLI the second GTX 280 adds anywhere from 50% to 80% of the performance of a single card (according to most reviews).
-The ATI HD4870X2 is claimed to be 50% to 80% faster than the GTX 280
-Therefore, unless nvidia improves their scaling or ATI's isn't as good as they are currently claiming, these two cards will be equal in performance.
-Based on the cost of a 65nm GTX 280 core, the GX2 would be very expensive: ~$700.

However, if nvidia chooses to compete with the HD4870X2 using the 55nm (probably overclocked) GTX 280, then I believe the HD4870X2 will be more powerful.

More reasoning:
-When nvidia shrank the 9800GTX to 55nm, overclocked it and called it the 9800GTX+, the average performance gain was 10%, 20% at absolute best (also according to reviews).
-If the HD4870X2 is 50% to 80% faster than the GTX 280, then the HD4870X2 will be approximately 35% to 60% faster than the 55nm/overclocked GTX 280.
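The percentage math in the reasoning above can be sanity-checked with a quick script (a rough sketch: the 50-80% lead and the ~10% refresh gain are the rumored/review figures quoted in this post, and simple ratio scaling between the two cards is an assumption):

```python
# Back-of-the-envelope check of the post's scaling claims.
# All input percentages come from the rumors/reviews cited above.

def relative_speed(x2_gain, refresh_gain):
    """Estimated lead of the HD4870X2 over a refreshed GTX 280.

    x2_gain      -- X2 lead over a stock GTX 280 (0.5 = 50% faster)
    refresh_gain -- 55nm/overclock gain over a stock GTX 280 (0.1 = 10%)
    """
    return (1 + x2_gain) / (1 + refresh_gain) - 1

# Claimed 50-80% lead, against a ~10% faster 55nm refresh:
low = relative_speed(0.50, 0.10)
high = relative_speed(0.80, 0.10)
print(f"{low:.0%} to {high:.0%}")  # prints "36% to 64%"
```

With a 10% refresh gain the simple ratio gives roughly 36-64%, close to the post's 35-60% estimate (the low end of that range matches a refresh gain nearer 20%).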

Sounds to me like the best option for nvidia would be a GTX260GX2 with the cores overclocked to GTX280 speeds and to sell it for about $550-$600.

-Indybird
Posted on Reply
#22
KainXS
I think Nvidia might skip a GX2 this time around. Compared to the scores the R700 is pulling and what SLI GTX280s do - from what I've seen they scale about 10-45% on average in this review - releasing a GX2 would be murder. It costs a lot to make those G200 cores, and even when the die shrinks they're still going to cost a lot to make; they're going to have to speed up development of the G300s.

And why would any of you want a GTX280-GX2? That card would cost at least 900 dollars when the 280s cost about 500-550 dollars now; unless you have money to burn and are a major Nvidia fan, that's a waste of money.

Nvidia totally :nutkick: this time

I want to wait for the R800s before I buy another card, but if the R700 can pull off 80% then I will definitely buy that.

http://www.tbreak.com/reviews/article.php?cat=grfx&id=618&pagenumber=10
Posted on Reply
#23
steelkane
I remember 3DFX making similar claims. As far as amd/ati selling the better-performing card for a cheaper price, they're only doing that because they have been behind for so long; let's just see whether they raise prices once they pull ahead. As far as who's better, amd/ati or nvidia, I don't care about that - I've had both & just want good performance.
Posted on Reply
#24
Ketxxx
Heedless Psychic
Calm down people, it's all hearsay with no solid numbers to back it up. Remember, right now there are the likes of the HD4850 that sell stupidly cheap, and stupidly cheap HD3870 GDDR3 models that sell for even less but give stellar performance. I can't speak for a HD4850/70 yet, but I can say this HD3870 GDDR3 I have does very well. Scores 11k off the bat, completely stock, in 3DM06. My point is, don't be in awe of something you have absolutely no proof of. Instead be in awe of what you do have proof of ;)
Posted on Reply
#25
Megasty
by: KainXS
I think Nvidia might skip a GX2 this time around. Compared to the scores the R700 is pulling and what SLI GTX280s do - from what I've seen they scale about 10-45% on average in this review - releasing a GX2 would be murder. It costs a lot to make those G200 cores, and even when the die shrinks they're still going to cost a lot to make; they're going to have to speed up development of the G300s.

And why would any of you want a GTX280-GX2? That card would cost at least 900 dollars when the 280s cost about 500-550 dollars now; unless you have money to burn and are a major Nvidia fan, that's a waste of money.

Nvidia totally :nutkick: this time

I want to wait for the R800s before I buy another card, but if the R700 can pull off 80% then I will definitely buy that.

http://www.tbreak.com/reviews/article.php?cat=grfx&id=618&pagenumber=10
That review is painful to look at when it comes to GTX280 SLI. This is the main reason they should back off any idea of a GX2 & just write it off. The reason the 9800GX2 worked so well is that the G92 scaled very well; this thing scales like garbage. Drivers might help it eventually, but who is going to sink $1000+ into either option to find out? The 4870x2 is going to be the cornerstone for dual GPUs. If NV wants to kill themselves with a $1000+ POS that scales as horribly as 2 cards do now, then I see no reason to even consider it.
Posted on Reply
Add your own comment