
AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards

Hmm, maybe you need to check the Assassin's Creed reviews. It seems shader-based AA isn't a bad idea if it's done natively by the game: the 9800 GTX and 3870 X2 were toe to toe, with less than 1 fps difference between them. Of course, you're a fanboy, so I wouldn't expect you to know that.

As for MS doing what another company tells it: wrong. MS could block OpenGL support if they wanted to, and nobody could stop them. Everybody has to do what MS says, because the only other choice is to fall back into a niche market, as Matrox has done.

As for your 5700 example, that doesn't prove anything; the 5700 was a piece of crap. It was the best of the FX line, but that's not saying much, especially when a 9550 SE can outperform it.
This is DX10.1: 3870 X2 vs. 9800 GTX under SP1 (DX10.1 is enabled with SP1).




Funny: shader-based AA vs. dedicated AA hardware, and the performance difference is around 1 fps.

So your "shader-based AA is a stupid idea" line is a load of fanboy nonsense (as expected from you).

The idea is fine if you're talking about native DX10/10.1 games, but today's games are mostly DX9 games with some DX10 shaders added (Crysis, for example).

As this shows, there is ZERO reason shader-based AA needs to be any slower in native code; it's only slower on older games. Hence, as I said, they should have had a hardware AA unit for DX9 and older games and used shader-based AA for DX10.x games; problem solved.

Don't respond to trolls, especially after a moderator has already attempted to end the situation. Such behavior only worsens the situation, and can get you in trouble.

(DO NOT RESPOND)
 

Rebo&Zooty

New Member
Oh yes there is; plus, y'all are like family and this is an intervention, I have to save y'all from yourselves. IF you buy AMD products you will hate yourself for doing so; historically Nvidia has always been faster at the same price point.

I own AMD, and I'm using an X1900 till my 8800 GT is back from the shop (the stock cooler gave out), and you don't see me crying or upset about being an AMD user. I have set up Core 2 systems for people; they are nice, but price for price I still prefer to get as much out of an AMD rig as I can. My new/current board has a few years left before I need to replace it, with plenty of CPUs to come in that time; I'd guess 3-4 will pass through the board before I upgrade it, unless I get a really good deal on a DFI 790FX board (the high-end one, not the lower one).
 

HTC

The idea is fine if you're talking about native DX10/10.1 games, but today's games are mostly DX9 games with some DX10 shaders added (Crysis, for example).

As this shows, there is ZERO reason shader-based AA needs to be any slower in native code; it's only slower on older games. Hence, as I said, they should have had a hardware AA unit for DX9 and older games and used shader-based AA for DX10.x games; problem solved.

This should be easy enough to prove / disprove when more DX10.x games are released: it might take a while for that to happen, though :(

EDIT

Apologies, moderator: it was at post #99 when I started writing this reply!
 

jbunch07

New Member
Let's get this thread back on track!

No more arguing about ATI vs. Nvidia!
At least not here.
 

Rebo&Zooty

New Member
Don't respond to trolls, especially after a moderator has already attempted to end the situation. Such behavior only worsens the situation, and can get you in trouble.

Sorry, that was a cross post.
I started when he posted that originally; I spent a lot of time on my slow connection (damn Comcast is acting up again!) finding those links/images.

Sorry for the cross posts. I would delete it, but all that effort would go to waste :(
 

btarunr

Editor & Senior Moderator
Staff member
Imagine 512 connections/wires coming from the bus to everywhere they need to go for the output. That's a lot of wires, and a lot of voltage control. With GDDR5 you have the ability to push the same amount of data, or a little more, faster than a 512-bit bus without all those wires; in this case, just 256. Also, GDDR5 trains against the length of each connection, allowing the correct voltage/timing through each wire/line, which is important: it's more stable, keeps frequencies within proper thresholds, and eliminates the cost of the more expensive way of doing it. Hope that helps.

Well said. We must stop laying emphasis on bus width as long as faster memory makes up for it. Let's drop the "my 512-bit pwns your 256-bit" and instead look up the charts and the final bandwidth of the memory bus.

People even invent their own terminology: "256-bit GPU", "mine's a 512-bit GPU". Come on, xxx-bit is just the width of the memory bus.
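To put rough numbers on "final bandwidth", here is a minimal sketch in Python; the clock figures are illustrative placeholders, not any specific card's specs. Peak bandwidth is just bus width times effective per-pin data rate, so a 256-bit bus with fast GDDR5 can land in the same place as a 512-bit bus with slower GDDR3:

```python
def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s from bus width and effective per-pin data rate."""
    return bus_width_bits * gbps_per_pin / 8  # bits per second -> bytes per second

# Illustrative configurations (hypothetical clocks, not real datasheet values):
wide_gddr3 = peak_bandwidth_gbs(512, 2.0)    # 512-bit bus, 2.0 Gbps per pin -> 128 GB/s
narrow_gddr5 = peak_bandwidth_gbs(256, 4.0)  # 256-bit bus, 4.0 Gbps per pin -> 128 GB/s

print(f"512-bit GDDR3: {wide_gddr3:.0f} GB/s")
print(f"256-bit GDDR5: {narrow_gddr5:.0f} GB/s")
```

Same final bandwidth, half the traces to route and keep in sync, which is exactly the trade-off being argued for here.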
 

jbunch07

New Member
Well said. We must stop laying emphasis on bus width as long as faster memory makes up for it. Let's drop the "my 512-bit pwns your 256-bit" and instead look up the charts and the final bandwidth of the memory bus.

People even invent their own terminology: "256-bit GPU", "mine's a 512-bit GPU". Come on, xxx-bit is just the width of the memory bus.

Thank you!

It's about time someone finally said it!

Comparing memory bus width alone always made me laugh.

256-bit GDDR5 should do very nicely!
 

jaydeejohn

New Member
Actually, having throughput is only good if it delivers. It's like putting 1 GB of memory on an X1600: sure, it's there, but can the card really use it?
 

Rebo&Zooty

New Member
This should be easy enough to prove / disprove when more DX10.x games are released: it might take a while for that to happen, though :(

Yeah, see, from what I've been told by a couple of people I know who work for AMD/ATI and Intel, ATI honestly expected Vista to take off and replace XP overnight. If that had happened, DX10 would have become the norm and the R600/670 design would have been GREAT; it would have looked far better than it does. BUT because Vista fell on its face (d'oh!), ATI's design was... well, less than optimal.

I have sent ATI enough complaint emails in the past about bugs to know how their support is: if you report an issue directly, they tend to try to fix it.

With Nvidia support you get a form letter at best unless you know somebody on the inside, and then they get the runaround and you get the runaround from them because, honestly, they can't get any clear answers on a lot of long-standing bugs.

A few examples:

Windows Server 2003 and XP x64 (same OS core) have a lovely bug with Nvidia drivers: if you have ever installed another company's video drivers, you have a 99% chance that once you install the Nvidia drivers the system will BSOD every time you try to use the card above the 2D desktop level. It's been a KNOWN issue since x64 came out (and from some reports it also affects 32-bit Server 2003 as well). Nvidia has had YEARS to fix this and hasn't bothered; their fix is noted as "reinstall Windows". If I had reported that bug to ATI, I would have had a beta fix in a couple of days (I know because I reported a bug with some third-party apps that caused them to lock up and got fast action).

Nvidia for over a year had a bug in their YV12 video rendering; the ffdshow wiki explains it and documents how long it's been a problem. They fixed it in some beta drivers, but then broke it again in the full releases.

ATI widescreen scaling: in some older games the image is stretched because the game doesn't support widescreen resolutions. There's a fix for this if you know where to look in the drivers, but it's not automatic, so it causes a lot of people trouble.


I have a long list of complaints about both companies.

ATI: AGP card support has been spotty with the X1K and newer cards. No excuse here, other than that they just need more people to email them and complain about it (the squeaky wheel gets the oil, as granny used to say).
 

btarunr

Editor & Senior Moderator
Staff member
Actually, having throughput is only good if it delivers. It's like putting 1 GB of memory on an X1600: sure, it's there, but can the card really use it?

This is sort of an arms race between the USA and the USSR. Even if a GPU doesn't need all the bandwidth, it's in place; an HD 3650 will never need PCI-E 2.0 x16 bandwidth. But when it comes to the RV770 and its memory subsystem, the difference comes to the surface when the RV770 Pro is compared to its own GDDR3 variant. The fact that there is a difference shows the RV770 is able to make use of all that bandwidth and is efficient with it.
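For what it's worth, here is a minimal sketch of that same-bus comparison; the per-pin data rates below are hypothetical placeholders, not the RV770's actual memory specs. The point is only how much headroom GDDR5 adds without touching the 256-bit bus:

```python
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s from bus width and effective per-pin data rate."""
    return bus_bits * gbps_per_pin / 8

# Hypothetical clocks for two variants of the same 256-bit design:
gddr3_variant = bandwidth_gbs(256, 2.0)  # ~64 GB/s
gddr5_variant = bandwidth_gbs(256, 3.6)  # ~115 GB/s

uplift = (gddr5_variant / gddr3_variant - 1) * 100
print(f"GDDR5 uplift over GDDR3 on the same 256-bit bus: {uplift:.0f}%")
```

If games scale at all with that extra headroom, the GDDR5 variant pulling ahead of its GDDR3 twin is exactly the effect being described.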
 

jbunch07

New Member
This is sort of an arms race between the USA and the USSR. Even if a GPU doesn't need all the bandwidth, it's in place; an HD 3650 will never need PCI-E 2.0 x16 bandwidth. But when it comes to the RV770 and its memory subsystem, the difference comes to the surface when the RV770 Pro is compared to its own GDDR3 variant. The fact that there is a difference shows the RV770 is able to make use of all that bandwidth and is efficient with it.

I thought the bandwidth needed had more to do with the game, or what you're doing with the card... i.e. some games need more bandwidth than others... but I know what you mean.

Correct me if I'm wrong.
 
People like candle really should be kept out of these types of threads; I'm sure he just comes stomping in to troll as usual...

Wait for the card, then smack it if you feel it's necessary; otherwise just pipe down, let the facts come from the horse's mouth, so to speak, and wait for genuine reviews.

:toast:
 

btarunr

Editor & Senior Moderator
Staff member
I thought the bandwidth needed had more to do with the game, or what you're doing with the card... i.e. some games need more bandwidth than others... but I know what you mean.

Correct me if I'm wrong.

Yes: the higher the resolution (of the video/game), the larger the frames, the more data is transferred, and extra bandwidth helps there.
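As a rough back-of-the-envelope illustration of that scaling, assuming a 32-bit colour buffer and ignoring textures, Z/stencil traffic, overdraw and AA (which multiply real demand many times over), here is how colour-buffer write traffic alone grows with resolution:

```python
def framebuffer_gbs(width: int, height: int, fps: int, bytes_per_pixel: int = 4) -> float:
    """Very rough colour-buffer write traffic in GB/s for one write per pixel per frame.

    Ignores texture reads, Z/stencil, overdraw, AA and compression; it is only
    meant to show how traffic scales with resolution and frame rate.
    """
    return width * height * bytes_per_pixel * fps / 1e9

for width, height in [(1280, 1024), (1920, 1200), (2560, 1600)]:
    print(f"{width}x{height} @ 60 fps: {framebuffer_gbs(width, height, 60):.2f} GB/s")
```

The absolute numbers are tiny next to a card's total bandwidth because textures and overdraw dominate, but the proportional growth with resolution is the point being made.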
 

DarkMatter

New Member
As for the X1900, it STOMPED the 7900/7950, cards that ON PAPER should have been stronger; 24 pipes vs. 16, for example, was what people were using to "prove" that the Nvidia cards WOULD kill the X1900 range of cards.

Funny, since the X1900/1950 XT/XTX cards had 16 pipes/ROPs vs. the 7900's 24, and the 7900 got beaten...

I could agree with many of your points in this thread, but I can't take you seriously, just because of these:

a- BOTH had 16 ROPs and 8 vertex shaders.
b- It's true that NV had 24 TMUs while ATI had 16, though they were different; ATI's were more complex.
c- AND the X1900 had 48 pixel shaders vs. 24 on the 7900.

Back then nothing suggested that TMUs could be the bottleneck; even today I have my reservations, but I generally accept TMUs as R600/670's weakness. ATI cards (X1900) were a LOT BETTER on paper than Nvidia cards, and that resulted in a performance win in practice. BUT it didn't stomp the 7900, as it was never more than 10% faster (except for a pair of exceptions) and was usually within a 5% margin. If the X1900 STOMPED the 7900, I don't know how you would describe G80/92 vs. R600/RV670...

Don't bring in the price argument, please, since the 7900 GTX was a lot cheaper than the X1900 XTX. It actually traded blows with the XT, both in price and performance. The only card that stood out in its price segment was the X1950 Pro, when the G80 was already out, but it was still very expensive.

I don't have anything against your opinions, but try not to use false data to support your arguments. I really think it's just that your memory failed you, but be careful next time. :toast:

EDIT: Hmm, I just noticed two things in the Assassin's Creed graph you posted.

1- No anisotropic filtering is used on the ATI card.
2- It's the X2 that is being compared to the 9800 GTX; I first thought it was the HD 3870.

All in all the X2 should be faster, because it's more expensive and no AF is applied, but it's not.
 

Rebo&Zooty

New Member
MSRP is similar on both cards; Nvidia just recently dropped their prices, AFAIK.

AF was disabled because it's bugged in that game; either a driver patch or a game patch would fix that, but the makers are patching out DX10.1 support for now, probably because Nvidia doesn't want anybody competing with them.

This wasn't to show price, card vs. card; it was to show that DX10.1 shader-based AA has less impact than DX9 shader-based AA, and that the R600/670 were made for DX10, not DX9 or DX9 plus DX10 shaders.

Different designs: the 8800 is really a DX9 card with Shader Model 4.0 tacked on, while the 2900/3800 are native Shader Model 4 cards with DX9 support tacked on via drivers; there's a very different concept behind each. Since Vista tanked, the R600/670 haven't had any true DX10/10.1 games to show off their design, and as soon as one came out, it somehow ended up having DX10.1 patched out once ATI did well in it.
 
Back then nothing suggested that TMUs could be the bottleneck; even today I have my reservations, but I generally accept TMUs as R600/670's weakness. ATI cards (X1900) were a LOT BETTER on paper than Nvidia cards, and that resulted in a performance win in practice. BUT it didn't stomp the 7900, as it was never more than 10% faster (except for a pair of exceptions) and was usually within a 5% margin. If the X1900 STOMPED the 7900, I don't know how you would describe G80/92 vs. R600/RV670...

The RV770 has 32 TMUs instead of the RV670's 16, if the rumours are right.

Today the X1950 XTX is 36% faster than the 7900 GTX at 1280x1024 without AA, and 79% faster with 4x AA (it also beats the 7950 GX2 by 4% without AA, and by 35% with 4x AA).

http://www.computerbase.de/artikel/...on_hd_3870_x2/24/#abschnitt_performancerating
 

DarkMatter

New Member
MSRP is similar on both cards; Nvidia just recently dropped their prices, AFAIK.

AF was disabled because it's bugged in that game; either a driver patch or a game patch would fix that, but the makers are patching out DX10.1 support for now, probably because Nvidia doesn't want anybody competing with them.

This wasn't to show price, card vs. card; it was to show that DX10.1 shader-based AA has less impact than DX9 shader-based AA, and that the R600/670 were made for DX10, not DX9 or DX9 plus DX10 shaders.

Different designs: the 8800 is really a DX9 card with Shader Model 4.0 tacked on, while the 2900/3800 are native Shader Model 4 cards with DX9 support tacked on via drivers; there's a very different concept behind each. Since Vista tanked, the R600/670 haven't had any true DX10/10.1 games to show off their design, and as soon as one came out, it somehow ended up having DX10.1 patched out once ATI did well in it.

MSRP is not similar; the GTX has been $50 cheaper since day one. And that's the case on Newegg: the GTX is around $50 cheaper. Average GTX $300, average X2 $350-375; cheapest GTX $289, cheapest X2 $339. The averages are not calculated but approximated, and I didn't include the two highest prices for each card; if I had, the X2 would suffer a lot, so my averages are actually very favourable to the X2. Here in Spain, the GTX is well below 250 euros, while the X2 is well above 300.

Anyway, my point was that the graph didn't show shader AA to be superior; the X2 should be a lot faster in those circumstances, but it's not. It only shows that the performance hit under DX10.1 is not as pronounced as under DX10 when AA is done on shaders, never that it's faster than with dedicated hardware. Also, according to THAT GAME, DX10.1 AA is faster than DX10 AA on ATI cards, but I would take that with a grain of salt. The lighting in DX10.1 was way inferior to the DX10 lighting in some places, because something was missing. I saw it somewhere and had my doubts, until one of my friends confirmed it.
 

Rebo&Zooty

New Member
I could agree with many of your points in this thread, but I can't take you seriously, just because of these:

a- BOTH had 16 ROPs and 8 vertex shaders.
b- It's true that NV had 24 TMUs while ATI had 16, though they were different; ATI's were more complex.
c- AND the X1900 had 48 pixel shaders vs. 24 on the 7900.

Back then nothing suggested that TMUs could be the bottleneck; even today I have my reservations, but I generally accept TMUs as R600/670's weakness. ATI cards (X1900) were a LOT BETTER on paper than Nvidia cards, and that resulted in a performance win in practice. BUT it didn't stomp the 7900, as it was never more than 10% faster (except for a pair of exceptions) and was usually within a 5% margin. If the X1900 STOMPED the 7900, I don't know how you would describe G80/92 vs. R600/RV670...

Don't bring in the price argument, please, since the 7900 GTX was a lot cheaper than the X1900 XTX. It actually traded blows with the XT, both in price and performance. The only card that stood out in its price segment was the X1950 Pro, when the G80 was already out, but it was still very expensive.

I don't have anything against your opinions, but try not to use false data to support your arguments. I really think it's just that your memory failed you, but be careful next time. :toast:

EDIT: Hmm, I just noticed two things in the Assassin's Creed graph you posted.

1- No anisotropic filtering is used on the ATI card.
2- It's the X2 that is being compared to the 9800 GTX; I first thought it was the HD 3870.

All in all the X2 should be faster, because it's more expensive and no AF is applied, but it's not.

http://www.techpowerup.com/reviews/PointOfView/Geforce7900GTX/

According to the TPU review, the 7800/7900 has:
Number of pixel shader processors: 24
Number of pixel pipes: 24
Number of texturing units: 24

So you're wrong: the 7800/7900-based cards are 24 ROPs/pipes with 1 shader unit per pipe, whereas the X19x0 XT/XTX have 16 pipes with 3 shaders per pipe (48 total).

You try to discredit me and then use false facts...

Second, the X1900 XT and XTX were the same card; I have yet to meet an X1900 XT that wouldn't clock to XTX speeds and beyond, and it was cake to flash them. In fact that's what my backup card is: a CHEAP X1900 XT flashed with the Toxic XTX BIOS.
http://www.trustedreviews.com/graph...phire-Liquid-Cooled-Radeon-X1900-XTX-TOXIC/p4

Check that out: it seems the GX2 is faster than the XTX, but only in a few cases; overall they trade blows. Yet the GX2 was a lot more expensive and had a VERY short life; it went EOL pretty fast and never did get quad-SLI updates...
In the end it was a bad buy, whereas my X1900 XT/XTX card was a great buy; I got it for less than a third of the price of an 8800 GTS, and it can still play current games, not maxed out by any means, but still better than the 7900/7950 manage :)
 

DarkMatter

New Member
The RV770 has 32 TMUs instead of the RV670's 16, if the rumours are right.

Today the X1950 XTX is 36% faster than the 7900 GTX at 1280x1024 without AA, and 79% faster with 4x AA (it also beats the 7950 GX2 by 4% without AA, and by 35% with 4x AA) in 3DMark 06.

http://www.computerbase.de/artikel/...on_hd_3870_x2/24/#abschnitt_performancerating

Corrected that for you. C'mon, we all know what happens between 3DMark 06 and Nvidia, and what happens in games. I don't want to hear the conspiracy theory again unless some actual proof is shown, please; it's an old, tired argument. Over time I have come to the conclusion that ATI builds their cards for benchmarking, while Nvidia builds theirs for games. [H] had a really nice article about benchmarks vs. games. The difference was brutal, and they weren't even talking about 3DMark vs. games: it was a game's built-in benchmarks vs. actual gameplay in the same game. They even demoed their own benchmarks and the result was the same.
 

DarkMatter

New Member
http://www.techpowerup.com/reviews/PointOfView/Geforce7900GTX/

According to the TPU review, the 7800/7900 has:
Number of pixel shader processors: 24
Number of pixel pipes: 24
Number of texturing units: 24

So you're wrong: the 7800/7900-based cards are 24 ROPs/pipes with 1 shader unit per pipe, whereas the X19x0 XT/XTX have 16 pipes with 3 shaders per pipe (48 total).

You try to discredit me and then use false facts...

Second, the X1900 XT and XTX were the same card; I have yet to meet an X1900 XT that wouldn't clock to XTX speeds and beyond, and it was cake to flash them. In fact that's what my backup card is: a CHEAP X1900 XT flashed with the Toxic XTX BIOS.
http://www.trustedreviews.com/graph...phire-Liquid-Cooled-Radeon-X1900-XTX-TOXIC/p4

Check that out: it seems the GX2 is faster than the XTX, but only in a few cases; overall they trade blows. Yet the GX2 was a lot more expensive and had a VERY short life; it went EOL pretty fast and never did get quad-SLI updates...
In the end it was a bad buy, whereas my X1900 XT/XTX card was a great buy; I got it for less than a third of the price of an 8800 GTS, and it can still play current games, not maxed out by any means, but still better than the 7900/7950 manage :)

The 7900 GTX has 16 ROPs. Period.
Speaking of PIPES when the cards have a different number of units at each stage is silly. What's the pipe count: the number of ROPs, the number of TMUs, or the number of pixel shaders? Silly.
 
Joined
Sep 2, 2005
Messages
294 (0.04/day)
Location
Szekszárd, Hungary
Processor AMD Phenom II X4 955BE
Motherboard Asus M4A785TD-V Evo
Cooling Xigmatek HDT S1283
Memory 4GB Kingston Hyperx DDR3
Video Card(s) GigaByte Radeon HD3870 512MB GDDR4
Storage WD Caviar Black 640GB, Hitachi Deskstar T7K250 250GB
Display(s) Samsung SyncMaster F2380M
Audio Device(s) Creative Audigy ES 5.1
Power Supply Corsair VX550
Software Microsoft Windows 7 Professional x64
Corrected that for you. C'mon, we all know what happens between 3DMark 06 and Nvidia, and what happens in games. I don't want to hear the conspiracy theory again unless some actual proof is shown, please; it's an old, tired argument. Over time I have come to the conclusion that ATI builds their cards for benchmarking, while Nvidia builds theirs for games. [H] had a really nice article about benchmarks vs. games. The difference was brutal, and they weren't even talking about 3DMark vs. games: it was a game's built-in benchmarks vs. actual gameplay in the same game. They even demoed their own benchmarks and the result was the same.

I don't know what you're talking about. The link shows benchmarks with lots of games; the page I linked shows how the cards perform relative to each other in an average of all the tests.
 

DarkMatter

New Member
I don't know what you're talking about. The link shows benchmarks with lots of games; the page I linked shows how the cards perform relative to each other in an average of all the tests.

Yup, OK, sorry. :eek:

I used Google to translate it to Spanish and it didn't do a good job. I understood it to be 3DMark results, not to mention that the next/previous page links were nowhere to be found... OMG, I love you, Google Translator...
The translation to English went better. :toast:

In the end you're right. The X1900 is A LOT faster in newer games, and I knew it would happen; heavier use of shaders helping the card with more pixel shaders is no surprise. If you knew me, you would know that I have always said the X1900 was a lot faster than the 7900, but in no way did it STOMP it in games. NOW it does. Anyway, it's faster, but almost always at unplayable frame rates. Don't get me wrong, it's a lot faster, period; it just took too long for this to happen, IMO.
Also, IMO, ATI should make cards for today and not always be thinking about being the best in the far future (that's one year in this industry), when better cards will be around and ultimately no one will care about the old one. That's my opinion anyway. I want ATI back, and I think that's what they have to do. Until then they are making me buy Nvidia, since it's the better value at the moment. The HD 4000 and GTX 200 series are not going to change this from what I've heard, which is a shame.

EDIT: I forgot to answer this before, even though I wanted to:

The RV770 has 32 TMUs instead of the RV670's 16, if the rumours are right.

It seems they are right. BUT they are doubling shader power too, so it doesn't look like texture power was as big a problem if they have maintained the balance between the two. Same with Nvidia's next cards: they have maintained the balance between SPs and TMUs, AFAIK.
It's something that saddens me, since I really wanted to know where the bottleneck more commonly sits: in the SPs or the TMUs? It definitely isn't in the ROPs until you reach high resolutions and AA levels, and it certainly isn't memory bandwidth. That doesn't mean memory bandwidth couldn't be more important in the future. Indeed, if GPU physics finally becomes widespread, and I think that's inevitable, we will need that bandwidth; but for graphics alone, bandwidth is the thing with the most spare headroom nowadays. GDDR5 clocks or a 512-bit interface are NOT needed for the kind of power the next cards will have, if it's only used for rendering. They are more e-peen than anything, IMO.
 
Prices don't matter; it's all in your setup. The 9800 GX2 and the 3-series X2 are basically equal. I recommend the X2 over the 9800 GX2. Right now the best machine out there is the ATI CrossFireX quad-GPU setup. My friend from CSS had a 3DMark 06 score of 45k... one main reason is the quad Extreme he has at 4.8 GHz with 8 GB of RAM. But his friend got 9800 GX2 SLI and his was only 40k. CrossFireX is the best thing out right now, from what I have seen at least.

Considering the current world record is 32k, and 8 GB of RAM doesn't increase a 3DMark score over 2 GB, what are you talking about?
 
Joined
Sep 2, 2005
Messages
294 (0.04/day)
Location
Szekszárd, Hungary
Processor AMD Phenom II X4 955BE
Motherboard Asus M4A785TD-V Evo
Cooling Xigmatek HDT S1283
Memory 4GB Kingston Hyperx DDR3
Video Card(s) GigaByte Radeon HD3870 512MB GDDR4
Storage WD Caviar Black 640GB, Hitachi Deskstar T7K250 250GB
Display(s) Samsung SyncMaster F2380M
Audio Device(s) Creative Audigy ES 5.1
Power Supply Corsair VX550
Software Microsoft Windows 7 Professional x64
Yup, OK, sorry. :eek:

I used Google to translate it to Spanish and it didn't do a good job. I understood it to be 3DMark results, not to mention that the next/previous page links were nowhere to be found... OMG, I love you, Google Translator...
The translation to English went better. :toast:

In the end you're right. The X1900 is A LOT faster in newer games, and I knew it would happen; heavier use of shaders helping the card with more pixel shaders is no surprise. If you knew me, you would know that I have always said the X1900 was a lot faster than the 7900, but in no way did it STOMP it in games. NOW it does. Anyway, it's faster, but almost always at unplayable frame rates. Don't get me wrong, it's a lot faster, period; it just took too long for this to happen, IMO.
Also, IMO, ATI should make cards for today and not always be thinking about being the best in the far future (that's one year in this industry), when better cards will be around and ultimately no one will care about the old one. That's my opinion anyway. I want ATI back, and I think that's what they have to do. Until then they are making me buy Nvidia, since it's the better value at the moment. The HD 4000 and GTX 200 series are not going to change this from what I've heard, which is a shame.

EDIT: I forgot to answer this before, even though I wanted to:

It seems they are right. BUT they are doubling shader power too, so it doesn't look like texture power was as big a problem if they have maintained the balance between the two. Same with Nvidia's next cards: they have maintained the balance between SPs and TMUs, AFAIK.
It's something that saddens me, since I really wanted to know where the bottleneck more commonly sits: in the SPs or the TMUs? It definitely isn't in the ROPs until you reach high resolutions and AA levels, and it certainly isn't memory bandwidth. That doesn't mean memory bandwidth couldn't be more important in the future. Indeed, if GPU physics finally becomes widespread, and I think that's inevitable, we will need that bandwidth; but for graphics alone, bandwidth is the thing with the most spare headroom nowadays. GDDR5 clocks or a 512-bit interface are NOT needed for the kind of power the next cards will have, if it's only used for rendering. They are more e-peen than anything, IMO.


I think the TMUs were a bottleneck in the RV670, and the memory bandwidth is good for high res with high AA. More shader power is necessary, as always, especially if GPU physics comes into the picture.
I'm not certain whether the ROPs are a bottleneck...
 