
AMD to Give RV770 a Refresh, G200b Counterattack Planned

FudFighter

New Member
Joined
Oct 29, 2008
Messages
109 (0.02/day)
Dunno, over the years I have seen a good number of bad traces, and as things get smaller and more complex I wouldn't expect that to disappear.

We used to fix cards with burnt/damaged/flawed surface traces with a conductive pen, then seal it with some clear fingernail polish :)
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,362 (7.68/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Dunno, over the years I have seen a good number of bad traces, and as things get smaller and more complex I wouldn't expect that to disappear.

We used to fix cards with burnt/damaged/flawed surface traces with a conductive pen, then seal it with some clear fingernail polish :)

Did you know the wiring you see on either side of the PCB isn't the only wiring? A PCB is a layered thing, with each layer holding wiring...something conductive pens won't help with.

The whole "512-bit is more prone to damage" thing is just a mathematical probability. Of course the vMem circuitry is a different issue, where more chips = more wear/tear, but it's understood that on a card with a 512-bit memory interface, the vMem is made accordingly durable (high-grade components used), something NVIDIA does on its reference G80 and G200 PCBs.
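btarunr's point here is arithmetic: with some per-trace defect probability, doubling the trace count roughly doubles a failure probability that is tiny to begin with. A minimal sketch, with invented numbers (the real per-trace defect rate isn't public, and `board_failure_probability` is a hypothetical helper):

```python
# Toy model: P(at least one bad trace) for n independent traces,
# each with an assumed per-trace defect probability p.
# Both numbers below are invented, purely for illustration.
def board_failure_probability(traces: int, p_per_trace: float) -> float:
    return 1.0 - (1.0 - p_per_trace) ** traces

p = 1e-6  # hypothetical defect probability per trace
for traces in (1000, 2000):  # roughly: 256-bit vs. 512-bit routing
    print(f"{traces} traces -> {board_failure_probability(traces, p):.4%} chance of a bad board")
# Doubling the traces roughly doubles the probability, but it stays on
# the order of 0.1% - "just a mathematical probability", as the post puts it.
```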
 

FudFighter

New Member
Joined
Oct 29, 2008
Messages
109 (0.02/day)
Yes, I know 6 and 8 layers are common, and I fully know you can't fix internal traces. I never said anything about fixing internal PCB traces...you act like I am a moron/uber noob.

Anything that's more complex will be more prone to problems. Look at Windows and PCs: they are more complex to deal with than a Mac, whose hardware is limited, so you have fewer problems. But that doesn't make Macs better, and it doesn't really make them worse either (the user base does that :p ).

I think you get what I was talking about. I'm done trying to explain/justify/whatever; I'm gonna watch some Lost and look for more stuff to dump on my Samsung players.
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,362 (7.68/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Anything that's more complex will be more prone to problems. Look at Windows and PCs: they are more complex to deal with than a Mac, whose hardware is limited, so you have fewer problems. But that doesn't make Macs better, and it doesn't really make them worse either (the user base does that :p ).

...if there aren't any measures to make them accordingly durable, yes, but that's not the case. By that logic, a Core 2 Extreme QX9770 is more prone to damage than an E5200 (again, overclocking notwithstanding), but that isn't the case, right? Probabilities always exist. Sometimes they're too small to manifest into anything real. I'm not doing anything other than having this discussion...thank you for it.
 

FudFighter

New Member
Joined
Oct 29, 2008
Messages
109 (0.02/day)
No, but the rate of cores that make it into QX9770s vs. E5200s is far lower.

You do know they bin chips, don't you?

You do know that Intel's quads are still two Core 2 Duo dies on one package, don't you?

Do you know why Intel does this?

They do it because the fail/flaw rate of dual-core dies is lower, due to lower complexity, than it would be with one solid die with 4 cores on it.

What I'm saying is your logic is flawed, that or you really don't know what you're talking about.
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,362 (7.68/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
No, but the rate of cores that make it into QX9770s vs. E5200s is far lower.

You do know they bin chips, don't you?

You do know that Intel's quads are still two Core 2 Duo dies on one package, don't you?

Do you know why Intel does this?

They do it because the fail/flaw rate of dual-core dies is lower, due to lower complexity, than it would be with one solid die with 4 cores on it.

What I'm saying is your logic is flawed, that or you really don't know what you're talking about.

Ah...now use your logic against yourself:

"You do know they bin chips, don't you?"

The durability of the components used in those complex graphics cards negates their mathematically higher probability of failure (which exists merely because of their complexity). The probability is only mathematical, not real.
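To make this concrete: if defective units are mostly caught by testing before sale, the raw production failure rate barely shows up in what ships. A rough sketch with invented rates (`shipped_defect_rate` and all figures are illustrative assumptions, not industry data):

```python
# Invented numbers: a complex card with 3x the raw defect rate still
# ships at a near-identical quality level if QA screens equally well.
def shipped_defect_rate(raw_rate: float, test_escape_rate: float) -> float:
    # Of all units that pass QA, what fraction is secretly defective?
    bad_shipped = raw_rate * test_escape_rate   # defects QA missed
    good_shipped = 1.0 - raw_rate               # genuinely good units
    return bad_shipped / (bad_shipped + good_shipped)

print(f"simple card:  {shipped_defect_rate(0.02, 0.05):.3%} defective on the shelf")
print(f"complex card: {shipped_defect_rate(0.06, 0.05):.3%} defective on the shelf")
# ~0.10% vs ~0.32%: the extra complexity is absorbed in production cost,
# not in what the buyer experiences.
```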
 
Joined
May 19, 2007
Messages
7,662 (1.24/day)
Location
c:\programs\kitteh.exe
Processor C2Q6600 @ 1.6 GHz
Motherboard Anus PQ5
Cooling ACFPro
Memory GEiL2 x 1 GB PC2 6400
Video Card(s) MSi 4830 (RIP)
Storage Seagate Barracuda 7200.10 320 GB Perpendicular Recording
Display(s) Dell 17'
Case El Cheepo
Audio Device(s) 7.1 Onboard
Power Supply Corsair TX750
Software MCE2K5
No, but the rate of cores that make it into QX9770s vs. E5200s is far lower.

You do know they bin chips, don't you?

You do know that Intel's quads are still two Core 2 Duo dies on one package, don't you?

Do you know why Intel does this?

They do it because the fail/flaw rate of dual-core dies is lower, due to lower complexity, than it would be with one solid die with 4 cores on it.

What I'm saying is your logic is flawed, that or you really don't know what you're talking about.

why so much effort to try and show up bta?
 
FudFighter

New Member
Joined
Oct 29, 2008
Messages
109 (0.02/day)
Ah...now use your logic against yourself:

"You do know they bin chips, don't you?"

The durability of the components used in those complex graphics cards negates their mathematically higher probability of failure (which exists merely because of their complexity). The probability is only mathematical, not real.

But it also raises cost. This is moving away from the original point of my post, and you and the other guy know it.

Binning chips and components and building more costly PCBs leads to higher costs, which leads to higher prices. I would like to know how high the fail rate of the PCBs themselves is in QA testing; each fail is $ wasted. So the point is that NVIDIA's costs are higher, as are their prices.

Just like back in the day: the 9700/9800 Pro/XT was a 256-bit card and the 9500 Pro/9800 SE (256-bit) was 128-bit. Some old 9500s were just 9700/9800 Pro/XT boards with a BIOS to disable half the memory bus and/or pipes on the card (have seen cards both ways). They also had native Pro versions that were 128-bit and FAR cheaper to make, with less complex PCBs.

Blah, that last bit was a bit of a ramble. Point being that ATI's way this time around, as in the past, was to find a cheaper, more efficient way to do the same job.

GDDR5 on 256-bit can have equivalent bandwidth to 512+ bit GDDR3 (quick math after this post). Sure, the initial price of GDDR5 was higher, but I would bet by now the cost has come down a good bit (a lot of companies are making it, after all). I was reading that NVIDIA could and likely will move to GDDR5; they didn't use GDDR4 because of cost and low supply (it also wasn't that much better than GDDR3).


Blah, you treat me like a moron, and you use "flawed logic" to try and get around the situation.

You used the QX9770 (who the hell's gonna pay that kind of price for a CPU?) as an example. We couldn't get real 1:1 numbers on that because nobody sane buys those things (over $1K for a CPU...).

An example that can show you what I mean would be the K10s: there are quad cores and tri-cores.

The tri-cores are either weak or failed quads. AMD found a way to make money off flawed chips; they still function just fine. But due to the complexity of a NATIVE quad core, you ARE going to have higher fails than if you went with multiple dual-core dies on one package.

In that regard, Intel's method was smarter to a point (for the home-user market), since it was cheaper and had lower fail rates (they could always sell failed duals as lower single-core models). AMD even admitted that for the non-server market they should have done an Intel-style setup for the first run, then moved to native on the second batch.

I have a feeling NVIDIA will end up moving to less complex PCBs with GDDR5 with their next real change (non-refresh).

We shall see. I just know that, price for performance, I would take a 4800 over the GT200 or G92, that being if I hadn't bought the card I got before the 3800s were out :)
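For reference, the bandwidth-equivalence claim above checks out on paper. A quick calculation using the commonly cited reference memory clocks (treat the exact figures as approximate; `bandwidth_gbs` is a hypothetical helper):

```python
# Peak memory bandwidth = bus width (bytes) x effective transfer rate.
# GDDR3 moves 2 transfers per clock; GDDR5 moves 4.
def bandwidth_gbs(bus_bits: int, mem_clock_mhz: float, transfers_per_clock: int) -> float:
    return bus_bits / 8 * mem_clock_mhz * transfers_per_clock / 1000

print(f"GTX 280, 512-bit GDDR3 @ 1107 MHz: {bandwidth_gbs(512, 1107, 2):.1f} GB/s")
print(f"HD 4870, 256-bit GDDR5 @  900 MHz: {bandwidth_gbs(256,  900, 4):.1f} GB/s")
# ~141.7 vs ~115.2 GB/s: half the bus width lands in the same ballpark.
```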
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,362 (7.68/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
But it also raises cost. This is moving away from the original point of my post, and you and the other guy know it.

Well, that's the whole reason why it's priced that way and caters to that segment of the market, right?

Binning chips and components and building more costly PCBs leads to higher costs, which leads to higher prices. I would like to know how high the fail rate of the PCBs themselves is in QA testing; each fail is $ wasted. So the point is that NVIDIA's costs are higher, as are their prices.

Apparently NVIDIA disagrees with you. The PCB has very little role to play in a graphics card's failure; it's almost always down to the components, or the overclocker's acts. The quality of the components used makes up for the very slight probability of the PCB being the cause of death for a graphics card...in effect, the PCB is the last thing you'd point your finger at.

GDDR5 on 256-bit can have equivalent bandwidth to 512+ bit GDDR3. Sure, the initial price of GDDR5 was higher, but I would bet by now the cost has come down a good bit (a lot of companies are making it, after all). I was reading that NVIDIA could and likely will move to GDDR5; they didn't use GDDR4 because of cost and low supply (it also wasn't that much better than GDDR3).

I agree x-bit GDDR5 = 2x-bit GDDR3, but you have to agree that G200 PCBs had been in development for a long time, I'd say right after the G80 launch, when NVIDIA started work on 65 nm GPUs. Just because the product happened to launch just before another one with GDDR5 came about, you can't say "they should have used GDDR5". Whether they cut costs or not, you end up paying the same; they make sure of it. Don't you get a GeForce GTX 260 in the same price range as an HD 4870? So don't conclude that if they manage to cut costs they'll hand the benefit over to you by making you pay less; they'll benefit themselves.

Blah, you treat me like a moron, and you use "flawed logic" to try and get around the situation.

Whatever you're accusing others of is apparently all in your mind.

You used the QX9770 (who the hell's gonna pay that kind of price for a CPU?) as an example. We couldn't get real 1:1 numbers on that because nobody sane buys those things (over $1K for a CPU...).

Conclusions...conclusions. It's called "premium". People who can buy will buy, however smart/dumb they are. The $1000~1500 CPUs Intel sells cater to that very market, something AMD did in its day too.
 
Joined
Apr 21, 2008
Messages
5,250 (0.90/day)
Location
IRAQ-Baghdad
System Name MASTER
Processor Core i7 3930k run at 4.4ghz
Motherboard Asus Rampage IV extreme
Cooling Corsair H100i
Memory 4x4G kingston hyperx beast 2400mhz
Video Card(s) 2X EVGA GTX680
Storage 2X Crusial M4 256g raid0, 1TbWD g, 2x500 WD B
Display(s) Samsung 27' 1080P LED 3D monitior 2ms
Case CoolerMaster Chosmos II
Audio Device(s) Creative sound blaster X-FI Titanum champion,Creative speakers 7.1 T7900
Power Supply Corsair 1200i, Logitch G500 Mouse, headset Corsair vengeance 1500
Software Win7 64bit Ultimate
Benchmark Scores 3d mark 2011: testing
256-bit -> 512-bit does the same thing that GDDR3 -> GDDR5 does: doubles the memory bandwidth. Apparently RV770 does not need that much bandwidth, or you would see a much bigger difference between the 4850 and the 4870.

Ohh, interesting, thanks W1zzard. Sure, you're right. But let's say a 4870 with 512-bit, just like in the 4870 X2's case: we see high bandwidth in GPU-Z, and sure, it gives great performance, not because of high memory size but because of high memory bandwidth. Am I right, or what is your tip?
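W1zzard's observation in numbers: the HD 4870's GDDR5 gives it roughly 1.8x the HD 4850's bandwidth on the same 256-bit bus, yet reviews at the time put the performance gap well below that, so RV770 isn't bandwidth-starved. A quick check using reference clocks (approximate values; the helper function is a sketch):

```python
# Same 256-bit bus; only the memory type (and clock) differs.
def bandwidth_gbs(bus_bits, mem_clock_mhz, transfers_per_clock):
    return bus_bits / 8 * mem_clock_mhz * transfers_per_clock / 1000

hd4850 = bandwidth_gbs(256, 993, 2)  # GDDR3, 2 transfers/clock -> ~63.6 GB/s
hd4870 = bandwidth_gbs(256, 900, 4)  # GDDR5, 4 transfers/clock -> ~115.2 GB/s
print(f"HD 4850: {hd4850:.1f} GB/s, HD 4870: {hd4870:.1f} GB/s ({hd4870/hd4850:.2f}x)")
```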
 
Joined
Jul 20, 2008
Messages
4,016 (0.70/day)
Location
Ohio
System Name Desktop|| Virtual Host 0
Processor Intel Core i5 2500-K @ 4.3ghz || 2x Xeon L5630 (total 8 cores, 16 threads)
Motherboard ASUS P8Z68-V || Dell PowerEdge R710 (Intel 5520 chipset)
Cooling Corsair Hydro H100 || Stock hotplug fans and passive heatsinks
Memory 4x4gb Corsair Vengeance DDR3 1600 || 12x4gb Hynix DDR3 1066 FB-DIMMs
Video Card(s) MSI GTX 760 Gaming Twin Frozr 4GB OC || Don't know, don't care
Storage Hitachi 7K3000 2TB || 6x300gb 15k rpm SAS internal hotswap, 12x3tb Seagate NAS drives in enclosure
Display(s) ViewSonic VA2349S || remote iDRAC KVM console
Case Antec P280 || Dell PowerEdge R710
Audio Device(s) HRT MusicStreamer II+ and Focusrite Scarlett 18i8 || Don't know, don't care
Power Supply SeaSonic X650 Gold || 2x870w hot-swappable
Mouse Logitech G500 || remote iDRAC KVM console
Keyboard Logitech G510 || remote iDRAC KVM console
Software Win7 Ultimate x64 || VMware vSphere 6.0 with vCenter Server 6.0
Benchmark Scores Over 9000 on the scouter
FudFighter, how is btarunr treating you like a moron? I think you're being way too defensive; all bta is doing is trying to debate with you about some of the things you've said, because he disagrees.
 

wolf

Performance Enthusiast
Joined
May 7, 2007
Messages
7,753 (1.25/day)
System Name MightyX
Processor Ryzen 5800X3D
Motherboard Gigabyte X570 I Aorus Pro WiFi
Cooling Scythe Fuma 2
Memory 32GB DDR4 3600 CL16
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
Agreed. btarunr has based what he has written on nothing but logic, experience, and knowledge, whether or not it's 100% factual.

There's no real sense arguing just to argue. You are well entitled to your opinion, but rest assured, nobody is calling you, or treating you like, a moron.

I do see some very valid points you raise, FudFighter, but statements like "you do know they bin chips, don't you?" directed at btarunr do not strengthen your position.

Play it cool, man, we just wanna discuss cool new hardware :)
 

DarkMatter

New Member
Joined
Oct 5, 2007
Messages
1,714 (0.28/day)
Processor Intel C2Q Q6600 @ Stock (for now)
Motherboard Asus P5Q-E
Cooling Proc: Scythe Mine, Graphics: Zalman VF900 Cu
Memory 4 GB (2x2GB) DDR2 Corsair Dominator 1066Mhz 5-5-5-15
Video Card(s) GigaByte 8800GT Stock Clocks: 700Mhz Core, 1700 Shader, 1940 Memory
Storage 74 GB WD Raptor 10000rpm, 2x250 GB Seagate Raid 0
Display(s) HP p1130, 21" Trinitron
Case Antec p180
Audio Device(s) Creative X-Fi PLatinum
Power Supply 700W FSP Group 85% Efficiency
Software Windows XP

You are mixing things up.

- First of all, we are arguing about the failure rates, not the price. Complex PCBs are undoubtedly more expensive, but you make them because they allow for cheaper parts in other places (i.e. GDDR3) or for improved performance. Which is better is something much more complicated than comparing the PCBs. What has happened with GT200 and RV770 doesn't prove anything on this matter either, first because GT200 "failed" (they couldn't clock it as high as they wanted*), and second because when comparing prices you have to take into account that prices fluctuate and are tied to demand. I have said this like a million times, but had Nvidia adopted GDDR5 the same day Ati did, the demand for GDDR5 would have been 3x what it has been**, at a time WHEN suppliers couldn't even meet Ati's demand. That would make prices skyrocket. It's easy to look at the market today and say 256-bit + GDDR5 is cheaper (I'm still not so sure), but what would have happened if GDDR5 prices were 50-80% higher? Nvidia, because of the high market share they had, couldn't take the risk of finding that out. You can certainly thank Nvidia for RV770's success in price/performance, don't doubt it for a moment.

- We have told you that failure rates are indeed higher too (under the same conditions), but not by as much as you make them out to be, nowhere near it really. AND that small increase is already taken into account BEFORE they manufacture them, and they take actions to make up for that difference (the conditions are then different). In fact, a big part (the biggest, probably) of the increased cost of manufacturing complex PCBs is because of that. In the end the final product is in practice as error-free as the simpler one, but at higher cost. As I've discussed above, those increased costs might not matter; it just depends on your strategy and the surrounding market.

- Don't mix apples and oranges. Microchip manufacturing and PCBs are totally different things. I can't think of any other manufactured product with failure rates as high as microchips; it's part of the complexity of the process of making them. In chips, a failure-rate difference of 20% can easily happen between a simple design and a more complex one, but that's not the case with PCBs.

And also, don't take individual examples as proof of facts. I'm talking about K10. Just as GT200 failed, K10 failed, and although those failures are related to their complexity, the nature of the failure surpassed ANY expectations. Although related, the failure was not due to the complexity but to issues with the manufacturing process. You can't take one example and make a point with it. What about Nehalem? It IS a native quad-core CPU and they are not having the problems K10 had. Making a native quad is riskier than slapping two duals together, but the benefits of a native quad are evident. Even if failures are higher, a native quad is inherently much faster and makes up for the difference: if the native is 30% faster (just as an example), then to deliver the same performance you can afford to make each core on the native CPU 30% simpler. In the end it depends on whether the architectural benefits can make up for the difference at manufacturing time. In K10 they didn't; in Nehalem they do, and in Shanghai they do. The conclusion is evident: in practice, native quads are proving to be the better solution (rough yield sketch after this post).

* IMO that had to have been sorted out before launch, BUT not in time for when the specs were finalized. Otherwise I can't understand how a chip they supposedly couldn't make run faster is on the top list for overclockers, with average overclocks above 16% on stock, and with little increase in heat and power consumption, for further evidence...

** Nvidia sold 2 cards for every card Ati sold back then. They had to aim for the same market share AND the same number of sales. With both fighting for GDDR5 supply, that would have been impossible. A low supply of RAM would have hurt them more than what actually happened: they lost market share, but their sales didn't suffer much. Ati, on the other hand, desperately needed to gain market share no matter what; they needed a moral win, and what it might cost them didn't really matter. They took the risk, and with a little help from Nvidia, they succeeded.
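The quad-core trade-off both posters are circling can be put into numbers with the standard exponential defect-density yield model. A toy sketch with invented defect density and die area (real values vary by process and are not public):

```python
import math

# Poisson yield model: fraction of good dies = exp(-defect_density * area).
D = 0.5       # assumed defects per cm^2
A_dual = 1.0  # assumed dual-core die area, cm^2

yield_dual   = math.exp(-D * A_dual)      # one dual-core die is good: ~61%
yield_native = math.exp(-D * 2 * A_dual)  # native quad (2x area):     ~37%

# Two-die (MCM) quads pair up known-good duals, so the effective yield
# per quad package tracks the dual-core yield - FudFighter's point.
print(f"dual die: {yield_dual:.1%}, native quad die: {yield_native:.1%}")
# DarkMatter's counter: the native design's performance advantage (and
# salvage parts like tri-cores) can pay back the yield gap.
```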
 
Joined
Oct 30, 2008
Messages
8 (0.00/day)
Everyone here is getting mixed up

I am late; maybe someone will read this.

To btarunr: FudFighter is speaking of production fail rates, for units which never make it to the shelf. The only impact on the consumer is the price tag, and that is because of everything he has explained, which I will not go into. He is not exactly saying that your purchased x4 CPU or video card is more likely to fail because it is more complex.

To FudFighter: btarunr missed your point and was talking entirely about end-user fail rates. Though you were mostly clear about what you were saying, your posts had little to do with what btarunr was posting; you kept responding argumentatively, and you two weren't even on the same subject.

Both of you are mostly correct; it's just a bit of a mix-up in communication. FudFighter is correct, and his point is valid here, when speaking of production fail rates causing higher production costs and lower profits for Nvidia or any company. If they aren't making money then they aren't gonna keep selling cards, so maximizing quality with a higher percentage of acceptable yield is an absolute must for any GPU (or other) manufacturer to stay competitive. This article was about the AMD and Nvidia rivalry, so every bit of FudFighter's info applies here. Now, btarunr is correct in his point, as he was trying to get across that just because a component is more complex doesn't mean it is gonna have a higher fail rate. And he is right; it absolutely doesn't, from an end-user standpoint. If QA does its job correctly, there should be no problems with the end results.

I have no enemies, and I don't mean to make any. You are both right in your own way.
 

DarkMatter

New Member
Joined
Oct 5, 2007
Messages
1,714 (0.28/day)
Processor Intel C2Q Q6600 @ Stock (for now)
Motherboard Asus P5Q-E
Cooling Proc: Scythe Mine, Graphics: Zalman VF900 Cu
Memory 4 GB (2x2GB) DDR2 Corsair Dominator 1066Mhz 5-5-5-15
Video Card(s) GigaByte 8800GT Stock Clocks: 700Mhz Core, 1700 Shader, 1940 Memory
Storage 74 GB WD Raptor 10000rpm, 2x250 GB Seagate Raid 0
Display(s) HP p1130, 21" Trinitron
Case Antec p180
Audio Device(s) Creative X-Fi PLatinum
Power Supply 700W FSP Group 85% Efficiency
Software Windows XP
I am late; maybe someone will read this.

To btarunr: FudFighter is speaking of production fail rates, for units which never make it to the shelf. The only impact on the consumer is the price tag, and that is because of everything he has explained, which I will not go into. He is not exactly saying that your purchased x4 CPU or video card is more likely to fail because it is more complex.

To FudFighter: btarunr missed your point and was talking entirely about end-user fail rates. Though you were mostly clear about what you were saying, your posts had little to do with what btarunr was posting; you kept responding argumentatively, and you two weren't even on the same subject.

Both of you are mostly correct; it's just a bit of a mix-up in communication. FudFighter is correct, and his point is valid here, when speaking of production fail rates causing higher production costs and lower profits for Nvidia or any company. If they aren't making money then they aren't gonna keep selling cards, so maximizing quality with a higher percentage of acceptable yield is an absolute must for any GPU (or other) manufacturer to stay competitive. This article was about the AMD and Nvidia rivalry, so every bit of FudFighter's info applies here. Now, btarunr is correct in his point, as he was trying to get across that just because a component is more complex doesn't mean it is gonna have a higher fail rate. And he is right; it absolutely doesn't, from an end-user standpoint. If QA does its job correctly, there should be no problems with the end results.

I have no enemies, and I don't mean to make any. You are both right in your own way.

Yes and no. At manufacturing time, a more complex product does not necessarily have a higher failure rate; not to the point of affecting profitability, that's for sure. When you manufacture anything, you try to do it the absolutely cheapest way, as long as it doesn't affect quality. That means using the cheapest materials that meet your requirements, taking just enough care that the product is well made and no more, etc. That's why the process of creating the simple thing is cheaper and the product itself ends up being cheaper.

When you set out to create a more complex product, you use better, more expensive materials; you use better, slower manufacturing techniques; better and more workers look after the end product; etc. All of those things make the resulting product more expensive, BUT the failure rate is maintained at a level close to that of a simple product. How much you pay (or how much it makes sense to spend) to maintain that low level of failures depends on many things and is up to the company to choose. In the end it's a trade-off between paying more to have fewer failures, or "spending" that money in QA and eating the materials/labor cost of the failed products that never reach the end of the pipeline.
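That trade-off is the classic cost-per-good-unit calculation. A minimal sketch with invented figures (`cost_per_good_unit` and all numbers are illustrative assumptions):

```python
# Scrapped units are sunk cost, so every good unit carries its share.
def cost_per_good_unit(unit_cost: float, failure_rate: float) -> float:
    return unit_cost / (1.0 - failure_rate)

# Invented numbers: spend more per board on materials/handling/QA
# to push the failure rate down, or build cheap and eat the scrap.
print(f"cheap build:   ${cost_per_good_unit(10.0, 0.10):.2f} per good board")
print(f"careful build: ${cost_per_good_unit(11.5, 0.01):.2f} per good board")
# $11.11 vs $11.62 here; shift the inputs and the answer flips, which is
# exactly why the right choice "depends on many things".
```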
 
Joined
Jul 5, 2007
Messages
180 (0.03/day)
I actually doubt we'll see RV790 this year. The latest news about GT200b talks about yet another delay, to February '09 (The Inquirer). So RV790 cards will probably come out then (or whenever the NV cards do); they don't want to compete against their own cards now :).



Yes, most likely.

These cards have already taped out. The best evidence is that the last remaining working RV770 chips were spun off into HD 4830 incarnations last month. So that rumor is more than a rumor; the only thing determining a sooner introduction is buyer momentum. I'd say we'll see it for Christmas, or at least shortages of the same :laugh:

I'm just curious if they will bring them out this year. So far this is looking like the worst retail shopping season in 10 years due to the economy. I honestly would be shocked to see a new offering this year.

Well, it's not all in economic momentum. This is YAR (yet another refresh) of RV770, and they can call it whatever they like, just as NV first makes GT200b and then renames it GT206 :D It's just a marketing move; they need something to stay competitive, and the 55 nm technology allows them improvements.... but it's more likely we'll see 800 SPs @ 40 nm; it's more cost-effective, and 150 MHz+ is guaranteed :toast:

I doubt ATI will have adjustable shader clocks any time soon. This would be a HUGE design change.



I expect RV790 to be drop-in compatible with RV770. That means you could unsolder the GPU from an HD 4850/4870, solder on an RV790, and the card would work without any other change on the hardware or software side.

Yeah, yeah, they all announce drop-in compatibility, but AFAIR the only true drop-in was KT266A -> KT333, where boards didn't need a redesign; even the old-school nForce2 Ultra needed some tiny board changes when the new Ultra 400, on the same 0.18 µ process, came out. All in all, it's not all in the proclamations; there will only be pinout compatibility, I guess :eek:
 

DaMulta

My stars went supernova
Joined
Aug 3, 2006
Messages
16,168 (2.50/day)
Location
Oklahoma T-Town
System Name Work in progress
Processor AMD 955---4Ghz
Motherboard MSi GD70
Cooling OcZ Phase/water
Memory Crucial2GB kit (1GBx2), Ballistix 240-pin DIMM, DDR3 PC3-16000
Video Card(s) CrossfireX 2 X HD 4890 1GB OCed to 1000Mhz
Storage SSD 64GB
Display(s) Envision 24'' 1920x1200
Case Using the desk ATM
Audio Device(s) Sucky onboard for now :(
Power Supply 1000W TruePower Quattro
I would like to point something out about binning chips, as discussed above.....

When they do this, the center (of the wafer) is used for commercial markets such as Xeons, Opterons, FireGL, and so on. The next step down is the normal market: you and me.

As btarunr said:
"mathematically higher probability of failure"

So the farther out you get, the worse it gets, but that is also why some cheap chips will run at the same speed as the more expensive ones: they beat the high probability of failure when they were made.

Now you might think this is crazy..... Let's just say an 8600 GT is the fastest-selling card on the market, and they run out of that bin for that card. What do they do, stop making them? Nope, they pull from the bin higher up and continue production, because money is money. So if it was a really popular card, you could have a TOP-bin chip in a very low-end product, because they are selling them really fast and making their money back faster (toy allocation sketch below).

That really does happen with CPUs, and with video cards, in the binning of what they sell.
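DaMulta's "pull from the higher bin" scenario, as a toy allocation loop; the bins, quantities, and demand figures are all invented for illustration:

```python
bin_order = ["top", "mid", "low"]                 # best silicon first
supply = {"top": 100, "mid": 300, "low": 600}     # chips graded per bin
demand = {"top": 80, "mid": 250, "low": 900}      # orders per SKU

pool = 0  # chips of the current grade or better, still unsold
for sku in bin_order:
    pool += supply[sku]                # this bin joins the usable pool
    sold = min(demand[sku], pool)      # a SKU can use its bin or better
    pool -= sold
    print(f"{sku}: sold {sold} of {demand[sku]} ordered")
# With these numbers the popular low-end SKU absorbs 70 leftover
# higher-bin chips - a "TOP bin chip in a very low product".
```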
 