
ATI Believes GeForce GTX 200 Will be NVIDIA's Last Monolithic GPU.

What's wrong with GTX 280 again? It looks like it's 30% faster than an 8800 GTX and that seems right in line with where it should be.
 
What's wrong with GTX 280 again? It looks like it's 30% faster than an 8800 GTX and that seems right in line with where it should be.

Nothing would be wrong with the GTX 280 if the 9800 GX2 didn't precede it. Being that it did, the GTX 280's price vs. performance doesn't seem that impressive against its $400 predecessor, which performs the same in many situations.

But as for the discussion at hand, the GTX 280 is like my 2900 XT in that it puts out a lot of heat, uses a lot of energy, is expensive to produce, and has to have a big cooler on it.

But as for the specs, I said it before: the GTX 280 is exactly what we all hoped it would be, spec-wise.
 
What's wrong with GTX 280 again? It looks like it's 30% faster than an 8800 GTX and that seems right in line with where it should be.

Nothing wrong with it. It just saps a lot of power for a graphics card, and costs more than the 9800 GX2, which performs pretty close to it.

It's powerful, that's for sure. But AMD are saying that NVIDIA are being suicidal by keeping everything in one core, and I have to agree with that logic. Two HD 4850s, according to TweakTown, spank a GTX 280, and those are the mid-range HD 4850, not the higher-end HD 4870. The HD 4850 is already faster than a 9800 GTX.

Now if you consider AMD putting the performance of two HD 4850s/HD 4870s into ONE card, what AMD is saying suddenly makes sense.
 
If this architecture were produced using a 45nm or 32nm process, a single chip would be a bit more efficient. But that's a lot of chip to shrink!
 
Do not rickroll people outside of General Nonsense. This is not 4chan, and TechPowerUp is not for spamming useless junk. This is becoming more and more of a problem; I am going to have to start handing out infractions for this in the future if it does not stop.
 
If this architecture were produced using a 45nm or 32nm process, a single chip would be a bit more efficient. But that's a lot of chip to shrink!

45nm itself was unthinkable just three years ago. Remember how technologists the world over celebrated the introduction of Prescott just because it breached the 100nm barrier? Unfortunately, the die-shrink didn't give it any edge over its 130nm cousin (Northwood), although more L2 cache could be accommodated, just as the shrink from Prescott to Cedar Mill (and Smithfield to Presler), 90nm to 65nm, didn't benefit the thermal/power properties of the chip; the miniaturisation just helped squeeze in more L2 cache. In the same way, I doubt this transition from 65nm to 55nm will help NVIDIA in any way. If you want a live example from GPUs, compare the Radeon HD 2600 XT to the HD 3650 (65nm to 55nm, nothing (much) changed).
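A rough first-order sketch of why a shrink alone doesn't guarantee lower power (the textbook switching-power relation, not a model of any specific chip):

```latex
% Dynamic (switching) power, to a first approximation:
P_{\text{dyn}} \approx \alpha \, C_{\text{eff}} \, V_{dd}^{2} \, f
% \alpha: activity factor, C_eff: effective switched capacitance,
% V_dd: supply voltage, f: clock frequency
```

A shrink lowers the capacitance per transistor, but if the transistor count (extra L2), voltage and clock all stay up, total power barely moves, which fits the Prescott/Cedar Mill observation above; leakage, which got notably worse at 90nm, isn't even captured in this term.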
 
Die shrinks will just allow Nvidia to cram more transistors into the same package size. Nvidia's battle plan seems to be something like this:
(Image: "Kill it With Fire" Aliens meme)
 
Nothing would be wrong with the GTX 280 if the 9800 GX2 didn't precede it. Being that it did, the GTX 280's price vs. performance doesn't seem that impressive against its $400 predecessor, which performs the same in many situations.

But as for the discussion at hand, the GTX 280 is like my 2900 XT in that it puts out a lot of heat, uses a lot of energy, is expensive to produce, and has to have a big cooler on it.

But as for the specs, I said it before: the GTX 280 is exactly what we all hoped it would be, spec-wise.

I would say it has even "better" specs than what we thought; at least this is true in my case. This is because it effectively has an additional PhysX processor slapped into the core. Those additional 30 FP64 units, with all the added registers and cache, don't help rendering at all, nor can they be used by graphics APIs, only by CUDA. That's why I say "better" in quotes: they have added a lot of silicon that is not useful at all NOW. It could be very useful in the future; that FP64 unit really is powerful and unique, as no other commercial chip has ever implemented a unit with such capabilities. So when CUDA programs start to actually be something more than a showcase, or games start to implement Ageia's PhysX, we could say these enhancements are something good. Until then we can only look at them as some kind of silicon waste.
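For what it's worth, here is a minimal sketch of the only way that silicon could be exercised at the time: a double-precision CUDA kernel (a hypothetical example, not code from NVIDIA). The graphics APIs of the day (DirectX 10, OpenGL 2.1) exposed no FP64 path, so during rendering those 30 units simply sit idle.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal FP64 sketch: each thread performs one double-precision
// multiply-add (y = a*x + y). On GT200 this is serviced by the
// dedicated FP64 unit in each SM; the FP32 shader path used for
// rendering never touches those units.
__global__ void fp64_axpy(double a, const double* x, double* y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    double *x, *y;
    cudaMalloc(&x, n * sizeof(double));
    cudaMalloc(&y, n * sizeof(double));
    // (filling x and y with real data is omitted for brevity)

    fp64_axpy<<<(n + 255) / 256, 256>>>(2.0, x, y, n);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    std::printf("done\n");
    return 0;
}
```

Back then this also meant compiling for compute capability 1.3 (GT200); older targets demoted doubles to single precision.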

45nm itself was unthinkable just three years ago. Remember how technologists the world over celebrated the introduction of Prescott just because it breached the 100nm barrier? Unfortunately, the die-shrink didn't give it any edge over its 130nm cousin (Northwood), although more L2 cache could be accommodated, just as the shrink from Prescott to Cedar Mill (and Smithfield to Presler), 90nm to 65nm, didn't benefit the thermal/power properties of the chip; the miniaturisation just helped squeeze in more L2 cache. In the same way, I doubt this transition from 65nm to 55nm will help NVIDIA in any way. If you want a live example from GPUs, compare the Radeon HD 2600 XT to the HD 3650 (65nm to 55nm, nothing (much) changed).

You seem to overlook that more cache means more power and heat, especially when caches are half the size of the chip. Even though caches do not consume nearly as much as other parts, it makes a difference, a big one.
 
Depending on how well CUDA is adopted for games in the next 6 months, NVIDIA could very well win round 10 in the GPU wars even with the price. If CUDA is worked into games to offload a lot of the calculations, then NVIDIA just won, and I'm betting money this is their gamble.
 
delusions of hope
 
Not really; look at how NVIDIA helps devs ensure compatibility with NVIDIA GPUs.

If physics and lighting were moved from the CPU to the GPU, that bottleneck is gone from the CPU, and the GPU can handle it at least 200x faster than the fastest quad core, even while running the game at the same time. This in turn allows for better, more realistic things to be done. Remember the Alan Wake demo at IDF with those great physics? Here's the thing: it stuttered. If CUDA were used instead, it would get a lot more FPS. The reason for not having heavier, more realistic physics is the lack of raw horsepower. If CUDA is used the way NVIDIA hopes it will be, games may not run any faster, but the level of realism can increase greatly, which would sway more than one consumer (a rough sketch of that kind of GPU offload follows below).

If it gets 100 FPS and uses large transparent textures for dust, that's great.

If it gets 100 FPS but draws each grain of dirt as its own pixel, that's even better.

Which would you get? Even with the price difference, I'd go for the real pixel dirt.
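To make the offload idea above concrete, here is a hedged sketch (an invented example, not code from any shipping game or from PhysX): a per-particle integration step is embarrassingly parallel, so "each grain of dirt as its own particle" maps naturally onto one GPU thread per grain instead of a serial CPU loop.

```cuda
#include <cuda_runtime.h>

// Hypothetical particle state; a real engine (or PhysX) tracks far more.
struct Particle {
    float3 pos;
    float3 vel;
};

// One thread per particle: apply gravity and integrate one timestep.
// The equivalent CPU version is a serial loop over every particle,
// which is exactly the work being proposed for offload.
__global__ void step_particles(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    p[i].vel.y -= 9.81f * dt;          // gravity
    p[i].pos.x += p[i].vel.x * dt;     // integrate position
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

// Host-side launch: cover every particle with 256-thread blocks.
void step_on_gpu(Particle* d_particles, int n, float dt)
{
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    step_particles<<<blocks, threads>>>(d_particles, n, dt);
}
```

Collision detection and graphics interop are the hard parts left out here; the point is only that the per-particle math parallelises trivially.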
 
You seem to overlook that more cache means more power and heat, especially when caches are half the size of the chip. Even though caches do not consume nearly as much as other parts, it makes a difference, a big one.

Cache size and its relation to heat is close to insignificant. The Windsor 5000+ (2x 512KB L2) differed very little from the Windsor 5200+ (2x 1MB L2); both had the same speeds and other parameters, and I've used both. But when Prescott is shrunk, despite the doubled cache there should be a significant fall in power consumption, like the Windsor (2x 512KB L2 variants) and Brisbane had.
 
Agreed, it's not the cache, it's the overall design of the processing unit. The reason Prescott had so many problems with heat is very simple: the extra 512K cache was tacked next to the old cache, causing a longer distance than before for the CPU to read the cache, and this causes friction which creates heat; the shorter the distance, the better. Intel was just lazy back then.
 
Agreed, it's not the cache, it's the overall design of the processing unit. The reason Prescott had so many problems with heat is very simple: the extra 512K cache was tacked next to the old cache, causing a longer distance than before for the CPU to read the cache, and this causes friction which creates heat; the shorter the distance, the better. Intel was just lazy back then.

You're being sarcastic, right? Even if you weren't...

 
Cache size and its relation to heat is close to insignificant. The Windsor 5000+ (2x 512KB L2) differed very little from the Windsor 5200+ (2x 1MB L2); both had the same speeds and other parameters, and I've used both. But when Prescott is shrunk, despite the doubled cache there should be a significant fall in power consumption, like the Windsor (2x 512KB L2 variants) and Brisbane had.

Huh! :eek: Now I'm impressed. You have the required tools to measure power consumption and heat at home?!

Because otherwise, just because temperatures are not higher doesn't mean the chip is not outputting more heat and consuming more. Heat has to do with energy transfer; in the case of a CPU, it's energy transfer between surfaces. More cache = more surface area = more energy transfer = lower temperatures at the same heat output.
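That chain of reasoning is basically the standard steady-state heat-transfer relation (a back-of-the-envelope model, not a simulation of either chip):

```latex
Q = h \, A \, \Delta T
\quad\Longrightarrow\quad
\Delta T = \frac{Q}{h \, A}
```

For a fixed heat output Q and transfer coefficient h, a larger effective area A gives a smaller temperature rise, so equal temperatures under the same cooler don't prove equal power draw.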

That was one reason; the other, a lot simpler, is this: wasn't the 5000+ a 5200+ with half the cache "disabled"? In quotes because most of the time they can't cut all the power to the disabled part.
 
Huh! :eek: Now I'm impressed. You have the required tools to measure power consumption and heat at home?!

Because otherwise, just because temperatures are not higher doesn't mean the chip is not outputting more heat and consuming more. Heat has to do with energy transfer; in the case of a CPU, it's energy transfer between surfaces. More cache = more surface area = more energy transfer = lower temperatures at the same heat output.

That was one reason; the other, a lot simpler, is this: wasn't the 5000+ a 5200+ with half the cache "disabled"? In quotes because most of the time they can't cut all the power to the disabled part.

No, it's charts that I follow, and I don't mean charts from AMD showing a fixed 89W or 65W across all models of a core. It's more than common sense: when a die-shrink from 90nm to 65nm sent AMD's rated wattage down from roughly 89W to 65W, Prescott and Cedar Mill didn't share a similar reduction. That's what I'm basing it on.
 
No, it's charts that I follow, and I don't mean charts from AMD showing a fixed 89W or 65W across all models of a core. It's more than common sense: when a die-shrink from 90nm to 65nm sent AMD's rated wattage down from roughly 89W to 65W, Prescott and Cedar Mill didn't share a similar reduction. That's what I'm basing it on.

Well, I can easily base my point on the fact that CPUs with L3 caches have a much higher TDP. Which of the two do you think is better?
 