
NVIDIA GeForce GF100 Architecture

Nice article, seems interesting. For the hot part, I'll believe that when I see W1zzard benching one. My prediction is that GTX 280 is still hotter/consumes more wattage :p

I'll likely get the 256-bit variant when it comes out...

Quoted just that part for a different reason :) The GTX 280 killer is right there with everything cut in half; GTS 340 as a name, maybe? (a new 8800 GT)

Who knows, in a year I might have a GF100 card too, if they come up with 30nm versions or something similar that will make these ones "obsolete".
 
Instead of just challenging me because you don't know about this, why don't you do some research yourself?

In my previous post and various others (e.g. the monitor aspect ratio discussion) I gave a very nice and complete answer. It would be nice to be appreciated for teaching people instead of getting attacked all the time. :rolleyes:

Why are 3-core Phenoms faster than 2-core ones (at the same frequency)? Why do triple-channel LGA1366 CPUs have more memory bandwidth than dual-channel LGA1156 CPUs? When it comes to bandwidth, more is better, just like money, dude, just like women (more = better) ... just like everything, except in some special circumstances where bandwidth isn't involved.
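To put rough numbers behind the channel argument, here's a quick back-of-the-envelope sketch. The DDR3-1333 speed and 64-bit channel width are illustrative assumptions; actual supported speeds vary by CPU, board, and memory.

```python
# Rough peak-bandwidth comparison: dual-channel (LGA1156) vs triple-channel (LGA1366).
# Assumes DDR3-1333 (1333 MT/s) and a 64-bit channel; real numbers depend on the
# CPU, board, and memory actually used.

def peak_bandwidth_gbs(channels, transfers_per_sec=1333e6, channel_width_bits=64):
    """Theoretical peak bandwidth in GB/s: channels * bytes per transfer * transfer rate."""
    return channels * (channel_width_bits / 8) * transfers_per_sec / 1e9

print(f"Dual channel (LGA1156):   {peak_bandwidth_gbs(2):.1f} GB/s")  # ~21.3 GB/s
print(f"Triple channel (LGA1366): {peak_bandwidth_gbs(3):.1f} GB/s")  # ~32.0 GB/s
```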
 
Minimal gain on investment.


I like the setup of this card, but the die expense is going to be huge. What does the tessellator unit have to do with performance? Doesn't more tessellation mean more of a GPU performance hit? And why does a pair of 5770s kill this thing based on their own slide?


I agree that the next console GPU/API is going to set the de facto standard from MS for all gaming, and therefore game performance for the majority, and if this series has heat issues then I foresee ATI/AMD providing the pair of chips for the next Xbox.


Yes, with the global market shifting toward performance per dollar versus a few years ago, the biggest and most powerful part isn't going to be the best for the bottom line. However, I can't wait to see real-world benchmarks; I bet this thing is going to tear Crysis, GTA4, and some other new games a new one. Those are games I'm willing to pay for raw performance in.
 
@Steevo, GTA4 is more about CPU performance, not just graphics performance. That game is just a sloppy port from the console to the PC.

It's just a waste of a LOT of money to develop a crazy-ass chip that might beat ATI's flagship at the moment, but will probably only be available at the highest possible price for the first 6 to 8 months because of the huge die and power usage.
 
Wow, I am flabbergasted, as I expected to see a whole bunch of Nvidia fans on here proclaiming this card king (it looks pretty good on paper, I'll give ya that). Hopefully when this comes out, the 5870 will drop a good bit and I can grab me one. Thanks for the info BTA.
 
@Steevo, GTA4 is more about CPU performance, not just graphics performance. That game is just a sloppy port from the console to the PC.

It's just a waste of a LOT of money to develop a crazy-ass chip that might beat ATI's flagship at the moment, but will probably only be available at the highest possible price for the first 6 to 8 months because of the huge die and power usage.

I can assure you that GTA4 is also murdering cards; my quad is only being used at about 60% at 1920x1200, but all of my GPU is being used. After many, many, many trials with bus speed, CPU speed, memory speed, timings, system memory load, different OSes and drivers, I can positively determine that GTA4 requires a lot of video memory and a lot of GPU horsepower. That's the reason my measly 4850 was able to play it decently at 34 draw distance and all high settings at 1680x1050, but the extra pixels of 1920x1200 cause it to stutter some. I just need more GPU power.
 
I agree that the next console GPU/API is going to set the de facto standard from MS for all gaming, and therefore game performance for the majority, and if this series has heat issues then I foresee ATI/AMD providing the pair of chips for the next Xbox.

I think that may have happened already, look here:

http://techreport.com/discussions.x/17755

This would make sense since I'm sure Microsoft would rather avoid the same transition issues it had when it went from the NVIDIA GPU in the first Xbox to the 360. And with the past two Nintendo consoles also going with AMD GPUs, I highly doubt they would risk the backwards compatibility they've had with their systems for a while now.

I think it would be more fair to speculate whether NVIDIA will be able to get their GF100 GPU down to a point where it could go into Sony's next console. If they are unable to deliver on that and AMD somehow manages to pull off a win with Sony, then their gamble will pretty much have been for nothing. Kind of sad really, since there was a time when PC gaming was the lead development platform. -_-
 
Some of the comments on that page are just stupid. The GPU in the 360 is not faster, it is just dedicated hardware. Just like a 1/4-mile funny car isn't faster on a curved track than a Yugo. Just different.

But yes, it was an X1900 GPU with some mods.


I'm sure the deal is already done on the next-gen Xbox. I have seen the compilers available to some from MS, but I don't carry the clout or friends to get a look inside and see what possible hardware they are testing programs for.
 
Some of the comments on that page are just stupid. The GPU in the 360 is not faster, it is just dedicated hardware. Just like a 1/4-mile funny car isn't faster on a curved track than a Yugo. Just different.

But yes, it was an X1900 GPU with some mods.

I'm sure the deal is already done on the next-gen Xbox. I have seen the compilers available to some from MS, but I don't carry the clout or friends to get a look inside and see what possible hardware they are testing programs for.

*chuckle* I know, but it was the best reference article I could find that discussed it. Believe me, I go back and forth with people a LOT every day over why a desktop X1950 or HD 2600 is not able to run some games the way they look on the 360. When their eyes start to glaze over I just sigh and give up. :rolleyes:

While I'm glad that console gaming has gotten as far as it has, I really hate that they are the ones calling the shots now. What's frustrating is that Microsoft could really help turn Windows gaming around, but when they pull half-assed crap like Games for Windows Live it makes me wonder if anyone over there really knows what PC gaming is anymore. :banghead:

As more and more time goes on, all I keep seeing is the short-attention-span generation, better known as the Ritalin generation, dictating our gaming habits like they did with Modern Warfare 2. Kind of terrifying really to think that the generation some call "consoletards" (a name I dislike since it lumps in reasonable console gamers) will be running the government when a lot of us will be near retirement age. *shudder*
 
The card is going to cost 550-700€ (8800 GTX days), and if it really is 15~25% faster than the HD 5870 then it's going to be a price-drop massacre, because ATI isn't NVIDIA when it comes to marketing; they will cut prices on the HD 58xx lineup to a level that even an NV fanboy couldn't resist. The mainstream Fermi cards will come very late, it seems, and for the mainstream segment NVIDIA will use the GTX 2x0 series. Those can't stand against HD 5xxx and HD 48xx cards due to higher chip cost. So for all of us out there who can't spend 600€ on a Fermi card, NVIDIA will still do us the favor of forcing price drops on some HD 5xxx cards. YEAH, competition rocks! Even now, when NV has no DX11 parts or new lineup, the GTX 2x0 cards are priced higher than the HD 4890. NVIDIA's marketing sucks so far; it only works on NVIDIA-only buyers who get a boost from seeing "THE WAY IT'S MEANT TO BE PLAYED" ...
 
I'm a little reserved about this because the alleged results are not from production video cards. Things like clock frequencies, power consumption, etc. are all part of the equation.
 
I'm a little reserved about this because the alleged results are not from production video cards. Things like clock frequencies, power consumption, etc. are all part of the equation.

I feel the same way; I will wait for W1zz's review. Those factors you stated are big and still up in the air.
 
I feel the same way; I will wait for W1zz's review. Those factors you stated are big and still up in the air.

The only information so far about that is that the sample card at CES (according to Charlie) was consuming 280 watts :eek:. Even if true, we really can't go on this because it's not a production card.
 
Can you hear the
SILENCE
that has just becalmed the ATI party? Nobody moves. They look into their drinks.



Oh wait, paper launch, nothing in the channels. Low yields. We might be saved.
"PUMP UP THE VOLUME!"
Party continues, albeit with a little less verve.
 
The funny part about those 6-core processors is that they scale perfectly in multi-threaded benchmarks. Funny. Very funny.
Not in Cinebench: the i7 gets 4x, while Gulftown doesn't reach 6x.
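As an aside (my own illustration, not something the poster cited), Amdahl's law is one simple way to see how a 4-core chip can look near-linear while a 6-core part falls short of 6x: even a small serial fraction caps scaling harder as core count grows. A minimal sketch with an assumed 5% serial fraction:

```python
# Amdahl's law sketch: speedup = 1 / (serial_fraction + (1 - serial_fraction) / cores).
# The 5% serial fraction is an illustrative assumption, not a measured Cinebench figure.

def amdahl_speedup(cores, serial_fraction=0.05):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for n in (2, 4, 6):
    print(f"{n} cores -> {amdahl_speedup(n):.2f}x")
# 2 cores -> 1.90x, 4 cores -> 3.48x, 6 cores -> 4.80x
```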

Back to the card in question: we'll have to wait till they are in our labs/benches/PCs before believing the hype. That said, a 50% improvement over the 5870 should be expected from nVidia's next GPU 6-9 months later, at which point ATi drops a 5890 with a 1.2 GHz GPU and 2 GB of 6 GHz GDDR5 and draws level.

Having the performance, features, and value crowns still isn't enough to sway some fanboys and journos from the 'nVidia is best' attitude that has prevailed since the Riva and GeForce 1 days. NV's marketeering team has come in for some stick with all the renames, locking out PhysX, and gimping TWIMTBP games, so I think overall there is less sympathy for nVidia.
 
Can you hear the
SILENCE
that has just becalmed the ATI party? Nobody moves. They look into their drinks.



Oh wait, paper launch, nothing in the channels. Low yields. We might be saved.
"PUMP UP THE VOLUME!"
Party continues, albeit with a little less verve.

This isn't a launch. Just a lift of the NDA, Mr. Big Font.
 
CPU Mag said that the upper-end Teslas will have 6 GiB of ECC RAM (1 GiB per controller).


I'm still disgusted with NVIDIA because they are putting more emphasis on Tesla than on GeForce. Tesla is not useful to 99.9% of the population. That's a very bad decision on their part. We also can't forget NVIDIA's answer to everything: a bigger hammer, not innovation.
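As a rough sketch of what "1 GiB per controller" implies, given GF100's six 64-bit GDDR5 memory controllers: the ECC overhead figure below is my assumption (ECC on GDDR5 has to be carved out of the same memory rather than stored on extra chips), not a confirmed spec.

```python
# GF100 memory back-end sketch: six 64-bit GDDR5 controllers form the 384-bit bus,
# and the quoted 1 GiB per controller gives the 6 GiB total.
# The 12.5% ECC reservation is an assumption, not a confirmed figure.

controllers = 6
bits_per_controller = 64
gib_per_controller = 1
ecc_overhead = 0.125  # assumed fraction reserved when ECC is enabled

bus_width_bits = controllers * bits_per_controller    # 384-bit bus
total_gib = controllers * gib_per_controller          # 6 GiB physical
usable_with_ecc_gib = total_gib * (1 - ecc_overhead)  # ~5.25 GiB usable

print(f"{bus_width_bits}-bit bus, {total_gib} GiB physical, ~{usable_with_ecc_gib:.2f} GiB with ECC on")
```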
 
I wonder what innovation means; I must be losing something in the translation. :rolleyes:

The biggest innovation in GPU architectures since unified shaders, and there's no innovation? What is innovation then? There must be something to compare it with...
 
I wonder what innovation means; I must be losing something in the translation. :rolleyes:

The biggest innovation in GPU architectures since unified shaders, and there's no innovation? What is innovation then? There must be something to compare it with...

Innovation is marketing speak for something that is not identical to something else.:laugh:
 
This isn't a launch. Just a lift of the NDA.
Semantics!
Innovation is marketing speak for something that is not identical to something else.:laugh:
Too true! :wtf:


More pictures and more explanations here: http://www.nvidia.com/object/IO_86775.html


Let's hope nVidia's 3D Vision (Eyefinity and 3D combined) will work on 3x DVI and not the funny ATI mix.
 
and that's how people get banned at [H]
If you post things that are factually correct, I'm not going to attack you. Claiming that a 384-bit wide memory interface is an imperfect design sounds like something I would read on certain rumor sites.

So now you're telling me that I'm saying stuff that's worthy of a ban? You gotta be kidding me! :eek: :mad: I'm not being confrontational. People, including you now, are attacking me for explaining something and there's a world of difference. If you really want to ban me for this, then well, jeez...

Look, you have contacts in the industry. Instead of arguing about it here, why don't you just show an engineer who designs digital circuits my post and see if he agrees with me? You might be surprised and I would be interested to hear what they say. All it boils down to is a basic design principle, that for a base 2 digital computer, a power of 2 design is always the most efficient. Unfortunately, it cannot always be achieved in the real world because of cost and physical constraints (such as with Fermi) that's all. There's nothing confrontational about this statement.

In the end, if you don't believe me, it doesn't matter to me, as this is only a casual discussion, but please don't start threatening me with bans. I won't be cowed because you're the site admin. I will stand up for what I know is right and it doesn't give you a licence to behave unpleasantly to me, especially when I have been perfectly nice to you.

It is your assertion that 384-bit isn't nice, for the reasons you stated, so the onus lies on you to back it up with references; he doesn't need to do research on assertions of yours he has never come across. So go find us some, and stop making confrontational statements.

If you read my posts again, you'll see my tone was actually perfectly pleasant and polite. If you don't agree with my assertion about the design, then fine, it really doesn't matter. I don't have to go around proving anything to anyone and they don't have to prove anything to me.

I've already said at least twice how in the real world, one can't necessarily achieve the efficiency of a power of 2 design. In the end, it's just my opinion on it and it shouldn't matter to you either if you don't agree with it.
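To make the 256-bit vs. 384-bit back-and-forth above a bit more concrete, here is a small sketch of theoretical GDDR5 bandwidth at a few bus widths. The 4.0 Gbps data rate is an assumed example, and the arithmetic does not settle whether a non-power-of-two width is "less elegant"; that is exactly the opinion being debated.

```python
# Theoretical GDDR5 bandwidth at a few bus widths, plus a power-of-two check.
# The 4.0 Gbps effective data rate is an assumed example; the arithmetic only shows
# what a wider (non-power-of-two) bus buys in raw numbers.

def gddr5_bandwidth_gbs(bus_width_bits, data_rate_gbps=4.0):
    return bus_width_bits / 8 * data_rate_gbps

for width in (256, 320, 384):
    po2 = "power of two" if width & (width - 1) == 0 else "not a power of two"
    print(f"{width}-bit ({po2}): {gddr5_bandwidth_gbs(width):.0f} GB/s")
# 256-bit (power of two): 128 GB/s
# 320-bit (not a power of two): 160 GB/s
# 384-bit (not a power of two): 192 GB/s
```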
 
I wonder what innovation means; I must be losing something in the translation. :rolleyes:
More performance from fewer transistors (less heat, less power draw).
 
So now you're telling me that I'm saying stuff that's worthy of a ban? You gotta be kidding me! :eek: :mad: I'm not being confrontational. People, including you now, are attacking me for explaining something and there's a world of difference. If you really want to ban me for this, then well, jeez...

I'm sorry .. I didn't mean to imply it is worth a ban [here], I was just making a joke that most of the HardOCP forum visitors would understand :)

and see if he agrees with me?

NVIDIA's engineers say their implementation is awesome
 