
NVIDIA GeForce GF100 Architecture

I heard this only works in SLI, not on a regular card

Yeah, that's true; every card will only have two outputs, that's why. But 3D Vision on three high-resolution monitors? Believe me, you want more than one card. :)
 
Can it do non-3D mode with 3 monitors, like Eyefinity? They never stated whether it was possible, or maybe I overlooked it
 
Can it do non-3D mode with 3 monitors, like Eyefinity? They never stated whether it was possible, or maybe I overlooked it

No, you don't need 3D to be activated (from Anandtech's article):

This triple-display technology will have two names. When it’s used on its own, NVIDIA is calling it NVIDIA Surround. When it’s used in conjunction with 3D Vision, it’s called 3D Vision Surround. Obviously NVIDIA would like you to use it with 3D Vision to get the full effect (and to require a more powerful GPU) but 3D Vision is by no means required to use it. It is however the key differentiator from AMD, at least until AMD’s own 3D efforts get off the ground.

They also mention something interesting there. It's going to be a driver feature, meaning that GT200 cards will do it too, not just the GF100-based ones. Maybe this means you can use a cheaper or older card just for the outputs.
 
The GigaThread engine only runs at 50 kHz according to the white paper (20 microsecond switch time), or barely more than a 48 kHz audio sample rate.



Here is something about the bandwidth issues some have complained about; this should apply to both NVIDIA and ATI cards using tessellation and displacement mapping.

"There are a number of benefits to using tessellation with displacement mapping. The representation is
compact, scalable and leads to efficient storage and computation. The compactness of the description
means that the memory footprint is small and little bandwidth is consumed pulling the constituent
vertices on to the GPU."

"This ability to control geometric level of detail (LOD) is very powerful. Because it is on-demand and the
data is all kept on-chip, precious
memory bandwidth is preserved."
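
To put rough numbers on that claim (mine, not the whitepaper's; a back-of-the-envelope sketch assuming a 32-byte vertex, a 1M-vertex final mesh, a 10k-vertex control cage and an 8-bit 512x512 displacement map):

Code:
# Sketch: data pulled over the bus for a pre-tessellated dense mesh
# versus a coarse patch mesh + displacement map expanded on-chip.
# All figures below are assumptions for illustration.
VERTEX_BYTES = 32                        # assumed: position + normal + UV

dense_vertices = 1_000_000               # dense mesh streamed from memory
dense_bytes = dense_vertices * VERTEX_BYTES

coarse_vertices = 10_000                 # control cage fed to the tessellator
disp_map_bytes = 512 * 512 * 1           # 8-bit height values
tess_bytes = coarse_vertices * VERTEX_BYTES + disp_map_bytes

print(f"dense mesh:      {dense_bytes / 1e6:.1f} MB over the bus")   # 32.0 MB
print(f"tess + displace: {tess_bytes / 1e6:.2f} MB over the bus")    # 0.58 MB
print(f"ratio:           {dense_bytes / tess_bytes:.0f}x less data pulled")

However you slice the assumptions, the million-odd vertices of the final surface get generated on-chip, which is exactly the bandwidth saving the quote is describing.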


After reading the white paper, if this doesn't perform 50% or better than the 5870, it almost confirms an issue with too much parallel hardware that needs too much babysitting. A lot of clusters and not enough wiring to keep track of optimized usage. Really, for how fast the shaders are performing the work, trying to force parallel use of a whole cluster for one single op, causing the GigaThread engine to switch work for just one op, might prove to be hugely inefficient.

I love their use of math in the AA implementation to try and confuse the reader: 1.6X more, 2.3X more, and only 9% slower. Way to mix it up there, NV, changing baselines in the same sentence. Try 24% slower at the same baseline.
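
To see what the baseline shuffling does, here's a toy calculation: the 1.6x / 2.3x / 9% figures are the quoted ones, the normalization of GT200 at 4xAA to 100 is mine, and the GT200 8xAA score is solved backwards from them:

Code:
gt200_4x = 100.0                # my normalization, not NVIDIA's
gf100_4x = 1.6 * gt200_4x       # "1.6X more" at 4xAA (quoted)
gf100_8x = 0.91 * gf100_4x      # "only 9% slower" than its own 4xAA (quoted)
gt200_8x = gf100_8x / 2.3       # "2.3X more" at 8xAA (quoted), solved backwards

print(f"GT200: 4xAA {gt200_4x:.0f}, 8xAA {gt200_8x:.1f} "
      f"({100 * (1 - gt200_8x / gt200_4x):.0f}% drop)")
print(f"GF100: 4xAA {gf100_4x:.0f}, 8xAA {gf100_8x:.1f} "
      f"({100 * (1 - gf100_8x / gf100_4x):.0f}% drop)")

Each percentage is true only against its own baseline; quote three baselines in one sentence and the result sounds better than it is.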
 
The GigaThread engine only runs at 50 kHz according to the white paper (20 microsecond switch time), or barely more than a 48 kHz audio sample rate.

What the... :laugh:

Context switching means moving from one executing kernel to the next, e.g. while doing PhysX, moving from doing fluids to doing collisions. On other GPUs it's said to be ten times higher...
 
What the... :laugh:

Context switching means moving from one executing kernel to the next, e.g. while doing PhysX, moving from doing fluids to doing collisions. On other GPUs it's said to be ten times higher...

For nothing.


A statement, not a question. :shadedshu Actually I was a bit surprised it wasn't faster, but after reading the rest of the whitepaper, NV is in the same boat as ATI: there's a need for development on the threading standards, and the ability to create a standard performance expectation for a game thread.

No more poorly optimized games if they can help push this.


And Nexus looks good. That is, if MS doesn't F up the implementation of such a thing with their kernel lockdown.
 
AMD was not in the situation that Nvidia is in, and never has been, regarding fame or company image. AMD has always had a good image, Nvidia doesn't, and if they outright lie about the performance they will not sell cards; they would sell fewer than if they told the truth. Plus, GF100 not being much faster than Cypress is not going to happen. People just have to accept that, performance-wise, it will be much faster.
1) NVIDIA's "image" depends on who you ask.
2) Corporation "stretch the truth" all the time and as long as it doesn't screw an existing customer, there generally is no consequence.
3) The whole concept of marketing is to make your product look better than the competitions by excluding (or minimizing) not-so-great details.
4) We'll see how GF100 stacks up to Cypress after it is out, not before.



That still remains to be seen. If the performance claims are anywhere near the truth, then Fermi will just spank Cypress in that department too. It has 33% more transistors; if it's 60% faster, then it's much, much better.
Extending point #3 above with an example: Intel touts that Hyper-Threading is 30-40% faster than no Hyper-Threading, not because every application sees that kind of boost, but because only one does, which gives them legal clearance to make that claim. They neglect to mention that in several other tests, Hyper-Threading may see a 10%+ decrease in performance.

Take NVIDIA's claims with a salt flats portion of salt (but don't really do that because you'd be poisoned by it :p)


Nonsense. ECC is not going to be used in the GPUs and doesn't affect performance at all when not in use. And what the hell are you talking about, Tesla coming first? GF100 comes in March and Tesla will not come until late Q2.
No ECC means it's not an option for supercomputing. F@H and the like can take a few bad results, but simulating the endurance of a nuclear stockpile cannot. A hardware error is simply unacceptable in many applications that are currently being run on thousands of CPUs.
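
To see why scale makes this unavoidable, here's a toy expected-error calculation (the soft-error rate and deployment size are made-up illustrative figures, not measured ones):

Code:
# Toy model: a tiny per-GB error rate becomes a near-certainty at cluster scale.
errors_per_gb_hour = 1e-5       # assumed soft-error rate, illustrative only
boards = 10_000                 # a modest HPC deployment
gb_per_board = 3                # e.g. a 3 GB Tesla-class card
hours = 24 * 30                 # one month of runtime

expected = errors_per_gb_hour * boards * gb_per_board * hours
print(f"expected memory errors per month: {expected:,.0f}")   # ~216

# Without ECC each of those silently corrupts a result; with ECC,
# single-bit flips are corrected (and logged) instead.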

We'll see about launch dates when it happens.



This shows you either didn't read Wizzard's article or you didn't understand it at all. And IMO it's definitely the former, since you are saying things that are based on info from like July or September. This new article is showing that Fermi is indeed a GPU in every sense of the word, not just the GPGPU card it was claimed to be.
Read the article in CPU mag in the January issue. Either there are two sources of conflicting information, or one is right and the other is wrong.
 
- Performance: yeah, we'll see.

- No ECC = no HPC; that's why Teslas will have ECC and why GeForces will not.

Read the article in CPU mag in the January issue. Either there are two sources of conflicting information, or one is right and the other is wrong.

I don't know which article you are talking about. In any case, Wizzard is right and the other one is wrong, if they indeed have conflicting info.
 
CPU Magazine: January 2010, Volume 10, Issue 01, pages 44-47, by Kyle Schurman

White Paper: Nvidia's Fermi Architecture
First Tesla, Then Graphics

Ironic it says it right in the title. :p


Tesla needs ECC and GeForce does not, correct. That puts them on completely separate fabrication timetables, because they aren't just rebadged like previous Tesla GPUs. Fermi is designed, right out of the starting gate, to be an HPC product; that's why, architecturally, making that HPC product into a consumer graphics card with DirectX 11 support is going to pose the greater challenge (not to mention having to compete with Cypress). This is why I agree with Schurman's conclusion that Tesla is likely to hit the market before GeForce. Where GeForce is at in terms of performance remains to be seen, but it will undoubtedly be strong at HPC work, because that is clearly where the design focus is.
 
The GigaThread engine only runs at 50 kHz according to the white paper (20 microsecond switch time), or barely more than a 48 kHz audio sample rate.

It runs at a much higher clock; its time-slicing interval is 20 µs.

Your Windows PC runs time slices of around 10-20 ms, but that doesn't mean it's running at 50 Hz.
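
Quick sanity check on the units; the 700 MHz core clock below is a ballpark assumption for illustration, not a spec:

Code:
# A 20 us context-switch interval is a scheduling quantum, not a clock rate.
switch_interval_s = 20e-6           # 20 microseconds per kernel switch
core_clock_hz = 700e6               # assumed ballpark clock, not a real spec

print(f"switches per second:    {1 / switch_interval_s:,.0f}")               # 50,000
print(f"clock cycles per slice: {core_clock_hz * switch_interval_s:,.0f}")   # 14,000

# Same reasoning for the PC analogy: a 20 ms OS time slice on a 3 GHz CPU
# still leaves 60 million cycles per slice; the CPU isn't "running at 50 Hz".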
 
Looks expensive. The die size is huge, but understandable with the architecture and features under the hood. Consumers will be paying a lot for features that have yet to be fully optimized/utilized.

But seeing better tessellation performance in Unigine, it looks like it is handling it "naturally" and without any code alteration of any kind. This would give it an edge indeed vs. the unified tessellator. They have indeed solved the shader conundrum; maybe this geometry unit is indeed the future...

Either it will be a trendsetting standard, or it will go down the drain once again, just like MMX or PhysX.
 
CPU Magazine: January 2010, Volume 10, Issue 01, pages 44-47, by Kyle Schurman

White Paper: Nvidia's Fermi Architecture
First Tesla, Then Graphics

Ironic it says it right in the title. :p


Tesla needs ECC and GeForce does not, correct. That puts them on completely separate fabrication timetables, because they aren't just rebadged like previous Tesla GPUs. Fermi is designed, right out of the starting gate, to be an HPC product; that's why, architecturally, making that HPC product into a consumer graphics card with DirectX 11 support is going to pose the greater challenge (not to mention having to compete with Cypress). This is why I agree with Schurman's conclusion that Tesla is likely to hit the market before GeForce. Where GeForce is at in terms of performance remains to be seen, but it will undoubtedly be strong at HPC work, because that is clearly where the design focus is.

So he says that, and he is wrong. End of story, man.
 
Tesla needs ECC and GeForce does not: true
That puts them on completely separate fabrication timetables: false
Because they aren't just rebadged like previous Tesla GPUs: false
Fermi is designed, right out of the starting gate, to be an HPC product: yes, it was designed with HPC AND GeForce in mind
Making that HPC product into a graphics card is the greater challenge: false
Tesla is likely to hit the market before GeForce: false
 
I didn't accuse you of speculating; I pointed out that speculating based on die size is always an error. The die probably represents less than 10% of the card's final price, and that means they can play a lot with the final price depending on what they want to do with it. I mentioned volumes etc. because there's this misconception that a chip that is 50% bigger will cost 50% more to produce and to sell, etc., which is crap. There are far more considerations, and that's why I mentioned one.

I've just been trying to explain why your question was not really relevant, but I'm going to answer it based on my opinion (with no access to any info, except that I've worked at a retailer in the past and I know how things work) so that you can see how vague the response to such a question is when we don't know anything and are just assuming or estimating how many dies per wafer they have. Answer: anything from $250 to $750. That's it. Happy?
You are arguing simply to argue. If the question isn't relevant, then it becomes a contradiction to answer it. I think that once we start seeing prices for the GF100 it will be clear where they stand. Which is the gist of my question before you started blathering your own misconceptions and speculations. :laugh:
 
You are arguing simply to argue. If the question isn't relevant, then it becomes a contradiction to answer it. I think that once we start seeing prices for the GF100 it will be clear where they stand. Which is the gist of my question before you started blathering your own misconceptions and speculations. :laugh:

Whatever, man, you asked the bad question and I answered it in the only way such a question can be answered. You didn't simply ask for the price of the cards, should someone know it (you'd say "what's the price?" in that case). You asked what the price would be based on your own given number of die candidates, and that can only be answered with speculation, because:

- Die size is not the only factor in the price of a die: how much is paid per wafer?

- Production price is not the only factor in the price of the card: what is the targeted revenue at the end of the year? How many GPUs are they willing to sell in order to meet that goal? How many Teslas and Quadros?

This is of great importance, because if they sell lots of GeForces, even at very low profits per card, revenue in volume will be high to begin with (financially covered, growth), and second and most importantly, it will create a bigger market for Teslas and Quadros, which is where Nvidia makes most of its money.
 

Yeah, you are arguing just to be arguing. And I'm not the only person on this forum whom you've started these petty conflicts with, either. But like I said, rumor has it that GF100 yields far fewer dies per wafer. Also, I've read no reports on any other video cards from them, so I was asking if someone knew the price of the cards. You obviously don't know, but wanted to start an argument with me because I asked. :p

Oh, and there are other conversations being posted regarding the use of 448 shaders vs. 512 shaders. For one, which will reviewers get? (I guess we will have to wait and see on this one.) And will the 448-shader and 512-shader parts get their own SKUs, or will they share the same SKU? I look forward to seeing how this develops.
 
ATI Vs NVidia...hmmm.....

ATI Vs NVidia...hmmm.....

Actually... it's hard to choose :banghead:... On one side you have power :rolleyes: (a better gaming experience with more FPS in the game {I mean NVIDIA here}, and you need deep pockets to buy this card too...), and on the other side you have better image quality (the ATI card: http://www.tomshardware.com/reviews/burnout-paradise-performance,2289-2.html :rolleyes:). Which one do you like? Green side or red side?
 
The dark side. :cool:
 
Tesla needs ECC and GeForce does not: true
That puts them on completely separate fabrication timetables: false
Because they aren't just rebadged like previous Tesla GPUs: false
Fermi is designed, right out of the starting gate, to be an HPC product: yes, it was designed with HPC AND GeForce in mind
Making that HPC product into a graphics card is the greater challenge: false
Tesla is likely to hit the market before GeForce: false
We shall see.
 
Dark side? :eek: Being neutral? :confused:
 
We shall see what? It's just a video card.
 
1) Whether Tesla comes out before GeForce or not.
2) Whether the Fermi-derived Tesla has its own GPU model number due to the memory controller change (ECC vs. not), or if they come out of the same bin.
 
1) Whether Tesla comes out before GeForce or not.
2) Whether the Fermi-derived Tesla has its own GPU model number due to the memory controller change (ECC vs. not), or if they come out of the same bin.

So basically you are saying you know more than Nvidia? :laugh:
Because all the info Wizzard is giving you has been officially announced at some point.

@EastCoastHandle

If NV can only get 104 chips per wafer vs. 160 for AMD, how much will these cards cost? I'm not even speculating on which chips will be ready to use, harvested, etc. The other issue I touched on was pre-release cards vs. production cards. Something Charlie brought up (to be taken with a grain of salt, but worth mentioning nonetheless) is: will we see pre-release cards that are fully unlocked at 512 shaders, with production cards only offering 448 shaders?


Exactly where here are you asking if someone knows the price? You are asking for speculation on the price based on your own (or Charlie's) supposition of die candidates. And again, how much the die costs won't determine how much the cards will cost, period. I'm not trying to argue with you, never was; I'm directly answering your question:

If NV can only get 104 chips per wafer vs. 160 for AMD, how much will these cards cost? THERE'S NO POSSIBLE WAY OF KNOWING. NOT WITH THAT INFO ALONE.

For example, HD58xx cards cost less than $200 to produce (more on the $100 side actually), the cards, not the chip, and both models cost the same, but the actual cards sell for $280 and $400. Based on the die, both should (could) be priced the same and in the $200-$250 range, but you don't see that, because there are far more things to consider than die size, mostly the company's strategy.
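
Just to illustrate how little the dies-per-wafer figure pins down on its own, here's a rough sketch; every number below except the 104 quoted in this thread is an assumption for illustration:

Code:
# Rough sketch of why dies/wafer alone can't give you a shelf price.
wafer_cost = 5000.0          # assumed price per 300 mm 40 nm wafer
dies_per_wafer = 104         # the GF100 figure quoted in this thread
yield_rate = 0.5             # assumed fraction of dies that work

cost_per_good_die = wafer_cost / (dies_per_wafer * yield_rate)
print(f"cost per good die: ${cost_per_good_die:.0f}")        # ~$96

board_bom = 100.0            # assumed: memory, PCB, VRM, cooler, assembly
print(f"bare production cost: ${cost_per_good_die + board_bom:.0f}")
# Margins for NVIDIA, the AIB partner and the retailer, plus company
# strategy, move the shelf price far more than the die cost does.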
 
I'm saying there is conflicting information from different sources; wait and see what happens.
 