
New NVIDIA GeForce GTX280 Three Times Faster than HD 3870 in Folding@Home

nvidia has so much bullshit out there... it's amazing...
 
i see what you're saying there, imperial, and i think you've struck something here

i'm wondering what kind of core the WU used in the graph is based on? is it even based on a WU? is this just processing power, or is it theoretical? what's the basis for the data for the other entries? what WUs are the other cards in the graph running?

makes me think this is some theoretical processing power BS

that's kinda what I was trying to get across -


there are differences in WUs, as they're intended for specific hardware. A GPU can't run the SMP client, and it would buckle under the workload even if it could; just like a single-core CPU can't efficiently handle SMP workloads, either.

Perhaps nVidia's GPUs will end up with their own specific folding client - which, I figure, they probably would, seeing as how different their GPU architecture is from ATI's - one that is more optimized for their GPUs?

Not saying that's a bad thing, having separate GPU clients - but perhaps nVidia's GPU can only work with simple molecules, which is where they get such an astronomical mol/day figure from . . . it makes sense, IMO.

I guess, really, the only way to see for sure is when their new GPUs are on the market and people start folding with them. It'd be interesting to keep an eye on the PPD earned compared to ATI's cards - if nVidia's are earning fewer PPD than a 3870, then we'd know it's working with less complex WUs.
 
yup, it all comes down to points in the end
 
nvidia has so much bullshit out there... it's amazing...

You would think, if they were touting how great their GPU was, they would pull out 'real' benchmarks :) and say "F you, Crysis - this many FPS"
 
My final $.02 on this topic before I brush it off entirely:


TBH, I think it's great nVidia have finally decided to work with F@H and get their new GPUs ready to help out with the project. Really, the F@H project has the potential to affect everyone sooner or later, in some shape or form.

I just personally feel this is lacking any sense of tact, though - I really don't think nVidia should be making their new GPUs' F@H capability a marketing point to help sell their new hardware; if they've twisted or misrepresented the facts, that's even more shameful, IMO. There's no point making such claims when the hardware isn't even here yet to prove what it's truly capable of; and we all know how the nVidia fanbois will run rampant with these images, claiming how 1337 the new hardware will be.

I can't recall ATI, nor any other hardware manufacturer, making radical claims when they joined up - hell, ATI was rather quiet about it.
 
I'm brushing this bull off now. WTH does a mol have to do with computing power? Theoretical BS is still BS. If someone has the time to give us some gd numbers we don't understand, why don't they give us some gaming results?
 
I'm brushing this bull off now. WTH does a mol have to do with computing power? Theoretical BS is still BS. If someone has the time to give us some gd numbers we don't understand, why don't they give us some gaming results?

thank you! you're absolutely right
 
some people can't stand the fact that nVidia makes good hardware... I believe it, for one: the GTX 280 has shaders that run at twice the speed of ATI's, and they're all full stream processing units, as opposed to one full stream processor with four simpler units attached. That, and they said they changed the architecture of the SPs to make them more efficient per clock...

ATi is great, but nVidia got it right. Some people are just haters.
 
Although this might be faster, I wanna wait till the card comes out, 'cause these tend to be a bit inaccurate.
 
Year after year, pre-launch speculation is still the same... They promise you a 250% better card, which ends up being 30% more powerful. :rolleyes:
 
well, you have to take into account that nVidia vs. ATI has been this way for a while now. These numbers mean nothing: nVidia makes high-powered GPUs which can usually crunch much more data than ATI's, while ATI sticks with a complicated but effective combination of a ton of shaders and a decent GPU. I'm pretty sure the 8800 GT would do better in folding than the 3870 as well.

so yeah, a card that's probably 30% faster
 
LOL at some posts. :laugh:

"mols" in that chart isn't the chemical unit for a quantity of molecules; they meant the number of molecules folded per unit time, where time is constant.


The job F@H does is take a protein molecule and simulate its folding in accordance with the raw data the F@H servers provide. After simulating a protein's folding, the parameters of the folded molecule (which differ from those of the original) are sent back to the server. This simulation requires computational power.

I don't think the nVidia graph is exaggerated, considering they built the F@H core using nVidia CUDA, and CUDA apps get the most computational power out of both the geometry and shader domains of a CUDA-capable GPU - in this case the fastest nVidia ever made, the GTX 280. There still isn't an IDE that lets you get the most out of ATI's Stream Computing, though ATI is working on one.
 
well, there's Avogadro's number (6.02x10^23) of atoms/molecules/particles (the most generic term) in a mole (which is also 22.4 L for an ideal gas at STP)

so F@H would be analyzing moles of molecules, making dr. pepper's joke true lol
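For scale, here's a quick back-of-the-envelope check showing why "mol/day" on that chart can't literally mean chemical moles of protein. The million-molecules-per-day throughput below is a made-up figure purely for illustration:

```python
AVOGADRO = 6.022e23  # particles per mole

# Hypothetical throughput: one million protein molecules folded per day
proteins_per_day = 1e6

# Convert to literal chemical moles per day
moles_per_day = proteins_per_day / AVOGADRO
print(f"{moles_per_day:.2e} mol/day")  # prints 1.66e-18 mol/day
```

Even at a million molecules a day, the literal mole count is vanishingly small - so the chart's "mols" axis presumably counts individual molecules (or some other internal unit), not moles in the chemistry sense.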

whats dr pepper's joke ?
 
whats dr pepper's joke ?

I taught HS chem for 3 yrs before moving on to greener pastures. It's not really a joke to me either, but that's just how silly those figures are. I don't care one bit about how many times a card can fold a protein molecule. The only statistics I care about are framerates. I have 10 yrs of college under my belt & this junk tells me nothing about the card.

Leave the proteins for shakes & steaks & give me some framerates :roll:

I think I need to lie down after that one :rolleyes:
 
I guess, really, the only way to see for sure is when their new GPUs are on the market and people start folding with them. It'd be interesting to keep an eye on the PPD earned compared to ATI's cards - if nVidia's are earning fewer PPD than a 3870, then we'd know it's working with less complex WUs.

PPD isn't a good gauge of unit complexity or the actual work being done. When you actually look at how the numbers are generated, you realize this.

How many points each unit is worth depends on how long it would take a P4 2.8GHz (Socket 478 Northwood) with SSE2 disabled to complete it. Then they plug the number of days it takes that processor to complete the work unit into the formula 110*(Number of Days), and that is your points per WU for CPUs. Now, in the case of the SMP client, the same method is used, but then a 50% bonus is given to each WU's score just because it is an SMP WU. Does that mean the work is any more complex? No. Does that mean the multi-core CPUs are doing more work per WU than a single-core CPU? No. But they still get more points per WU, and hence more PPD, than a single core doing the same work.

Now, in terms of GPU folding: the new GPU2 client that runs on the HD 2000 and HD 3000 series is scored the same way. They benchmark each WU on a 3850 and then multiply the number of days it takes the WU to complete by 1000. Why 1000? It is just a number they picked. If they pick a lower multiplier when doing the nVidia calculations, or begin the benchmarking process with a weaker GPU, then PPD vs. ATI cards won't be a good gauge of performance.
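The scoring scheme described above can be sketched as a small calculation. The constants (110, the 50% SMP bonus, the 1000 GPU multiplier, the reference machines) come straight from the post; treat them as the historical values it claims, not an official spec:

```python
def cpu_points(benchmark_days, smp=False):
    """Points per CPU WU: 110 * the number of days the reference
    P4 2.8GHz (Socket 478 Northwood, SSE2 disabled) takes to finish it.
    SMP WUs get a flat 50% bonus for the same underlying work."""
    base = 110 * benchmark_days
    return base * 1.5 if smp else base

def gpu_points(benchmark_days, multiplier=1000):
    """GPU2 points: days to complete on the reference HD 3850,
    times an arbitrary per-client multiplier (1000 for ATI, per the post)."""
    return multiplier * benchmark_days

# The same work unit, scored three ways:
print(cpu_points(4))            # 440 points on a single-core CPU
print(cpu_points(4, smp=True))  # 660.0 points via the SMP client
print(gpu_points(0.5))          # 500.0 points if the 3850 takes half a day
```

The takeaway is that the multiplier and the reference machine are chosen per client, so cross-vendor PPD comparisons say more about those choices than about the actual work done.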
 
so when can we expect to be able to download this, and will it work on my 8600gts?
 
yeah, just what i wanna do all day with my new graphics card: fold at home... why not make an FPU (fold processing unit) instead? and better yet, why don't you get your own computer to put it in, instead of using mine to fold your crap, you cheap bastards, lol.

why don't they fold us a way to get rid of fossil fuels.
 
Wouldn't mind seeing the 280 compared to the G80.
Would be nicer to see how the 3870 competes against the G92.
 
yeah, just what i wanna do all day with my new graphics card: fold at home... why not make an FPU (fold processing unit) instead? and better yet, why don't you get your own computer to put it in, instead of using mine to fold your crap, you cheap bastards, lol.

why don't they fold us a way to get rid of fossil fuels.

ignorance is bliss...
Why not just use a supercomputer?

Modern supercomputers are essentially clusters of hundreds of processors linked by fast networking. The speed of these processors is comparable to (and often slower than) those found in PCs! Thus, if an algorithm (like ours) does not need the fast networking, it will run just as fast on a supercluster as a supercomputer. However, our application needs not the hundreds of processors found in modern supercomputers, but hundreds of thousands of processors. Hence, the calculations performed on Folding@home would not be possible by any other means! Moreover, even if we were given exclusive access to all of the supercomputers in the world, we would still have fewer computing cycles than we do with the Folding@home cluster! This is possible since PC processors are now very fast and there are hundreds of millions of PCs sitting idle in the world.
 
this is like saying somebody who posts 1-line replies 20 times a day is better than somebody who posts long, informative diatribes 5x a day.........
 
well, it's looking good, so we just have to wait and compare with the HD 4870
 
and will it work on my 8600gts?

Yes, any GeForce 8, 9 and upcoming series GPU.

this is like saying somebody who posts 1-line replies 20 times a day is better than somebody who posts long, informative diatribes 5x a day.........

If you can condense a paragraph into 1~2 sentences, it carries equal weight.
 
Yes, any GeForce 8, 9 and upcoming series GPU.

If you can condense a paragraph into 1~2 sentences, it carries equal weight.

not if it doesn't make any sense or isn't understood by half the people.

i could cut down a lot of things i say to 1-3 sentences, but only a few people would get what i was saying.
 
not if it doesn't make any sense or isn't understood by half the people.

i could cut down a lot of things i say to 1-3 sentences, but only a few people would get what i was saying.

I come across a lot of posts that span several paragraphs, all conveying a point that could fit into a couple of sentences. That's why I used the word 'condense' and not 'cut down'.
 