
EVGA and NVIDIA Design Unique Multi-GPU Graphics Accelerator

benchmarks please

Yes, and even that setup is dumb. They can just use their first NVIDIA card to run GPU-accelerated PhysX, so it doesn't make any sense then either.
Also, to others saying a dedicated GPU for PhysX is pointless:

Can you link me benchmarks showing a second GPU is not necessary for PhysX when using a GPU that already supports PhysX?....

I tried googling "physx benchmark" with some variants (+"gt200"), but so far all I have found is this test showing that adding a dedicated PhysX card is clearly better than running PhysX on a GPU that is also trying to draw the graphics. In that link, 2x GTX 280 + 9600 GT performs better than 3x GTX 280 with no dedicated PhysX card. I tend not to trust anything unless it can be backed up by multiple sources/experiments, so I don't fully trust just that one link; however, I also don't trust comments saying that a dedicated PhysX card is pointless when using a GPU (or multi-GPU setup) that can run PhysX.

What can be said is that the performance increase from adding a dedicated PhysX card is minor (does anyone care about 37 fps vs. 42 fps?), so it is not worth the extra price of adding such a card. Related to this, there are very few PhysX games to begin with, and the games that do have PhysX can be played with PhysX on low without much impact, which again means it's not really worth messing around with a dedicated PhysX card.

Do I think this card is pointless? Yes, but that's because I don't really care about the extra performance the G92 will bring to PhysX games that I'm not interested in playing. Someone somewhere will want that extra performance boost or the smaller footprint (compared to a GPU plus a dedicated PhysX card), and to them I say :toast:

I notice some mention the super expensive MARS; my feeling about the MARS was the same: I would never want one due to its insane price, but to those who do :toast:
 
Interesting article, but a few points:


1. Only one game was tested. That's a poor sample, so it's hard to tell whether the result holds across all games.

2. Approximately a 10 FPS gain was seen using the second card for PhysX. I'd not pay $150 AU and another 50-100 W of power draw for 10 FPS.

3. The test is from December 2008; I'm sure performance on single-card (and SLI) setups has improved since then.
 
Instead of this pointless Frankensteinish monstrosity of a card, I would much rather they 40nm'ify the GT200 and get DX11 built into it. They obviously don't have GT300 ready, might as well get some GT200 stuff with DX11 and 40nm, right?
 
This is the new FERMI, they take some old dies and duct tape them together, throw in a leaf blower and charge $599.99, extension cord for the leaf blower not included.


This is almost not funny; without competition, ATI is free to dilly-dally on the new GPUs and on cards like the 2 GB Eyefinity card I need.
 
It's more than likely a 1 GB GTX 285 paired with a 1 GB GTS 250 on two PCBs (similar to the 9800 GX2, but not in SLI); that would be the easiest way to make a card like this.

But you can pretty much guarantee this will be nothing but a high-priced waste of money.

At least they are being innovative, though; it's a good idea to me, just too late.
 
Wow, what a ridiculous card. I'm sure there's a niche few out there who want a GTX 285 and a GTS 250 for dedicated PhysX to play the few PhysX games, but this card is still ridiculous.

Though maybe this is just a stepping stone to a version with GT300 and a GT200 core?

I don't know; these oddball cards always come out at the end of a generation. Sometimes they lead to innovation, sometimes not. It is always a good thing to see something new, though.
 
anything better than nothing?
 
Mine was 2 days ago lol.. Pretty common really, as more people do it when the weather's colder :P..

Happy Bdays to all :)..


They can send it to me; October 30th is mah birthday. :roll:

If it were like MacBook Pro laptops, where the weaker GPU handles 2D apps while the stronger one takes over for 3D apps (saving power), I would like it. But if the second GPU is solely intended for PhysX, which few games require, I'd say they are wasting their time.

And let me guess, they're going to market it as "Frankenstein" or "Monster."

NV thinking of doing that? You're kidding, right? They're so full of themselves with CUDA.

I'd expect it of ATI though, and I'm surprised it hasn't been done already, but then again they've already got the idle power usage down really low.

There aren't enough games for this yet for it to be as good as it could be. People don't want this; they want the 300 series already lol.
 
FYI:

Signed up to the event ^^. I live 9000 km away from it, but hey, maybe they'll choose me lol ^^.

cheers
DS
 
But they only give you soft drinks :(
 
I know something that would be scary: stick a printout of the electric bill from running this card on the HSF.
 
The WTF Futuremark card.
 
Kinda neat from an engineering perspective, but newtekie is right -- these kinds of crap cards always come out at the end of a generation just to fill time.

Or need I remind anyone of the "HD 2600XT Dual."

"$400 dollar card, $150 performance."
 
I like the card; combining two different GPUs onto a single PCB is no small feat. Unfortunately it's about 8 months too late.
 
I like the card; combining two different GPUs onto a single PCB is no small feat. Unfortunately it's about 8 months too late.

The more I think about it, the more I think it actually is probably easier than we think.

I'm obviously not a graphics card engineer, but I would think designing this card would be easier than designing something like a GTX 295 with two of the same cores.

Think about it: with two of the same cores, you have to design the card so the two cores communicate and share the work. You have to design SLI communication into the PCB, along with communication with the PCI-E bus for each core.

With this card, all you need to do is put the two cores on the same PCB, connect them both to an NF200 bridge chip, and that is it. No need to design communication paths between the two cores. To the system, they are just two cards sharing the same PCI-E bus.
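
Roughly what that looks like from the software side (just a sketch, assuming the CUDA runtime is installed; the device names and ordering are whatever the driver reports, nothing specific to this card):

[CODE]
// Minimal device-enumeration sketch: on a board like this, both GPUs should
// simply show up as two independent devices behind the PCIe bridge.
// Build with nvcc, or any C++ compiler linked against the CUDA runtime.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable devices found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, %d multiprocessors, %zu MB\n",
                    i, prop.name, prop.multiProcessorCount,
                    prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
[/CODE]

The point being: the driver just sees two separate PCI-E devices, so there's no card-level coupling to engineer.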

And on a different note, this card will probably benefit people with motherboards that only have one PCI-E x16 slot, since they can still have a dedicated PhysX card. The only problem is that the price will likely be so outrageous they might as well buy a new motherboard...
 
Wonderful, I always wanted a card like this that would cost me way too much and be aimed at utilising a dead (or soon to die) proprietary API, PhysX. :)

Isn't it possible to get the onboard IGP of an nvidia motherboard to do the physx whilst your real GPU does the graphics anyway?
 
*blinks* *stares a little more*...*blinks again* wait....what.......wai...huh.....STUPID!


I don't see the point in this; I'm guessing they need to make some more money on a gimmick they hope people will buy, to make up for all the money they're losing.
 
Wonderful, I always wanted a card like this that would cost me way too much and be aimed at utilising a dead (or soon to die) proprietary API, PhysX. :)

Isn't it possible to get the onboard IGP of an nvidia motherboard to do the physx whilst your real GPU does the graphics anyway?

possible? yes. easy? no.


NV ramped up the requirements for PhysX recently, so most of the onboard GPUs can't handle the more modern PhysX titles.
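
If you want to sanity-check whether a given GPU (IGP or otherwise) even clears the bar, something like this works as a rough filter. Just a sketch: the commonly quoted 32-core / 256 MB minimum is used as an assumption here, and actual PhysX GPU assignment happens in the NVIDIA driver/control panel, not through code like this:

[CODE]
// Rough capability filter; the 32-core / 256 MB cutoff is an assumed figure
// for illustration, not an official constant from NVIDIA.
#include <cstdio>
#include <cuda_runtime.h>

static bool looksPhysXCapable(const cudaDeviceProp& p) {
    const int    kMinCores  = 32;                    // assumed minimum
    const size_t kMinMemory = 256u * 1024u * 1024u;  // assumed minimum (256 MB)
    // Cores per multiprocessor vary by architecture; 8 per SM matches the
    // G8x/G9x/GT200-era parts this thread is about.
    int approxCores = p.multiProcessorCount * 8;
    return approxCores >= kMinCores && p.totalGlobalMem >= kMinMemory;
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d (%s): %s\n", i, prop.name,
                    looksPhysXCapable(prop) ? "could take physics work"
                                            : "probably below the cutoff");
    }
    return 0;
}
[/CODE]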
 
http://www.evga.com/articles/00512/

" EVGA and NVIDIA would like to invite you to a new graphics card launch at the NVIDIA campus in Santa Clara, CA. No it’s not based on the new Fermi architecture… we are still finalizing those details, so stay tuned. It’s a rocking new graphics card designed by NVIDIA and EVGA to take PhysX to the next level.

The bolded part made me... :roll:

And at the launch event they can test that card vs. an HD 5870 + GTS 250 with the hacked drivers that allow PhysX... I would really wanna see that. :D
 
This only proves that Nvidia has no Fermi ready yet if they are desperate enough to have to waste money on product development like that. Oh my...

Remember that the 8800 GTX dominated the graphics card world for almost a year without any response from ATI.

Fermi is close to being completed and there is no delay from Nvidia.
 
Interesting to see this card fold.
 
Heh, another G92? teh suk! Might as well make a keychain again to buy time for G300 :P
 
Remember that the 8800 GTX dominated the graphics card world for almost a year without any response from ATI.

Fermi is close to being completed and there is no delay from Nvidia.

No delay, eh?

http://www.techpowerup.com/tags.php?tag=GT300

First it was Q4 2008, then Q1 2009, then Q4 2009 (that's right now) with demos in September -- this is what you call "no delay?"
 
No delay, eh?

http://www.techpowerup.com/tags.php?tag=GT300

First it was Q4 2008, then Q1 2009, then Q4 2009 (that's right now) with demos in September -- this is what you call "no delay?"

The Q4 2008 and Q1 2009 dates were nothing more than extremely early speculation based on nothing; it was never official from nVidia. It was just a bunch of news reporters saying it "could" be out by Q4 2008, or that it was coming out "as early as" Q1 2009.

Q4 2009 has been the official release schedule since nVidia announced it back in December 2008.

So, no, it was never Q4 2008, then Q1 2009. It has always been Q4 2009.
 
The Q4 2008 and Q1 2009 dates were nothing more than extremely early speculation based on nothing; it was never official from nVidia. It was just a bunch of news reporters saying it "could" be out by Q4 2008, or that it was coming out "as early as" Q1 2009.

Q4 2009 has been the official release schedule since nVidia announced it back in December 2008.

So, no, it was never Q4 2008, then Q1 2009. It has always been Q4 2009.

Nvidia has had delays on everything relative to what was planned, and there's been no official word on an exact date, nor on any of the Q1/Q2/Q3/Q4 quarter talk either.
Their 40 nm parts were roughly 3 months delayed.

NVIDIA and ATI are the best thing that ever happened to this world; the bad part is that NVIDIA is turning out to be arrogant pricks now. But they have had very, very good performance growth with every generation change, and have pushed themselves hard.
Surely GPGPU is good, but without a common API for Intel, ATI, and NVIDIA it isn't a compelling feature.
I wouldn't want to make an app just for NVIDIA customers, and I definitely don't want to write it against four different APIs: one for Intel, one for ATI, one for NVIDIA, and one for x86!
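
For what it's worth, a common API along those lines does exist in OpenCL. A minimal host-side sketch (assuming an OpenCL SDK and the vendors' drivers are installed; error handling kept to a minimum) that sees Intel, ATI, and NVIDIA GPUs through one code path:

[CODE]
// Enumerate every GPU that any installed OpenCL platform (Intel, AMD/ATI,
// NVIDIA) exposes, all through the same vendor-neutral API.
#include <cstdio>
#include <vector>
#include <CL/cl.h>

int main() {
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    if (numPlatforms == 0) return 0;
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id platform : platforms) {
        char vendor[256] = {0};
        clGetPlatformInfo(platform, CL_PLATFORM_VENDOR, sizeof(vendor), vendor, nullptr);

        cl_uint numDevices = 0;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices) != CL_SUCCESS
            || numDevices == 0)
            continue;  // this vendor's platform exposes no GPU devices
        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, numDevices, devices.data(), nullptr);

        for (cl_device_id device : devices) {
            char name[256] = {0};
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("%s : %s\n", vendor, name);
        }
    }
    return 0;
}
[/CODE]

Whether every vendor's driver is mature enough to make that one code path worth shipping is a separate question, which is really the point above.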

And I wouldn't make a GPGPU card for consumers yet; I'd grow GPGPU performance steadily rather than dedicate a product to it, swinging from a gaming card one generation to a GPGPU card the next.
Stuff doesn't happen overnight; just look at 64-bit: 32-bit worked and people didn't switch.
 