NVIDIA GeForce 4XX Series Discussion

Status
Not open for further replies.

Easy Rhino

Linux Advocate
Staff member
Joined
Nov 13, 2006
Messages
15,449 (2.42/day)
Location
Mid-Atlantic
System Name Desktop
Processor i5 13600KF
Motherboard AsRock B760M Steel Legend Wifi
Cooling Noctua NH-U9S
Memory 4x 16 Gb Gskill S5 DDR5 @6000
Video Card(s) Gigabyte Gaming OC 6750 XT 12GB
Storage WD_BLACK 4TB SN850x
Display(s) Gigabyte M32U
Case Corsair Carbide 400C
Audio Device(s) On Board
Power Supply EVGA Supernova 650 P2
Mouse MX Master 3s
Keyboard Logitech G915 Wireless Clicky
Software The Matrix
I'm looking forward to some nice midrange DX11 offerings from Nvidia! I'll buy three!
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
25,887 (3.79/day)
Location
Alabama
System Name Rocinante
Processor I9 14900KS
Motherboard EVGA z690 Dark KINGPIN (modded BIOS)
Cooling EK-AIO Elite 360 D-RGB
Memory 64GB Gskill Trident Z5 DDR5 6000 @6400
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 500GB 980 Pro | 1x 1TB 980 Pro | 1x 8TB Corsair MP400
Display(s) Odyssey OLED G9 G95SC
Case Lian Li o11 Evo Dynamic White
Audio Device(s) Moondrop S8's on Schiit Hel 2e
Power Supply Bequiet! Power Pro 12 1500w
Mouse Lamzu Atlantis mini (White)
Keyboard Monsgeek M3 Lavender, Akko Crystal Blues
VR HMD Quest 3
Software Windows 11
Benchmark Scores I dont have time for that.
Some more news snippets on the new chip:

The chip supports GDDR5 memory, has billions of transistors, and should be bigger and faster than the Radeon HD 5870. A GX2 dual-GPU version of the card is also in the pipeline. It's well worth noting that this is the biggest architectural change since the G80 days, as Nvidia wanted to add a lot of instructions for better parallel computing.

Read the rest at Fudzilla.

Wanna know what gets me about most of this? It's probably the most true thing I've heard out of Fud all year. If you read bta's news article on how Nvidia is stepping up and helping Microsoft with parallel drivers, it makes sense. Why would Nvidia all of a sudden want to spend billions more of the money they like to keep to help Microsoft make parallel drivers, this close to the GT300 launch? With that die shot suggesting there will be more shaders? GDDR5? Last I remember, memory size wasn't that important in folding, but bandwidth counts, so that makes sense. And what about what Fud said about added instruction sets? I'd believe "for better folding" in a heartbeat. It all seems to add up. It's almost like Nvidia is changing direction a little and this time around targeting the folding community just as hard as the gamers, because they know people absolutely dedicated to folding will jump on these cards in a heartbeat.
 
Joined
Jul 19, 2006
Messages
43,587 (6.72/day)
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS TUF x670e
Cooling EK AIO 360. Phantek T30 fans.
Memory 32GB G.Skill 6000Mhz
Video Card(s) Asus RTX 4090
Storage WD m.2
Display(s) LG C2 Evo OLED 42"
Case Lian Li PC 011 Dynamic Evo
Audio Device(s) Topping E70 DAC, SMSL SP200 Headphone Amp.
Power Supply FSP Hydro Ti PRO 1000W
Mouse Razer Basilisk V3 Pro
Keyboard Tester84
Software Windows 11
We have yet to see how folding is going to work with these new generation cards.
 

Easy Rhino

Linux Advocate
Staff member
Joined
Nov 13, 2006
Messages
15,449 (2.42/day)
Location
Mid-Atlantic
System Name Desktop
Processor i5 13600KF
Motherboard AsRock B760M Steel Legend Wifi
Cooling Noctua NH-U9S
Memory 4x 16 Gb Gskill S5 DDR5 @6000
Video Card(s) Gigabyte Gaming OC 6750 XT 12GB
Storage WD_BLACK 4TB SN850x
Display(s) Gigabyte M32U
Case Corsair Carbide 400C
Audio Device(s) On Board
Power Supply EVGA Supernova 650 P2
Mouse MX Master 3s
Keyboard Logitech G915 Wireless Clicky
Software The Matrix
Wanna know what gets me about most of this? It's probably the most true thing I've heard out of Fud all year. If you read bta's news article on how Nvidia is stepping up and helping Microsoft with parallel drivers, it makes sense. Why would Nvidia all of a sudden want to spend billions more of the money they like to keep to help Microsoft make parallel drivers, this close to the GT300 launch? With that die shot suggesting there will be more shaders? GDDR5? Last I remember, memory size wasn't that important in folding, but bandwidth counts, so that makes sense. And what about what Fud said about added instruction sets? I'd believe "for better folding" in a heartbeat. It all seems to add up. It's almost like Nvidia is changing direction a little and this time around targeting the folding community just as hard as the gamers, because they know people absolutely dedicated to folding will jump on these cards in a heartbeat.

I'm expecting a blowout from Nvidia this time around.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
25,887 (3.79/day)
Location
Alabama
System Name Rocinante
Processor I9 14900KS
Motherboard EVGA z690 Dark KINGPIN (modded BIOS)
Cooling EK-AIO Elite 360 D-RGB
Memory 64GB Gskill Trident Z5 DDR5 6000 @6400
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 500GB 980 Pro | 1x 1TB 980 Pro | 1x 8TB Corsair MP400
Display(s) Odyssey OLED G9 G95SC
Case Lian Li o11 Evo Dynamic White
Audio Device(s) Moondrop S8's on Schiit Hel 2e
Power Supply Bequiet! Power Pro 12 1500w
Mouse Lamzu Atlantis mini (White)
Keyboard Monsgeek M3 Lavender, Akko Crystal Blues
VR HMD Quest 3
Software Windows 11
Benchmark Scores I dont have time for that.
We have yet to see how folding is going to work with these new generation cards.

IMO it doesn't matter. I'm sure they can get them to fold just as well as this generation, and if that's the case, anything in the 9 series can already fold on par with or outpace a 5870, so personally I think the GT300 series with the added shaders will easily pull 8-10k PPD. There is a chance the program won't work with GT300 at first, but that's only if Nvidia changed the architecture from scalar to superscalar like ATI, which I seriously doubt. Sure, they may have changed the fab and so on, but as long as the shaders work the same way it doesn't matter if it's 45, 32 or 22nm. If Fud is right and they added microcode, then yeah, the shaders technically act differently, but at that point it's like running a 32-bit OS on a 64-bit proc: it just won't use the new optimizations, in which case it should still work just fine, maybe even faster given the general SP increase.
 

Binge

Overclocking Surrealism
Joined
Sep 15, 2008
Messages
6,979 (1.22/day)
Location
PA, USA
System Name Molly
Processor i5 3570K
Motherboard Z77 ASRock
Cooling CooliT Eco
Memory 2x4GB Mushkin Redline Ridgebacks
Video Card(s) Gigabyte GTX 680
Case Coolermaster CM690 II Advanced
Power Supply Corsair HX-1000
...and BiNGE asked in deep prayer for intellectual conversations regarding the GT300 series. His prayers were answered shortly after. Oh joyous day :D

Thanks guys :)
 
Joined
Jul 19, 2006
Messages
43,587 (6.72/day)
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS TUF x670e
Cooling EK AIO 360. Phantek T30 fans.
Memory 32GB G.Skill 6000Mhz
Video Card(s) Asus RTX 4090
Storage WD m.2
Display(s) LG C2 Evo OLED 42"
Case Lian Li PC 011 Dynamic Evo
Audio Device(s) Topping E70 DAC, SMSL SP200 Headphone Amp.
Power Supply FSP Hydro Ti PRO 1000W
Mouse Razer Basilisk V3 Pro
Keyboard Tester84
Software Windows 11
IMO it doesn't matter. I'm sure they can get them to fold just as well as this generation, and if that's the case, anything in the 9 series can already fold on par with or outpace a 5870, so personally I think the GT300 series with the added shaders will easily pull 8-10k PPD. There is a chance the program won't work with GT300 at first, but that's only if Nvidia changed the architecture from scalar to superscalar like ATI, which I seriously doubt. Sure, they may have changed the fab and so on, but as long as the shaders work the same way it doesn't matter if it's 45, 32 or 22nm. If Fud is right and they added microcode, then yeah, the shaders technically act differently, but at that point it's like running a 32-bit OS on a 64-bit proc: it just won't use the new optimizations, in which case it should still work just fine, maybe even faster given the general SP increase.

Hardware isn't going to matter. The software side is what is going to matter, and since the GT300 is going to be a very different chip from GT200, I'm quite sure things are going to work differently. With DX11 compatibility, will Nvidia still need to use CUDA? I would think that any new software for F@H would take advantage of compute shaders. The F@H client will have to be redone.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
25,887 (3.79/day)
Location
Alabama
System Name Rocinante
Processor I9 14900KS
Motherboard EVGA z690 Dark KINGPIN (modded BIOS)
Cooling EK-AIO Elite 360 D-RGB
Memory 64GB Gskill Trident Z5 DDR5 6000 @6400
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 500GB 980 Pro | 1x 1TB 980 Pro | 1x 8TB Corsair MP400
Display(s) Odyssey OLED G9 G95SC
Case Lian Li o11 Evo Dynamic White
Audio Device(s) Moondrop S8's on Schiit Hel 2e
Power Supply Bequiet! Power Pro 12 1500w
Mouse Lamzu Atlantis mini (White)
Keyboard Monsgeek M3 Lavender, Akko Crystal Blues
VR HMD Quest 3
Software Windows 11
Benchmark Scores I dont have time for that.
Hardware isn't going to matter. The software side is what is going to matter, and since the GT300 is going to be a very different chip from GT200, I'm quite sure things are going to work differently. With DX11 compatibility, will Nvidia still need to use CUDA? I would think that any new software for F@H would take advantage of compute shaders. The F@H client will have to be redone.

Things will work differently, I'm sure, but F@H has nothing to do with the DirectX APIs, so that's not going to matter in the slightest. As for CUDA, I'm going to bank on yes, because DX has to be conformed to by any company that wishes to use it. That being the case, they simply aren't allowed to implement a software technology base that favors one company; it can't be done no matter how much $$ Nvidia wants to pump into it. DX is discussed and standardized by multiple companies, including ATI, and I'm sure they would have a serious problem with Microsoft leaning heavily toward Nvidia. That said, I think you're right that F@H in general will need to be totally reworked, but IMO that would only need to be done to accommodate and utilize Nvidia's updated microcode. Even if they didn't, I honestly don't see why F@H wouldn't run; I think it would, just minus the extra acceleration. That said, Nvidia IMO will still need to use CUDA, as neither F@H nor Microsoft can single-handedly decide to favor one manufacturer. Sure, Nvidia cards fold better than ATI, but that's just because ATI can't do PhysX; that's not F@H taking a side, that's F@H utilizing what already existed.
 
Joined
Jul 19, 2006
Messages
43,587 (6.72/day)
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS TUF x670e
Cooling EK AIO 360. Phantek T30 fans.
Memory 32GB G.Skill 6000Mhz
Video Card(s) Asus RTX 4090
Storage WD m.2
Display(s) LG C2 Evo OLED 42"
Case Lian Li PC 011 Dynamic Evo
Audio Device(s) Topping E70 DAC, SMSL SP200 Headphone Amp.
Power Supply FSP Hydro Ti PRO 1000W
Mouse Razer Basilisk V3 Pro
Keyboard Tester84
Software Windows 11
Things will work differently, I'm sure, but F@H has nothing to do with the DirectX APIs, so that's not going to matter in the slightest. As for CUDA, I'm going to bank on yes, because DX has to be conformed to by any company that wishes to use it. That being the case, they simply aren't allowed to implement a software technology base that favors one company; it can't be done no matter how much $$ Nvidia wants to pump into it. DX is discussed and standardized by multiple companies, including ATI, and I'm sure they would have a serious problem with Microsoft leaning heavily toward Nvidia. That said, I think you're right that F@H in general will need to be totally reworked, but IMO that would only need to be done to accommodate and utilize Nvidia's updated microcode. Even if they didn't, I honestly don't see why F@H wouldn't run; I think it would, just minus the extra acceleration. That said, Nvidia IMO will still need to use CUDA, as neither F@H nor Microsoft can single-handedly decide to favor one manufacturer. Sure, Nvidia cards fold better than ATI, but that's just because ATI can't do PhysX; that's not F@H taking a side, that's F@H utilizing what already existed.

Well, I guess what I want to know then, is will there be PhysX with Nvidia's new cards? Are we going to continue using proprietary features on video cards that divide the industry? I hope not.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
25,887 (3.79/day)
Location
Alabama
System Name Rocinante
Processor I9 14900KS
Motherboard EVGA z690 Dark KINGPIN (modded BIOS)
Cooling EK-AIO Elite 360 D-RGB
Memory 64GB Gskill Trident Z5 DDR5 6000 @6400
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 500GB 980 Pro | 1x 1TB 980 Pro | 1x 8TB Corsair MP400
Display(s) Odyssey OLED G9 G95SC
Case Lian Li o11 Evo Dynamic White
Audio Device(s) Moondrop S8's on Schiit Hel 2e
Power Supply Bequiet! Power Pro 12 1500w
Mouse Lamzu Atlantis mini (White)
Keyboard Monsgeek M3 Lavender, Akko Crystal Blues
VR HMD Quest 3
Software Windows 11
Benchmark Scores I dont have time for that.
Well, I guess what I want to know then, is will there be PhysX with Nvidia's new cards? Are we going to continue using proprietary features on video cards that divide the industry? I hope not.

Well, like the rest of us, I can only go by speculation and history, but history tends to repeat itself, and given what very little we know from rumor, plus the current news about Nvidia wanting to step up to parallel computing, it only seems to make sense. Everything just comes together too perfectly for us to overlook it. For the next part I'm not responding directly to you, this is just general, so don't think I'm disagreeing or starting an argument; let's all look at some facts, and you can tell me where I'm flawed if you wish, but I might have some good points:

- There is news that Nvidia is in bed with Microsoft, developing really good parallel computing drivers.

- Let's face it, the F@H community houses hardcore people who will blow big $$ on their beliefs, or simply to be able to say they're #1 on the list in PPD.

- That being said, let's not blow too much money; let's keep CUDA. It works, and with Microsoft helping it can only get better, but this also means we will still be divided, unfortunately.

- The new arch is going to have more shaders. We can compare dies and speculate, but let's be honest: when was the last time Nvidia came out with a new card that DIDN'T have more SPs?

- Now let's take the folders into consideration. They blow big $$, and folding is getting A LOT more popular; let's face it, at least 80% of the people reading this thread contribute, and a lot of the folders here run farms (when I feel like finding power cables, I will too, honestly). That being the case, this would be like striking oil for Nvidia: a totally new community that won't be afraid to drop lots of $$ and boost Nvidia's sales. They can only see us folders as a relatively untapped resource, so what better than to target us?

- Taking the above facts and theories into consideration, let's talk about price. No doubt, given history, Nvidia's cards are usually more expensive, so we should be expecting that; there is already a predisposition. The problem is that, like the tech, prices have also grown exponentially. How would Nvidia make this work? Easy: target a new bunch of people (folders) who will buy many of these cards and not care about price, then dump a ton of money into Microsoft to help them so it looks like a worthwhile investment. Let's be honest, I see this parallel computing partnership as a way to cover their ass if gamers don't want to blow the $$.

- More SPs mean more heat, and everyone knows the GTX 200 series wasn't as well known as the 9 series for OCs. Gamers like to OC and so do enthusiasts, so if it doesn't clock they will lose a little profit and general revenue. This would also lead them to seek a new audience; let's face it, there is no reason for a dad and five kids to buy a GT300 to put in the family Dell, and it probably won't happen. We're the community and they know it.

- Being the two main communities, it's going to have to perform, so SP numbers are almost guaranteed to be above what we see now; it just makes sense. Let's face it, bigger numbers aren't always better, but we're overclockers, we have a predisposition for bigger numbers anyway.

- GDDR5. Wanna know why this makes sense? DX11 games and apps are going to like bandwidth; that's just what happens with a new DX, which is also why we're going to see more SPs. That, and ATI has GDDR5. When's the last time you saw Nvidia NOT do something ATI did? And I don't mean DX10.1, that's software, that's useless; it may be programmed into the GPU, but at the end of the day it's just microcode. GDDR5, on the other hand, is a physical object, and last I checked Nvidia doesn't like to be outplayed, because in their mind, remember, "Nvidia, the way it's meant to be played," and to keep up with their slogan they have to match them part for part.

my 2c
 

Binge

Overclocking Surrealism
Joined
Sep 15, 2008
Messages
6,979 (1.22/day)
Location
PA, USA
System Name Molly
Processor i5 3570K
Motherboard Z77 ASRock
Cooling CooliT Eco
Memory 2x4GB Mushkin Redline Ridgebacks
Video Card(s) Gigabyte GTX 680
Case Coolermaster CM690 II Advanced
Power Supply Corsair HX-1000
if nV's GT300 still supports CUDA then I am sure they could patch their own drivers to support better F@H performance with the new generation of cards. Yes, it's that flexible.
 
Joined
Jul 19, 2006
Messages
43,587 (6.72/day)
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS TUF x670e
Cooling EK AIO 360. Phantek T30 fans.
Memory 32GB G.Skill 6000Mhz
Video Card(s) Asus RTX 4090
Storage WD m.2
Display(s) LG C2 Evo OLED 42"
Case Lian Li PC 011 Dynamic Evo
Audio Device(s) Topping E70 DAC, SMSL SP200 Headphone Amp.
Power Supply FSP Hydro Ti PRO 1000W
Mouse Razer Basilisk V3 Pro
Keyboard Tester84
Software Windows 11
- There is news that Nvidia is in bed with Microsoft, developing really good parallel computing drivers.

Excellent. From what we have learned, ATI is also in bed with Microsoft, meaning MS is hopefully leading the way and keeping things compatible across hardware manufacturers. The more things stay on even ground, the easier it is for a game developer to work within parameters that can benefit different hardware. I love new computer hardware and better performance as much as the next guy, but it's the advancement of the games we play and the applications we use that I want more.
 

Binge

Overclocking Surrealism
Joined
Sep 15, 2008
Messages
6,979 (1.22/day)
Location
PA, USA
System Name Molly
Processor i5 3570K
Motherboard Z77 ASRock
Cooling CooliT Eco
Memory 2x4GB Mushkin Redline Ridgebacks
Video Card(s) Gigabyte GTX 680
Case Coolermaster CM690 II Advanced
Power Supply Corsair HX-1000

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
25,887 (3.79/day)
Location
Alabama
System Name Rocinante
Processor I9 14900KS
Motherboard EVGA z690 Dark KINGPIN (modded BIOS)
Cooling EK-AIO Elite 360 D-RGB
Memory 64GB Gskill Trident Z5 DDR5 6000 @6400
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 500GB 980 Pro | 1x 1TB 980 Pro | 1x 8TB Corsair MP400
Display(s) Odyssey OLED G9 G95SC
Case Lian Li o11 Evo Dynamic White
Audio Device(s) Moondrop S8's on Schiit Hel 2e
Power Supply Bequiet! Power Pro 12 1500w
Mouse Lamzu Atlantis mini (White)
Keyboard Monsgeek M3 Lavender, Akko Crystal Blues
VR HMD Quest 3
Software Windows 11
Benchmark Scores I dont have time for that.
Excellent. From what we have learned, ATI is also in bed with Microsoft, meaning MS is hopefully leading the way and keeping things compatible across hardware manufacturers. The more things stay on even ground, the easier it is for a game developer to work within parameters that can benefit different hardware. I love new computer hardware and better performance as much as the next guy, but it's the advancement of the games we play and the applications we use that I want more.

Too true indeed. Faster processors, more bandwidth, it's all awesome really, and I'm excited for the new gen, but it's not going to be anything really different; if you think about it, it would just be more of the same. Everyone loves to see better stuff, but it does get boring: look, oooooo, 1GHz more core clock, $100 more, it happens every release. Software is what needs to make a real improvement, and if they can play 100% catch-up with the software and with the new hardware this gen, well, sit back boys, we are in for a very, very exciting next couple of years. If you sit back and really think about it, we have made incredible, and I mean absolutely incredible, hardware changes: from Core 2 to the i7 architecture, to graphics cards fast enough to push more throughput than a Unix farm on their own, with a new software codebase just released and more and more being co-developed. I mean, damn, boys, I remember the Apple IIs and stuff, and like many of you I have seen a lot of change, but right now, late 2009, being a computer enthusiast is a damn good time to be alive.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
25,887 (3.79/day)
Location
Alabama
System Name Rocinante
Processor I9 14900KS
Motherboard EVGA z690 Dark KINGPIN (modded BIOS)
Cooling EK-AIO Elite 360 D-RGB
Memory 64GB Gskill Trident Z5 DDR5 6000 @6400
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 500GB 980 Pro | 1x 1TB 980 Pro | 1x 8TB Corsair MP400
Display(s) Odyssey OLED G9 G95SC
Case Lian Li o11 Evo Dynamic White
Audio Device(s) Moondrop S8's on Schiit Hel 2e
Power Supply Bequiet! Power Pro 12 1500w
Mouse Lamzu Atlantis mini (White)
Keyboard Monsgeek M3 Lavender, Akko Crystal Blues
VR HMD Quest 3
Software Windows 11
Benchmark Scores I dont have time for that.
I don't mean to double post, but I would like to extend my thanks to erocker for having one of the most engaging conversations with me this very fine evening. I mean, damn, that was just straightforward and good; it's amazing the kind of stuff you can hammer out when you just talk to someone about it. Good show, lads. Intelligent people all around; this is why I love TPU.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Comparing those two die shots, the alleged GT300 die is holding 384 shaders, not 512. Look at those shader core blocks (red rectangle you marked on the GT200). There are 10 of those on the GT200. So 240/10 = 24. They bear a strong resemblance to the ones on the alleged GT300 die shot. There are 16 of those on the die. 16 x 24 = 384.

:banghead::banghead::banghead::banghead::banghead:

I'm stupid, stupid, stupid!!! Thanks a lot; your comment made me look at the pictures again. It's fake! Absolutely fake!!

The image is rotated 90° with respect to the original one.

Image rotation, some tone/saturation and maybe hue variation applied, and cut and paste applied everywhere...

I love speculating about things instead of waiting for the info so much that I'm sometimes blind. Sure, die shots have repetitive structures and all, but this faked shot is so badly executed that I feel so embarrassed I don't know whether to do a Yakuza and cut off one of my fingers. And that's coming from a guy who uses PS almost every day... :banghead::banghead:
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,389 (7.68/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
I'll link your post to the IT168 guys. It will be so 'in your face' lol :)
 

wolf

Performance Enthusiast
Joined
May 7, 2007
Messages
7,753 (1.25/day)
System Name MightyX
Processor Ryzen 5800X3D
Motherboard Gigabyte X570 I Aorus Pro WiFi
Cooling Scythe Fuma 2
Memory 32GB DDR4 3600 CL16
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
My current GT300 thoughts.

It has to have more shaders or shader power. From 128 to 240 was almost double; if we are sticking with traditional-ish shader units, I'd hope for 480-512 or more, or maybe something in the realm of ~384 but clocked off their nuts (like 2-2.5GHz).

GDDR5 is a must now, and my speculation is they will keep the 512-bit bus and go for BULK bandwidth; really, I don't see anything less than 200 GB/s of bandwidth coming from GT300.

ROPs are an interesting thought. 32 is already a lot, and I really don't have much to offer in terms of speculation on where they will go next. 64 would be tits tho.

SLI: I have the feeling SLI might get another tweak, in terms of being able to run dual/tri/quad configurations with more of the newer cards together. This thought stems from Nvidia's bulk talk about parallelism.

Sheer computing power: MIMD is something being touted for Nvidia's cards too, and given ATI can now churn out over ~2.7 TFLOPS with a current single GPU, Nvidia will really need a serious chip to best that.

Using a future model of their 'current' architecture, to get ~3 TFLOPS they would have to have:

512 shaders @ ~1940MHz = ~2.9 TFLOPS
384 shaders @ ~2600MHz = ~2.9 TFLOPS

or a crazy 768 shaders @ a more down-to-earth 1300MHz for 2.9 TFLOPS.

This is all assuming their architecture changes, just not that drastically. Thoughts?
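For the curious, here's the napkin math behind those numbers as a quick Python sketch. It assumes the usual peak-rate formula of shaders x shader clock x 3 FLOPs per clock (the dual-issue MADD+MUL figure Nvidia quotes for G80/GT200); real sustained throughput is lower, so treat it as marketing math only.

# Peak single-precision throughput, assuming 3 FLOPs per shader per clock (MADD + MUL).
def theoretical_tflops(shaders, shader_clock_mhz, flops_per_clock=3):
    return shaders * shader_clock_mhz * 1e6 * flops_per_clock / 1e12

for shaders, clock in [(512, 1940), (384, 2600), (768, 1300)]:
    print(f"{shaders} SPs @ {clock} MHz -> ~{theoretical_tflops(shaders, clock):.2f} TFLOPS")
# All three combinations land just a hair under 3 TFLOPS, matching the ~2.9 figures above.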
 

Bo_Fox

New Member
Joined
May 29, 2009
Messages
480 (0.09/day)
Location
Barack Hussein Obama-Biden's Nation
System Name Flame Vortec Fatal1ty (rig1), UV Tourmaline Confexia (rig2)
Processor 2 x Core i7's 4+Gigahertzzies
Motherboard BL00DR4G3 and DFI UT-X58 T3eH8
Cooling Thermalright IFX-14 (better than TRUE) 2x push-push, Customized TT Big Typhoon
Memory 6GB OCZ DDR3-1600 CAS7-7-7-1T, 6GB for 2nd rig
Video Card(s) 8800GTX for "free" S3D (mtbs3d.com), 4870 1GB, HDTV Wonder (DRM-free)
Storage WD RE3 1TB, Caviar Black 1TB 7.2k, 500GB 7.2k, Raptor X 10k
Display(s) Sony GDM-FW900 24" CRT oc'ed to 2560x1600@68Hz, Dell 2405FPW 24" PVA (HDCP-free)
Case custom gutted-out painted black case, silver UV case, lots of aesthetics-souped stuff
Audio Device(s) Sonar X-Fi MB, Bernstein audio riser.. what??
Power Supply OCZ Fatal1ty 700W, Iceberg 680W, Fortron Booster X3 300W for GPU
Software 2 partitions WinXP-32 on 2 drives per rig, 2 of Vista64 on 2 drives per rig
Benchmark Scores 5.9 Vista Experience Index... yay!!! What??? :)
Well, I guess what I want to know then, is will there be PhysX with Nvidia's new cards? Are we going to continue using proprietary features on video cards that divide the industry? I hope not.

Yeah, it certainly looks as if there will be even more PhysX games released next year than this year. However, one big piece of news is that with Windows 7, there will no longer be support for a dedicated PhysX card, be it Ageia or GeForce.

My current GT300 thoughts.

It has to have more shaders or shader power. From 128 to 240 was almost double; if we are sticking with traditional-ish shader units, I'd hope for 480-512 or more, or maybe something in the realm of ~384 but clocked off their nuts (like 2-2.5GHz).

GDDR5 is a must now, and my speculation is they will keep the 512-bit bus and go for BULK bandwidth; really, I don't see anything less than 200 GB/s of bandwidth coming from GT300.

ROPs are an interesting thought. 32 is already a lot, and I really don't have much to offer in terms of speculation on where they will go next. 64 would be tits tho.

SLI: I have the feeling SLI might get another tweak, in terms of being able to run dual/tri/quad configurations with more of the newer cards together. This thought stems from Nvidia's bulk talk about parallelism.

Sheer computing power: MIMD is something being touted for Nvidia's cards too, and given ATI can now churn out over ~2.7 TFLOPS with a current single GPU, Nvidia will really need a serious chip to best that.

Using a future model of their 'current' architecture, to get ~3 TFLOPS they would have to have:

512 shaders @ ~1940MHz = ~2.9 TFLOPS
384 shaders @ ~2600MHz = ~2.9 TFLOPS

or a crazy 768 shaders @ a more down-to-earth 1300MHz for 2.9 TFLOPS.

This is all assuming their architecture changes, just not that drastically. Thoughts?

The "official" rumor now is that GT300 will be using 384-bit bus with GDDR5. Assuming that the GDDR5 will be clocked at 5GHz effective, it will translate into 50% greater bandwidth than a GTX 285. Note how a 5870 has only 23% more bandwidth than a 4890? And how a GTX 285 still has a little more bandwidth than a 5870?!?

With a 384-bit bus, it is highly likely that Nvidia will use 12 memory chips once again with GT300, like they did with the 8800 GTX. So we'd be seeing 1.5GB of memory, which should be plenty for at least a year anyway. It is much more cost-effective than 2GB (or 16 chips), after all.
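A quick sanity check of that arithmetic, as a rough Python sketch. It assumes the stock effective memory data rates of roughly 2.48 GT/s GDDR3 on the GTX 285, 4.8 GT/s on the 5870 and 3.9 GT/s on the 4890; nothing official, just bus width times data rate.

# Bandwidth in GB/s = bus width in bytes * effective data rate in GT/s.
def bandwidth_gbs(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

gt300_rumor = bandwidth_gbs(384, 5.0)    # 240 GB/s
gtx_285     = bandwidth_gbs(512, 2.484)  # ~159 GB/s (GDDR3)
hd_5870     = bandwidth_gbs(256, 4.8)    # ~154 GB/s
hd_4890     = bandwidth_gbs(256, 3.9)    # ~125 GB/s

print(f"GT300 rumor vs GTX 285: +{(gt300_rumor / gtx_285 - 1) * 100:.0f}%")  # ~+51%, the ~50% figure
print(f"HD 5870 vs HD 4890:     +{(hd_5870 / hd_4890 - 1) * 100:.0f}%")      # ~+23%

# A 384-bit bus built from 32-bit GDDR5 chips needs 12 chips;
# with 1 Gbit (128 MB) parts that's 12 * 128 MB = 1.5 GB.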

Also, 512 shaders is extremely likely, which is over 2 times that of a GTX 285 (240 shaders). Some unreliable rumor sites claim that the GT300 will have over 3 times the performance of GT200 but I find this a bit outrageous. It must be in relation to a specific scenario, like a DX11 game, huh? Eh? :D

I wouldn't be surprised if it has over 3 billion trannies. Perhaps Nvidia will use such a different architecture that it actually yields 3 TFLOPS?? (Keep in mind how the GT200 was so much bigger than an RV770, yet still had lower TFLOPS performance.)
 
Last edited:

troyrae360

New Member
Joined
Feb 2, 2009
Messages
1,129 (0.20/day)
Location
Christchurch New Zealand
System Name My Computer!
Processor AMD 6400+ Black @ 3.5
Motherboard Gigabyte AM2+ GA-MA790X DS4
Cooling Gigabyte G-Power 2 pro
Memory 2x2 gig Adata 800
Video Card(s) HD3870x2 @ 900gpu and 999mem
Storage 2x wd raid edition 120gig + 1 samsung 320 + samsung 250
Display(s) Samsung 40inch series6 full HD 1080p
Case NZXT Lexa
Audio Device(s) ALC889A HD audio with Enables a Superior Audio Experience (on board)
Power Supply Vantec ION2+ 550w
Software Vista Home pream 64
I wouldn't be surprised if it has over 3 billion trannies.

:roll:
3 billion trannies? Now I know why I stick with ATI; you might only get one Ruby, but hey, at least she doesn't have balls!! :roll:

 
Last edited:

Binge

Overclocking Surrealism
Joined
Sep 15, 2008
Messages
6,979 (1.22/day)
Location
PA, USA
System Name Molly
Processor i5 3570K
Motherboard Z77 ASRock
Cooling CooliT Eco
Memory 2x4GB Mushkin Redline Ridgebacks
Video Card(s) Gigabyte GTX 680
Case Coolermaster CM690 II Advanced
Power Supply Corsair HX-1000
Futa.... futa ruby?... :slap:
 

wolf

Performance Enthusiast
Joined
May 7, 2007
Messages
7,753 (1.25/day)
System Name MightyX
Processor Ryzen 5800X3D
Motherboard Gigabyte X570 I Aorus Pro WiFi
Cooling Scythe Fuma 2
Memory 32GB DDR4 3600 CL16
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
The "official" rumor now is that GT300 will be using 384-bit bus with GDDR5. Assuming that the GDDR5 will be clocked at 5GHz effective, it will translate into 50% greater bandwidth than a GTX 285. Note how a 5870 has only 33% more bandwidth than a 4890? And how a GTX 285 still has a little more bandwidth than a 5870?!?

With a 384-bit bus, it is highly likely that Nvidia will use 12 memory chips once again with their GT300 like they did with 8800GTX. So, we'd be seeing 1.5GB of memory, which should be plenty for at least a year anyways. It is much more cost-effective than 2GB (or 16 chips) after all.

Also, 512 shaders is extremely likely, which is over 2 times that of a GTX 285 (240 shaders). Some unreliable rumor sites claim that the GT300 will have over 3 times the performance of GT200 but I find this a bit outrageous. It must be in relation to a specific scenario, like a DX11 game, huh? Eh? :D

I wouldnt be surprised if it has over 3 billion trannies. Perhaps Nvidia will use such a different architecture that it actually yields 3 TFLOPS?? (Keep in mind how the GT200 was so much bigger than a RV770, yet still had lower TFLOPS performance.)

I too noticed the 5870's memory bandwidth being uninspiring compared to even a GTX 285; it's super disappointing, really. IMO it's what is holding the 5870 back; with 200+ GB/s of bandwidth I think it'd do even better, though I could be wrong.

384-bit GDDR5 with an entry-level 1.5GB sounds SWEET, dude; plenty of bandwidth and a perfect amount of frame buffer IMO.

The TFLOPS question is a weird one: the 4870s creamed GT200 for GFLOPS and the 5870 is no slouch on paper either. I reckon it's funny to see Nvidia at an 'on paper' disadvantage there, but it means sweet f/a in games :)

IMO if this new chip pushes over 2 TFLOPS, it will be an outright BEAST.
 
Joined
May 19, 2007
Messages
7,662 (1.24/day)
Location
c:\programs\kitteh.exe
Processor C2Q6600 @ 1.6 GHz
Motherboard Anus PQ5
Cooling ACFPro
Memory GEiL2 x 1 GB PC2 6400
Video Card(s) MSi 4830 (RIP)
Storage Seagate Barracuda 7200.10 320 GB Perpendicular Recording
Display(s) Dell 17'
Case El Cheepo
Audio Device(s) 7.1 Onboard
Power Supply Corsair TX750
Software MCE2K5
I'd hit Ruby


















with my digital pens
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
25,887 (3.79/day)
Location
Alabama
System Name Rocinante
Processor I9 14900KS
Motherboard EVGA z690 Dark KINGPIN (modded BIOS)
Cooling EK-AIO Elite 360 D-RGB
Memory 64GB Gskill Trident Z5 DDR5 6000 @6400
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 500GB 980 Pro | 1x 1TB 980 Pro | 1x 8TB Corsair MP400
Display(s) Odyssey OLED G9 G95SC
Case Lian Li o11 Evo Dynamic White
Audio Device(s) Moondrop S8's on Schiit Hel 2e
Power Supply Bequiet! Power Pro 12 1500w
Mouse Lamzu Atlantis mini (White)
Keyboard Monsgeek M3 Lavender, Akko Crystal Blues
VR HMD Quest 3
Software Windows 11
Benchmark Scores I dont have time for that.
Yeah, it certainly looks as if there will be even more PhysX games released next year than this year. However, one big piece of news is that with Windows 7, there will no longer be support for a dedicated PhysX card, be it Ageia or GeForce.




what?
 
Joined
Feb 21, 2008
Messages
4,985 (0.84/day)
Location
Greensboro, NC, USA
System Name Cosmos F1000
Processor i9-9900k
Motherboard Gigabyte Z370XP SLI, BIOS 15a
Cooling Corsair H100i, Panaflo's on case
Memory XPG GAMMIX D30 2x16GB DDR4 3200 CL16
Video Card(s) EVGA RTX 2080 ti
Storage 1TB 960 Pro, 2TB Samsung 850 Pro, 4TB WD Hard Drive
Display(s) ASUS ROG SWIFT PG278Q 27"
Case CM Cosmos 1000
Audio Device(s) logitech 5.1 system (midrange quality)
Power Supply CORSAIR HXi HX1000i 1000watt
Mouse G400s Logitech
Keyboard K65 RGB Corsair Tenkeyless Cherry Red MX
Software Win10 Pro, Win7 x64 Professional