Bo_Fox
New Member
- Joined
- May 29, 2009
- Messages
- 480 (0.09/day)
- Location
- Barack Hussein Obama-Biden's Nation
| System Name | Flame Vortec Fatal1ty (rig1), UV Tourmaline Confexia (rig2) |
|---|---|
| Processor | 2 x Core i7's 4+Gigahertzzies |
| Motherboard | BL00DR4G3 and DFI UT-X58 T3eH8 |
| Cooling | Thermalright IFX-14 (better than TRUE) 2x push-push, Customized TT Big Typhoon |
| Memory | 6GB OCZ DDR3-1600 CAS7-7-7-1T, 6GB for 2nd rig |
| Video Card(s) | 8800GTX for "free" S3D (mtbs3d.com), 4870 1GB, HDTV Wonder (DRM-free) |
| Storage | WD RE3 1TB, Caviar Black 1TB 7.2k, 500GB 7.2k, Raptor X 10k |
| Display(s) | Sony GDM-FW900 24" CRT oc'ed to 2560x1600@68Hz, Dell 2405FPW 24" PVA (HDCP-free) |
| Case | custom gutted-out painted black case, silver UV case, lots of aesthetics-souped stuff |
| Audio Device(s) | Sonar X-Fi MB, Bernstein audio riser.. what?? |
| Power Supply | OCZ Fatal1ty 700W, Iceberg 680W, Fortron Booster X3 300W for GPU |
| Software | 2 partitions WinXP-32 on 2 drives per rig, 2 of Vista64 on 2 drives per rig |
| Benchmark Scores | 5.9 Vista Experience Index... yay!!! What??? :) |
I agree with Bo_Fox. I mean, the GTX 260 is a hell of a lot bigger than the GTS 250 with much better specs, yet the GTS 250 is only 6% slower on average, and even less at higher resolutions. I personally agree that had Nvidia given the GTS 250/9800GTX a 384-bit bus with 24 ROPs, it probably would have closed that gap with fewer shaders. But eh, we can all only wonder what would have made sense.
Thank you, you're the man!
Thanks for understanding.. if Nvidia read this, they would definitely feel it! Especially Jen-Hsun Huang (I forgot how to spell his name; was that right?)
Nvidia could've at least had a G92, a G90, and a GT200. That is, a G90 on 65nm, and then a G90 and GT200 on 55nm. That way, Nvidia would've maintained a lead over the 4870 with a 384-bit, 24-ROP GTS 250 on the 65nm process, then continued the lead over the 4890 with the GTX 285 on 55nm. HalfAHertz was right that there would be additional manufacturing costs from more chip variations and less volume per design, yet Nvidia would have yielded much better profits (yes, the research/development costs would've been a bit higher with fewer "returns" from mass production at TSMC for a specific design, but it still would have paid off). But Nvidia just wanted to go with a monster chip from the start, at whatever cost, after the "supreme" success of the monster G80 chip.
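To put some rough numbers on the 384-bit argument: theoretical memory bandwidth is just the bus width in bytes times the effective memory clock. Here's a quick back-of-the-envelope sketch (the clock figures are the commonly listed reference specs; a hypothetical 384-bit GTS 250 is assumed to keep the same GDDR3 clock, which is my assumption, not anything Nvidia announced):

```python
def bandwidth_gbs(bus_bits, eff_clock_mhz):
    """Theoretical memory bandwidth in GB/s:
    (bus width in bytes) * effective clock in GHz."""
    return bus_bits / 8 * eff_clock_mhz / 1000

# GTS 250 as shipped: 256-bit bus, 2200 MHz effective GDDR3
print(bandwidth_gbs(256, 2200))   # ~70.4 GB/s
# Hypothetical 384-bit GTS 250 at the same memory clock (my assumption)
print(bandwidth_gbs(384, 2200))   # ~105.6 GB/s
# GTX 260 reference: 448-bit bus, ~2000 MHz effective
print(bandwidth_gbs(448, 1998))   # ~111.9 GB/s
```

So a 384-bit GTS 250 would have landed within striking distance of the GTX 260's bandwidth, which is roughly what the "it would've closed the gap" argument above is counting on.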