
ASUS Unveils the GeForce GTX 780 STRIX 6 GB Graphics Card

Joined
Jan 10, 2014
Messages
161 (0.04/day)
System Name First Gaming PC
Processor AMD APU Kaveri A10-7850k
Motherboard MSI A88XM-E45
Cooling Stock Cooler
Memory Kingston HyperX 8 GB 1866MHz
Video Card(s) Integrated with CPU
Storage Kingston Hyperx 3k 120 GB(OS) + 1 TB WD Blue
Display(s) LG 20EN33V 1920 x 1080
Case Infinity Rave
Audio Device(s) Integrated Sound Card
Power Supply Enermax NAXN 500w
Software Windows 8.1 64-bit
Joined
Apr 29, 2014
Messages
4,180 (1.15/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtek ALC1150 (onboard)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
Artificial construct. As has been pointed out here (and elsewhere) ad nauseam, a board (or boards) runs out of GPU horsepower before the vRAM limit becomes the limiting factor. The Hexus article you linked to shows a framerate of 20 frames per second - hardly playable. Disabling AA will bring the framerate back to something realistic, and it also frees up vRAM.
I'd also note: in a real-world gaming scenario, who is likely to sacrifice playability for largely superfluous full-screen AA on a ~30" 4K monitor? There may be people around who would run their game as a lovely slideshow to prove a point, but I'm willing to bet many, many more would aim for a fluid 60 f.p.s. over some barely perceptible aliasing.
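For a rough sense of why disabling AA frees vRAM, here is a back-of-envelope sketch (my own illustration, not from any linked review) of how the render targets alone scale with MSAA at 4K, assuming 32-bit color and 32-bit depth/stencil per sample:

```python
# Back-of-envelope sketch: vRAM consumed by the render targets alone at 4K,
# and why 4x MSAA multiplies it. Assumes 4 bytes of color plus 4 bytes of
# depth/stencil per sample; textures and geometry come on top of this.

def render_target_mib(width, height, msaa_samples, bytes_per_sample=4 + 4):
    """Approximate color + depth/stencil footprint in MiB."""
    return width * height * msaa_samples * bytes_per_sample / 2**20

no_aa = render_target_mib(3840, 2160, 1)   # ~63 MiB
msaa4 = render_target_mib(3840, 2160, 4)   # ~253 MiB

print(f"4K, no AA  : {no_aa:.0f} MiB")
print(f"4K, 4x MSAA: {msaa4:.0f} MiB")
```

Texture pools dwarf these figures in practice, so treat this purely as an illustration of the direction of the effect.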
You're comparing a weaker card, the Titan, to the 780 Ti, first of all...

Second: to run 4K at ultra with decent frame rates you have to run SLI or CFX, and when you compare the two you see the VRAM limits come into play very easily. In every game in that review at high resolutions (4K+), the AMD cards pull way ahead, especially in some cases like BF4 (albeit they used Mantle, which is an advantage, so skip that one if it pleases you). So this is anything but an "artificial construct", because it's pretty apparent there's a limit somewhere when the 780 Ti was clearly supposed to be the faster card compared to an R9 290X. An SLI configuration versus a CFX configuration should result in the SLI setup being faster, since scaling remains roughly on par between the two for dual GPUs; however, in these scenarios the SLI cards are completely limited and lose by up to a 44% performance difference. Even in Tom's Hardware's review of the 295X2 in CFX, where the results are closer (I'm not comparing quad-fire here, only two-card setups), the 295X2 or 290X setup is still ahead in almost all cases.
 
Joined
Sep 7, 2011
Messages
2,785 (0.60/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
You're comparing a weaker card, the Titan, to the 780 Ti, first of all...
Do the math. The 780 Ti is ~11% faster than the Titan, which shows in the frames-per-second measurement, yet the Titan is "using" 20-30% more vRAM at the exact same game settings.
Second: to run 4K at ultra with decent frame rates you have to run SLI or CFX, and when you compare the two
Why the hell would you compare the two? The point of the discussion is frame buffer size and its relationship with higher resolutions. The best method of comparison is the same GPU with varying frame buffers, but you think that comparing two different architectures from two different vendors somehow makes for a better comparison... a comparison that doesn't take into account the different underlying architectures and multi-GPU scaling. It just looks like you're trying too hard to troll/deluge an article about an Nvidia product with AMD product placement.



******************** AN ASIDE : NOT GERMANE TO THE ACTUAL DISCUSSION*********************
An SLI configuration versus a CFX configuration should result in the SLI setup being faster, since scaling remains roughly on par between the two for dual GPUs.
Actual Crysis 3 scaling: CrossFireX: 82%... SLI: 54% at the same framerate level shown in the HardOCP review.



...yet even with a 50+% scaling advantage and a 33% larger framebuffer, the dual 290Xs are only 12.5%-22.7% faster in Crysis 3 according to the article you just linked to... yet single cards are within a few percentage points of each other. Now why would that be? If your supposition about the value of the AMD cards' larger framebuffer holds water, then the only other possible answer is that AMD's driver isn't working very well.
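For clarity, the scaling percentages traded back and forth here can be read as the second card's gain over a single card. A minimal sketch of that arithmetic, with made-up fps numbers:

```python
# Minimal sketch of multi-GPU "scaling": the extra performance the second
# card adds, relative to one card. Figures below are illustrative only,
# not taken from any review linked in the thread.

def scaling_pct(single_fps, dual_fps):
    """Second-card gain as a percentage of single-card performance."""
    return (dual_fps / single_fps - 1) * 100

# e.g. a card at 25 fps alone and 45.5 fps in a pair scales at 82%;
# perfect scaling (double the fps) would be 100%.
print(f"{scaling_pct(25, 45.5):.0f}%")
print(f"{scaling_pct(30, 60):.0f}%")
```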
Just because the clocks are the same doesn't mean the cards are; I will wait for the REAL review, just for a comparison with another brand.
Peachy. If clocks aren't indicative, why bother asking the question in the first place?
How well does this thing do against the MSI GTX 780 Gaming or Lightning edition?
 
Why the hell would you compare the two? The point of the discussion is frame buffer size and its relationship with higher resolutions. The best method of comparison is the same GPU with varying frame buffers, but you think that comparing two different architectures from two different vendors somehow makes for a better comparison... a comparison that doesn't take into account the different underlying architectures and multi-GPU scaling. It just looks like you're trying too hard to troll/deluge an article about an Nvidia product with AMD product placement.
Or... it actually works very well to show the frame buffer limits caused by the lower amount of VRAM. You seem to forget that at lower resolutions (generally 1600p and below), 290X CFX vs 780 Ti SLI comes out with the SLI system ahead, whereas in this article every game is happier running on the 290X CFX setup at 4K. Tomb Raider generally tends to like the 780 Ti setup more... You seem to want to stick with Crysis 3, but all the other big names show the same trend, and in the BF4 section the article even states:
We think there is definitely some VRAM capacity issues with the lesser 3GB per GPU on GTX 780 Ti at 4K with 4X MSAA
Here is an apples-to-apples comparison, voilà:

In all actuality, the GTX Titan should not be ahead, stock to stock, of a GTX 780 Ti in SLI in Crysis 3, yet it is... That's a weaker card, from the same company, with weaker core clocks, that is ahead. Not by much, but it's a difference, and on a weaker card. In fact, the picture you yourself posted shows that as well... Odd that you would still make this claim with the obvious proof right there in your face, especially because that's a one-card comparison.

Not all games or settings cause VRAM to be a limiting factor, but some do, which is very apparent at 4K. Even if only a handful of games are VRAM-limited right now, the future will only bring more demanding games that need more VRAM. On top of that, every review site shows different results for each game, which of course depends on the settings. But if the limit is being hit, then it's causing at least SOME performance loss somewhere, even if it's not much.


...yet even with a 50+% scaling advantage and a 33% larger framebuffer, the dual 290Xs are only 12.5%-22.7% faster in Crysis 3 according to the article you just linked to... yet single cards are within a few percentage points of each other. Now why would that be? If your supposition about the value of the AMD cards' larger framebuffer holds water, then the only other possible answer is that AMD's driver isn't working very well.
Or it could be that one card hits its limit before that really comes into play... As I've said, no one is going to game at 4K with one GPU because, as every review site says, it's pretty much not feasible without dropping quality in most games. So the realistic options for the gamer crowd are a 780 Ti SLI setup or a 290X CFX setup/295X2, unless you want to splurge on Titan Blacks, which cost significantly more. The other alternative is to wait and buy 6GB 780 Ti or 780 cards, which, shockingly, will fix the issue...

Stop trying to bring a fanboy argument into this discussion; I've said that the 6GB alleviates the issue multiple times, in multiple posts on this same forum. I could not care less in this instance about an AMD vs Nvidia debate; I care more about the necessity of more than 3GB on GPUs in this new 4K trend, which will be alleviated thanks to EVGA, Palit, Asus, and the others making 6GB cards. They obviously saw the need for it, so they are going to release it.
 
Or... it actually works very well to show the frame buffer limits caused by the lower amount of VRAM. You seem to forget that at lower resolutions (generally 1600p and below), 290X CFX vs 780 Ti SLI comes out with the SLI system ahead.
That is, for the most part, actually a product of the ratio of ROPs to compute blocks. A Hawaii GPU, for instance, has 64 ROPs fed by 44 CUs. A GK110, on the other hand, has 48 ROPs (even though it has more cores: 2880 to 2816) fed by 15 SMXs. Add in the associated scheduling dependencies and that is where the main differences lie. It all comes down to what the chip designers prioritized. Just as Nvidia's chips are a compromise, so are AMD's. As an example, I don't think anyone would say that the GTX 660 and R9 270X are superior to Hawaii, but isolating one design compromise can make it look just that way.
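Using only the unit counts quoted above, the balance difference can be made concrete. A hedged sketch (the ratio is illustrative; real throughput also depends on clocks, scheduling, and memory):

```python
# Sketch of the ROP-vs-compute balance argument, using the unit counts
# quoted above: Hawaii has 64 ROPs and 2816 shaders, GK110 has 48 ROPs
# and 2880 shaders. A higher ROPs-per-shader ratio favours raw pixel
# throughput, which matters more as resolution climbs.

def rops_per_1k_shaders(rops, shaders):
    """ROP count normalized per 1000 shader cores."""
    return rops / shaders * 1000

hawaii = rops_per_1k_shaders(64, 2816)   # ~22.7
gk110 = rops_per_1k_shaders(48, 2880)    # ~16.7

print(f"Hawaii: {hawaii:.1f} ROPs per 1000 shaders")
print(f"GK110 : {gk110:.1f} ROPs per 1000 shaders")
```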

Here is an apples-to-apples comparison, voilà:

In all actuality, the GTX Titan should not be ahead, stock to stock, of a GTX 780 Ti in SLI in Crysis 3, yet it is... That's a weaker card, from the same company, with weaker core clocks, that is ahead.
The cards aren't "stock to stock"; they were all overclocked to their maximum stable frequencies, as was stated in numerous forums. If you're looking at taking clocks out of the equation, then this is the chart you should be looking at, since it compares framebuffer and core count.

In fact, the picture you yourself posted shows that as well... Odd that you would still make this claim with the obvious proof right there in your face, especially because that's a one-card comparison.
And we've come full circle. I never said that you couldn't find a situation where the larger framebuffer would provide better numbers - a juggling of full-screen AA and texture settings could easily manufacture that scenario. My point is that the GPU runs out of processing power before the vRAM limitation becomes the limiting factor - unless you see sub-30 f.p.s. as indicative of real-world usage in Crysis 3. Do you?
Even if only a handful of games are VRAM-limited right now, the future will only bring more demanding games that need more VRAM.
And you see this future arriving before the next series of cards, which will undoubtedly be better suited to this exact scenario? Given that Pirate Islands and GM204 are slated to arrive in around six months, and GK110's successor will tape out in the next couple of weeks or so, that seems like a very optimistic viewpoint - more so given that a 4K adopter probably won't have any qualms about upgrading to the newest and most powerful boards.
I've said that the 6GB alleviates the issue multiple times, in multiple posts on this same forum.
And I've yet to see any actual proof to back up the assertion.
What needs to be shown is the same GPU with two different frame buffer capacities (say 3GB and 6GB) being benchmarked at playable framerates - at least for the larger frame buffer card... and in more than a single benchmark. I doubt very many people buy a 4K screen and multiple high-end GPUs for a single title.
At the moment it isn't really anything more than the occasional outlier result... if that.
They obviously saw the need for it, so they are going to release it.
Well, marketing saw the need for it if nothing else. Strange that Nvidia OK'd 6GB 780s the moment Sapphire's 8GB 290X showed up at CeBIT, don't you think? Sapphire announced an 8GB 290X on 13 March; EVGA announced their 6GB 780 eight days later. Odd that 4K gaming has been a widespread talking point for some time, yet it suddenly became imperative to have 3GB+ from both IHVs and premiere single-vendor AIBs within days of each other.

So, feel free to post links to any gaming benchmark that highlights the difference of frame buffer only (say, 3GB vs 6GB, or 4GB vs 8GB) using the same GPU at the same clocks. A comparison should eliminate as many variables as possible. Anything else comes under the heading of opinion - and while you're welcome to air yours, as is everyone else, it hardly constitutes proof.
 
The cards aren't "stock to stock"; they were all overclocked to their maximum stable frequencies, as was stated in numerous forums. If you're looking at taking clocks out of the equation, then this is the chart you should be looking at, since it compares framebuffer and core count.
OK, stop right there. I've seen the link, and the video where they show these benchmarks states at the beginning that the Titan Black's core clocks are the default 889 MHz base...
But I'll just pretend what you said is true for the benchmarks listed: one card would not randomly scale that much better on one random game. All the games except Crysis 3 show very consistent scaling and performance increases.

And we've come full circle. I never said that you couldn't find a situation where the larger framebuffer would provide better numbers - a juggling of full-screen AA and texture settings could easily manufacture that scenario. My point is that the GPU runs out of processing power before the vRAM limitation becomes the limiting factor - unless you see sub-30 f.p.s. as indicative of real-world usage in Crysis 3. Do you?
The single GPU does, not the dual GPU and beyond. If people are gaming at 4K, they are grabbing at least a pair of GPUs to drive a 4K display. They are not spending 700+ bucks on a monitor only to then spend 700 bucks on a single GPU at this stage; they are going to buy a couple and try to up the performance, in which case 3GB limits the scalability. If you're hitting the max VRAM with one card, you're already hitting a wall (unless the game magically requires exactly the max to run), which indicates adding a second card is going to run into some limiting factor. Also, juggling settings??? You mean cranking them up to max? Why on Earth am I spending close to two grand on graphics cards to play at a low quality setting? If people are spending that kind of money on top-tier GPUs, AMD or Nvidia, they want to crank the settings up.

unless you see sub-30 f.p.s. as indicative of real-world usage in Crysis 3. Do you?

Well, there are 30Hz 4K monitors, and a performance difference is still a performance difference.

And you see this future arriving before the next series of cards, which will undoubtedly be better suited to this exact scenario? Given that Pirate Islands and GM204 are slated to arrive in around six months, and GK110's successor will tape out in the next couple of weeks or so, that seems like a very optimistic viewpoint - more so given that a 4K adopter probably won't have any qualms about upgrading to the newest and most powerful boards.
Just because you buy a new GPU every year does not mean that everyone does. Are you saying that spending 1,400+ dollars just to have it perform poorly at the resolution both companies seem adamant about advertising is OK? That could almost be labeled false advertising. If the new cards come out in the next few weeks, then more power to them, but having to upgrade after spending possibly 1,400+ dollars because of something as small as a VRAM limitation feels stupid.

And I've yet to see any actual proof to back up the assertion.
What needs to be shown is the same GPU with two different frame buffer capacities (say 3GB and 6GB) being benchmarked at playable framerates - at least for the larger frame buffer card... and in more than a single benchmark. I doubt very many people buy a 4K screen and multiple high-end GPUs for a single title.
At the moment it isn't really anything more than the occasional outlier result... if that.
I'll name three games that hit the 3GB wall off the top of my head: BF4, Crysis 3, and Total War: Rome II. If you want more, you're going to have to dig for an hour, but either way enough data points have shown that the 3GB limit is hit on the 780/780 Ti in some games, which still indicates there's a wall. If you want to see the 6GB cards tested, you're going to have to wait, since all that's up right now are the basic Palit JetStream reviews.

Well, marketing saw the need for it if nothing else. Strange that Nvidia OK'd 6GB 780s the moment Sapphire's 8GB 290X showed up at CeBIT, don't you think? Sapphire announced an 8GB 290X on 13 March; EVGA announced their 6GB 780 eight days later. Odd that 4K gaming has been a widespread talking point for some time, yet it suddenly became imperative to have 3GB+ from both IHVs and premiere single-vendor AIBs within days of each other.
Which I knew would happen; AMD has been pro-4K for a while and Nvidia jumped on the same bandwagon. The only difference is that 3GB is right on the edge and not enough for all games, which makes 4K gaming bad on the 3GB counterparts. Multiple Nvidia partners have announced 6GB edition cards, yet only Sapphire has announced an 8GB R9 290X. That probably has something to do with the fact that the 4GB limit has not been hit nearly as much as the 3GB one.

So, feel free to post links to any gaming benchmark that highlights the difference of frame buffer only (say, 3GB vs 6GB, or 4GB vs 8GB) using the same GPU at the same clocks. A comparison should eliminate as many variables as possible. Anything else comes under the heading of opinion - and while you're welcome to air yours, as is everyone else, it hardly constitutes proof.

Opinion??? I just showed games using up the 3GB frame buffer in the video I first posted, which means it's not enough... Whatever, I'm done here and won't read whatever is posted next. I have already made my point...

In the video (Digital Storm Titan Black 4K), three games and a benchmark are used to show the relative performance of the three cards in both single- and multi-card systems.
The performance order for all single-GPU configs in the games is as follows:
Titan Black
780 Ti
Titan
The order for multi-GPU configs is the same until Crysis 3:
Titan Black
780 Ti
Titan
In Crysis 3, the order on the multi-GPU side changes to:
Titan Black
Titan
780 Ti
The Titan overtakes the 780 Ti in a multi-GPU setup when it was behind in a single-GPU config. This is obviously indicative of either something going horribly wrong or something holding the 780 Ti back. The logical conclusion is that the game exceeded something the 780 Ti does not have, and since the GPU and generally the core clocks are lower on a Titan (unless they overclocked it a lot further, but it still shows lower FPS in everything else), the only major contributing factor left is the 3GB difference on the card.

This indicates a need for more RAM...
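The inference being drawn here can be stated mechanically: if a card's rank improves when going from one GPU to two, something other than raw GPU speed is the limiter. A sketch with placeholder fps values (not the video's actual numbers):

```python
# Sketch of the rank-inversion argument: the Titan passing the 780 Ti only
# in the dual-GPU config suggests a resource limit rather than raw speed.
# FPS values below are placeholders for illustration, not measured results.

single = {"Titan Black": 30, "780 Ti": 29, "Titan": 27}
dual = {"Titan Black": 55, "Titan": 50, "780 Ti": 46}

def rank(results):
    """Card names ordered from fastest to slowest."""
    return sorted(results, key=results.get, reverse=True)

print(rank(single))  # ['Titan Black', '780 Ti', 'Titan']
print(rank(dual))    # ['Titan Black', 'Titan', '780 Ti']

# The ordering inverts between configs - the signal being read as a
# vRAM bottleneck on the 3GB card.
assert rank(single) != rank(dual)
```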
 
OK, stop right there. I've seen the link, and the video where they show these benchmarks states at the beginning that the Titan Black's core clocks are the default 889 MHz base...
Er, so what if they were? Your original point was that the GTX Titan was supposedly faster than the GTX 780 Ti...
In all actuality, the GTX Titan should not be ahead, stock to stock, of a GTX 780 Ti in SLI in Crysis 3, yet it is...
So what do the Titan Black's clocks have to do with it? Also:
1. The video just shows a GPU-Z screen for the stock card; it doesn't follow that stock clocks were used - especially when the Digital Storm reviewer actually posted the overclocks used.
2. You're also referencing the wrong Digital Storm review from the chart I posted.
Also, juggling settings??? You mean cranking them up to max?
No. I mean what I referenced:
- a juggling of full-screen AA and texture settings could easily manufacture that scenario.
Simply add a combination of enough MSAA (or SSAA) and texture settings (or full dynamic lighting or similar) to fill the smaller framebuffer without choking the GPU, which would stall out both versions of the card.
I'll name three games that hit the 3GB wall off the top of my head: BF4, Crysis 3, and Total War: Rome II.
You still have to make the distinction between usage and allocation. This subject has been reiterated more times than I can remember (here's one... here's another) in response to the "running out of vRAM" doom-mongers. vRAM usage monitors don't report actual usage; they report vRAM allocation - that is, actual usage plus whatever the application wants to cache. Typically, if the frame buffer is larger, the app takes the opportunity to cache more resources. It isn't uncommon for an app to max out whatever framebuffer it finds.
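A toy model of the allocation-vs-usage point (the function and numbers are my illustrative assumptions, not any real monitor's logic): an engine that opportunistically caches will report more "usage" on a larger card even when its real working set is identical.

```python
# Toy model of usage vs allocation: a monitor typically reports the
# working set PLUS whatever optional cache the engine could fit in the
# framebuffer. Numbers below are illustrative assumptions only.

def reported_vram_mb(working_set_mb, cache_wanted_mb, framebuffer_mb):
    """What a typical vRAM monitor would show: working set + cache that fits."""
    cache = min(cache_wanted_mb, framebuffer_mb - working_set_mb)
    return working_set_mb + max(cache, 0)

# Same game, same settings, same real working set - different cards:
print(reported_vram_mb(2200, 3000, 3072))  # 3 GB card: reports 3072 MB ("full")
print(reported_vram_mb(2200, 3000, 6144))  # 6 GB card: reports 5200 MB
```

This is why a 3GB card reading "3GB used" does not by itself prove the game needs more than 3GB.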

I'd also note that Rome II's engine actively adjusts quality as well as caching to tailor itself to the available framebuffer, so I wouldn't take any 3GB usage scenarios as gospel.


As for the rest, I still don't see any benchmarks comparing 3GB and 6GB versions of the same GPU-based card in a multi-GPU configuration showing an advantage at a playable framerate.
 

Stolicran

New Member
Joined
Aug 6, 2014
Messages
2 (0.00/day)
Considering Asus allows VGA Hotwire, you should be able to OC the core nicely.
Even if you look at a stock 780 in 2-way SLI compared to the Titan, the Titan fails.

In terms of overall gaming performance, the graphical capabilities of the Nvidia GeForce GTX 780 SLI are significantly better than the Nvidia GeForce GTX Titan.

http://www.game-debate.com/gpu/inde...pare=geforce-gtx-780-sli-vs-geforce-gtx-titan

The GTX 780 has 288.4 GB/sec greater memory bandwidth than the GeForce GTX Titan, which means that the memory performance of the GTX 780 is massively better than the GeForce GTX Titan. (This is the stock 3GB card, NOT this new 6GB variation.)
 
Considering Asus allows VGA Hotwire, you should be able to OC the core nicely.
Even if you look at a stock 780 in 2-way SLI compared to the Titan, the Titan fails.

In terms of overall gaming performance, the graphical capabilities of the Nvidia GeForce GTX 780 SLI are significantly better than the Nvidia GeForce GTX Titan.

http://www.game-debate.com/gpu/inde...pare=geforce-gtx-780-sli-vs-geforce-gtx-titan

The GTX 780 has 288.4 GB/sec greater memory bandwidth than the GeForce GTX Titan, which means that the memory performance of the GTX 780 is massively better than the GeForce GTX Titan. (This is the stock 3GB card, NOT this new 6GB variation.)

Wow! TWO 780s are better than ONE Titan, and you attribute that to memory performance?
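For context on where the quoted 288.4 GB/s figure comes from: it is exactly one GTX 780's total memory bandwidth (384-bit bus, 6008 MT/s effective GDDR5, per public spec sheets), so the comparison site appears to be summing both SLI cards' bandwidth even though each GPU only reads its own local copy of the data:

```python
# Where 288.4 GB/s comes from: one GTX 780's peak memory bandwidth.
# Specs assumed from public GTX 780 datasheets: 384-bit bus, 6008 MT/s
# effective GDDR5. Summing two SLI cards' bandwidth (as the quoted site
# seems to do) is misleading - the pools aren't shared.

def bandwidth_gbs(bus_width_bits, effective_mts):
    """Peak bandwidth in GB/s: bus width in bytes x effective transfer rate."""
    return bus_width_bits / 8 * effective_mts * 1e6 / 1e9

gtx_780 = bandwidth_gbs(384, 6008)
print(f"{gtx_780:.1f} GB/s per card, SLI or not")
```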
 

Clearly not, as that would be silly.
I am in the camp that believes we are going to see a greater number of applications and games that require, and thrive with, more vRAM.

I am also an advocate of VGA Hotwire, which allows for some great overclocking in areas that Nvidia has locked for the average user.

SLI is not for everyone; however, in this array it would be greater than the sum of its parts. The link I posted is, I think, self-explanatory. However, if you care to dispute any or all of it, I'm all ears.

James
 
Joined
Sep 7, 2011
Messages
2,785 (0.60/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
Clearly not, as that would be silly.
I am in the camp that believes we are going to see a greater number of applications and games that require, and thrive with, more vRAM.
That is a given. The only real difference of opinion is when these games and apps will arrive in any quantity.
For some reason, some people seem to think that these new vRAM-taxing games and apps will become critical within the time frame of current architectures (Kepler, Maxwell, Volcanic Islands, Pirate Islands). I am not in that camp; I believe that exposing vRAM limitations will require a longer lead-in time. 4K is niche, made all the worse by Windows font issues. Game image-quality levels aren't dictated by the high end; they are dictated by the graphics capability of the cards that represent the bulk of sales.
By the time 4K and higher image-quality levels (path/ray tracing, voxel-based global illumination, etc.) become more widely accepted (look at the time it took for 1080p to become mainstream), we will have a whole new series of architectures based on high-bandwidth memory (HBM) that make the current top cards look like entry level - note that these cards will be largely consigned to the history books, and to budget gamers buying second-hand, once GPUs built on 20nm/16nm and packing wide-I/O memory (which is available to OEMs/ODMs now) arrive in a year or so. If you are of the opinion that the vRAM limitation will become critical before then, fine - but be aware that the vast majority of GPUs being sold are 1GB and 2GB boards. Do you really think developers are going to alienate 90+% of the user base without allowing the hardware to mature?
SLI is not for everyone; however, in this array it would be greater than the sum of its parts.
Thanks, I'm well aware of the advantages and otherwise of SLI (I've used dual/triple cards since the GeForce 6800 series) - and CrossFireX, for that matter - and I'm well acquainted with the GTX 780 in particular, considering it is what I'm presently using.
 