
MSI Announces Radeon R9 290X Gaming 8GB Graphics Card

I keep seeing people saying 4K is too much for one card.
I have to say that in a lot of cases you are wrong.
I was playing in 4K on one GTX 680 for some time.
One GTX 980 runs Star Citizen at 4K perfectly smoothly.
Tested it myself on ultra.

I have never seen anywhere near 8 GB of memory usage though.
The most I have seen is about 2 GB.
 
I use almost 3 GB in BF4 at 2560x1440 (default ultra)... Amazed you aren't seeing more than 2 GB in other titles.

As far as single-card performance at 4K goes, it depends on the title and settings, but most games aren't what one would consider 'playable' (30 fps+) with reasonable texture settings when looking at benchmarks... I'd rather rock 1440p and ultra than 4K and medium... But that is just me.

Not what I would consider playable here... But it isn't an FPS, so more forgiveness is built in, not to mention it's largely a CPU-bound game... http://www.pugetsystems.com/labs/articles/Star-Citizen-Benchmark-Arena-Commander-v0-8-571/page3
 
I keep seeing people saying 4K is too much for one card.
I have to say that in a lot of cases you are wrong.
I was playing in 4K on one GTX 680 for some time.
One GTX 980 runs Star Citizen at 4K perfectly smoothly.
Tested it myself on ultra.

I have never seen anywhere near 8 GB of memory usage though.
The most I have seen is about 2 GB.
You're right, but the GTX 980 is only borderline (~30-40 fps) 4K-capable with recent games. It's not even capable of outputting 30 fps in Crysis 3.
I'm not so sure about next year's games, or with AA, texture mods, and ENB, even though it's currently the fastest and most expensive single card (not counting the old and odd Titans).

Comfortably running 4K with a single card is still a year or two away.
 
You're right, but the GTX 980 is only borderline (~30-40 fps) 4K-capable with recent games. It's not even capable of outputting 30 fps in Crysis 3.
I'm not so sure about next year's games, or with AA, texture mods, and ENB, even though it's currently the fastest and most expensive single card (not counting the old and odd Titans).

Comfortably running 4K with a single card is still a year or two away.

Crysis is a terrible benchmark IMHO.
Crysis is the perfect example of a badly optimized game.
I really don't think about "oh, can I play Crysis in 4K?".
Who does?

I can comfortably play Star Citizen at 4K on one card.

I understand what you mean, and to an extent I agree, but most games you will be able to play in 4K on one GTX 980.
 
I use almost 3 GB in BF4 at 2560x1440 (default ultra)... Amazed you aren't seeing more than 2 GB in other titles.

As far as single-card performance at 4K goes, it depends on the title and settings, but most games aren't what one would consider 'playable' (30 fps+) with reasonable texture settings when looking at benchmarks... I'd rather rock 1440p and ultra than 4K and medium... But that is just me.

Not what I would consider playable here... But it isn't an FPS, so more forgiveness is built in, not to mention it's largely a CPU-bound game... http://www.pugetsystems.com/labs/articles/Star-Citizen-Benchmark-Arena-Commander-v0-8-571/page3

That's an old version of SC.
 
Are those DirectFETs?
 
Well, tbh they could've added that new TFV cooler on top. I mean, why wouldn't you?
 
You won't need AA with 4K, unless you're using 65-inch or 80-inch 4K TVs, instead of 28-inch or 32-inch monitors. At monitor sizes, the jagged lines simply aren't discernible.

You are pretty much correct there.
I find that on my 50" 4K TV I use 2x AA though, never more.
Anything beyond that is totally unnecessary (the quick pixel-density numbers below show why).
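
A quick way to sanity-check the pixel-density claim above is to compute PPI for a few screen sizes. This is a minimal Python sketch that uses only the resolution and diagonal (it ignores viewing distance, which also matters); the sizes listed are just examples.

```python
# Rough pixel-density check for the "no AA needed at 4K monitor sizes" claim.
# Pure arithmetic, no external libraries; the screen sizes are only examples.
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch for a given resolution and diagonal size."""
    diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
    return diagonal_px / diagonal_in

for size in (28, 32, 50, 65, 80):
    print(f'4K at {size}": {ppi(3840, 2160, size):.0f} PPI')
```

At 28-32" the density comes out well above 130 PPI, while a 65-80" TV lands below 70 PPI, which is roughly why jaggies show up on big 4K TVs but not on 4K monitors.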
 
Crysis is a terrible benchmark IMHO.
Crysis is the perfect example of a badly optimized game.
I really don't think about "oh, can I play Crysis in 4K?".
Who does?

I can comfortably play Star Citizen at 4K on one card.

I understand what you mean, and to an extent I agree, but most games you will be able to play in 4K on one GTX 980.

Star Citizen is not even released. What are you talking about?!
 
Star Citizen is not even released. What are you talking about?!

You do know there is a multiplayer mode, right?
It runs perfectly in it.
 
Well, it seems like this card will never make it to the States and is only for Asia and Europe...
 
Well, it seems like this card will never make it to the States and is only for Asia and Europe...

There are too many people in the States with more money than sense for it not to get here. :peace:
Seeing as games will not use 8 GB of RAM at this point, I would be leery of buying into 8 GB of RAM on a GPU.

Once this massive-RAM idea is utilized and more mainstream, these 8 GB cards may not work as well as the cards released at that time.
 
Once this massive-RAM idea is utilized and more mainstream, these 8 GB cards may not work as well as the cards released at that time.
This card will become useful as soon as we start using more than 4 GB of video RAM; we don't need to be using a full 8 GB for it to be useful. 8 GB just seems like a big number, but it's no bigger a jump than 1 GB to 2 GB was back when most intense games didn't use more than 800 MB of VRAM. The simple fact that it's not all used isn't an issue. If any more than 4 GB is used, it's worth it, because the moment you start running off system memory, having run out of dedicated memory, your performance starts dropping pretty fast.

The real question is: when will 4 GB not be enough, and will it be soon? I think someone earlier posted that at 4K they are using as much as 3.5 GB of VRAM. That's really close to eating up 4 GB already. I think that's just worth mentioning.
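
For anyone who wants to watch their own VRAM usage while playing, here is a minimal logging sketch using NVIDIA's NVML bindings for Python. It assumes an NVIDIA card (like the GTX 980 discussed above) and the pynvml package; GPU monitoring tools such as GPU-Z report the same kind of figure.

```python
# Minimal VRAM-usage logger using NVIDIA's NVML Python bindings.
# Assumptions: an NVIDIA GPU and the pynvml package installed ("pip install pynvml").
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)    # first GPU in the system
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {mem.used / 1024**3:.2f} GiB of {mem.total / 1024**3:.2f} GiB")
        time.sleep(5)                            # sample every 5 seconds while you play
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```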
 
The reason I said that today's 8 GB cards may not work properly in the future (when 8 GB is really needed, utilized, and properly implemented) has to do with the flow of data between the cards and their screens. Are the connectors we use today going to be relevant (bandwidth-capable) in a few years, when so much more data will be passing to monitors?
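
To put rough numbers on the connector question, here is a small back-of-the-envelope sketch. The link rates are approximate effective figures for HDMI 1.4 and DisplayPort 1.2, and the 20% blanking overhead is an assumption for illustration, not an exact spec value.

```python
# Back-of-the-envelope check on whether today's display links can feed a 4K monitor.
# Link rates are approximate effective data rates (after encoding overhead) and the
# blanking factor is a rough assumption, used only for illustration.

def video_bitrate_gbps(width, height, bits_per_pixel, refresh_hz, blanking_overhead=1.2):
    """Raw pixel data rate in Gbit/s, with a rough factor for blanking intervals."""
    return width * height * bits_per_pixel * refresh_hz * blanking_overhead / 1e9

links_gbps = {
    "HDMI 1.4 (approx.)": 8.16,
    "DisplayPort 1.2 (approx.)": 17.28,
}

need = video_bitrate_gbps(3840, 2160, 24, 60)
print(f"4K 60 Hz, 24 bpp needs roughly {need:.1f} Gbit/s")
for name, rate in links_gbps.items():
    print(f"  {name}: {rate} Gbit/s -> {'enough' if rate >= need else 'not enough'}")
```

By that rough math, 4K at 60 Hz already saturates older links, which is exactly the kind of connector-obsolescence concern raised above.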
 
Again, anything over 4 GB will make 6 GB/8 GB cards useful... we are there at 4K for some titles, with more added weekly.

In a year or two, I would imagine so. By the time the method becomes obsolete, the cards should long be obsolete as well. Outside of NVIDIA's compression technology, 256-bit is 256-bit. Unless they completely change the way GPUs do things, which is possible, it will still be viable in, say, two years. If they do change the way GPUs do things, then all other cards, VRAM amount be damned, will be in the same boat.
 
Another thing that bothers me is that if you have two cards with 4 GB of RAM on them, why can't you have 8 GB to play with in SLI or Crossfire? (Yeah, just getting bitchy now.)
 
Blame SLI/CFX and how it currently works...

I understand that, but I wish they would figure out how to utilize all of every card's RAM.
 
Another thing that bothers me is that if you have two cards with 4 GB of RAM on them, why can't you have 8 GB to play with in SLI or Crossfire? (Yeah, just getting bitchy now.)
I understand that, but I wish they would figure out how to utilize all of every card's RAM.
They need to. Both cards are rendering the exact same scene, which requires the textures to be loaded on both graphics cards. PCI-E communication is relatively slow compared to the memory bus on a GPU, so there is no way you could share the data and process it in parallel.

Think of it this way: if you have two GPUs, one renders a frame, then the next renders a frame, then the first renders again, going back and forth (there's a toy sketch of this after this post). Both GPUs need all the assets to render that scene, so memory doesn't scale out. Also, sharing memory would imply a memory controller shared by two GPUs, and that adds latency and reduces bandwidth, which could cripple a GPU.

All in all, things are the way they are because people who are very smart and do this for a living haven't thought up a better way to do it. So before you complain a little too much, I beg you to remember that this isn't easy stuff to design and there are limitations to what hardware as we know it can do.
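
As a toy illustration of the alternate-frame rendering described above, here is a small sketch (class, GPU, and texture names are made up for illustration) showing why each card keeps its own full copy of the assets, so VRAM does not add up across cards.

```python
# Toy model of alternate-frame rendering (AFR): GPUs take turns rendering frames,
# and each one holds its *own* copy of every texture, so VRAM does not pool.
# All names here are invented for illustration only.

class ToyGPU:
    def __init__(self, name, vram_gb):
        self.name = name
        self.vram_gb = vram_gb
        self.textures = {}          # each GPU holds a full, duplicated texture set

    def upload(self, texture_name, size_gb):
        self.textures[texture_name] = size_gb

    def render(self, frame_number):
        used = sum(self.textures.values())
        print(f"{self.name} renders frame {frame_number} "
              f"(local VRAM in use: {used:.1f} / {self.vram_gb} GB)")

gpus = [ToyGPU("GPU0", 4), ToyGPU("GPU1", 4)]

# Every texture must be uploaded to *both* cards before either can render.
for tex, size in [("world", 1.5), ("characters", 1.0), ("shadows", 0.5)]:
    for gpu in gpus:
        gpu.upload(tex, size)

# AFR: frames alternate between the cards, but usable VRAM is still 4 GB, not 8 GB.
for frame in range(4):
    gpus[frame % 2].render(frame)
```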
 
GPUs just can't request more work than a piece of software can push out. That's why I think games should be developed for PC first and then dumbed down when pushed to consoles.
 
So before you complain a little too much, I beg you to remember that this isn't easy stuff to design and there are limitations to what hardware as we know it can do.

It's not a complaint as much as a wish for a better way of doing it. I like the GPUs that I have, and the new ones coming out too.
I remember that in the past it was possible to add more memory to some of my video cards. Plug-and-play memory would be cool nowadays.
 
It's not a complaint as much as a wish for a better way of doing it.
That's like people at work saying "Why can't we just do X?" or "Why can't we add feature Y?" Most people underestimate the work involved because they don't recognize everything that's needed to do one thing. Crossfire/SLI is no different. Its requirements are vastly understated, which leads to people like you wondering why it can't be simpler, and the answer is that it can't.
I remember that in the past it was possible to add more memory to some of my video cards. Plug-and-play memory would be cool nowadays.
Yeah, except for the fact that it would be slow, which is why they stopped doing it. Even with DIMMs like DDR3 or DDR4, each DIMM would have a 64-bit memory bus. To get at least a 256-bit bus you would need 4 DIMMs. Where the heck are those going to go on a GPU? Thanks, but no thanks. How about that 512-bit bus on the 290(X)? That would be 8 DIMMs. (There's a quick bandwidth comparison sketched after this post.)

I'm saying all this because CFX/SLI isn't going to change, because both cards *NEED* to have everything to render any given scene. Adding any back-and-forth communication between the GPUs to share memory would slow down overall rendering. In fact, duplicating data such as textures is one of the ways they can run Crossfire without having games implement something insane, and without slowing down the GPUs with superfluous communication, because memory requests are blocking and would slow the entire thing down.

I'm responding in depth because people like you say, "Well, wouldn't it be nice if this worked like this?" The problem is that, more often than not, people over-simplify the problem and don't realize that one of the reasons Crossfire/SLI even works is because the data exists on both GPUs. So if you want all of your VRAM, don't run Crossfire/SLI, but what you're carping about is part of how those technologies work.
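
To put numbers on the 64-bit-per-DIMM point above, here is a quick bandwidth sketch. The transfer rates are ballpark figures (DDR3-1600 and roughly 5 GT/s GDDR5 as on a 290X-class card), used only for illustration.

```python
# Quick bandwidth arithmetic behind the "64-bit per DIMM" point above.
# Transfer rates are ballpark figures for illustration, not exact product specs.

def bandwidth_gbs(bus_width_bits, transfer_rate_mtps):
    """Peak memory bandwidth in GB/s: bus width (bits) / 8 x transfers per second."""
    return bus_width_bits / 8 * transfer_rate_mtps * 1e6 / 1e9

# One DDR3-1600 DIMM: 64-bit channel at 1600 MT/s
print(f"Single DDR3-1600 DIMM:   {bandwidth_gbs(64, 1600):.1f} GB/s")
# Four DIMMs side by side would only add up to a 256-bit bus
print(f"Four DDR3-1600 DIMMs:    {bandwidth_gbs(256, 1600):.1f} GB/s")
# 290X-class GDDR5: 512-bit bus at ~5000 MT/s effective
print(f"512-bit GDDR5 @ 5 GT/s:  {bandwidth_gbs(512, 5000):.1f} GB/s")
```

The soldered GDDR5 setup comes out far ahead, which is roughly why socketed video memory disappeared.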
 
Really, the solution to the CFX/SLI issue is to make fatter cards with more chips on them. With something like a 295X2 or a Titan Z you have the potential to allow two GPU chips to access the same block of VRAM, or some way to have a command/instruction chip distribute the load over two powerful slave GPUs which share VRAM.
 