
NVIDIA GeForce GTX 1070 Ti by Late October

Oh, but you're wrong. For the last 7 years or so, game engines have used deferred rendering as the norm: they render the scene in layers into G-buffers and then composite the final frame buffer from those. With a higher-resolution frame buffer come equally high-resolution G-buffers, multiplied by the number of layers, so both bandwidth and fill-rate requirements rise at 4K. Case in point: a texture can be only 2x2 pixels, and those 4 texels can cover your entire 4K screen, yet you still need a certain texture fill rate in a deferred renderer, even if your shaders do nothing but let the texture samplers interpolate between 4 samples for every pixel on your 4K screen ;)
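For a rough sense of scale, here is a minimal back-of-the-envelope sketch. The numbers are my own assumptions, not any particular engine's: 4 G-buffer targets averaging 8 bytes per pixel, each written once by the geometry pass and read once by the lighting pass, 60 fps, no compression.

```python
# Illustrative sketch of how G-buffer footprint and raw per-frame traffic
# scale with resolution in a deferred renderer (assumed numbers only).

def gbuffer_traffic(width, height, layers=4, bytes_per_pixel=8, fps=60):
    pixels = width * height
    size_mb = pixels * layers * bytes_per_pixel / 1e6            # resident size
    # each byte is written once (geometry pass) and read once (lighting pass)
    traffic_gb_s = pixels * layers * bytes_per_pixel * 2 * fps / 1e9
    return size_mb, traffic_gb_s

for name, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    size, bw = gbuffer_traffic(w, h)
    print(f"{name}: G-buffer ~{size:.0f} MB, ~{bw:.1f} GB/s of raw traffic")
```

Under those assumptions the G-buffer footprint and traffic roughly double going from 1440p to 4K (about 118 MB / 14 GB/s vs 265 MB / 32 GB/s), which is the scaling I'm talking about.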
Try reading my post again.
Frame buffers scale with screen resolution; going from 1440p to 4K, even with AA, only increases consumption by megabytes. With tiled rendering, the frame buffers mostly stay cache-local, so the bandwidth requirements change very little as resolution changes.
Texture resources, which use most of the bandwidth, are not proportional to screen resolution; they are proportional to detail levels.
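To put numbers on the "only megabytes" part, here is a quick sketch with an assumed layout of one 4-byte color target plus a 4-byte depth buffer, optionally multiplied by an MSAA factor (illustrative only, real engines vary):

```python
# Plain frame buffer size at 1440p vs 4K under an assumed 4B color + 4B depth layout.

def framebuffer_mb(width, height, color_bpp=4, depth_bpp=4, msaa=1):
    return width * height * (color_bpp + depth_bpp) * msaa / 1e6

for msaa in (1, 4):
    at_1440p = framebuffer_mb(2560, 1440, msaa=msaa)
    at_4k = framebuffer_mb(3840, 2160, msaa=msaa)
    print(f"MSAA x{msaa}: 1440p {at_1440p:.0f} MB -> 4K {at_4k:.0f} MB "
          f"(+{at_4k - at_1440p:.0f} MB)")
```

Without MSAA the difference is roughly 30-40 MB, which is small next to the gigabytes of texture resources that dominate memory use.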
 
Dude, I hear you, but you are underestimating the importance of frame buffer resolution: all those texture resources need to be sampled more often if a polygon covers more pixels on screen, so frame buffer resolution directly drives increased bandwidth in both of the parts you call non-proportional.
 
No, you're wrong. A patch of a texture is loaded from memory regardless of sampling resolution. Higher resolution will increase TMU load, but not memory bandwidth.
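A toy model of that claim, with an assumed cache layout (64-byte lines, each covering a 4x4 block of 4-byte texels, not any specific GPU's format): DRAM traffic is driven by the unique texture blocks touched, while TMU work scales with the sample count.

```python
# Toy model: sampling the same 16x16 texel patch at two screen densities.
# Memory traffic ~ unique 4x4 texel blocks (one cache line each) touched;
# TMU work ~ number of samples taken.

def texture_traffic(samples, line_bytes=64):
    touched_lines = {(u // 4, v // 4) for u, v in samples}  # unique 4x4 blocks
    tmu_work = len(samples)                                  # scales with resolution
    dram_bytes = len(touched_lines) * line_bytes             # scales with unique texels
    return tmu_work, dram_bytes

coarse = [(u, v) for u in range(0, 16, 2) for v in range(0, 16, 2)]  # lower screen res
dense  = [(u, v) for u in range(16) for v in range(16)]              # higher screen res
for label, s in (("coarse", coarse), ("dense", dense)):
    work, dram = texture_traffic(s)
    print(f"{label}: {work} samples, {dram} bytes from DRAM")
```

In this model the dense case does 4x the sampling work but pulls the same bytes from memory, because the extra samples hit lines already in cache.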
 
At 4K it may easily hit a lower MIP level of the texture on some medium-distance surface, but that's beside the point, because you are fixated on the part of the bandwidth that doesn't change much with resolution. The other part, the one that delta memory compression tries to help with, is compositing the frame buffer: those are also huge textures that change every frame and need to be sampled to calculate the final image.
Seems like you are arguing an infinite-cache scenario.
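On the MIP point, here is a small sketch of the usual LOD idea (log2 of texels per pixel along one axis; the texture size and footprint numbers are made up). A surface that covers twice as many pixels at 4K drops one mip level to a more detailed one, so it does touch more texel data.

```python
import math

# Assumed: standard LOD selection as log2(texels per pixel), clamped at mip 0.
def mip_level(texture_size, pixels_covered):
    texels_per_pixel = texture_size / pixels_covered
    return max(0.0, math.log2(texels_per_pixel))

tex = 2048  # texture width in texels (assumed)
for label, covered in (("1440p footprint, 256 px", 256), ("4K footprint, 512 px", 512)):
    print(f"{label}: ~mip {mip_level(tex, covered):.1f}")
```

Going from a 256-pixel to a 512-pixel footprint moves the surface from roughly mip 3 to mip 2, i.e. about 4x the texel data for that surface.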
 
You need to learn how binning works. By your logic every GTX 1070 is a failed GTX 1080.
Well, isn't it?? ;) ;)

I'm curious, how does binning work exactly? In my understanding of binning, every 1070 is a failed GTX 1080, and every GTX 1080 is a failed Titan X Pascal -> GTX 1080 Ti -> Titan Xp.

My understanding is that they manufacture their main target chip, the GP102. Then of course there are lots of defects, and not every chip performs to their standards, so they get classified/remodelled/modified/cut/locked into different lower-tier GPUs, depending on how bad the results are.
 
Nordic Hardware is reporting the 1070 Ti will have a suggested MSRP of $430.
 