
Can AMD or Intel create a real SOC solution; universal for consoles and PCs?

Discussion in 'General Hardware' started by EastCoasthandle, May 18, 2011.

  1. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,208 (3.78/day)
    Thanks Received:
    2,843
    Location:
    04578
    You don't need two GPUs; people fail to understand the efficiency gains consoles have.

    Again: a 3.2GHz triple-core CPU that is slower at x86 work than a typical dual-core today,
    and 256MB of memory with what is essentially a 7800GT.

    Compare console graphics with Unreal Engine 3 on PC: we get higher resolution and AA in most UE3 games, but not much else, maybe slightly higher texture resolutions and of course higher frame rates.

    Now, roughly speaking, a 7800GT performance-wise = 8600GT = 9500GT. Think on that for a second.

    An 8600GT has only 32 shaders;
    a 2600XT has 120 shaders.

    So for NVIDIA / AMD it would take a 32-shader NVIDIA GPU or a 120-shader AMD GPU to equal the performance of a console (i.e. today's generation) at the SAME efficiency.

    AMD's current APUs have 4 cores + 400 shaders.

    Below is an example of what I mean, from the game FEAR.
    Current consoles are at 8600/2600 performance; the 400 shaders in the APU put it at 2900/8800-series performance. Now do you get the picture?

    The new IGPs built into AMD's CPUs coming out in June 2011, one month from now, will pair a quad core with a GPU on par with a 2900/8800. Given that FEAR runs at 30fps on console at roughly 1000x600 or 1280x720, a 2600/8600 is already faster, and the 2900/8800 offers nearly 4x that performance. In a high-efficiency situation like a console, where 80-95% utilization is the norm versus roughly 40% on a PC, the gain would be even larger.
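The utilization argument above can be put into rough numbers. A minimal Python sketch, where the ~85% console and ~40% PC utilization figures are this post's assumptions, not measurements:

```python
# Back-of-envelope: effective throughput = raw GPU throughput x utilization.
# The utilization percentages are the post's assumptions, not measured values.
def effective_perf(raw_perf, utilization):
    """Scale a raw performance number by the fraction of the chip real code uses."""
    return raw_perf * utilization

console = effective_perf(100, 0.85)  # console GPU, ~85% utilized
pc      = effective_perf(100, 0.40)  # identical GPU on PC, ~40% utilized
print(console / pc)                  # ~2.1x advantage from utilization alone
```

On this toy model, the same silicon delivers roughly twice the effective performance in a console, which is the whole basis of the argument.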

    New consoles aren't expected until 2014. So if in 2011 we can already get 4 cores + 400 shaders, roughly equal to a Phenom II 955 plus an 8800GT / 9800GT / GT 240 / GT 440 or a 2900 / 3800 / 4850 / 5670...

    All you have to do is look at the tick-tock cycle.

    2012 would bring an update to maybe 4 cores and 600 shaders; 2013 would be AMD's next process shrink, which would end up delivering 8 Bulldozer cores plus 800-1000 shaders.

    That would mean that by the time a console's design is finalized, an SOC with the power needed would be viable.

    A good way to think about this is to look at Dragon Age: Origins performance across ATI/AMD hardware. The game only used 700 of the 800 shaders on a 4870, and even more performance goes unused on newer cards, yet performance still scaled. So even at a disadvantage in coding efficiency, the cards continue to offer better performance. On a console, where developers squeeze out every bit they can, you could expect 5830/5850 or GTX 460/470 performance plus a quad-core chip to be feasible. Even at 1080p, non-3D games only need a solid 30fps; again, the chart below demonstrates my point. Since FEAR was ported to consoles, it gives an idea of what's to come.

    At the launch of the PS3, the 8800 launch was just around the corner; consoles were only on par with PC tech for around six months before new technology doubled the performance. It's also well known that most modern GPUs are limited by the CPU or other system components at low resolution. So an SOC quad core used to maximum capacity, versus a quad core today with only two cores in use, would again mean the consoles win out on efficiency. Not hard to connect the dots, really.

    I included PREY, another console title, below to demonstrate the performance gap.

    Now imagine if developers had a GTX 460 or 5850 with 512MB of dedicated GPU memory in every system. Most console games right now run at 30fps at 600p; looking at efficiency and PC performance charts at the same settings, that hardware would be around 130-160fps at 600p. Scaling the resolution, you could hit 1200p at 75-80fps. So by the time new consoles are finalized, an SOC would easily be doable at low power and cheap cost.
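The resolution-scaling step can be sketched with a first-order model in which frame rate falls in proportion to pixels drawn. Real games scale more gently at low resolutions because of CPU limits, so treat this as a pessimistic floor, not a prediction:

```python
# First-order model: fps scales inversely with pixel count.
# CPU-bound titles scale more gently, so this is a pessimistic floor.
def scaled_fps(fps, old_res, new_res):
    old_px = old_res[0] * old_res[1]
    new_px = new_res[0] * new_res[1]
    return fps * old_px / new_px

# 145 fps at roughly 1024x600 (midpoint of the 130-160 range quoted above)
print(scaled_fps(145, (1024, 600), (1920, 1200)))  # ~38.7 fps at 1200p
```

Under pure pixel scaling the 1200p number lands lower than the 75-80fps quoted above; the gap is exactly the CPU-bound headroom the post leans on.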

    Because the fact is we have hit a wall in advancing graphics: for every 5-10% we gain in image quality, it takes another 50% more power. This is why the current generation of consoles is getting dragged out. The next set of consoles should actually be SOCs, now that I think about it.
    Look at a 5850 today and realize the game it's running isn't optimized. Imagine a 5850 / GTX 460 at 80-95% efficiency: in PC GPU terms that would put it up around 6970 / GTX 570 performance levels, all on a tiny chip by itself, because it doesn't NEED to be as powerful as a desktop part due to far greater efficiency in the use of the hardware.
    [Image: FEAR benchmark chart]
    [Image: PREY benchmark chart]
     
    Last edited: May 23, 2011
  2. bostonbuddy New Member

    Joined:
    Apr 14, 2011
    Messages:
    381 (0.27/day)
    Thanks Received:
    39
    Will it be able to run the Heaven benchmark with extreme tessellation at 2400x1600, 120fps, in 3D?
    That should be the goal, to give the next gen a 10+ year lifespan:
    give it enough power to handle all media and gaming when 52in TVs are at the resolution those $$$ purty 30in monitors are now. PC gaming would end, so enjoy your gaming rigs while they can still crush consoles.
     
  3. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,208 (3.78/day)
    Thanks Received:
    2,843
    Location:
    04578
    That's just it: the next set of desktop GPUs, and the ones after that, won't do that either, not the way things are now. And 3D is a passing fad.

    1 in 10 people can't see it; I'm one of those who can't.

    On top of that, tell me: how many people do you know with a 3D TV?
    How many are going to run out and buy one anytime soon?
    How many are going to go 3D when TV is still 720p and HD streaming from Netflix in many parts of the US is still somewhat slow?

    3D is a fad. 1080p will be the norm for a long time due to consumer adoption trends.

    Just look at the damn Steam survey.

    The Heaven bench overuses tessellation; all these tessellation benchmarks are a failure. But then again, I don't expect people who don't actually do 3D work to understand that.

    You do realize that you can tessellate an object to extremes, but a pixel can't display it past a certain point? And guess what: Unigine Extreme is just such a case. The way tessellation is used in Aliens vs. Predator is also screwy, and the same goes for HAWX 2.

    AvP's tessellation makes the alien model a denser mesh, but for the most part there's zero change in the actual model: tessellation is applied but makes no change to the physical shape of the mesh to show any actual improvement.

    Tessellation's actual use is as a way to fluidly increase detail, i.e. remove LOD levels so there's no more pop-in in games. You don't need 500 million polygons to do that, but again, don't expect people to understand that either.

    The simple fact is that in 2006 Oblivion had the highest polygon count of any game in certain regions, and that poly count exists on consoles as well. As of today there are very few games that use more polygons than Oblivion, yet they offer far better graphical quality. The fact is the biggest issue with games isn't poly count or tessellation; it's realistic, believable animation (compare L.A. Noire to any other game), plus lighting and shadowing, and subsurface scattering for light entering skin to give proper tones. These things add more realism, and they're also far more taxing in general. But again it comes back to the point:

    5-10% image quality gains = 50% more hardware performance needed.

    The key point is that an SOC can and will do what needs to be done. There's a reason AMD isn't even bothering with 3D on the desktop and is instead outsourcing it elsewhere: by the time 3D is of any value, AMD expects to have the technology to deploy an actual holodeck, if you will. That's what they're actually working toward.

    So tell me, how is 3D on a 52-inch TV going to matter when, as of this moment, the Steam HW survey shows:

    4-core CPUs: 26%
    2-core CPUs: 55%
    1-core CPUs: 17%

    Quad-core adoption has only grown about 15 points over the last 18 months, and 18 months from now will be 2013. At the same trend, only around 41% of PC gamers would be on quad cores by that point.
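That projection is just a straight-line extrapolation of the survey figures quoted in this post, which a few lines of Python make explicit:

```python
# Linear extrapolation of Steam-survey quad-core share.
# 26% share now, ~15 points gained per 18-month period (figures from the post).
def project_share(current, points_per_period, periods):
    """Project market share forward, capped at 100%."""
    return min(current + points_per_period * periods, 100.0)

print(project_share(26.0, 15.0, 1))  # one more 18-month period, i.e. ~2013
```

Linear extrapolation is the simplest possible model; real adoption curves tend to be S-shaped, so this is only as good as the trend it assumes.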

    56% of users are on DX10 GPUs,
    only 40% are on Windows 7,
    and 22% are still using Windows XP while actively using Steam.

    Looking at adoption rates, by the time the new consoles roll around I'd guess 9-15% of TVs will be 3D, as no one who has a 720p or 1080p TV has any reason to get a 3D one. Why would someone replace their currently perfectly fine TV for something that isn't any better, with an extra gimmick? You have to look at the consumer base as a whole: the average users on TechPowerUp make up 1% of the market when it comes to these things.

    The fact is many 360 titles use 2xAA, and many PS3 titles use an MLAA-type AA, depending on the title.

    Most console gamers can't tell you the difference between 30fps and 60fps, or how they feel. So 30fps will remain the minimum, and 1080p will be the new standard. An SOC at 90% efficiency would allow both, plus 2x-4xAA with DX11 effects, with relative ease.

    The issue with efficiency is also why AMD earlier talked about removing the DirectX API.

    The API allows the games we play to function and be possible, but at the same time it robs performance. Writing games or applications to use the hardware directly, without an API making the calls, offers huge performance scaling. The problem is that this isn't feasible on a PC. It can, however, be done on a console, which again raises the efficiency, because you essentially cut out a middleman that slows things down.
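The middleman argument can be illustrated with a toy draw-call budget. All per-call costs below are invented round numbers for illustration only, not measurements of DirectX or any real driver:

```python
# Toy model: how many draw calls fit in a 30fps frame at a given per-call
# CPU cost. The 40us "via API" and 10us "direct" costs are hypothetical.
FRAME_BUDGET_US = 1_000_000 / 30  # ~33,333 microseconds per frame at 30 fps

def max_draw_calls(cost_per_call_us):
    """Draw calls that fit in one frame if each call costs this much CPU time."""
    return int(FRAME_BUDGET_US // cost_per_call_us)

via_api = max_draw_calls(40.0)  # hypothetical cost through a thick API layer
direct  = max_draw_calls(10.0)  # hypothetical cost talking to hardware directly
print(via_api, direct)          # the direct path fits ~4x the calls per frame
```

The point of the sketch: when a fixed per-call overhead shrinks, the number of calls per frame grows in inverse proportion, which is where console-style direct hardware access gets its headroom.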
     
    Last edited: May 23, 2011
  4. bostonbuddy New Member

    Joined:
    Apr 14, 2011
    Messages:
    381 (0.27/day)
    Thanks Received:
    39
    If Microsoft can perfect gesture controls/Kinect for the next Xbox, and 3D isn't dead yet, it could keep the tech alive. Who wouldn't love to have the Minority Report setup?
     
  5. bostonbuddy New Member

    Joined:
    Apr 14, 2011
    Messages:
    381 (0.27/day)
    Thanks Received:
    39
    And if you expect a 10+ year lifespan, you need to be ready for an increase in screen resolution.
     
  6. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,208 (3.78/day)
    Thanks Received:
    2,843
    Location:
    04578
    Me, I physically work all day, every day. When I come home I don't want to flail around with the broken motion controls of today.

    Motion and gesture will work when I can have screens that go all the way around the room, 360°. Otherwise it's still awkward. A good example is Killzone 3: even with the stupid gun holder for the Move, it works, but it feels awkward, imprecise and cumbersome. It works for party games, not so much for serious games. Even so, in later consoles the tech might have uses. I believe one member on here pointed out that all the hardware giants are perfecting their own tech one piece at a time, as part of a bigger puzzle. My point was that I don't want broken stuff today with half-assed support; I want fully functional tomorrow.

    3D as it is on a TV, and motion controls as they are with sensor cameras, are all failures at this stage, and not much will change in 2-3 years either. It needs more time, 5-10 years; by then AMD expects to have a holodeck operational, so yeah, still pretty much a failure. I play games to relax and unwind; most of this tech, as it's used now, is a bigger pain than it's worth.
     
  7. xenocide

    xenocide

    Joined:
    Mar 24, 2011
    Messages:
    2,159 (1.50/day)
    Thanks Received:
    467
    Location:
    Burlington, VT
    Not sure if this is a serious response, but I'll post my thoughts.

    There is no way any form of technology could have a 10+ year lifespan; it goes against the entire premise of the field. Sure, you could use a device for 10 years, but it wouldn't be nearly as powerful as even the lowest-level offerings toward the end of its life cycle. A top-end system from 10 years ago is about as powerful as a CELL PHONE from this generation.

    This is the same with consoles. Take a look at the difference 10 years can make: compare a launch title from the PS2 with something like Crysis 2. The difference is depressing, lol. Not only are the aesthetics significantly better, but the clarity is also a lot better. A lot of console games definitely struggle with recent releases; I have seen a lot of PS3 (and 360) games with serious frame lag at 1080p, and that is mostly due to aging hardware.

    The other elephant in the room is that consoles are always the same hardware, so developers keep getting better at optimizing for that specific setup. With PCs, developers have to support a whole array of different hardware and software configurations, which causes bloat and, inevitably, a loss of performance. You can really see this in the cell phone industry right now: the iPhone 4, despite having less powerful hardware, is still able to produce better-looking and better-performing games than most Android devices with superior hardware.

    SOC design is still very much in its infancy, despite the concept being pretty well known.
     
  8. bostonbuddy New Member

    Joined:
    Apr 14, 2011
    Messages:
    381 (0.27/day)
    Thanks Received:
    39
    motion controls for games are horrible. For menus and internet browsing, they're great.
     
  9. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,208 (3.78/day)
    Thanks Received:
    2,843
    Location:
    04578
    So why would you need a 52-inch 3D TV with 120Hz support, two GPUs, etc., just to surf the net with motion controls? lol
     
  10. Over_Lord

    Over_Lord News Editor

    Joined:
    Oct 13, 2010
    Messages:
    755 (0.47/day)
    Thanks Received:
    87
    Location:
    Hyderabad
    Don't you get anything? The same 5-year-old hardware in consoles still runs the latest and greatest PC games, so here's the bottom line:

    If there really were common hardware, people who only game would have no particular reason to upgrade for 5 years. Now imagine that: HD 5000 released in 2009, HD 6000 in 2014. Would you be able to live with that?
     
  11. bostonbuddy New Member

    Joined:
    Apr 14, 2011
    Messages:
    381 (0.27/day)
    Thanks Received:
    39
    I like to surf the web in style:rockout:
     
  12. xenocide

    xenocide

    Joined:
    Mar 24, 2011
    Messages:
    2,159 (1.50/day)
    Thanks Received:
    467
    Location:
    Burlington, VT
    Except that modern PC games are clearly console ports that are hardly optimized. Most of them would run a lot better if they weren't boxed into the older DirectX 9 and designed to run on that same 5-year-old hardware. Sure, they can add some fancier effects (such as AA), but that isn't nearly the full potential of the platform. There's also the fact that PCs are infinitely more versatile than any console, and thus carry a lot of baggage with them...
     
  13. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.43/day)
    Thanks Received:
    69
    they still have to deal with the heavy API penalty.
     
  14. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.43/day)
    Thanks Received:
    69
    AMD has a point.
     
  15. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,208 (3.78/day)
    Thanks Received:
    2,843
    Location:
    04578
    does that mean i win? :roll:
     
  16. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.43/day)
    Thanks Received:
    69
    I dunno.:toast:
     
    crazyeyesreaper says thanks.
  17. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,208 (3.78/day)
    Thanks Received:
    2,843
    Location:
    04578
    Yeah, I'm not sure either; that's why I asked. Seems everyone ran away.
     
  18. MxPhenom 216

    MxPhenom 216 Corsair Fanboy

    Joined:
    Aug 31, 2010
    Messages:
    10,432 (6.33/day)
    Thanks Received:
    2,544
    Location:
    Seattle, WA
    I don't see PC gaming being more expensive than console gaming. PC games are a little cheaper on average, and console people normally buy big honkin' plasma screens that cost around $1000.
     
    Damn_Smooth says thanks.
    Crunching for Team TPU
  19. Damn_Smooth

    Damn_Smooth

    Joined:
    May 16, 2011
    Messages:
    1,435 (1.03/day)
    Thanks Received:
    478
    Location:
    A frozen turdberg.
    True, but you also have PC gamers running quadfire and quad sli on multiple monitors. But since they are the minority, and my entertainment system cost way more than my PC, I shouldn't have said that. Thank you for the correction.
     
  20. Over_Lord

    Over_Lord News Editor

    Joined:
    Oct 13, 2010
    Messages:
    755 (0.47/day)
    Thanks Received:
    87
    Location:
    Hyderabad
    So that puts us back where we started: if the hardware becomes fixed, no one would upgrade. The same X360 running COD: Black Ops at 60fps on 5-year-old hardware, while a modern card like the HD 5850 scores about the same (albeit at higher resolution), is a testament to how shoddy PC optimization is.
     
  21. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,208 (3.78/day)
    Thanks Received:
    2,843
    Location:
    04578
    Exactly ^ which is why, by the time consoles do get replaced, 5850-level performance plus a 6-core Bulldozer-based CPU would be viable from AMD.

    I would expect by late 2013 to see a 6-core Zambezi plus 1120-1300 shaders on an APU, with room to spare. That would be the next viable upgrade for consoles.

    Double the CPU cores and 3-4x the CPU performance, plus a DX11 GPU with performance around a 5830, 5850 or 6850. Used efficiently, something like a 6850 could provide graphics quality close to that of a 570 / 6950 today. It all comes down to how efficiently the hardware is used, and PC software is fairly inefficient in its use of PC hardware.
     
  22. lilhasselhoffer

    lilhasselhoffer

    Joined:
    Apr 2, 2011
    Messages:
    1,696 (1.18/day)
    Thanks Received:
    1,060
    Location:
    East Coast, USA
    What?

    If you were to port a true PC game to a console (Crysis, Metro 2033), the thing would cry. I don't see a day when this will not be true. Yes, games ported from console to PC generally look equivalent, but that's bassackwards logic right there.

    It's like saying a hot pocket created from the finest gourmet ingredients is just as good as a hot pocket made from dog food, because they both look the same on the outside.


    As for incremental graphics upgrades: not going to happen. The additional time and development costs push consoles to leap-frog graphics generations. While computers march forward, the consoles wait until volume pricing makes them fiscally viable. An example: the Xbox 360 Jasper units used much more efficient processors, but were held back on the boards to match the performance of the original 360s. The same tactic applies to video cards: the producer takes a bath initially, with the understanding that in 18 months they will break even and in 24 months they will turn a profit, because by then manufacturing costs are significantly down.

    Consoles have adopted the ultimate consumer role in technology. They are viable for a given time, completely hardware-locked, and become fiscally viable less than halfway into their useful life. The PC is the opposite: pricing is generally higher, but that's because upgrades are easy and happen frequently. When a console demonstrates true upgradeability (not a RAM module, I'm looking at you, N64), then you can say your 6xxx series is equivalent to a 1xxx series.
     
  23. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.34/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    IMO an SOC will just never cut it for a high-end gaming solution, be it a PC or a console, unless all 3 console makers follow the Wii route, and I don't see that happening considering not even Nintendo seems to be following it now. A revision using an SOC down the line, when smaller processes are available, is another matter, but IMHO the next-gen consoles will still have separate CPU and GPU dies in their first design, because that's the only way they could push the envelope* and have good yields.

    * Maybe it's more accurate to say: stay relevant compared to the PC.

    I don't agree at all (and I do 3D ;)). We need far more tessellation/geometry, and instead of being used for simple smoothing and LOD, it must be used for real displacement effects. And it's doable, very doable. IMO Nvidia did far better on that front: adding tessellators does not take much die space and can add a lot more detail than is currently used by any demo/game, except Nvidia's own, which use 10x more geometric detail than Heaven without sacrificing ANY performance in comparison. Make no mistake: on Nvidia cards, the Heaven benchmark, the Stonegiant demo and pretty much all the rest are heavily bottlenecked in the hull and domain stages, which depend on shaders, not on the fixed-function stages. So you can add far more detail without sacrificing performance, much as you can add much higher anti-aliasing levels nowadays. Also, adding so much geometry horsepower did not make the well-balanced Nvidia chips like GF104 and GF114* much worse than Cayman in performance per die area, and they have 4x the tessellation capability (and better GPGPU). The point is that geometry computing (fixed-function units) can be increased a lot, i.e. 10-fold, without increasing die space or power consumption much, if at all comparatively speaking, something that cannot be said about shaders.

    The current state of tessellation is totally skewed by the fact that AMD cards currently lack the grunt, so developers are focusing more on control than on the detail/resolution of 3D meshes, but it does not need to be like that. Not at all. Currently they are following AMD's guidelines on "efficient" adaptive tessellation, because there's no other choice if you want your game to play well on both brands, but it's not entirely true that adaptive tessellation is more efficient (at the silicon level) than brute force: for many instances and possible uses of tessellation, adding more fixed-function units is arguably far more efficient and, IMO, the way to go. Balance is required, of course.

    We are not far from the point at which triangle count can match the pixel count of bump maps without sacrificing too much performance (my GTX 460 almost does it in the many Nvidia demos), and that's the way to go IMO: get rid entirely of bump-mapping shader effects, POM, etc., because those effects are probably more expensive, performance- and silicon-wise, than tessellation once enough fixed-function units are in place. So in the next GPUs they could add comparatively fewer shaders and more tessellators and geometry-related parts, and shader performance would still increase a lot, because you'd free up the many shaders currently used for bump effects.

    PS: Not a single demo (Nvidia's included) is close to 1 pixel = 1 triangle, let alone Heaven, which according to what I read somewhere averages something like 128 pixels/triangle, IIRC.
    PS2: Also, once you start using tessellation for displacement purposes, you DO want more triangles than texels, to avoid height/displacement-related aliasing. Just as you want at least 4 samples for accurate regular AA, you'd want more than one sample for displacement too. And as you know, that's true for everything including lighting, something that can be seen when comparing the far more accurate lighting of Crysis 1 with Crysis 2...
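The 128 pixels/triangle figure translates into concrete triangle counts at 1080p. A quick sketch, taking only the quoted density and the 1-triangle-per-pixel ideal discussed above as inputs:

```python
# On-screen triangle count implied by a given triangle density at 1080p.
def visible_triangles(width, height, pixels_per_triangle):
    """Triangles needed to cover the frame at the given pixels-per-triangle."""
    return width * height // pixels_per_triangle

heaven_like = visible_triangles(1920, 1080, 128)  # density quoted for Heaven
one_per_px  = visible_triangles(1920, 1080, 1)    # the 1 pixel = 1 triangle ideal
print(heaven_like, one_per_px)
```

The gap between the two outputs (roughly 16 thousand versus 2 million triangles) is the two-orders-of-magnitude jump in geometry throughput being argued for here.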

    This is mostly true, and it's the reason why for the next generations we need a huge improvement in tessellation/geometry and less in lighting/shading. Shaders currently take up 60-80% of die space, so doubling them has a massive effect on die size and little benefit. Geometry-related units take up a lot less space, and the real die space occupied by the fixed-function units is obscured by the fact that they are embedded in the front end, along with the setup engine, instruction decode, etc. You could probably take GF110, quadruple its tessellator count to 64, and it would still be in the same 500-550 mm^2 league. But you do need to design your GPU with all that in mind. All it takes is the right architecture; Fermi is closer but not quite there yet, and sorry, but when it comes to this, the 6+ year old AMD/ATI architecture is far, far behind, with no signs of being "fixable". I mean, look what they did with Cayman: they just doubled up everything, because that's probably the only thing the architecture allowed, with the consequence that Cayman is far less "efficient" than Cypress or Barts. Don't get me wrong, it's still a little more efficient than Nvidia's design in overall graphics tasks (i.e. current games), but it's completely limited in fully supporting DX11 and its features. A 16-tessellator Cayman-based chip would probably be a lot less efficient than GF100 and possibly just as big.

    * GF100/110 are excluded from that comparison, because they were designed for GPGPU as much as for graphics, a design choice that is paying dividends in the professional market.
     
    Last edited: May 24, 2011
  24. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,208 (3.78/day)
    Thanks Received:
    2,843
    Location:
    04578
    Yet we're talking console-level with an SOC; a discrete GPU is not an SOC :toast:

    Which is why I made the arguments I did: we can get that huge boost in the next discrete GPUs, but you know as well as I do it won't make it to consoles. The next console will use an SOC somewhere in the vein of AMD's current APUs.

    As I've already pointed out, who knows, maybe Intel will come out with a better graphics die, but I doubt it. From a PC graphics standpoint I'd agree with you on tessellation to an extent, but while increasing geometry isn't so hard, it also makes developers lazy. Tessellation isn't such a savior, because let's face it, if you could use tessellation to increase detail, you could just use higher-polygon models from the get-go that are more efficient. This is where my training comes in: I find tessellation acceptable for world geometry, as it allows for easy world-map building, but for important objects such as characters, I find that going the extra mile and modeling them properly is a better idea than tessellating them. That said, this is a bit off topic.

    As for Nvidia's tech demos: the Endless City (or whatever) bench has been hacked to work on AMD GPUs, and sure enough AMD GPUs put up frame rates fairly close to Nvidia's, so it seems both GPU vendors need more grunt overall. I just happen to find AMD's method more appealing: if we are going to seriously use GPU physics, I'd rather have fixed-function tessellation with shaders free for OpenCL etc. than Nvidia's method of brute-forcing everything with shaders. Either way, tessellation is chump change for now, and it doesn't matter what happens with the next gen of cards either. It will be the GPUs that come after the 600 series and 7000 series that matter, because that's when the consumer base at large will have adopted DX11 and newer tech to an extent that makes it worthwhile for both vendors and developers.

    PS, more on tessellation:
    I can see geometry getting upped, but look at Crysis: without tessellation it still looks better than most games that use it today, and it came out 5 years ago now. The major point isn't whether you or I am right, or AMD or Nvidia in their methods; what is apparent is that everyone is currently doing it wrong. Again, AvP is a prime example: the alien mesh is tessellated, but at least in the benchmark the displacement maps are missing or gone, meaning the mesh is tessellated for the sake of being tessellated, with the details coming from plain old normal maps. Metro 2033 uses it on the characters, but that's wasteful; a good character modeler can get the same detail without wasting so many polygons. The same applies to STALKER: CoP. Right now it's my honest opinion that tessellation is being improperly used, and it's just a gimmick until its usage becomes more mainstream and developers find better ways to use it. A good example for tessellation would be water, for rivers or oceans, that gets displaced when characters walk through it; or Fallout: New Vegas, where tessellation would do wonders for the Mojave desert's rocks and canyons. There are better uses for tessellation than what we are seeing right now. And while those demos that get put out are interesting, Nvidia and AMD both show similar performance in them. In general, though, I don't think anything will change much in its usage or how things are handled, not for another 2 years at least.


    More on topic: I'd again expect an APU from AMD to eventually make its way to consoles in 2013/2014.
     
    Last edited: May 24, 2011
  25. cheesy999

    Joined:
    Jul 2, 2010
    Messages:
    3,957 (2.32/day)
    Thanks Received:
    595
    Location:
    UK
    They are still the fastest and most efficient method of non-fixed-purpose processing.

    I would love to see some fixed-purpose parts, though. By design, 2-4xAA is added to all Xbox games without performance loss. At the end of the day I'd love it if all GTX 600 series GPUs did 4xAA without any performance drop; it would make them a lot more viable for high-end gaming. Remember how old the Xbox is, though: by the 600 series they could probably include 32xAA with no loss, taking up minimal die space.
     
