Thursday, May 31st 2018

NVIDIA to Detail New Mainstream GPU at Hot Chips Symposium in August

Even as NVIDIA's next-generation graphics architecture for mainstream users remains an elusive unicorn, speculation and tendrils of smoke have kept the community on edge when it comes to the how and when of its features and introduction. NVIDIA may have launched another architecture since its current consumer-level Pascal, in Volta, but that one has been reserved for professional, compute-intensive scenarios. Speculation is rife on NVIDIA's next-generation architecture, and the posted program for the Hot Chips Symposium could be the light at the end of the tunnel that breathes new life into the graphics card market.

Looking at the Hot Chips Symposium program, the detailed section for the first day of the conference, on August 20th, lists a talk by NVIDIA's Stuart Oberman, titled "NVIDIA's Next Generation Mainstream GPU". This likely means exactly what it says, and is an introduction to NVIDIA's next-generation computing solution under its gaming GeForce brand - or it could be an announcement, though the Hot Chips Symposium seems a slightly odd venue for that. You can check the symposium's schedule on the source link - there are some interesting subjects there, such as Intel's "High Performance Graphics solutions in thin and light mobile form factors", which could see talk of the Intel-AMD collaboration in Kaby Lake G, and possibly of the work being done on Intel's in-house high-performance graphics technologies (with many of AMD's own RTG veterans, of course).
Source: Hot Chips Program

30 Comments on NVIDIA to Detail New Mainstream GPU at Hot Chips Symposium in August

#1
bug
Call me weird, but what I care about in a video card is availability and price. An announcement that a video card might be launched/detailed... not so much.
Posted on Reply
#2
oldtimenoob
Finally some clarification on what is coming.... waited long enough.
Posted on Reply
#3
Vya Domus
Isn't this like the third or fourth time in the last few months that we've heard news that Nvidia is going to announce something?
Posted on Reply
#4
iO
They will announce it much sooner, as it would be very unusual to make a detailed Hot Chips presentation about an unknown chip.
Posted on Reply
#5
bug
iOThey will announce it much sooner, as it would be very unusual to make a detailed Hot Chips presentation about an unknown chip.
1180 is supposedly expected mid-June(ish). Showing its mid-range counterpart in August is sooner than expected, but by that time it wouldn't be an unknown chip anymore ;)
Posted on Reply
#6
RejZoR
Unless they do some anti-cryptomining witchcraft, these will be unobtainable just like the rest of the mid end cards...
Posted on Reply
#7
TheHunter
AMD's Vega 20 on 7 nm should release soon, putting more pressure on Nvidia. I know it won't compete much, but I have a gut feeling this Vega 20 will be enough to rival the Titan V and 1080 Ti.

That said, this new 1180 should be just a tiny bit faster, and the cycle is complete.
Posted on Reply
#8
kruk
TheHunterAMD's Vega 20 on 7 nm should release soon, putting more pressure on Nvidia. I know it won't compete much, but I have a gut feeling this Vega 20 will be enough to rival the Titan V and 1080 Ti.
It currently seems it won't compete at all, as the 2018 7 nm Vega is AI-focused. Maybe in 2019 ...
Posted on Reply
#9
Vayra86
The rumored teaser for the pre-announcement to the announcement has been made.

Best day of my life
TheHunterAMD's Vega 20 on 7 nm should release soon, putting more pressure on Nvidia. I know it won't compete much, but I have a gut feeling this Vega 20 will be enough to rival the Titan V and 1080 Ti.

That said, this new 1180 should be just a tiny bit faster, and the cycle is complete.
We'll see... not holding my breath for anything that puts Vega and high-end GPUs in one sentence, though. I'll see it when it's released, if ever. It's clear as day that Vega is subpar as a gaming GPU and primarily aimed at other markets. It'll do low power well. High performance? Not so much. Also, Navi is still on the roadmap and has far more potential than anything Vega in terms of gaming.
Posted on Reply
#10
StrayKAT
TheHunterAMD's Vega 20 on 7 nm should release soon, putting more pressure on Nvidia. I know it won't compete much, but I have a gut feeling this Vega 20 will be enough to rival the Titan V and 1080 Ti.

That said, this new 1180 should be just a tiny bit faster, and the cycle is complete.
I suspect Nvidia will keep pulling ahead for some time... But after years of buying Nvidia, I'm now Team Red. My new TV has FreeSync, and I can't help but see the benefits now with a Vega. It's not even a 4K card, but it makes 4K fairly playable. I'm willing to go for slower GPUs when the overall package (FreeSync vs. G-Sync) is much cheaper.
Posted on Reply
#11
rtwjunkie
PC Gaming Enthusiast
bug1180 is supposedly expected mid-June(ish). Showing its mid-range counterpart in August is sooner than expected, but by that time it wouldn't be an unknown chip anymore ;)
June was a rumor. This is a possible announcement teaser. :laugh:

In any case, the first announcement is nearly always of the Gx-104 chip, x being whatever this family is called. Since this is their gaming card announcement, and it is a mainstream card, I contend that it will be on a Gx-104 chip, and thus the 1180 and 1170.
Posted on Reply
#12
Vayra86
StrayKATI suspect Nvidia will keep pulling ahead for some time... But after years of buying Nvidia, I'm now Team Red. My new TV has FreeSync, and I can't help but see the benefits now with a Vega. It's not even a 4K card, but it makes 4K fairly playable. I'm willing to go for slower GPUs when the overall package (FreeSync vs. G-Sync) is much cheaper.
The price of the Nvidia adaptive sync package is ridiculous, so I applaud your choice.

Going for the fastest GPU was never a good idea and it never will be :) (based on price/perf)
Posted on Reply
#13
StrayKAT
Vayra86The price of the Nvidia adaptive sync package is ridiculous, so I applaud your choice.

Going for the fastest GPU was never a good idea and it never will be :) (based on price/perf)
Thanks... I haven't been happier with a computer purchase in a while. I did already have a 1080p FreeSync monitor, but that feature went unused (and 1080p isn't very stressing anyhow). But now I see how awesome the Radeon/FreeSync combo is.
Posted on Reply
#14
jabbadap
Well, it's not Jensen himself, so there should be something new released before that. Be it a product release or a new arch announcement, it will first come from Jensen himself, not from some senior engineer.
Posted on Reply
#15
Fluffmeister
jabbadapWell, it's not Jensen himself, so there should be something new released before that. Be it a product release or a new arch announcement, it will first come from Jensen himself, not from some senior engineer.
Indeed, it's more of an engineer-focused conference. Nvidia used the 2016 event to reveal more info on Pascal and a die shot of the GP100:

www.anandtech.com/show/10588/hot-chips-2016-nvidia-gp100-die-shot-released

Of course, this was after the release of products based on the GP100, GP104, and GP106.
Posted on Reply
#17
RejZoR
AMD actually just needs to pull another Polaris. They know they can't compete at the top end, but just like they've shown with Polaris, they could and did compete brilliantly at the mid end. The low end and mid end are where things sell in huge volumes, and people wouldn't mind if AMD focused on that again. Still hoping that the Navi GPU core-stacking thing, like with Ryzen, will work out for them at the higher end.
Posted on Reply
#18
efikkan
RejZoRAMD actually just needs to pull another Polaris. They know they can't compete at the top end, but just like they've shown with Polaris, they could and did compete brilliantly at the mid end. The low end and mid end are where things sell in huge volumes, and people wouldn't mind if AMD focused on that again. Still hoping that the Navi GPU core-stacking thing, like with Ryzen, will work out for them at the higher end.
Pull another Polaris? You mean using overkill resources to keep up with a GPU that has far fewer resources?
RX 580 needs 2304 cores, 6175 GFlop/s and 256 GB/s of bandwidth to keep up with GTX 1060's 1280 cores, 3855 GFlop/s and 192.1 GB/s.

Both AMD and Nvidia will probably eventually move to an MCM-based design, but the problem for AMD is that MCM will not solve their underlying problem; it will actually just make it worse. AMD's problem with Fiji, Polaris, and Vega is scheduling. All of these designs have plenty of resources compared to their counterparts from Nvidia, but they struggle because AMD can't use their resources efficiently while Nvidia can. AMD sits at about ~67% of Nvidia's efficiency in gaming workloads, but scales nearly perfectly in simple compute workloads. This clearly has to do with scheduling of resources. The parallel nature of rendering might mislead some into thinking it's easily scalable, but most don't know it's actually a pipeline of small parallel blocks, full of resource dependencies. If it's not managed well, parts of the GPU will keep having idle cycles, leading to the problem AMD currently has. Just throwing more resources at it wouldn't help either, as managing more resources well is even harder.

MCM will help with cost and yields, but it will make scaling harder too. In an MCM configuration, the GPU would have a scheduler and several separate GPU modules. The cost of transferring data between these will increase, so scheduling has to be drastically improved to keep up with the efficiency of a monolithic GPU design. Nvidia will also have to step up their game to do this well, but they are already much better at it. What AMD needs is a complete redesign built for efficiency rather than brute force, and of course to abandon GCN.
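
A crude back-of-the-envelope sketch in Python, assuming (as a simplification) that the RX 580 and GTX 1060 land at roughly the same average gaming frame rate, puts the perf-per-FLOP gap in the same ballpark as that ~67% figure, using the specs quoted above:

# Rough perf-per-FLOP comparison between the RX 580 and GTX 1060,
# assuming equal average gaming performance (a simplification).
rx580_gflops = 6175.0    # peak FP32 throughput, RX 580
gtx1060_gflops = 3855.0  # peak FP32 throughput, GTX 1060

# With gaming performance held equal, relative efficiency reduces to the
# inverse ratio of the raw compute each card needs to get there.
amd_efficiency = gtx1060_gflops / rx580_gflops
print(f"AMD efficiency vs. Nvidia: {amd_efficiency:.0%}")  # ~62%, in the ballpark of the ~67% above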
Posted on Reply
#19
bug
efikkanPull another Polaris? You mean using overkill resources to keep up with a GPU that has far fewer resources?
RX 580 needs 2304 cores, 6175 GFlop/s and 256 GB/s of bandwidth to keep up with GTX 1060's 1280 cores, 3855 GFlop/s and 192.1 GB/s.
As if you don't know the answer already: evil Nvidia is keeping developers from using async compute everywhere, that's why AMD's hardware can't shine (and power draw is not important. really, really)™
Posted on Reply
#20
RejZoR
efikkanPull another Polaris? You mean using overkill resources to keep up with a GPU that has far fewer resources?
RX 580 needs 2304 cores, 6175 GFlop/s and 256 GB/s of bandwidth to keep up with GTX 1060's 1280 cores, 3855 GFlop/s and 192.1 GB/s.

Both AMD and Nvidia will probably eventually move to an MCM-based design, but the problem for AMD is that MCM will not solve their underlying problem; it will actually just make it worse. AMD's problem with Fiji, Polaris, and Vega is scheduling. All of these designs have plenty of resources compared to their counterparts from Nvidia, but they struggle because AMD can't use their resources efficiently while Nvidia can. AMD sits at about ~67% of Nvidia's efficiency in gaming workloads, but scales nearly perfectly in simple compute workloads. This clearly has to do with scheduling of resources. The parallel nature of rendering might mislead some into thinking it's easily scalable, but most don't know it's actually a pipeline of small parallel blocks, full of resource dependencies. If it's not managed well, parts of the GPU will keep having idle cycles, leading to the problem AMD currently has. Just throwing more resources at it wouldn't help either, as managing more resources well is even harder.

MCM will help with cost and yields, but it will make scaling harder too. In an MCM configuration, the GPU would have a scheduler and several separate GPU modules. The cost of transferring data between these will increase, so scheduling has to be drastically improved to keep up with the efficiency of a monolithic GPU design. Nvidia will also have to step up their game to do this well, but they are already much better at it. What AMD needs is a complete redesign built for efficiency rather than brute force, and of course to abandon GCN.
You're comparing apples to pineapples. The reality is, Polaris worked for AMD, and it worked tremendously.
Posted on Reply
#21
Vayra86
RejZoRYou're comparing apples to pineapples. The reality is, Polaris worked for AMD, and it worked tremendously.
From a market/economic perspective, yes, but not from the architectural one. It's simply pricing a GPU according to its performance in the stack, but that doesn't make it good or efficient. Is it a step forward? Yes. But it is still lacking. Vega makes that painfully clear - it's basically GCN scaled up to its limits, and when it's at that limit, it lacks absolute performance and perf/watt goes out the window entirely. The fact that underclocking works so well for Vega speaks volumes.

It's only a matter of time until GCN is also unable to play well in the midrange. AMD is literally just revamping 2012 technology by adjusting their target one step down the ladder every gen. That's not good. That's a sign of falling further behind every year. They're not moving forward, and when they do, the product fails in one way or another. This is the real trend we've seen since the Fury X, and they cannot really turn it around. Neither Polaris nor Vega is sufficient for that. There is a reason GCN shines in the lower performance segments, at lower clocks and below 100% power targets: under the hood it's essentially still the same tech, optimized for 1 GHz clocks.
Posted on Reply
#22
RejZoR
When I said AMD needs to pull another Polaris, I didn't mean an actual Polaris iteration but something very focused on price/performance, even if it's not king of the hill. Polaris wasn't; it was just a mid end card, but it was priced well (if we ignore the later cryptomining bullshit) and it performed well. If AMD could make a new architecture and not be obsessed with it being the best of the best, they could make something useful. Either way, they are forced to do so if they want to compete. People keep saying GCN isn't good. Based on what metric? GCN isn't something that absolutely defines the core. It's just a bunch of tech stuffed into a chip.
Posted on Reply
#23
efikkan
RejZoRWhen I said AMD needs to pull another Polaris, I didn't mean an actual Polaris iteration but something very focused on price/performance
Yes, we all got that.
But the problem is that Polaris was never a good alternative to Pascal. Polaris even struggles to compete with Nvidia's lowest member in the mid-range, the GTX 1060, and the GTX 1060 will be replaced this fall. It's fine that AMD can't compete with the GTX 1080 Ti, but they need strong contenders at the price points of the GTX 1060/1070/1080, because that's where they can make some money. If the gap between them keeps increasing, they will struggle to perform close to the next 1160/1170/1180 models. And if we're honest, AMD isn't really competing with the GTX 1060 or above today. Their strategy of using brute force instead of designing a new architecture is just going to make them trail farther and farther behind, until the point where their large, inefficient chips become too expensive.
Posted on Reply
#24
bug
RejZoRWhen I said AMD needs to pull another Polaris, I didn't mean an actual Polaris iteration but something very focused on price/performance, even if it's not king of the hill.
Still seeing Polaris through tinted glasses. Polaris was simply a failed architecture that couldn't scale enough to cover the high end.
Nvidia had the same problem back in the FX5000 days, but they could still stretch it enough to give us the horrible FX5900 and the FX5950(?) Ultra.
Posted on Reply
#25
kruk
bugPolaris was simply a failed architecture that couldn't scale enough to cover the high end.
So Vega is a successful architecture as it can scale from ultra low end to high end? Just trying to understand how this logic works ...
Posted on Reply