Friday, February 8th 2019

No AMD Radeon "Navi" Before October: Report

AMD "Navi" is the company's next-generation graphics architecture succeeding "Vega" and will leverage the 7 nm silicon fabrication process. It was originally slated to launch mid-2019, with probable unveiling on the sidelines of Computex (early-June). Cowcotland reports that AMD has delayed its plans to launch "Navi" all the way to October (Q4-2019). The delay probably has something to do with AMD's 7 nm foundry allocation for the year.

AMD is now fully reliant on TSMC to execute its 7 nm product roadmap, which includes its entire 2nd generation EPYC and 3rd generation Ryzen processor lines based on the "Zen 2" architecture, and to a lesser extent, GPUs based on its 2nd generation "Vega" architecture, such as the recently launched Radeon VII. We expect the first "Navi" discrete GPU to be a lean, fast-moving product that succeeds "Polaris 30." In addition to 7 nm, it could incorporate faster SIMD units, higher clock speeds, and a relatively cost-effective memory solution, such as GDDR6.
Source: Cowcotland
Add your own comment

135 Comments on No AMD Radeon "Navi" Before October: Report

#52
evolucion8
HTC, post: 3990845, member: 51238"
Are you sure?

From what I heard/read, Navi is supposed to be the last GCN-based family of cards, and it will be Arcturus (if this is indeed its name) that will be on a new non-GCN arch.

Supposedly, Navi was to be announced @ CES but due to unforeseen problems, it was withheld from CES. Think TLB Phenom bug style of issues or perhaps worse. Also supposedly, Navi is meant to replace Polaris, meaning there won't be a 2080 Ti contender coming from the Navi arch, which means Vega VII is it, unless AMD refines it some more @ a later date: can it even be done???
Indeed, unless AMD releases an MI60 harvested as Vega VII Pro or whatever confusing naming convention they come up with.
Posted on Reply
#53
efikkan
Casecutter, post: 3990833, member: 94772"
Just me, but it sounds like AMD has to allocate 7nm wafer starts to the markets and products that give them the best bang-for-buck.
If the Navi chip for this segment were coming soon, they simply wouldn't have released Radeon VII now. So Radeon VII is a pretty good indicator that the "bigger" Navi isn't coming for at least 6 months.

This will of course mean they free up some capacity for making other stuff, like more Vega20 or Zen, at least for a few months.

Casecutter, post: 3990833, member: 94772"
I mean, wouldn't we think there are many clients (IBM/Apple) that want to increase or start their production, and bet TSMC is fully booked... didn't I read TSMC is building another 7nm production facility?
AMD uses TSMC's 7nm HPC node. Apple and all the other mobile chip makers use the low power node, which are related but separate nodes.

HTC, post: 3990845, member: 51238"
From what I heard/read, Navi is supposed to be the last GCN-based family of cards, and it will be Arcturus (if this is indeed its name) that will be on a new non-GCN arch.
Yes, Polaris, Vega and Navi (which I believe are named after stars), were announced as incremental changes to GCN. Navi will also be a monolithic GPU.

What comes after Navi might be using "Super SIMD", MCM and other technologies AMD are developing.

HTC, post: 3990845, member: 51238"
Also supposedly, Navi is meant to replace Polaris, meaning there won't be a 2080 Ti contender coming from the Navi arch, which means Vega VII is it, unless AMD refines it some more @ a later date: can it even be done???
It's fairly difficult to scale up a GPU, which is why all modern GPUs are made as a large design and then cut down.
But as you are saying, Navi is intended to replace their current lineup with more efficient alternatives, not compete with RTX 2080 Ti and whatever Nvidia launches next.
Posted on Reply
#54
Casecutter
efikkan, post: 3990886, member: 150226"
This will of course mean they free up some capacity for making other stuff, like more Vega20 or Zen, at least for a few months.
Exactly. Say 6 months from now they start scheduling Navi wafer starts for, like, a late-October/November release. That's a lot of production for "high margin" products. Selling the runts that just don't make the cut as Instinct to gamers, even for little or no profit (doubt it), is better than scrapping them.

efikkan, post: 3990886, member: 150226"
AMD uses TSMC's 7nm HPC node. Apple and all the other mobile chip makers use the low power node, which are related but separate nodes.
I would think the actual wafer spin-up is the same, and many production tools are common until you get to the litho part of things, which might use dedicated machines, but then wafers go back through common steps for cure, cut, etc.
Posted on Reply
#55
notb
64K, post: 3990637, member: 148270"
AMD has been showing a profit for a while now
I don't understand the popularity of this argument. AMD is barely profitable - this kind of profit could have been made with clever accounting.
But people see a green bar on a plot and suddenly AMD is in fantastic condition. :)
Call me when they have profit margins like Nvidia or Intel. They're competing in the same market and have similar costs. Where's the profit?
bug, post: 3990661, member: 157434"
Disregarding 2nd gen DLSS and/or RTX, given no performance boost at all, 7nm will make the die smaller, which would mean sane prices for GPUs. I would worry about that ;)
And you know all this because Radeon VII is so cheap?
What else will you tell me? That 7nm chips will be more efficient and require less cooling? :p

Yes, in a distant future 7nm *may* become cheap. But at the moment it still includes a huge premium for R&D and the supply is very limited. And it may be like that for years.
So on one hand we have a new node that is very useful for smartphone makers, who can easily ask a $1000+ price for their flagship models despite the CPU being tiny and relatively cheap. They can pay a lot for 7nm.
On the other hand you have 3 companies making consumer CPUs and GPUs, who need 7nm to push performance, because that's what gaming clients demand. Their chips are huge and make up the majority of a PC's cost.
IMO there is just one possible solution: gaming PC parts will become silly expensive. So if you're irritated by RTX or Intel 9th gen prices, brace yourself...
theoneandonlymrk, post: 3990814, member: 82332"
You're getting confused clearly. AMD said one game is not enough dev support yet for AMD to bother.
Millions of notebooks weren't enough for AMD to bother and make Zen more frugal. So yeah, why make cards that support a few AAA games indeed... Although in 2019 a few will become a few dozen, and by 2021 most new games should support RTRT (if it catches on). I wonder when AMD will decide it's worth the fuss and whether they'll still be in business.
And AMD have Rapid Packed Math for quadratic equations and such; different approach, lower cost, intrinsically clever design imho.
Are you sure you know what Rapid Packed Math stands for? It just means doing 2 FP16 operations in place of one FP32 operation - an idea coming straight from compute cards.
So first of all: in the ideal situation it gives you 2x performance - that's a far cry from what a purpose-built ASIC can do.
Second: this will work only in specific scenarios and, more importantly, only when you force it explicitly in the code. In other words: game engines would have to be rewritten for AMD.

So both ideologically and practically it's a lot like AVX-512.
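To make the packing idea concrete, here's a rough CPU-side numpy sketch of the concept (an illustration only, not AMD's actual hardware path - on the GPU both halves of a 32-bit register go through a single FP32-wide ALU operation):

```python
import numpy as np

# Two FP16 operands per 32-bit lane: the storage trick behind rapid
# packed math. The GPU performs both halves in one FP32-wide operation.
a = np.array([1.5, 2.25, -3.0, 0.5], dtype=np.float16)
b = np.array([0.5, 0.75, 1.0, 2.0], dtype=np.float16)

# View adjacent FP16 pairs as single 32-bit words, the way a packed
# register would hold them: 4 half-precision values -> 2 words.
packed_a = a.view(np.uint32)
packed_b = b.view(np.uint32)
assert packed_a.shape == (2,)

# Simulate the packed add: reinterpret both halves, add, repack.
packed_sum = (packed_a.view(np.float16) + packed_b.view(np.float16)).view(np.uint32)

# Unpacking recovers the elementwise FP16 sums: 2.0, 3.0, -2.0, 2.5
print(packed_sum.view(np.float16))
```

Note the catch this makes visible: the speedup only exists where the code explicitly chooses FP16 storage and FP16 math, which is exactly why engines have to opt in.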
Sticking purpose-built maths hardware on the side of an ASIC you were selling for advanced maths use to prosumers is strange to me.
Sorry mate... Sometimes I understand what you're trying to say and sometimes I don't. This is the latter case. Can you rewrite this sentence?
And you're so wrong on the difficulty of changing an ASIC design that much when you're virtually at validation. Naive bull.
IMO it doesn't need to be in the same chip. You should be literally able to add RTRT or tensor cores on a separate card. It works in the Nvidia world pretty well - it's just a question of latency. But 2 chips on the same card? Should work perfectly well. The whole point of IF is being able to combine different circuit types.
Posted on Reply
#56
bug
notb, post: 3990919, member: 165619"
And you know all this because Radeon VII is so cheap?
That was just an assumption. If Nvidia keeps the same hardware and builds it on 7nm, the die will be smaller and thus cheaper (not based on Radeon VII, but on how new nodes have worked in the past).
Of course, it's possible for Nvidia to beef things up to get better RTRT performance and still end up with a huge die, but I'm hoping the beating they took recently conveys the message that the market does not put up with their new prices.
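The "smaller die = cheaper chip" arithmetic can be sketched with the textbook dies-per-wafer approximation and a simple Poisson yield model. All numbers below are invented for illustration (real wafer prices and defect densities are confidential), but they show why both sides of this argument can be right at once:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Standard approximation: gross wafer area over die area,
    minus the partial dies lost along the wafer edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def good_dies(candidates, defect_density_per_cm2, die_area_mm2):
    """Poisson yield model: yield = exp(-defect_density * die_area)."""
    return int(candidates * math.exp(-defect_density_per_cm2 * die_area_mm2 / 100))

# Made-up numbers: a 495 mm^2 die on a mature node with low defect
# density vs. a 331 mm^2 shrink on a young node with a higher one.
for node, area_mm2, d0 in (("mature 12nm", 495, 0.2), ("young 7nm", 331, 0.5)):
    candidates = dies_per_wafer(300, area_mm2)
    usable = good_dies(candidates, d0, area_mm2)
    print(f"{node}: {candidates} candidates, ~{usable} good dies per 300mm wafer")
```

With these invented inputs the shrink fits over 50% more candidate dies per wafer, but the younger node's higher defect density (plus its higher wafer price, not modeled here) eats most of the gain - roughly the dynamic behind "smaller dies get cheaper eventually" and "7nm carries a premium for now" both being true.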
Posted on Reply
#57
eidairaman1
The Exiled Airman
HTC, post: 3990845, member: 51238"
Are you sure?

From what I heard/read, Navi is supposed to be the last GCN-based family of cards, and it will be Arcturus (if this is indeed its name) that will be on a new non-GCN arch.

Supposedly, Navi was to be announced @ CES but due to unforeseen problems, it was withheld from CES. Think TLB Phenom bug style of issues or perhaps worse. Also supposedly, Navi is meant to replace Polaris, meaning there won't be a 2080 Ti contender coming from the Navi arch, which means Vega VII is it, unless AMD refines it some more @ a later date: can it even be done???
Yet there has been talk of a Navi plus Arcturus.
Posted on Reply
#58
FordGT90Concept
"I go fast!1!11!1!"
M2B, post: 3990705, member: 172252"
Polaris wasn't designed for Xbox One X.
Polaris was in development way before development of the Xbox One X.
Consoles take 2+ years to design, manufacture, and launch. The SoC parameters are one of the first requirements laid out because it dictates virtually all of the other components (PCB layout, transformer requirements, connectors, cooling requirements, etc.).

Polaris debuted June 29, 2016. Xbox One X debuted November 7, 2017. Navi is expected to debut ~July 2019 for a PlayStation 5 launch in holiday 2020. The timelines match. If Navi has problems and it did in fact get bumped back to October or later, PlayStation 5 might be delayed too. That said, Microsoft may have deliberately chosen a holiday release for the Xbox One X to maximize sales/market impact. They may have been ready to launch many months before that, so PlayStation 5 could still make a holiday 2020 launch despite the delay.

The way the custom SoC business works is that requirements are set and a contract signed. AMD spends 6-12 months designing the chip and informs the client what the engineering specs are (power, packaging, etc.) so the client can design the rest of the package. Then AMD starts sampling GPUs and engineering prototypes are manufactured by the client, tested internally, then shipped out to developers to create games for it. AMD refines the process and works towards mass production while the client finalizes everything else and developers polish games. Then at the end, you have about three months of stockpiling inventory of consoles and games alike so there's hopefully enough of everything available to meet market demand.

Desktop cards don't have the software side to worry about so much (other than drivers, which AMD addresses with engineering samples internally) which is why they can debut PC cards well before a console using the same architecture.


I think Microsoft's next console will be mostly DXR-based and on Arcturus. PlayStation 5 is a small step up from Xbox One X, so Microsoft likely isn't going to feel inclined to make a Navi-based console. They're going to want to build up hype for the big DXR push. This also adds credibility to the idea that Navi is GCN 6.0 and Arcturus is something new with tensor cores and the like.
Posted on Reply
#59
R0H1T
notb, post: 3990919, member: 165619"
I don't understand the popularity of this argument. AMD is barely profitable - this kind of profit could have been made with clever accounting.
But people see a green bar on a plot and suddenly AMD is in fantastic condition. :)
Call me when they have profit margins like Nvidia or Intel. They're competing in the same market and have similar costs. Where's the profit?

And you know all this because Radeon VII is so cheap?
What else will you tell me? That 7nm chips will be more efficient and require less cooling? :p

Yes, in a distant future 7nm *may* become cheap. But at the moment it still includes a huge premium for R&D and the supply is very limited. And it may be like that for years.
So on one hand we have a new node that is very useful for smartphone makers, who can easily ask a $1000+ price for their flagship models despite the CPU being tiny and relatively cheap. They can pay a lot for 7nm.
On the other hand you have 3 companies making consumer CPUs and GPUs, who need 7nm to push performance, because that's what gaming clients demand. Their chips are huge and make up the majority of a PC's cost.
IMO there is just one possible solution: gaming PC parts will become silly expensive. So if you're irritated by RTX or Intel 9th gen prices, brace yourself...

Millions of notebooks weren't enough for AMD to bother and make Zen more frugal. So yeah, why make cards that support a few AAA games indeed... Although in 2019 a few will become a few dozen, and by 2021 most new games should support RTRT (if it catches on). I wonder when AMD will decide it's worth the fuss and whether they'll still be in business.

Are you sure you know what Rapid Packed Math stands for? It just means doing 2 FP16 operations in place of one FP32 operation - an idea coming straight from compute cards.
So first of all: in the ideal situation it gives you 2x performance - that's a far cry from what a purpose-built ASIC can do.
Second: this will work only in specific scenarios and, more importantly, only when you force it explicitly in the code. In other words: game engines would have to be rewritten for AMD.

So both ideologically and practically it's a lot like AVX-512.

Sorry mate... Sometimes I understand what you're trying to say and sometimes I don't. This is the latter case. Can you rewrite this sentence?

IMO it doesn't need to be in the same chip. You should literally be able to add RTRT or tensor cores on a separate card. It works in the Nvidia world pretty well - it's just a question of latency. But 2 chips on the same card? Should work perfectly well. The whole point of IF is being able to combine different circuit types.
They will never have those margins & you know why that is. People want a 16-core Ryzen at or below $500 though Intel is charging virtually double per core, & even then many will prefer Intel because of 5.5GHz OCing madness :twitch:

In the case of Nvidia it's similar. Vega II is a dud because it sells for $699 with 16GB HBM2 & can only match or exceed the 1080 Ti after 2(?) years, lest we forget what the competition has now & their prices! It's fashionable to hate on AMD because they always seem to under-perform or exaggerate some of their selling points, but hey, no one remembers that 28-core 5GHz joke or "freesync doesn't even work" from you know who :rolleyes:

And that is why AMD will always be the underdog &/or less profitable, simply because the (bigger) brand name & bluster win all the time - virtually every time these days! AMD could sell next-gen chips beating Intel in virtually every metric, except possibly raw clocks, yet Intel will still outsell them 4:1 or 3:1, as these are the times we live in & that's our fault!
Posted on Reply
#60
Nkd
So many people actually believing this rumor as fact? Never heard of this site. There are more than enough credible sources who have said Navi is mid-year, and that was after CES. On Navi, I believe the guy who broke the Radeon VII news, and was the first to have pictures of it before CES, over this random site. Do people realize there is supposed to be a bigger version of Navi coming out too, right? Maybe that is what's coming in Q4.
Posted on Reply
#61
Imsochobo
64K, post: 3990519, member: 148270"
You have to understand AMD's position financially to fully understand why they lag behind:

2017 Annual Financial Reports

Intel 71 billion dollars revenue and 21 billion dollars profit
Nvidia 9.7 billion dollars revenue and 3 billion dollars profit
AMD 5.3 billion dollars revenue and 43 million dollars profit (this includes both CPU and GPU businesses)

The problem AMD has is pretty obvious and frankly it's amazing that they were even able to bring Ryzen to market as it is.
You do know they're paying off a big debt?
Posted on Reply
#62
notb
R0H1T, post: 3991109, member: 131092"
They will never have those margins & you know why that is. People want 16 core Ryzen at or below $500 though Intel is charging virtually double/core & even then many will prefer Intel because 5.5GHz of OCing madness :twitch:
// I don't know why we moved to CPUs, but whatever
So this is basically AMD's fault for ignoring demand.
We know what CPUs sell best today. We know how Intel makes money.
AMD should have tried to attack these markets, because clearly that's where the money is.

But AMD decided NOT to enter the profitable niches. They decided to make a very specific type of CPU (basically: to win benchmarks and excel in reviews). And as a result they have to accept a very specific profit margin - which is low. But it is their decision.
They could have made CPUs similar to Intel's. They could have gone after Intel's clients and convinced them with lower prices. They would maintain their 10-15% market share, but with a much higher profit.

Unlike their CPUs, AMD's GPUs are at least doing what they should. It's just that the technology is old and they seem not to have any idea how to improve it.
And that is why AMD will always be the underdog &/or less profitable, simply because (bigger) brand name & bluster wins all the time - virtually every time these days! AMD could sell the next gen chips beating Intel in virtually every metric, except possibly raw clocks yet Intel will still outsell them 4:1 or 3:1 as these are the times we live in & that's our fault!
Not true at all.
Big brand has more market share by definition. It has little to do with profitability (at least in electronics).
In a stable market of comparable products, all producers should have similar prices. So once costs are similar, they should have similar profitability. I'm not making this up - that's how the economy works. :)
AMD is not making money, so they're doing something wrong. Either the prices are way too low or the costs are too high.
They're not making a fantastic product that everyone wants. They won't increase their market share by a lot.

If AMD keeps making near zero profit, they won't build any reserves and they won't have money to develop new product lines. They'll be OK as long as the market is healthy and there's high demand for what they currently make.
Posted on Reply
#63
R0H1T
notb, post: 3991176, member: 165619"
So this is basically AMD's fault for ignoring demand.
We know what CPUs sell best today. We know how Intel makes money.
No, it's AMD that brought eight cores to the mainstream market; Intel had been fleecing customers for ages with their quad cores & overpriced HEDT. But those sticking with Intel still bought the 7700K, then the 8700K & now the 9700K or above. Same with HEDT, albeit to a lesser extent because AMD's many (more) cores work better in that environment.
notb, post: 3991176, member: 165619"
But AMD decided to NOT enter the profitable niches.
You mean the 1800x or how many derided it for being overpriced or something? Remember the competition, what was the price of 8 core Intel then?
notb, post: 3991176, member: 165619"
They could have made CPUs similar to Intel's.
Similar how?
notb, post: 3991176, member: 165619"
Contrary to CPUs, AMD's GPUs are at least doing what they should.
Yet it's the CPU division which is bringing in the big bucks.
notb, post: 3991176, member: 165619"
Big brand has more market share by definition. It has little to do with profitability (at least in electronics).
Like Apple, no?
notb, post: 3991176, member: 165619"
If AMD keeps making near zero profit, they won't build any reserves and they won't have money to develop new product lines.
Not entirely their fault, that's the point.
Posted on Reply
#64
theoneandonlymrk
notb, post: 3990919, member: 165619"
I don't understand the popularity of this argument. AMD is barely profitable - this kind of profit could have been made with clever accounting.
But people see a green bar on a plot and suddenly AMD is in fantastic condition. :)
Call me when they have profit margins like Nvidia or Intel. They're competing in the same market and have similar costs. Where's the profit?

And you know all this because Radeon VII is so cheap?
What else will you tell me? That 7nm chips will be more efficient and require less cooling? :p

Yes, in a distant future 7nm *may* become cheap. But at the moment it still includes a huge premium for R&D and the supply is very limited. And it may be like that for years.
So on one hand we have a new node that is very useful for smartphone makers, who can easily ask a $1000+ price for their flagship models despite the CPU being tiny and relatively cheap. They can pay a lot for 7nm.
On the other hand you have 3 companies making consumer CPUs and GPUs, who need 7nm to push performance, because that's what gaming clients demand. Their chips are huge and make up the majority of a PC's cost.
IMO there is just one possible solution: gaming PC parts will become silly expensive. So if you're irritated by RTX or Intel 9th gen prices, brace yourself...

Millions of notebooks weren't enough for AMD to bother and make Zen more frugal. So yeah, why make cards that support a few AAA games indeed... Although in 2019 a few will become a few dozen, and by 2021 most new games should support RTRT (if it catches on). I wonder when AMD will decide it's worth the fuss and whether they'll still be in business.

Are you sure you know what Rapid Packed Math stands for? It just means doing 2 FP16 operations in place of one FP32 operation - an idea coming straight from compute cards.
So first of all: in the ideal situation it gives you 2x performance - that's a far cry from what a purpose-built ASIC can do.
Second: this will work only in specific scenarios and, more importantly, only when you force it explicitly in the code. In other words: game engines would have to be rewritten for AMD.

So both ideologically and practically it's a lot like AVX-512.

Sorry mate... Sometimes I understand what you're trying to say and sometimes I don't. This is the latter case. Can you rewrite this sentence?

IMO it doesn't need to be in the same chip. You should literally be able to add RTRT or tensor cores on a separate card. It works in the Nvidia world pretty well - it's just a question of latency. But 2 chips on the same card? Should work perfectly well. The whole point of IF is being able to combine different circuit types.
"Where's the profit? " your guessing and want facts to prove your wrong?? odd


"Are you sure you know what rapid packed math stands for? "

Are you? The limits are not the same on Vega 20 as on Vega 10: it can do down to 8-bit and possibly 4-bit Rapid Packed Math, so you ARE wrong, and it's not like AVX-512, which is used, as you should know, for things other than AI, usually.

"So first of all: in ideal situation it gives you 2x performance - that's far cry from what purpose built ASIC can do.
Second: this will work only in specific scenarios and, more importantly, only when you force it explicitly in the code. In other words: game engines would have to be rewritten for AMD."

First, re-read the RPM specs on Vega 20. What, you mean via a new API like DX12+ or Vulkan? That's happening; DX11 is finally going the way of 10 and 9, and Nvidia's relic lead on older games is less relevant.

My point, which you didn't get:
Nvidia sold CUDA on its ability to do compute prior, yet it got dumped straight up when they needed to do AI, since it is not as fast as custom, specific hardware, and they made tensor and RT cores, which are largely better at specific tasks.

They presented a personal demonstration to PROSUMERS that GPGPU is not for them and special circuitry could be significantly better.

Now go figure why SoftBank sold out.

"IMO it doesn't need to be in the same chip. You should be literally able to add RTRT or tensor cores on a separate card. It works in the Nvidia world pretty well - it's just a question of latency. But 2 chips on the same card? Should work perfectly well. The whole point of IF is being able to combine different circuit types."

Your opinion is that of a user, not an architect. It may seem easy to extend: add an extra side bus to accommodate a new RT chip that you also just made in this last year, hypothetically, then slap it on a 2.5D interposer that you also just designed this last year.

But the facts are that's two to three extra chips to design since Nvidia announced RT, plus a redesign of the one you just spent 3 years designing and validating.

Then you have validation testing to prove correct operation and endurance; now add CE and environmental testing.

You're being silly; it's way too much work and not possible in any way.

And finally, AMD's margin increased, not decreased, this year. THEY ARE MAKING A PROFIT ON EVERYTHING THEY SELL; they are not a charity.
Posted on Reply
#65
64K
Imsochobo, post: 3991157, member: 66457"
You do know they're paying off a big debt?
Yes. A few years ago, when some financial analyst sites were pretty sure AMD would need to file for bankruptcy and pointed to RTG being split off as a separate company as an indication that AMD might need to sell it, things were looking pretty bad for AMD. Their stock had fallen all the way to $1.60 a share and some were thinking it would continue to go lower - even below $1, which would cause NASDAQ to de-list them from the exchange. Their market cap at the time was less than half of what they paid for ATI back in 2006 alone. They owed far more than they were worth. They had sold their fab and most of their assets, including the office building where they were located, which they had to lease back. Their GPU market share sank to 20%. Their CPU market share was pitiful. Several of their debts were listed at that time, and one was for 600 million dollars, which is due this year. They've been showing profits in their quarterly reports for a while now, but not enough to completely pay off that debt. Some of it will need to be rolled over, but that shouldn't be a problem.

The thing is that it's easy to do well when money is rolling in and the future looks bright. It's a lot harder to do well under adversity and AMD was under serious adversity back then and yet they still brought Ryzen to market. They have my respect for that. Having owned my own business I understand what they went through.

Today they have regained some CPU market share. They have also regained GPU market share (largely due to the mining craze though). Their stock is up to $23 a share and their future is looking pretty solid as long as they continue to make good decisions which I think they will do under Lisa Su's leadership.
Posted on Reply
#66
Imsochobo
theoneandonlymrk, post: 3991241, member: 82332"
....
and finally AMD's margin increased not decreased this year, THEY ARE MAKING A PROFIT ON EVERYTHING THEY SELL they are not a charity.
And more importantly.
They're paying the banks instead of loaning more money!
Posted on Reply
#67
Midland Dog
eidairaman1, post: 3990724, member: 40556"
Stop making assumptions, none of us know what Navi Brings, we are not AMD engineers here



Stop-gap card: it's an RI card that had yield problems and didn't make it as a full RI card. It will suffice for them until Q4 2019+.

Pretty much a Radeon Pro
navi is confirmed junkcn
Posted on Reply
#68
eidairaman1
The Exiled Airman
Midland Dog, post: 3991270, member: 168254"
navi is confirmed junkcn
Got any legitimate links?

Your comment is nothing but speculation at this point.
Are you an engineer?

I don't think so.

Typical of someone with a greeneye.
Posted on Reply
#69
renz496
FordGT90Concept, post: 3990500, member: 60463"
Look how many games actually use DXR. Then look at the primary customer Navi is for: Sony who doesn't even use DirectX in their consoles. I just don't see tensor cores nor DXR as being a priority for AMD, especially not something to derail product timelines for. As the OP says, I think the delay is because of 7nm issues moreso than Navi itself. The fact there's limited availability of Radeon VII cards also hints at 7nm issues.

I wouldn't be surprised if Microsoft is insisting on DXR for their next Xbox which might be Arcturus architecture (follows Navi).
Tensor cores might not be a priority for AMD (hence even their Instinct cards lack them, despite needing to compete head-to-head with Nvidia's Volta in ML), but not DXR. When AMD launched Vega, they bragged that Vega had the most complete support for DX12 features at the time, unlike Pascal. To compete with Nvidia in RT they are going to need specialized hardware similar to Nvidia's RT cores. Sure, current GCN should be able to run RT without RT cores, but the performance impact will be similar to Pascal's when trying to run RT. The problem with AMD is they do not have the power budget to put RT cores into their current GCN architecture. Just look at Vega 20 itself: less than 350mm2 on top of being built on a 7nm node, and yet power consumption is already reaching the 300W mark.
Posted on Reply
#70
cucker tarlson
theoneandonlymrk, post: 3990814, member: 82332"
Your getting confused clearly, Amd said ,one game is not enough dev support yet for Amd to bother.
And Amd have Rspid packed math.
used in FC5 exclusively
Posted on Reply
#71
theoneandonlymrk
cucker tarlson, post: 3991322, member: 173472"
used in FC5 exclusively
It is also a main feature of their pro driver and is used much more via that, as is RPM in general on AMD hardware, more than your view of it suggests. You forgot Wolfenstein too, but point taken: they are pushing their own tech.
Posted on Reply
#72
Turmania
I really dislike Intel and Nvidia and really wanted to go back to the red camp. Perhaps it will never happen, as AMD just so happens to shoot themselves in the foot very often.
Posted on Reply
#73
tfdsaf
xkm1948, post: 3990460, member: 50521"
With Navi still being GCN, it probably won't be much of a challenger to current Nvidia anyway.
1. This is utter ignorance at its finest. Even Vega is NOT GCN, but even if it were, there are major changes between the GCN architecture generations. GCN 1 is literally incomparable with GCN 4. Each GCN generation has been majorly redesigned and improved.

2. Navi is going to be a new architecture, just like Vega is a new architecture and NOT GCN.

3. Nvidia has been using the same architecture since their GTX 400 series. Literally, the latest Turing architecture is technically an iteration and improvement over the GTX 400 architecture. Heck, even that architecture shared much of its design with the GTX 200 series. In essence, the biggest architectural shift Nvidia made was from their 9000 series to their GTX 200 series, and then another smaller shift from the 200 series to the 400 series. Ever since the GTX 400 series it's been small iterations and improvements over time.

AMD made the unified-shader shift way before Nvidia actually; they did it in their HD 2000 series GPUs, two years before Nvidia did it. Since then the biggest jump for AMD was with their 4000 series.
Posted on Reply
#74
bug
tfdsaf, post: 3991457, member: 169887"
1. This is utter ignorance at its finest. Even Vega is NOT GCN, but even if it were, there are major changes between the GCN architecture generations. GCN 1 is literally incomparable with GCN 4. Each GCN generation has been majorly redesigned and improved.

2. Navi is going to be a new architecture, just like Vega is a new architecture and NOT GCN.

3. Nvidia has been using the same architecture since their GTX 400 series. Literally, the latest Turing architecture is technically an iteration and improvement over the GTX 400 architecture. Heck, even that architecture shared much of its design with the GTX 200 series. In essence, the biggest architectural shift Nvidia made was from their 9000 series to their GTX 200 series, and then another smaller shift from the 200 series to the 400 series. Ever since the GTX 400 series it's been small iterations and improvements over time.

AMD made the unified-shader shift way before Nvidia actually; they did it in their HD 2000 series GPUs, two years before Nvidia did it. Since then the biggest jump for AMD was with their 4000 series.
You might want to let these guys know: https://en.wikipedia.org/wiki/AMD_RX_Vega_series
They have Vega listed as GCN5.

As for previous iterations, they were such "major redesigns" that AMD themselves called them 1.1, 1.2 and so on. Only when it became painfully clear their architecture was getting long in the tooth (I believe it was with Polaris?) did they go back and rename everything. That's not to say there were no improvements (Vega is clearly faster than, say, a 290X), but I'm sure they weren't as big as AMD wished.
Posted on Reply
#75
Super XP
AMD is keeping its next-generation GPU tight-lipped.
If speculation is correct and this 7nm Navi is based on GCN, hopefully AMD pulls this off before the real deal is released.
Though there have been conflicting reports claiming 7nm Navi may be the last GCN-based design, but will get an overhaul thanks to 7nm.
Other sources state 7nm Navi is a completely new GPU design. Only time will tell.

One thing I hope AMD does not do is re-brand its GPUs again. This is probably one of the worst strategies a company can follow.

HTC, post: 3990845, member: 51238"
Are you sure?

From what I heard/read, Navi is supposed to be the last GCN-based family of cards, and it will be Arcturus (if this is indeed its name) that will be on a new non-GCN arch.

Supposedly, Navi was to be announced @ CES but due to unforeseen problems, it was withheld from CES. Think TLB Phenom bug style of issues or perhaps worse. Also supposedly, Navi is meant to replace Polaris, meaning there won't be a 2080 Ti contender coming from the Navi arch, which means Vega VII is it, unless AMD refines it some more @ a later date: can it even be done???
That was a direct quote from a site that posted this 3 months ago. The question is: did things change within AMD? Not sure, but seeing how Navi is delayed till October 2019, perhaps it's no longer the case.

eidairaman1, post: 3990806, member: 40556"
The VII card should have come out during the 1080 Ti era. Too late.

AMD unfortunately is 1 step behind the 2080/Ti.

Navi needs to be something considerable; perhaps not component-limited like the 480/580 were compared to the 290-390X...
AMD doesn't have to beat the crap out of Nvidia; they only need to remain competitive in performance and price. It seems AMD puts more concentration on its CPU department, because the GPU department has been lacking for years now. AMD GPUs are far from useless of course - they can run any game you throw at them - it's just that benchmarks don't do them justice, and many rely on benchmarks religiously. Until AMD can launch its brand-spanking-new GPU design, they need to compete on price/performance.
Posted on Reply
Add your own comment