
AMD Announces New Radeon Embedded GPUs

We all call a Ryzen or a Core i7 an x86 processor, even though they now have AVX, AVX2, AVX-512, AMD64, SSE4.2, etc., and are far removed from an 8086. The same happens with GCN: it's the same arch all the way from the HD 7000 series to Vega, with additions along the way.
Why you treat them differently is something I don't understand; you don't call the i7-7700K an AVX processor, it IS an x86 one.

You compile a program with LLVM for GCN and it works on an A8-7600, an HD 7730, and a Vega 56. "It's not GCN" is marketing, not reality.
 
We all call a Ryzen or a Core i7 an x86 processor, even though they now have AVX, AVX2, AVX-512, AMD64, SSE4.2, etc., and are far removed from an 8086.

We call a CPU "x86" because it is compliant with the x86 spec: registers, instructions, I/O, memory addressing, etc. When two CPUs are x86, that does not mean they are the same; the implementation of an instruction set can differ wildly. For example, AMD's FX chips executed 256-bit AVX operations as two 128-bit operations, while Intel used a single 256-bit unit. They may both be x86 or AVX or whatever the hell you want to call them, but they don't perform the same and they don't work in the same way. Just as different iterations of GCN don't work exactly the same and don't perform exactly the same.

Why you treat them differently is something I don't understand; you don't call the i7-7700K an AVX processor, it IS an x86 one.

I don't even know what you are trying to say with that. x86 and AVX are just names given to levels of compliance. The hardware implementations are not standardized.

"It's not GCN" is marketing, not reality.

Or with this...


it IS an x86 one.
You compile a program with LLVM for GCN and it works on an A8-7600, an HD 7730, and a Vega 56.

Compiling a program means converting code from one language to another, often a lower-level one. The only condition for that program to run is that the resulting instructions are executable by the target hardware. But guess what: you can compile the program several times for different hardware, where it can use different or additional instructions. As such you don't break compatibility, but newer, more capable hardware does not execute that program in the same way, because it is different.

You are looking at this skin-deep, and because of that you don't understand why you are wrong. You may say it again for the 100th time, it won't make it true: the architectural iterations of GCN aren't all the same thing. Though feel free to try; I mean, they do say that if you repeat a lie long enough it eventually becomes the truth.
 
Thanks for the quality response, saves some time :laugh:
 
We all call a Ryzen or a Core i7 an x86 processor, even though they now have AVX, AVX2, AVX-512, AMD64, SSE4.2, etc., and are far removed from an 8086. The same happens with GCN: it's the same arch all the way from the HD 7000 series to Vega, with additions along the way.
Why you treat them differently is something I don't understand; you don't call the i7-7700K an AVX processor, it IS an x86 one.

You compile a program with LLVM for GCN and it works on an A8-7600, an HD 7730, and a Vega 56. "It's not GCN" is marketing, not reality.
We call a Ryzen or i7 a CPU; @Vya Domus covered most of it for you. There are differences between instruction sets and processors. Processors can't run without instructions, then we add in updated instructions, add more cream to your coffee, and it all becomes a huge mess to the point we call it a clusterfluck, not a CPU... :eek::kookoo:

And now you are trying to compare CPU instructions with graphics card instructions while not understanding that iterations of the original design can be vastly different with each generation. It wouldn't surprise me if less than 30% of the original GCN 1.0 design is left, so yes, saying it's the same is true and false at the same time; wrap your head around that.
 
Looks like I stand corrected then.
Still, GCN is not what it was. Maybe 10 nm or 7 nm will help?
 
Not if they are nodes optimized for low power.
 
FTFY. By the time Vega "trades blows" with the 1080 Ti, Volta will have already been retired in favor of whatever replaces it. Also, just because one game favors an arch does not mean that arch has tons of untapped power. If that were true, then the 1060 should be way faster than the 480 based on a few NVIDIA-centric games.

Aint nobody but fanbois got time to wait that long for AMD to perform.

The Volta argument is baseless because most people do not buy new graphics cards every year, and NVIDIA releasing a new architecture will not magically make your older card better. I am speaking from an architectural perspective: Vega has a lot of potential. And from a consumer perspective, you spend $100 less on a graphics card now that will keep up with games for a few years to come. Here are some screenshots from past reviews:
(attached: two relative-performance charts from past TPU reviews)

Look at how the 290x literally jumped up a whole tier in one year by the time 390x was released.
 
You are simply wrong. One cannot help but classify your comments as trolling.

Let's just stop spamming this thread with the same thing over and over.

He's actually basically right until that statement.

It is a GCN iteration. I'm sorry fanboys, but even AMD qualifies it as such.

That said, GCN is not the problem.

Their insistence on retaining full double precision is.
 
Look at how the 290x literally jumped up a whole tier in one year by the time 390x was released.
That 390X is overclocked, running at least 50 MHz higher than stock. You should also compare "all resolutions" vs. "all resolutions" and not "all" vs. "4K". Yeah, we all know Hawaii is better at high resolutions than Kepler, nothing new here. Also keep in mind that the 780 Ti is a horribly low-clocked reference card, whereas the 390X Gaming isn't. I saw newer reviews with custom 780 Tis that are clocked about 150 MHz higher than reference ones; they are still very fast, and mostly faster than the 390X and comparable GPUs.

However, I concur with your original statement, saying that Vega has a lot of potential yet to be unlocked.
 
He's actually basically right until that statement.

It is a GCN iteration. I'm sorry fanboys, but even AMD qualifies it as such.

That said, GCN is not the problem.

Their insistence on retaining full double precision is.

And lose all those mining sales, are you mad?
 
And lose all those mining sales, are you mad?

They would need a high end compute oriented product first, but that would be the way to compete in gaming, yes.
 
Maybe if they have excess money from Ryzen sales, they could afford to design a new architecture just for gaming, with all the compute/pro crap removed and with more ROPs and higher core clocks. Needless to say, it would kind of be like AMD's Maxwell: less energy consumption, way more performance.
 
I would love to have something like that on an APU.
 
He's actually basically right until that statement.

It is a GCN iteration. I'm sorry fanboys, but even AMD qualifies it as such.

That said, GCN is not the problem.

Their insistence on retaining full double precision is.

And having different iterations means they are all the same? :kookoo:
 
That 390X is overclocked, running at least 50 MHz higher than stock. You should also compare "all resolutions" vs. "all resolutions" and not "all" vs. "4K". Yeah, we all know Hawaii is better at high resolutions than Kepler, nothing new here. Also keep in mind that the 780 Ti is a horribly low-clocked reference card, whereas the 390X Gaming isn't. I saw newer reviews with custom 780 Tis that are clocked about 150 MHz higher than reference ones; they are still very fast, and mostly faster than the 390X and comparable GPUs.

However, I concur with your original statement, saying that Vega has a lot of potential yet to be unlocked.
Oops, I didn't notice the 4K part. It seems the 390X review on TPU doesn't have an all-resolutions graph; I screenshotted the first graph, which is normally the all-resolutions relative one. Here are the 1080p graphs:
(attached: two 1080p relative-performance charts from TPU reviews)


The 780 Ti keeps a slight lead, but it's definitely smaller. Meanwhile, look at the widened gap between the 290X and the GTX 780. Also note that I'm not specifically pointing out the 390X; rather, I used that review as a point of reference to have a year-later comparison. Also, as you mentioned, AMD being better at 4K/high resolutions back then is a statement in itself. A person who invested in a 290X, especially a custom overclocked model, is pretty much still able to play most games similarly to, or even better than, a 580/580X user. And I remember a time when you could buy a 290X for less than $300. I doubt we can say the same thing about a GTX 780 user.
 
And having different iterations means they are all the same? :kookoo:

They are not 100% the same; they are backwards compatible, meaning the same instructions still apply, meaning they are the same at the base plus "the new things" (like audio-effects acceleration, memory compression, etc.). A new GCN is still GCN. Don't like that? Go buy a Pascal, or one of those new UHD Graphics.
 
Funny you mention Pascal; NVIDIA has been even more conservative than AMD with their designs. You're calling out the wrong company. :laugh:
 
And Intel has 3 generations of the same GPU, now in Ultra HD Graphics.
The only true CPU/GPU maker is VIA, bite me.
 
How you got from GCN to Intel or VIA, and how that is relevant, is beyond me; you are kind of running out of ideas.
 
Banter =/= flame war.
On topic: 1.2 TFLOPS implies a ~1170 MHz clock, so from 28 nm to 14 nm they managed a 46% clock increase at the same TDP, which is nice, but I think the FM2 APUs managed that before.
 
Meh, the "overclocking the 7750" bit was a bit much, I'll hand you that.
 
There is no respect for exaggeration nowadays, kids today... Double standards: I can say that Intel is still overclocking the 2500K and no one bats an eye.
 