Monday, August 5th 2019

Intel Plans to Launch Its Discrete GPU Lineup Starting at $200

During an interview with the Russian YouTube channel PRO Hi-Tech, Raja Koduri, Intel's chief architect and senior vice president of architecture, software and graphics, talked about his career, why he left AMD, and where Intel is headed with its discrete GPU efforts. One of the most notable things Mr. Koduri said concerned the upcoming GPU lineup code-named Arctic Sound. He noted that Intel plans to release its first GPU as a mid-range model priced at $200, with enterprise solutions that use HBM memory to follow.

Koduri said that he wants to replicate AMD's strategy of capturing high-volume price points, such as the $199 Radeon RX 480. The plan is to bring an affordable, well-performing GPU to the masses, "GPUs for everyone" as he calls them. Additionally, he stated that Intel's current strategy revolves around price, not performance, providing the best possible value to consumers. Intel's approach for the next two or three years is to launch a complete lineup of GPUs, with a common architecture used for everything from the iGPUs found in consumer CPUs to data-center GPUs.

Update: PRO Hi-Tech has posted a snippet of the Raja Koduri interview without the Russian voice-over commentary. What he actually said was: "...Eventually our architecture, as publicly said, has to get from mainstream, which is starting at around $100, all the way to data-center-class graphics with HBM memory...". This means the earlier speculation about a $200 graphics card is false; he said no such thing. All he said is that Intel wants to enter the "mainstream" GPU market and work its way up to the data center.
Source: PRO Hi-Tech

77 Comments on Intel Plans to Launch Its Discrete GPU Lineup Starting at $200

#51
Quicks
laszlo: knowing intel cpu pricing somehow i doubt their gpu will be priced according to performance...
Pretty sure the first or second series will have a good performance-to-price ratio. But after they make their name, it will be like you say. Hey, the more the better; then we have three companies fighting on pricing and performance, which will be good for us, unless AMD decides to take a back seat again and only focus on their CPUs... Now we just need Nvidia making CPUs; that would be great sport.
Posted on Reply
#52
RH92
AleksandarK: Additionally, he stated that Intel's current strategy revolves around price, not performance, providing the best possible value to consumers.
That basically means Intel isn't there yet with performance, so they have to play the price card!
Posted on Reply
#53
TesterAnon
I'm sure Intel will love the GPU market: minimal upgrades every generation and prices that increase exponentially.
Posted on Reply
#54
danbert2000
If Intel really does have a good GPU architecture in the works, it could be an exciting time for NUCs and other small, integrated computers. AMD got the ball rolling with Ryzen APUs, and they still have Zen 2/Navi to combine into a good box, but Intel will also soon be in a position to build small gaming boxes around a single chip, and that's good news for people who want the flexibility of the PC environment without the cost or complexity of buying/building a box from a bunch of separate components. A Skull Canyon V2 could get close to an Xbox One S in performance and make a play for the console space.
Posted on Reply
#55
PanicLake
To me the more interesting question is "when"!
Posted on Reply
#56
Turmania
Three-player competition is always good for us consumers. I do not expect instant fireworks from Intel or Koduri himself. I would also love it if Nvidia went into the CPU business as well.
Posted on Reply
#57
ValenOne
las: The 512-core version could have up to 14.7 TFLOPS FP32, depending on clock speeds. That's more than a 2080 Ti.

I don't think it will beat the 2080 Ti in gaming though... xD But Intel could have something decent up their sleeve here. The 10th-gen mobile chips are not bad in the iGPU department: three times faster than 9th gen on average, and that's with a very low core count and clock speed.

Looking forward to seeing the performance of these 4 dGPUs.
For graphics, TFLOPS is nearly useless without good raster hardware.
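(For context, a minimal sketch of where a figure like 14.7 TFLOPS could come from, assuming Gen11-style EUs that each retire 16 FP32 ops per clock via two SIMD4 FMA pipes; the 1.8 GHz clock is purely hypothetical.)

    # FP32 throughput sketch for a hypothetical 512-EU part.
    # Assumes Gen11-style EUs: 2 x SIMD4 FMA = 16 FP32 ops per clock per EU.
    def tflops(eus, ops_per_clock_per_eu, clock_ghz):
        return eus * ops_per_clock_per_eu * clock_ghz / 1000.0

    print(tflops(512, 16, 1.8))  # -> ~14.75 TFLOPS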
Posted on Reply
#58
Blueberries
So with a little bit of market logic we can extrapolate that their base-model discrete GPU will have at least GTX 1660 performance.
Posted on Reply
#59
aQi
Ha ha ha ha ha “period”
Ha ha ha ha ha
Ehm ehm
Ha ha ha ha ha

I wonder if Intel still thinks it can be a game changer in the graphics world. They speak as if they think people are die-hard fans of their iGPs and are going to drop everything and buy their graphics cards.

I know where all of this is going. First they introduce AI instructions. Second, they want the PCH to be part of the CPU. Now they are hunting the GPU market. Hmmm, things are starting to add up. Nvidia holds the current position in graphical AI for Tesla and Volvo, so Intel wants a piece of that, as there is no competitor in that area.
Posted on Reply
#61
Massman
HD64G: Exactly what any sensible person could predict for their 2nd try (Larrabee anyone?) in this market.
3rd try, actually; you're forgetting the i740 and its doomed successor, the i752: en.wikipedia.org/wiki/Intel740
Posted on Reply
#62
ypsylon
I hope it won't be overpriced by 50% like their current CPU offerings.
Posted on Reply
#63
ObiFrost
Meanwhile on Tom's:
"Our strategy revolves around price, not performance. First are GPUs for everyone at $200."
Basically GT 1030 performance with HBM, because "everyone needs a GPU".
Posted on Reply
#64
TheEmptyCrazyHead
Am I the only one who finds the idea of an AMD CPU paired with an Intel GPU kind of hilarious?
Posted on Reply
#66
Vayra86
Turmania: Three-player competition is always good for us consumers. I do not expect instant fireworks from Intel or Koduri himself. I would also love it if Nvidia went into the CPU business as well.
They are; it's called Tegra, and it's a high-power-draw ARM chip with pretty decent performance. It does not scale well down to lower-power devices though; you see it mostly in, for example, the Shield and Nvidia's automotive chips, neither of which ever became a sales cannon.

en.wikipedia.org/wiki/Tegra

And it's not just a few Cortex cores fused together either, not all of it anyway:

en.wikipedia.org/wiki/Project_Denver
Posted on Reply
#67
CheapMeat
Hardware communities have become so whiny... WAHHHH, WE'RE NOT GETTING CARDS THAT BEAT AN RTX 2080 Ti FOR $200, WAHHHH. If it doesn't beat everything by 10,000,000% for dirt cheap, Intel wasted their time. What, it doesn't have 2080 Ti performance at only 10 watts? PIECE OF JUNK... More low-end and mid-range cards? THROW THEM IN THE TRASH... /s

Having something different, maybe even a rock-paper-scissors type of variety, seems to be worthless to most. Having more choice, also worthless it seems. And that's plain sad. All the grown men have turned back into 12-year-old brats.

And no, I'm not saying buy regardless of x, y & z.
Posted on Reply
#68
Vya Domus
rvalencia: For graphics, TFLOPS is nearly useless without good raster hardware.
No it's not. TFLOPS is pretty much the only metric that can predict performance consistently; prove me wrong and show me any other hardware specification that is as reliable. ROP count, on the other hand, is in fact almost useless as a metric for estimating performance.

The general rule is that, over time, as theoretical TFLOPS increase, so does actual performance. Pick any two random GPUs from the last decade and order their performance by the number of TFLOPS, and I bet the prediction would be correct 80% of the time.
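(As a rough illustration of that rule of thumb, a sketch using published shader counts and reference boost clocks, with the usual FP32 FLOPS = 2 x shaders x clock approximation; real cards typically boost higher than these spec-sheet values.)

    # Theoretical FP32 TFLOPS = 2 ops (FMA) x shader count x boost clock (GHz).
    gpus = {
        "GTX 1080 (GP104)": (2560, 1.733),
        "RTX 2070 (TU106)": (2304, 1.620),
        "RTX 2080 (TU104)": (2944, 1.710),
    }
    # Sort descending by theoretical throughput and print the ranking.
    for name, (shaders, ghz) in sorted(gpus.items(),
                                       key=lambda kv: -kv[1][0] * kv[1][1]):
        print(f"{name}: {2 * shaders * ghz / 1000:.1f} TFLOPS")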
Posted on Reply
#69
Vayra86
Vya Domus: No it's not. TFLOPS is pretty much the only metric that can predict performance consistently; prove me wrong and show me any other hardware specification that is as reliable. ROP count, on the other hand, is in fact almost useless as a metric for estimating performance.

The general rule is that, over time, as theoretical TFLOPS increase, so does actual performance. Pick any two random GPUs from the last decade and order their performance by the number of TFLOPS, and I bet the prediction would be correct 80% of the time.
AMD vs Nvidia cards consistently prove that TFLOPS does not track real-world or relative performance. Not even ballpark.

Number is higher so perf is higher... sure. But that happens with a lot of numbers, and just like the rest it does not really translate across brands, GPU generations, etc.

It's a bit like comparing clock speeds to gauge performance: it only works within one generation of the same architecture. Ergo: pointless for most comparisons.
Posted on Reply
#70
Vayra86
Vya Domus: No, you are mixing things up. AMD vs Nvidia shows that the TFLOPS metric of one architecture cannot 100% model the performance characteristics of another architecture. It does not, however, contradict my statement that TFLOPS are in general a good tool for approximating or predicting performance. Please tell me how many 5 TFLOPS GPUs you can find that consistently beat, let's say, an 8 TFLOPS GPU. Go ahead, find any such pair, any generation, any manufacturer. Then tell me how many you found that don't.

Is the ROP count important? Yes. A good indicator of performance? No, it's absolutely worthless.

GP104: 64 ROPs, 8.8 TFLOPS
TU106: 64 ROPs, 7.5 TFLOPS
TU104: 64 ROPs, 10.0 TFLOPS

If we go by TFLOPS, the ranking should be:

TU104 > GP104
TU104 > TU106
GP104 > TU106

We know it's in fact more like:

TU104 > GP104
TU104 > TU106
GP104 < TU106

2 out of 3: a pretty damn good estimate considering I knew absolutely nothing beyond the theoretical FLOPS.

What does the ROP count tell us here? Absolutely nothing, nada, zero. The one Vega example does seem to be limited by its ROPs, and even that is an assumption, because neither you nor I know it for sure; GCN is known to have other limitations and peculiarities. But at some point you have to realize TFLOPS are the dominant factor here, not ROPs. It's a bizarre argument you've got; at the end of the day you can be limited by anything. You could extend the notion and claim that memory bandwidth is the most important thing, because you can have as many execution ports and ROPs as you want, yet without the memory bandwidth it's all for nothing.

Here is a much more sensible explanation of why Vega performs better at higher clocks: unlike other GPU architectures, GCN has scalar ports used for instructions that don't need multiple SIMD lanes, and we know GCN can be very inefficient with its 64-wide wavefront. It would make sense that shaders using a lot of instructions that cannot be efficiently scheduled in a wavefront would run much quicker at higher clocks.

Fact: ROP counts don't change much from generation to generation, while shader counts do, a lot.

I've read dozens of papers on graphics and compute, and not once were ROP counts cited as indicators of performance; people always use GFLOPS, theoretical or measured. It really boggles my mind why you all insist on this; it's simply not the case.

That's a funny statement, because the only bits left in a GPU that are DSP-like are in fact things like the ROPs and TMUs.
I think we are saying mostly the same thing: it works when comparing similar architectures and it does not work outside of them; once you start comparing across them, all bets are off.

Either I am still missing the point, or we just attribute different value to this number. I really don't ever use this metric for any worthwhile comparison, ever...

I mean, comparing Pascal and Turing, or any other Nvidia architecture since Kepler, isn't the best example to make your point. Most of the basics haven't changed much.
Posted on Reply
#71
Apocalypsee
Why has this turned into an AMD vs Nvidia thread? The fact is, this is a thread about Intel's next discrete GPU. The only thing I wish for is competitive price/performance, and Intel really needs to step up its driver department. With Gen11 supporting integer scaling, I'm pretty sure they will carry that over to the discrete cards as well, and with that we will (hopefully) see AMD and Nvidia supporting it too.
Posted on Reply
#72
Terryg0t1t
What's with the negative responses in the comment section? The mobile version of this, with 1/8 the core count, is almost as fast as the R7 GPU in the 2400G. With HBM memory and higher clocks, the dGPU version could pull 75% to 100% of the performance of the 5700/2060 Super.
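(Back-of-the-envelope, with purely hypothetical numbers: if the 1/8-size mobile part roughly matches the 2400G's iGPU, ideal scaling to the full part would be 8x, but real GPUs scale sublinearly with unit count.)

    # Hypothetical scaling sketch, not measured data.
    # Baseline: the small mobile part, normalized to 1.0x performance.
    base_units = 64                      # assumed 1/8 of a 512-unit part
    full_units = 512
    for exponent in (1.0, 0.8, 0.7):     # ideal vs. plausible sublinear scaling
        scaled = (full_units / base_units) ** exponent
        print(f"exponent {exponent}: {scaled:.1f}x the mobile baseline")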
Posted on Reply
#73
kings
Terryg0t1t: What's with the negative responses in the comment section?
Because it's Intel, and bashing Intel is the cool thing to do nowadays!
Posted on Reply
#74
Tatty_Two
Gone Fishing
Cleaned up a bunch of off-topic nonsense that does not directly relate to this news thread; any more of it and free holidays will be on offer..... carry on.
Posted on Reply
#75
ValenOne
Apocalypsee: Why has this turned into an AMD vs Nvidia thread? The fact is, this is a thread about Intel's next discrete GPU. The only thing I wish for is competitive price/performance, and Intel really needs to step up its driver department. With Gen11 supporting integer scaling, I'm pretty sure they will carry that over to the discrete cards as well, and with that we will (hopefully) see AMD and Nvidia supporting it too.
The AMD and NVIDIA examples are used to show that TFLOPS arguments are meaningless for raster graphics power.
las: The 512-core version could have up to 14.7 TFLOPS FP32, depending on clock speeds. That's more than a 2080 Ti.

I don't think it will beat the 2080 Ti in gaming though... xD But Intel could have something decent up their sleeve here. The 10th-gen mobile chips are not bad in the iGPU department: three times faster than 9th gen on average, and that's with a very low core count and clock speed.

Looking forward to seeing the performance of these 4 dGPUs.
That's wrong. The RTX 2080 Ti's boost modes can exceed the paper specs listed at www.techpowerup.com/gpu-specs/geforce-rtx-2080-ti.c3305

The RTX 2080 Ti can reach 17.186 TFLOPS, backed by six GPCs (which include the geometry/raster units) and 88 read/write ROPs, at about 1950 MHz.
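(A quick sanity check with the same 2 x shaders x clock approximation; the 2080 Ti has 4352 shaders, so 17.186 TFLOPS implies a sustained boost of roughly 1.975 GHz.)

    # FP32 TFLOPS = 2 x shaders x clock (GHz); the RTX 2080 Ti has 4352 shaders.
    shaders = 4352
    for ghz in (1.545, 1.950, 1.975):   # reference boost vs. observed boost clocks
        print(f"{ghz:.3f} GHz -> {2 * shaders * ghz / 1000:.2f} TFLOPS")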
Posted on Reply