
Intel Plans to Launch Its Discrete GPU Lineup Starting at $200

This guy, Raja, almost killed ATI/AMD

He was Senior Vice President and Chief Architect of RTG at the end of his tenure with AMD. That's a position of considerable authority, but do you not see that he answered to the CEO, Lisa Su? He did what he was told to do by her, and this is not intended as a slam on Lisa Su at all. As it turns out, she was completely right to focus on AMD's CPU business, and the financials bear that out.
 
Knowing Intel's CPU pricing, somehow I doubt their GPUs will be priced according to performance...

Pretty sure the first or second series will have a good price/performance ratio, but after they make their name it will be like you say. Hey, the more the better: then we have three players fighting on pricing and performance, which will be good for us, unless AMD decides to take a back seat again and only focus on their CPUs... Now we just need Nvidia making CPUs; that would be great sport.
 
Additionally, he states that Intel's current strategy revolves around price, not performance, providing best possible value to consumers.

That basically means Intel isn't there yet on performance, so they have to play the price card!
 
If Intel really does have some good GPU architecture going, it could be an exciting time for NUCs and other small, integrated computers. AMD got the ball rolling with Ryzen APUs, and they still have Zen 2/Navi to combine into a good box, but Intel is also going to be in a position to create small gaming boxes with one chip soon. That's good news for people who want the flexibility of the PC environment without the cost or complexity of buying/building a box with a bunch of different components. A Skull Canyon V2 could get close to an Xbox One S in performance and make a play for the console space.
 
Three-player competition is always good for us consumers. I do not expect instant fireworks from Intel and/or Koduri himself. I would also love it if Nvidia went into the CPU business as well.
 
The 512-core version could have up to 14.7 TFLOPS (FP32) depending on clock speeds. That's more than a 2080 Ti.

I don't think it will beat the 2080 Ti in gaming tho... xD But Intel could have something decent up their sleeve here. The 10th gen mobile chips are not bad in the iGPU department: 3 times faster than 9th gen on average, and that's with a very low core count and clock speed.
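For what it's worth, the 14.7 TFLOPS figure can be sanity-checked with a back-of-the-envelope calculation. This sketch assumes 8 FP32 ALUs per EU (as in current Intel Gen11 graphics) with an FMA counting as 2 FLOPs per clock; the ~1.8 GHz clock is my guess to hit the quoted number, not anything Intel has announced:

```python
def fp32_tflops(eu_count: int, alus_per_eu: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput in TFLOPS."""
    flops_per_clock = eu_count * alus_per_eu * 2  # FMA = 2 FLOPs per ALU per clock
    return flops_per_clock * clock_ghz / 1000.0   # GFLOPS -> TFLOPS

# 512 EUs x 8 ALUs x 2 FLOPs x 1.8 GHz ~= 14.7 TFLOPS
print(fp32_tflops(512, 8, 1.8))  # -> 14.7456
```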

Looking forward to seeing the performance of these 4 dGPUs.
For graphics, TFLOPS is nearly useless without good raster hardware.
 
So with a little bit of market logic we can extrapolate that their base-model discrete GPU will have at least GTX 1660 performance.
 
Ha ha ha ha ha “period”
Ha ha ha ha ha
Ehm ehm
Ha ha ha ha ha

I wonder if Intel still thinks it can be a game changer in the graphics world. They speak as if they think people are die-hard fans of their iGPUs and are going to drop everything and buy their graphics cards.

I know where all of this is going. First they introduce AI instructions. Second, they want their PCH to be part of the CPU. Now they're hunting the GPU market. Hmmm, things start to add up now. Nvidia holds the current position in graphical AI for Tesla and Volvo, so Intel wants a piece of that, as there is no competitor in that area.
 
Meanwhile on Tom's:
Our strategy revolves around price, not performance. First are GPUs for everyone at $200.

Basically 1030 performance with HBM, because "everyone needs a GPU".
 

Third try, forgetting the i740 and its doomed successor, the i752: https://en.wikipedia.org/wiki/Intel740


Oh, look at that, a DFP interface, one of the shortest lived display interfaces ever...
 
Three-player competition is always good for us consumers. I do not expect instant fireworks from Intel and/or Koduri himself. I would also love it if Nvidia went into the CPU business as well.

They are; it's called Tegra, and it's a high-power-draw ARM chip with pretty decent performance. It does not scale well into lower-power devices, though; you see it mostly in, for example, the Shield and also Nvidia's automotive chips, neither of which ever became a sales cannon.


And it's not just fusing some Cortexes together either, not all of it anyway.

 
Hardware communities have become so whiny... WAHHHH WE'RE NOT GETTING CARDS THAT BEAT AN RTX 2080 Ti for $200 WAHHHH. If it doesn't beat everything by a 10000000% for dirt cheap Intel wasted their time. What, it doesn't have 2080 Ti performance at only 10 watts? PIECE OF JUNK.... More low end and mid range cards? THROW THEM IN THE TRASH... /s

Having something different, maybe even a rock-paper-scissors type of variety seems to be worthless to most. Having more choice, also worthless it seems. And that's plain sad. All the grown men have turned back into 12 year old brats.


And no, I'm not saying buy regardless of x, y & z.
 
For graphics, TFLOPS is nearly useless without good raster hardware.

No it's not. TFLOPS is pretty much the only metric that can predict performance consistently; prove me wrong and show me any other hardware specification that is as reliable as TFLOPS. ROP count, on the other hand, is in fact almost useless as a metric for estimating performance.

The general rule is that, over time, as theoretical TFLOPS increase, so does actual performance. Pick any two random GPUs from the last decade and order their performance by the number of TFLOPS, and I bet 80% of the time the prediction would be correct.
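The "order pairs by TFLOPS" claim can be written out as a quick pairwise check. The (TFLOPS, performance-index) tuples below are made-up placeholder numbers for illustration only, not real benchmark data:

```python
from itertools import combinations

# (theoretical TFLOPS, measured performance index) -- placeholder values
gpus = {
    "A": (5.0, 100),
    "B": (7.5, 130),
    "C": (8.8, 125),
    "D": (10.0, 160),
}

def tflops_prediction_accuracy(gpus):
    """Fraction of GPU pairs where the higher-TFLOPS card is also the faster one."""
    correct = total = 0
    for (_, (tx, px)), (_, (ty, py)) in combinations(gpus.items(), 2):
        if tx == ty:
            continue  # tie on TFLOPS: no prediction either way
        total += 1
        if (tx > ty) == (px > py):
            correct += 1
    return correct / total

print(tflops_prediction_accuracy(gpus))  # 5 of 6 pairs correct with this toy data
```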
 
No it's not. TFLOPS is pretty much the only metric that can predict performance consistently; prove me wrong and show me any other hardware specification that is as reliable as TFLOPS. ROP count, on the other hand, is in fact almost useless as a metric for estimating performance.

The general rule is that, over time, as theoretical TFLOPS increase, so does actual performance. Pick any two random GPUs from the last decade and order their performance by the number of TFLOPS, and I bet 80% of the time the prediction would be correct.

AMD vs Nvidia cards consistently prove that TFLOPS does not relate to real-world or relative performance. Not even ballpark.

Number is higher, so perf is higher... sure. But that happens with a lot of numbers and, just like the rest, does not really translate between brands, GPU generations, etc.

It's a bit like comparing clock speeds to gauge performance: it only works within one generation of the same architecture. Ergo: pointless for most comparisons.
 
No, you are mixing things up. AMD vs Nvidia shows that the TFLOPS metric for one architecture cannot 100% model the performance characteristics of another architecture. It does not, however, contradict my statement that TFLOPS are in general a good tool to approximate or predict performance. Please tell me how many 5 TFLOPS GPUs you can find that consistently beat, let's say, an 8 TFLOPS GPU. Go ahead, find any such pair, any generation, any manufacturer. Then tell me how many you found that don't.



Important? Yes. A good indicator of performance? No, it's absolutely worthless.

GP104 : 64 ROPs, 8.8 TFLOPS
TU106 : 64 ROPs, 7.5 TFLOPS
TU104 : 64 ROPs, 10 TFLOPS

If we go by TFLOPS the ranking list should be :

TU104 > GP104
TU104 > TU106
GP104 > TU106

We know in fact it's more like :

TU104 > GP104
TU104 > TU106
GP104 < TU106

2 out of 3, a pretty damn good estimation considering I know absolutely nothing more than the theoretical FLOPS.
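The three-card comparison above can be written out as a quick check. The "actual" orderings are taken from the post itself (TU104 > GP104, TU104 > TU106, TU106 > GP104), not from fresh benchmarks:

```python
# Theoretical FP32 TFLOPS for the three chips, as quoted above
tflops = {"GP104": 8.8, "TU106": 7.5, "TU104": 10.0}

# Pairs (a, b) where a is actually the faster card, per the post
actual_faster = [("TU104", "GP104"), ("TU104", "TU106"), ("TU106", "GP104")]

correct = 0
for a, b in actual_faster:
    # TFLOPS predicts a beats b iff a has more theoretical TFLOPS
    if tflops[a] > tflops[b]:
        correct += 1

print(f"{correct} of {len(actual_faster)} pairs predicted")  # -> 2 of 3
```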

What does the ROP count tell us here? Absolutely nothing, nada, zero. The one Vega example does seem to be limited by its ROPs, and that's an assumption by the way, because neither you nor I knows that for sure; it's known that GCN has other limitations and peculiarities. But there must be a point where you need to realize TFLOPS are the dominant factor in all this, not the ROPs. It's a bizarre argument you've got here; at the end of the day you can be limited by anything. You could extend this notion and claim that memory bandwidth is the most important thing, because you can have as many execution ports and ROPs as you want, and if you don't have the memory bandwidth it's all for nothing.

Here is a much more sensible explanation of why Vega performs better at higher clocks: unlike other GPU architectures, GCN has scalar ports used for instructions that don't need multiple SIMD lanes, and we know GCN can be very inefficient with its 64-wide wavefronts. It would make sense that shaders using a lot of instructions that cannot be efficiently scheduled in a wavefront would run a lot quicker if the clocks are higher.

Fact: ROP counts don't change that much generation to generation, while shader counts do, a lot.

I've read dozens of papers on graphics and compute, and not once were ROP counts quoted as indicators of performance; people always use theoretical or measured GFLOPS. It really boggles my mind why you all insist on this; it's simply not the case.



That's a funny statement, because the only bits left in a GPU that are DSP-like are in fact things like the ROPs or TMUs.

I think we are saying mostly the same thing: it works within a samey-architecture comparison, but it does not work outside of that; once you start comparing across architectures, all bets are off.

Either I am still missing the point or we just attribute different value to this number. I really don't ever use this metric for anything worthwhile, ever...

I mean, comparing Pascal and Turing, or any other Nvidia arch since Kepler, isn't the best example to make your point. Most of the basics haven't changed much.
 
Why has this turned into an AMD vs Nvidia thread? The fact is, this is the thread for Intel's next discrete GPU. The only thing I wish for is competitive price/performance, and Intel really needs to step up in the driver department. With Gen11 supporting integer scaling, I'm pretty sure they will carry this over to the discrete cards as well, and with that we will (hopefully) see AMD and Nvidia supporting it too.
 
What's with the negative responses in the comments section? The mobile version of this, with 1/8 the core count, is almost as fast as the R7 GPU in the 2400G. With HBM memory and higher clocks, the dGPU version could pull 75% to 100% of the performance of the 5700/2060 Super.
 
Cleaned up a bunch of off-topic nonsense that does not directly relate to this news thread; any more of it and free holidays will be on offer... carry on.
 