Tuesday, May 8th 2018

Intel Could Unveil its Graphics Card at 2019 CES

It looks like Intel is designing its discrete graphics processor at a breakneck pace, with a team put together by Raja Koduri. Development is moving so fast that the company could have a working product to show the world by the 2019 International CES, held in early January next year. Intel's development of a graphics processor is likely motivated by the company's survival instinct not to fall behind NVIDIA and AMD in building massively parallel architectures that cash in on two simultaneous tech booms: AI and blockchain computing.

A welcome side effect for gamers is the restoration of competition. NVIDIA has been ahead of AMD in PC graphics processor performance and efficiency since 2014, with the latter only playing catch-up in the PC gaming space. AMD's architectures have proven efficient in other areas, such as blockchain computing. NVIDIA, on the other hand, has invested heavily in AI, with specialized components on its chips called "tensor cores," which accelerate the building and training of neural networks.
Source: TweakTown

38 Comments on Intel Could Unveil its Graphics Card at 2019 CES

#1
dj-electric
Seems very, very soon. I would guess that they might show more of a proof of concept, with possibly an H2-2019 launch?
#2
W1zzard
Just wondering, isn't anybody considering that Intel might just fail again at this? Or that their GPU isn't targeted at gamers at all? (which needs a ton of driver support)
#3
dj-electric
W1zzard said:
Just wondering, isn't anybody considering that Intel might just fail again at this? Or that their GPU isn't targeted at gamers at all? (which needs a ton of driver support)
Their actions suggest they are very ambitious about this one. I would believe that Intel has the capability of making decent and competitive products in this segment. They definitely have the money, manpower and tools.

It's just that I don't think they can do it this fast. I would believe a CES 2020 paper launch, not a 2019 one.
And sure, Intel knows how to lose battles. They do almost every other month in some segments.

If a product does see the light of day in 7 months, it might have an awkward start, but this trigger needs to happen for the next ones to be a lot better. Hopefully, this is a race they can even attend.
#4
Ferrum Master
IMHO they will fail...

But as always, they will salvage some parts of the development into their CPU architecture, just like the ring architecture, for example. So it won't be a complete loss if it fails.

I am afraid they will go the emulation route, mending the handicap by brute force...
#5
Caring1
I can't see why they couldn't pull one out of the hat; they've been making co-processors for data centres etc. for years. Surely adding a display output to them can be done, as NVIDIA has on some of its top-tier cards.
#6
Xzibit
Well, if you read the source:

TweakTown
My sources are telling me to expect something late this year with all attention to be placed on Intel at CES 2019 in January, where Intel could unveil their new GPU.
"Something" could mean almost anything: CPU, chipset, mobile, automotive, a Larrabee 2, a Knights Hill follow-up. That quote is the foundation of the entire article's speculation.

Also, you have to wonder on what process. 10 nm isn't working out, so is it going to be their 14 nm while others are using 7 nm?
#7
sergionography
W1zzard said:
Just wondering, isn't anybody considering that Intel might just fail again at this? Or that their GPU isn't targeted at gamers at all? (which needs a ton of driver support)
They've been making GPUs for a while now, so they have probably learned enough about the business model. As for failing, technically you can't completely fail at making a discrete GPU, because GPUs are parallel and shortcomings can be compensated for with more cores, as long as it's within reason. Given that they already have somewhat decent GPU IP, it's just a matter of scaling their design up. Yes, Intel graphics are behind AMD and NVIDIA, but that's also because they don't make their chips big enough. An Intel i7-8700K with 24 EUs has a die size of 155 mm², with about a third of the chip devoted to graphics. Another aspect is that Intel has proven to be efficient with power usage, since these EUs are designed with mobility in mind. The challenge now is scaling from about 50 mm² worth of graphics to 250 mm² or more.
AMD requires 484 mm² worth of Vega cores to compete with ~300 mm² Pascal chips, and it isn't exactly failing, except at the high end where it can't scale any further due to cost. Intel could do the same thing initially.
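The scaling argument above can be put into rough numbers. This is a back-of-the-envelope sketch using the approximate figures from the comment (155 mm² die, ~1/3 graphics, 24 EUs); real designs lose area to memory controllers and other uncore, so treat the result as an optimistic upper bound:

```python
# Naive linear EU scaling from Intel's integrated GPU to a hypothetical
# discrete die, using the approximate figures quoted in the comment above.
die_area = 155.0            # mm^2, i7-8700K total die (approximate)
gpu_fraction = 1.0 / 3.0    # roughly a third of the die is graphics
eus_igpu = 24               # execution units in the integrated GPU

# ~51.7 mm^2 of graphics spread over 24 EUs -> ~2.15 mm^2 per EU
area_per_eu = (die_area * gpu_fraction) / eus_igpu

target_die = 250.0          # mm^2, hypothetical discrete GPU budget
eus_discrete = target_die / area_per_eu

print(round(eus_discrete))  # -> 116 EUs under naive linear scaling
```

Roughly 116 EUs in 250 mm², versus 24 today, before accounting for the memory controllers, display logic, and other uncore a standalone card needs.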
#8
Imsochobo
W1zzard said:
Just wondering, isn't anybody considering that Intel might just fail again at this? Or that their GPU isn't targeted at gamers at all? (which needs a ton of driver support)
I'm pretty sure their Linux support will be good.
They can jump right into the server market with OpenCL, with an alliance with AMD on the software stack.
#9
sergionography
Imsochobo said:
I'm pretty sure their Linux support will be good.
They can jump right into the server market with OpenCL, with an alliance with AMD on the software stack.
That would actually be good for both of them. Someone definitely needs to put CUDA in its place.
#10
dorsetknob
Speculation (as at the moment that's all we can do):
An Intel discrete graphics card combined with the IGP could kick ass (running Intel's version of AMD's Hybrid CrossFire setup).
#11
dj-electric
dorsetknob said:
Speculation (as at the moment that's all we can do):
An Intel discrete graphics card combined with the IGP could kick ass (running Intel's version of AMD's Hybrid CrossFire setup).
I would much like something like what exists in laptops: a dGPU with its own fast memory and all, working solo on 3D rendering, and the iGPU taking over when the system is at complete idle, using the dGPU's outputs on the same system.

That is, on mainstream platforms. Of course, us HEDT users would have to use the dGPU even at idle :)
#12
Prima.Vera
Actually, with the resources they have, and with ex-ATi staff already under their roof, I have a feeling we are going to see a surprise from Intel. They have their own fabs, all resources are in-house; I think they are going for a killer product, even if initially it will be for the professional market only.
#13
LightningJR
I don't consider this fast. As has been said, Intel attempted this before, so they know what's involved. Not to mention they have been making iGPUs forever; there's not much of a leap from that to a dedicated GPU. With all their different products, I would assume they have nearly everything they need to create a dedicated GPU in a reasonable time frame; I mean, they even have their own fabs.

As W1zzard stated, this might not be for the consumer market, but I can't see it not being a possibility. Their driver does need work, but the foundation is there; the same goes for the hardware side of the equation. As for whether they will fail, well, that's just speculation; any number of infinite possibilities can happen. I just hope for more competition in this space; I am tired of the same two after all this time. Good luck buying out this competition, NVIDIA; this is no 3dfx. :p
#14
LemmingOverlord
W1zzard said:
Just wondering, isn't anybody considering that Intel might just fail again at this? Or that their GPU isn't targeted at gamers at all? (which needs a ton of driver support)
The main question I want answered is: what type of GPU is Intel developing (and why are we still calling certain chips Graphics Processing Units)?

Intel's earlier failures with graphics development mostly had to do with the resources required to sustain long-term development cycles, and with "political will" at Intel, because the "old" Intel was a CPU company. The current Intel has diversified its portfolio and can do whatever it wants (vide the asinine McAfee deal).

It all comes down to "how long a deadline and how many resources did they give Raja?" While Raja's experience has been in desktop GPUs, I wouldn't put it past Intel to repurpose a poor design just for the sake of saving the product, and end up with a crappy AI core.

I am not very impressed by Raja, tbh. His experience at AMD (even though you can't peg him entirely for the Vega design) suggests he oversells his products.
#15
DeathtoGnomes
dorsetknob said:
Speculation ( as at the moment that's all we can do)
Intel Discrete Graphics Card combined with the IGP Could kick ass ( Running Intels version of AMD hybrid Crossfire Setup).
I would speculate that Intel is ditching the IGP in favor of discrete.
#16
Fluffmeister
Prima.Vera said:
Actually, with the resources they have, and with the ATi staff already under their roof, I have a feeling we are going to see a surprise from Intel. They have there own fabs, all resources are in house, I think they are going for a killer product, even if initially will be for professional area only.
Indeedy, it will be great to have a third player enter the market and provide more competition.

More freedom and choice is great.
#17
iO
Maybe a small chip to replace Vega M in Kaby Lake-G. Or an AI-specific accelerator or some such.
But not a full-blown big GPU, out of nowhere and competitive with NVIDIA...
#18
Vya Domus
W1zzard said:
Just wondering, isn't anybody considering that Intel might just fail again at this?
If this really is a consumer product, then I'm pretty much convinced it will fail. But as you said, I am much more inclined to believe this is meant for datacenters; Intel really doesn't like how NVIDIA is eating away at server-space sales.

sergionography said:
They've been making gpus for a while now so they probably learned enough about the business model of gpus.
They really didn't learn much; some of their decisions are absolutely baffling, such as this: http://www.joshbarczak.com/blog/?p=667

Why they would put so much effort into such efficient hardware functionality, which is avoided like the plague by the graphics industry, is beyond comprehension.

sergionography said:
Yes intel graphics are behind amd and nvidia, but thats also because they dont make their chips big enough.
Even something of similar size from ARM or Qualcomm would wipe the floor with these things in pretty much every relevant category (including power efficiency, though to be fair it would beat AMD/NVIDIA as well). Their architecture simply performs poorly, and it's optimized for things no one needs, as pointed out above. I don't know what the cause of their failure to implement competitive GPUs in this segment is; it's either a lack of money allocated to it, or a lack of talented people. I'm tempted to believe it's the latter, and the addition of Raja won't be enough as far as I am concerned.
#19
Final_Fighter
This does not surprise me. Intel already has graphics cores on their CPUs; the problem is that they were limited in how powerful they could make them, because the graphics had to run next to a CPU and pull all their power from the socket. All they really have to do is add more shaders and GDDR5X or GDDR6. It is not impossible for them to add more execution units to their GT3e and triple the pipelines. This, with faster memory and a greatly increased power threshold, is well within reason. The added power threshold will also allow them to increase core clocks. Power-wise it should be around 120-135 W, so it might not be the most power-saving, but it could still be a good entry to mid-range card.

Edit: not to mention they will have refined the process used on their current integrated GPUs for the upcoming discrete ones. Plus, stuff like this just wasn't thought up yesterday; they have been planning it for a few years and have already worked out some of the issues. I won't be surprised if this is in the RX 560 - GTX 1060 category of cards. It's also not surprising that they are waiting for GDDR6 to come into full production by the time they show this.
#20
Blueberries
The hiring of Raja and the product development were likely planned at least a year in advance, with the intention of releasing a product by CES 2019... large tech companies like Intel aren't giving away all of their secrets in press releases.

I'm less concerned with the performance or adoption of an Intel GPU versus the consumer/professional options from their opponents, but rather with how a MAJOR change in innovation (from onboard to discrete) will lead to localized optimizations:

Xeon Phi - resolve the memory/thread wall (MIC)
3D XPoint - resolve RAM limits / storage latency (Micron partnership)
Intel ??? GPU - resolve parallel-bus latency through firmware or driver optimizations

If the first iteration of Intel's discrete GPU doesn't scare its competition... the second or third will.
#21
theoneandonlymrk
I can see a low-end consumer card with enhanced performance over their iGPU, but not a gaming-competitive one. As I have said before, I expect more of an application accelerator than a mere GPU, to push edge use cases and possibly serve as an AI accelerator too. It certainly would be strange, IMHO, if Intel just made a straight discrete GPU. To me, AMD and NVIDIA are way too far ahead to catch in graphics, but not in other areas, i.e. FPGAs, AI, database acceleration, etc. Intel would have a better chance if they could leverage their own special sauce into the mix and have it become integral, like GPUs themselves; no one else can even compete on that level at this point. And that USP could get them selling.

No one mentions the totally rubbish rendering of 3D objects on Intel? I have seen three-way Intel/AMD/NVIDIA film and game rendering comparisons that shine a light on the rather poor game reproduction on Intel hardware; they do films fine, to be fair.
#22
geon2k2
dj-electric said:

Its just that i don't think they can do it this fast. I would believe a CES2020 paper launch, not a 2019 one.
Why would you think this?
They already have a working GPU, working drivers; all the infrastructure and knowledge is in place, and don't forget they have been making GPUs for 20 years.
It's just scaling everything to a whole new level.
From a programming perspective, going from 1 core to 2 cores is hard; however, once your software works with 20, switching to 2,000 is just a parameter change. (They have 20-40 EUs in their current integrated GPUs.)
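The "just a parameter change" claim can be illustrated with a minimal data-parallel sketch (a hypothetical Python example, not how GPU drivers actually work): once the work is expressed as independent chunks, the worker count is a single knob. Note that real workloads hit memory bandwidth and synchronization limits long before the worker count becomes a free parameter.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum_of_squares(data, num_workers):
    """Same code for 2, 20, or 2000 workers; only the parameter changes."""
    data = list(data)
    chunk = max(1, len(data) // num_workers)
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        # Each chunk is independent, so scaling up just means more chunks.
        partials = pool.map(lambda c: sum(x * x for x in c), chunks)
    return sum(partials)

print(parallel_sum_of_squares(range(100), 4))  # -> 328350, for any worker count
```

The result is identical for any `num_workers`; only throughput changes, and only up to whatever the hardware and memory system can actually feed.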
#23
Vya Domus
geon2k2 said:

From a programming perspective, going from 1 core to 2 cores is hard; however, once your software works with 20, switching to 2,000 is just a parameter change. (They have 20-40 EUs in their current integrated GPUs.)
It doesn't work like that at all, whether you are referring to GPUs or CPUs.
#24
Prima.Vera
One stupid question, and apologies in advance for it... :D
Aren't shader cores NVIDIA- and AMD-proprietary only?? If so, how can Intel develop a new GPU without them?? Sorry, just asking....
Posted on Reply
#25
Vya Domus
Prima.Vera said:
One stupid question, and apologies in advance for it... :D
Aren't shader cores NVIDIA- and AMD-proprietary only?? If so, how can Intel develop a new GPU without them?? Sorry, just asking....
There is no such thing as a proprietary shader core; it's a loosely defined term anyway.

Sure, they patent their GPUs in their entirety, and their ISAs, but these change very often and it's not worth the time to copy entire architectures. Not as a long-term strategy, anyway.