
Intel Driver Update Confirms VPU Integration in Meteor Lake for AI Workload Acceleration

Raevenlord

News Editor
Intel yesterday confirmed its plans to extend its Meteor Lake architecture toward shores other than general processing. According to Phoronix, Intel posted a new driver that lays the foundations for VPU (Versatile Processing Unit) support under Linux. The idea is that Intel will integrate this VPU within its 14th Gen Meteor Lake architecture, adding AI inferencing acceleration capabilities to its silicon - a sure-fire way to achieve enormous gains in AI processing, especially in performance per watt. Interestingly, Intel is somewhat following in Apple's footsteps here, as that company has included dedicated AI processing cores in its desktop/laptop Apple Silicon processors since the M1 days.

Intel's VPU architecture will surely be derived from Movidius' designs, which Intel acquired back in 2016 for a cool $400 million. It's unclear which parts of the Movidius/Intel IP will make it into the VPU paired with Meteor Lake: a full-blown, SoC (System on Chip)-like VPU design such as the Myriad X, or select bits of the architecture (plus the equivalent of five additional years of research and development) sprinkled on top of the upcoming design. We do know the VPU itself will include a memory management unit, a RISC-based microcontroller, a Neural Compute System (what exactly this compute system and its slices entail is the mysterious part) and network-on-chip capabilities.
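On the software side, the likeliest userspace path for this hardware is Intel's OpenVINO toolkit, which already exposes Movidius parts as dedicated inference devices. Below is a minimal Python sketch of how that could look, assuming the Meteor Lake VPU simply shows up as another OpenVINO device - the device names other than "CPU" and the model file are assumptions on our part, not anything Intel has confirmed for Meteor Lake:

# Sketch only: enumerate OpenVINO devices and compile a model for a VPU-class
# accelerator if one is exposed. "MYRIAD" is the name current Movidius parts use;
# "VPU"/"NPU" are guesses at what the Meteor Lake block might be called.
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)   # e.g. ['CPU', 'GPU', 'MYRIAD']

model = core.read_model("model.xml")                   # hypothetical IR model file

vpu_like = [d for d in core.available_devices
            if d.startswith(("MYRIAD", "VPU", "NPU"))]
device = vpu_like[0] if vpu_like else "CPU"            # fall back to the CPU plugin
compiled_model = core.compile_model(model, device)
print("Model compiled for", device)

The point being: if Intel wires the VPU into its existing OpenVINO device-plugin model, existing inference code should need little more than a device-name change to take advantage of it.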

This move by Intel does show that while specialization in hardware design will continue (and is honestly inescapable), one of the ways for Intel to remain competitive is to simply integrate as many co-processors/accelerators in and around its CPUs as possible. There will be losses for some (customers will still have to pay for the additional silicon in each chip whether they plan to use its full capabilities or not), but it does represent one likely path for Intel to increase its competitiveness. Naturally, it also opens the door for product segmentation (whether through the inclusion of the VPU itself, or by allowing it to scale across offerings, much like the company already does with CPU core counts).

View at TechPowerUp Main Site | Source
 
Wasn't it "Vector Processing Unit"?
 
Big deal; AMD has already said many different accelerators are being added to future CPUs. I'm not sure about Zen 4, but Zen 5 will have AI accelerators among others, and Zen 6 will add even more, just like Arrow Lake and so on. With so much AI-enhanced software now for photo and video editing, it's a welcome move.
 
Yeah, it's not really a new addition; they already have AI accelerators (under the name Intel DL Boost) in their CPUs. Sure, this 'VPU' might be beefier and more capable, but it doesn't change the past and present.
 
Yes! Also, they have two Atom cores for the VPU part, separate from the regular cores/Atoms - so we can expect some beefier results :)
 