Microsoft, Intel, and AMD are attempting to jumpstart demand in the PC industry under the aegis of the AI PC: devices with native acceleration for AI workloads. Both Intel and AMD ship mobile processors with on-silicon NPUs (neural processing units), designed to accelerate the first wave of AI-enhanced client experiences on Windows 11 23H2. Microsoft's spearhead for democratizing AI has been Copilot, as a licensee of OpenAI's GPT-4, GPT-4 Turbo, DALL·E, and other generative AI tools from the OpenAI stable. Copilot is currently Microsoft's most heavily backed application, with much of the company's capital and its best minds mobilized to make it the most popular AI assistant. Microsoft has even pushed an AI PC designation on PC OEMs that requires a dedicated Copilot key akin to the Start key (we'll see how anti-competition regulators deal with that).
The problem with Microsoft's tango with Intel and AMD to push AI PCs is that Copilot doesn't really use an NPU, not even at the edge: you input a query or a prompt, and Copilot hands it over to a cloud-based AI service. This is about to change, with Microsoft announcing that Copilot will be able to run locally on AI PCs. Microsoft has identified several kinds of Copilot use-cases that an NPU can handle on-device, which should speed up response times to Copilot queries, but this requires the NPU to deliver at least 40 TOPS of performance. That is a problem for the current crop of processors with NPUs: Intel's Core Ultra "Meteor Lake" has an AI Boost NPU with 10 TOPS on tap, while the Ryzen 8040 "Hawk Point" is only slightly faster, with a 16 TOPS Ryzen AI NPU. AMD has already revealed that the XDNA 2-based 2nd Generation Ryzen AI NPU in its upcoming "Strix Point" processors will deliver over 40 TOPS, and it stands to reason that the NPUs in Intel's "Arrow Lake" or "Lunar Lake" processors will be comparable in performance, which should enable on-device Copilot.
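The TOPS gap described above can be sketched as a trivial comparison. This is purely illustrative: the per-chip TOPS figures are the ones cited in this post, and the 40 TOPS floor is Microsoft's reported requirement for on-device Copilot, not a published spec.

```python
# Reported floor for running Copilot locally (per Microsoft, as cited above).
COPILOT_LOCAL_TOPS = 40

# NPU throughput figures cited in the post (INT8 TOPS, vendor-rated).
npus = {
    'Intel Core Ultra "Meteor Lake" (AI Boost)': 10,
    'AMD Ryzen 8040 "Hawk Point" (Ryzen AI)': 16,
    'AMD "Strix Point" (XDNA 2, announced)': 40,  # AMD says "over 40 TOPS"
}

def meets_copilot_floor(tops: int, floor: int = COPILOT_LOCAL_TOPS) -> bool:
    """Return True if an NPU's rated TOPS meets the reported floor."""
    return tops >= floor

for name, tops in npus.items():
    verdict = "meets" if meets_copilot_floor(tops) else "falls short of"
    print(f"{name}: {tops} TOPS {verdict} the {COPILOT_LOCAL_TOPS} TOPS floor")
```

By this yardstick, both shipping NPU generations fall short, which is why the local-Copilot capability is tied to the next wave of silicon.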
View at TechPowerUp Main Site | Source