
AMD Files Patent for Chiplet Machine Learning Accelerator to be Paired With GPU, Cache Chiplets

Years ago Intel created the first chiplets, so why didn't they patent the idea back then? Maybe they didn't because of that previous do-nothing CEO they had? (I'm referring to the CEO who was getting his noodle wet with an employee.)
What chiplets are you referring to?

If you mean the old dual-core designs with two chips, those were two full-blown single-core dies on one package. Not quite the same as a chiplet.
 
So a GPU with a built-in accelerator for farming bots? :respect:
 
With the purchase of Xilinx, I suspect AMD will start reducing the number of hardware-accelerated fixed functions like encode/decode and machine learning, and replace them with an FPGA section. Perfect for a chiplet implementation.
An FPGA only makes sense when the usage scenario is limited or niche; an ASIC gets designed once a fixed function is popular enough to justify the R&D and the silicon die cost.

Encode and decode are widely used; the only issue is how much die space they take from the GPU/APU die. Once packaging and MCM become the trend for GPUs, a dedicated media chiplet might be useful, but it all depends on the cost of separating and packaging it, the power consumption, and the extra area on the PCB. If the die-area saving outweighs the cost of moving it onto another chiplet, then they might do it.
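
Just to put rough numbers on that trade-off, here's a back-of-the-envelope sketch in Python. Every figure in it (wafer cost, defect density, die areas, packaging overhead) is made up for illustration, not a real AMD number; it only shows how a smaller die yields better while the packaging adds its own cost.

import math

WAFER_COST = 12000.0      # $ per wafer (assumed, illustrative only)
WAFER_AREA = 70000.0      # usable mm^2 per 300 mm wafer (approx.)
DEFECT_DENSITY = 0.001    # defects per mm^2 (assumed)

def cost_per_good_die(area_mm2):
    """Wafer cost spread over yielded dies, simple Poisson yield model."""
    dies_per_wafer = WAFER_AREA / area_mm2
    yield_rate = math.exp(-DEFECT_DENSITY * area_mm2)
    return WAFER_COST / (dies_per_wafer * yield_rate)

# Option A: one monolithic GPU die that keeps the media block on board.
monolithic = cost_per_good_die(520.0)

# Option B: smaller GPU die + tiny media chiplet + packaging overhead.
PACKAGING_OVERHEAD = 15.0  # $ per package for advanced packaging (assumed)
chiplet = cost_per_good_die(480.0) + cost_per_good_die(40.0) + PACKAGING_OVERHEAD

print(f"monolithic: ${monolithic:.0f}  vs  chiplet: ${chiplet:.0f}")

With these made-up numbers the chiplet route isn't automatically cheaper, which is exactly the "it all depends" part.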

FPGAs are normally expensive, so for such fixed functions it's better to keep them as they are now; FPGAs are also not as area- or power-efficient as a fixed-function ASIC.

The FPGA should be optional in the first implementation, for CDNA; I don't think it will be added to RDNA until later generations, since the FPGA is mainly for professional usage scenarios. Later it will become mainstream and cheap enough for AMD to include it on RDNA as well. Then it could be used by some apps to accelerate specific functions: for example, when a new codec appears, encoding software could reprogram the FPGA to accelerate it without waiting for an ASIC to be developed, which can take a few years. An ASIC also requires you to buy a new GPU, while the FPGA can add the function with no need to buy another card.
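
To make that codec point concrete, here's a toy Python sketch of how an encoding app might pick between a fixed-function block and a reprogrammable region. None of the class or function names are a real AMD, Xilinx, or driver API; they're invented purely to illustrate the fallback idea.

# Conceptual sketch only: every class and function name below is invented for
# illustration; this is not a real AMD, Xilinx, or driver API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HypotheticalGPU:
    asic_codecs: set = field(default_factory=lambda: {"h264", "hevc"})  # baked-in encoders
    has_fpga_region: bool = True
    loaded_bitstream: Optional[str] = None

    def load_fpga_bitstream(self, codec):
        # Pretend to reconfigure the FPGA region with a codec-specific bitstream.
        self.loaded_bitstream = codec

def pick_encoder(gpu, codec):
    if codec in gpu.asic_codecs:
        return "fixed-function ASIC encoder"        # fastest, most power-efficient
    if gpu.has_fpga_region:
        gpu.load_fpga_bitstream(codec)              # new codec, no new card needed
        return f"FPGA encoder ({codec} bitstream)"
    return "software fallback on shaders/CPU"       # slowest option

gpu = HypotheticalGPU()
print(pick_encoder(gpu, "hevc"))        # existing ASIC path
print(pick_encoder(gpu, "new_codec"))   # hypothetical future codec -> FPGA path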
 