
AMD "Vega" Architecture Gets No More ROCm Updates After Release 5.6

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,564 (7.66/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
AMD's "Vega" graphics architecture, powering graphics cards such as the Radeon VII and Radeon PRO VII, sees its maintenance discontinued in the ROCm GPU programming software stack. The release notes of ROCm 5.6 state that the AMD Instinct MI50 accelerator, Radeon VII client graphics card, and Radeon PRO VII pro-vis graphics card, collectively referred to as "gfx906," will reach EOM (end of maintenance) starting Q3-2023, which aligns with the release of ROCm 5.7. Developer "EwoutH" on GitHub, who discovered this, remarks that gfx906 is barely 5 years old, with the Radeon PRO VII and Instinct MI50 accelerator still being sold in the market. The most recent AMD product powered by "Vega" is the "Cezanne" desktop processor, which uses an iGPU based on the architecture and was released in Q2-2021.
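For readers wondering whether their own hardware is affected, the ISA name ROCm tools report can be checked against the retired "gfx906" target. A minimal sketch of that check, assuming you capture `rocminfo` output yourself (the sample output below is illustrative, not from real hardware):

```python
import re

# Illustrative excerpt of `rocminfo`-style output; on a real system you
# would run the tool and capture its stdout instead.
SAMPLE_ROCMINFO = """
  Name:                    gfx906
  Marketing Name:          AMD Radeon VII
"""

# gfx target reaching end-of-maintenance with ROCm 5.6, per the article.
EOM_TARGETS = {"gfx906"}

def detected_targets(rocminfo_output: str) -> set:
    """Extract gfx ISA names from rocminfo-style output."""
    return set(re.findall(r"\bgfx\d+\b", rocminfo_output))

def affected(rocminfo_output: str) -> set:
    """Return the subset of detected targets that are losing maintenance."""
    return detected_targets(rocminfo_output) & EOM_TARGETS

print(affected(SAMPLE_ROCMINFO))  # → {'gfx906'}
```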



View at TechPowerUp Main Site | Source
 

tabascosauz

Moderator
Supporter
Staff member
Joined
Jun 24, 2015
Messages
7,790 (2.39/day)
Location
Western Canada
System Name ab┃ob
Processor 7800X3D┃5800X3D
Motherboard B650E PG-ITX┃X570 Impact
Cooling NH-U12A + T30┃AXP120-x67
Memory 64GB 6400CL32┃32GB 3600CL14
Video Card(s) RTX 4070 Ti Eagle┃RTX A2000
Storage 8TB of SSDs┃1TB SN550
Case Caselabs S3┃Lazer3D HT5
@btarunr Renoir/Lucienne and Cezanne/Barcelo/whatever you wanna call it still continue on the mobile side, as new products, under the Ryzen 7020 and 7030 names, released last year in 2022. Lucienne is decidedly trash heap, but Barcelo is still being quietly put in new, reasonably mid-high end notebooks. Since it's all listed by AMD as one big family under Ryzen 7000, I don't see Barcelo going away until Rembrandt-R (7035) and Phoenix (7040) do too. Which will be... pretty interesting to see in the context of GCN5.1 being dropped.

Vega had a good run, I honestly don't see a problem with this. It's just that it should have died off long ago, instead of being continuously shoehorned into new products in the past 3 years.
 
Last edited:
Joined
Sep 17, 2014
Messages
21,214 (5.98/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Vega had a good run, I honestly don't see a problem with this. It's just that it should have died off long ago, instead of being continuously shoehorned into new products in the past 3 years.
It's funny, I remember predicting this exact problem with AMD's one-off (well, two-off, counting Fury X, which had an even shorter and qualitatively much worse support cycle) HBM adventures on gaming GPUs. AMD, with its historically problematic drivers, was going to push optimizations for two different memory subsystems across its GPU families? Of course not.

And here we are. That's also why I think they are much more focused and better positioned strategically today; the whole business is chiplet-focused now, moving toward unification rather than trying weird shit left and right. It's also why I don't mind them not pushing the RT button too hard. Less might be more.
 
Joined
Feb 11, 2009
Messages
5,422 (0.97/day)
System Name Cyberline
Processor Intel Core i7 2600k -> 12600k
Motherboard Asus P8P67 LE Rev 3.0 -> Gigabyte Z690 Auros Elite DDR4
Cooling Tuniq Tower 120 -> Custom Watercoolingloop
Memory Corsair (4x2) 8gb 1600mhz -> Crucial (8x2) 16gb 3600mhz
Video Card(s) AMD RX480 -> RX7800XT
Storage Samsung 750 Evo 250gb SSD + WD 1tb x 2 + WD 2tb -> 2tb MVMe SSD
Display(s) Philips 32inch LPF5605H (television) -> Dell S3220DGF
Case antec 600 -> Thermaltake Tenor HTCP case
Audio Device(s) Focusrite 2i4 (USB)
Power Supply Seasonic 620watt 80+ Platinum
Mouse Elecom EX-G
Keyboard Rapoo V700
Software Windows 10 Pro 64bit
What does this mean? idk what ROCm even is.
 
Joined
Sep 6, 2013
Messages
3,054 (0.78/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 5500 / Ryzen 5 4600G / FX 6300 (12 years latter got to see how bad Bulldozer is)
Motherboard MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2) / Gigabyte GA-990XA-UD3
Cooling Νoctua U12S / Segotep T4 / Snowman M-T6
Memory 16GB G.Skill RIPJAWS 3600 / 16GB G.Skill Aegis 3200 / 16GB Kingston 2400MHz (DDR3)
Video Card(s) ASRock RX 6600 + GT 710 (PhysX)/ Vega 7 integrated / Radeon RX 580
Storage NVMes, NVMes everywhere / NVMes, more NVMes / Various storage, SATA SSD mostly
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) ---- 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / Sharkoon Rebel 9 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Software Windows 10 / Windows 10 / Windows 7
This is a bad decision from AMD. Maybe they have to, because it is GCN, but they should still try to keep support for longer. Why? Because Nvidia does, and this is an area where matching Nvidia doesn't require stronger hardware or better software. It's just a business decision. Let's not forget that the main argument in favor of AMD for years was that "Fine Wine". This is the complete opposite.
 
Joined
Aug 13, 2010
Messages
5,399 (1.07/day)
AMD simply can't get their CUDA competitor off the ground. Still very much locked out of the ML industry. Not happy to see that
 
Joined
Sep 6, 2013
Messages
3,054 (0.78/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 5500 / Ryzen 5 4600G / FX 6300 (12 years latter got to see how bad Bulldozer is)
Motherboard MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2) / Gigabyte GA-990XA-UD3
Cooling Νoctua U12S / Segotep T4 / Snowman M-T6
Memory 16GB G.Skill RIPJAWS 3600 / 16GB G.Skill Aegis 3200 / 16GB Kingston 2400MHz (DDR3)
Video Card(s) ASRock RX 6600 + GT 710 (PhysX)/ Vega 7 integrated / Radeon RX 580
Storage NVMes, NVMes everywhere / NVMes, more NVMes / Various storage, SATA SSD mostly
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) ---- 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / Sharkoon Rebel 9 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Software Windows 10 / Windows 10 / Windows 7
AMD simply can't get their CUDA competitor off the ground. Still very much locked out of the ML industry. Not happy to see that
I think they can. They seem to have fixed some things these last months.
UPDATE 1-AMD's AI chips could match Nvidia's offerings, software firm says
So maybe they are fixing their software problems. Dropping the older architectures is probably to make their job easier. But they do send the wrong message to professionals. Long-term support should also be a priority.
 
Joined
Aug 13, 2010
Messages
5,399 (1.07/day)
I think they can. They seem to have fixed some things these last months.
UPDATE 1-AMD's AI chips could match Nvidia's offerings, software firm says
So maybe they are fixing their software problems. Dropping the older architectures is probably to make their job easier. But they do send the wrong message to professionals. Long-term support should also be a priority.
Do you know how astronomically far ahead anything with Tensor cores is of even RDNA3 in ML applications?
I think it's a bit of a system that feeds itself: if the ROCm ecosystem were more popular, AMD would have been incentivized to make their GPUs train faster. If you want to train a model, you are better off with an RTX 3070 Ti than an RX 7900 XTX at this point.

The only thing I believe AMD's RDNA2-3 cards are decent at is inference.
 
Joined
Sep 6, 2013
Messages
3,054 (0.78/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 5500 / Ryzen 5 4600G / FX 6300 (12 years latter got to see how bad Bulldozer is)
Motherboard MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2) / Gigabyte GA-990XA-UD3
Cooling Νoctua U12S / Segotep T4 / Snowman M-T6
Memory 16GB G.Skill RIPJAWS 3600 / 16GB G.Skill Aegis 3200 / 16GB Kingston 2400MHz (DDR3)
Video Card(s) ASRock RX 6600 + GT 710 (PhysX)/ Vega 7 integrated / Radeon RX 580
Storage NVMes, NVMes everywhere / NVMes, more NVMes / Various storage, SATA SSD mostly
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) ---- 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / Sharkoon Rebel 9 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Software Windows 10 / Windows 10 / Windows 7
Do you know how astronomically far ahead anything with Tensor cores is of even RDNA3 in ML applications?
I think it's a bit of a system that feeds itself: if the ROCm ecosystem were more popular, AMD would have been incentivized to make their GPUs train faster. If you want to train a model, you are better off with an RTX 3070 Ti than an RX 7900 XTX at this point.

The only thing I believe AMD's RDNA2-3 cards are decent at is inference.

No, please explain it to me. Talking about AI and ML in servers and then pointing to gaming cards like the 3070 Ti and the RX 7900 XT/X looks a bit odd. AMD is not using RDNA3 in Instinct cards.

In any case, AMD GPUs do find their way into supercomputers meant to be used also for AI and ML. That probably means something. Also, Nvidia is having so much difficulty fulfilling orders that I believe I read about a 6-month waiting list. If AMD's options can be at 80% of the performance for 80% of the price, I would expect many to turn to AMD solutions instead of waiting 6 months. And there is a paragraph in the above article that does seem to imply that something changed about AMD's software:
"For most (machine learning) chip companies out there, the software is the Achilles heel of it," Tang said, adding that AMD had not paid MosaicML to conduct its research. "Where AMD has done really well is on the software side."
 

bug

Joined
May 22, 2015
Messages
13,320 (4.04/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
It's funny, I remember predicting this exact problem with AMD's one-off (well, two-off, counting Fury X, which had an even shorter and qualitatively much worse support cycle) HBM adventures on gaming GPUs. AMD, with its historically problematic drivers, was going to push optimizations for two different memory subsystems across its GPU families? Of course not.

And here we are. That's also why I think they are much more focused and better positioned strategically today; the whole business is chiplet-focused now, moving toward unification rather than trying weird shit left and right. It's also why I don't mind them not pushing the RT button too hard. Less might be more.
It's a little debatable how "focused" they are, considering they still put Vega in current products. I wish they'd drop this habit once and for all, or at least stick to a couple of generations.
What does this mean? idk what ROCm even is.
It's their Linux compute stack, i.e. what keeps them from being taken seriously for AI/ML :(

No, please explain it to me. Talking about AI and ML in servers and then pointing to gaming cards like the 3070 Ti and the RX 7900 XT/X looks a bit odd.
It may seem odd, but it's really not. People dive into AI/ML using the hardware they have; they don't buy professional adapters for a hobby. This, in turn, determines what skills are readily available in the market when you're looking to hire AI/ML engineers.
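For context on what that compute stack looks like to a developer: ROCm's HIP layer is designed so CUDA code ports largely mechanically, and AMD's bundled hipify tools do roughly a textual mapping of API names. A toy sketch of that idea, with a tiny hand-picked mapping table (the real hipify-perl/hipify-clang tools handle far more):

```python
# Tiny illustration of the hipify idea: CUDA runtime calls map almost
# one-to-one onto HIP equivalents. This subset is illustrative only.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def toy_hipify(source: str) -> str:
    """Translate CUDA API names in a source snippet to their HIP names."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

cuda_snippet = "cudaMalloc(&buf, n); cudaMemcpy(buf, host, n, cudaMemcpyHostToDevice);"
print(toy_hipify(cuda_snippet))
# → hipMalloc(&buf, n); hipMemcpy(buf, host, n, hipMemcpyHostToDevice);
```

The output is valid HIP: even the `cudaMemcpyHostToDevice` enum has a direct `hipMemcpyHostToDevice` counterpart, which is why porting existing CUDA code is the easy part; matching CUDA's library ecosystem and performance is the hard part.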
 
Joined
Sep 27, 2008
Messages
1,048 (0.18/day)
@btarunr Renoir/Lucienne and Cezanne/Barcelo/whatever you wanna call it still continues on the mobile side, as new products, under the Ryzen 7020 and 7030 names. Released last year in 2022. Lucienne is decidedly trash heap but Barcelo is still being quietly put in new, reasonably mid-high end notebooks. Since it's all listed by AMD as one big family under Ryzen 7000, I don't see Barcelo going away until Rembrandt-R (7035) and Phoenix (7040) do too. Which will be......pretty interesting to see in the context of GCN5.1 being dropped.

Vega had a good run, I honestly don't see a problem with this. It's just that it should have died off long ago, instead of being continuously shoehorned into new products in the past 3 years.

The CPU core is still Zen 2, but the iGPU on those 7020 series chips is RDNA2 instead of Vega. (Radeon 610M)

You're right about the 7030 chips though
 
Joined
Mar 10, 2010
Messages
11,878 (2.29/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Pro users using ROCm, on Cezanne?! That's a big crowd?!
 

tabascosauz

Moderator
Supporter
Staff member
Joined
Jun 24, 2015
Messages
7,790 (2.39/day)
Location
Western Canada
System Name ab┃ob
Processor 7800X3D┃5800X3D
Motherboard B650E PG-ITX┃X570 Impact
Cooling NH-U12A + T30┃AXP120-x67
Memory 64GB 6400CL32┃32GB 3600CL14
Video Card(s) RTX 4070 Ti Eagle┃RTX A2000
Storage 8TB of SSDs┃1TB SN550
Case Caselabs S3┃Lazer3D HT5
The CPU core is still Zen 2, but the iGPU on those 7020 series chips is RDNA2 instead of Vega. (Radeon 610M)

You're right about the 7030 chips though

Ah no, that was my mistake. 7020 is Mendocino, the new Zen 2/RDNA2 Athlon-class release. I was referring to the Renoir refresh.

It does still illustrate the utter disaster that is the 7000 mobile naming scheme. AMD seriously wants people to view Mendocino, Barcelo, Rembrandt, Phoenix and Dragon Range as equals in terms of technology :roll:

Pro users using ROCm, on Cezanne?! That's a big crowd?!

As far as I can tell, ROCm support on APUs (even the "Vega" ones) is kinda pepega, and a clear answer/proper documentation is scarce. Still, why not? I can think of plenty of people very interested in running stuff like Stable Diffusion who don't have the funds to smash on a high-end GPU.
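The commonly reported community workaround for exactly this situation is spoofing a supported gfx ISA via the `HSA_OVERRIDE_GFX_VERSION` environment variable before the ML framework initializes. A minimal sketch; the override value below is an assumption for a Vega-based APU, and nothing is guaranteed on officially unsupported parts:

```python
import os

# Community workaround for running ROCm-based tools (e.g. Stable Diffusion
# via PyTorch) on APUs that ROCm does not officially support: spoof a
# supported gfx ISA before the HSA runtime initializes. "9.0.0" (~gfx900)
# is an assumed value for a Vega APU; pick the ISA closest to your
# hardware, and expect breakage on unsupported parts.
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "9.0.0"

# The framework import (e.g. `import torch`) must come *after* this point,
# since the runtime reads the variable when it starts up.
print(os.environ["HSA_OVERRIDE_GFX_VERSION"])
```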
 
Joined
Sep 6, 2013
Messages
3,054 (0.78/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 5500 / Ryzen 5 4600G / FX 6300 (12 years latter got to see how bad Bulldozer is)
Motherboard MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2) / Gigabyte GA-990XA-UD3
Cooling Νoctua U12S / Segotep T4 / Snowman M-T6
Memory 16GB G.Skill RIPJAWS 3600 / 16GB G.Skill Aegis 3200 / 16GB Kingston 2400MHz (DDR3)
Video Card(s) ASRock RX 6600 + GT 710 (PhysX)/ Vega 7 integrated / Radeon RX 580
Storage NVMes, NVMes everywhere / NVMes, more NVMes / Various storage, SATA SSD mostly
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) ---- 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / Sharkoon Rebel 9 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Software Windows 10 / Windows 10 / Windows 7
It may seem odd, but it's really not. People dive into AI/ML using the hardware they have; they don't buy professional adapters for a hobby. This, in turn, determines what skills are readily available in the market when you're looking to hire AI/ML engineers.
I don't believe Nvidia's financial success is based on millions of Nvidia gaming card owners deciding to make AI and ML their hobby. I understand your point, but gaming cards are probably irrelevant here.
I also understand your point about hobbyists getting used to CUDA and then probably doing some studies on CUDA to get jobs. But again, where Nvidia, AMD and everyone else are targeting, it's not about "what was your hobby, what did you learn in university?". If that were the case, then EVERYTHING other than CUDA would be DOA.
 

bug

Joined
May 22, 2015
Messages
13,320 (4.04/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Pro users using Rocm, on Cezanne!?!, That a big crowd?!.
That's AMD's real problem: compute underperforms and is hard to set up. As a result, everything happens in the green camp; AMD's crowd is not big no matter how you look at it.

I also understand your point about hobbyists getting used to CUDA and then probably doing some studies on CUDA to get jobs. But again, where Nvidia, AMD and everyone else are targeting, it's not about "what was your hobby, what did you learn in university?". If that were the case, then EVERYTHING other than CUDA would be DOA.
It's not entirely about that. But when you need to move fast, existing knowledge in the market is a factor.
 
Joined
Sep 6, 2013
Messages
3,054 (0.78/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 5500 / Ryzen 5 4600G / FX 6300 (12 years latter got to see how bad Bulldozer is)
Motherboard MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2) / Gigabyte GA-990XA-UD3
Cooling Νoctua U12S / Segotep T4 / Snowman M-T6
Memory 16GB G.Skill RIPJAWS 3600 / 16GB G.Skill Aegis 3200 / 16GB Kingston 2400MHz (DDR3)
Video Card(s) ASRock RX 6600 + GT 710 (PhysX)/ Vega 7 integrated / Radeon RX 580
Storage NVMes, NVMes everywhere / NVMes, more NVMes / Various storage, SATA SSD mostly
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) ---- 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / Sharkoon Rebel 9 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Software Windows 10 / Windows 10 / Windows 7
It's not entirely about that. But when you need to move fast, existing knowledge in the market is a factor.
Google and others might grab individuals who are good at CUDA, not to program in CUDA, but because they understand what AI and ML programming is, what it looks like, and how to get the results needed. Most of them will have to learn something else to get and keep their new/old jobs.
Again, if it were CUDA and only CUDA, EVERYTHING else would have been DOA. Not just anything AMD, but anything Intel, anything Google, anything Amazon, anything Tenstorrent, anything Apple, anything Microsoft, anything different than CUDA. Am I right? Am I wrong?

Now I do agree that companies and universities with limited resources, working on limited projects (projects that are still huge in my eyes, or in the eyes of some other individual throwing random thoughts in a forum), will just go out and buy a few RTX 4090s. Granted. But again, I doubt Nvidia's latest amazing success is based on GeForce cards.
 
Joined
Mar 10, 2010
Messages
11,878 (2.29/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
That's AMD's real problem: compute underperforms and is hard to set up. As a result, everything happens in the green camp; AMD's crowd is not big no matter how you look at it.


It's not entirely about that. But when you need to move fast, existing knowledge in the market is a factor.
So loads do AI on MX450-equipped laptops?

My point was that no pro uses Cezanne for AI work, and even if that APU had an Nvidia GPU it would still be irrelevant; it's a consumer part.

ROCm is irrelevant to consumer parts, so there's no need to mention them at all or discuss consumer parts in this thread.

What's next, should we talk AMD driver issues on consumer parts here too? Someone will, no doubt.
 

bug

Joined
May 22, 2015
Messages
13,320 (4.04/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Google and others might grab individuals who are good at CUDA, not to program in CUDA, but because they understand what AI and ML programming is, what it looks like, and how to get the results needed. Most of them will have to learn something else to get and keep their new/old jobs.
Again, if it were CUDA and only CUDA, EVERYTHING else would have been DOA. Not just anything AMD, but anything Intel, anything Google, anything Amazon, anything Tenstorrent, anything Apple, anything Microsoft, anything different than CUDA. Am I right? Am I wrong?

Now I do agree that companies and universities with limited resources, working on limited projects (projects that are still huge in my eyes, or in the eyes of some other individual throwing random thoughts in a forum), will just go out and buy a few RTX 4090s. Granted. But again, I doubt Nvidia's latest amazing success is based on GeForce cards.
Everything other than CUDA is virtually DOA. Most libraries that underpin AI/ML projects are CUDA-based.

Of course you can have your own implementation from scratch (possibly even more performant than CUDA, if you're in a specific niche), but at this point the entry barrier is quite high.

Mind you, I'm not saying CUDA is better (I haven't used it). But I know at least two guys who tried to dabble in AI/ML using AMD/OpenCL, and they both said "screw it" in the end and went CUDA. One of them was doing it as a hobby, the other for his PhD. TL;DR: CUDA is everywhere and it sells hardware, while AMD keeps finding ways to shoot themselves in the foot.
 
Joined
Aug 13, 2010
Messages
5,399 (1.07/day)
Now I do agree that companies and universities with limited resources, working on limited projects (projects that are still huge in my eyes, or in the eyes of some other individual throwing random thoughts in a forum), will just go out and buy a few RTX 4090s. Granted. But again, I doubt Nvidia's latest amazing success is based on GeForce cards.
You'd be amazed to know how many small, medium and often large businesses deploy racks and racks of GeForce-based ML servers today to perform training. Cards like the RTX 3090, and later the RTX 4080 and 4090, really do represent server-grade compute strength that was previously available to only a few.

This economy is crazy. Startups that want to train models in-house will often buy 15-25 high-end GPUs and put them in racks or rigs to get their initial versions ready.
 
Joined
Jan 8, 2017
Messages
9,115 (3.37/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
You'd be amazed to know how many small, medium and often large businesses deploy racks and racks of GeForce-based ML servers today to perform training. Cards like the RTX 3090, and later the RTX 4080 and 4090, really do represent server-grade compute strength that was previously available to only a few.

This economy is crazy. Startups that want to train models in-house will often buy 15-25 high-end GPUs and put them in racks or rigs to get their initial versions ready.
They really don't; businesses that need that kind of compute turn to cloud solutions. Buying 20 RTX 4090s upfront is a catastrophically cost-ineffective solution; no one is doing that.

Do you know how astronomically far ahead anything with Tensor cores is of even RDNA3 in ML applications?
I don't, and neither do you. AMD's upcoming MI300 looks like it's going to be at the very least comparable to the H100, and the fact that Nvidia had to respond with new H100 variants to match its memory capacity goes to show they feel some heat coming from AMD. Not to mention that AMD keeps winning more and more huge supercomputer projects against Nvidia/Intel offerings; if Nvidia were so astronomically ahead, no one would be paying millions of dollars for their stuff, be real.
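The cloud-versus-buying argument above is easy to sanity-check with rough numbers. Everything below is an illustrative assumption (≈US$1,600 per RTX 4090, ≈US$2/hour for a comparable on-demand cloud GPU), not quoted pricing; the break-even point depends heavily on utilization:

```python
# Back-of-envelope comparison: buying 20 RTX 4090s vs renting cloud GPUs.
# All figures are illustrative assumptions, not real quotes.
CARD_PRICE_USD = 1600                # assumed street price per RTX 4090
NUM_CARDS = 20
CLOUD_RATE_USD_PER_GPU_HOUR = 2.0    # assumed on-demand rate per comparable GPU

# Upfront hardware cost (ignores hosts, PSUs, power, cooling, admin time,
# which all push the real cost of ownership higher).
upfront = CARD_PRICE_USD * NUM_CARDS
print(f"upfront hardware: ${upfront:,}")  # → upfront hardware: $32,000

# Hours of 20-GPU cloud time the same budget buys:
hours = upfront / (CLOUD_RATE_USD_PER_GPU_HOUR * NUM_CARDS)
print(f"equivalent cloud time: {hours:.0f} hours (~{hours / 24:.0f} days)")
```

Under these assumptions the purchase price equals roughly 800 hours of full-cluster cloud time, so whether buying is "catastrophically cost ineffective" comes down to how continuously the cards would actually be used.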
 
Last edited:

bug

Joined
May 22, 2015
Messages
13,320 (4.04/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
I don't, and neither do you. AMD's upcoming MI300 looks like it's going to be at the very least comparable to the H100, and the fact that Nvidia had to respond with new H100 variants to match its memory capacity goes to show they feel some heat coming from AMD.
That's the sad part: it doesn't matter how capable the hardware is if the software sucks.
Not to mention that AMD keeps winning more and more huge supercomputer projects against Nvidia/Intel offerings; if Nvidia were so astronomically ahead, no one would be paying millions of dollars for their stuff, be real.
If Ferrari were so great, everybody would be buying Ferraris, nobody would buy anything else, right? ;)

It's an exaggeration, of course. But it shows why you can sell even when there's a considerable gap between you and the competition.
 
Joined
Aug 13, 2010
Messages
5,399 (1.07/day)
They really don't; businesses that need that kind of compute turn to cloud solutions. Buying 20 RTX 4090s upfront is a catastrophically cost-ineffective solution; no one is doing that.
I'm not speaking out of nowhere or in hypotheticals; I'm in this business myself :). Not everyone is rushing to get 20 4090s, but small offices will already start equipping their employees with machines that let them train smaller models locally. There's really nowhere else besides GeForce cards for them to turn. That means the product is built with CUDA, and for probably the next few years the business will grow using it and expanding on their resources.
We have already seen, with most of the popular tools available to developers, that someone is better off getting two RTX 4090s in a machine than four RX 7900 XTXs or whatever the Radeon Instinct equivalent is. The situation is extremely skewed towards NVIDIA in the ML ecosystem today. At this point, I'm pretty sure that even two extra zeroes wouldn't make AMD's ML sales reach NVIDIA's. It really is quite an astronomical difference.

I don't even want to open up on the GTCs, the courses and the academies that exist on NVIDIA's side to enrich the CUDA-based ML world today. This is a losing game for anyone else in the industry so far, Intel and their less-than-great solutions included. The DGX servers NVIDIA offers are just the cherry on top.
 
Last edited:
Joined
Sep 6, 2013
Messages
3,054 (0.78/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 5500 / Ryzen 5 4600G / FX 6300 (12 years latter got to see how bad Bulldozer is)
Motherboard MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2) / Gigabyte GA-990XA-UD3
Cooling Νoctua U12S / Segotep T4 / Snowman M-T6
Memory 16GB G.Skill RIPJAWS 3600 / 16GB G.Skill Aegis 3200 / 16GB Kingston 2400MHz (DDR3)
Video Card(s) ASRock RX 6600 + GT 710 (PhysX)/ Vega 7 integrated / Radeon RX 580
Storage NVMes, NVMes everywhere / NVMes, more NVMes / Various storage, SATA SSD mostly
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) ---- 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / Sharkoon Rebel 9 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Software Windows 10 / Windows 10 / Windows 7
Everything other than CUDA is virtually DOA. Most libraries that underpin AI/ML projects are CUDA-based.

Of course you can have your own implementation from scratch (possibly even more performant than CUDA, if you're in a specific niche), but at this point the entry barrier is quite high.

Mind you, I'm not saying CUDA is better (I haven't used it). But I know at least two guys who tried to dabble in AI/ML using AMD/OpenCL, and they both said "screw it" in the end and went CUDA. One of them was doing it as a hobby, the other for his PhD. TL;DR: CUDA is everywhere and it sells hardware, while AMD keeps finding ways to shoot themselves in the foot.
That still doesn't explain why huge companies like Google or Intel build their own hardware for AI and ML. Do we, simple forum users, understand reality better than they do?
You'd be amazed to know how many small, medium and often large businesses deploy racks and racks of GeForce-based ML servers today to perform training. Cards like the RTX 3090, and later the RTX 4080 and 4090, really do represent server-grade compute strength that was previously available to only a few.

This economy is crazy. Startups that want to train models in-house will often buy 15-25 high-end GPUs and put them in racks or rigs to get their initial versions ready.
Individuals and startups are not the target audience for Nvidia, Intel, AMD, Google, Tenstorrent, etc.
RX 7900 XTXs or whatever the Radeon Instinct equivalent is.
Someone in the business should have that answer.

In any case, MosaicML seems to be a company that has worked with Nvidia for a long time, and only now is it coming out with a press release saying "Hey, you know something? AMD's options can be a real alternative NOW", because
thanks largely to a new version of AMD software released late last year and a new version of open-source software backed by Meta Platforms called PyTorch that was released in March
Maybe, being in the business, you need to update your info.
 
Joined
Aug 13, 2010
Messages
5,399 (1.07/day)
Individuals and startups are not the target audience for Nvidia
If startups weren't NVIDIA's target, they wouldn't invest such tremendous effort in running GTCs and funding programs worth hundreds of millions to make sure that as many small and medium businesses as possible use their hardware and software tools. This is objectively false. Oftentimes, NVIDIA likes those businesses enough to buy into them, or to buy them entirely.
If such things weren't important to them, they wouldn't provide as many software tools or grow as large a community using accessible and affordable hardware for such clients. They would force them to buy server-grade hardware only and unlock those features there. They wouldn't sell you Xavier NX / Orin products that you can buy for a couple hundred dollars and develop for, down to the hardware-integration level on boards.

These products exist especially for startups and small businesses. Here's our little Xavier we built a board to accommodate. Very cute.

20230703_150538.jpg


People really stay under large rocks. Time to lift them up; you missed how NVIDIA has exponentially grown their ML outreach since 2017.
 
Last edited: