
AMD Radeon AI PRO R9700 GPU Arrives on July 23rd

ROCm is the alternative, the default (no need to even say it) is CUDA.
Well, your phrasing made it sound weird:
My opinion will be outdated when these ROCm alternatives start being viable outside of edge cases
Makes more sense if you typo'ed and meant "CUDA alternatives" instead.

Viable (if that's true, which isn't clear) ≠ competitive.
Please, do point to whenever I said it's competitive.
It is starting to be viable. Just to be clear, so far we have been talking about AI and whatnot, for which PyTorch already has upstream ROCm support and even vLLM is getting first-class support after AMD got their shit together and started helping the devs.
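To illustrate what the upstream ROCm support looks like in practice: ROCm builds of PyTorch reuse the CUDA-style device API, so existing code runs on a Radeon card without changes. A minimal sketch, assuming a ROCm build of PyTorch is installed:

import torch

# On a ROCm build, torch.version.hip reports the HIP version (it is None on CUDA builds),
# while torch.cuda.is_available() still returns True for a supported Radeon GPU.
print("GPU available:", torch.cuda.is_available())
print("HIP version:", torch.version.hip)
print("CUDA version:", torch.version.cuda)

# The "cuda" device string maps to the Radeon card on ROCm, so tensors and models need no edits.
x = torch.randn(2048, 2048, device="cuda", dtype=torch.float16)
print((x @ x).shape)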

If you were to say something like rendering (which is not my area), I'd have no opinion whatsoever, other than agreeing that OptiX simply mops the floor with AMD when it comes to Blender.

Ask a dev whether they'd prefer a Vega 64 or a 1080 Ti. Are you really going to keep pretending they're even close to being equivalent for productivity software?

The 1080 Ti supports CUDA. The Vega 64 supports what?
The 1080ti has no cooperative matrix support, and its performance is subpar; a 2060 manages to be over 2x faster for anything involving matrices.
Both are shit.
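To put rough numbers on the cooperative-matrix point, a quick fp16 matmul timing like the sketch below shows the gap: Turing's tensor cores (2060) get used for this, while Pascal (1080 Ti) falls back to its plain ALUs. The matrix size and iteration count are arbitrary, it's only meant as a rough check:

import time
import torch

n, iters = 4096, 100
a = torch.randn(n, n, device="cuda", dtype=torch.float16)
b = torch.randn(n, n, device="cuda", dtype=torch.float16)

# Warm up once, then time the matmuls with explicit GPU synchronization.
_ = a @ b
torch.cuda.synchronize()
start = time.time()
for _ in range(iters):
    c = a @ b
torch.cuda.synchronize()
elapsed = time.time() - start

# Each n x n matmul is roughly 2*n^3 floating point operations.
print(f"{iters * 2 * n**3 / elapsed / 1e12:.1f} TFLOPS")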

Do you know what folks on a really tight budget prefer? Cheaper, crappy GPUs with tons of VRAM, be it a P40 or an MI25. Failing that, GeForce Pascal makes no sense whatsoever and you'd be better off with a 2060, so the vega64 and 1080ti are equally irrelevant.

Another false equivalence. The 7900XTXs are nowhere near as sought after as the 24/32 GB NVIDIA consumer cards for prosumer/workstation
7900XTXs are simply almost impossible to find used in my market; they go for way more than 3090s and are priced way too close to 4090s (which makes no sense whatsoever).
Looking over on eBay US, 3090s are a bit cheaper than 7900XTXs, with 4090s being way more expensive than both.
beyond people who don't actually make money with their GPU and just like big VRAM numbers.
Now I'm confused. We are talking about used products. People making proper money as a business won't even be looking for those so the discussion of used products would be moot.
Hobbyists and small scale stuff would be the ones looking into those, no matter if nvidia or AMD, and that's where the argument makes sense.

The release of the 9070XT has made them essentially irrelevant for gaming too, though there's still hangers on due to what, 5% better raster?
What does gaming have to do with anything we've discussed so far?

Again, CUDA still works on these cards, they're just not getting further developments.
So? How's that any relevant? I'm not talking about those lacking any sort of software support, rather that the hardware itself is useless, which applies for both vega64 and the 1080ti.


Again, all your points seem to be from someone that has no industry experience in that specific field. I'm not sure what you're even trying to argue for anymore lol
 
Well, your phrasing made it sound weird:

Makes more sense if you typo'ed and meant "CUDA alternatives" instead.
The way I wrote it made sense enough: ROCm, ZLUDA, SCALE, these are the alternatives to CUDA (which doesn't need to be said, since it's the default, as I've already stated).
Please, do point to whenever I said it's competitive.
If it's not competitive, why bother?
If you were to say something like rendering (which is not my area), I'd have no opinion whatsoever, other than agreeing that OptiX simply mops the floor with AMD when it comes to Blender.
CUDA cards mop the floor with every AMD prosumer/productivity/workstation alternative, viable or not. Until that changes, there is essentially no competition. Proving something like ROCm works in a lab or when on a budget does not = the competitive solution the entire rest of the market goes for. Move the needle or be ignored.
The 1080ti has no cooperative matrix support, and its performance is subpar; a 2060 manages to be over 2x faster for anything involving matrices.
Both are shit.
It has CUDA, so it is useful, pretty much that simple. 11 GB VRAM and enough cores to do most of the work you do on a 4080, just slower. The point is it's viable, a word you use. It was competitive on release eight years ago, and today it's usable and does the job. Obviously if building new on a $300-400 budget you would pick something like a 4060/5060 Ti 16 GB, not a used 1080 Ti. The point I'm making is that in a choice between Vega and Pascal, the Pascal developer/professional is still making money today, and has got eight years of usefulness out of their card. The later RTX cards didn't exist when the 1080 Ti released, the Vega 64 did, hence my comparison, so arguing they are better is irrelevant.
7900XTXs are simply almost impossible to find used in my market; they go for way more than 3090s and are priced way too close to 4090s (which makes no sense whatsoever).
Looking over on eBay US, 3090s are a bit cheaper than 7900XTXs, with 4090s being way more expensive than both.
Yes, I know, the 7900XTX doesn't make sense.
Now I'm confused. We are talking about used products. People making proper money as a business won't even be looking for those so the discussion of used products would be moot.
Hobbyists and small scale stuff would be the ones looking into those, no matter if nvidia or AMD, and that's where the argument makes sense.
People making proper money as a business buy RTX Pro cards. The rest buy xx90 cards or whatever they can afford/find.
What does gaming have to do with anything we've discussed so far?
They're irrelevant for both gaming/productivity in 2025 when picking a card, for gaming you pick 9070XT, for productivity you pick NVIDIA, that's what it has to do with this discussion.
So? How's that any relevant? I'm not talking about those lacking any sort of software support, rather that the hardware itself is useless, which applies for both vega64 and the 1080ti.
"Useless". You're bundling both into the same camp because that way you can claim a false equivalence, instead of admitting the 1080 Ti was way more useful as a productivity card over the past eight years, and indeed today.
Again, all your points seem to be from someone that has no industry experience in that specific field. I'm not sure what you're even trying to argue for anymore lol
Appeal to authority huh?
 
Or how AMD are currently comparing the $11,700 96-core 9995WX against the $5,890 60-core Xeon w9-3595X, instead of the $7,999 64-core 9985WX.

Anything to show a bigger bar chart.
"Anything to show a bigger bar chart"

lol

They are comparing their best vs their best.
 
I would like to get one to tinker around with it, no matter what someone might believe how useful it might be. Having alternatives due to other options should be in every consumer top list.
 
They are comparing their best vs their best.
Then why doesn't AMD compare their best workstation card (R9700) against the best Nvidia workstation card (RTX Pro 6000 Blackwell)? Why'd they pick the second-tier consumer GPU with half the VRAM and then crow it can't run models that exceed its VRAM? The only basis for comparing the R9700 against the 5080 is they cost roughly the same, which is why the issue of AMD comparing their $11,000 Threadripper against a $6000 Xeon was brought up.
 
Then why doesn't AMD compare their best workstation card (R9700) against the best Nvidia workstation card (RTX Pro 6000 Blackwell)? Why'd they pick the second-tier consumer GPU with half the VRAM and then crow it can't run models that exceed its VRAM? The only basis for comparing the R9700 against the 5080 is they cost roughly the same, which is why the issue of AMD comparing their $11,000 Threadripper against a $6000 Threadripper was brought up.
Closer to 12k vs 6k, and I think you mean Xeon.
 
Then why doesn't AMD compare their best workstation card (R9700) against the best Nvidia workstation card (RTX Pro 6000 Blackwell)? Why'd they pick the second-tier consumer GPU with half the VRAM and then crow it can't run models that exceed its VRAM? The only basis for comparing the R9700 against the 5080 is they cost roughly the same, which is why the issue of AMD comparing their $11,000 Threadripper against a $6000 Xeon was brought up.

Totally backwards logic, the type you'd only use if you are a fanboy who wants to find something to complain about.

You think it's unfair that they're comparing those GPUs therefore it must also be unfair that they're comparing their best CPU with Intel's best CPU. Which I am sure you realize is total nonsense.

If you think comparing the R9700 to a 5080 is wrong, fine, whatever, but pretending other random comparisons they made are unfair because "AMD bad" or whatever is just dumb.
 
Totally backwards logic, the type you'd only use if you are a fanboy who wants to find something to complain about.
I'm not the one who brought up the Threadripper comparisons, I was explaining to you why someone else had brought it up (there's another thread if you want to talk about the Threadripper announcement).

You think it's unfair that they're comparing those GPUs therefore it must also be unfair that they're comparing their best CPU with Intel's best CPU. Which I am sure you realize is total nonsense.
Actually, I don't even really care that much about the fairness of the comparison. My much bigger concern is how desperate AMD has to be to resort to these absurd comparisons. You do have a point that with the Threadripper, they're at least comparing top model to top model, and everyone is used to taking marketing announcements and first-party benchmarks with a hefty plate of salt. But these specific "benchmarks" for the R9700 are so outlandish and so unrealistic that it reeks of panic. If I was a system builder or IT department for an enterprise in the market for AI workstations, this announcement would confirm my purchase of an Nvidia workstation card. If the best scenario AMD can come up with for their new workstation flagship is just exceeding the VRAM of a second-tier consumer card, AMD is dead in this market.
 
I was explaining to you why someone else had brought it up
Doesn't matter, I explained why it doesn't make sense.

My much bigger concern is how desperate AMD has to be to resort to these absurd comparisons.
They're not absurd.

The comparison is perfectly reasonable. The R9700 has way more VRAM; there are use cases where you are simply shit out of luck with a 5080 no matter how awesome and fast it is, and you'd have to pay at least $2,500 for a 5090 to get the same 32 GB, or God knows how much for one of their pro cards with the same amount of VRAM. There really isn't any other product they could have compared it to that would have made more sense.
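The back-of-the-envelope VRAM math is the whole argument: the weights alone of a ~32B-parameter model already blow past 16 GB. A rough sketch, with the model size and quantization levels purely illustrative:

def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    # VRAM needed just for the model weights, ignoring KV cache and activations.
    return params_billion * 1e9 * bytes_per_param / 1024**3

# ~32B parameters at 8-bit: ~29.8 GB of weights, fits the 32 GB R9700 but not a 16 GB 5080.
print(f"{weight_vram_gb(32, 1.0):.1f} GB")
# Even at 4-bit it's ~14.9 GB, leaving almost nothing for context on a 16 GB card.
print(f"{weight_vram_gb(32, 0.5):.1f} GB")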

this announcement would confirm my purchase of an Nvidia workstation card
Yeah right, because if you had the money to pay thousands for an Nvidia workstation card, you'd be waiting to see how AMD's mid-range offerings fare, as if you'd expect them to be better in every way than Nvidia's best cards lol.

Again, this is the thought process of a fanboy, not a real consumer. A real consumer with this budget wouldn't be buying their workstation cards because he wouldn't have the money, hence comparing this to things like the 5080 makes sense.
 
The comparison is perfectly reasonable. The R9700 has way more VRAM; there are use cases where you are simply shit out of luck with a 5080 no matter how awesome and fast it is, and you'd have to pay at least $2,500 for a 5090 to get the same 32 GB, or God knows how much for one of their pro cards with the same amount of VRAM.
Yeah right, because if you had the money to pay thousands for an Nvidia workstation card, you'd be waiting to see how AMD's mid-range offerings fare, as if you'd expect them to be faster than Nvidia's best cards lol.
I can't believe you typed these two statements one after the other, and then hit "Post reply". That is hilarious. Does an extra $1200 for a 5090 or 4500 Blackwell matter, or does it not? Is there a market and use case for the R9700, or not? You seem to be conflicted. And this isn't a "mid-range offering," this is their best workstation model this generation.

If AMD's target market is just "people who need 32GB of VRAM but also will be bankrupted by spending an extra $1300 for an Nvidia card," again, they are completely dead in this market. That hasn't worked in the consumer space, and it most definitely isn't working in the workstation market.
 
If AMD's target market is just "people who need 32GB of VRAM but also will be bankrupted by spending an extra $1300 for an Nvidia card," again, they are completely dead in this market. That hasn't worked in the consumer space, and it most definitely isn't working in the workstation market.
AMD's target is people who are buying cards in that price range for the purpose of running LLMs, it's that simple. I know you can't understand it, or perhaps you're pretending not to understand it, but there's nothing more that can be explained.

If every consumer were always willing to pay more because "an extra $$$ wouldn't bankrupt them", then Nvidia wouldn't sell cheaper pro cards, which they do; the cheapest Blackwell pro card (24 GB) is $1,500 in case you didn't know. So clearly there is a market for cheaper products in the segment, unless Nvidia is also selling products that are "dead" according to your very high IQ take.

So it's obvious you have no idea what you are talking about.
 
The way I wrote it made sense enough: ROCm, ZLUDA, SCALE, these are the alternatives to CUDA (which doesn't need to be said, since it's the default, as I've already stated).
It did not, but ok.
If it's not competitive, why bother?
Competitive is not a simple, single metric. Being too simplistic and reducing it to a single graph of what you think is competitive is about as relevant as using AMD GPUs for microcontrollers.

CUDA cards mop the floor with every AMD prosumer/productivity/workstation alternative, viable or not. Until that changes, there is essentially no competition.
Wrong. Don't make such bold claims that only show your ignorance.
Proving something like ROCm works in a lab or when on a budget does not = the competitive solution the entire rest of the market goes for. Move the needle or be ignored.
Sure, tell that to hyperscalers.
It has CUDA, so it is useful, pretty much that simple. 11 GB VRAM and enough cores to do most of the work you do on a 4080, just slower. The point is it's viable, a word you use. It was competitive on release eight years ago, and today it's usable and does the job. Obviously if building new on a $300-400 budget you would pick something like a 4060/5060 Ti 16 GB, not a used 1080 Ti. The point I'm making is that in a choice between Vega and Pascal, the Pascal developer/professional is still making money today, and has got eight years of usefulness out of their card. The RTX cards didn't exist when the 1080 Ti released, the Vega 64 did, hence my comparison, so arguing they are better is irrelevant.
Again, no, that's totally wrong and not how it's used. You could even give the local LLM thread we have here a read and see how things work.
Yes, I know, the 7900XTX doesn't make sense.
People making proper money as a business buy RTX Pro cards. The rest buy xx90 cards or whatever they can afford/find.
They're irrelevant for both gaming/productivity in 2025 when picking a card, for gaming you pick 9070XT, for productivity you pick NVIDIA, that's what it has to do with this discussion.
Bruh lol
"Useless". You're bundling both into the same camp because that way you can claim a false equivalence, instead of admitting the 1080 Ti was way more useful as a productivity card over the past eight years, and indeed today.
The 1080ti was extremely relevant at its time and AMD was utter crap. I never said otherwise.
Nowadays? The 1080ti is as relevant as vega64 for the stuff we're talking about: better used as a space heater.
Appeal to authority huh?
Well, you don't seem to be arguing in good faith, you bring no evidence or proper arguments, and you're just bashing a product you have never used, on tasks you clearly never worked with, while praising another product for tasks you clearly never used it for either.
So yeah, I feel like I get a pass to step down to your level on a pointless discussion :p
 
CUDA being open source would've been better for everyone, but it's Nvidia; they've gotten 90% of the market by rigging the game.
Open source has worked out so well for AMD. Why would Nvidia want that?
 
idk, ask nvidia themselves
I’m aware, I specifically was comparing CUDA to ROCm and AMD’s desire to have other people do their job for them.

Look at the commit history of ROCm. While I won’t say nobody wants anything to do with it…
 
Look at the commit history of ROCm. While I won’t say nobody wants anything to do with it…
Neither company is really welcoming to external contributors, because of bureaucracy. Even as an external contractor for AMD, getting upstream changes done in their repos is quite annoying.
A similar thing happens at Nvidia, where they have an extremely annoying setup for most of their projects, with an internal Perforce repo and the public GitHub one that have to be kept in sync.
 
Back to topic, interesting that their Pro GPU uses R9700 naming instead of Pro R9070.
 
The key words there are "compelling" and "better".
I would find it compelling if it were even on par or close. That's just me though.

hence why RDNA is being dropped next gen.
To be fair, it's being replaced and I suspect they'll keep it around for a bit like they do with every generational switch-over.

That's exactly what ROCm is. It's not great, but it has certainly been improving and getting more traction.
Gonna have to read up on that.
 
Wonder if these have SR-IOV... Could be tempted > <
 
I'm not the one who brought up the Threadripper comparisons, I was explaining to you why someone else had brought it up (there's another thread if you want to talk about the Threadripper announcement).


Actually, I don't even really care that much about the fairness of the comparison. My much bigger concern is how desperate AMD has to be to resort to these absurd comparisons. You do have a point that with the Threadripper, they're at least comparing top model to top model, and everyone is used to taking marketing announcements and first-party benchmarks with a hefty plate of salt. But these specific "benchmarks" for the R9700 are so outlandish and so unrealistic that it reeks of panic. If I was a system builder or IT department for an enterprise in the market for AI workstations, this announcement would confirm my purchase of an Nvidia workstation card. If the best scenario AMD can come up with for their new workstation flagship is just exceeding the VRAM of a second-tier consumer card, AMD is dead in this market.
So you're butthurt because AMD is comparing their $12k CPU, which is twice as fast and consumes less power, against Intel's best $6k CPU, and you're also butthurt because they show how their card, with a supposed MSRP of ~$1,300, is better than Nvidia's 5080, which currently costs more than $1,300. Maybe they should show a comparison with Nvidia's pro series, where Nvidia will be twice as fast for 4x more money, but I'm pretty sure you would then forget about the price and point at the score, claiming that this card is useless.
 
Thread cleaned. Discuss the topic please, not what people can post.
 
The lack of CUDA makes AMD DGPUs DOA for most things workstation/scientific. Occasionally there's a metric where AMD is competitive for workstations, but it's the exception, not the rule.

This is regarding productivity OFC, not gaming, where RDNA 4 is reasonably competitive vs Blackwell.

Beyond just the hardware, developer and software support for NVIDIA's CUDA architecture is so many orders of magnitude ahead it's not even funny. AMD has made some steps in the right direction recently, but they have a lot of catching up to do and NVIDIA has insane momentum.

I wouldn't say DOA.
Also the situation has changed somewhat compared to the 7900XTX days.

AMD is performing very well in some workloads (Topaz AI, Unreal Engine, ON1 Resize, partially SPEC workstation etc.).
More here @10:38.
But it's still a long road ahead of them: https://www.phoronix.com/review/radeon-rx-9060-xt-rocm
nVidia is extremely strong in rendering, but in everything else it's much closer than some would like to admit.
 
I was estimating $1,500; $1,250 for this GPU is a great deal. 32 GB on a "budget", with half the performance but also half the price of the 5090. This one has a place even in high-end gaming PCs at that price. Good stuff.
 
Back to topic, interesting that their Pro GPU uses R9700 naming instead of Pro R9070.
Back in the day the Radeon 9700 Pro was an amazing card. Shame they have slipped so far.
 
This one has a place even in high end gaming PCs at that price. Good stuff.
If it wasn't for the stupid blower design.

Yes, I know that it makes sense for workstation PC, but not for a gaming PC in 2025.
 