
AMD Radeon AI PRO R9700 GPU Arrives on July 23rd

If it weren't for the stupid blower design.

Yes, I know it makes sense for a workstation PC, but not for a gaming PC in 2025.

It's likely there will be water blocks, and probably third-party heatsinks that work with it, so it's not much of a problem as far as I'm concerned. Worth it. The price is too good to pass up for a 32 GB GPU, even if it's plain old GDDR6. AMD should make sure this is widely available, IMO.
 
Correct, and many software developers don't even certify for non-NVIDIA cards; the install base is just too low to make financial sense. It doesn't help that AMD's support for its own architectures seems to depend on mood: sometimes an architecture isn't given ROCm support until months after release, or only certain models get it.

Even a $300 RTX 5060 or a used RTX 2060 will run CUDA software, though you might end up wanting more VRAM.

AMD needs to prove to software companies, developers, and users that it will keep supporting cards for many years after release, and that code will run on a 10-year-old CUDA card or a brand-new one, the way it does for NVIDIA.
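To make that concrete: part of how NVIDIA delivers that continuity is fat binaries, where one executable carries machine code for older GPU generations plus PTX that the driver JIT-compiles for GPUs released after the program shipped. A minimal sketch of the idea; the kernel and build flags here are illustrative, not taken from any particular product:

Code:
// fatbin.cu - one binary intended to run on both old and new CUDA GPUs.
// Illustrative build line embedding Maxwell-era machine code plus PTX
// for forward compatibility:
//   nvcc fatbin.cu -gencode arch=compute_50,code=sm_50 \
//                  -gencode arch=compute_50,code=compute_50
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;  // trivial per-element work
}

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%s: compute capability %d.%d, %.1f GB VRAM\n",
           prop.name, prop.major, prop.minor,
           prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));

    const int n = 1 << 20;
    float *d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    // If the GPU is newer than any embedded machine code, the driver
    // JIT-compiles the PTX, so the same binary keeps working.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}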


What they're doing instead is trying to write translation layers like SCALE or ZLUDA so that CUDA code will run on AMD. It's not working very well. The problem is that NVIDIA released CUDA in 2007 and has been steadily improving it ever since, while fostering developer and partner support by providing free resources, libraries, and education, at great cost. That investment is what has paid off and continues to pay off now that everyone is pulling on the CUDA rope, whereas AMD skimping on support and development has left them irrelevant in the prosumer/workstation domain. Support is rarely dropped; even the recent fiasco over 32-bit PhysX being deprecated with Blackwell (due to 32-bit CUDA being dropped) only goes to show that the very idea of an NVIDIA card not natively running everything is shocking to the community.
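For context on what those translation layers are up against: the CUDA runtime API maps almost one-to-one onto AMD's HIP API, which is why projects like ZLUDA and AMD's own hipify porting tools are feasible at all; the hard part is the long tail of libraries like cuBLAS and cuDNN and matching their performance. A rough sketch of the mapping, not code from any of those projects:

Code:
// scale.hip.cpp - the HIP spelling of a simple CUDA kernel; each call
// notes the CUDA API it corresponds to. Built with: hipcc scale.hip.cpp
#include <hip/hip_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // same index math as CUDA
    if (i < n) data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    float *d = nullptr;
    hipMalloc(&d, n * sizeof(float));             // was: cudaMalloc
    hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256),
                       0, 0, d, 2.0f, n);         // was: scale<<<...>>>(...)
    hipDeviceSynchronize();                       // was: cudaDeviceSynchronize
    hipFree(d);                                   // was: cudaFree
    return 0;
}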
Yes, but in 2007, AMD was in the midst of a CPU crisis and remained so until the arrival of Ryzen. It had acquired ATI just a year earlier for billions and needed to come up with a recovery plan. They miraculously managed not to go bankrupt. And to do so, they focused entirely on CPUs, of course. They didn't have time to think about the future of GPUs.

Nvidia's strong point is that all its software is optimized for Nvidia; that's all. AMD engineers aren't stupid; it just takes a huge amount of work to ensure that the libraries are optimized for AMD as well. It won't be easy for them to compete, but I admire that they're doing their best in so short a time.
 
Yes, but in 2007, AMD was in the midst of a CPU crisis and remained so until the arrival of Ryzen. It had acquired ATI just a year earlier for billions and needed to come up with a recovery plan. They miraculously managed not to go bankrupt. And to do so, they focused entirely on CPUs, of course. They didn't have time to think about the future of GPUs.
This is true, and it explains the why. However much they're the underdog with people rooting for them, though, not being in a position to invest at the time doesn't change reality. It's a reason, sure, but not an excuse, considering they've been marketing GPUs for workstations ever since, but without the software and support their competition has. The lack of competition is due to the lack of competitive software and support, not to the false-equivalence version of reality many people try to peddle instead, where AMD has GPU products and software that are 'just as good' but no one uses them because 'NVIDIA rigged the game'.
Nvidia's strong point is that all its software is optimized for Nvidia; that's all.
They introduce new types of dedicated hardware several years or even decades before the competition; it's not just software. Often their direct competition goes some other way first, then AMD circles back and copies what NVIDIA did, starting 3-5 years later. We saw it with the push toward RT/PT, which was scoffed at five years ago and is now the standard. We're seeing it with FSR4 and the hardware/software break RDNA4 makes with previous generations. And we're seeing it with unified compute languages, where CUDA has spawned 'decent but too late' alternatives like oneAPI, and 'has potential, but only after a lot of work' ROCm.
AMD engineers aren't stupid; it just takes a huge amount of work to ensure that the libraries are optimized for AMD as well. It won't be easy for them to compete, but I admire that they're doing their best in so short a time.
This is true. Hopefully they end up doing that huge amount of work, which will take years, probably about a decade, but the track record isn't great. As others have said, they often make things open source, do some or most of the foundational work, then expect the community to pick up the slack. As you can see from how popular ROCm is on GitHub, and as @Visible Noise mentioned, this isn't a particularly effective approach.
I’m aware; I was specifically comparing CUDA to ROCm, and AMD’s desire to have other people do their job for them.

Look at the commit history of ROCm. While I won’t say nobody wants anything to do with it…
 
They introduce new types of dedicated hardware several years before the competition; it's not just software.
They introduced dedicated hardware to close the stack, and that's what they continue to do because if they opened everything up, they wouldn't have much to sell compared to the others. They'd all use CUDA, and prices would plummet.

An Instinct MI355X isn't worse than a B100 on the hardware front; it just doesn't have software optimized for it.
 
They introduced dedicated hardware to close the stack, and that's what they continue to do because if they opened everything up, they wouldn't have much to sell compared to the others. They'd all use CUDA, and prices would plummet.

An Instinct MI355X isn't worse than a B100 on the hardware front; it just doesn't have software optimized for it.
So be first, then. The one who does it right and does it first wins, because they own the rights. Bitterness about this doesn't change how corporations work. They are literally legally obligated to increase shareholder value, and leaders will be ejected if they do not. Besides, much of CUDA is open source anyway, as has been mentioned already, so this isn't some kind of cutting truth.

Let's not turn this into a moral argument when we're talking about megacorps. Businesses are not obligated to risk R&D and all the costs associated with bringing new products to market just to give it all away for free. AMD is literally no better; look at the prices of their CPUs at both consumer and enterprise levels. The better product wins, and you pay more for it. The competition licenses your IP if they can, as with x86-64. That's just life.
 
They introduced dedicated hardware to close the stack, and that's what they continue to do because if they opened everything up, they wouldn't have much to sell compared to the others. They'd all use CUDA, and prices would plummet.
I don’t understand this argument. Yes, a for-profit company creates a fully integrated hardware-software stack to incentivize the customers to stick with their solutions and maximize profits. They have no reason to open any of it. This isn’t a bug, it’s not “unfair”, it’s literally the market working as intended.
 
So be first, then. The one who does it right and does it first wins, because they own the rights. Bitterness about this doesn't change how corporations work. They are literally legally obligated to increase shareholder value, and leaders will be ejected if they do not. Besides, much of CUDA is open source anyway, as has been mentioned already, so this isn't some kind of cutting truth.

Let's not turn this into a moral argument when we're talking about megacorps. Businesses are not obligated to risk R&D and all the costs associated with bringing new products to market just to give it all away for free. AMD is literally no better; look at the prices of their CPUs at both consumer and enterprise levels. The better product wins, and you pay more for it. The competition licenses your IP if they can, as with x86-64. That's just life.
Open source?
The entire core of CUDA is super closed—drivers, runtime, toolchain, etc.
What are you talking about? Nvidia's margins lie there; it doesn't make better hardware than the others, it just has a closed stack.

Of course, AMD is the same; they're mega-companies, not do-gooders, and they're right to act that way. They'll charge high prices if they can.

That doesn't mean you can say Nvidia's advantage is hardware, when in fact it's only due to closed software.
AMD even uses more advanced nodes than Nvidia; it simply doesn't have software support, because all the companies choose Nvidia, as is only natural.

I don’t understand this argument. Yes, a for-profit company creates a fully integrated hardware-software stack to incentivize the customers to stick with their solutions and maximize profits. They have no reason to open any of it. This isn’t a bug, it’s not “unfair”, it’s literally the market working as intended.
Wait... I replied that Nvidia's strong point is its closed-source software, not its hardware, as the other user said. Of course, there's nothing against it.
The risk is that maybe one day some government imposes restrictions or forces CUDA open over monopoly concerns, and then the castle falls.
 
Open source?
The entire core of CUDA is super closed—drivers, runtime, toolchain, etc.
What are you talking about? Nvidia's margins lie there; it doesn't make better hardware than the others, it just has a closed stack.
The core may be, but NVIDIA both pays for its own development and encourages/enables developers to contribute. Best of both worlds. Their open-source Linux GPU drivers have gotten quite good recently. Again, "just has a closed stack" isn't a criticism or even an accurate observation.
Of course, AMD is the same; they're mega-companies, not do-gooders, and they're right to act that way. They'll charge high prices if they can.

That doesn't mean you can say Nvidia's advantage is hardware, when in fact it's only due to closed software.
AMD even uses more advanced nodes than Nvidia; it simply doesn't have software support, because all the companies choose Nvidia, as is only natural.
It's literally both, but believe what you want to believe. Much of the reason their software stack is better and more developed is that they brought hardware that can run that type of software to market before anyone else. They're now on roughly fifth-gen specialized hardware in some cases, while the direct competition has just brought a sort of first gen to market, or is still using a hybrid approach. Thus they set the standards, and companies that wanted cutting edge bought NVIDIA, learned NVIDIA, and trained devs on NVIDIA stacks. Ignoring this or pretending it isn't so doesn't change reality.

But please, show me the "equivalent hardware" to NVLink?

 

It's literally both, but believe what you want to believe. Much of the reason their software stack is better and more developed is that they brought hardware that can run that type of software to market before anyone else. Thus they set the standards, and companies that wanted cutting edge bought NVIDIA, learned NVIDIA, and trained devs on NVIDIA stacks. Ignoring this or pretending it isn't so doesn't change reality.
lol, are you posting links to CUDA libraries? That has nothing to do with the core CUDA software.
As of this comment, AMD has the best GPGPU in terms of specs, but it doesn't have the software, so in practice it delivers a fraction of the power of an Nvidia B100.
That's all. It's only the CUDA software.
 
lol, are you posting links to CUDA libraries? That has nothing to do with the core CUDA software.
The core may be, but NVIDIA both pays for its own development and encourages/enables developers to contribute. Best of both worlds. Their open-source Linux GPU drivers have gotten quite good recently. Again, "just has a closed stack" isn't a criticism or even an accurate observation.
As of this comment, AMD has the best GPGPU in terms of specs, but it doesn't have the software, so in practice it delivers a fraction of the power of an Nvidia B100.
That's all. It's only the CUDA software.
Not only is it not accurate, but this is a weak attempt to pretend the competition is only behind because they're so nice and good, and provide equivalent open source alternatives. So essentially, it's a cope.

AMD has plenty of proprietary or closed-source software; let's not play this game. It's just that none of it is as popular or profitable as CUDA.
 
Not only is it not accurate, but it's a weak attempt to pretend the competition is only behind because they're so nice and good, and provide 'equivalent' open source alternatives. So essentially, it's a cope.

AMD has plenty of proprietary or closed-source software; let's not play this game. It's just that none of it is as popular or profitable as CUDA.
I understand; it's a waste of time explaining things to fans. They don't want to hear it.
Good day
 
I understand; it's a waste of time explaining things to fans. They don't want to hear it.
Good day
You should explain to the world's companies that AMD is actually "faster" or "the best" on paper. All they need to do is figure out how to get their software to run on it. I mean, AMD calls themselves a software company now, don't they? Should be easy! Probably something these companies would want to know, considering they're investing billions/trillions.

Do you seriously think that these teams of people representing billions of dollars, whose careers hinge on making the right technical decisions, haven't evaluated the alternatives? Aren't capable of making their own software/hardware? Haven't tried to?
As of this comment, AMD has the best GPGPU in terms of specs, but it doesn't have the software, so in practice it delivers a fraction of the power of an Nvidia B100.
That's all. It's only the CUDA software.
 
You should explain to the world's companies that AMD is actually "faster" or "the best" on paper. All they need to do is figure out how to get their software to run on it. I mean, AMD calls themselves a software company now, don't they? Should be easy! Probably something they'd want to know, considering they're investing billions/trillions. Do you seriously think that these teams of people representing billions of dollars, whose careers hinge on making the right technical decisions, haven't evaluated the alternatives? Aren't capable of making their own software/hardware? Haven't tried to?
Are you there? I've been telling you for an hour that they don't have the software.
Because ROCm will never be able to match CUDA if no one uses it. They'll always be a second choice, but you have to understand why! Not because AMD can't make a B100—in fact, the MI355X is even better on paper—but simply because the software stack isn't up to par and probably never will be because all the software is optimized only for NVIDIA.
 
Are you there? I've been telling you for an hour that they don't have the software.
Because ROCm will never be able to match CUDA if no one uses it. They'll always be a second choice, but you have to understand why! Not because AMD can't make a B100—in fact, the MI355X is even better on paper—but simply because the software stack isn't up to par and probably never will be because all the software is optimized only for NVIDIA.
So they don't have the software, but ROCm exists, but it doesn't matter because no one uses it? Is that correct?
MI355X:
"Based on that spec alone, AMD will potentially deliver roughly the same AI computational horsepower with MI355X as Nvidia will have with Blackwell. AMD will also offer up to 288GB of HBM3E memory, however, which is 50% more than what Nvidia offers with Blackwell right now. Both Blackwell and MI355X will have 8 TB/s of bandwidth per GPU.

Of course, there's more to AI than just compute, memory capacity, and bandwidth. Scaling up to higher numbers of GPUs often becomes the limiting factor beyond a certain point, and we don't have any details on whether AMD is making any changes to the interconnects between GPUs. That's something Nvidia talked about quite a bit with its Blackwell announcement, so it will be something to pay attention to when the products start shipping."

So again, you're still claiming that the AMD stack is competitive, just lacking software. But from what I can tell, just from a minute of checking one reference that occurred to me from memory, they have no answer to NVLink. There are likely many other specialised hardware solutions that don't have equivalents, and I mean equivalents, not "does the same thing but slower". I can't really be bothered to go and research them all; I just needed one example to demonstrate that the 'it's just software, the hardware is the same' claim is misleading.
The fact that they're using "more advanced nodes" and still at best reach 'parity' isn't a compliment to the architecture. Blackwell pushing the bar forward again while reusing the same node as Ada just goes to show the casual approach NVIDIA can take to maintain hardware/software dominance. They don't need cutting-edge nodes or massive amounts of HBM just to reach 'equivalence' on paper.
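For anyone wondering what that interconnect question looks like from software: the CUDA runtime exposes whether two GPUs can address each other's memory directly, and whether that path is NVLink or plain PCIe is a big part of the 'scaling past one GPU' gap the quoted article mentions. A minimal, illustrative sketch of the query:

Code:
// p2p_check.cu - report which GPU pairs support direct peer access.
// The API only says yes/no; it doesn't say whether the link is NVLink
// or PCIe, but the bandwidth gap between those two is the scaling issue.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) found\n", count);
    for (int a = 0; a < count; ++a) {
        for (int b = 0; b < count; ++b) {
            if (a == b) continue;
            int ok = 0;
            cudaDeviceCanAccessPeer(&ok, a, b);  // 1 if GPU a can map GPU b's memory
            printf("GPU %d -> GPU %d: peer access %s\n", a, b, ok ? "yes" : "no");
        }
    }
    return 0;
}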
 
So they don't have the software, but ROCm exists, but it doesn't matter because no one uses it? Is that correct?
MI355X:
"Based on that spec alone, AMD will potentially deliver roughly the same AI computational horsepower with MI355X as Nvidia will have with Blackwell. AMD will also offer up to 288GB of HBM3E memory, however, which is 50% more than what Nvidia offers with Blackwell right now. Both Blackwell and MI355X will have 8 TB/s of bandwidth per GPU.

Of course, there's more to AI than just compute, memory capacity, and bandwidth. Scaling up to higher numbers of GPUs often becomes the limiting factor beyond a certain point, and we don't have any details on whether AMD is making any changes to the interconnects between GPUs. That's something Nvidia talked about quite a bit with its Blackwell announcement, so it will be something to pay attention to when the products start shipping."

So again, you're still claiming that the AMD stack is competitive, just lacking software. But from what I can tell, just from a minute of checking one reference that occurred to me from memory, they have no answer to NVLink. There are likely many other specialised hardware solutions that don't have equivalents, and I mean equivalents, not "does the same thing but slower". I can't really be bothered to go and research them all; I just needed one example to demonstrate that the 'it's just software, the hardware is the same' claim is misleading.
The fact that they're using "more advanced nodes" and still at best reach 'parity' isn't a compliment to the architecture. Blackwell pushing the bar forward again while reusing the same node as Ada just goes to show the casual approach NVIDIA can take to maintain hardware/software dominance. They don't need cutting-edge nodes or massive amounts of HBM just to reach 'equivalence' on paper.
You're completely off track.

I'll give you a clear example so I hope you understand:

If CUDA software were as "open" (a duopoly, thanks to IBM) as the x86 architecture has become, anyone could make GPUs that use CUDA, just like AMD makes CPUs that use x86, which was previously reserved for Intel. So don't think NVIDIA's monopoly will last long; it's very likely that some government will intervene.

They have their own technology, called Infinity Fabric Link. There's no point in getting too hung up on it; on the hardware front, AMD has it all.

https://www.digitimes.com/news/a20240702PD221/france-nvidia-antitrust-investigation-ai.html and many, many more: https://www.businessinsider.com/nvidia-secret-sauce-regulators-gpu-chips-jensen-huang-2024-7
 
You're completely off track.

I'll give you a clear example so I hope you understand:

If CUDA software were as "open" (a duopoly, thanks to IBM) as the x86 architecture has become, anyone could make GPUs that use CUDA, just like AMD makes CPUs that use x86, which was previously reserved for Intel. So don't think NVIDIA's monopoly will last long; it's very likely that some government will intervene.

They have their own technology, called Infinity Fabric Link. There's no point in getting too hung up on it; on the hardware front, AMD has it all.
x86 is far from open, lmao. That was essentially the entire point of RISC-V: to create an open architecture. x86 is cross-licensed to the point where essentially only Intel and AMD can make it.

Talking about absurd hypotheticals like "if CUDA was open" to prove some alternate reality point that can't be argued against, because it doesn't exist, is certainly one of the approaches of all time.

I agree, this is a pointless discussion.
 
x86 is far from open, lmao. That was essentially the entire point of RISC-V: to create an open architecture. x86 is cross-licensed to the point where essentially only Intel and AMD can make it.

Talking about absurd hypotheticals like "if CUDA was open" to prove some alternate reality point that can't be argued against, because it doesn't exist, is certainly one of the approaches of all time.

I agree, this is a pointless discussion.
Are you really this ignorant? I used the quotation marks for a reason.
 
An Instinct MI355X isn't worse than a B100 on the hardware front; it just doesn't have software optimized for it.
It does; AMD is already selling billions of dollars' worth of Instinct accelerators. These things have already been deployed and are in use.
 
I asked for people to stay on topic. Instead, we have almost a full page of two members arguing about CUDA and software.

Quit it.
 
Am I understanding this correctly? AMD is charging $625 for 16 GB of VRAM? I thought those cost $10 each; what happened?

It's nice to have cheap GPUs with lots of VRAM for the workloads that need it, but this is not cheap. Can't see a pro splurging on this instead of a 5090 at those prices.
 
Am I understanding this correctly? AMD is charging $625 for 16 GB of VRAM? I thought those cost $10 each; what happened?

Nvidia happened. When you ask the most exorbitant amounts of money for higher VRAM capacities, an extra $600 becomes very reasonable.

As usual, people who would have no interest in these things anyway are complaining about pricing, even though AMD is clearly offering something of much better value for specific purposes.
 
9700 you say?!

 
Nvidia happened. When you ask the most exorbitant amounts of money for higher VRAM capacities, an extra $600 becomes very reasonable.
Ah, Nvidia's CEO is the one making the decisions at AMD?
As usual, people who would have no interest in these things anyway are complaining about pricing, even though AMD is clearly offering something of much better value for specific purposes.
Funny, this works both ways. But to address your argument: I'm not complaining, I think the product is "decent". But I'm still wondering why the usual suspects who made 900 posts about Nvidia upcharging us $50 for 8 GB of extra VRAM aren't screaming their lungs out now that AMD is upcharging us $625.
 
But I'm still wondering why the usual suspects who made 900 posts about Nvidia upcharging us $50 for 8 GB of extra VRAM aren't screaming their lungs out now that AMD is upcharging us $625.
Because while AMD is charging $625 for the next VRAM tier, Nvidia is charging double that. I know people on forums like to pretend this stuff doesn't matter, but it does in the real world.
 