
AMD Radeon Pro v540 Research Thread

Going to see if I can jank my Radeon Pro Duo cooler onto this V540. The PCBs are quite different from each other, but let's see if I can get something working! If not, I'm going to list my V540 for sale cheap, sub-$200, if anyone is interested.
 

Attachments

  • 1721686747648.jpeg (1.3 MB)
No further progress from my side, mainly because I have a setup that works for me and I don't see any reasonable way forward. I can share my notes on setting up the Proxmox hypervisor, and if there is any interest, maybe write a short guide, but I can't promise a timeline for that.

One interesting discovery I recall: it's possible to use MorePowerTool to create a working SPPT (soft PowerPlay table). While the card is not visible in the tool, you can export the SPPT to a .reg file, manually edit the adapter number to point to one of the V540 GPUs, and import it directly into the registry. I briefly tested with the clock raised to 1650 MHz and there was a measurable performance uplift. This might interest @LabRat 891 .
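For reference, a rough sketch of what the edited .reg file looks like. The adapter subkey number (0003 here) is just a placeholder; match it to whichever subkey under the display adapter class holds your V540 GPU, and keep the hex payload exactly as MorePowerTool exported it:

Code:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0003]
"PP_PhmSoftPowerPlayTable"=hex:...

A reboot (or a driver restart) is needed for the table to take effect.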
 
A tutorial on how to do it would be awesome, so we can use it as well.
 
I would second any notes on setting it up in Proxmox, as it would be nice for the sake of knowledge alone.
 
Quick and dirty notes on installing the V540 in a Windows VM running on the Proxmox hypervisor.

The host platform shouldn't matter much, but it needs decent IOMMU support; on desktop boards it's hit or miss. For the display adapter, any vendor is fine, since we only need it for the Proxmox installation. If using a discrete card, it's better to connect it to a secondary PCIe x16 slot.

Enable IOMMU in BIOS.

Run the Proxmox installer.

I suggest configuring a static IP for the host during setup, to simplify access.
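If you need to change it after installation, the address lives in /etc/network/interfaces. A minimal sketch with example addresses; adjust the bridge port, IP, and gateway to your network:

Code:
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50/24
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0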

After completing the installation, log in to Proxmox via web browser, click on your host in the left pane, then in the middle pane go to Updates > Repositories, disable the Enterprise repo, and add the No-Subscription repo.
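The same can be done from the shell. A sketch assuming PVE 8 on Debian bookworm; adjust the codename for your version:

Code:
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list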

Open a shell on the host and run:

Code:
apt update
apt dist-upgrade
Code:
nano /etc/default/grub
edit:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"
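Note: on Intel hosts you may also need to enable IOMMU on the kernel command line (on AMD hosts it's on by default once enabled in BIOS), for example:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset intel_iommu=on"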
Code:
nano /etc/modules
add:
Code:
vfio
vfio_iommu_type1
vfio_pci
Code:
nano /etc/modprobe.d/pve-blacklist.conf
add:
Code:
blacklist amdgpu
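Optionally, you can also pin the GPUs to the vfio-pci driver by device ID in the same directory; 1002:7362 matches the Vendor/Device IDs used in the VM config further down:

Code:
nano /etc/modprobe.d/vfio.conf
add:
Code:
options vfio-pci ids=1002:7362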
Code:
update-initramfs -u -k all
reboot
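After the reboot, it's worth verifying that IOMMU is active and that no host driver has claimed the V540 GPUs ("Kernel driver in use" should show vfio-pci or nothing):

Code:
dmesg | grep -i -e dmar -e iommu
lspci -nnk -d 1002:7362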

Create a Windows VM - detailed steps are beyond the scope of this post.
Guest OS - MS Windows
Version - depending on your ISO; a Pro/Enterprise version is strongly recommended due to RDP support.
Graphics - Default
Machine - q35
BIOS - OVMF
Add EFI Disk
Pre-enroll keys yes
Qemu Agent yes
Add TPM yes
Disks - add OS disk to IDE or SATA bus, check Advanced > SSD Emulation yes
CPU - cores as needed, Type - host
Memory - as needed, Ballooning device no
Network - Model Intel E1000
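For reference, the wizard's choices end up in /etc/pve/qemu-server/<vmid>.conf. A rough sketch of what the options above translate to; cores, memory, and the MAC address are example values, and the wizard also adds efidisk0, tpmstate0, and disk lines:

Code:
machine: q35
bios: ovmf
cpu: host
cores: 8
memory: 16384
balloon: 0
agent: 1
net0: e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0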

Finish Windows setup, enable RDP, install the VirtIO drivers:
Shutdown the VM.

For better performance, we can now connect the OS disk to the VirtIO SCSI bus and change the network card to a VirtIO adapter.

Start VM, verify that OS boots and RDP access works. Shutdown VM.

In VM Hardware:
set display to none
add PCI device
Raw Device - look for the Navi12 entry with the lower bus ID
Primary GPU yes
Advanced
ROM-bar yes
PCI-Express yes
Vendor ID: 0x1002
Device ID: 0x7362
Sub-Vendor ID: 0x1002
Sub-Device ID: 0x1a34
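In the VM config, all of the above ends up as a single hostpci line. A sketch with a made-up bus address (yours will differ); note that the ID-override sub-options require a reasonably recent qemu-server version:

Code:
hostpci0: 0000:83:00.0,pcie=1,x-vga=1,rombar=1,vendor-id=0x1002,device-id=0x7362,sub-vendor-id=0x1002,sub-device-id=0x1a34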

Start the VM; from now on, the only access is via RDP. Manually install drivers for the V540 via Device Manager. In my experience, these are the most stable:

If you need a virtual display for streaming, use these drivers:
For good gaming performance, I recommend the Sunshine/Moonlight combo. Other streaming apps should also be fine. GPU hardware encoding works.
 
Much appreciated for your notes. I think I can replicate this another day, but unfortunately I won't be able to for quite some time.
 
Doublepost, but story time.

I decided to take a crack at this again by selling my soul a second time to Mr. Bezos and AWS; more specifically, this time seeing if there are any newer Windows drivers and grabbing the Linux drivers. While I did find newer drivers, my brilliant mind forgot to save the URLs for the Linux ones (I got the URLs for the Windows ones, though). That said, the files themselves are saved, and I did attempt to use them in a bare-metal scenario, to no avail, on my end at least. Proxmox was definitely an option, but my Linux expertise is just not good enough to discern what's wrong. I don't want to post the drivers directly yet, but if anyone wants to take a crack at the Linux drivers, send a DM.

Additionally, I managed to go to the Tustin Micro Center on RTX 50 series launch day (I didn't plan on buying one anyway, and even if I had, I stood zero chance), but to ask JayzTwoCents a question or two about this card and see if he could think of some solutions for putting it in a proper case. I also brought some other unreleased GPUs with me for conversation's sake. Short version: his response made it pretty clear this is not gonna go well. Normally I would not go out of my way to an event like this, especially just to ask a specific person something like this (no offense if you see this, Jay!), but I imagined he'd have encountered some weird stuff like this and might have a case in mind that could accommodate backwards airflow.

In the end, I spent a bit of time exploring case options and test fitting it in the store, and I did come across one that could potentially work, though as an ITX build. The Fractal Design Ridge has a lot of room: more than enough to contain the V540 plus both fan mount options (blower or 40mm, maybe even an 80mm if you angle it slightly, keeping it within 3-slot depth). And if you opt to keep the airflow backwards, as the card prefers, the case's intake perforation is pretty good too, so long as you remove the fabric dust filter. The intakes being on the sides, plus the extra fans, give it tons more breathing room/fresh air, so this might be a pretty good option so long as you don't mind working with small-form-factor constraints. You would probably want two 40mm fans going on this, versus just one, or opt for a high-CFM 80mm fan.

You can obviously run it on a bench or in a normal case, but I wanted something cleaner while respecting the airflow direction, so I tried to build around that. Don't mind the PSU and RAM choices; I had some secondary intent if I did this.
https://pcpartpicker.com/list/FbsmFZ - this uses a 7600X3D for bonus fun points.
https://pcpartpicker.com/list/6k3tKq - this uses an Epyc AM5 variant, for similar reasons.
 
You don't need Linux drivers to run Proxmox; in fact, you have to specifically block the host from interacting with the card.

I'd say the main issue with the stock heatsink is how much noise you're willing to tolerate to keep it cool. You could try swapping the heatsinks between the two GPUs; then you should probably be fine with front-to-back airflow.

I personally only use it when I'm not at home :)
 
Has anyone tried to remove or somehow bypass the FPGA on the card? Not entirely sure what it'd achieve; perhaps it's doing some sort of BIOS protection or validation. I dunno, I kinda wanna see this thing running properly on Windows lol
 
I don't think anyone has tried yet.
Though it looks like the two that @Catch2223 owns are on eBay (https://www.ebay.com/itm/116540029803), so perhaps whoever snags one could try?
 
Has anyone tried the R.ID drivers? I got a couple of V340L cards working by having them identify as MI25 Instinct cards and turning off SBCC memory, since that just locks everything up.

If it's been mentioned already, I apologize; I haven't dug through all 12 pages.
 
Yeah, one person did. Page 11 has a comparison between them and the official drivers. Strangely, performance is significantly lower with the R.ID drivers.
 
Was it run under the Enterprise drivers? There are a few tweaks (turning SBCC memory off was the biggest thing for me) to get them to run. I had all four GPUs pushing Blender and Cinebench R24. The only issue was thermals after my fan went out.
 
It was run under Proxmox and passed through to Windows VMs. The specifics can be found here: https://www.techpowerup.com/forums/threads/amd-radeon-pro-v540-research-thread.308894/post-5260961 and https://www.techpowerup.com/forums/threads/amd-radeon-pro-v540-research-thread.308894/post-5261376
The only drivers we have are the ones linked in the OP, and R.ID.
 
Ah, I didn't do the VM stuff. If I get my hands on another V-series card, I'll do more testing. I'm also hopeful ZLUDA will add support for these cards, as that's a lot of potential.
 
Well, luckily for you, like I mentioned, it seems Catch2223 is selling both of theirs, so your chance is definitely there, and there's no telling when more will show up for sale again. The potential of these GPUs is definitely interesting, assuming both work without issues natively rather than virtualized, but so far there's been no real success getting stable results on that.

My experience with ZLUDA is hit and miss; it definitely works, but whether this card would work with it, I don't know.
 
Yes, both attached to a single VM. It's still the same OS I set up last year; I didn't touch anything hardware- or driver-wise to make this run.
 
Very nice. I've been running two separate VMs instead, but maybe I'll drop that idea. The noise of the fan has been driving me insane though, so I tore the setup down for the time being.
 