
Tech Source Releases Condor 4000 3U VPX Graphics Card for GPGPU Applications

btarunr

Editor & Senior Moderator
Tech Source, Inc., an independent supplier of high-performance embedded video, graphics, and high-end computing solutions, has released the Condor 4000 3U VPX form factor graphics/video card. Designed for compute-intensive General Purpose Graphics Processing Unit (GPGPU) applications deployed in avionics and military technology, the Condor 4000 3U VPX graphics card delivers up to 768 GFLOPS of peak single-precision (48 GFLOPS double-precision) floating-point performance from the 640 shaders of the AMD Radeon E8860 GPU at its core.
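As a sanity check on the quoted numbers, GPU peak throughput is roughly shaders × clock × 2, since each shader can retire one fused multiply-add (two floating-point operations) per cycle. The 768 GFLOPS figure thus implies an engine clock of about 600 MHz; that clock is an inference from the arithmetic, not a spec stated in the article.

```python
# Back-of-the-envelope check of the claimed peak throughput.
shaders = 640
clock_mhz = 600          # assumed/implied engine clock, not from the article
flops_per_cycle = 2      # one FMA counts as 2 floating-point ops

peak_sp_gflops = shaders * clock_mhz * flops_per_cycle / 1000
peak_dp_gflops = peak_sp_gflops / 16   # a 1/16 SP:DP rate matches 48 GFLOPS

print(peak_sp_gflops)  # 768.0
print(peak_dp_gflops)  # 48.0
```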

The Condor 4000 3U VPX card targets high-end graphics, parallel processing, and situational-awareness image and sensor processing applications such as radar, sonar, video streaming, and unmanned systems. The new card operates at higher speeds than an equivalent XMC form factor card: it occupies a dedicated slot, which allows for better cooling and lets it run at its full 45 W power budget.



Selwyn L. Henriques, president and CEO of Tech Source Inc., commented, "Our GPGPU customers want every ounce of performance they can get. So this 3U VPX graphics card is an attractive option as it delivers 60 percent better performance than the previous generation."

The Condor 4000 3U VPX board is fully conduction-cooled and has six digital video outputs (2x DVI and 4x DisplayPort) available from the rear VPX P2 connector on the card. It also features 2 GB of GDDR5 memory and supports current API versions, including OpenGL 4.2, DirectX 11.1, OpenCL 1.2, and DirectCompute 11 for GPGPU computing.

The Condor 4000 3U VPX card is available with Linux and Windows drivers by default; other real-time operating systems such as VxWorks may be supported. Tech Source offers 15 years of product support and a board-customization service for those with specialized requirements.

View at TechPowerUp Main Site
 
Was anyone producing things like this in the past, or does this show something positive for AMD's GCN in the compute space?
 
Generally, for compute-intensive tasks in portable applications (such as drones/robotics), FPGAs are preferred over CPUs/GPUs due to their lower latency, lower power usage, and ease of replacement.
 
This is the tech that has been missing from many unmanned systems: taking multiple inputs such as inertial, GPS, sonar (material identification/density), radar (location and environment mapping), and infrared, and combining them in real time for programmed logic and decision making. With that wider parallel processing, autonomous machines can determine their exact location, build a map, and decide where to go and where to avoid.

Let's say we wanted to map the oceans: how do you determine where you are when GPS doesn't work underwater? You get a fixed location, use a combination of sensors to pick a few reference points, and triangulate a position (the same way a camera angle in games determines distance to an object). The ability to use a programmable board with varying filters, chosen by comparing expected feedback vs. actual feedback, is new. Before, we had fixed hardware filters built in; they may have worked in perfectly clear water, but add in debris and they became inaccurate, or thermal convection caused scintillation and distortion. Being able to run a Kalman filter on multiple sensor systems and deterministically choose the most accurate one is new, and a huge improvement.
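The select-the-best-sensor idea above can be sketched in a few lines: run a simple one-dimensional Kalman filter per sensor and, at each step, trust the filter whose latest measurement was least surprising relative to its predicted spread. Everything here is a toy: the sensor names, noise variances, and measurement values are all hypothetical, not from any real system.

```python
import math

class Kalman1D:
    """Minimal 1-D Kalman filter with a constant-position model."""
    def __init__(self, q, r, x0=0.0, p0=1.0):
        self.q, self.r = q, r    # process / measurement noise variances
        self.x, self.p = x0, p0  # state estimate and its variance

    def update(self, z):
        self.p += self.q                        # predict: uncertainty grows
        k = self.p / (self.p + self.r)          # Kalman gain
        innovation = z - self.x
        self.x += k * innovation                # correct with measurement z
        self.p *= (1.0 - k)
        # normalized innovation: how surprising this measurement was
        return abs(innovation) / math.sqrt(self.p + self.r)

# two hypothetical depth sensors: sonar (noisy) and pressure (cleaner)
sonar = Kalman1D(q=0.01, r=4.0)
pressure = Kalman1D(q=0.01, r=0.5)

for z_sonar, z_press in [(10.8, 10.1), (9.2, 10.0), (10.5, 9.9)]:
    s1 = sonar.update(z_sonar)
    s2 = pressure.update(z_press)
    best = sonar if s1 < s2 else pressure       # pick the least-surprised filter
    print(f"best estimate {best.x:.2f} (variance {best.p:.3f})")
```

A real deployment would fuse the sensors rather than pick one, but the sketch shows why a software filter bank is easy to arbitrate deterministically.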
 

As far as GPUs go, this one is pretty tame on the transistor count. Will it really offer computational advantages over a modern FPGA? The adaptive filtering does seem like a huge advantage, though; that is something software has over an FPGA, which isn't easily reconfigured on the fly. I'm just curious about the overall computational throughput.
 
The whole point of an FPGA is the ability to reprogram it via a GUI and software.
 
I know that, but reprogramming gates takes way longer than just switching parameters in software. That's why I was curious whether there was an advantage to running a GPU over an FPGA in this case. I was under the impression that a modern FPGA has enough gates to accommodate multiple Kalman filters, or to adjust them on the fly. I could see a GPU being advantageous if that's not the case: if the FPGA needed to be reprogrammed while the robot is deployed in order to switch parameters, that's downtime during which you don't have localization or sensors.

I was just curious if in this specific application they were running into that problem with hardware based Kalman filters. If that makes sense.
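To make the contrast concrete: in software, "retuning" an adaptive filter between updates is a single assignment with no downtime, whereas an FPGA implementation might need its fabric reconfigured. A toy sketch of that point, with entirely hypothetical noise values:

```python
class AdaptiveFilter:
    """Tiny 1-D filter whose measurement-noise variance can change at runtime."""
    def __init__(self, r):
        self.r = r               # measurement noise variance
        self.x, self.p = 0.0, 1.0

    def update(self, z):
        self.p += 0.01
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= 1.0 - k

f = AdaptiveFilter(r=0.5)        # clear-water tuning (assumed value)
f.update(10.0)
f.r = 4.0                        # debris detected: distrust the sensor more,
f.update(12.5)                   # effective immediately on the next update
```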
 
Straight over my head, and not afraid to admit it. :laugh:
The only experience I have is with mining using FPGAs, and the software was pre-written but could be easily modified.
 
No worries, I'm an engineer by profession so I get a little carried away. I was hoping to get a dialogue going about the trade-offs; it's always good to know what's available out there so you can make informed decisions should you have to build a similar system, and talking to people with experience is the best way to learn.
 