
RTX 2080 and CUDA-Accelerated Nanopore DNA Sequencing

Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
[Attached image: Oxford Nanopore sequencer]


I gotta say, in terms of supporting scientific research, Nvidia is second to none. Waayyy better than all the other hardware tech giants combined!

So we have this amazing nanopore-based DNA sequencing machine, mentioned previously in this thread:

https://www.techpowerup.com/forums/threads/we-need-moar-cores-for-science.240729/

One major problem was that the speed of basecalling during real-time sequencing could rarely keep up with the generation of raw data, i.e. going from the raw current-signal squiggle plot to A, T, C, G base calls. My 10c/20t 6950X has been constantly pegged to the max at 4.2 GHz during sequencing runs. The 1950X and 2990WX builds I helped put together a while ago did ease the problem, but most of the time the CPU was still at 100%, which is CRAZY for a 32-core/64-thread 2990WX running 4.1 GHz on all cores. That prevented anything else from being done on the workstation while sequencing was running.
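To make the squiggle-to-bases part concrete, here is a minimal sketch of what the raw data actually looks like on disk. This is my own illustration, assuming the older single-read fast5 layout; the internal path and the filename are placeholders, so adjust to whatever h5ls shows you:

```python
import h5py
import numpy as np

# Illustrative sketch, not ONT's code: peek at the raw current trace
# inside a single-read .fast5 file, which is just an HDF5 container.
# The "Raw/Reads/Read_####/Signal" layout matches older single-read
# fast5 files and may differ between MinKNOW versions.
with h5py.File("example_read.fast5", "r") as f:
    reads = f["Raw/Reads"]
    read_name = next(iter(reads))              # e.g. "Read_1234"
    signal = np.asarray(reads[read_name]["Signal"])

# The "squiggle" is just this integer current trace; basecalling turns
# thousands of these samples into A/T/C/G calls.
print(f"{read_name}: {signal.size} samples, first ten: {signal[:10]}")
```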

A lot of the work in basecalling involves machine-learning-based deconvolution of the current signals, so naturally some researchers started thinking about using GPUs to accelerate that process. The developers at Oxford Nanopore reached out to the GPU makers on the market for help. One company responded with a bunch of nothing, while the other assigned an entire programmer team to helping out. Yep, Nvidia's CUDA development team IMMEDIATELY stepped in to get CUDA acceleration working for Oxford Nanopore.

So in short, within a few months we got CUDA-accelerated basecalling working! We tried it with an RTX 2080 and holy shit IT IS FAST!

https://medium.com/@kepler_00/nanop...nvidia-docker-v2-with-a-rtx-2080-d875945e5c8d


Needless to say I will be trying this with my 2080 Ti very soon, and if things go well some lab may be getting a new RTX Titan to aid in CUDA acceleration. Gotta say Nvidia's CUDA support is amazing, and they are truly committed to helping out scientific research.
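For anyone wanting to reproduce this, the gist of a GPU run is just pointing the basecaller at a CUDA device. A rough sketch wrapped in Python (directory names are placeholders, and the flags follow Guppy's CLI but vary between versions, so check guppy_basecaller --help rather than trusting these verbatim):

```python
import subprocess

# Hedged sketch of a GPU basecalling run, not the exact commands from
# the article above. Directories are placeholders; flags follow
# Guppy's CLI but differ between versions.
subprocess.run([
    "guppy_basecaller",
    "-i", "fast5_input_dir",      # directory of raw .fast5 signal files
    "-s", "fastq_output_dir",     # where basecalled FASTQ files land
    "--flowcell", "FLO-MIN106",   # flow cell + kit select the model
    "--kit", "SQK-LSK109",
    "-x", "cuda:0",               # run the neural network on GPU 0
], check=True)
```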

https://news.developer.nvidia.com/oxford-nanopore-selects-nvidia-agx-for-personal-dna-rna-sequencer/

https://blogs.nvidia.com/blog/2018/10/10/oxford-nanopore-dna-sequencer-nvidia-agx/


I believe there is work currently going on to utilize the Tensor cores in the RTX series for even better performance; we shall see how that plays out. I am very excited for the new era of DNA sequencing: $1000~$1500 for a full 30X-coverage human genome with all DNA methylation mapped out. This was unthinkable even 5 years ago. Now it can be performed with a handheld sequencing machine and a PC equipped with a beefy gaming GPU. Unreal.
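To give a flavor of what "using the Tensor cores" means in practice: the convolution and matrix ops a neural basecaller is built from can run in half precision, which is exactly what Tensor cores accelerate. A toy PyTorch sketch (my own illustration, nothing to do with ONT's actual model; requires a CUDA GPU):

```python
import torch

# Toy sketch: run a small 1D conv (the kind of op a neural basecaller
# is built from) under mixed precision so Tensor cores can kick in.
assert torch.cuda.is_available(), "needs a CUDA GPU"
signal = torch.randn(1, 1, 4000, device="cuda")   # fake squiggle chunk
conv = torch.nn.Conv1d(1, 64, kernel_size=19, padding=9).cuda()

with torch.autocast(device_type="cuda", dtype=torch.float16):
    features = conv(signal)   # conv executes in FP16 on Tensor cores

print(features.shape, features.dtype)   # float16 output from autocast
```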




Oh, and if you are near Nvidia HQ in California, here are some good workshops (well, if you are a computer nerd who is also a genetics nerd):
https://londoncallingconf.co.uk/events/accelerating-bioinformatics-workshop-nvidia

 
Joined
Sep 27, 2014
Messages
550 (0.16/day)
Have you tried the BOINC community for distributed computing?
https://boinc.berkeley.edu/

Also, most apps can be programmed to use GPUs with the OpenCL language. OpenCL is open source; CUDA is proprietary.
Nvidia cards can do both OpenCL and CUDA, but of course Nvidia will steer you towards CUDA, to exclude any comparison with GPUs from other manufacturers. CUDA programming pushes you into a niche, but since you got that help for free... more power to them.
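For what it's worth, the "both APIs" point is easy to verify yourself. A quick check with the pyopencl package (assuming it and the vendor OpenCL drivers are installed):

```python
import pyopencl as cl

# List every OpenCL platform and device; on a machine with Nvidia
# drivers installed, the GeForce card shows up here too, even though
# Nvidia would rather you wrote CUDA.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(f"{platform.name} -> {device.name}")
```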

There is a translating compiler from CUDA to OpenCL, but I don't know how efficient it is:
https://github.com/hughperkins/coriander
 
Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
Have you tried the BOINC community for distributed computing?
https://boinc.berkeley.edu/

Also, most apps can be programmed to use GPUs with the OpenCL language. OpenCL is open source; CUDA is proprietary.
Nvidia cards can do both OpenCL and CUDA, but of course Nvidia will steer you towards CUDA, to exclude any comparison with GPUs from other manufacturers. CUDA programming pushes you into a niche, but since you got that help for free... more power to them.

There is a translating compiler from CUDA to OpenCL, but I don't know how efficient it is:
https://github.com/hughperkins/coriander

Nope. Not touching OpenCL with a 10 ft pole. Wasted a good chunk of my life on it.
 
Joined
Sep 27, 2014
Messages
550 (0.16/day)
I think this is a cool project. I don't know how long it takes to decode a genome with one GPU - 100 minutes? Is this a single-precision-heavy task? Because I see you are talking about cards with gimped double-precision (1/32 rate). Exceptions would be the Titan V, Quadro GP100, or Quadro GV100 (1/2 rate).
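For context on why that FP64 rate would matter, you can see the gap yourself by timing the same matmul in float32 and float64. A hedged sketch with PyTorch (assumes a CUDA GPU; the numbers vary wildly by card):

```python
import time
import torch

# Rough illustration: on GeForce cards with 1/32-rate FP64 units the
# float64 run is dramatically slower; on a Titan V it is much closer.
def bench(dtype, n=4096, iters=10):
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        a @ a
    torch.cuda.synchronize()
    return time.perf_counter() - t0

print("fp32:", round(bench(torch.float32), 3), "s")
print("fp64:", round(bench(torch.float64), 3), "s")
```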
 
Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
I think this is a cool project. I don't know how long it takes to decode a genome with one GPU - 100 minutes? Is this a single-precision-heavy task? Because I see you are talking about cards with gimped double-precision (1/32 rate). Exceptions would be the Titan V, Quadro GP100, or Quadro GV100 (1/2 rate).

Not decoding a genome, but decoding raw current signals (stored in HDF5 format) into FASTQ, the common format for DNA reads. The pore reads current from about 6 base pairs at a time, and different DNA compositions, as well as modifications on the DNA, make basecalling more difficult.
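A toy illustration of the "6 base pairs at a time" point, with completely made-up numbers (the real pore model comes from ONT's training data): the measured current is roughly a function of whichever 6-mer sits in the pore, and basecalling inverts that noisy mapping.

```python
import itertools
import numpy as np

# Toy model: assign a fake mean current level to every possible 6-mer,
# then render the "squiggle" a short sequence would produce as the
# pore slides along it one base at a time.
rng = np.random.default_rng(0)
kmers = ["".join(p) for p in itertools.product("ACGT", repeat=6)]
level = {k: rng.normal(100.0, 15.0) for k in kmers}  # fake pA levels

seq = "GATTACAGATTACA"
squiggle = [level[seq[i:i + 6]] for i in range(len(seq) - 5)]
print([round(s, 1) for s in squiggle])

# Real basecalling is the inverse problem: given thousands of noisy
# current samples, recover the base sequence (and modifications).
```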
 