First of all, it’s not about
SLI, which you may have heard about in the good old days of computer gaming. SLI is a technology that links 2–4 GPUs so they can share the work of rendering an image. It’s only about rendering graphics.
In
CUDA (which is about computation, not graphics) you can directly access any available GPU in your system, so just add several GPUs and use any of them. You can write your program to do whatever you want, loading data onto any GPU and running computations on the GPU of your choice.
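To make this concrete, here is a minimal sketch of enumerating and selecting GPUs from Python. It assumes the third-party PyCUDA package, which exposes the CUDA driver API; the helper name `list_gpus` is my own, and the sketch degrades gracefully to an empty list when no CUDA setup is present.

```python
def list_gpus():
    """Return (index, name) for every CUDA device, or [] if PyCUDA/CUDA is absent."""
    try:
        import pycuda.driver as cuda
    except ImportError:
        return []  # PyCUDA not installed: nothing to enumerate
    cuda.init()
    return [(i, cuda.Device(i).name()) for i in range(cuda.Device.count())]

# Running work on a specific GPU (say, GPU 1) would then look roughly like:
#   ctx = cuda.Device(1).make_context()   # make GPU 1 the current device
#   ...allocate memory, launch kernels...
#   ctx.pop()                             # release the context when done
```

The key point is that each device gets its own index, and your code decides which index to use.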
Usually deep learning engineers do not write CUDA code themselves; they just use a framework they like (TensorFlow, PyTorch, Caffe, …). In any of these frameworks you can tell the system which GPU to use.
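For example, in PyTorch (one of the frameworks mentioned above) picking a GPU is a one-liner; the `pick_device` helper below is my own sketch and falls back to the CPU when no GPU is available.

```python
import torch  # assumes PyTorch is installed

def pick_device(index=0):
    """Use GPU `index` if CUDA is available, otherwise fall back to the CPU."""
    if torch.cuda.is_available() and index < torch.cuda.device_count():
        return torch.device(f"cuda:{index}")
    return torch.device("cpu")

# Tensors (and, the same way, whole models) are moved onto the chosen device.
device = pick_device(0)
x = torch.ones(2, 2, device=device)
```

With several GPUs installed, calling `pick_device(1)`, `pick_device(2)`, and so on lets different parts of your workload run on different cards.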