Windows Subsystem for Linux with CUDA support + TensorFlow
Let me save you many hours of pain and just give you the rundown.
* You MUST be on Windows 10 21H2 or above. Windows Update does not currently offer 21H2 (yeah, it's because they are pushing Windows 11), so you need to download it as an ISO (see the quick build check after this list).
* Install the latest "game ready driver" for your card (you don't need a special driver despite what the nvidia documentation says; I got mine here: https://www.nvidia.com/Download/index.aspx).
* In general you should ignore the nvidia documentation as they make it almost impossible to figure out what combination of cuda & cudnn versions you need.
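A quick way to check which build you are on (my addition; 21H2 corresponds to build 19044). In CMD:
ver
You should see 10.0.19044 or higher; winver from the Run dialog shows the same information in a dialog box.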
Steps to install:
1. Open CMD as administrator.
2. Run: wsl --install -d ubuntu
3. Open the Ubuntu WSL terminal and enter the following commands:
# Install CUDA
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubun...
sudo sh -c 'echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubun... /" > /etc/apt/sources.list.d/cuda.list'
sudo apt-get update
sudo apt-get --yes install cuda-toolkit-11-2
sudo sh -c 'echo "deb http://developer.download.nvidia.com/compute/machine-learnin... /" > /etc/apt/sources.list.d/nvidia-machine-learning.list'
sudo apt-get update
sudo apt install cuda-11-2 libcudnn8 libcudnn8-dev
sudo apt install libnvinfer8 libnvinfer-dev libnvinfer-plugin8
# Verify CUDA installation
cd /usr/local/cuda-11.2/samples/4_Finance/BlackScholes
sudo make
./BlackScholes
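# Optional extra checks (my own additions, not part of the original steps):
# nvidia-smi inside WSL2 should be provided by the Windows driver via /usr/lib/wsl/lib,
# and nvcc confirms the toolkit landed where expected
nvidia-smi
/usr/local/cuda-11.2/bin/nvcc --version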
# Install TensorFlow
sudo apt-get update && sudo apt-get install python3-pip python3-dev
pip3 install tensorflow-gpu
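Once that finishes, a quick sanity check that TensorFlow actually sees the GPU (my addition, assuming the pip install above succeeded):
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
If the GPU is detected you should see something like [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]; an empty list means TensorFlow fell back to CPU.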
If you are reading this in the future, I have some advice:
* Look at this page to get at least an idea of what versions of cuda and cudnn will work together: https://www.tensorflow.org/install/source#gpu
* Use apt-cache search <THING> to see which packages are actually available, which at least lets you guess what you need to install (example below). Good luck.
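For example, to see which cuDNN packages the repos offer and which versions are installable (these are plain apt commands, nothing specific to this setup):
apt-cache search cudnn
apt-cache policy libcudnn8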
If you follow the MS docs, they do not mention anything about installing CUDA inside the Linux VM, so you do not end up with a functional CUDA setup; and if you follow Nvidia's documentation, they push you down the Docker container route. I personally prefer to set up TensorFlow using Anaconda rather than pip.
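For the Anaconda route, a rough sketch of what I mean (package names and availability depend on your channel; the conda tensorflow-gpu package pulls in its own cudatoolkit and cudnn):
conda create -n tf tensorflow-gpu
conda activate tf
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"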
A slight correction to the OP: the special driver NVIDIA mentions is for CUDA only (no display driver included). Nvidia does not "support" installing their full display driver in WSL2, so YMMV in terms of reliability and performance.
I personally gave up on WSL2, not because of WSL2 itself, but because it is so hard to use remotely (I do not have frequent local access to my GPU workstation). I tried various instructions, and WSL2 was fine when used locally (via Remote Desktop), but the networking scripts I found were unreliable, the Windows OpenSSH server kept crashing when I tried that instead, and bash.exe is too limited to be useful. It just wasn't worth the hassle and effort when I can install native Linux instead.