
Install TensorFlow GPU on WSL2 (Windows Subsystem for Linux) – Step by Step Tutorial



Learn how to install TensorFlow GPU on WSL2 Ubuntu 24.04 for Windows 11 in this comprehensive tutorial! This video covers the …


31 Comments

  1. A very comprehensive and thorough guide. It definitely shows how much research you have done. Thank you for this amazing video! Also, TensorFlow is dropping support for TensorRT in the next update, so if you are installing a newer version, don't bother installing TensorRT.

    To the people who are facing issues when installing a different version of CUDA or cuDNN: in every command you run, make sure the version number matches the version you actually installed.
    For example, I installed CUDA 12.3, so in commands like:
    sudo cp include/cudnn*.h /usr/local/cuda-12.1/include
    I had to change it to:
    sudo cp include/cudnn*.h /usr/local/cuda-12.3/include

    Thoroughly check each command for the version number, or you will face issues.

    Just follow the steps mentioned in the video and you should be fine.
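A pattern that avoids the version-mismatch mistake described in the comment above is to put the installed version in a shell variable once and reuse it everywhere. A minimal sketch; `12.3` is just an example, substitute whatever version `ls /usr/local` shows on your machine:

```shell
# Assumption: 12.3 is only an example version; set CUDA_VER to whatever
# `ls /usr/local` actually shows on your system.
CUDA_VER=12.3
CUDA_HOME=/usr/local/cuda-${CUDA_VER}

# Reuse the variable in every version-sensitive command so paths cannot drift:
#   sudo cp include/cudnn*.h ${CUDA_HOME}/include
#   sudo cp lib/libcudnn*    ${CUDA_HOME}/lib64
echo "${CUDA_HOME}/include"
```

With this pattern, changing CUDA versions later means editing one line instead of hunting through every command.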

  2. Sorry, but I followed each and every one of your steps and it is still showing this error… can you tell me what I should do? I have checked everything, every path, and everything is correct, but it still shows this error:

    2024-09-14 01:32:27.891355: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered

    2024-09-14 01:32:27.905640: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered

    2024-09-14 01:32:27.911770: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered

  3. Thank you very much for this tutorial. I had spent two weeks trying to get it working.

    Your PATH for TensorRT did not work for me:
    export PATH=/usr/local/cuda-12.1/bin:/usr/local/TensorRT-8.6.1/bin:$PATH

    Instead, I used:
    export PATH=/usr/local/cuda-12.1/bin:/usr/local/TensorRT-8.6.1.6/bin:$PATH

  4. Awesome video, thanks a lot. Until now I was using Ubuntu and Windows separately via dual boot and had TensorFlow GPU installed in both places. Each has its benefits: Windows let me load large models because somehow my GPU was able to access my system RAM too, while Ubuntu supported many PyTorch features that don't work on Windows. Now I am getting a new laptop and wanted to try WSL. From your experience, is your GPU able to access system memory?

  5. Man, I don't know how it's possible, but it's working. You made such a great tutorial for a noob like me who started learning Deep Learning 3 days ago. And it worked on the first try! Great job.

  6. If I install the latest version of TensorFlow without TensorRT, would I later be able to use the downgraded versions of CUDA and TensorFlow for TensorRT? Also, the latest version of TensorFlow uses CUDA 12.3 while PyTorch uses 12.1 or 12.4, so can I have multiple CUDA versions? I'm unfamiliar with these technologies because I'm just starting out.

  7. Hi, is this error normal? "E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered"
    E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
    E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
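The three "Unable to register ... factory" messages quoted above are widely reported with recent TensorFlow builds and are generally harmless as long as the GPU is still detected. If you only want cleaner logs, TensorFlow's `TF_CPP_MIN_LOG_LEVEL` environment variable can silence them. A sketch; note that level 3 also hides genuine C++-side error messages, so use it only once you have confirmed the GPU works:

```shell
# Assumption: your TensorFlow build honors TF_CPP_MIN_LOG_LEVEL.
# 0 = all logs, 1 = filter INFO, 2 = filter INFO+WARNING, 3 = filter all.
export TF_CPP_MIN_LOG_LEVEL=3
echo "TF_CPP_MIN_LOG_LEVEL=$TF_CPP_MIN_LOG_LEVEL"
```

Add the `export` line to `~/.bashrc` if you want it applied in every new shell.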

  8. Thanks for the great content!
    A question, though: your GitHub page also includes the following instructions to remove and re-add some symbolic links, which were not covered in the video. Are they necessary?
    ```
    sudo rm /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn*.so.8
    sudo ln -s /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.x.x /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
    ```
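One way to answer the symlink question above before running those commands: ask the dynamic loader what it can already resolve. If `libcudnn.so.8` shows up, the extra `rm`/`ln -s` steps are probably unnecessary on your machine. A sketch, assuming `ldconfig` is available:

```shell
# List the cuDNN libraries the dynamic loader can already resolve.
# If nothing is found (or ldconfig is missing), print a fallback message.
ldconfig -p 2>/dev/null | grep -i cudnn || echo "cudnn not in loader cache"
```

If the library is missing from the cache even though the files exist, running `sudo ldconfig` after creating the symlinks refreshes it.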

  9. Hello, I followed all of your procedures, but I'm installing CUDA 11.8 and cuDNN 8.6, because based on TensorFlow's website they should be compatible with my RTX 3060.
    However, I hit an issue initializing cuDNN. After running "./test_cudnn", the error below pops up. When I use "ls" I can see the file, but it couldn't be opened. Do you know how to solve this?

    ./test_cudnn: error while loading shared libraries: libcudnn.so.8: cannot open shared object file: No such file or directory

    thank you sir
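For the `libcudnn.so.8: cannot open shared object file` error above: the file existing on disk is not enough; the runtime loader also has to know which directories to search. A minimal sketch, assuming cuDNN was copied into the CUDA 11.8 tree as in the video:

```shell
# Assumption: the cuDNN libraries live under /usr/local/cuda-11.8/lib64.
# Prepend that directory to LD_LIBRARY_PATH, preserving any existing value.
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"
```

Running `sudo ldconfig` after copying the libraries is an alternative that makes the change system-wide and persistent; the `export` approach only affects the current shell unless added to `~/.bashrc`.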

  10. I was getting "Error 2: out of memory" from torch.cuda.is_available(). I removed the 3rd GPU and it worked with 2 GPUs. Probably a limitation of Windows/WSL, since I've noticed that loading large models on Windows can't leverage all 3 of my GPUs with the exl2 model loader. I've seen 4+ GPU setups working in Linux environments.
    Curious if you have any idea why cuDNN doesn't work on WSL with multiple GPUs?
