Install TensorFlow GPU on WSL2 (Windows Subsystem for Linux) – Step by Step Tutorial
Learn how to install TensorFlow GPU on WSL2 Ubuntu 24.04 for Windows 11 in this comprehensive tutorial! This video covers the …
Great tutorial! Works smoothly with tf 12.7 (without tfrt)
Works perfectly as of 19/09/2024, amazing video!
Very detailed video, thumbs up for that. One suggestion though: please zoom in on the specific point for better visibility.
A very comprehensive and thorough guide. It definitely shows how much research you have done. Thank you for this amazing video! Also, TensorFlow is dropping support for TensorRT in the next update, so if you are installing a newer version, don't bother installing TensorRT.
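For newer TensorFlow releases (roughly 2.14 and later on Linux/WSL2), the official docs describe a pip-based install that bundles matching CUDA and cuDNN wheels, so skipping TensorRT is straightforward. A minimal sketch, assuming a fresh virtual environment (the env name is an example, not from the video):
“`
# Minimal sketch (assumed venv name; TensorRT deliberately skipped):
# install TensorFlow with the bundled CUDA/cuDNN pip wheels inside WSL2.
python3 -m venv ~/tf-env
source ~/tf-env/bin/activate
pip install --upgrade pip
pip install "tensorflow[and-cuda]"   # pulls matching CUDA/cuDNN wheels; no TensorRT

# Quick check that TensorFlow can see the GPU.
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
“`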
To the people who are facing issues when installing a different version of CUDA or cuDNN: in every command you run, make sure the version number matches the version you actually installed.
For example, I installed CUDA 12.3, so in commands like:
sudo cp include/cudnn*.h /usr/local/cuda-12.1/include
I had to change it to:
sudo cp include/cudnn*.h /usr/local/cuda-12.3/include
Thoroughly check each command for the version number or you will face issues.
Just follow the steps mentioned in the video and you should be fine.
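One way to make this less error-prone is to set the version once in a shell variable and reuse it. The sketch below parameterizes the cuDNN copy command from the comment above; the lib copy and chmod lines follow the usual cuDNN tar-file install steps and are assumptions, so adjust the paths to wherever you extracted the cuDNN archive:
“`
# Sketch: keep the CUDA version in one place so every command matches your install.
CUDA_VERSION=12.3                           # change to the version you installed
CUDA_HOME=/usr/local/cuda-${CUDA_VERSION}

# Same cuDNN copy step as above, plus the usual lib/permission steps (assumed),
# run from the directory where the cuDNN archive was extracted.
sudo cp include/cudnn*.h ${CUDA_HOME}/include
sudo cp -P lib/libcudnn* ${CUDA_HOME}/lib64
sudo chmod a+r ${CUDA_HOME}/include/cudnn*.h ${CUDA_HOME}/lib64/libcudnn*
“`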
Sorry, but I followed each and every one of your steps and it is still showing this error… can you tell me what I should do? I have checked everything, every path, and everything is correct, but it still shows this error:
2024-09-14 01:32:27.891355: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-14 01:32:27.905640: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-14 01:32:27.911770: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
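Those three "Unable to register ... factory" lines are logged by TensorFlow at import time and, in many reported cases, are warnings rather than fatal errors. A quick way to check whether the GPU is actually usable despite them (a sketch, not a fix):
“`
# Sketch: verify GPU visibility even when the cuFFT/cuDNN/cuBLAS "factory" messages appear.
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
# A non-empty list such as [PhysicalDevice(name='/physical_device:GPU:0', ...)] means
# TensorFlow can see the GPU, and the registration messages can usually be ignored.
“`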
Bro you're GOATED. The only tutorial that worked for me after searching everywhere. Subscribed and Liked
Thank you very much for this tutorial. I had spent two weeks trying to get this working.
Your PATH export for TensorRT did not work for me:
export PATH=/usr/local/cuda-12.1/bin:/usr/local/TensorRT-8.6.1/bin:$PATH
Instead I used:
export PATH=/usr/local/cuda-12.1/bin:/usr/local/TensorRT-8.6.1.6/bin:$PATH
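The directory name depends on the exact TensorRT build that was extracted, so it can help to confirm it before editing PATH. A small sketch, with paths assumed from the comment above:
“`
# Sketch: confirm the actual TensorRT install directory before exporting PATH.
ls -d /usr/local/TensorRT-*          # e.g. /usr/local/TensorRT-8.6.1.6

# Then export PATH using the directory name that actually exists on your system.
export PATH=/usr/local/cuda-12.1/bin:/usr/local/TensorRT-8.6.1.6/bin:$PATH
“`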
Wow, you have really helped me set up my machine learning environment on Windows. Thank you!!
This helped me a lot! Thank you so much! Now I can use all of my GPU!
Thanks for the great tutorial! I have tried a few but this is the only one I got to work!! My MSc project thanks you!!!
Somehow I still get the cuDNN error :)
but the TensorRT one has been solved, so thank you.
Thank you a lot for this!
/sbin/ldconfig.real: /usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link
I'm facing the above error after running `sudo nano /etc/ld.so.conf` and adding the path in that file.
Can you please help me out?
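That ldconfig message about /usr/lib/wsl/lib/libcuda.so.1 is a frequently reported WSL2 quirk and is generally just a warning rather than a failure. A sketch of how to inspect what is actually there (this directory is read-only in many WSL setups, so this is only a check, not a fix):
“`
# Sketch: inspect the WSL-provided CUDA driver libraries that ldconfig complains about.
ls -l /usr/lib/wsl/lib/libcuda*
# If libcuda.so.1 is a regular file rather than a symlink, ldconfig prints the
# "is not a symbolic link" warning; this usually does not prevent CUDA from working.
“`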
Thank you so much for helping us in a detailed manner, thank you so much! ❤
I wish I had 1M accounts so I could give your video 1M likes.
Awesome video, thanks a lot. Until now I was using Ubuntu and Windows separately via dual boot and had TF GPU installed in both places. Both have their benefits. Windows allowed me to open large models because somehow, on Windows, my GPU was able to access my system RAM too. But Ubuntu supported many features of Torch that don't work on Windows. Now I am getting a new laptop and wanted to try WSL. I want to know from your experience: is your GPU able to access system memory?
Thanks, Man. This really helps.
Man, I don't know how it's possible, but it's working. You made such a great tutorial for a noob like me who started learning Deep Learning 3 days ago. And it worked on the first try! Great job.
I pray that a seat in heaven is reserved for you, bro. You saved me a ton.
If I install the latest version of TensorFlow without TensorRT, would I later be able to use the downgraded versions of CUDA and TensorFlow for TensorRT? Also, the latest version of TensorFlow uses CUDA 12.3 while PyTorch uses 12.1 or 12.4, so can I have multiple CUDA versions? I'm unfamiliar with these technologies because I'm just starting out.
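On the multiple-CUDA-versions question: separate toolkits can generally coexist under /usr/local/cuda-X.Y, and which one a given shell uses is controlled by PATH and LD_LIBRARY_PATH. A sketch, where the version numbers are examples rather than recommendations:
“`
# Sketch: select one of several installed CUDA toolkits for the current shell session.
# /usr/local/cuda-12.1 and /usr/local/cuda-12.3 are assumed example installs.
export CUDA_HOME=/usr/local/cuda-12.1
export PATH=${CUDA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}

nvcc --version   # should report the toolkit selected above
“`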
Hi, is this error normal?
E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
Brilliant, thank you! Been trying to do this for at least 6 months now and finally got it to work!!!
Thanks for the great content!!!
A question though: your GitHub page also includes the following instructions to remove and add some symbolic links, which were not talked about during the video. Are they necessary?
“`
sudo rm /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn*.so.8
sudo ln -s /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.x.x /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
“`
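Those symlink commands usually matter when the copied cuDNN libraries only exist under their full version names (for example libcudnn_adv_infer.so.8.x.x) without the .so.8 names the dynamic loader looks for. A sketch of how to check whether they are needed before running them (the 12.1 path follows the block above):
“`
# Sketch: check whether the .so.8 symlinks already exist and where they point.
ls -l /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn*.so.8*

# After changing libraries or links, refresh the loader cache.
sudo ldconfig
“`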
At 17:11 I do get the "nvcc: command not found" error, but I could not solve it even with your tutorial.
Thank you! This worked!
Hello, I followed all of your procedures, but I'm installing CUDA 11.8 and cuDNN 8.6, because based on TensorFlow's website they should be compatible with my RTX 3060.
However, I ran into an issue initializing cuDNN. After running "./test_cudnn", the error below pops up. When I use "ls" I can see the file, but it couldn't be opened. Do you know how to solve this?
./test_cudnn: error while loading shared libraries: libcudnn.so.8: cannot open shared object file: No such file or directory
Thank you, sir.
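That "cannot open shared object file" message normally means the dynamic loader cannot find libcudnn.so.8 at run time even though the file exists on disk. Two common checks, sketched with the CUDA 11.8 path assumed from the comment above (adjust to wherever the cuDNN libraries were copied):
“`
# Sketch: help the dynamic loader find libcudnn.so.8 for a CUDA 11.8 / cuDNN 8.6 install.
ldconfig -p | grep libcudnn          # is libcudnn.so.8 already in the loader cache?

# If not, point LD_LIBRARY_PATH at the directory that actually contains it, then retry.
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:$LD_LIBRARY_PATH
./test_cudnn
“`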
It's showing "bad substitution" after I added the TensorRT path… This is the error: -bash: :${LD_ LIBRARY_PATH}: bad substitution
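A "bad substitution" from bash usually points at a typo inside the ${...} expansion, for example a stray space or character between ${ and the variable name. The export below follows the pattern used elsewhere in these comments; the TensorRT path is an assumption for your setup:
“`
# Sketch: LD_LIBRARY_PATH export without the "bad substitution" typo.
# Note there is no space or extra character inside ${LD_LIBRARY_PATH}.
export LD_LIBRARY_PATH=/usr/local/TensorRT-8.6.1.6/lib:${LD_LIBRARY_PATH}

echo $LD_LIBRARY_PATH   # confirm the path was prepended as expected
“`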
Thank you so much, bro. I wasted hours and days trying to figure out what's wrong. You deserve more support and subscribers.
You're the software wizard I needed, thank you very much
Thank you so much! The only tutorial that has worked so far end-to-end. Thanks again!
I was getting "Error 2: out of memory" from torch.cuda.is_available(). I removed the 3rd GPU and it worked with 2 GPUs. Probably a limitation on Windows/WSL, since I've noticed that loading large models on Windows can't leverage all 3 of my GPUs using the exl2 model loader. I've seen 4+ GPU setups working in Linux environments.
Curious if you have any idea why cuDNN doesn't work on WSL with multiple GPUs?
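One way to test whether the third GPU is the trigger without physically removing it is to restrict which devices CUDA exposes to the process; CUDA_VISIBLE_DEVICES is honoured by both PyTorch and TensorFlow. A sketch, with the device indices as examples:
“`
# Sketch: expose only the first two GPUs to CUDA-using processes in this shell.
export CUDA_VISIBLE_DEVICES=0,1

# Quick check from PyTorch that only the selected devices are visible.
python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
“`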