Ultimate Proxmox LXC Docker GPU Passthrough for NVIDIA
Getting a stable, functional Proxmox GPU passthrough setup for your LXC containers that also supports Docker is easy with this …
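The usual approach for NVIDIA passthrough to an LXC container (the video may differ in details) is to install the NVIDIA driver on the Proxmox host, then expose the host's `/dev/nvidia*` device nodes to the container. A minimal sketch of the container config is below; the container ID (`101`) and the character-device major numbers (`195`, `508`) are illustrative — check yours with `ls -l /dev/nvidia*` on the host:

```
# /etc/pve/lxc/101.conf (illustrative container ID)
# Allow the container to use the NVIDIA character devices.
# Major 195 covers /dev/nvidia0 and /dev/nvidiactl; the nvidia-uvm
# major (508 here) varies per boot/host — verify with `ls -l /dev/nvidia*`.
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 508:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
```

Inside the container you would then install the same NVIDIA driver version in userspace-only mode (`--no-kernel-modules`), and for Docker workloads add the NVIDIA Container Toolkit so containers can request the GPU via `--gpus all`.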
I want to lose my mind.
What is the Ceph repo?
Great video, saving that for later. Thanks.
Cool, thanks. I have no container experience and I'm really fed up with VMs and GPUs (pretty much reverting to bare metal for everything), so this might be total ignorance or me misunderstanding the tutorial, but: if I want to run two models simultaneously (voice/text/voice plus image detection), is that two containers sharing the GPU(s), or can one container run two models simultaneously? Hopefully I'm using the correct terms. Side note: I've been running pfSense (paid) on bare metal for 2 years, very stable — it wasn't as a VM on Server 2019. Hardware: 4x1G / 2x2x10G / 2x40/56G port NICs, Supermicro X10SRL-F, E5-2690v4, 128GB ECC, 2x 1TB 980 Pro RAID1 boot, Intel 8950-SCCP QuickAssist.
Any idea how to get this working with Nvidia GRID? I've been going round in circles for a while, can't seem to actually find how to get a copy of the Nvidia virtualisation drivers for generic Linux KVM.
You are the best, bro… I love your videos, and the timeline rocks.
I've got Proxmox with Radarr and Plex, and a Windows VM with an RTX A2000 passed through… and I want to use it with containers too. And voilà, I found your video.
For the vast majority of my Linux stuff, I've switched over from running Linux VMs to running LXCs with maybe only a tiny handful of notable exceptions.
One of the best things about using LXCs is that you can share multiple GPUs across multiple LXCs, which you can't do between VMs.
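Concretely, this sharing works because each LXC config can reference the same host device nodes. On recent Proxmox VE (8.2+) there is also a simpler `devN:` passthrough option; a hedged sketch, with illustrative container IDs:

```
# /etc/pve/lxc/101.conf and /etc/pve/lxc/102.conf can both
# reference the same GPU device nodes — both containers then
# share the single physical GPU through the host driver:
dev0: /dev/nvidia0
dev1: /dev/nvidiactl
dev2: /dev/nvidia-uvm
```

A VM, by contrast, takes exclusive ownership of the PCIe device via VFIO, so one GPU can only ever serve one VM at a time.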
Excellent video!
Excellent video, great explanation. Ran across your channel recently and it's quickly becoming my go-to. Keep up the great work!
I have Ollama running in an LXC on an Nvidia card. When I try to use the same card in another container (Stable Diffusion), Ollama stops seeing the GPU and falls back to the CPU, and only restarting the container helps. How can this be fixed?
Can you mix different NVIDIA models, or do they have to be the same? E.g. I have an RTX 3060 12GB and a GTX 1060 3GB; can they work together to split the load?
Great video! To the point and step by step! Thank you