locally hosted AI – Proxmox
Homelab AI Server Multi-GPU Benchmarks – Multiple 3090s and a 3060 Ti, Mixed PCIe/VRAM Performance
Looking for information on the performance you can expect from your homelab Open WebUI and Ollama based AI home server? This is the video! We will be using Llama 3.1 70B and assessing tokens per second across many model variations. Have questions about mixed-VRAM setups? Need to run your GPU in a smaller PCIe slot than a full x16? I dive…
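As a quick illustration of how tokens-per-second numbers like these can be gathered, here is a minimal sketch against Ollama's REST API. It assumes Ollama is running on its default endpoint at localhost:11434 and that the llama3.1:70b model is already pulled; the prompt text is just a placeholder, not a prompt from the video.

```python
# Minimal sketch: measure generation throughput from a local Ollama instance.
# Assumes the default endpoint (http://localhost:11434) and a pulled model.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3.1:70b",  # model under test; swap in any pulled model tag
    "prompt": "Explain PCIe lanes in one paragraph.",  # placeholder prompt
    "stream": False,  # return one JSON object that includes timing stats
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# Ollama reports eval_count (tokens generated) and eval_duration
# (nanoseconds spent generating); their ratio is tokens per second.
tps = result["eval_count"] / (result["eval_duration"] / 1e9)
print(f"{tps:.1f} tokens/s")
```

Running a prompt a few times and averaging the reported rate gives a steadier number, since the first request also pays model-load time; `ollama run <model> --verbose` prints a similar "eval rate" from the CLI.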