AI Home Server
Proxmox

Local AI Models on Quadro P2000 – Homelab Testing Gemma, Qwen2, SmolLM, Phi 3.5, Llama 3.1
Longtime homelab favorite, the Quadro P2000 is a 5 GB GPU that is still pretty capable and already sitting in a lot of home servers, but how does it handle running local LLMs? This is crazy, but it works, and the performance is NOT what you expect. A must-watch if you already have a P2000 in your system! I will cover some tips…
Read More »
Proxmox

Homelab AI Server Multi-GPU Benchmarks – Multiple 3090s and a 3060 Ti, Mixed PCIe and VRAM Performance
Looking for information on what performance you can expect from your homelab OpenWebUI and Ollama-based AI home server? This is the video! We will be running Llama 3.1 70B and measuring tokens per second across many model variations. Have questions about mixed-VRAM setups? Need to run your GPU in a smaller PCIe slot than a full x16? I dive…
Read More »
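If you want to reproduce the tokens-per-second numbers from the benchmark video yourself, Ollama's REST API reports `eval_count` (generated tokens) and `eval_duration` (nanoseconds) in each generation response, which is all you need. A minimal sketch, assuming a local Ollama instance on its default port (the model name and prompt below are just placeholders):

```python
import json
import urllib.request

# Default Ollama endpoint; adjust host/port if your server differs.
OLLAMA_URL = "http://localhost:11434/api/generate"

def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Convert Ollama's token count and nanosecond duration to tokens/sec."""
    return eval_count / (eval_duration_ns / 1e9)

def benchmark(model: str, prompt: str) -> float:
    """Send one non-streaming generation request and return tokens/sec."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return tokens_per_second(data["eval_count"], data["eval_duration"])

if __name__ == "__main__":
    # Example call (requires a running Ollama server with the model pulled):
    #   benchmark("llama3.1:70b", "Explain PCIe lane bifurcation.")
    # Offline sanity check: 512 tokens generated over 16 seconds.
    print(round(tokens_per_second(512, 16_000_000_000), 2))
```

Running the same prompt against each model variant and GPU arrangement gives directly comparable tokens/sec figures like those discussed in the video.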

