AI Home Server
Proxmox
Local AI Models on Quadro P2000 – Homelab Testing Gemma, Qwen2, SmolLM, Phi 3.5, Llama 3.1
The longtime homelab favorite Quadro P2000 is a 5 GB GPU that is still pretty capable and is already sitting in a lot of home servers, but how does it handle running local LLMs? This is crazy, but it works, and the performance is not what you might expect. A must-watch if you already have a P2000 in your system! I will cover some tips…
Read More »
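If you want to reproduce this kind of small-model test yourself, here is a minimal sketch that smoke-tests one of the models mentioned above against a local Ollama instance over its REST API. It assumes Ollama is serving on its default port 11434 and that the model tag (smollm here) has already been pulled; both are assumptions on my part, not details taken from the video.

```python
import requests

# Assumes a default local Ollama install on port 11434 and that the
# model has already been pulled, e.g. `ollama pull smollm`.
OLLAMA_URL = "http://localhost:11434/api/generate"

resp = requests.post(
    OLLAMA_URL,
    json={
        "model": "smollm",  # small enough to fit in a 5 GB P2000
        "prompt": "Explain what a homelab is in one sentence.",
        "stream": False,    # return one JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
data = resp.json()

print(data["response"])
# eval_duration is reported in nanoseconds, eval_count in tokens
print(f'{data["eval_count"] / data["eval_duration"] * 1e9:.1f} tokens/s')
```

Swapping the model tag for the other small models covered in the video (qwen2, phi3.5, and so on) gives a quick feel for what the 5 GB card can and cannot hold.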
Proxmox
Homelab AI Server Multi-GPU Benchmarks – Multiple 3090s and a 3060 Ti, Mixed PCIe and VRAM Performance
Looking for information on what performance you can expect from your homelab Open WebUI and Ollama based AI home server? This is the video! We will be using Llama 3.1 70B and assessing tokens per second across many model variations. Have questions about mixed VRAM setups? Need to run your GPU in a smaller PCIe slot than a full x16? I dive…
Read More »
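For a rough idea of how a tokens-per-second comparison like this can be scripted, here is a sketch that times several model tags through the same Ollama endpoint and derives tokens/s from the eval_count and eval_duration fields the API returns. The model list and prompt are placeholders of my own choosing, not the exact variants benchmarked in the video.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
PROMPT = "Summarize the benefits of running LLMs locally."

# Placeholder tags; substitute whatever variants you have pulled locally.
MODELS = ["llama3.1:8b", "llama3.1:70b", "qwen2:7b"]

for model in MODELS:
    r = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=1800,
    )
    r.raise_for_status()
    d = r.json()
    # eval_count = generated tokens, eval_duration = generation time in ns
    tps = d["eval_count"] / d["eval_duration"] * 1e9
    print(f"{model:<16} {tps:6.1f} tokens/s "
          f"(load {d['load_duration'] / 1e9:.1f}s)")
```

Running the same loop with the model split across different GPU and PCIe combinations is one way to put numbers on the mixed-VRAM questions the video digs into.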
Proxmox
Reflection Llama 3.1 70B Tested on Ollama Home AI Server – Best AI LLM?
The buzz about the new Reflection Llama 3.1 fine-tune being the world's best LLM is everywhere, and today I am testing it out with you on the home AI server we put together recently, running in Docker in an LXC on Proxmox on my EPYC quad-3090 GPU AI rig. Ollama rolled out support for…
Read More »
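If you want to try the Reflection fine-tune on your own Ollama box, here is a sketch against the chat endpoint. The reflection:70b tag matches the model's listing in the Ollama library at the time, but treat it as an assumption and verify against the library page; the system prompt below is a generic placeholder, since the model's own card recommends a specific one you should prefer.

```python
import requests

BASE = "http://localhost:11434"  # assumes a default local Ollama install

# Pull the model first, e.g. `ollama pull reflection:70b` (tag assumed
# from the Ollama library listing; verify before relying on it).
r = requests.post(
    f"{BASE}/api/chat",
    json={
        "model": "reflection:70b",
        "messages": [
            # Placeholder system prompt; check the model card for the
            # recommended one for Reflection-style reasoning output.
            {"role": "system", "content": "Think step by step, then answer."},
            {"role": "user", "content": "How many r's are in 'strawberry'?"},
        ],
        "stream": False,
    },
    timeout=1800,
)
r.raise_for_status()
d = r.json()
print(d["message"]["content"])
# The final chat response also carries timing fields, so the same
# tokens/s math from the benchmark sketch applies here.
print(f'{d["eval_count"] / d["eval_duration"] * 1e9:.1f} tokens/s')
```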