llm benchmark
Proxmox
Llama 3.2 Vision 11B LOCAL Cheap AI Server Dell 3620 and 3060 12GB GPU
We are testing a killer cheap AI home server built from a single 3060 GPU and a Dell 3620: very low cost and surprisingly capable when paired with the new Llama 3.2 11B LLM, powered by Ollama, Open WebUI and LXC containers in Proxmox. Parts: Cheap AI Server Dell Precision 3620 Tower, 3060 12GB GPU, GPU 6-to-8-pin power adapter, Ai…
Read More »
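The build above passes the 3060 through to an LXC container on Proxmox. A minimal sketch of the lines typically added to the container's config for NVIDIA device passthrough; the container ID is hypothetical, and the device major numbers vary by driver version, so verify them on your own host with `ls -l /dev/nvidia*`:

```
# /etc/pve/lxc/<CTID>.conf  (hypothetical container ID)
# Allow the NVIDIA character devices (major 195 on most systems;
# the nvidia-uvm major differs per machine, check before copying).
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```

With the devices mounted, the container sees the GPU once the matching NVIDIA userspace driver is installed inside it.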
Proxmox
Homelab AI Server Multi GPU Benchmarks – Multiple 3090s and 3060 Ti Mixed PCIe VRAM Performance
Looking for information on what performance you can expect from your homelab Open WebUI and Ollama based AI home server? This is the video! We will be using Llama 3.1 70B and assessing tokens per second across many model variations. Have questions about mixed VRAM setups? Need to run your GPU in a smaller PCIe slot than a full x16? I dive…
Read More »
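Tokens-per-second figures like the ones benchmarked above can be derived from the counters Ollama returns with each `/api/generate` response: `eval_count` (tokens generated) and `eval_duration` (generation time in nanoseconds). A minimal sketch:

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Convert Ollama's generation counters into a tokens/s rate.

    Ollama's /api/generate and /api/chat responses report eval_count
    (number of tokens generated) and eval_duration (nanoseconds spent
    generating them).
    """
    return eval_count / (eval_duration_ns / 1e9)

# Example: 350 tokens generated in 10 seconds of eval time
print(tokens_per_second(350, 10_000_000_000))  # -> 35.0
```

The same ratio is what `ollama run --verbose` prints as "eval rate", so API-side and CLI-side numbers are directly comparable.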