Home AI Server
Proxmox
Qwen QwQ 2.5 32B Ollama Local AI Server Benchmarked w/ CUDA vs Apple M4 MLX
The new Qwen with Questions (aka QwQ) LLM, a fine-tune based on the popular Qwen 2.5 32B base, is a unique step in chain-of-thought reasoning, and it really is impressive! I was lucky enough to find some stats from an X poster about their Apple M4 Max Q8 tokens per second to compare against for all those…
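If you want to pull comparable tokens-per-second numbers from your own rig, Ollama's generate API reports timing fields you can compute them from. A minimal sketch; the sample numbers below are placeholders, not measurements from the post:

```python
# Sketch: compute tokens/sec from an Ollama /api/generate response.
# eval_count (tokens generated) and eval_duration (nanoseconds) are
# Ollama's documented timing fields; the sample values are made up.

def tokens_per_second(resp: dict) -> float:
    """Generation speed from Ollama's timing fields (duration is in ns)."""
    return resp["eval_count"] / resp["eval_duration"] * 1e9

# Made-up example response fragment, not a real benchmark result:
sample = {"eval_count": 512, "eval_duration": 16_000_000_000}  # 16 s
print(round(tokens_per_second(sample), 1))  # 512 tokens / 16 s = 32.0
```

The same two fields come back whether you hit the API with curl or a script, so it works the same against a CUDA box or an M4 running Ollama.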
Read More »
Proxmox
Shocking Claims: Is the New AI Model Reflection 70B a Total Scam?
Your Queries: llama, llm, ollama, reflection, reflection ai, reflection llama ai, ai, huggingface, claude 3.5 sonnet benchmarks, open ai chatgpt new features, reflection 70b, llama reflection, llama reflection 70b, reflection tuning, reflection llama 70b, reflection 405b, claude 3.5 opus, chatgpt memory, ai news, gpt5, proxmox, ai server, home ai server, home ai, ai home server,…
Read More »
Proxmox
REFLECTION Llama 3.1 70B Tested on Ollama Home AI Server – Best AI LLM?
The buzz about the new Reflection Llama 3.1 fine-tune being the world's best LLM is all over the place, and today I am testing it out with you on the home AI server that we put together recently, running in Docker in an LXC on Proxmox on my EPYC quad-3090 GPU AI rig. Ollama got out support for…
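A setup like the one described, Ollama inside Docker with the GPUs visible, is typically launched with something along these lines (a sketch based on Ollama's published Docker instructions; the `reflection` model tag is an assumption and may differ from what the post actually pulled):

```shell
# Start Ollama in a container with all NVIDIA GPUs visible
# (requires the NVIDIA Container Toolkit on the Docker host, here the LXC):
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Pull and chat with the model inside the running container
# ("reflection" is assumed here; check the Ollama library for the real tag):
docker exec -it ollama ollama run reflection
```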
Read More »
Proxmox
Proxmox AI Homelab All-in-One Home Server – Docker + Dockge LXC with GPU Passthrough
One server for the ultimate homelab: a multi-3090-GPU LXC and Docker host has turned into a really cool rig. Here I show you the complete step-by-step setup, from Proxmox to the Nvidia drivers to the Nvidia Container Toolkit, and then we set up an LXC to house a Docker host that handles GPU passthrough at bare-metal speeds. Then I install…
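In a Dockge-managed stack, the GPU passthrough step described above usually ends up expressed as a Compose file. A sketch, not the exact stack from the video; the service and volume names are assumptions:

```yaml
# compose.yaml — hypothetical Ollama stack with NVIDIA GPUs reserved.
# Assumes the Nvidia drivers and Container Toolkit are already working
# on the Docker host (the LXC), as in the setup described above.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
volumes:
  ollama:
```

The `deploy.resources.reservations.devices` block is Compose's standard way to request GPUs, so the same file works whether you paste it into Dockge or run `docker compose up -d` by hand.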
Read More »