ollama

Proxmox

SUPER Cheap AI PC – Low Wattage, Budget-Friendly, Local AI Server with Vision
Exploring the cheap end of AI, we test the M2000, K2200, and P2000 against CPU inference to see what local AI performance looks like in the mid-$100 price range. Going in, I thought the K2200 would take it, but there is a twist to this, so make sure you watch (a minimal sketch of this kind of CPU-vs-GPU test follows this entry)! SUPER BUDGET AI RIG Dell Optiplex 7050…
Read More »
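
Not from the video, just a minimal sketch of the kind of CPU-vs-GPU comparison being described: Ollama’s /api/generate endpoint accepts a num_gpu option (0 keeps inference entirely on the CPU, a large value offloads every layer to the GPU), so the same prompt can be timed both ways. The server address, model name, and prompt below are all assumptions.

```python
# Hypothetical benchmark sketch: time one Ollama generation with and
# without GPU offload. Assumes a local Ollama server on localhost:11434
# and an already-pulled llama3.2:3b model (assumptions, not from the post).
import json
import time
import urllib.request

def timed_generate(model: str, prompt: str, num_gpu: int) -> float:
    """Run one non-streaming /api/generate call; return wall-clock seconds."""
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        # num_gpu = 0 forces CPU-only inference; 99 offloads every layer.
        "options": {"num_gpu": num_gpu},
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        json.load(resp)  # drain the response; we only care about timing here
    return time.perf_counter() - start

PROMPT = "Explain PCIe lanes in one paragraph."
print(f"CPU only: {timed_generate('llama3.2:3b', PROMPT, 0):6.1f}s")
print(f"GPU:      {timed_generate('llama3.2:3b', PROMPT, 99):6.1f}s")
```
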
Proxmox

Qwen QwQ 2.5 32B Ollama Local AI Server Benchmarked w/ CUDA vs Apple M4 MLX
The new Qwen with Questions, aka QwQ, is an LLM fine-tune based on the popular Qwen 2.5 32B base and a unique step in chain-of-thought reasoning, which really is impressive! I was lucky enough to find tokens-per-second stats from an X poster running the Q8 quant on an Apple M4 Max to compare against for all those… (the tokens-per-second math is sketched after this entry)
Read More »
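
For context on where numbers like these come from: Ollama’s /api/generate response reports eval_count (tokens generated) and eval_duration (in nanoseconds), so tokens per second is just eval_count / (eval_duration / 1e9). A minimal sketch, assuming a local server with the qwq model already pulled:

```python
# Minimal tok/s sketch against a local Ollama server (an assumption).
import json
import urllib.request

body = json.dumps({
    "model": "qwq",  # assumed model tag; check `ollama list` locally
    "prompt": "How many r's are in strawberry?",
    "stream": False,
}).encode()
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=body,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    stats = json.load(resp)

# eval_duration is in nanoseconds, hence the divide by 1e9.
tok_per_s = stats["eval_count"] / (stats["eval_duration"] / 1e9)
print(f"{stats['eval_count']} tokens at {tok_per_s:.1f} tok/s")
```
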
Virtualization

How Is the Apple M4 Mac Mini for Programming – AI – Coding – React JS
I tried Apple’s new M4 Mac Mini for my daily programming tasks, where I create websites, write code, do some web design, etc. The Mac Mini is the newest product from Apple with the new M4 chip, and it’s meant to be a powerhouse, but how does it stack up when doing things like running AI models, coding in VS Code,…
Read More »
Proxmox

How to Use Bolt.new for FREE with Local LLMs (And NO Rate Limits)
Over the last month, together as a community, we started oTToDev, a fork of Bolt.new that aims to add a bunch of much-needed functionality, like being able to use any LLM you want, including local ones with Ollama. In this video I give some super important tips and tricks for using local LLMs with oTToDev, some of which can… (a sketch of pointing an OpenAI-style client at local Ollama follows this entry)
Read More »
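
As a rough illustration (and an assumption about the internals, since the video covers the actual setup): tools in the Bolt.new family talk to model providers through an OpenAI-style chat API, and Ollama exposes a compatible endpoint at /v1, so a local model can stand in for a hosted one just by changing the base URL. The model tag below is a placeholder:

```python
# Sketch of pointing an OpenAI-compatible client at local Ollama.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama, not openai.com
    api_key="ollama",  # the client requires a key; Ollama ignores its value
)

resp = client.chat.completions.create(
    model="qwen2.5-coder:7b",  # placeholder local model, not from the video
    messages=[{"role": "user", "content": "Scaffold a React counter component."}],
)
print(resp.choices[0].message.content)
```
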
Linux

Install and Run Arduino IDE on Raspberry Pi 5 and Linux Ubuntu
In this tutorial, we explain how to…
Read More »
Proxmox

Local AI Models on Quadro P2000 – Homelab Testing Gemma, Qwen2, SmolLM, Phi 3.5, Llama 3.1
Longtime homelab favorite, the Quadro P2000 is a 5 GB GPU that is still pretty decent and already sitting in a lot of home servers, but how does it handle running local LLMs? This is crazy, but it works, and the performance IS NOT what you expect. A must-watch if you already have a P2000 in your system (a sketch for checking which models fit in 5 GB follows this entry)! I will cover some tips…
Read More »
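
Whether a given model is worth trying on a 5 GB card comes down largely to its quantized size, and Ollama’s /api/tags endpoint lists the on-disk size of every pulled model. A minimal sketch, assuming a local server; note that KV-cache and context overhead mean a model close to 5 GB may still spill over to CPU:

```python
# Sketch: list pulled Ollama models and flag which fit in 5 GB of VRAM.
import json
import urllib.request

VRAM_BYTES = 5 * 1024**3  # Quadro P2000: 5 GB

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = json.load(resp)["models"]

for m in sorted(models, key=lambda m: m["size"]):
    # On-disk size is a lower bound; runtime use adds KV-cache overhead.
    fits = "fits" if m["size"] < VRAM_BYTES else "needs CPU offload"
    print(f"{m['name']:<28} {m['size'] / 1024**3:5.1f} GiB  {fits}")
```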