Speed up homelab patching with a CACHE
Today I’m setting up a simple nginx proxy, so I can store updates used by my many Linux systems. Most of them run a derivative of …
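The gist of the setup, as a minimal sketch (the hostname, cache path, and upstream mirror below are illustrative placeholders, not the exact values from the video):

```nginx
# Minimal nginx apt-caching proxy (illustrative values throughout)
proxy_cache_path /var/cache/nginx/debcache levels=1:2
                 keys_zone=debcache:10m max_size=50g
                 inactive=30d use_temp_path=off;

server {
    listen 80;
    server_name debcache.example.lan;        # hypothetical cache hostname

    location / {
        proxy_pass http://deb.debian.org;    # upstream Debian mirror
        proxy_set_header Host deb.debian.org;
        proxy_cache debcache;
        proxy_cache_valid 200 302 7d;        # keep good responses for a week
        proxy_cache_use_stale error timeout updating;
    }
}
```

Clients then point their sources at http://debcache.example.lan/ instead of the mirror directly.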
It would be interesting to show how to cache Docker dependencies too: a homelab cluster with a bunch of VMs, each running Docker inside.
While looping over files with find, the mirror variable gets emptied after the first iteration. This results in the first file being OK and subsequent files getting incomplete URLs written. I'm not familiar enough with bash scripting to explain why. Edit: Also, in your blog you place the nginx log files in /var/debcache/, but at the end of the blog you tail files in /etc/debcache/.
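One classic shell gotcha that produces exactly this symptom (only a guess, since the actual script isn't shown in the comment) is a command inside a `find | while read` pipeline that also reads from stdin and swallows the remaining file names:

```shell
#!/bin/sh
# Hypothetical reconstruction of the bug described above: when file names are
# piped into a `while read` loop, any command inside the loop that also reads
# stdin (a stray `read`, ssh, etc.) silently eats the remaining names, so only
# the first iteration sees complete data.

# Broken pattern: the inner `read` steals the loop's input.
printf 'file1\nfile2\nfile3\n' | while read -r file; do
    read -r mirror            # consumes "file2", then hits EOF -> empty
    echo "file=$file mirror=$mirror"
done
# Prints:
#   file=file1 mirror=file2
#   file=file3 mirror=

# Fixed pattern: point the inner command's stdin elsewhere.
printf 'file1\nfile2\nfile3\n' | while read -r file; do
    read -r mirror </dev/null || mirror="(unset)"
    echo "file=$file mirror=$mirror"
done
# Every iteration now sees its own file name.
```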
Would love to see a similar solution for RH based linux distros 🙂
Great video as always. Thanks!
How do you keep all of your containers updated without having to do manual apt upgrades on each one?
Will this break the dist-upgrade process, say Ubuntu 22.04 to 24.04, since it renames the repositories in sources.list?
Awesome thanks!
Very nice video and overall good idea! I personally like Ansible more than a shell script – especially because you plan to reconfigure all your servers. But that doesn't matter as long as it works for you. 🙂
Do you have an automatic way of generating IPv6 DNS entries based on SLAAC, or is this step still a manual process? I assume you use OPNsense?
"We don't need IPv4" – Love it. IPv6 isn't the future. IPv6 is now!
Perfect guide. Something I need to add to my ever growing list of things to do.
PERFECT! a project for the weekend! Thank You!
Seems like a lot of work compared to apt install apt-cacher-ng on a spare server and apt install auto-apt-proxy on the clients.
Man, I was just reading about this. Does anyone know how this compares to something like a Squid cache in pfSense/OPNsense? I've been planning on getting a dedicated machine for *sense and am wondering how much I need to worry about storage on it.
This can also be done with Lancache by adding the rules there, but in that case you're also adding another DNS resolver.
Now I just need a mirror that serves my ISP at anything higher than 300 kbps on a gigabit link.
For reference, the .sources files are in deb822 format.
It's intended to reduce the need for separate keyring files, IIRC.
Also, I can't help but want to try writing an Ansible playbook to replace that bash script. 😂
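For anyone who hasn't met deb822 yet, a typical .sources stanza looks like this (standard Debian bookworm defaults, not values taken from the video):

```
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm bookworm-updates
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```

The Signed-By field embeds the keyring reference per-repository, which is what removes the need to manage separate trusted.gpg.d entries.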
Thank you for posting this! I've been meaning to set up apt-cacher-ng for a while now but I'd rather use nginx for the exact reasons you explained
Great Video as always! Really nice how much faster my updates are now with a cache in place.
You have to look at Proxmox Offline Mirror! 😉 I have air-gapped requirements, BUT your setup is very elegant.
Now put that in a container 😅 and I'll run it in my stack.
I was actually recently thinking about looking into caching for fun, but decided it was too much work for not enough gain. Thanks a bunch, it's now basically a turnkey solution for me (not quite, but close enough), so that equation has flipped and I can "just do it". Neat!
A long time ago I thought about the same thing, but honestly it is a lot to set up. I'm surprised nobody has made it yet as a preconfigured package or container.
Suggestion to improve the rewrite script's readability: use cat with a heredoc (EOF) to write a multiline string to a file.
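For instance (file names and the mirror URL here are illustrative, not taken from the actual script), a heredoc keeps a generated sources file readable in one block:

```shell
#!/bin/sh
# Illustrative rewrite of an echo-chain into a single heredoc; MIRROR and the
# output path are placeholders, not the actual values from the video's script.
MIRROR="http://debcache.example.lan/debian"

cat > /tmp/demo.sources <<EOF
Types: deb
URIs: $MIRROR
Suites: bookworm
Components: main
EOF

cat /tmp/demo.sources
```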
Was running apt-cacher-ng on a Pi before, and on a Debian VM most recently. This was 'easier' to set up, mostly. I have a dozen or so bare-metal and VM Debian machines, and at least that many Pis. It took less than an hour to spin up a minimal VM and install, another 20–30 minutes to get the rest set up, and it's cooking away for them all. It'd be nice to have a text doc with what needs to be changed in your scripts from your domain and particular setup, but other than that, very nice.
Excellent work. Will immediately spin up an LXC with nginx + apt cache and try this out 🙂 P.S. I added a couple of lines so that icons also load.
Why not squid?
Your solution is quite simple and elegant.
What about apt-cacher-ng?
Hmm, I'm not in a place to try this, but one might rewrite all repos to put the repo hostname as the first part of the URL path, then extract it again in the nginx config, making it able to cache arbitrary repos.
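A rough sketch of that idea (untested; it assumes clients' sources are rewritten to the form http://cache.lan/&lt;repo-host&gt;/&lt;path&gt;, and all names here are illustrative):

```nginx
# Clients request e.g. http://cache.lan/deb.debian.org/debian/dists/...
# The first path segment is peeled off and reused as the upstream hostname,
# so one server block can cache arbitrary repositories.
proxy_cache_path /var/cache/nginx/aptcache keys_zone=aptcache:10m max_size=50g;

server {
    listen 80;

    location ~ ^/(?<repo>[^/]+)/(?<rest>.*)$ {
        resolver 1.1.1.1;                 # required because proxy_pass uses variables
        proxy_pass http://$repo/$rest;
        proxy_set_header Host $repo;
        proxy_cache aptcache;
        proxy_cache_valid 200 7d;
    }
}
```

Note that with a variable in proxy_pass, nginx resolves the upstream at request time, hence the explicit resolver directive.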
Sweet! This actually gave me some ideas about copying the config files onto my fresh Debian installs. I was using self-hosted Gitea, but having a simple wget command piped to bash is a better idea.
I just really liked the nginx-fu needed to get this working. Learned a lot there. I have only like one Ubuntu server in Proxmox, and one unRAID box, which doesn't really benefit from caching.
Now that I have more than 20 LXCs, I'm starting to automate updates with Ansible, and this will come in handy to save some bandwidth (not that I'm behind a metered connection, but if the servers are up 24/7, better put them to good use). Thanks for the video.
I have to do the same for my openSUSE systems.
Nice! Gonna set this up today