
An Epyc Homelab Monster: the Perfect Media Server mega upgrade



With 24 cores and 48 threads, the EPYC 7402 is a monster of a CPU. Paired with the Supermicro H12SSL-i motherboard, it offers more PCIe lanes than you can shake a stick at. Today, Alex builds the homelab box to end all boxes.

Part 1 – My Perfect Media Server isn’t so perfect anymore…

$1750 all-in gets you the CPU, motherboard, 256GB of ECC RAM, and the CPU cooler from the eBay listing below (no affiliation).

– eBay listing:
– Sliger CX4712 Case:
– EPYC 7402 CPU:
– Supermicro H12SSL-i Motherboard:
– Icy Dock SSD Caddy:
– 2x U.2 to PCIe x8 adapter card:

– Supermicro Fan Control script:
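
For context, here is a minimal sketch (not the linked script itself) of the raw IPMI commands that Supermicro fan-control scripts are typically built around. It assumes ipmitool is installed and run as root on the host; the 0x30 0x45 / 0x30 0x70 0x66 byte codes are the ones widely documented for Supermicro boards, but verify them against your own board's manual before running anything.

```python
# fan_control_sketch.py -- NOT the linked script, just a minimal sketch
# of the raw Supermicro IPMI commands such scripts are built around.
# Assumes ipmitool is installed and this runs as root on the host;
# verify the byte codes against your board's manual before use.
import subprocess

def ipmi_raw(*hex_bytes: str) -> None:
    """Send a raw IPMI command to the local BMC."""
    subprocess.run(["ipmitool", "raw", *hex_bytes], check=True)

def set_fan_mode(mode: int) -> None:
    # Commonly documented Supermicro modes:
    # 0x00 standard, 0x01 full, 0x02 optimal, 0x04 heavy I/O
    ipmi_raw("0x30", "0x45", "0x01", f"{mode:#04x}")

def set_fan_duty(zone: int, percent: int) -> None:
    # Zone 0 is usually the CPU/system fan headers, zone 1 peripheral.
    # Manual duty cycles only stick while the BMC is in "full" mode.
    ipmi_raw("0x30", "0x70", "0x66", "0x01", f"{zone:#04x}", f"{percent:#04x}")

if __name__ == "__main__":
    set_fan_mode(0x01)    # full mode so the BMC honours the manual duty
    set_fan_duty(0, 30)   # CPU zone to 30% -- quiet Noctua territory
```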

🎙️ podcast
🦣 mastodon techhub.social/@ironicbadger

===

Chapters:

00:00 – Component Overview
01:59 – Case Overview
03:49 – SSD Caddy
07:16 – Build Montage 1
09:15 – Let's Build
15:05 – Build Montage 2
16:19 – PCIe Bifurcation
19:17 – PCI Passthrough
20:46 – Power Usage
24:13 – Price
25:03 – Toasty U.2
25:48 – Fan Control
27:03 – Case Conclusions
28:36 – The Future Bites

===

Music by Alex Kretzschmar

* Sliger CX4712 case generously provided by the manufacturer for review. No money changed hands in return for my honest opinions in this video.

44 Comments

  1. You sir have done what I tell everyone who thinks they want to build a workstation/HEDT rig with a Threadripper or Xeon: just get an EPYC. You save money and get more PCIe lanes and more threads. Great build – boring is good!

  2. Great video. Why use two old SSDs? Four NVMe drives would be great as a FreeBSD RAID0 stripe across them all, with the rest of the space for bhyve virtualization, and then another RAID for the spinning disks (see the stripe sketch after the comments). I'd also like to see a single 64-core EPYC or duals; most people are rocking 128 cores these days in dual-CPU setups.

    For NVMe, do you have PCIe 4.0 lanes feeding some Samsung 990 drives on the bifurcation card?

  3. Lots of older cases had motherboard trays. I've had my InWin Q500s since my 486DX2 days and still have two of them. One runs my older Unraid 5.0.6 with 4TB-or-smaller drives, either internally in the case or in my SATA2 hot-swap enclosures; it has a 3-bay and a 4-bay setup in the 5.25" bays.

  4. The removable fan on the CPU cooler comes from the server world: it lets you swap the fan quickly and easily without having to fight with the cooler itself. On some systems you can even replace a failed fan without turning off the machine, which is a good thing for places screaming about five-nines uptime.

  5. The case is absolutely impressive! 175 watts is still a lot, though. With a bad energy contract here in Germany that would be around €600 in electricity costs alone; even with a good one it would be around €450 (the arithmetic is sketched after the comments).

  6. Thank you for the video and the awesome information. I am looking at this build to upgrade my homelab habit (currently a couple of HP EliteDesk 800 G3 SFF machines).

    My use case will be the following:
    – Plex with the Arrs and associated apps
    – HomeAssistant and associated apps
    – Frigate and associated apps
    – Local LLM
    – Retro Gaming

    Is there a full build parts list (including the PSU)? I am currently on OMV but looking at TrueNAS or Unraid. Any suggestions? Thanks again!

  7. 14:40 FROM UNDERNEATH?? I never even thought to look there because of the absurdity; I assumed they were rivets. It was easier taking apart the cage holding the 10 adapters (which isn't even needed).

  8. Thank you for the video, really interesting to watch and it explained quite a lot – I'm excited about my build now. I'm dipping my feet into homelab stuff and just bought an H11SSL-i mobo with a 7551P and 128GB RAM – I have so much more info now!

  9. How did you get your Arc GPU properly passed through to your VM running Plex in Docker? I have mine (an A380) passed through from Proxmox to an Ubuntu 22.04 VM. The drivers are installed in the VM and the card shows up in intel_gpu_top, but the VM crashes if I try to transcode with it. I also get errors in dmesg about resizable BAR not working and only 256MB being allocated to the card (see the BAR-check sketch after the comments). Any tips?

  10. BTW – I've watched a billion similar vids and have been really looking for an upgrade like this for my 200TB array. I pulled the trigger on a very similar config. Thanks for the recommendation!

  11. Can I ask: I mounted the CPU cooler's fan the opposite way around, i.e. facing the rear. Will this affect the VRM temps, and will it be OK as it is, or will I have to turn the heatsink around? I don't really want to, given how sensitive EPYC sockets can be (see the temperature-watch sketch after the comments). Also, reading the fan script, it says it can be dangerous and could smoke the CPU – is it safe to run?

  12. Good price on these CPUs – I had no idea an EPYC was this attainable. Aren't they rather slow, though? They're half the speed of what the regular 12th/13th-gen Intel chips boost to, and the RAM is also starting to hover in the half-speed range compared to DDR5. I currently have a refurb dual-CPU Xeon workstation and it's barely usable. Are there homelab workloads that take advantage of slower processing but more cores/threads? The main advantage I can see is the huge number of PCIe lanes for add-on cards and drives.

  13. Hey Alex, coming from the Self-Hosted podcast (and other JB shows). I recently also bought myself an EPYC server (Supermicro CSE-745TQ case, Supermicro H11SSL-C motherboard, AMD EPYC 7302P, 128GB RAM, Noctua NH-U9 TR4-SP3 CPU cooler).

    My storage changes frequently because I don't run the server 24/7 (yet) due to electricity costs (and many problems with my electricity provider). Mainly I have 2 SATA 64GB DOMs from Supermicro (great little things) for the OS, and then I mostly use an ASUS PCIe adapter for 4 NVMe M.2 drives: a pair of Intel Optane 100GB (indestructible things with 60 DWPD) mostly as support drives (ZIL, L2ARC, special dev – see the support-vdev sketch after the comments) and a pair of Western Digital SN700 2TB for my VMs. The SATA/SAS bays in front are mostly empty unless I'm testing some drives or some weird pool configurations.

    I also have a Tesla P4 which I bought almost 2 years ago, used as a split GPU for Plex transcoding and some GPU-accelerated VMs. I'm running some media servers, dashboards, and a testing environment for work, and I have a Synology NAS with 4×16TB drives for all that storage. Everything remote runs on Tailscale (powered by the WireGuard Noise protocol…), with no traffic on a public IP.

    BTW, what is the cost of electricity where your server is located? Currently in Europe (Czech Republic) the prices are not really good, and running a power-hungry server at home is not something I'd like to do, so I'm running desktop hardware fitted out as a low-power home server. Whew, long post – hope you did not die of boredom reading this 😛

  14. @ktzsystems do you have the build list in a Google Spreadsheet format? I'd like to build the same thing you did for my homelab, but I'm curious about the RAID card, PSU, and cables you purchased to wire up the drives.

  15. Are you saying that with everything assembled and running, your power usage is under 150 watts? Is that an estimate or a meter reading?

    If it's real, I will be replacing my Dell R720s.

  16. Hey @ktzsystems, I really enjoyed this – clear audio, informative, and very good explanations about your choices and bifurcation. Thanks! 👍🏻😎 Instant sub 🤘🏻

  17. Love the video. Can I ask about the SATA DOM ports? Do the SSDs you're using as boot drives in this build get powered by the DOM ports, or did you add power from the PSU?

  18. I have the Icy Dock 4-drive caddy for a 5.25" bay; my storage server uses two of them and they are great. You can easily replace the fans with Noctua fans of the same size and never hear them.

  19. On the H12 motherboard you can change the fan settings, which might stop the ramp-up/down. Try Optimal or even Heavy Workload; your Noctuas will still be quiet. Better than messing with raw BMC scripting, if it works. Nice video, sir.
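
===

On comment 2's suggested layout: a minimal sketch, assuming FreeBSD device names nda0–nda3 for the NVMe drives and da0–da5 for the spinners (all placeholders), and noting that a four-way stripe has zero redundancy.

```python
# zfs_layout_sketch.py -- comment 2's suggested layout, sketched with
# placeholder FreeBSD device names (nda0..nda3 NVMe, da0..da5 spinners).
import subprocess

def sh(*args: str) -> None:
    subprocess.run(args, check=True)

# RAID0 stripe across all four NVMe drives for bhyve -- fast, but any
# single drive failure loses the whole pool, so back it up elsewhere.
sh("zpool", "create", "fast", "nda0", "nda1", "nda2", "nda3")

# A separate redundant pool for the spinning disks; raidz2 is an
# assumption -- pick the level to match your drive count and risk.
sh("zpool", "create", "tank", "raidz2", "da0", "da1", "da2", "da3", "da4", "da5")
```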
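Comment 5's numbers check out. The arithmetic, with the two €/kWh tariffs being assumptions picked to bracket typical German household prices:

```python
# power_cost_sketch.py -- annual cost of a 175 W server running 24/7.
# The two tariffs are assumptions bracketing German household prices.
DRAW_KW = 0.175
KWH_PER_YEAR = DRAW_KW * 24 * 365          # ~1533 kWh

for label, eur_per_kwh in [("bad contract", 0.39), ("good contract", 0.29)]:
    print(f"{label}: ~{KWH_PER_YEAR * eur_per_kwh:.0f} EUR/year")
# bad contract: ~598 EUR/year
# good contract: ~445 EUR/year
```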
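For comment 9: not a fix, but a quick way to confirm the symptom inside the VM. If Resizable BAR didn't apply, the Arc's main memory region reports 256M instead of the card's full VRAM. The PCI address below is a placeholder.

```python
# bar_check_sketch.py -- print the BAR sizes of a passed-through GPU.
# ADDR is a placeholder; find yours inside the VM with `lspci`.
import subprocess

ADDR = "01:00.0"  # placeholder PCI address of the Arc inside the VM

out = subprocess.run(
    ["lspci", "-vv", "-s", ADDR],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    # BARs show up as "Region N: Memory at ... [size=...]"; a 256M
    # region on a multi-GB card means Resizable BAR didn't take.
    if "Region" in line and "Memory" in line:
        print(line.strip())
```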
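For comment 11: the safest way to answer the airflow question is empirically – watch the BMC's temperature sensors under load before and after reorienting the fan. `ipmitool sdr type Temperature` is a standard invocation, but the sensor-name fragments below are assumptions that vary by board.

```python
# temp_watch_sketch.py -- poll BMC temperature sensors every 10 s so you
# can see whether a reversed cooler starves the VRMs of airflow.
import subprocess
import time

WATCH = ("VRM", "CPU", "System")  # assumed sensor-name fragments

while True:
    out = subprocess.run(
        ["ipmitool", "sdr", "type", "Temperature"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if any(frag in line for frag in WATCH):
            print(line.strip())
    print("---")
    time.sleep(10)
```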
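And comment 13's Optane layout maps onto stock zpool commands. A sketch with placeholder pool and device names; note the special vdev must be redundant, since losing it loses the pool.

```python
# support_vdevs_sketch.py -- comment 13's Optane layout added to an
# existing pool, with placeholder device names.
import subprocess

def sh(*args: str) -> None:
    subprocess.run(args, check=True)

# Mirrored SLOG (ZIL): tiny, latency-critical, only helps sync writes.
sh("zpool", "add", "tank", "log", "mirror", "nvme0n1", "nvme1n1")

# L2ARC read cache: safe to run unmirrored, it holds no unique data.
sh("zpool", "add", "tank", "cache", "nvme2n1")

# Special vdev for metadata/small blocks: MUST be redundant.
sh("zpool", "add", "tank", "special", "mirror", "nvme3n1", "nvme4n1")
```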
