
Quadruple your Mini PC Memory with VMware NVMe Memory Tiering!



4x your Mini PC Memory // VMware NVMe Memory Tiering

We take a deep dive into VMware’s new NVMe memory tiering feature, introduced in VMware vSphere 8.0 Update 3. NVMe memory tiering is going to be a game changer for home labs, since it lets us 4x the system memory of mini PCs that normally top out at 96 GB with DDR5 SODIMMs. Let’s look at upping your mini PC’s memory in just a few commands (the full sequence is sketched after the chapter list below)!
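To set expectations, the feature is gated behind a single kernel setting. Below is a minimal sketch based on the vSphere 8.0 Update 3 tech preview; run it from an SSH session on the ESXi host, and note that a reboot is required before it takes effect:

    # Enable the memory tiering kernel setting (tech preview in vSphere 8.0 U3)
    esxcli system settings kernel set -s MemoryTiering -v TRUE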

Video’s sponsor:

Written post covering VMware NVMe memory tiering:

★ Subscribe to the channel:
★ My blog:
★ Twitter:
★ LinkedIn:
★ Github:
★ Facebook:
★ Discord:
★ Pinterest:

Introduction – 0:00
A word about the Video sponsor – 0:39
VMware proving they are a leader – 2:22
NVMe memory tiering and what it is – 2:36
Not a dumb paging system – 3:00
A disclaimer about it being tech preview – 3:57
Memory tiering notes – 4:26
What types of workloads are a good fit? – 4:45
Unsupported VMs that you can’t run with NVMe memory tiering – 6:05
Networking disclaimer with memory tiering – 7:00
Booting VMware ESXi without memory tiering – 8:18
Running the commands for memory tiering – 9:39
Finding the identifier of your NVMe drive – 10:09
Viewing partitions on the NVMe drive you want to use – 11:15
Deleting partitions – 11:42
The command to set the NVMe drive as an NVMe memory tiering device – 12:25
Listing the disks allocated as a tier device – 12:45
Setting the percentage of memory tiering with NVMe – 13:20
Rebooting the VMware ESXi host – 15:00
Booting the mini PC with NVMe memory tiering enabled – 15:13
Memory increases to 468.3 GB – 15:38
The implications of NVMe memory tiering for home labs – 15:54
Wrapping up – 16:43
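For anyone who wants the commands without scrubbing through the video, the walkthrough above maps roughly to the sequence below. This is a hedged sketch based on the vSphere 8.0 Update 3 tech preview: the NVMe device path is illustrative (substitute your own identifier), it assumes the MemoryTiering kernel setting from the sketch earlier is already enabled, and claiming a drive as a tier device destroys any data on it:

    # Find the identifier of the NVMe drive you want to dedicate to tiering
    esxcli storage core device list

    # View any existing partitions on that drive (device path is illustrative)
    partedUtil getptbl /vmfs/devices/disks/t10.NVMe____EXAMPLE

    # Delete leftover partitions (here, partition 1) -- this wipes the drive
    partedUtil delete /vmfs/devices/disks/t10.NVMe____EXAMPLE 1

    # Claim the drive as an NVMe memory tiering device
    esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____EXAMPLE

    # Confirm the drive is now listed as a tier device
    esxcli system tierdevice list

    # Size the NVMe tier as a percentage of installed DRAM (400 = 4x DRAM)
    esxcli system settings advanced set -o /Mem/TierNvmePct -i 400

    # Reboot the host so tiering takes effect
    esxcli system shutdown reboot -r "Enable NVMe memory tiering"

With 96 GB of DRAM and the tier percentage at 400, the host comes back up reporting roughly the 468.3 GB shown in the video, once system overhead is accounted for.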

20 Comments

  1. Talk about kicking the can down the road; so much technical debt. Great if you are still nursing a herd of low-performance systems that nobody wants to modernize or kill.
    Server 2003 will never die.

  2. I understand that the bills have to be paid somehow to keep the servers running optimally. But come on, advertising a product for home labbers and then having the starting price be $10k? Really? Let's just be real, no homelabber is going to buy that product.

  3. Some PHB is going to learn about this and spec less real RAM in favor of NVMe because "we are saving tens of thousands per system".
    Memory tiering needs to be a native feature of the operating system, not black magic applied surreptitiously to VMs.
    And as others have mentioned, this was the use case for Optane, which is dead.
    And of course this is going to eat up and destroy NVMe drives, as the wear leveling won't know how to cope.

    I'd like to see RAM-to-PCIe bridges… so that you could put in something like last-generation RAM as a pseudo NVMe drive. Make good use of that DDR4 that is mostly obsolete… but would be great for adding second-tier memory to beefy servers. But we already have CCIX and CXL… so even that idea is obsolete.

    There has to be more to this technology than is covered in this video. I would expect that a guest addition/driver would go a long way toward optimizing VM performance in a tiered memory scenario.

    But the end result is that there is really no substitute for RAM.

  4. WOW, back to the future. Back in the 1970s, when core memory was physically large and expensive, high-paging applications were allocated to high-speed disk drives and low-paging applications were allocated to standard-speed disk drives. So not new, just reinventing the wheel.

  5. Thank you for the video. This sounds like swap or page-file virtual memory rebranded as "Memory Tiering". I guess since NVMe now reaches speeds of 17 GBps, the same as older RAM, it might work better? Sounds like 40-year-old technology with a new name. But again, thank you for the video. I wonder, if you "merged" a few NVMe drives as RAID 0, whether the increased speed would translate to something closer to RAM speed… something to try….

  6. I'm wondering, if this is used in prod, whether server mfgs will support it from a warranty perspective due to the potential impact on the TBW and DWPD of the NVMe drives, and I assume it will become much more critical to monitor the SMART stats of the drive. If you have static pages in tiered memory, then it could make sense, but if you have a lot of memory-write-intensive workloads using tiered memory, over time that could be problematic.

  7. Shame Optane isn't still a thing; it would have been perfect for this. I'd still be interested to see the performance of Optane vs. consumer NVMe for this use case, if there's some way to benchmark it.

  8. Sounds similar to NVDIMMs, but over the standard PCIe bus rather than memory slots/channels. I think the cells will need to be a bit more wear-resistant in this case, like Optane.

  9. Any home lab can get ESXi 8 U3, vCenter 8, and the whole vSphere 8 VCF managed environment for $210 a year. Seriously, join VMUG Advantage. VMware Workstation 17 Pro is also free.

  10. I bought a second MS-01 just to try this out last week. So far, so good. BTW, I left a comment on your blog a few days ago regarding tiering, if you haven't seen it.

  11. Credit where credit is due: that is some pretty cool tech. However, it's VMware, and a lot of us have moved to Proxmox. I wonder if they have applied for patents on this?
