
Yellow Bricks

by Duncan Epping


vSAN and the component limits!

Duncan Epping · Feb 10, 2026 · 2 Comments

Over the last few years, I haven’t had many discussions about component limits, but recently these discussions have been popping up more frequently. If you ask a customer what the component limit of vSAN is, some may say 9,000 per host (OSA), others will say 27,000 per host (ESA), and some may even know the component limit per cluster. (Documented here, and here.) However, there’s one critical limit that most people don’t tend to think about. For this post, I am going to focus on vSAN ESA.

As mentioned, there is a host limit and a cluster limit, but there is also a per-device limit. A common mistake is to assume that the cluster limit and the host limit are fixed limits by themselves. However, there’s a dependency here: each device also has a limit. With vSAN ESA, a single device can hold at most 3000 data components and 3000 metadata components. This is what vSAN ESA supports today (vSAN 9.0). Let’s focus on the data components (the capacity leg) for now. This also means that if you have a host with 8 devices or fewer, your maximum number of components is not 27,000, but rather “the number of devices * 3000“. In other words, if you have a host with one NVMe device for vSAN ESA, the maximum number of components for that host is 3000; if you have a host with two devices, the maximum is 6000, and so on.
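To make the effect of that per-device limit concrete, here is a minimal back-of-the-envelope sketch in Python (illustrative only, not an official sizing tool); the 3000 data components per device and 27,000 per host are the ESA limits mentioned above.

# Back-of-the-envelope sketch: the effective data-component limit of a vSAN ESA
# host is capped by both the per-host limit and the per-device limit.
PER_HOST_LIMIT = 27000        # vSAN ESA per-host component limit
PER_DEVICE_DATA_LIMIT = 3000  # data components per capacity device (ESA)

def effective_host_limit(num_devices: int) -> int:
    """Effective data-component limit for a host with num_devices capacity devices."""
    return min(PER_HOST_LIMIT, num_devices * PER_DEVICE_DATA_LIMIT)

for devices in (1, 2, 4, 8, 9):
    print(f"{devices} device(s): {effective_host_limit(devices)} data components")
# Only at 9 or more capacity devices does the host actually reach 27,000.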

So why is this worth writing about? Well, if you take the number of components per object into consideration and multiply that by the typical number of objects, you will quickly understand why. Let’s assume you use RAID-5 with ESA in a 4+1 configuration; this results in at least 5 components per object. If you have multiple disks per VM, you will easily end up with 35-40 components per VM. This means that if you take that 3000 limit and divide it by 35-40 components, you are talking about roughly 75-85 VMs per device. Now, of course, you will have multiple hosts, so this number scales with the number of hosts and devices, but hopefully this illustrates why it is important to consider this maximum.
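For a rough per-device VM estimate, the same kind of sketch applies; the 35-40 components per VM figure is the ballpark from the paragraph above, and these numbers are illustrative, not a sizing recommendation.

def vms_per_device(components_per_vm: int, per_device_data_limit: int = 3000) -> int:
    # Rough estimate: how many VMs fit within one device's data-component budget.
    return per_device_data_limit // components_per_vm

print(vms_per_device(40))  # 75
print(vms_per_device(35))  # 85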

Then the question that remains is: why are these questions being raised now? Well, now that customers are becoming more comfortable with vSAN ESA, we are also seeing more exotic configurations. We are seeing customers deploy very large capacity devices, but only a limited number of them. Where in the past customers would use 6-8 devices per host with a capacity of 1-2TB each, I am now more and more often getting inquiries about configurations with a single 15TB NVMe device, or two 7.xTB devices. You can imagine that when you do the math, the limit of 3000 data components per device is far easier to reach than the 27,000 per host component limit.

So please, if you are planning for a new vSAN cluster, take these maximums into consideration. Do not just think about capacity; there’s more to take into account!

#111 – VMUG Connect, coming to a city near you soon! (Featuring Brad Tompkins)

Duncan Epping · Feb 2, 2026 · Leave a Comment

In 2025, the first VMUG Connect event was held in the US, and as it was a big success, VMUG decided to repeat that formula in Amsterdam, Minneapolis, Toronto, Dallas, and Orlando. I invited Brad Tompkins to the show to explain what these VMUG events are all about and also provide some insights into the world of VMUG.

During this episode, Brad went over the agenda for VMUG Connect Amsterdam and explained what VMUG itself is all about. Brad also discussed the different types of events that VMUG organizes, and had a call to action: participate! I could not agree more. I truly believe that VMUG is a great way to grow your network, grow your skillset through, for instance, public speaking, or simply grow your knowledge by attending a session or having discussions on anything with your peers.

Listen to the full episode via Spotify (https://bit.ly/3M0Jxga), Apple (https://apple.co/45Fhjye), or online via Yellow-Bricks here. Or simply watch the video on YouTube!

I have the pleasure and honor of hosting a keynote session in Amsterdam at Connect in March. I hope to see all of you there. For more details on pricing, locations, and more, make sure to visit the VMUG Connect website. Also, if you are planning on attending, I would highly encourage you to book ASAP, as we’ve seen Explore on Tour selling out everywhere…

 

Can I replicate, or snapshot, my vSAN Stretched Cluster Witness appliance for fast recovery?

Duncan Epping · Jan 20, 2026 · Leave a Comment

I’ve been seeing this question pop up more frequently: can I replicate or snapshot my vSAN Stretched Cluster Witness appliance for fast recovery? Usually, people ask this question because they cannot adhere to the 3-site requirement for a vSAN Stretched Cluster. So by setting up some kind of replication mechanism with a low RPO, they try to mitigate this risk.

I guess the question stems from a lack of understanding of what the witness does. The witness provides a quorum mechanism, which helps determine which site has access to the data in the case of a network (ISL) failure between the data locations.

So why can the Witness Appliance not be snapshotted or replicated then? Well, in order to provide this quorum mechanism, the Witness Appliance stores a witness component for each object. This is not per site, or per VM, but for every object… So if you have a VM with multiple VMDKs, you will have multiple witness components for that VM stored on the witness appliance. Each witness component holds metadata and, through a log sequence number, knows which copy of the object holds the most recent data. This is where the issue arises. If you revert a Witness Appliance to an earlier point in time, the witness components also revert to an earlier point in time and will have a different log sequence number than expected. This results in vSAN being unable to make the object available to the surviving site, or the site that is expected to hold quorum.
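To illustrate the point, here is a heavily simplified conceptual sketch in Python; this is not how vSAN is actually implemented, it just models the “different log sequence number than expected” situation described above.

# Conceptual model only: a witness component that was reverted to an older
# snapshot carries a stale log sequence number (LSN), so it no longer matches
# what the surviving site expects, and quorum cannot safely be granted.
def can_make_object_available(expected_lsn: int, witness_lsn: int) -> bool:
    return witness_lsn == expected_lsn

print(can_make_object_available(expected_lsn=120, witness_lsn=120))  # True: witness is current
print(can_make_object_available(expected_lsn=120, witness_lsn=95))   # False: reverted witness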

So in short, should you replicate or snapshot the Witness Appliance? No!

 

#110 – vSAN over Fibre Channel featuring Rakesh Radhakrishnan!

Duncan Epping · Jan 19, 2026 · 3 Comments

After Rakesh and I shared the news at Explore, and Explore On Tour, that vSAN would potentially be able to use Fibre Channel as a transport layer, the phone has been ringing nonstop. It was time for me to invite Rakesh and ask him to explain what vSAN over FC is all about. In this episode Rakesh covers the why and the what, but unfortunately the when is still a question. Nevertheless, this episode is a must-listen (or must-watch), as I decided to also make it available on YouTube this time. Pick what you prefer! Listen on Spotify (https://bit.ly/3YMdlzP), Apple (https://apple.co/45OncsI), listen via the embedded player below, or just click the video!

Discussed papers/links:

  • vSAN vs Traditional Storage video
  • vSAN beats all-flash top storage system

Playing around with Memory Tiering, are my memory pages tiered?

Duncan Epping · Dec 18, 2025 · 1 Comment

There was a question on VMTN about Memory Tiering performance, and how you can check whether pages were tiered. I haven’t played around with Memory Tiering too much, so I noted down for myself what I needed to do on each host in order to enable it. Note: if a command contains a path and you want to do this in your own environment, you need to change the path and device name accordingly. The question was whether memory pages were tiered or not, so I dug up the command that allows you to check this on a per-host level. It is at the bottom of this article for those who just want to skip to that part.

Now, before I forget, this is probably worth mentioning, as it is something many people don’t seem to understand: memory tiering only tiers cold memory pages. Active pages are not moved to NVMe, and on top of that, pages are only tiered when there’s memory pressure! So if you don’t see any tiering, it could simply be that you are not under any memory capacity pressure. (Why move pages to a lower tier when there’s no need?)

List all storage devices via the CLI:

esxcli storage core device list

Create memory tiering partition on an NVMe device:

esxcli system tierdevice create -d=/vmfs/devices/disks/eui.1ea506b32a7f4454000c296a4884dc68

Enable Memory Tiering on a host level, note this requires a reboot:

esxcli system settings kernel set -s MemoryTiering -v TRUE

How is Memory Tiering configured in terms of the DRAM to NVMe ratio? The ratio is expressed as a percentage: a 4:1 DRAM to NVMe ratio would be 25%, and 1:1 would be 100%. So if you have it set at 4:1 with 512GB of DRAM, you would use at most 128GB of the NVMe device, regardless of the size of the device. You can check the configured percentage as follows:

esxcli system settings advanced list -o /Mem/TierNvmePct
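As a quick sanity check of that ratio math, here is a tiny Python sketch; /Mem/TierNvmePct is the setting listed by the command above, and the 512GB example matches the one in the text.

# The /Mem/TierNvmePct setting expresses the NVMe tier size as a percentage of
# DRAM: 25% corresponds to a 4:1 DRAM:NVMe ratio, 100% to 1:1.
def max_nvme_tier_gb(dram_gb: float, tier_nvme_pct: float) -> float:
    # Maximum amount of NVMe used for tiering, regardless of device size.
    return dram_gb * tier_nvme_pct / 100

print(max_nvme_tier_gb(512, 25))   # 128.0 GB at a 4:1 ratio
print(max_nvme_tier_gb(512, 100))  # 512.0 GB at a 1:1 ratio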

Is memory tiered or not? Find out all about it via memstats!

memstats -r vmtier-stats -u mb

Want to show a select number of metrics?

memstats -r vmtier-stats -u mb -s name:memSize:active:tier1Target:tier1Consumed:tier1ConsumedPeak:consumed

So what would the output look like when memory tiering is happening? I removed a bunch of the metrics just to keep it readable; “tier1” is the NVMe device, and as you can see, each VM currently has several MBs worth of memory pages on NVMe.

 VIRTUAL MACHINE MEMORY TIER STATS: Wed Dec 17 15:29:43 2025
 -----------------------------------------------
   Start Group ID   : 0
   No. of levels    : 12
   Unit             : MB
   Selected columns : name:memSize:tier1Consumed

----------------------------------------
           name    memSize tier1Consumed
----------------------------------------
      vm.533611       4096            12
      vm.533612       4096            34
      vm.533613       4096            24
      vm.533614       4096            11
      vm.533615       4096            25
----------------------------------------
          Total      20480           106
----------------------------------------
