Azure Local is GA with disaggregated storage

It became official on April 24th with Azure Local build 2604.

You now have 3 options with Azure Local:

  • Add supported Fibre Channel (FC) HBAs to an existing Azure Local cluster and create an external NTFS Cluster Shared Volume (CSV) for user workloads.
  • Deploy a new hyperconverged (HCI) Azure Local cluster that has FC HBAs and is connected to external storage.
  • Deploy a new disaggregated Azure Local cluster that has only external storage, with a 30 GB and a 300 GB CSV for infrastructure resources.

In this post I am going to focus on disaggregated deployments, which Microsoft documents here. Disaggregated is a blessing in a couple of ways. First, not having to buy two to four local devices per node, when you intend for all user workloads to be placed on an Everpure FlashArray, is meaningful. A year ago maybe a couple of thousand dollars were at stake, but today (May 2026) a PCIe card with two M.2 slots and two 1 TB M.2 SSDs can set you back $5,000 per server, plus many months of lead time. Across 100 servers, that is half a million dollars you no longer have to spend. Microsoft is also looking out for customers' existing investments: if you have significant local storage, utilize both!
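The savings math above is simple enough to sanity-check. Here is a quick sketch using the figures from this post (the 100-server fleet is the example given, not a benchmark):

```python
# Back-of-the-envelope cost of the local M.2 hardware you can skip
# with a disaggregated deployment (figures from the example above).
cost_per_server = 5_000   # PCIe carrier card + two 1 TB M.2 SSDs, May 2026 pricing
servers = 100

avoided_cost = cost_per_server * servers
print(f"${avoided_cost:,}")  # $500,000
```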

Another large benefit of disaggregated deployments is that, by omitting hyperconverged (HCI) storage, scalability increases four-fold. Storage Spaces Direct (S2D) today has a maximum cluster size of 16 nodes, though in practice most customers see an effective cluster size of 8 nodes or less. Disaggregated deployments avoid this limitation: Azure Local then allows the maximum supported size for a Windows Server Failover Cluster, 64 nodes. I don't tend to see production clusters that large, but I do have dozens of customers with 30+ node clusters on VMware, and they were really hesitant to increase their total cluster count 3-6 fold.
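To make the consolidation argument concrete, here is a small sketch comparing how many clusters a fleet needs under each limit (the 192-node fleet size is a hypothetical figure for illustration, not from this post):

```python
import math

# How many clusters does it take to host a given node count?
# S2D's hard limit is 16 nodes, with ~8 as a common effective limit in practice;
# disaggregated Azure Local inherits the 64-node failover-cluster maximum.
def clusters_needed(total_nodes: int, max_per_cluster: int) -> int:
    return math.ceil(total_nodes / max_per_cluster)

nodes = 192  # hypothetical fleet size
print(clusters_needed(nodes, 64))  # 3 clusters, disaggregated
print(clusters_needed(nodes, 16))  # 12 clusters at the S2D hard limit
print(clusters_needed(nodes, 8))   # 24 clusters at the common effective limit
```

The same fleet that needs a dozen or more S2D clusters fits in a handful of disaggregated ones, which is the consolidation those VMware customers are used to.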

I have laid out the flexibility in deployments above, with and without HCI storage. The following section will focus on the value and benefits a FlashArray can provide to an Azure Local deployment.

  • The storage capacity and performance can be upgraded independently of the compute cluster.
  • Direct Flash Modules are dense, with almost an order of magnitude greater lifespan than standalone SSDs, because the FlashArray treats all of the NAND it can reach as a single unit. There are fewer tiny SSD silos, each with its own flash translation layer doing garbage collection and reducing lifespan.
  • Simplicity. Everpure is built on reducing complexity: there is one pool of storage, without silos. This one is a draw in comparison to HCI, but so many of Everpure's competitors are quite complicated.
  • Non-Disruptive Upgrade (NDU). With Purity everything is non-disruptive: upgrade the chassis, controllers, NVRAM (on R4 and older), DFMs, PSUs, and the Purity software without disruption.
  • Evergreen. Whether you prefer CAPEX, OPEX, or something in between, Everpure has an Evergreen solution that protects your purchase, and enables flexibility to respond to business changes.

With respect to features that are directly supported and can be leveraged by the Azure Local cluster, those features include:

  • Asynchronous Protection Group (pgroup) Snapshot Replication. Repatriate the snapshot back to the primary FlashArray to recover things you don’t keep on the primary storage, or copy the pgroup snapshot to a volume at the target FlashArray that is connected to a different Azure Local cluster.
  • ActiveDR. Continuous replication where the DR array is approximately one second behind. Simply promote the DR pod, import the VMs, hydrate them in the Azure portal, and you are running.
  • Uniform ActiveCluster. This configuration is where all of the Azure Local nodes are connected to both storage arrays. Should there be an issue with one of the FlashArrays, Windows MPIO simply utilizes the paths to the other array, without having to fail over any virtual machines. Should an entire site go down, the hypervisor responds by bringing all of the resources online, with no data loss.
  • Volume Shadow Copy Service (VSS). Take application-consistent snapshots of the FlashArray CSV. If you need to restore to a different Azure Local cluster, utilize hydration and all of the resources will be reflected in the Azure Portal.
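As a rough illustration of how the asynchronous pgroup snapshots above might be driven from a script, here is a minimal Python sketch that builds (but does not send) a REST call for a hypothetical FlashArray-style endpoint. The endpoint path, payload shape, and auth header are assumptions for illustration, not a documented Everpure API:

```python
# Hypothetical sketch: build the REST request to trigger an on-demand
# protection-group (pgroup) snapshot. The URL path, header name, and
# parameter name are assumptions, not a documented Everpure API.
def build_pgroup_snapshot_request(array_host: str, pgroup: str, api_token: str) -> dict:
    """Build (but do not send) a pgroup snapshot request for illustration."""
    return {
        "method": "POST",
        "url": f"https://{array_host}/api/2.x/protection-group-snapshots",
        "headers": {"x-auth-token": api_token},
        # source_names selects which pgroup to snapshot.
        "params": {"source_names": pgroup},
    }

req = build_pgroup_snapshot_request("array01.example.com", "azloc-csv-pgroup", "TOKEN")
print(req["url"])
```

In practice you would hand a request like this to an HTTP client on a schedule, then layer the VSS integration on top for application consistency.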

My next post will include screenshots and demo videos outlining some of the features customers find useful in reducing their administrative burden.