NSX 6.2 Edge Services Gateway choices

One of the biggest decisions when deploying NSX is determining which ESG deployment model to use. For my current client we had two options for a highly resilient deployment; here is an outline of the proposed design.

Option One: Stateful Active/Standby HA Model

This is the redundancy model where a pair of NSX Edge Services Gateways is deployed. One Edge is in Active mode (i.e. it actively forwards traffic and provides the other logical network services), while the second unit is in Standby state, waiting to take over should the active Edge fail.

Health and state information for the various logical network services is exchanged between the active and standby NSX Edges via an internal communication protocol. By default, the first vNIC of type “Internal” deployed on the Edge is used to establish this communication, but the user can also explicitly specify which Edge internal interface to use. This degree of control over the interface is important because it allows choosing between a VLAN-backed port-group and a logical switch (VXLAN-backed port-group) as the communication channel.

Note: it is mandatory to have at least one internal interface configured on the NSX Edge to be able to exchange keepalives between the Active and Standby units. Deleting the last Internal interface would break this HA model.

The diagram below highlights how the Active NSX Edge is active from both a control and a data plane perspective. This is why it can establish routing adjacencies to the physical router (on a common “External VLAN” segment) and to the DLR Control VM (on a common “Transit VXLAN” segment). Traffic between logical segments connected to the DLR and the physical infrastructure always flows through the active NSX Edge appliance only.

NSX_OPT1

If the Active NSX Edge fails (for example because of an ESXi host failure), both control and data planes must be activated on the Standby unit that takes over the active duties.

NSX_OPT1_2

Deployment considerations about the behavior of this HA model:
The Standby NSX Edge relies on the expiration of a “Declare Dead Time” timer to detect the failure of its Active peer. This timer defaults to 15 seconds and can be tuned down to a minimum of 6 seconds (via UI or API call); it is the main factor influencing the traffic outage experienced with this HA model.
Once the Standby is activated, it starts all the services that were running on the failed Edge (routing, firewalling, etc.). While the services are restarting, traffic can still be routed using the NSX Edge forwarding table, which was kept in sync between the Active and Standby units. The same applies to the other logical services, since state is also synchronized for the FW, LB, NAT, etc.

Note: for the LB service only the persistence state is synchronized between the Active and the Standby units.
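The failure-detection behavior described above can be sketched in a few lines. This is purely illustrative (not NSX code); the function name and signature are this example's own, but the timer values mirror those in the text:

```python
# Illustrative sketch of the Active/Standby failure-detection logic.
# NOT the NSX implementation; timer values match the defaults in the text.

DECLARE_DEAD_TIME = 15  # seconds; default, tunable down to 6 via UI or API

def should_take_over(last_keepalive_at, now, dead_time=DECLARE_DEAD_TIME):
    """Return True once the Standby should declare its Active peer dead,
    i.e. no keepalive has been seen for `dead_time` seconds."""
    return (now - last_keepalive_at) >= dead_time

# With the default timer, a failure at t=0 is detected at t=15; tuning the
# timer down to its 6-second minimum shortens the outage window accordingly.
```

This makes the trade-off concrete: the dead timer bounds how long traffic is black-holed before the Standby takes over.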

There are other considerations, but the behavior described above is the most significant aspect of this deployment model for my client, as some of their applications will fail due to timeouts.

Option Two: ECMP HA Model

NSX_OPT2

In the ECMP model, the DLR and the NSX Edge have been enhanced to support up to 8 equal-cost paths in their forwarding tables. Focusing for the moment on the ECMP capabilities of the DLR, this means that up to 8 active NSX Edges can be deployed at the same time, and all of the available control and data planes will be fully utilized (see the diagram above).

This HA model provides two main advantages:

  1. An increased available bandwidth for north-south communication (up to 80 Gbps per deployment).
  2. A reduced traffic outage (in terms of the percentage of affected flows) for NSX Edge failure scenarios.

Notice from the diagram above that traffic flows are very likely to follow an asymmetric path, where the north-to-south and south-to-north legs of the same communication are handled by different NSX Edge Gateways. The DLR distributes south-to-north traffic flows across the equal-cost paths based on a hash of the source and destination IP addresses of the original packet sourced by the workload in the logical space. How the physical router distributes north-to-south flows depends instead on the specific hardware capabilities of that device.
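The hash-based path selection the DLR performs can be illustrated with a minimal sketch. The SHA-256 hash and the edge names here are stand-ins; the actual DLR hashing algorithm is internal to NSX:

```python
import hashlib

def pick_edge(src_ip, dst_ip, edges):
    """Choose an egress NSX Edge for a flow by hashing its source and
    destination IPs. A stand-in for the DLR's ECMP hash, which is
    internal to NSX; SHA-256 is used here purely for illustration."""
    digest = hashlib.sha256(f"{src_ip}->{dst_ip}".encode()).digest()
    return edges[int.from_bytes(digest[:4], "big") % len(edges)]

edges = [f"edge-{i}" for i in range(1, 9)]  # up to 8 equal-cost paths

# The hash is deterministic, so every packet of a given flow exits via
# the same Edge, preserving packet ordering within that flow:
assert pick_edge("10.0.0.5", "198.51.100.7", edges) == \
       pick_edge("10.0.0.5", "198.51.100.7", edges)
```

The key property is determinism per flow: load is spread across Edges flow-by-flow, while any single flow always uses one path.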

Traffic recovery after an NSX Edge failure happens in a similar fashion to what was described for the Active/Standby model: the DLR and the physical routers must quickly time out their adjacencies to the failed unit and re-hash the traffic flows across the remaining active NSX Edge Gateways.

NSX_OPT2_2

Deployment considerations specific to this HA model:

The length of the outage is determined by how fast the physical router and the DLR Control VM can time out their adjacencies to the failed Edge Gateway. It is therefore recommended to tune the hello/hold timers aggressively (for example 1/3 seconds).

In contrast to the Active/Standby model, the failure of one NSX Edge now affects only a subset of the north-south flows (those that were handled by the failed unit). This is an important factor contributing to the overall better recovery behavior of the ECMP HA model.
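A rough illustration of why only a subset of flows is affected (again using a stand-in hash and hypothetical edge names, not NSX internals): with 8 active Edges and a well-distributed hash, a single Edge failure impacts roughly one-eighth of flows, rather than 100% as in the Active/Standby model.

```python
import hashlib

def pick_edge(src_ip, dst_ip, edges):
    # Stand-in hash for the DLR's ECMP path selection (illustrative only).
    digest = hashlib.sha256(f"{src_ip}->{dst_ip}".encode()).digest()
    return edges[int.from_bytes(digest[:4], "big") % len(edges)]

edges = [f"edge-{i}" for i in range(1, 9)]          # 8 active Edges
flows = [(f"10.0.0.{h}", f"203.0.113.{h % 50}") for h in range(1, 201)]

placement = {flow: pick_edge(*flow, edges) for flow in flows}

# If edge-3 fails, only the flows that hashed to it lose traffic until the
# adjacencies time out and traffic is re-hashed; with a well-distributed
# hash that is roughly 1/8 of the flows, not all of them.
affected = [flow for flow in flows if placement[flow] == "edge-3"]
```

This is the core of the ECMP model's smaller blast radius: the outage window still depends on adjacency timeouts, but it applies to a fraction of flows only.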

When deploying multiple active NSX Edges, there is no state synchronization for the various logical services available on the NSX Edge Services Gateway. It is therefore recommended to rely on the DFW and one-arm load balancer deployments, and to run only routing services on the ECMP NSX Edge Gateways.

Conclusion

The ECMP HA model is the ESG deployment of choice for my specific client. The length of the outage is the prime consideration in the technical choice between the two ESG deployment types, and with ECMP the team can still provide load-balancing services using one-arm load balancer configurations. With this in mind, Equal Cost Multi-Path (ECMP) routing will be implemented between the NSX Edge Services Gateways and the Datacentre core switches. Because the Datacentre core switches already route for several internal datacentre VLANs, a Virtual Routing and Forwarding (VRF) instance will likely be required to ensure separation.

The Datacentre core VRF will be responsible for routing traffic either to the External Load Balancers for egress to the internet, or to the Internal networks for data centre server access or user access over the WAN.

With the ESGs and Datacentre core split between SPDC and WCDC, some attention may be required to traffic patterns to minimize tromboning between the two sites. Given the dark fibre links and high available bandwidth, a balance should be struck between implementation complexity and the amount of traffic traversing the sites.
