All Cisco Meraki security appliances are equipped with SD-WAN capabilities that enable administrators to maximize network resiliency and bandwidth efficiency. This guide introduces the various components of Meraki SD-WAN and the possible ways in which to deploy a Meraki AutoVPN architecture to leverage SD-WAN functionality, with a focus on the recommended deployment architecture.
Software-defined WAN (SD-WAN) is a suite of features designed to allow the network to dynamically adjust to changing WAN conditions without the need for manual intervention by the network administrator. By providing granular control over how certain traffic types respond to changes in WAN availability and performance, SD-WAN can ensure optimal performance for critical applications and help to avoid disruptions of highly performance-sensitive traffic, such as VoIP. For more information, click here.
Before deploying SD-WAN, it is important to understand several key concepts.
All MXs can be configured in either NAT or VPN concentrator mode. There are important considerations for both modes. For more detailed information on concentrator modes, click here.
In this mode the MX is configured with a single Ethernet connection to the upstream network. All traffic will be sent and received on this interface. This is the recommended configuration for MX appliances serving as VPN termination points into the datacenter.
It is also possible to take advantage of the SD-WAN feature set with an MX configured in NAT mode acting as the VPN termination point in the datacenter.
There are several options available for the structure of the VPN deployment.
In this configuration, branches will only send traffic across the VPN if it is destined for a specific subnet that is being advertised by another MX in the same Dashboard organization. The remaining traffic will be checked against other available routes, such as static LAN routes and third-party VPN routes, and if not matched will be NATed and sent out the branch MX unencrypted.
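This split-tunnel route lookup can be illustrated with a short Python sketch using the standard `ipaddress` module. The route tables and addresses below are hypothetical examples, not the MX's actual data structures:

```python
import ipaddress

# Hypothetical route tables for illustration only.
AUTOVPN_ROUTES = [ipaddress.ip_network("10.1.0.0/16")]        # advertised by other MXs
STATIC_LAN_ROUTES = [ipaddress.ip_network("192.168.50.0/24")]
THIRD_PARTY_VPN_ROUTES = [ipaddress.ip_network("172.20.0.0/16")]

def classify(dst_ip: str) -> str:
    """Return how a split-tunnel branch MX would forward traffic to dst_ip."""
    dst = ipaddress.ip_address(dst_ip)
    if any(dst in net for net in AUTOVPN_ROUTES):
        return "AutoVPN tunnel"
    if any(dst in net for net in STATIC_LAN_ROUTES):
        return "static LAN route"
    if any(dst in net for net in THIRD_PARTY_VPN_ROUTES):
        return "third-party VPN"
    return "NAT out local uplink"   # unmatched traffic leaves unencrypted

print(classify("10.1.2.3"))   # AutoVPN tunnel
print(classify("8.8.8.8"))    # NAT out local uplink
```

The key point the sketch captures is the ordering: AutoVPN subnets are checked first, then other available routes, and only unmatched traffic is NATed out the local uplink.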
In full tunnel mode all traffic that the branch or remote office does not have another route to is sent to a VPN hub.
In a hub and spoke configuration, the MX security appliances at the branches and remote offices connect directly to specific MX appliances and will not form tunnels to other MX or Z1 devices in the organization. Communication between branch sites or remote offices is available through the configured VPN hubs. This is the recommended VPN topology for most SD-WAN deployments.
It is also possible to use a VPN "mesh" configuration in an SD-WAN deployment.
In a mesh configuration, an MX appliance at the branch or remote office is configured to connect directly to any other MXs in the organization that are also in mesh mode, as well as any spoke MXs that are configured to use it as a hub.
Deploying one or more MXs to act as VPN concentrators in additional datacenters provides greater redundancy for critical network services. In a dual- or multi-datacenter configuration, identical subnets can be advertised from each datacenter with a VPN concentrator mode MX.
In a DC-DC failover design, a spoke site will form VPN tunnels to all VPN hubs that are configured for that site. For subnets that are unique to a particular hub, traffic will be routed directly to that hub so long as tunnels between the spoke and hub are established successfully. For subnets that are advertised from multiple hubs, spoke sites will send traffic to the highest priority hub that is reachable.
When configured for high availability (HA), one MX serves as the master unit and the other MX operates in a spare capacity. All traffic flows through the master MX, while the spare operates as an added layer of redundancy in the event of failure.
Failover between MXs in an HA configuration leverages VRRP heartbeat packets. These heartbeat packets are sent from the Primary MX to the Secondary MX out of the single uplink in order to indicate that the Primary is online and functioning properly. As long as the Secondary is receiving these heartbeat packets, it remains in the spare state. If the Secondary stops receiving these heartbeat packets, it will assume that the Primary is offline and will transition into the master state. In order to receive these heartbeats, both VPN concentrator MXs should have uplinks on the same subnet within the datacenter.
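A toy model of the secondary MX's heartbeat-driven state transition is sketched below. The timeout value is an assumption chosen for illustration; Meraki does not publish the actual VRRP timer internals:

```python
class SpareMX:
    """Toy model of the secondary MX's VRRP heartbeat logic.
    The timeout is an assumed value, not Meraki's actual timer."""
    HEARTBEAT_TIMEOUT = 3.0   # assumed seconds without a heartbeat before takeover

    def __init__(self):
        self.state = "spare"
        self.last_heartbeat = 0.0

    def receive_heartbeat(self, now: float):
        # A heartbeat means the primary is alive; stay (or return to) passive.
        self.last_heartbeat = now
        self.state = "spare"

    def tick(self, now: float):
        # No heartbeat within the timeout: assume the primary is down.
        if now - self.last_heartbeat > self.HEARTBEAT_TIMEOUT:
            self.state = "master"

mx = SpareMX()
mx.receive_heartbeat(now=0.0)
mx.tick(now=2.0)
print(mx.state)   # "spare": a heartbeat was seen recently
mx.tick(now=5.0)
print(mx.state)   # "master": heartbeats stopped, the spare takes over
```

The same logic also covers failback: when heartbeats resume, `receive_heartbeat` returns the unit to the spare state.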
Only one MX license is required for the HA pair, as only a single device is in full operation at any given time.
Connection monitor is an uplink monitoring engine built into every MX Security Appliance. The mechanics of the engine are described in this article.
The Meraki SD-WAN implementation comprises several key features, built atop our AutoVPN technology.
Prior to the SD-WAN release, AutoVPN tunnels would form only over a single interface. With the SD-WAN release, it is now possible to form concurrent AutoVPN tunnels over both Internet interfaces of the MX.
The ability to form and send traffic over VPN tunnels on both interfaces significantly increases the flexibility of traffic path and routing decisions in AutoVPN deployments. In addition to providing administrators with the ability to load balance VPN traffic across multiple links, it also allows them to leverage the additional path to the datacenter in a variety of ways using the built-in Policy-based Routing and Dynamic Path Selection capabilities of the MX.
Policy-based Routing allows an administrator to configure preferred VPN paths for different traffic flows based on their source and destination IPs and ports.
Dynamic Path Selection allows a network administrator to configure performance criteria for different types of traffic. Path decisions are then made on a per-flow basis based on which of the available VPN tunnels meet these criteria, which is determined using packet loss, latency, and jitter metrics that are automatically gathered by the MX.
Performance-based decisions rely on an accurate and consistent stream of information about current WAN conditions in order to ensure that the optimal path is used for each traffic flow. This information is collected via the use of performance probes.
The performance probe is a small payload (approximately 100 bytes) of UDP data sent over all established VPN tunnels every 1 second. MX appliances track the rate of successful responses and the time that elapses before receiving a response. This data allows the MX to determine the packet loss, latency, and jitter over each VPN tunnel in order to make the necessary performance-based decisions.
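The metrics derived from probe responses can be sketched as follows. The windowing and exact formulas here (mean latency, jitter as the mean absolute difference of consecutive round-trip times) are illustrative assumptions; the MX's actual smoothing is not public:

```python
from statistics import mean

def probe_stats(rtts):
    """Compute loss %, average latency, and jitter from a window of probe
    round-trip times (ms). None marks a probe that received no response.
    Illustrative math only, not the MX's exact implementation."""
    answered = [r for r in rtts if r is not None]
    loss_pct = 100.0 * (len(rtts) - len(answered)) / len(rtts)
    latency = mean(answered) if answered else float("inf")
    # Jitter approximated as mean absolute difference of consecutive RTTs.
    diffs = [abs(b - a) for a, b in zip(answered, answered[1:])]
    jitter = mean(diffs) if diffs else 0.0
    return loss_pct, latency, jitter

# Ten probes, one per second; two went unanswered.
samples = [20, 22, None, 21, 25, 23, None, 24, 22, 21]
loss, lat, jit = probe_stats(samples)
print(f"loss={loss}% latency={lat:.2f}ms jitter={jit:.2f}ms")
```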
This guide focuses on the most common deployment scenario, but is not intended to preclude the use of alternative topologies. The recommended SD-WAN architecture for most deployments is as follows:
This guide focuses on two key SD-WAN objectives:
Redundancy for critical network services
Dynamic selection of the optimal path for VoIP traffic
The traffic flow across VPN tunnels in an SD-WAN deployment consists of a few key decision points that determine how traffic is routed:
If tunnels are established on both interfaces, Dynamic Path Selection is used to determine which paths meet the minimum performance criteria for a particular traffic flow. Those paths are then evaluated against the policy-based routing and load balancing configurations.
For a more detailed description of traffic flow with an SD-WAN configuration, please see the appendix.
There are several important failover timeframes to be aware of:
| Service | Failover Time | Failback Time |
| --- | --- | --- |
| AutoVPN Tunnels | 30-40 seconds | 30-40 seconds |
| DC-DC Failover | 20-30 seconds | 20-30 seconds |
| Dynamic Path Selection | Up to 30 seconds | Up to 30 seconds |
| Warm Spare | 30 seconds or less | 30 seconds or less |
| WAN connectivity | 300 seconds or less | 15-30 seconds |
This section will outline the configuration and implementation of the SD-WAN architecture in the datacenter.
The Cisco Meraki Dashboard configuration can be done either before or after bringing the unit online.
Begin by configuring the MX to operate in VPN Concentrator mode. This setting is found on the Security Appliance > Configure > Addressing & VLANs page. The MX will be set to operate in NAT mode by default.
Next, configure the Site-to-Site VPN parameters. This setting is found on the Security Appliance > Configure > Site-to-site VPN page.
For the Name, specify a descriptive title for the subnet.
For the Subnet, specify the subnet to be advertised to other AutoVPN peers using CIDR notation.
NAT traversal can be set to either automatic or manual. See below for more details on these two options.
An example screenshot is included below:
Whether to use Manual or Automatic NAT traversal is an important consideration for the VPN concentrator.
Use manual NAT traversal when:
If manual NAT traversal is selected, it is highly recommended that the VPN concentrator be assigned a static IP address. Manual NAT traversal is intended for configurations in which all traffic for a specified port can be forwarded to the VPN concentrator.
Use automatic NAT traversal when:
If automatic NAT traversal is selected, the MX will automatically select a high numbered UDP port to source AutoVPN traffic from. The VPN concentrator will reach out to the remote sites using this port, creating a stateful flow mapping in the upstream firewall that will also allow traffic initiated from the remote side through to the VPN concentrator without the need for a separate inbound firewall rule.
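The stateful flow mapping described above can be modeled with a toy firewall: an outbound UDP flow creates a mapping, and return traffic is admitted only if it matches the reversed tuple of an existing flow. Addresses and ports below are hypothetical examples:

```python
class StatefulFirewall:
    """Toy stateful firewall model illustrating why automatic NAT traversal
    needs no separate inbound rule. Not a real firewall implementation."""
    def __init__(self):
        self.flows = set()

    def outbound(self, src, sport, dst, dport):
        # The outbound AutoVPN packet creates the flow mapping.
        self.flows.add((src, sport, dst, dport))

    def allows_inbound(self, src, sport, dst, dport):
        # Return traffic matches the reversed tuple of an existing flow.
        return (dst, dport, src, sport) in self.flows

fw = StatefulFirewall()
# The concentrator (10.0.0.5) sources AutoVPN from an automatically chosen
# high-numbered UDP port (51820 here is an arbitrary example).
fw.outbound("10.0.0.5", 51820, "203.0.113.9", 9350)
print(fw.allows_inbound("203.0.113.9", 9350, "10.0.0.5", 51820))   # True
print(fw.allows_inbound("198.51.100.7", 9350, "10.0.0.5", 51820))  # False: no mapping
```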
This section outlines the steps required to configure and implement warm spare (HA) for an MX Security Appliance operating in VPN concentrator mode.
When configured for high availability (HA), one MX is active, serving as the master, and the other MX operates in a passive, standby capacity. The VRRP protocol is leveraged to achieve failover. Please see here for more information.
High availability on MX Security appliances requires a second MX of the same model. The HA implementation is active/passive and will require the second MX also be connected and online for proper functionality.
High availability (also known as warm spare) can be configured from Security Appliance > Configure > Addressing & VLANs. Begin by setting Warm Spare to Enabled. Next, enter the serial number of the warm spare MX. Finally, select whether to use MX uplink IPs or virtual uplink IPs.
Use Uplink IPs is selected by default for new network setups. In order to properly communicate in HA, VPN concentrator MXs must be set to use the virtual IP (vIP).
Virtual IP (vIP)
The virtual uplink IPs option uses an additional IP address that is shared by the HA MXs. In this configuration, the MXs will send their cloud controller communications via their uplink IPs, but other traffic will be sent and received by the shared virtual IP address.
MX Security Appliances acting in VPN concentrator mode support advertising routes to connected VPN subnets via OSPF. This functionality is not available on MX devices operating in NAT mode.
An MX VPN concentrator with OSPF route advertisement enabled will only advertise routes via OSPF; it will not learn OSPF routes.
When spoke sites are connected to the VPN concentrator, the routes to spoke sites are advertised using an LS Update message. These routes are advertised as type 2 external routes.
In order to configure OSPF route advertisement, navigate to the Security Appliance > Configure > Site-to-Site VPN page. From this page:
In the datacenter, an MX Security Appliance can operate using a static IP address or an address from DHCP. MX appliances will attempt to pull DHCP addresses by default. It is highly recommended to assign static IP addresses to VPN concentrators.
Static IP assignment can be configured via the device local status page.
The local status page can also be used to configure VLAN tagging on the uplink of the MX. It is important to take note of the following scenarios:
This section discusses configuration considerations for other components of the datacenter network.
The MX acting as a VPN concentrator in the datacenter will be terminating remote subnets into the datacenter. In order for bi-directional communication to take place, the upstream network must have routes for the remote subnets that point back to the MX acting as the VPN concentrator.
If OSPF route advertisement is not being used, static routes directing traffic destined for remote VPN subnets to the MX VPN concentrator must be configured in the upstream routing infrastructure.
If OSPF route advertisement is enabled, upstream routers will learn routes to connected VPN subnets dynamically.
The MX Security Appliance makes use of several types of outbound communication. Configuration of the upstream firewall may be required to allow this communication.
Dashboard & Cloud
The MX Security Appliance is a cloud managed networking device. As such, it is important to ensure that the necessary firewall policies are in place to allow for monitoring and configuration via the Cisco Meraki Dashboard. The relevant destination ports and IP addresses can be found under the Help > Firewall Info page in the Dashboard.
Cisco Meraki's AutoVPN technology leverages a cloud-based registry service to orchestrate VPN connectivity. In order for successful AutoVPN connections to establish, the upstream firewall must allow the VPN concentrator to communicate with the VPN registry service. The relevant destination ports and IP addresses can be found under the Help > Firewall Info page in the Dashboard.
Uplink Health Monitoring
The MX also performs periodic uplink health checks by reaching out to well-known Internet destinations using common protocols. The full behavior is outlined here. In order to allow for proper uplink monitoring, the following communications must also be allowed:
Cisco Meraki MX Security Appliances support datacenter to datacenter redundancy via our DC-DC failover implementation. The same steps used above can also be used to deploy one-armed concentrators at one or more additional datacenters. For further information about VPN failover behavior and route prioritization, please review this article.
This section will outline the configuration and implementation of the SD-WAN architecture in the branch.
Before configuring and building AutoVPN tunnels, there are several configuration steps that should be reviewed.
While automatic uplink configuration via DHCP is sufficient in many cases, some deployments may require manual uplink configuration of the MX security appliance at the branch. The procedure for assigning static IP addresses to WAN interfaces can be found here.
Some MX models have only one dedicated Internet port and require a LAN port be configured to act as a secondary Internet port via the device local status page if two uplink connections are required. This currently includes all MX models other than the MX84, MX400, and MX600. This configuration change can be performed on the device local status page on the Configure tab.
AutoVPN allows for the addition and removal of subnets from the AutoVPN topology with a few clicks. The appropriate subnets should be configured before proceeding with the site-to-site VPN configuration.
Begin by configuring the subnets to be used at the branch from the Security Appliance > Configure > Addressing & VLANs page.
By default, a single subnet is generated for the MX network, with VLANs disabled. In this configuration a single subnet and any necessary static routes can be configured without the need to manage VLAN configurations.
If multiple subnets are required or VLANs are desired, the VLANs drop-down should be set to enabled. This allows for the creation of multiple VLANs, as well as allowing for VLAN settings to be configured on a per-port basis.
Once the subnets have been configured, Cisco Meraki's AutoVPN can be configured via the Security Appliance > Configure > Site-to-site VPN page in Dashboard.
From the Security Appliance > Configure > Site-to-Site VPN page:
Hub priority is based on the position of individual hubs in the list from top to bottom. The first hub has the highest priority, the second hub the second highest priority, and so on. Traffic destined for subnets advertised from multiple hubs will be sent to the highest priority hub that a) is advertising the subnet and b) currently has a working VPN connection with the spoke. Traffic to subnets advertised by only one hub is sent directly to that hub.
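The hub selection rule can be sketched as a simple ordered search. The field names and data shapes below are illustrative, not Meraki's actual data model:

```python
def select_hub(hubs, dest_subnet):
    """Pick the highest-priority hub that (a) advertises dest_subnet and
    (b) currently has a working VPN connection. `hubs` is ordered from
    highest priority (top of the Dashboard list) to lowest."""
    for hub in hubs:
        if dest_subnet in hub["subnets"] and hub["tunnel_up"]:
            return hub["name"]
    return None   # no reachable hub advertises this subnet

hubs = [
    {"name": "DC-1", "subnets": {"10.10.0.0/16"}, "tunnel_up": False},
    {"name": "DC-2", "subnets": {"10.10.0.0/16"}, "tunnel_up": True},
]
# DC-1 is higher priority but its tunnel is down, so DC-2 is chosen.
print(select_hub(hubs, "10.10.0.0/16"))   # DC-2
```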
To allow a particular subnet to communicate across the VPN, locate the local networks section in the Site-to-site VPN page. The list of subnets is populated from the configured local subnets and static routes in the Addressing & VLANs page, as well as the Client VPN subnet if one is configured.
To allow a subnet to use the VPN, set the Use VPN drop-down to yes for that subnet.
Rules for routing of VPN traffic can be configured on the Security Appliance > Configure > Traffic Shaping page in Dashboard.
Settings to configure Policy-based Routing (PbR) and Dynamic Path Selection are found under the Uplink preferences heading.
The following sections contain guidance on configuring several example rules.
One of the most common uses of traffic optimization is for VoIP traffic, which is very sensitive to loss, latency, and jitter. The Cisco Meraki MX has a default performance rule in place for VoIP traffic, Best for VoIP.
To configure this rule, click Add a preference under the VPN traffic section.
In the Uplink selection policy dialog, select UDP as the protocol and enter the appropriate source and destination IP address and ports for the traffic filter. Select the Best for VoIP policy for the preferred uplink, then save the changes.
This rule will evaluate the loss, latency, and jitter of established VPN tunnels and send flows matching the configured traffic filter over the optimal VPN path for VoIP traffic, based on the current network conditions.
Video traffic is an increasingly prevalent part of modern networks as technologies like Cisco TelePresence continue to be adopted and integrated into everyday business operations. This branch site will leverage another pre-built performance rule for video streaming and will load balance traffic across both Internet uplinks to take full advantage of available bandwidth.
To configure this, click Add a preference under the VPN traffic section.
In the Uplink selection policy dialog, select UDP as the protocol and enter the appropriate source and destination IP address and ports for the traffic filter. For the policy, select Load balance for the Preferred uplink. Next, set the policy to only apply on uplinks that meet the Video streaming performance category. Finally, save the changes.
This policy monitors loss, latency, and jitter of VPN tunnels and will load balance flows matching the traffic filter across VPN tunnels that match the video streaming performance criteria.
Web traffic is another common type of traffic that a network administrator may wish to optimize or control. This branch will leverage a PbR rule to send web traffic over VPN tunnels formed on the WAN 1 interface, but only while those tunnels meet a custom-configured performance category.
To configure this, select Create a new performance rule under the Custom performance rules section.
In the Name field, enter a descriptive title for this custom rule. Specify the maximum latency, jitter, and packet loss allowed for this traffic filter. This branch will use a "Web" custom rule based on a maximum loss threshold. Then, save the changes.
Next, click Add a preference under the VPN traffic section.
In the Uplink selection policy dialog, select TCP as the protocol and enter the appropriate source and destination IP address and ports for the traffic filter. For the policy, select WAN1 for the preferred uplink. Next, configure the rule such that web traffic will fail over if there is poor performance. For the performance category, select one of the pre-built performance rules, or any currently configured custom performance rules for this network. Then, save the changes.
This rule will evaluate the packet loss of established VPN tunnels and send flows matching the traffic filter out the preferred uplink. If the loss, latency, or jitter thresholds in the "Web" performance rule are exceeded, traffic can fail over to tunnels on WAN2 (assuming they meet the configured performance criteria).
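Checking a tunnel's measured statistics against a custom performance rule amounts to comparing each metric to its threshold. The key names and threshold values below are hypothetical, chosen to mimic a custom "Web" rule built on a maximum-loss limit:

```python
def meets_rule(stats, rule):
    """True if a tunnel's measured stats fall within every threshold
    defined by a performance rule. Keys are illustrative assumptions."""
    return all(stats[k] <= rule[k] for k in rule)

web_rule = {"loss_pct": 2}   # hypothetical "Web" rule: max 2% packet loss
wan1 = {"loss_pct": 5, "latency_ms": 40, "jitter_ms": 3}
wan2 = {"loss_pct": 1, "latency_ms": 80, "jitter_ms": 5}

# WAN1 is preferred by the PbR rule, but it violates the "Web" rule,
# so traffic can fail over to the compliant WAN2 tunnels.
print(meets_rule(wan1, web_rule), meets_rule(wan2, web_rule))   # False True
```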
Configuring VPN flow preferences based on layer 7 traffic filters is currently only available in beta. If you are interested in participating in this beta, please contact your Cisco Meraki sales rep or the Cisco Meraki Support team.
To configure this rule, click Add a preference under the VPN traffic section.
In the Uplink selection policy dialog, click Add+ to configure a new traffic filter. From the filter selection menu, click the VoIP & video conferencing category and then select the desired layer 7 rules. This example will use the SIP (Voice) rule.
Then, select the Best for VoIP performance class for the preferred uplink and save the changes. This rule will evaluate the loss, latency, and jitter of established VPN tunnels and send flows matching the configured traffic filter over the optimal VPN path for VoIP traffic, based on the current network conditions.
No, currently AutoVPN always uses AES-128 encryption for VPN tunnels.
Both QoS and DSCP tags are maintained within the encapsulated traffic, and are copied over to the IPsec header.
While it is possible to establish VPN connections between Meraki and non-Meraki devices using standard IPsec VPN, SD-WAN requires that all hub and spoke devices be Meraki MXs.
Both products use similar, but distinct, underlying tunneling technologies (DMVPN vs. AutoVPN). A typical hybrid solution may entail using ISR devices at larger sites and MX devices at smaller offices or branches. This will require dedicated IWAN concentration for ISR, as well as a separate SD-WAN headend for MXs, at the datacenter.
While the MX supports a range of 3G and 4G modem options, these are currently used only to ensure availability in the event of WAN failure and cannot be used for load balancing in conjunction with an active wired WAN connection.
SD-WAN can be deployed on branch MX appliances configured in a warm spare capacity, however only the acting master MX will build AutoVPN tunnels and route VPN traffic.
The following flowchart shows the full pathing logic of SD-WAN. This will be broken down in more detail in the subsequent sections.
The very first evaluation point in SD-WAN traffic flow is whether the MX has active AutoVPN tunnels established over both interfaces.
When VPN tunnels are not successfully established over both interfaces, traffic is forwarded over the uplink where VPN tunnels are successfully established.
If we can establish tunnels on both interfaces, processing proceeds to the next decision point.
If we can establish tunnels on both uplinks, the MX appliance will then check to see if any Dynamic Path Selection rules are defined.
If Dynamic Path Selection rules are defined, we evaluate each tunnel to determine which satisfy those rules.
If only one VPN path satisfies our performance requirements, traffic will be sent along that VPN path. The MX will not evaluate PbR rules if only one VPN path meets the performance rules for Dynamic Path Selection.
If multiple VPN paths satisfy the Dynamic Path Selection requirements, if no paths satisfy them, or if no Dynamic Path Selection rules have been configured, PbR rules will be evaluated.
Once the Dynamic Path Selection rules have been evaluated, the MX moves on to the next decision point.
After checking Dynamic Path Selection rules, the MX security appliance will evaluate PbR rules if multiple or no paths satisfied the performance requirements.
If a flow matches a configured PbR rule, then traffic will be sent using the configured path preference.
If the flow does not match a configured PbR rule, then traffic logically progresses to the next decision point.
After evaluating Dynamic Path Selection and PbR rules, the MX Security appliance will evaluate whether VPN load balancing has been enabled.
If VPN load balancing has not been enabled, traffic will be sent over a tunnel formed on the primary Internet interface. Which Internet interface is the primary can be configured from the Security Appliance > Configure > Traffic Shaping page in Dashboard.
If load balancing is enabled, flows will be load balanced across tunnels formed over both uplinks.
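The decision flow described above can be condensed into a short sketch. All names, data structures, and return values here are illustrative assumptions, not the MX's actual implementation:

```python
def choose_vpn_path(tunnels, dps_rule=None, pbr_choice=None,
                    load_balance=False, primary="WAN1"):
    """Sketch of SD-WAN VPN pathing. `tunnels` maps uplink name -> stats dict;
    `dps_rule` is a predicate over tunnel stats; `pbr_choice` is the uplink
    preferred by a matching PbR rule."""
    up = [name for name, t in tunnels.items() if t["established"]]
    if len(up) == 1:
        return up[0]                        # tunnels on only one uplink
    compliant = up
    if dps_rule is not None:
        compliant = [n for n in up if dps_rule(tunnels[n])]
        if len(compliant) == 1:
            return compliant[0]             # single compliant path: PbR skipped
    candidates = compliant or up            # no compliant paths: consider all
    if pbr_choice in candidates:
        return pbr_choice                   # PbR preference wins
    if load_balance:
        return "load-balance:" + "+".join(sorted(candidates))
    return primary                          # default: tunnel on primary uplink

tunnels = {"WAN1": {"established": True, "loss": 1},
           "WAN2": {"established": True, "loss": 6}}
print(choose_vpn_path(tunnels, dps_rule=lambda t: t["loss"] < 5))   # WAN1
```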
VPN load balancing uses the same load balancing mechanics as the MX's uplink load balancing. Flows are sent out in a round-robin fashion, weighted by the bandwidth specified for each uplink.
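Bandwidth-weighted round robin can be sketched as follows; the slot-based scheduler below is an illustrative assumption, not the MX's internal algorithm:

```python
from itertools import cycle

def weighted_round_robin(uplinks):
    """Yield uplink names in round-robin order, weighted by configured
    bandwidth. A flow-assignment sketch only."""
    unit = min(uplinks.values())
    slots = []
    for name, bw in uplinks.items():
        # An uplink with twice the bandwidth gets twice the slots.
        slots += [name] * round(bw / unit)
    return cycle(slots)

# WAN1 has twice WAN2's configured bandwidth, so it receives two of
# every three new flows.
rr = weighted_round_robin({"WAN1": 100, "WAN2": 50})
print([next(rr) for _ in range(6)])
# ['WAN1', 'WAN1', 'WAN2', 'WAN1', 'WAN1', 'WAN2']
```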