Cloud-Managed EVPN Fabric Installation / Lab Guide
Cloud Fabric Installation Overview
Cisco Meraki's cloud-managed fabric solution provides a flexible and powerful way to design, deploy, and manage your cloud fabrics using the Meraki Dashboard. There are multiple valid approaches to configuring a fabric, depending on your network topology, scale, security requirements, and operational preferences.
This guide presents one specific method for installing and configuring a Meraki cloud fabric. It focuses on a common, straightforward deployment pattern that emphasizes best practices for reliability, simplicity, and scalability. While the steps outlined here have been tested and are recommended for many environments, they are not the only way to achieve a successful fabric deployment.
Alternative configurations may include different underlay/overlay designs, varied placement of border gateways, custom routing policies, or integration with existing non-Meraki infrastructure. The Meraki Dashboard's intuitive interface and extensive documentation allow you to adapt the fabric to your unique needs.
As you proceed through this guide, feel free to modify or deviate from the presented steps where appropriate for your environment. Always refer to the latest Meraki documentation for the most current features, recommendations, and supported topologies.
Let's get started.
Topology Overview:
This lab guide may use hardware that is 'compatible' but may not be officially supported yet. Please see the Cloud EVPN Fabric Overview page for officially supported hardware.
Spine-leaf architecture with BGP EVPN VXLAN overlay on OSPF underlay.
- Spine Layer: 2x Catalyst 9500H (IP transport; BGP Route-Reflector; connected via HundredGigE to leaves/borders).
- Leaf Layer: 2-4x Catalyst 9300X (VTEP for access; TwentyFiveGigE uplinks to spines; trunk downlinks to access switches/APs).
- Border Layer: 2x Catalyst 9300X (VTEP for external L3 routing; GigabitEthernet to core/fusion routers with VRFs).
- Access Layer: 2x Catalyst 9300 (stacked for wired) + 2x Catalyst 9166i APs (wireless); trunk uplinks to leaves.
Sample Connectivity Diagram:
Lab Size/Scale Considerations:
- Recommended lab is 3-4 switches (1 border/spine and 2 leaves)
- For scale/capacity numbers, please see the Cloud EVPN Fabric Overview
Lab Configuration Steps
Underlay Configuration
STOP!!! DECISION TIME
Before you begin, you must decide whether to deploy the lab with an automated SVI underlay or a custom user-created underlay.
Option 1: Automated SVI Underlay (Default)
[NOTE] If you cannot enforce VLAN pruning across the deployment with R-PVST+, it is strongly suggested to start with a reliable fabric underlay using a pure layer 3 routed underlay design (Routed Underlay Option #2).
- This option requires STRICT pruning of fabric VLANs from the underlay. Bridging fabric VLANs between fabric nodes via Layer 2 WILL create chaos and unpredictability.
- This option automates the creation of underlay transport VLANs (900-915) on fabric nodes for a Layer 2 underlay. It also provisions a subnet for this underlay interface.
Option 2: User-Created Routed Underlay (Recommended)
- Safest Option but requires manual underlay and OSPF peer configurations
- Use routed ports to connect the fabric nodes together; these must be created manually.
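If you choose the routed underlay, each inter-switch link becomes a point-to-point routed port with an OSPF adjacency. A minimal sketch of one side of a leaf-to-spine link is shown below; the interface name, the 10.0.0.0/31 link subnet, the OSPF process ID, and the router-id are illustrative assumptions, not values from this lab:

```
! Illustrative routed underlay link (Option 2) -- one side of a leaf-to-spine link.
! Interface name, /31 link subnet, and OSPF process-id are assumptions.
interface TwentyFiveGigE1/0/1
 description UPLINK-TO-SPINE-1
 no switchport                         ! convert to a routed port
 ip address 10.0.0.1 255.255.255.254
 ip ospf network point-to-point        ! no DR/BDR election on p2p links
!
router ospf 1
 router-id 10.254.254.11               ! typically the node loopback
 network 10.0.0.0 0.0.0.255 area 0
```

The same pattern repeats on the far end of each link with the other /31 address.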
Physical Connectivity
Border Connectivity
There are two ways to connect the border:
- Single Trunk Interface
  - A single cable carries both the management and fabric egress VLANs
  - The L2 spanning-tree domain is extended to the Fusion device
- Routed Interfaces
  - Two connections required: one for management and one for fabric egress
  - Pure L2 isolation from the Fusion device
Fabric Node Connectivity
Underlay Automation
Underlay Routing
- IGP (OSPFv2)
  - OSPF is used as the underlay routing protocol. It will only create the OSPF peering on the underlay.
  - Route redistribution from/to the OSPF underlay is currently not supported in EFT. Static routes are recommended at this time.
  - The subnet will need to be routed/reachable from outside the fabric (DHCP, RADIUS, etc.).
  - In the above example, the loopback pool subnet (10.254.254.0/24) should have a static route on the fusion router pointing to the Border node's underlay interface.
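The static-route requirement above can be sketched in IOS-style syntax on the fusion router. The next hop shown (10.1.0.1) is a hypothetical Border underlay interface address, not a value from this lab:

```
! On the fusion router: reach the fabric loopback pool via the Border node.
! Next-hop 10.1.0.1 is an assumed Border underlay interface address.
ip route 10.254.254.0 255.255.255.0 10.1.0.1
```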
Underlay Transport
- In Cloud Fabric Phase-1, we're using SVI/VLANs as underlay transports to facilitate brownfield migration. The goal is to migrate all underlay subnets into the fabric allowing for pure layer3 inter-switch links.
- Larger deployments can look like the following:
- VLANs 900 & 902 are used for SPINE-BORDER links
- VLANs 901 & 903 are used for SPINE-LEAF links
- For multi-role nodes, these VLANs aren't provisioned
- A more "common" representation of the previous diagram would look like the following:
- If you expand the Border/Spine into a more complex fabric environment, you could see something like this:
- This example shows how the fabric underlay segments can be mapped to trunks while preventing them from being blocked by spanning-tree. R-PVST+ is used to ensure that only the looped VLAN is blocked and not the critical underlay VLAN.
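As a rough sketch of the pruning described above, each inter-node trunk should carry only its own underlay VLAN(s) and nothing else. Interface names here are illustrative assumptions; the VLAN IDs follow the SPINE-BORDER (900) and SPINE-LEAF (901) mapping above:

```
! SPINE-1 trunk toward LEAF-1: carry only the SPINE-LEAF underlay VLAN.
interface TwentyFiveGigE1/0/2
 description TRUNK-TO-LEAF-1
 switchport mode trunk
 switchport trunk allowed vlan 901   ! prune everything else, incl. fabric VLANs
!
! SPINE-1 trunk toward BORDER-1: carry only the SPINE-BORDER underlay VLAN.
interface TwentyFiveGigE1/0/3
 description TRUNK-TO-BORDER-1
 switchport mode trunk
 switchport trunk allowed vlan 900
```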
Pre-Fabric Check-list
1. Make sure your firmware is up-to-date
   - Check that the firmware is showing the correct version. For this lab we'll be using 17.18.2 GA.
2. Make sure your config-generation is working without issues
   - Run 'show meraki config up' from Cloud-CLI or the Tools Show-CLI command runner.
   - Everything should be showing "Pass".
   - If you see "Not needed" as in the above example, that means there is no queued config and everything is stable.
   - Check the last config-save time. Make sure it is at least 15 minutes old.
   - If there are any failures, please contact support and resolve them prior to deploying the fabric.
3. Verify there is no conflict with the fabric underlay resource reservations (VLAN IDs, loopbacks, etc.).
Network Prep / Staging
Network Settings
- Navigate to "Network-wide -> General"
- Set Traffic Analysis to "Detailed: collect destination hostnames"
Switch Port configuration
We want to prune all VLANs that aren't in use on the switch. By default, every port is configured as a trunk port. If at least ONE port on a switch is still configured as a trunk (allowed 1-1000), all 1000 VLANs will be provisioned on the switch. To enable RPVST+, we need to prune the VLANs down to below 300. You can follow the process here or do so manually.
1. Configure all ports to be Access ports on VLAN 1
   - Navigate to "Switching -> Switch Ports" and change results-per-page to 200 for easier configuration.
   - Bulk select all ports on the page.
   - Edit the ports to configure and click "Update" to commit:
     - Type = Access
     - VLAN = 1
2. Port Automation with Smart-Ports
Smart-Ports are essential to simplifying switch-port configuration and minimizing misconfigurations. They can be used to map previously defined port profiles to ports.
Migrating Cloud-Management to a Routed Uplinks
Routed ports can be used for cloud management, but they need to be configured in two steps. In this example, we first get the switches connected to Dashboard and let them upgrade and stabilize. Only then do we migrate each switch to routed uplinks.
IMPORTANT:
- Routed port subnets cannot overlap with any existing subnet
MIGRATION STEPS
- Navigate to the Switch Details Mimic panel, select the desired port, then click "Edit".
- Change the port's "Interface mode" to "Routed port" and update. This will redirect you to the "Routing and DHCP" page.
- On the "Routing and DHCP" page, enter the interface details. Make sure to enable the "V4 Uplink" checkbox. Save changes.
- The routed uplink config will get pushed to the switch. Make sure the config has been pushed before physically wiring the port.
- The switch might take <X> minutes before it migrates to the routed uplink.
- Check your routed interface; it should match the switch's management uplink details in the side panel.
- SUCCESS! Repeat for all switches. Each switch might take a while to flip over to its routed uplink.
Network Switch Settings in Dashboard
- Navigate to "Switching -> Switches -> Switch Settings"
- Set "Management VLAN" to your management VLAN
- Change "STP configuration mode" to "Enable Rapid per-VLAN Spanning Tree (RPVST+)"
[NOTE!!!] By converting to RPVST+ you are limited to 300 total active VLANs. Trunks configured for more than 300 VLANs will be truncated to 300. (This is why it's important to prune VLANs before this step.)
- Change the priority for the Leaves to '61440' for VLANs 900-915 and '0' for fabric subnets
- Change the priority for SPINE-1 to '0' for VLANs 900,901
- Change the priority for SPINE-2 to '0' for VLANs 902,903
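These per-VLAN priorities are roughly equivalent to the following IOS-style spanning-tree config (a sketch, not the exact config Dashboard generates; the fabric-subnet VLAN list 101-102 matches the subnets created later in this lab):

```
! On each Leaf: defer to the spines on underlay VLANs, be root for fabric VLANs.
spanning-tree vlan 900-915 priority 61440
spanning-tree vlan 101-102 priority 0    ! fabric subnet VLANs in this lab
!
! On SPINE-1: root for its SPINE-BORDER/SPINE-LEAF underlay VLANs.
spanning-tree vlan 900,901 priority 0
! On SPINE-2:
spanning-tree vlan 902,903 priority 0
```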
- Configure Quality of Service
  - Create a new rule via "Add a QoS rule for this network" and save the default. It should be Any/Any Trusted.
  - Edit the DSCP to CoS map, configure best practices, and Save.
- Configure MTU to '9100' and click Confirm
- At this point you've configured A LOT. Use Cloud-CLI to verify the configuration has been pushed before proceeding to the next step.
  - Run 'show meraki config up' from Cloud-CLI or the Tools Show-CLI command runner.
- If you want to force a config push, you can enable/disable a port in Dashboard, or change something arbitrary like the device's street address.
[TIP] If you're impatient like me, just add a space to the end of the existing physical street address, make sure 'update location' is checked, and then hit Enter.
Deploying Cloud Fabric
Steps to deploy
- All switches already in Cloud-Config mode
- Pre-check and Dashboard Network Prep
  - VLAN pruning, RPVST+ and/or Routed-Port config
  - (optional) DIY underlay provisioning (routed port links, OSPF peering)
- Create & Deploy Cloud Fabric
- Validate Fabric Device Config
- Validate Routing Environment
- Validate Client Connectivity
Creating a new Fabric
- Navigate to "Organization -> Fabric"
- Click "Create new fabric"
- Enter your fabric details on the Fabric setup page:
  - BGP ASN: 65005
  - (optional) auth-key
  - Select networks
  - Underlay loopback IP pool: 10.254.254.0/24
  - VRFs, then click "Next"
- Create Fabric Subnets: click "Add subnet"
  - Create fabric_subnet_101 (Distributed Anycast Gateway BRIDGED Mode)
    - Name: fabric_subnet_101
    - Vlan-id: 101
    - Interface IP/Mask: 10.1.101.1/24 <<<<<---- ANYCAST GATEWAY
    - DHCP Server IP: 10.10.248.1 (this must be outside the fabric, on the MX or centralized)
    - Anycast Gateway: Enabled
    - Broadcast Replication: Enabled
    - Save subnet
  - Create fabric_subnet_102 (Distributed Anycast Gateway Routed Subnet)
    - Name: fabric_subnet_102
    - Vlan-id: 102
    - Interface IP/Mask: 10.1.102.1/24 <<<<<---- ANYCAST GATEWAY
    - DHCP Server IP: 10.10.248.1 (this must be outside the fabric, on the MX or centralized)
    - Anycast Gateway: Enabled
    - Broadcast Replication: Disabled
    - Save subnet
- Once your subnets look good, click "Next"
-
Border Configuration
-
You can create a L3 interface at this stage, click "Create L3 interface" on your Border-1
-
Create your EGRESS interface, this is the interface which the fabric overlay traffic will egress the network
-
Name: FABRIC_EGRESS
-
VRF: Fabric
-
Vlan-id: 50
-
MTU: 1500
-
Interface IP/mask: 10.1.100.2/30
-
Create eBGP instance
-
Neighbor IPv4: 10.1.100.1
-
Remote AS: 65001
-
VRF: Fabric
-
Source Interface: 10.1.100.2/30
-
Click Save
-
Validate config before clicking "Save to staging"
-
Click on the fabric instance to enter the staged config. Click "Comment and deploy" to deploy the config
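For reference, the fusion-router side of the border eBGP session would look roughly like the following IOS-style sketch. The addresses and AS numbers mirror the values entered above; the VLAN 50 SVI and any VRF handling on the fusion side are assumptions about your environment:

```
! Fusion-router side of the Border-1 eBGP peering (sketch).
! The VLAN 50 SVI and fusion-side VRF design are assumptions.
interface Vlan50
 description PEERING-TO-BORDER-1-FABRIC_EGRESS
 ip address 10.1.100.1 255.255.255.252
!
router bgp 65001
 neighbor 10.1.100.2 remote-as 65005   ! the fabric's BGP ASN from this lab
 address-family ipv4
  neighbor 10.1.100.2 activate
```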
Validating and Troubleshooting Deployment
Fabric and Device Configuration Validation
Fabric UX
- The Fabric Summary page has many good sources of information, from deployment status to statistics and detailed views.
- Detailed Views
  - Topology – good for identifying connectivity between fabric nodes. This currently requires CDP/LLDP, so it will only show for the Trunk/SVI underlay.
  - Fabric Devices – tracks device health status, loopback IP, and fabric roles.


