Cloud-Managed EVPN Fabric Installation / Lab Guide
Cloud Fabric Installation Overview
Cisco Meraki's cloud-managed fabric solution provides a flexible and powerful way to design, deploy, and manage your cloud fabrics using the Meraki Dashboard. There are multiple valid approaches to configuring a fabric, depending on your network topology, scale, security requirements, and operational preferences.
This guide presents one specific method for installing and configuring a Meraki cloud fabric. It focuses on a common, straightforward deployment pattern that emphasizes best practices for reliability, simplicity, and scalability. While the steps outlined here have been tested and are recommended for many environments, they are not the only way to achieve a successful fabric deployment.
Alternative configurations may include different underlay/overlay designs, varied placement of border gateways, custom routing policies, or integration with existing non-Meraki infrastructure. The Meraki Dashboard's intuitive interface and extensive documentation allow you to adapt the fabric to your unique needs.
As you proceed through this guide, feel free to modify or deviate from the presented steps where appropriate for your environment. Always refer to the latest Meraki documentation for the most current features, recommendations, and supported topologies.
Let's get started.
Topology Overview:
This lab guide may use hardware that is compatible but not yet officially supported. Please see the Cloud EVPN Fabric Overview page for officially supported hardware.
Spine-leaf architecture with BGP EVPN VXLAN overlay on OSPF underlay.
- Spine Layer: 2x Catalyst 9500H (IP transport; BGP Route-Reflector; connected via HundredGigE to leaves/borders).
- Leaf Layer: 2-4x Catalyst 9300X (VTEP for access; TwentyFiveGigE uplinks to spines; trunk downlinks to access switches/APs).
- Border Layer: 2x Catalyst 9300X (VTEP for external L3 routing; GigabitEthernet to core/fusion routers with VRFs).
- Access Layer: 2x Catalyst 9300 (stacked for wired) + 2x Catalyst 9166i APs (wireless); trunk uplinks to leaves.
Sample Connectivity Diagram:
Lab Size/Scale Considerations:
- Recommended lab size is 3-4 switches (1 border/spine and 2 leaves)
- For scale/capacity numbers, please see the Cloud EVPN Fabric Overview
Lab Configuration Steps
Underlay Configuration
STOP!!! DECISION TIME
Before you begin, you must decide whether to deploy the lab with an automated SVI underlay or a custom user-created underlay.
Option 1: Automated SVI Underlay (Default)
[NOTE] If you cannot enforce VLAN pruning across the deployment with R-PVST+, it is strongly suggested to start with a reliable fabric underlay using a pure layer 3 routed underlay design (Routed Underlay Option #2).
- This option requires STRICT pruning of fabric VLANs from the underlay. Bridging fabric VLANs between fabric nodes via Layer 2 WILL create chaos and unpredictability.
- This option automates the creation of underlay transport VLANs (900-915) on fabric nodes for a Layer 2 underlay. It also provisions a subnet for this underlay interface.
Option 2: User-Created Routed Underlay (Recommended)
- Safest option, but requires manual underlay and OSPF peer configuration
- Use routed ports to connect the fabric nodes together; these must be created manually
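To illustrate Option 2, here is a minimal sketch of what one manually built routed underlay link with OSPF peering could look like in IOS-XE terms. The interface names, IP addressing, and OSPF process/area values are assumptions for illustration, not values generated by Dashboard.

```
! Hypothetical routed point-to-point link from a leaf up to SPINE-1
interface TwentyFiveGigE1/0/1
 description UNDERLAY-TO-SPINE-1
 no switchport
 ip address 10.255.0.1 255.255.255.252
 ip ospf network point-to-point
 ip ospf 1 area 0
!
! Underlay loopback (an address from the loopback pool) advertised into OSPF
interface Loopback100
 ip address 10.254.254.11 255.255.255.255
 ip ospf 1 area 0
!
router ospf 1
 router-id 10.254.254.11
```

The spine side would mirror this with the other /30 address; each fabric link gets its own point-to-point subnet.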
Physical Connectivity
Border Connectivity
There are two ways to connect the border:
- Single Trunk Interface
  - Single cable for both management and the fabric egress VLAN
  - L2 spanning-tree domain extended to the Fusion device
- Routed Interfaces
  - Two connections required, one for management and one for fabric egress
  - Pure L2 isolation from the Fusion device
Fabric Node Connectivity
Underlay Automation
Underlay Routing
- IGP (OSPFv2)
  - OSPF is used as the underlay routing protocol. The automation will only create the OSPF peering on the underlay.
  - Route redistribution from/to the OSPF underlay is currently not supported in EFT. Static routes are recommended at this time.
  - The underlay subnet will need to be routed/reachable from outside the fabric (DHCP, RADIUS, etc.).
  - In the above example, the loopback pool subnet (10.254.254.0/24) should have a static route on the fusion router pointing to the Border node's underlay interface.
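As a sketch of the reachability requirement above, the fusion router would carry a static route for the loopback pool pointing at the border's underlay interface. The next-hop address below (192.0.2.2) is an assumed border underlay address, used purely for illustration.

```
! On the fusion router: reach the fabric loopback pool
! (192.0.2.2 is an assumed Border underlay interface IP)
ip route 10.254.254.0 255.255.255.0 192.0.2.2
```

Services such as DHCP and RADIUS use this path to reach the fabric nodes, so verify the return route as well.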
Underlay Transport
- In Cloud Fabric Phase-1, we're using SVIs/VLANs as the underlay transport to facilitate brownfield migration. The goal is to migrate all underlay subnets into the fabric, allowing for pure Layer 3 inter-switch links.
- Larger deployments can look like the following:
  - VLANs 900 & 902 are used for SPINE-BORDER links
  - VLANs 901 & 903 are used for SPINE-LEAF links
  - For multi-role nodes, these VLANs aren't provisioned
- A more "common" representation of the previous diagram would look like the following.
- If you expand the Border/Spine into a more complex fabric environment, you could see something like this.
- This example shows how the fabric underlay segments can be mapped to trunks while preventing them from being blocked by spanning tree. R-PVST+ is used to ensure that only the looped VLAN is blocked, not the critical underlay VLANs.
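The idea above can be sketched in IOS-style terms: each inter-switch trunk carries only the underlay VLAN it needs, so R-PVST+ runs a separate instance per allowed VLAN and a loop in a user VLAN can never block the underlay. Interface names and VLAN assignments below are illustrative assumptions.

```
! Hypothetical SPINE-1 to LEAF-1 trunk carrying only its underlay VLAN
interface HundredGigE1/0/5
 description UNDERLAY-TRUNK-TO-LEAF-1
 switchport mode trunk
 switchport trunk allowed vlan 901
!
! User/looped VLANs stay pruned off this link, so an STP block
! on those VLANs never touches the VLAN 901 underlay instance
```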
Pre-Fabric Check-list
1. Make sure your firmware is up to date
  - Check that the firmware is showing the correct version. For this lab we'll be using 17.18.2 GA.
2. Make sure your config generation is working without issues
  - Run 'show meraki config up' from Cloud-CLI or the Tools Show-CLI command runner.
  - Everything should be showing "Pass".
  - If you see "Not needed" as in the above example, that means there is no queued config and everything is stable.
3. Check the last config-save time. Make sure it is at least 15 minutes old.
  - If there are any failures, please contact support and resolve them prior to deploying the fabric.
4. Verify there are no conflicts with fabric underlay resource reservations (VLAN IDs, loopbacks, etc.)
Network Prep / Staging
Network Settings
- Navigate to "Network-wide -> General"
- Set Traffic Analysis to "Detailed: collect destination hostnames"
Switch Port configuration
We want to prune all VLANs that aren't in use on the switch. By default, every port is configured as a trunk port. If at least ONE port on a switch is still configured as a trunk (allowed 1-1000), all 1000 VLANs will be provisioned on the switch. To enable Rapid PVST+, we need to prune the active VLANs to below 300. You can follow the process here or do so manually.
1. Configure all ports to be Access ports on VLAN 1
  - Navigate to "Switching -> Switch Ports" and change results-per-page to 200 for easier configuration.
  - Bulk select all ports on the page.
  - Edit the ports and click "Update" to commit:
    - Type = Access
    - VLAN = 1
2. Port Automation with Smart-Ports
Smart-Ports are essential to simplifying switch-port configuration and minimizing misconfigurations. They can be used to map previously defined port profiles to ports.
Migrating Cloud Management to Routed Uplinks
Routed ports can be used for cloud management, but they need to be configured in two steps. In this example, we first get the switches connected to Dashboard and let them upgrade and stabilize. Only then do we migrate each switch to use routed uplinks.
IMPORTANT:
- Routed port subnets cannot overlap with any existing subnet
MIGRATION STEPS
- Navigate to the Switch Details mimic panel, select the desired port, then click "Edit".
- Change the port's "Interface mode" to "Routed port" and update. This will redirect you to the "Routing and DHCP" page.
- On the "Routing and DHCP" page, enter the interface details. Make sure to enable the "V4 Uplink" checkbox. Save changes.
- The routed uplink config will be pushed to the switch. Make sure the config has been pushed before physically wiring the port.
- The switch might take <X> minutes before it migrates to the routed uplink.
- Check your routed interface; it should match the switch's management uplink details in the side panel.
- SUCCESS! Repeat for all switches. The switch might take a while to flip over to the routed uplink.
Network Switch Settings in Dashboard
- Navigate to "Switching -> Switches -> Switch Settings"
- Set "Management VLAN" to your management VLAN
- Change "STP configuration mode" to "Enable Rapid per-VLAN Spanning Tree (RPVST+)"
[NOTE!!!] By converting to RPVST+ you are limited to 300 total active VLANs. Trunks configured for more than 300 VLANs will be truncated to 300. (This is why it's important to prune VLANs before this step.)
- Change the priority for the Leaves to '61440' for VLANs 900-915 and '0' for fabric subnets
- Change the priority for SPINE-1 to '0' for VLANs 900,901
- Change the priority for SPINE-2 to '0' for VLANs 902,903
- Configure Quality of Service
  - Create a new rule via "Add a QoS rule for this network" and save the default. It should be Any/Any Trusted.
  - Edit the DSCP to CoS map, configure best practices, and save.
- Configure MTU to '9100', click Confirm
- At this point you've configured A LOT. Use Cloud-CLI to verify the configuration has been pushed before proceeding to the next step.
  - Run 'show meraki config up' from Cloud-CLI or the Tools Show-CLI command runner.
- If you want to force a config push, you can enable/disable a port in Dashboard, or change something arbitrary like the device's street address.
[TIP] If you're impatient like me, just add a space to the end of the existing physical street address, make sure 'update location' is checked, and then hit enter.
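The per-VLAN root priorities configured earlier in Switch Settings map onto familiar R-PVST+ commands. The block below is only an illustrative IOS-XE equivalent of what the Dashboard settings express (VLANs 101,102 are assumed to be the fabric subnets from this lab), not config you would enter by hand on a cloud-managed switch.

```
! SPINE-1 is root for its spine-border/spine-leaf underlay VLANs
spanning-tree vlan 900,901 priority 0
! SPINE-2 is root for the other underlay VLAN pair
spanning-tree vlan 902,903 priority 0
! Leaves: never root for underlay VLANs, root for fabric subnets
spanning-tree vlan 900-915 priority 61440
spanning-tree vlan 101,102 priority 0
```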
Deploying Cloud Fabric
Steps to deploy
- All switches already in Cloud-Config mode
- Pre-check and Dashboard network prep
  - VLAN pruning, RPVST+, and/or routed-port config
- (optional) DIY underlay provisioning (routed port links, OSPF peering)
- Create & deploy Cloud Fabric
- Validate fabric device config
- Validate routing environment
- Validate client connectivity
Creating a new Fabric
- Navigate to "Organization -> Fabric"
- Click "Create new fabric"
- Enter your fabric details on the Fabric setup page:
  - BGP ASN: 65005
  - (optional) auth-key
  - Select networks
  - Underlay loopback IP pool: 10.254.254.0/24
  - VRFs, then click "Next"
- Create fabric subnets: click "Add subnet"
  - Create fabric_subnet_101 (Distributed Anycast Gateway BRIDGED Mode)
    - Name: fabric_subnet_101
    - VLAN ID: 101
    - Interface IP/Mask: 10.1.101.1/24 <<<<<---- ANYCAST GATEWAY
    - DHCP Server IP: 10.10.248.1 (this must be outside the fabric, on the MX or centralized)
    - Anycast Gateway: Enabled
    - Broadcast Replication: Enabled
    - Save subnet
  - Create fabric_subnet_102 (Distributed Anycast Gateway Routed Subnet)
    - Name: fabric_subnet_102
    - VLAN ID: 102
    - Interface IP/Mask: 10.1.102.1/24 <<<<<---- ANYCAST GATEWAY
    - DHCP Server IP: 10.10.248.1 (this must be outside the fabric, on the MX or centralized)
    - Anycast Gateway: Enabled
    - Broadcast Replication: Disabled
    - Save subnet
  - Once your subnets look good, click "Next"
- Border Configuration
  - You can create an L3 interface at this stage; click "Create L3 interface" on Border-1.
  - Create your EGRESS interface. This is the interface through which fabric overlay traffic will egress the network.
    - Name: FABRIC_EGRESS
    - VRF: Fabric
    - VLAN ID: 50
    - MTU: 1500
    - Interface IP/mask: 10.1.100.2/30
  - Create an eBGP instance:
    - Neighbor IPv4: 10.1.100.1
    - Remote AS: 65001
    - VRF: Fabric
    - Source Interface: 10.1.100.2/30
  - Click Save
- Validate the config before clicking "Save to staging"
- Click on the fabric instance to enter the staged config, then click "Comment and deploy" to deploy the config
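For the border eBGP session above to come up, the fusion router needs a matching peer configuration. A minimal sketch of the fusion side, assuming a VLAN 50 interface numbered 10.1.100.1/30 and local AS 65001 (the values entered in Dashboard), might look like the following; originating the default route this way is one common approach, not the only one.

```
! Fusion router side of the border eBGP session (illustrative)
interface Vlan50
 description TO-FABRIC-BORDER-1
 ip address 10.1.100.1 255.255.255.252
!
router bgp 65001
 neighbor 10.1.100.2 remote-as 65005
 address-family ipv4
  neighbor 10.1.100.2 activate
  ! one way to provide the default route the fabric VRF expects
  neighbor 10.1.100.2 default-originate
```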
Validating and Troubleshooting Deployment
Fabric and Device Configuration Validation
Fabric UX
- The Fabric Summary page has many good sources of information, from deployment status to statistics and detailed views.
- Detailed Views:
  - Topology – good for identifying connectivity between fabric nodes. This currently requires CDP/LLDP, so it will only show for Trunk/SVI underlays.
  - Fabric Devices – tracks device health status, loopback IP, and fabric roles.
  - BGP Peers – shows BGP peer statistics between your fabric nodes in the overlay.
  - Ext Border Peers – shows external peering statistics from your border devices.
  - VXLAN Tunnels – <not avail yet>
  - Overlay Subnets – check the status of your overlay subnets and configuration.
Switch UX
- Switch Connectivity and Uptime
- Firmware Check
- Cloud Configuration check
- Active Management IP and uplink
- Spanning-Tree Status
CloudCLI Commands
- ‘show ip int br | exc una’ – shows all the active interfaces, excluding the inactive ones
- ‘show running-config’ – self-explanatory; shows the full config
- ‘show run | sec router bgp’ – shows just the BGP config
- ‘show run | sec router ospf’ – shows just the OSPF config
- ‘show meraki config updater’
  - In the above image, there are no errors and the config doesn’t need to change
  - In the above image, there are no errors; all config was applied successfully
- ‘show meraki config monitor’
- ‘show meraki connect’ – shows cloud connectivity status and statistics
  - The Tunnel Config Fetch grabs the connectivity info needed to establish the tunnel (headend configuration and load balancing)
  - Tunnel State shows the current status of the tunnel. If this shows down, check your IP, DNS, and ICMP connectivity to the cloud (“catalyst.meraki.com”)
    - ICMP should be able to reach 8.8.8.8 from the switch
    - DNS should resolve “catalyst.meraki.com”
- ‘show uac uplink’ – shows the status of UAC (“Uplink Auto Configuration”)
  - Note that the “Configured IPv4 Uplink interface” is the preferred uplink; the actual uplink in use is shown in the “Uplink IPv4 interface”
  - These should usually match unless there is a problem with the preferred uplink
- ‘show uac uplink db’ – shows all the currently detected VLANs and routed ports that UAC is evaluating for uplinks
  - Your uplink should stay stable on your preferred uplink
  - Use the statistics to identify issues with existing VLANs/ports
Underlay routing
- Loopback pool subnet reachability
  - You should be able to ping your Loopback100 from outside your fabric. Try from your DHCP or fusion router.
  - Make sure the reverse path is good as well, from your fabric node to your DHCP or fusion router.
- OSPF routing
  - ‘show ip ospf neighbor’ – shows OSPF neighbor relationships
  - ‘show ip ospf database’ – shows all routes learned and advertised
  - ‘show ip route ospf’ – shows your routing table with only the OSPF routes. You should see all the loopbacks from your loopback pool here.
- Spanning-tree
  - ‘show spanning-tree summary’ – shows a summary of STP, the mode, and current root-bridge VLANs
  - ‘show spanning-tree vlan 902’ – shows port specifics for a spanning-tree instance
Overlay routing
- Show Fabric Devices
- ‘show l2vpn evpn mac ip’ – shows the fabric IP/MAC/EVI/VID clients
- ‘show nve peers’ – shows all the NVE peers
- BGP peering and fabric connectivity
  - ‘show ip bgp summary’
  - ‘show ip bgp peer-group’ – shows peer group information
  - ‘show ip bgp vpnv4 all’ – shows all the learned routes and known subnets
  - ‘show ip bgp l2vpn evpn’ – shows all the fabric routes and MAC/IP bindings
- Overlay routing table
  - ‘show ip route vrf <VRF-NAME>’ – shows the active routing table for the fabric VRF
  - Make sure you see a default route in the Fabric VRF; otherwise you are not properly peering with your fabric. The default route should be learned from the border nodes.
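Putting the overlay checks above together, a quick verification pass from a fabric node might look like the following exec commands. The VRF name "Fabric", the fusion address 10.1.100.1, and the Vlan101 source are taken from this lab's examples; substitute your own values.

```
! Default route should be learned from the border nodes
show ip route vrf Fabric 0.0.0.0
! EVPN peers should be Established with prefixes exchanged
show ip bgp l2vpn evpn summary
! From an anycast SVI, the fusion router should be reachable
ping vrf Fabric 10.1.100.1 source Vlan101
```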
What's Next
This Cloud Fabric Installation Guide is a living, ever-evolving document designed to adapt to the rapid advancements in Cisco Meraki's cloud-managed networking capabilities. As new features, best practices, and supported topologies emerge in the Meraki Dashboard, future versions of this guide will incorporate additional configuration examples, troubleshooting tips, advanced integration scenarios, and updated recommendations to ensure it remains a valuable resource for your deployments.
In the meantime, for those planning more complex, granular, or large-scale network designs—such as multi-site fabrics, hybrid environments, or high-density deployments—a comprehensive Cisco Validated Design (CVD) guide is in development and will be available shortly. The CVD will provide rigorously tested, prescriptive blueprints with detailed architectures, scalability considerations, performance validations, and end-to-end deployment guidance to help minimize risk and accelerate implementation in enterprise environments.
Stay tuned to the official Meraki documentation and Cisco resources for updates. Thank you for using this guide—we hope it has helped you successfully deploy your Meraki cloud fabric! If you have feedback or suggestions for future enhancements, feel free to share them with the Meraki community or support team.


