Cisco Meraki Best Practice Design

Document Abstract

This multi-part document discusses key components, design guidance, and best practices for various Meraki technologies. It highlights specific use cases, supported architectures, and feature recommendations for your Cisco Meraki cloud-managed infrastructure. It is a collection of smaller documents, most of which were written specifically for this larger document and tailored to deliver a cohesive summary of Cisco Meraki best practice design as a whole.


The audience for this document includes, but is not limited to, network, security, and wireless engineers, along with device management administrators. Using this document, you should be able to deploy various Meraki technologies with confidence and should understand the implications of moving from a CLI or on-premises management solution to a cloud-based platform. Additionally, you should understand the depth of the Meraki platform and how Meraki cloud management technology keeps your infrastructure highly available and secure.


NOTE: This document is intended to be used as a general reference guide for best practice design with Meraki, but it is important to note that each deployment is unique and users should determine design and configurations based on their individual needs. Deployment plans should be created, reviewed and finalized with your Meraki sales team and systems engineer.

Meraki Cloud Architecture

The Meraki cloud solution is a centralized management service that allows users to manage all of their Meraki network devices via a single simple and secure platform.




Users are able to deploy, monitor, and configure their Meraki devices via the Meraki dashboard web interface or via APIs. Once a user makes a configuration change, the change request is sent to the Meraki cloud and is then pushed to the relevant device(s).

Definition of Terms

The Meraki dashboard: A modern web browser-based tool used to configure Meraki devices and services.

Account: A Meraki user’s account, used for accessing and managing their Meraki organizations.

Organization: A logical container for Meraki networks managed by one or more accounts.

Network: A logical container for a set of centrally managed Meraki devices and services.




Management data: The data (configuration, statistics, monitoring, etc.) that flows from Meraki devices (wireless access points, switches, security appliances) to the Meraki cloud over a secure internet connection.

User data: Data related to user traffic (web browsing, internal applications, etc.). User data does not flow through the Meraki cloud, instead flowing directly to their destination on the LAN or across the WAN.



Meraki Cloud Architecture

The Meraki cloud is the backbone of the Meraki management solution. This "cloud" is a collection of highly reliable multi-tenant servers strategically distributed around the world at Meraki data centers. The servers at these data centers are powerful hosts containing many separate user accounts. They are called multi-tenant servers because the accounts share equal computing resources on their host (the server). However, even though these accounts share resources, Meraki keeps customer information secure by restricting organization access based on account authentication and by hashing authentication information such as user passwords and API keys.




Data Centers

Customer management data is replicated across independent same-region data centers in real time. The same data is also replicated in automatic nightly archival backups hosted by in-region third-party cloud storage services. The Meraki cloud does not store customer user data. More information about the types of data that are stored in the Meraki cloud can be found in the “Management Data” section below.

All Meraki services (the dashboard and APIs) are also replicated across multiple independent data centers, so they can fail over rapidly in the event of a catastrophic data center failure.

Meraki data centers are located around the world, enabling high-availability local data containment for data sovereignty in sensitive countries and regions, and high-speed connections to facilitate reliable cloud management communication. These data centers hold certifications such as PCI, SAS70 Type II/SSAE, and ISO 27001. More information about cyber security can be found under the "Hardware and Software Security" section below.

More key data center features include:

  • 99.99% uptime service level agreement

  • 24x7 automated failure detection

  • Real-time replication of data between data centers

  • All sensitive data (e.g., passwords) is hashed on servers

To learn more about monitoring, redundancy, disaster recovery, security, etc., reference our data center design page. More details about data center redundancy and reliability are covered in the "Reliability and Availability" section below.

Note: some account and configuration settings are subject to regional export for management. A full list of these settings can be found in our article, Data Stored on the Meraki Primary Controller.


Data Center Locations




Each region (North and South America, Europe, Asia, China) has, at minimum, a geographically matched pair (for failover) of data centers where any endpoint’s primary Meraki server will be located. The table below details which data centers cover each dashboard region.



Region                     Data center 1    Data center 2

North and South America    -                -

Europe                     Germany          Germany

Asia                       Australia        Singapore

China                      China            China


Upon account creation, customers can select which region their data is hosted in. For customers with globally dispersed networks, separate organizations should be created for each data storage region (N. and S. America, Europe, Asia, and China). The hosting region for each account can be found at the bottom of Meraki dashboard pages when a user is signed in.

Data Center Storage

Meraki data centers contain active Meraki device configuration data and historical network usage data. These data centers house multiple compute servers, which are where customers’ management data is contained. These data centers do not store customers’ user data. These data types are covered in more detail in the “Data” section below.




Meraki Device-to-Cloud Communications

Meraki uses an event-driven remote procedure call (RPC) engine for Meraki devices to communicate to the dashboard and for Meraki servers to send and receive data. Meraki hardware devices act as the server/receiver as the Meraki cloud initiates calls to the devices for data collection and configuration deployment. The cloud infrastructure is the initiator, so configurations can be executed in the cloud before the devices are actually online or even physically deployed.




In the event of cloud connectivity loss (which is most commonly caused by a local ISP or connection failure), the Meraki hardware device will continue to run with its last known configuration until cloud connectivity is restored.

Communication Process

If a device is offline, it will continue to attempt to connect to the Meraki cloud until it gains connectivity. Once the device comes online, it automatically receives the most recent configuration settings from the Meraki cloud. If changes are made to the device configuration while the device is online, the device receives and updates these changes automatically. These changes are generally available on the device in a matter of seconds. However, large quantities of changes may take noticeably longer to reach their devices. If no configuration changes are made by the user, the device continues to periodically check for updates to its configuration on its own.
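As a conceptual illustration only (this is not Meraki's actual firmware logic; the loop structure, intervals, and backoff values are assumptions), the check-in behavior above can be sketched as a loop that keeps the last known configuration while offline, retries with backoff, and applies new settings automatically once connectivity returns:

```python
import time

def device_checkin(fetch_config, apply_config, cycles, sleep=time.sleep):
    """Illustrative sketch (not Meraki's actual firmware logic) of the
    check-in behavior described above."""
    current = None          # device keeps running on last known config
    backoff = 1
    for _ in range(cycles):
        try:
            latest = fetch_config()      # ask the cloud for current config
        except ConnectionError:
            sleep(backoff)               # offline: retry with backoff
            backoff = min(backoff * 2, 60)
            continue
        backoff = 1
        if latest != current:
            apply_config(latest)         # changes apply within seconds
            current = latest
        sleep(30)                        # periodic check-in while idle
    return current
```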

As the device runs on the network, it communicates device and network usage analytics back to the Meraki cloud. Dashboard analytics based on this information, in the form of graphs and charts, are updated regularly in the Meraki cloud and displayed in users' dashboards when they view this information.

Configuration Containers

Device configurations are stored as a container in the Meraki backend. When a device configuration is changed by an account administrator via the dashboard or API, the container is updated and then pushed over a secure connection to the device it is associated with. The container also updates the Meraki cloud with its configuration change for failover and redundancy.



Secure Device Connectivity

For devices to communicate with the cloud, Meraki leverages a proprietary lightweight encrypted tunnel using AES256 encryption while management data is in transit. Within the tunnel itself, Meraki leverages HTTPS and protocol buffers for a secure and efficient solution, limited to 1 kbps per device when the device is not being actively managed.



Configuration Interfaces

The Meraki dashboard

The Meraki dashboard is a modern web browser-based tool used to configure Meraki devices and services.




The Meraki dashboard is the visual alternative to the traditional command line, which is used to manage many routers, switches, security devices, and more. Instead, Meraki puts all devices within networks in one place and allows users to apply changes in a simple, easy-to-use format.

In addition to simplifying device management, the dashboard is also a platform for viewing network analytics, applying network permissions, and keeping track of users. The dashboard allows users to view camera streams, manage users’ mobile devices and computers, set content rules, and monitor upstream connections from a single place.

Meraki APIs

Meraki APIs provide control of the Meraki solution in a programmable way, enabling actions that may not be possible with the dashboard or providing more granular control. Meraki APIs are RESTful APIs using HTTPS for transport and JSON for object serialization.

By providing open API accessibility, Meraki leverages the power of the cloud platform on a deeper level to create more efficient and powerful solutions. Through Meraki APIs, users can automate deployments, monitor their networks, and build additional solutions on top of the Meraki dashboard.
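As a minimal sketch, a list of organizations can be pulled with a single authenticated GET using only Python's standard library. The base URL and bearer-token header follow current Dashboard API v1 conventions; the API key shown is a placeholder:

```python
import json
import urllib.request

BASE_URL = "https://api.meraki.com/api/v1"

def build_request(path: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated GET request against the Dashboard API."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def get_organizations(api_key: str) -> list:
    """Return the organizations visible to this API key."""
    with urllib.request.urlopen(build_request("/organizations", api_key)) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    for org in get_organizations("your-api-key-here"):  # placeholder key
        print(org["id"], org["name"])
```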




API keys are tied to a specific user account through the Meraki platform. If an individual has administrative access to multiple Meraki organizations, a single key can configure and control those multiple organizations.

Reliability and Availability         

Meraki enables a high-availability (HA) architecture in multiple ways to ensure high serviceability to our customers. Network connections through our data centers are high in bandwidth and highly resilient. Shared HA structures ensure data is available in case of a localized failure, and our data center backup architecture ensures customer management data is always available in the case of catastrophic failure. These backups are stored on third-party cloud-based storage services. These third-party services also store Meraki data based on region to ensure compliance with regional data storage regulations.

Meraki constantly monitors the integrity of the multiple high-speed connections out of its data centers. Connectivity tests, such as DNS reachability checks, verify that integrity, and data centers fail over to secondary links in the case of a degraded link.

Meraki Server High Availability

A single device connects to multiple Meraki servers at the same time, making sure all data is kept up-to-date in case there is need for a failover. This secondary Meraki server connection verifies device configuration integrity and historical network usage data in the case of a Meraki server failure.




In the event of server failure or connection loss, node connectivity can fail over to the secondary server. Upon recovery of the primary server, the connection will be reestablished without noticeable impact to the connecting nodes.

Data Center Backup High Availability

Meraki keeps active customer management data in a primary and a secondary data center in the same region. These data centers are geographically separated to avoid physical disasters or outages that could potentially impact both at once. Data stored in these data centers is synced in real time. In the case of a data center failure, the primary data center will fail over to the secondary data center with the most recent configuration stored.




Disaster Recovery Plan

The storage of customer management data and the reliability of its dashboard and API services are primary priorities for Meraki. To help prevent data loss in the event of a disaster, Meraki has multiple major points of redundancy. Each Meraki data center is paired with another data center in the same region. If a data center is completely wiped out, backups can be brought up within minutes at the other in-region data center. Next, if both data centers are impacted, nightly backups hosted in two different third-party cloud storage services, each with their own physical storage redundancies, can be used to recover data.  

Management Data

The Meraki cloud gathers and stores certain types of “management” data to enable its solutions. All forms of data are encrypted in transit to and from Meraki servers. There are four major types of data stored in the Meraki cloud:

User records: Includes account email and company name or other optional information such as user name and address.

Configuration data: Includes network settings and configurations made by customers in the Meraki dashboard.

Analytics data: Includes client, traffic, and location analytics data, providing visualizations and network insights into traffic patterns across customer sites.

Customer-uploaded assets: Includes custom floor plans and splash logos.

Server Data Segregation

User data on Meraki servers is segregated based on user permissions. Each user account is authenticated based on organization membership, meaning that each user only has access to information tied to the organizations they have been added to as users. Organization administrators add users to their own organizations, and those users set their own username and secure password. That user is then tied to that organization’s unique ID, and is then only able to make requests to Meraki servers for data scoped to their authorized organization IDs.

Additionally, the Meraki development teams have separate servers for development and production, so Meraki never uses live customer data for testing or development. Meraki user data is never accessible to other users or subject to development changes.

Network and Management Data Segregation

The Meraki “out of band” control plane separates management data from user data. Management data flows from Meraki devices (e.g. wireless access points, switches, and security appliances) to the Meraki cloud over a secure internet connection. User data (network traffic, web browsing, internal applications, etc.) does not flow through the Meraki cloud, and instead flows directly to the destination on the LAN or across the WAN.

Network Usage Data Retention

Meraki stores management data such as application usage, configuration changes, and event logs within the backend system. Customer data is stored for 14 months in the EU region and for 26 months in the rest of the world. Meraki data storage time periods are based on year-over-year reporting features in the dashboard (12-month periods), plus additional time to ensure data is removed from Meraki backups upon deletion (two months). Meraki uses a proprietary database system to build up easily searchable and referenceable data.



Segregated User Assets

Meraki stores customer-uploaded assets such as custom floor plans and splash logos. These items are leveraged within the Meraki dashboard for only that specific customer network and therefore are segmented securely based on standard user permissions tied to organization or network ID access. Only users authenticated to access the host network are able to access uploaded assets.

Data Security

All data transported to and from Meraki devices and servers is transported via a secure, proprietary communications tunnel (see the “Secure Connectivity” section above). Communications data is encrypted in transit via this tunnel. All client-management connections (dashboard/API) to the Meraki cloud have secure TLS encryption for all application traffic.

Additionally, Meraki data backups are fully encrypted using AES256 and have restricted access (see the “Physical and Operational Internal Security” section).

Data Privacy

Connecting to a cloud solution entails storing specific data in the cloud for easy use and access. To maintain integrity and security, a cloud infrastructure must take into account the sensitivity and compliance rules of that data. Specific industries and geographies have laws protecting user data, which Meraki addresses through its flexible cloud infrastructure.

Meraki embeds privacy by design in its product and feature development as well as business practices. Privacy is an integral piece of the Meraki design process and is a consideration from initial product design all the way through to product implementation. Meraki offers a full suite of privacy-driven features to all customers globally. These features allow our customers to manage privacy requirements and help support their privacy initiatives. Customers can read more about some of the Meraki privacy features in our Data Privacy and Protection Features article.


Meraki provides a comprehensive solution to ensure a PCI-compliant environment held to the strict standards of a Level 1 PCI audit (the most rigorous audit level). The rich security feature set addresses all PCI data security standards, helping customers build and maintain a secure network, maintain a vulnerability management program, implement strong access control measures, and monitor network security.


Customer security is a top priority for Meraki. Heavy investments in tools, processes, and technologies keep our users and their networks safe, including features like two-factor authentication for dashboard access and out-of-band cloud management architecture.

In addition to Meraki and Cisco’s internal security teams, Meraki leverages third parties to provide additional security. Precautions such as daily third-party vulnerability scans, application testing, and server testing are embedded in the Meraki security program. Meraki additionally started a vulnerability rewards program for both hardware and software, which encourages external researchers to collaborate with our security team to keep our infrastructure and customers safe. More information about this program can be found on our Bugcrowd program page.

Meraki intelligent security infrastructure eliminates the management complexities, manual testing, and ongoing maintenance challenges that lead to vulnerabilities. The intuitive and cost-effective security features are ideal for network administrators, while powerful and fine-grained administration tools, account protections, audits, and change management appeal to chief information security officers.

Hardware and Software Security

Meraki leverages technology such as secure boot, firmware image signing, and hardware trust anchors as part of the Cisco Secure Development lifecycle to maintain hardware and software integrity.

Physical and Operational Internal Security

Meraki is committed to maintaining user security by providing mandatory operational security training for all employees. Formal information security awareness programs have been put in place for all employees. In addition, all employees and contractors are required to comply with Cisco’s background check policy and are bound by the Meraki information security policy and industry standard confidentiality agreements.

Remote access to Meraki servers is done via IPSec VPN and SSH. Access is scoped and restricted by our internal security and infrastructure teams based on strict rules for business need.

For access to Meraki cloud servers, databases, and code, there are role-based access models for user access and specific permissions in place. Two-factor authentication is enforced for all users who have access to these systems, both internally and remotely.

Physical access to the Meraki cloud infrastructure is secured at all hours by guard service patrols and covered by external and internal video surveillance with real-time monitoring. All data centers use a high-security key card system and biometric readers for physical access. Access to these data centers is only given to users with a business need, leveraging PKI and two-factor authentication for identity verification. This access is limited to a very small number of employees, and user access is audited monthly.

Please note that this reference guide is provided for informational purposes only. The Meraki cloud architecture is subject to change.

Building a Scalable Meraki Solution

When designing a network solution with Meraki, there are certain considerations to keep in mind to ensure that your implementation remains scalable to hundreds, thousands, or even hundreds of thousands of endpoints.

Design Considerations

Meraki Solution Sizing

When planning to deploy a Meraki solution, whether it be a small part of a larger network solution or a full-stack total solution, it is essential to take some time to consider the organizational structure you will use. Generally, your structure will be determined based on the size of your deployment.


You will need to make a few considerations based on the way the Meraki cloud solution is structured. You will begin by creating a Meraki account, which is a user’s identity for managing the Meraki dashboard management interface. Accounts have access to "organizations," which are logical containers for Meraki "networks." Meraki networks, in turn, are logical containers for a set of centrally managed Meraki devices and services.



By understanding this structure, you can begin to determine how many you will need of each.
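The containment model above (account, organization, network) can be summarized in a small sketch; all names and emails here are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Network:
    """Logical container for centrally managed Meraki devices and services."""
    name: str
    devices: list = field(default_factory=list)   # e.g. device serials

@dataclass
class Organization:
    """Logical container for Meraki networks."""
    name: str
    networks: list = field(default_factory=list)

@dataclass
class Account:
    """A user identity; may manage one or more organizations."""
    email: str
    organizations: list = field(default_factory=list)

# One account managing one organization with two site networks:
admin = Account("admin@example.com", [
    Organization("Acme Corp", [Network("HQ"), Network("Branch 1")]),
])
```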

Number of User Accounts

Account Scope General Recommendation

  • 2 or more user accounts per owner
  • 1 user account per network admin


Your Meraki account is your first step in building a Meraki solution, and it will also be your only method of gaining access to your devices and distributing access to other users. As such, we strongly recommend having at least one secondary account for owners, in case you are locked out of or lose access to your primary account. Lost or forgotten passwords are common, but lost email access can lead to total lockout from your organizations, so it is essential to consider a backup plan at the beginning of the planning process.


Additional network administrators or viewers will only require one account. Alternatively, distributed SAML access for network admins is often a great solution for ensuring internal scalability and secure access control.

Number of Organizations per Account

Organization Scope General Recommendation

  • One organization per customer


  • One organization per service


If you are the customer who will be using and managing Meraki equipment, it is likely that you will only need a single organization. Each organization is simply a container for your networks, and a single-organization model is generally the simplest solution if it's realistic for your deployment.


Multiple organizations are recommended in a few different situations.

  • Global multi-region deployments with needs for data sovereignty or operational response times
    • If your business exists in more than one of: The Americas, Europe, Asia/Pacific, China - then you likely want to consider having separate organizations for each region. Each dashboard organization is hosted in a specific region, and your country may have laws about regional data hosting. Additionally, if you have global IT staff, they may have difficulty with management if they routinely need to access an organization hosted outside their region.
  • Companies with multiple business types and different operational structures
    • Companies that have split business units generally find that they want multiple organizations for simpler management, based on which company sub-group or sub-company is using the service. This is commonly split based on groups like "mergers and acquisitions" vs "corporate" or "retail locations" vs "service locations," etc.
  • Very large companies with multiple distinct use cases
    • Very large companies, with tens or hundreds of thousands of employees, will often separate their organizations based on types of workers. Organizations may be split into groups such as "remote teleworkers," "branch offices," "campus offices," etc.
  • Service Provider businesses with separate service offerings
    • Service providers, companies that sell or lease Meraki service solutions to their end users, will generally find that they require multiple organizations. Generally, the number of organizations for service providers will be determined based on one of the following structure models.
      • One organization per service: Separate organizations for each service offering. Different organizations often represent different tiers of service.
      • One organization per customer: Common in cases when the end customer owns their own equipment or requires full management of their own network.
      • One organization per SD-WAN: The scope of an organization defines the connectivity domain for Meraki SD-WAN.
    • More information on service provider dashboard structures can be found in our Service Provider Recommended Dashboard Structures article.


Additionally, it is important to consider Meraki server and data center limits. Meraki server architecture is a multi-tenant solution that hosts multiple customers on the same hardware with secure permissions-based segmentation among them. The maximum scale supported in a single organization is 25,000 physical Meraki devices. If a single business intends to have more than 20,000 Meraki devices in their solution, they are strongly encouraged to work with their account team to design a deployment strategy across multiple organizations.

Number of Networks per Organization

Network Scope General Recommendation

  • One network per physical location or branch


Generally, networks will represent physical locations or actual LANs at different sites. A deployment with only 5 sites will usually only have 5 networks. Dashboard networks only support 1 MX security appliance per network (or 2, including 1 as a spare), so if MX devices are being used as LAN/WAN boundaries, each network will represent a unique LAN. Device configurations are scoped on a per-network basis, so generally, networks can also be thought of as representing unique configurations. For example, all access points on a network will share a common set of SSIDs. All layer 3 switches on a network will share routing information.


Additionally, for smaller deployments, it is sometimes common to separate networks based on type of device. So, there may be a "wireless devices" network, an "endpoint management" network, a "cameras" network, etc.

It is worth noting that, at more than 2000-5000 networks, the list of networks might start to be troublesome to navigate, as they appear in a single scrolling list in the sidebar. At this scale, splitting into multiple organizations based on the models suggested above may be more manageable.


For service providers, the standard service model is "one organization per service, one network per customer," so the network scope general recommendation does not apply to that model.

Number of Devices per Network

The number of Meraki devices (MX, MS, MR, MV, etc., not user devices) per network is a much more variable number that does not have a general recommendation. It will vary from case to case.

Note that there is a limit of 1000 devices per network. Networks exceeding this number should be split. However, it is generally uncommon for networks to approach this number unless they have a very large number of cameras or wireless access points. If this is the case, it is recommended to split the networks based on physical areas or use cases.

Multi-Organization Considerations

Each organization represents a separate SD-WAN instance. Meraki SD-WAN is implemented through use of Meraki Auto VPN technology. Auto VPN is an extremely low-touch solution for creating SD-WAN networks, by automatically networking all sites in a single organization. Because of this, multi-organization deployments are also multi-SD-WAN-domain deployments. Usually this is preferred for customers with needs for multiple organizations, but it should be noted in design.

Geographic Locations

For compliance reasons, many countries require information gathered by companies to be kept within specific geographic regions. You should consider creating separate organizations in order to stay compliant. Additionally, whenever one is leveraging a cloud-based solution, making sure the administrators of that system are close to the management hub makes the execution of cloud management more seamless. For example, deployments in the EU are subject to compliance with the GDPR, and deployments in China are subject to country-wide security restrictions. Organizations may need to be scoped by region based on these considerations.


In order to maintain consistency between networks, many customers use network templates. Templates allow administrators to quickly make many copies of a certain network configuration across an organization. However, templates are not maintained across different organizations, and there are not currently any options for copying networks among organizations. It is recommended to change all relevant templates across your organizations at the same time if they are intended to act the same.


Similar to templates, firmware consistency is maintained across a single organization but not across multiple organizations. When rolling out new firmware, it is recommended to maintain the same firmware across all organizations once you have gone through validation testing.

Ordering Recommendations

It is recommended to order and ship devices within the same country. Doing so will automatically assign the correct regulatory domain for devices in the order, when relevant. This primarily applies to devices with wireless capabilities.

It is also recommended to separate your orders based on organization for inventory and claiming reasons (listed below). Orders for hardware that will be used in multiple organizations should ideally be split, unless doing so would cause more difficulty than it would solve.

Conversely, multiple orders for a single organization (made at the same time) should ideally be combined. One order per organization usually results in the simplest deployments for customers.

Inventory and Claiming

When claiming new Meraki devices, it is recommended to claim by order number within the organization you intend to use the devices (as opposed to claiming individual serial numbers). Claiming by order number will pull in all hardware and licenses associated with the order and tie them to the organization before devices ever physically arrive on site. Once claimed, devices can be moved from one organization to another if needed. Meraki recommends always claiming by order number when possible, rather than claiming by MAC or serial number.
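Claiming by order number can also be driven through the API. As a sketch, assuming the Dashboard API v1 claim endpoint (POST /organizations/{organizationId}/claim, which accepts a list of order numbers; the key, organization ID, and order number below are placeholders):

```python
import json
import urllib.request

BASE = "https://api.meraki.com/api/v1"

def build_claim_request(api_key: str, org_id: str,
                        order_numbers: list) -> urllib.request.Request:
    """Claim every device and license on the given orders into org_id."""
    return urllib.request.Request(
        f"{BASE}/organizations/{org_id}/claim",
        data=json.dumps({"orders": order_numbers}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example (placeholder IDs): claim order 4C000001 into organization 123456
# urllib.request.urlopen(build_claim_request(key, "123456", ["4C000001"]))
```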


API keys are tied to the access of the user who created them. Programmatic access should only be granted to entities you trust to work within the organizations they are assigned to. Because API keys are tied to accounts, and not organizations, it is possible to have a single multi-organization primary API key for simpler configuration and management. This can be achieved by giving an account organization-level permissions for all organizations. However, access to this account should be granted carefully.

Deployment Considerations

Cloning Networks

An excellent way to save time in deployments with many networks is to clone networks. The larger a deployment is, the more helpful it is to have one or more "golden configuration networks" which are never used for devices, but represent an ideal configuration that new networks should have. Whenever a new network is created, there is an option to clone it from an existing network, and it's usually best to clone from networks specifically configured for this purpose. When planning a deployment, these "golden configuration networks" should ideally be created first, and subsequent networks can be copied from them.
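The Dashboard API exposes this same clone-on-create behavior via a `copyFromNetworkId` field when creating a network (per the API v1 docs). The sketch below builds such a request against a hypothetical "golden configuration network"; IDs and names are placeholders.

```python
# Sketch: creating a new network cloned from a golden configuration network.
# Field names follow the documented POST /organizations/{orgId}/networks body;
# the org ID, network name, and golden network ID are illustrative.
def build_clone_request(org_id: str, name: str,
                        golden_network_id: str, product_types: list) -> tuple:
    url = f"https://api.meraki.com/api/v1/organizations/{org_id}/networks"
    body = {
        "name": name,
        "productTypes": product_types,
        # Inherit the golden network's configuration at creation time:
        "copyFromNetworkId": golden_network_id,
    }
    return url, body

url, body = build_clone_request("123456", "Store-42", "N_golden",
                                ["appliance", "switch", "wireless"])
```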

Template-Based Networks

Cloning networks provides a static method for creating many networks with identical configurations. Alternatively, templates provide a more dynamic solution. Configuration templates can allow many Meraki dashboard networks to be deployed following a single base configuration network, and they will dynamically update as changes are made to the base configuration network. This makes it much easier to roll-out new sites/users and maintain consistency across each site's configuration. Networks that are bound to a template will utilize its configuration as a base. Any changes made to the template will then be pushed out to all bound networks.


Template-based networks are most useful in cases where a large number of sites exist that share a common network design. Examples of this are common in retail deployments with many stores, or in cases with large numbers of home users with teleworker VPN devices connecting to a corporate network over VPN.


Templates should always be a primary consideration during deployments, because they will save large amounts of time and avoid many potential errors.


It should be noted that service providers or deployments that rely heavily on network management via APIs are encouraged to consider cloning networks instead of using templates, as the API options available for cloning currently provide more granular control than the API options available for templates.


Tagging

Tagging is a way to group or identify devices, networks, or ports for specific use cases. These tags can be used to search, filter, identify, or assign access to particular functions. Tags can be applied to the following items:

  • Networks
  • Meraki devices
  • Switch ports
  • Systems Manager devices


Tagging networks allows specific admins to have network-level configuration access without organization-wide access. Access can be scoped based on network tags, which allows for much more granular access control. This is most commonly used to assign permissions to local IT admins who are not "super users." Additionally, network tagging allows "visibility-only" roles for users who need to see only the most relevant application data. This is most commonly used for managers interested in the traffic usage of their network who do not need to make configuration changes.
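As an illustration, a tag-scoped admin can be created via the API by granting no organization-wide access and attaching per-tag permissions; the body below follows the documented shape of the admins endpoint (`POST /organizations/{orgId}/admins`), with hypothetical names.

```python
# Sketch: JSON body for a tag-scoped dashboard admin. "orgAccess": "none"
# withholds organization-wide rights; the "tags" list grants access only to
# networks carrying the given tag. The name, email, and tag are illustrative.
def build_tag_scoped_admin(name: str, email: str, tag: str,
                           access: str = "read-only") -> dict:
    return {
        "name": name,
        "email": email,
        "orgAccess": "none",                       # no organization-wide rights
        "tags": [{"tag": tag, "access": access}],  # scoped to tagged networks
    }

body = build_tag_scoped_admin("Pat Local-IT", "pat@example.com", "west-stores")
```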


Meraki device tags are used to easily search for certain types of devices on your networks based on user-defined attributes. Common device tags may include locations like "1stFloor," "Dorms," or "Classroom," or attributes of the device like "LeftRack" or "MiddleRack" for MX and MS models, or "CeilingMount" and "WallMount" for MR or MV models. Additionally, device tags can be used for VLAN assignment: there is an option to tag traffic with a particular VLAN on specific APs based on their device tags.


Systems Manager device tags are used to logically group end-user devices together and associate them with applications and profiles. Users may be given a tag for a certain application that should only be installed on their devices, or a certain security level that should only apply to them.


Switch port tags allow administrators to set granular port management privileges. For example, organization administrators could use port tags to give read-only admins configuration access and packet capture capability on specific ports.


Dashboard Administrators

There are two basic types of dashboard administrators: organization administrators and network administrators.

  • Organization administrators have complete access to their organization and all its networks. This type of account is equivalent to a root or domain admin, so it is important to carefully maintain who has this level of control.

    • Organization - Read-only: The user is able to access/view most aspects of network and organization-wide settings, but is unable to make any changes.

    • Organization - Full: The user has full administrative access to all networks and organization-wide settings. This is the highest level of access available.

  • Network administrators have access to individual networks and their devices. These users can have complete or limited control over their network configuration, but do not have access to organization-level information (licensing, device inventory, etc).

    • Network - Guest ambassador: The user is only able to see the list of Meraki authentication users, add users, update existing users, and authorize/de-authorize users on an SSID or Client VPN. Ambassadors can also remove wireless users, if they are an ambassador on all networks.

    • Network - Monitor-only: The user is only able to view a subset of the Monitor section in the dashboard, and no changes can be made. This can be useful for providing network monitoring access to customers in service provider deployments.

    • Network - Read-only: The user is able to access most aspects of a network, including the Configure section of the dashboard, but no changes can be made.

    • Network - Full: The user has access to view all aspects of a network and make any changes to it.


SAML (Security Assertion Markup Language) can be used with the Cisco Meraki dashboard to provide external authentication of users and a means of SSO (Single Sign-On). SAML users can be organization administrators or network administrators. Assignment of permission to these roles is identical to that of normal users. SAML access is highly recommended in deployments already set up with an identity provider service (IdP).

SNMP vs API for Device Status Reporting

Many deployments benefit from some type of device reporting, or may already have a mechanism in place for monitoring device status. Options for monitoring devices include standard dashboard monitoring, SNMP reporting, and API device status reporting. SNMP is available for users accustomed to an SNMP solution, but for large deployments (20,000+ devices) we highly recommend relying on device status reporting via the API for scalability. Small to medium-sized deployments may also find that an API solution for device reporting better suits their needs, so the option should be considered.

You can read more about our available API endpoints for device status reporting in our API docs.
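The device status endpoints are paginated, so any reporting script needs to walk pages until none remain. Below is a minimal sketch of that loop, with an injected page fetcher standing in for the real HTTP calls (in practice, pagination uses `perPage` and a next-page cursor per the API docs).

```python
# Sketch: collecting all device statuses from a paginated endpoint such as
# GET /organizations/{orgId}/devices/statuses. The page fetcher is injected
# so the example stays offline; a real one would issue HTTP requests and
# return (items, next_page_token_or_None).
def fetch_all_statuses(fetch_page, per_page: int = 1000) -> list:
    """Walk pages until the API stops returning a next-page token."""
    statuses, token = [], None
    while True:
        items, token = fetch_page(per_page, token)
        statuses.extend(items)
        if token is None:
            return statuses

# Stand-in pages: first page yields two items and a cursor, second ends.
pages = {None: (["online", "online"], "t1"), "t1": (["offline"], None)}
fake_fetch = lambda per_page, token: pages[token]
print(fetch_all_statuses(fake_fetch))  # → ['online', 'online', 'offline']
```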

Syslog Reporting

The Meraki dashboard has built-in event log reporting for all of its devices, but the event log is limited to a history of about 3 months. Any deployments that require longer historical records should include a syslog server solution and enable syslog reporting on their networks. The MX Security Appliance supports sending four categories of messages (roles): Event Log, IDS Alerts, URLs, and Flows. MR access points can send the same roles with the exception of IDS alerts. MS switches currently only support Event Log messages.

You can read more about setting up syslog with a Meraki deployment in our syslog Server Overview and Configuration article.
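Syslog servers can also be configured programmatically. The sketch below follows the shape of the syslog servers endpoint (`PUT /networks/{networkId}/syslogServers`) as documented; the exact role strings accepted vary by product type, so verify them against the article above before use.

```python
# Sketch: JSON body for configuring a network's syslog server via the API.
# Host, port, and role strings below are illustrative; check the API docs
# for the exact role names your network type accepts.
def build_syslog_config(host: str, port: int, roles: list) -> dict:
    """Body for PUT /networks/{networkId}/syslogServers (API v1)."""
    return {"servers": [{"host": host, "port": port, "roles": roles}]}

cfg = build_syslog_config("10.0.0.50", 514, ["Flows", "URLs", "IDS alerts"])
```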

Zero-Touch Deployments

Cisco Meraki links its ordering and cloud dashboard systems together to give customers an optimal onboarding experience. Because all Meraki devices automatically reach out to cloud management, no pre-staging of devices or management infrastructure is needed to onboard your Meraki solutions. Configurations for all your networks can be made ahead of time, before ever installing a device or bringing it online, because configurations are tied to networks and are inherited by each network's devices. Additionally, thanks to the real-time remote troubleshooting tools built into the dashboard, an IT admin can remotely view the installation status while remote installers physically plug in ports and access points, allowing for a truly zero-touch deployment.

Each device, upon connecting to the internet, automatically downloads its configuration via the Meraki cloud, applying your network and security policies automatically so you don’t have to provision on-site. Wireless APs optimize their RF configuration based on the environment, and switches integrate seamlessly into existing RSTP domains. We recommend configuring your network ahead of time, before deploying, to ease installation time and avoid deployment errors.

Internet Access

Because each Meraki device gets all of its configuration information from the Meraki Cloud platform, the devices must have the ability to call out to the internet and access the Meraki platform for onboarding. This means that DHCP and DNS rules should be configured on your management VLAN and proper firewall rules should be opened outbound to make sure all Meraki devices are able to connect once they're turned on. A list of all ports and IPs needed for firewall rules can be found in your Meraki dashboard under Help > Firewall info, as the ports may vary based on which types of Meraki devices are in your organization.

Best Practices for Meraki Firmware

Introduction to Meraki Firmware

Cisco Meraki has always prided itself on delivering powerful networking and IT solutions in a simple, easy to manage fashion. This extends to firmware management on Meraki devices. Traditionally, firmware management is a tedious, time-consuming, and risky procedure met with dread and loathing by the network administrator tasked with carrying out the upgrades, but Meraki works to limit this burden. Complexity has long plagued firmware management practices throughout the industry, spawning horror stories about experiences such as upgrades that went sideways because of a corrupted USB drive or late nights in data centers manually provisioning the new code.

Why the Cloud Makes Us Different

Meraki tackles the complex firmware issue by leveraging the power of Meraki’s cloud-based dashboard to allow for easy deployment and firmware scheduling. The dashboard provides unique insights into new features as they become available in new firmware releases.

From a security perspective, the benefits of the cloud are unparalleled. Today, new security vulnerabilities are constantly announced, and network infrastructure is not immune to exploits. To contain threats at this scale, flexibility and rapid software remediation is paramount. At Meraki, we have the power to immediately react to discovered exploits, patch the vulnerability, and make this firmware immediately available for customers to leverage.

Simplified Firmware Updates

In the early days of Meraki, the only firmware configuration required was to specify a convenient maintenance window for your network, such as late at night on a weekend. As Meraki has grown alongside its customer base, we have incorporated tighter controls over firmware for customers who desire them, while still maintaining the simplicity of cloud-based delivery. Customers can now manage firmware for each network in their organization by selecting which firmware runs on which network.




Meraki differentiates itself through its firmware delivery using the Meraki cloud platform, providing an exceptionally swift and reliable way to deliver firmware upgrades. The results are evident in our users’ impressive firmware adoption rates. Even given the options for finer controls, the vast majority of our users adopt and run our latest firmware builds almost immediately after stable release candidates are available. Our extensive testing and our beta adoption process ensure that we deliver reliable builds at a regular cadence, providing up-to-date security and stability.

Meraki Firmware Conventions

Meraki firmware nomenclature is the same across products and consists of a major and a minor number. The firmware version is named using the format below:

<Product Name> <Major Firmware Version>.<Minor Firmware Version>

  1. Major Versions

A new major firmware is released with the launch of new products, technologies and/or major features. New major firmware may also include additional performance, security and/or stability enhancements.

  2. Minor Versions

A new minor firmware version is released to fix any bugs or security vulnerabilities encountered during the lifecycle of a major firmware release.
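The naming convention above is simple enough to parse mechanically, which is handy when auditing versions across networks via the API. A minimal sketch:

```python
# Sketch: splitting "<Product Name> <Major>.<Minor>" into its parts,
# following the naming convention described above.
def parse_firmware_version(version: str):
    """E.g. 'MX 18.107' -> ('MX', 18, 107)."""
    product, number = version.rsplit(" ", 1)   # product name may contain spaces
    major, minor = number.split(".", 1)
    return product, int(major), int(minor)

print(parse_firmware_version("MX 18.107"))  # → ('MX', 18, 107)
```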


The Meraki firmware release cycle consists of three stages: beta, release candidate (RC), and stable. This cycle is covered in more detail in the Meraki Firmware Development Lifecycle section of this document.

Automated Firmware Upgrades

Meraki’s goal is to make networking simple, and one of the ways we do this is by automating firmware upgrades. In order to support the process of firmware maturity and to provide the most stable experience to customers, Meraki will schedule firmware upgrades for networks that meet the criteria for a firmware upgrade. All networks, by default, receive automated upgrades. While this upgrade method does not require any additional input from the administrator, it may not be appropriate as a complete firmware management process, depending on the needs of your network.

When new firmware becomes available, it is immediately available in the dashboard for an administrator to upgrade to. Though it will eventually be pushed to qualified networks via the automated upgrade process, that process does not happen immediately after release and is rolled out over time. The automated process can sometimes take weeks to reach all networks, depending on certain factors.

Some factors that may affect the automated deployment time period include potential conflicts between new and old firmware builds, the number of devices receiving the new build, and special configurations on critical devices or networks that require caution during upgrades. Meraki’s primary considerations when deploying firmware upgrades are preserving maximum security, uptime, and compatibility. If any of these factors are at risk, Meraki may choose to delay deployment until those risks have been resolved.

The firmware that is selected via the automatic upgrade process can be one of three release types: Stable, Stable Release Candidate, or Beta. When an automated firmware upgrade is released by Meraki, networks that are scheduled for automated upgrades will be moved to the latest version. Customers will be notified via email when these upgrades are scheduled.

Automatic upgrades for beta firmware releases will only be scheduled for customers that have enabled the 'Try beta firmware' option in Network-wide > Configure > General or who are already running an older beta firmware release.

While automated firmware upgrades are pushed out to all networks over time, due to the potential delays mentioned above, a more manual process may be required for some organizations. If a network needs a more timely upgrade pattern, it is best for the organization administrators to schedule upgrade times manually on the Organization > Firmware Upgrades page in the dashboard.
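Upgrade windows can also be managed programmatically. The fragment below follows the shape of the firmware upgrades endpoint (`PUT /networks/{networkId}/firmwareUpgrades`) as documented; verify the exact field names and accepted values against the current API reference.

```python
# Sketch: partial JSON body for setting a network's firmware upgrade window.
# Day/time values below are illustrative; the API reference lists the
# accepted strings (e.g. abbreviated day names and "H:MM" hours).
def build_upgrade_window(day_of_week: str, hour_of_day: str) -> dict:
    """Partial body for PUT /networks/{networkId}/firmwareUpgrades."""
    return {"upgradeWindow": {"dayOfWeek": day_of_week, "hourOfDay": hour_of_day}}

body = build_upgrade_window("sun", "2:00")  # Sunday at 02:00 local time
```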

Administrators and network alert recipients will be notified when an automated firmware upgrade is scheduled. By default, these upgrades are scheduled 1 to 2 weeks from the date of notification. Additionally, a notification banner within dashboard will be present for organization administrators after the upgrade has been scheduled. Networks that do not contain devices or where all devices are dormant will have upgrades scheduled immediately.

This firmware upgrade process cannot be opted out of, as it is a core service provided by Meraki; however, the upgrade(s) may always be rescheduled.

Automated firmware upgrade decisions are made on a per-network basis. As a result, if an upgrade is to be deployed it may or may not be deployed to all networks in the organization with that device type. 

Automated firmware upgrades do not occur on a fixed timetable. As a result, a network running older beta firmware may not be immediately upgraded to recently released beta firmware. 

Some networks might not get a firmware upgrade scheduled for various reasons. We recommend that network administrators periodically check all of their dashboard networks for available firmware upgrades and manually upgrade them to the latest firmware versions in such scenarios.

General Firmware Best Practices

Meraki was built on the promise of making management of devices intuitive, and this extends to Meraki firmware management. Thanks to the power of the Meraki dashboard, we are able to create and release high quality firmware that allows access to cutting-edge features and high quality, secure software. Out of the box, we recommend you let the simple, automatic and seamless updates work to your advantage. By default, your devices will be scheduled for updates when new firmware becomes available — firmware that has been robustly validated and tested before being deployed.


Meraki’s default firmware settings include:

  • no automatic beta firmware deployments

  • a default upgrade window

  • a default upgrade day of Wednesday


On average, Meraki deploys a new firmware version once a quarter for each product family. This cadence ensures you get access to new features and functionalities as they become available, while minimizing major changes between firmware versions to ensure high-quality software.


Once you are scheduled for an automatic update, Meraki will notify you 2 weeks in advance of the scheduled upgrade, and within this two-week window you have the ability to reschedule to a day and time of your choosing. We recommend selecting a time that is most convenient for your business needs; if you want, you can set this time as your default upgrade window under your general network settings.




If you want to take advantage of the most advanced and newest features, we recommend that you enable the “Try beta firmware” toggle. This will give you early access to the latest Meraki firmware after it has finished the full internal automated and manual testing process in our firmware development cycle. If you are running beta firmware, you get earlier access to new features, as well as the opportunity to provide feedback on these features before they become generally available! If you have any issues on the new beta firmware you can always roll back to the previous stable version, or the previously installed version if you roll back within 14 days.


With configuration templates it is possible to push a standard configuration to multiple sites at the same time. When a network is bound to a template, the firmware is controlled by the template; it is not possible to configure a network to use a different firmware version than what the template is configured for. In certain cases Meraki Support is able to upgrade individual devices, but this should not be relied upon, as it prevents normal upgrades in the future.


Best Practice for Multi-Branch Deployments

Now that we understand how the Meraki firmware system works, let's talk about how you can leverage this to confidently manage firmware on your network.




As shown in the diagram above, firmware should be rolled out in stages when managing a large-scale network. This approach allows you to test new features and verify stability in your production environment before rolling out new features globally.

Test Network(s) (Beta)

It is recommended to have designated network(s) to test beta firmware when it is released. A test network can be a lab network or a smaller production network that still has enough devices to test new features. If there are production networks of different sizes in your organization, it is best to have a beta test network of each size.

Why you Should Test Beta Firmware

Although all Meraki beta firmware undergoes rigorous testing as described in the beta release process, we recommend testing the new beta code in your designated test networks. This ensures the firmware is tested based on the needs of your unique environment and works without issues for real users.

Beta Feedback Mechanisms

If bugs are encountered during beta firmware rollout, you should contact Meraki Support to ensure the issue is documented internally, using our defined process. Opening a case will ensure the appropriate details are collected and presented to Meraki engineering teams for resolution. As part of resolution, you may be provided with a hotfix (also commonly referred to as a patch fix) to verify resolution. Once a fix is confirmed, it will be rolled into a new beta version after going through our firmware release process so customers can continue testing.

Release Candidate Network(s) (Validate)

Once beta firmware is tested, you can choose to wait until the major version reaches release candidate (RC) status, or roll out the beta firmware to the remaining networks if you are satisfied with it. If you have a policy of only using stable firmware in production, you can move on to the next step in the process: rolling out the RC firmware to designated RC networks. As mentioned in the firmware rollout process, RC is very close to stable and can therefore be rolled out to a larger pool of networks in the production environment. During this phase, customers do not need an extensive test plan because, at this point, all new features have been tested and the focus is on widely rolling out the firmware through the network. If you are encountering problems with stable firmware, we recommend upgrading to the next release candidate to see if the problems continue; the release candidate will include many fixes that might resolve the problem.

Stable Release Network(s) (Full Deployment)

Once a firmware is marked as stable, customers can roll out firmware to all the remaining networks either using the firmware upgrades tool or, optionally, using the automatic upgrade process to roll out firmware. If you have followed our firmware best practice for validating and testing the current Stable Release, you can deploy with confidence that it will work well in your unique environment.


In the scenario where you find the new beta or release candidate firmware is functioning as required and you would like to use this version on your entire deployment, go ahead and deploy this version across your entire deployment - we strive to deliver high quality firmware at all stages of our development process. If you do run into issues after the deployment, you can always easily roll back to the previous major stable firmware version.

Best Practice for MR Firmware

As our wireless portfolio grows, Meraki continues to focus on delivering the high-performance, highly available network that modern deployments require. To achieve this goal we focus on minimizing downtime during an upgrade, maintaining scheduling flexibility, and preserving the accuracy of your upgrade maintenance window. Most Meraki access points (APs) will reboot in less than 1 minute after an update, ensuring minimal disruption to end users even if a firmware upgrade must be performed during working hours.


We are constantly working on improving the firmware upgrade experience and further minimizing network downtime. Please refer to the Access Point Firmware Upgrade Strategy for more details.


Additionally, when you are running a Meraki wireless network, it is important to keep a few things in mind to ensure a great firmware deployment experience. First, make sure you keep all of your APs on a single firmware version. Many Wi-Fi features depend on consistent behavior among the access points. For example, if you are using L3 roaming, different firmware versions may not be compatible with each other for L3 roaming features in particular.


Second, when upgrading a wireless network, client devices with older drivers may have issues with new features. We test against over 100 unique client devices (including many different laptops, smartphones and legacy wireless devices with unique wireless chipsets) in our labs before shipping any wireless firmware, but it's a good idea to have a single test AP to validate clients that might be unique to your business environment. These may include a custom point of sale (POS) system or barcode scanner that is critical to your business. Again, testing these potentially unique clients on the latest Meraki beta wireless firmware is a best practice as it ensures that Meraki can be notified of any potential interoperability issue early in the development cycle.


In addition to these basic best practices, Meraki APs also include features unique to the product line that make large scale firmware updates better.

Best Practice for Large Scale Wireless Networks

Traditionally, when running large scale campus wireless networks, upgrading wireless firmware has been considered risky. Thanks to the agile and cloud-based firmware development process used by Meraki engineers, there are a few things you can do to make these deployments less risky. Even in the largest networks, the best practice with Meraki is to designate an isolated area of your network to test and validate the newest Meraki firmware. With a designated Meraki MR test area, you can get access to validate all Meraki wireless firmware in your physical environment. This allows you to engage with Meraki engineers directly, earlier in the software development process, so you can help provide feedback on new features and identify any potential issue that may affect your deployment.




In the example above we have designated the cafeteria and a large meeting area as our Meraki test area. This was done by moving the selected APs into their own dashboard network so they could be assigned a (beta) firmware version, separate from the main network(s). This also allows the APs to be rolled back to a stable version quickly, if needed, by simply moving the APs back to the main production dashboard network.

This part of our deployment is an ideal choice for a few reasons:

  • The area includes six Meraki access points, which ensures we have a reasonable number of access points to test on
  • The area provides us with a diverse group of client devices, as people will bring many different smartphones and laptops to this area
  • Almost all employees frequent this area of the building at some point during the day
  • Because this is not a business-critical area, the impact of a potential wireless issue will be more manageable to the users


Once you have validated and are comfortable with the current firmware in the test environment, you can confidently deploy the update to the rest of your network. It is important to note that, in this example, you may occasionally have some roaming issues as users navigate in and out of the designated test area, because the deployed firmware versions may be different, and roaming may not yet be seamless between the two versions.

Best Practice for MS Firmware

When you move farther up the networking stack to switching, there are additional things to take into consideration. It is always important to consider the topology of your switches: as you move closer to the network core and away from the access layer, the risk during a firmware upgrade increases. Because of this, in a larger switch-based network you should always start the upgrade closest to the access layer. Two unique aspects of managing Meraki switch firmware are that we support both:


  • Staged upgrades to allow you to upgrade in logical increments. For instance, starting from low-risk locations at the access layer and moving onto the higher risk core, and...

  • Automatic updates for switch stacks


When upgrading Meraki switches it is important that you allocate enough time in your upgrade window for each group or phase to ensure a smooth transition. Each upgrade cycle needs enough time to download the new version to the switches, perform the upgrade, allow the network to reconverge around protocols such as spanning tree and OSPF that may be configured in your network, and some extra time to potentially roll back if any issue is uncovered after the upgrade.


The high-level process for a switch upgrade involves the following:

  1. The switch downloads the new firmware (time varies depending on your connection)

  2. The switch starts a countdown of 20 minutes to allow any other switches downstream to finish their download

  3. The switch reboots with its new firmware (about a minute)

  4. Network protocols reconverge (varies depending on configuration)
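For maintenance window planning, the four steps above can be turned into a rough per-stage time budget. The default figures below are illustrative assumptions (not Meraki guidance), apart from the documented 20-minute downstream countdown:

```python
# Sketch: rough per-stage maintenance window budget mirroring the four
# upgrade steps above, plus a rollback buffer. All defaults except the
# 20-minute downstream countdown are illustrative assumptions.
def estimate_stage_minutes(download_min: int = 10, downstream_wait: int = 20,
                           reboot_min: int = 2, reconverge_min: int = 5,
                           rollback_buffer: int = 15) -> int:
    return (download_min      # step 1: firmware download (varies by connection)
            + downstream_wait  # step 2: fixed countdown for downstream switches
            + reboot_min       # step 3: reboot on new firmware
            + reconverge_min   # step 4: STP/OSPF reconvergence
            + rollback_buffer) # extra time in case a rollback is needed

print(estimate_stage_minutes())  # → 52
```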


Meraki MS devices use a “safe configuration” mechanism, which allows them to revert to the last good (“safe”) configuration in the event that a configuration change causes the device to go offline or reboot. During routine operation, if a device remains functional for a certain amount of time (30 minutes in most circumstances, or 2 hours on the MS after a firmware upgrade), a configuration is deemed safe.

When a device comes online for the first time or immediately after a factory reset, a new safe configuration file is generated since one doesn’t exist previously. Multiple reboots in quick succession during initial bringup may result in a loss of this configuration and failure to come online. In such events, a factory reset will be required to recover. 

It is recommended to leave the device online for 2 hours for the configuration to be marked safe after the first boot or a factory reset.

Staged Upgrades

To make managing complex switched networks simpler, Meraki supports automatic staged firmware updates. This allows you to easily designate groups of switches into different upgrade stages. When you are scheduling your upgrades you can easily (as in the example below) mark multiple stages of upgrades. In this case, we started with the access layer switches in Stage 1 and gradually upgraded toward the core in Stage 3. Once you start the staged upgrade, the Stage 1 switches will complete the entire upgrade cycle before the Stage 2 upgrades start. This cycle will repeat until all the switches are upgraded in all three stages.
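The access-toward-core ordering can be sketched as a simple grouping step over a switch inventory; the layer names and switches below are illustrative, not a Meraki API.

```python
# Sketch: grouping switches into upgrade stages from the access layer toward
# the core, matching the staged-upgrade approach above. The layer labels and
# inventory are hypothetical examples.
STAGE_ORDER = ["access", "distribution", "core"]

def plan_stages(switches: list) -> list:
    """Return lists of switch names in the order they should be upgraded."""
    stages = []
    for layer in STAGE_ORDER:
        members = [s["name"] for s in switches if s["layer"] == layer]
        if members:
            stages.append(members)
    return stages

inventory = [
    {"name": "core-1", "layer": "core"},
    {"name": "idf-1", "layer": "access"},
    {"name": "idf-2", "layer": "access"},
    {"name": "dist-1", "layer": "distribution"},
]
print(plan_stages(inventory))  # → [['idf-1', 'idf-2'], ['dist-1'], ['core-1']]
```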



Upgrading a Switch Stack

In addition to supporting staged upgrades, Meraki also simplifies managing a switch stack. As part of our upgrade toolset, we automatically handle the upgrade of the entire switch stack. This upgrade manages the upload of firmware to each switch and takes care of each reboot within the switch stack. If you are upgrading switch stacks within your staged upgrade, Meraki will automatically upgrade the switch stack as part of the staged upgrades. The upgrade process for a stack follows the same high-level process outlined previously, with each stack member rebooting at close to the same time and the stack then automatically re-forming as the members come online.

Best Practice for MX Firmware

Unlike many other products offered by Meraki, MX appliances and Z-Series devices follow a one-dashboard-network-per-site model. This provides very granular control over how upgrades can be managed across the deployment.


All firmware upgrades will require that the MX appliance reboots, so it is important to ensure that an appropriate maintenance window has been put in place, as the MX upgrade process will take down the entire local network in most scenarios. Given the central/upstream nature of MX devices, it is also recommended to allow for sufficient time to monitor and test after the upgrade completes to ensure the maintenance window completes successfully.


When managing a deployment with many MXs, the following are useful best practices that can help make firmware transitions and management simpler.

Appliance Network with Two MXs in an HA Configuration

When MX appliances are configured to operate in High Availability (HA), either in NAT/routed mode or as one-armed VPN concentrators, the dashboard will automatically take steps to minimize downtime during upgrades, targeting a zero-downtime MX upgrade. This is achieved through the following automated process:


  1. The Primary MX downloads firmware

  2. The Primary MX stops advertising VRRP

  3. The Secondary MX becomes active

  4. The Primary MX reboots

  5. The Primary MX comes online again

  6. The Primary MX starts advertising VRRP again

  7. The Primary MX becomes active again

  8. The Secondary MX downloads firmware (approximately 15 minutes after the original upgrade is scheduled)

  9. The Secondary MX stops advertising VRRP

  10. The Secondary MX reboots and comes back online
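The steps above can be modeled as a simple timeline that records which appliance is VRRP-active after each step; the key property is that one gateway is active at every point, which is what makes the upgrade (near) zero-downtime. This is an illustrative model only, not Meraki code.

```python
# Illustrative model of the HA upgrade sequence. Each tuple records which
# appliance is VRRP-active after the step completes.
def ha_upgrade_timeline():
    return [
        ("primary downloads firmware", "primary"),
        ("primary stops advertising VRRP", "secondary"),
        ("primary reboots and comes back online", "secondary"),
        ("primary advertises VRRP and becomes active again", "primary"),
        ("secondary downloads firmware", "primary"),
        ("secondary stops VRRP and reboots", "primary"),
    ]

# One appliance is active at every step of the sequence, so clients
# always have a gateway.
assert all(active in ("primary", "secondary")
           for _, active in ha_upgrade_timeline())
```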

Deployments using AutoVPN

When upgrading a VPN concentrator, it is important to plan for a maintenance window that allows for the upgrades to complete and for verifications to be performed that ensure connectivity is fully re-established and network systems are healthy.


It is highly recommended that customers plan for maintenance windows in accordance with the scale and complexity of the deployment where the upgrades are being performed. For example, more time should be allotted for upgrading a VPN concentrator supporting 1000 spoke sites and leveraging a dynamic routing connection between the concentrator and datacenter, than for a VPN concentrator with only 10 spoke sites.


If beta firmware is being tested on a VPN concentrator, it is best to plan for time in the maintenance window to allow for the upgrade to complete and validate the operational state after the upgrade has been completed. It is also recommended to allocate an additional window of time for rolling back to the previous build, in case you run into unmanageable issues.


When concentrators are configured in HA, they will follow the steps mentioned above. VPN tunnels will begin establishing to the spare appliance while the primary is upgrading. However, the primary appliances typically complete the upgrades fast enough that spoke sites have minimal interactions with the spare concentrator.


In general, even with equipment in HA, it is best to always be prepared for some amount of downtime and impact for spoke sites. In almost all cases these are simply a matter of seconds as spoke sites fail between concentrator pairs, but the impact can become more noticeable if there are WAN connectivity problems between the data center and spoke locations.

Meraki Firmware Development Lifecycle

One of the key advantages of being a cloud-managed device company is that Meraki can leverage full internal automated testing while also using the cloud to monitor key device performance metrics across the entire installed base. To ensure robust and reliable firmware, Meraki follows a consistent software release process with four stages: alpha, beta, stable release candidate (RC), and stable. Every firmware version is created and released with the goal of graduating to stable. If a particular build fails our key metrics at any stage of the development process, a new build is created and the process begins anew. The following sections cover each stage in more detail.



Alpha Pre-Release

With all new Meraki firmware, including both major and minor releases, every new build starts by running through our full alpha testing process. Before any release reaches users' hands, we validate it against our ever-expanding testing suites and check for regressions or new features that are not performing as expected. Each product line has automated and manual testing specific to that product, designed to minimize the chance of regressions as we continue to create and expand our software feature set.


As part of our core philosophy, after a new build has successfully passed the testing phase, we deploy the new firmware release on our own personal and engineering networks. We believe it is important that we deploy and run our own firmware before any of our customers deploy our firmware. During this process we will run this firmware in our real world deployments for one or more weeks before we consider releasing the build as a new beta version.




If a build successfully passes all of our release criteria, we start to make it available to our customer base. If any issues are discovered, we address them and start the process over before moving the release forward. In rare cases, we move forward with a build that has a known regression, due to the complexity or timing of the fix; in that scenario, we note the regression in the release notes for that version.

Beta Release

Firmware is first made available for production use as "Beta." Customers often run beta firmware in their production networks to take advantage of new features and bug fixes. Beta firmware has already gone through internal regression, stability, and performance testing to limit risk when applied to production networks. Customers that opt into beta firmware via the "Try beta firmware" configuration option in dashboard will be automatically notified and scheduled to upgrade to these versions as they are released. These upgrades can be canceled, modified, and reverted using the firmware upgrades tool in the dashboard, and customers can also manually upgrade their networks to beta firmware at any time using the same tool. Beta firmware can be considered analogous to the "Early Deployment" firmware seen in other products in the industry.

The latest beta firmware is fully supported by our Support and Engineering teams. Older betas are supported with best effort; an upgrade to the latest beta will ensure full support.
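Beta opt-in can also be managed per network through the Dashboard API. The sketch below builds (but does not send) the JSON body for `PUT /networks/{networkId}/firmwareUpgrades`; the field names reflect API v1 and should be verified against the current API documentation before use.

```python
# Sketch of the request body for PUT /networks/{networkId}/firmwareUpgrades
# to opt a network's products into beta releases. Field names are from
# Dashboard API v1 -- verify against current documentation.
import json

def beta_opt_in_payload(products):
    return {
        "products": {
            product: {"participateInNextBetaRelease": True}
            for product in products
        }
    }

payload = beta_opt_in_payload(["appliance", "switch", "wireless"])
print(json.dumps(payload, indent=2))
```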

Stable Release Candidate

As a new firmware version matures from beta, it has the opportunity to graduate into a stable release candidate. A formal review of the beta firmware’s success is conducted by our software and product teams. Key performance indicators (KPIs) for quantifying firmware quality are analyzed including open support cases & engineering issues, firmware adoption, and stability metrics. After the formal review, a beta may be reclassified as a "Stable Release Candidate." At this point the firmware version will be indicated as such in the firmware upgrade tool. Once a new stable release candidate is available, Engineering will begin scheduling a limited set of customers for upgrade. These upgrades can be canceled, modified, or reverted using the firmware upgrade tool as well.

The latest stable release candidate firmware is fully supported by our Support and Engineering teams. Older stable release candidates are supported with best effort; an upgrade to the latest beta, stable release candidate, or stable will ensure full support.

Stable Release

A stable release candidate matures into a stable version over time as it is slowly rolled out to devices globally. When the Meraki install-base hits a specified threshold for a major version (roughly 20% of nodes), that firmware revision will be promoted to stable, pending a final formal review. For point releases, the determination will be made on a case-by-case basis.  

Again, the same KPIs are analyzed as used in the stable release candidate review. Upon completion of these processes the firmware can be promoted to "Stable." After promotion, stable versions can be applied by any customer via the firmware upgrade tool on dashboard. The latest stable version is also the version that is used for all newly created dashboard networks for a particular device.

Firmware Upgrade Tool

To make all of the best practices above simple to manage, you can use the Meraki firmware upgrade tool. We have built this tool to allow organizations to easily manage all Meraki firmware across the product portfolio in a single dashboard. As with all of our cloud features, we are continuing to build more functionality in the firmware upgrade tool to increase usability and simplify firmware management.




On the Overview tab, customers find a list of recent upgrades in the dashboard organization, pending upgrades that have been automatically or manually scheduled (with the ability to cancel or reschedule them), and a list of firmware versions available in beta, stable release candidate, or stable form for a given Meraki product.

Included with the available beta, stable release candidate, and stable firmware versions available in dashboard is a list of changelog notes. These notes allow customers to be fully aware of any new features, bug fixes, and existing known issues found between their existing firmware in use and the version planned for upgrade. Customers leveraging configuration templates may also enjoy the benefits of the firmware upgrade tool.
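The same information is exposed programmatically. The sketch below constructs (but does not send) the API v1 call that returns an organization's firmware upgrade history and pending upgrades; the API key and organization ID are placeholders, and the endpoint should be verified against current Dashboard API documentation.

```python
# Sketch: building (not sending) a Dashboard API v1 request that lists an
# organization's firmware upgrades. API key and org ID are placeholders.
import urllib.request

API_KEY = "your-api-key"   # placeholder
ORG_ID = "123456"          # placeholder organization ID

req = urllib.request.Request(
    f"https://api.meraki.com/api/v1/organizations/{ORG_ID}/firmware/upgrades",
    headers={"Authorization": f"Bearer {API_KEY}"},
    method="GET",
)
# urllib.request.urlopen(req) would return a JSON array of upgrade records.
```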

Requesting Specific Versions of Firmware

If a specific version of firmware is needed for compatibility reasons, it can be requested from Meraki Support. Please note that problems encountered while running firmware versions that are neither stable nor release candidate will not be supported, and Meraki Support may need to recommend upgrading to the latest firmware version in order to continue troubleshooting.

Locked Firmware

In general, it is discouraged to upgrade firmware on specific devices rather than upgrading the entire network. However, during the course of troubleshooting, Meraki Support may find it necessary to try a particular version of firmware on a specific device. For devices that have their firmware set manually by Meraki Support, you’ll see the message: Firmware version locked, please contact Support.

Addition or removal of locked firmware cannot be scheduled; please contact Meraki Support to have this completed. The device will then automatically downgrade or upgrade to match the network firmware.

Additional Firmware Documentation

In addition to this best practices document, please reference our other documentation to help you best deploy your Meraki products:


Best Practices for Service Providers

This article outlines the recommended dashboard structures for MSPs as well as some of the operational considerations.

The following three models represent the three main methods of dashboard structure recommended for MSPs. While the Standard Service model is recommended for most customers (and is used by roughly 80% of our MSPs), it may be worth considering the other models if the end-customers’ network requirements warrant a more tailored approach. Keep in mind that the differentiator among Organizations in the dashboard should be the nature of the service for the organization or the nature of the customer’s network.

Standard Service: Organization per Service, Network per Customer

The standard service model is the most popular and common structure used by MSPs and is highly recommended by Meraki as it enables multiple operational benefits for the MSP. In the Standard Service model, the MSP portal is structured around services offered. This Standard Service is based on the notion of the MSP offering a uniform service to all customers, and in this model, an MSP will typically create separate organizations for each service offering. Generally, organizations could represent tiers of service such that Basic, Intermediate and Advanced services provided by an MSP would each warrant one organization.

When a customer wants to change their service model, they can move to the Meraki organization that is already set for the service they want to move to, which allows organizations to function as templates for service offerings. This eliminates the overhead required to create organization configurations from scratch to suit each customer.


Bespoke/Tailored Service: One Organization per Customer

Sometimes an organization-per-customer model is required, such as when the end customer owns their own equipment or requires full management of their own network. In a Tailored Service model, each end customer usually owns the hardware, and the MSP generally provides IT services or consulting. This model is best used for customers that require custom environments, manage their own equipment, or have contracts requiring their own access. Because it does not scale as well as the Standard Service model, it is only recommended when the customer's network structure requires it. Customers that own their equipment can then be treated in a modular manner, granted full management access, and separated from the MSP if necessary. This model is best suited to small MSPs, or large bespoke MSPs with locally managed locations.


SD-WAN as a Service: One Organization per Customer

In a Standard Service model, MSPs have multiple customers in the same organization for ease of management. This structure should not be used if SD-WAN is the service that is delivered to the customers because the scope of an organization defines the connectivity domain for AutoVPN. Each SD-WAN customer will therefore need to be assigned to its dedicated organization. This model is typically optimized for mid-sized to large end-customers with multiple locations/branches.


  • The ideal structure for an MSP portal also has:

    • One or more totally empty organization(s) for cloning purposes (no devices or licenses, ever)

    • One shutdown organization with a shutdown network, used to keep devices that are currently unused

    • Separate networks under each organization, generally organized by physical location


  • Note that an ‘MSP Portal’ is tied to an account. Multiple accounts under the same MSP company do not necessarily have the same MSP portal. Another account could potentially see a different set of customers if their account has not been added to the same organization admin lists.

Organization-per-Customer vs Network-per-Customer Considerations

Firmware Upgrades

Firmware can be managed centrally across thousands of networks or individually per organization, depending on your structure model.



Network-per-customer: The Firmware Upgrades tool in the dashboard allows organization admins to quickly and easily manage firmware versions on a per-network and per-device-type (MX vs MR) basis. Additionally, the tool can be used to schedule, reschedule, and cancel bulk upgrades of networks, view firmware changelog notes, view firmware version numbers, and roll back the firmware on a recently upgraded network.



Organization-per-customer: There is no mechanism to manage firmware across multiple organizations. In an organization-per-customer model, each customer organization must be accessed individually for firmware management.

Search Function

Generally, you will find that there are more flexible and direct search criteria across a single organization. If your organizations are split per customer, you will not be able to easily search multiple customers across different organizations in the dashboard, and will likely need to rely on an API solution with an account that spans multiple organizations.



Network-per-customer: This model allows for a scalable search function, including the ability to search by network name, network tags, hardware serial, or MAC address.



Organization-per-customer: The top-level search can only be used to search by organization name. Multi-organization API accounts will be necessary to search or query across multiple organizations.
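A common API-side workaround for multi-organization search is to fetch the network list for each organization (e.g. one getOrganizationNetworks call per organization) and filter client-side. A minimal sketch of that filtering step, using illustrative data:

```python
# Client-side search across pre-fetched per-organization network listings.
# The data below is illustrative; in practice it would come from the API.
def search_networks(networks_by_org, query):
    """Case-insensitive substring match on network names across orgs."""
    query = query.lower()
    return [
        (org, net["name"])
        for org, nets in networks_by_org.items()
        for net in nets
        if query in net["name"].lower()
    ]

networks_by_org = {
    "Customer A": [{"name": "Branch-London"}, {"name": "Branch-Paris"}],
    "Customer B": [{"name": "HQ-London"}],
}
result = search_networks(networks_by_org, "london")
```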

Cloning Function

The cloning function allows users to make copies of networks when creating new networks. This is very useful for creating large numbers of identical or nearly identical networks.



Network-per-customer: It is possible to clone a network with most network- and device-level configurations.



Organization-per-customer: It is only possible to clone an organization without device-level configurations.

Organization-level settings which are cloned:

  • Organization administrators
  • Organization administrators created through SAML
  • Configuration templates
  • Settings previously enabled by Meraki Support
  • Dashboard branding policies (note: branding updates for in-life customers must be applied manually to each organization; it is not possible to copy settings between organizations)
  • Splash page themes
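Organization cloning can also be driven through the Dashboard API, for example from an empty "golden" source organization kept for this purpose. The sketch below builds (but does not send) such a request; the endpoint and field names reflect API v1 and should be verified against current documentation.

```python
# Sketch: building (not sending) a Dashboard API v1 organization clone
# request (POST /organizations/{organizationId}/clone). API key, source
# org ID, and the new organization name are placeholders.
import json
import urllib.request

API_KEY = "your-api-key"    # placeholder
SOURCE_ORG_ID = "123456"    # placeholder: your empty cloning organization

body = json.dumps({"name": "Customer-XYZ"}).encode()
req = urllib.request.Request(
    f"https://api.meraki.com/api/v1/organizations/{SOURCE_ORG_ID}/clone",
    data=body,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would return the newly created organization.
```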

Configuration Templates

Updates to the standard customer template cannot be replicated across multiple organizations. Note that while configuration templates are an option for MSPs, the cloning function is generally strongly recommended instead, as it scales much better for larger deployments and is more easily programmable via API.

Configuration templates allow many Cisco Meraki devices to be deployed from a single base configuration. Sites that are part of a template can have exceptions to the configuration, and devices that need to be treated differently can be bound to a template. Otherwise a template is created per customer.



Network-per-customer: Templates can be shared across multiple customers. When updating a service design (e.g. turning on a new feature), you can clone a template and replicate it for each customer.



Organization-per-customer: Once an organization is created, any changes to the template must be applied individually for each customer; there is no ability to copy or replicate settings between organizations.

Claiming of Devices and Licenses

Generally, orders should be made on a per-organization basis. That is, one order per organization, at each time of purchase. Split purchases mean split order numbers, which makes claiming more complicated and prone to error.



Network-per-customer: A bulk order can be claimed with a single license key or order number. Licensing is scoped per organization, so it will be shared across the networks.



Organization-per-customer: A bulk order will need to be processed through a bespoke mechanism to provide individual license keys for each customer organization. Each key is then claimed into its respective organization.

Usage Reporting

There is minimal visibility of usage across multiple organizations.



Network-per-customer: The Organization > Overview page will display a usage summary of all customers. You can also use network tags across your organization to, for example, filter for customers with different tiers of service, such as 30 Mb vs 1 Gb customers.



Organization-per-customer: There is no usage reporting in the MSP portal.

Administrator Management


Network-per-customer: A single administrator portal for adding, modifying, and removing roles and access across different customers.



Organization-per-customer: There is no ability to copy administrator settings between organizations. Unless the API is used, each customer organization must be accessed individually to maintain customer and admin access privileges.

Re-purposing a Device


Network-per-customer: Devices can be moved between networks without any extra license workflows.



Organization-per-customer: While devices can be moved between networks, Meraki Support must be contacted to move a license between organizations.


Dashboard Changelog

The Meraki dashboard changelog is a running list of configuration changes that have been made in an organization.



Network-per-customer: The changelog will include changes to all customers' networks.



Organization-per-customer: The changelog will be scoped by customer, and changes for each customer will need to be viewed on a per-organization basis.

Security Center

The Security Center is scoped by organization and shows a visual summary of security events, analytics and notifications across an organization, including intrusion detection, intrusion prevention, and malware events.



Network-per-customer: Centralized security view across all customers.



Organization-per-customer: Each organization must be accessed individually to view security reporting.

Dashboard API and SNMP Access

A unique set of parameters must be maintained for each organization you wish to access if configuring or monitoring customers outside of the dashboard website.

General Best Practices

Minimum Admin Accounts per Organization

The Meraki dashboard administrator accounts for each organization should include:

  • At least one ‘primary’ organization admin shared account for the MSP company
  • One backup organization Admin account for recovery purposes in case you get locked out of the Primary Admin account
  • At least one account for the customer, even if the customer does not log in. If the customer is not given these credentials up front, they should at least receive them when the MSP-customer relationship ends. The customer account should be a full organization admin (read/write) if the customer owns their equipment, and read-only or network-only if the customer leases their equipment from the MSP
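These accounts can also be provisioned programmatically via `POST /organizations/{organizationId}/admins`. The sketch below builds request bodies matching the roles above, with placeholder names and email addresses; the `orgAccess` values are from Dashboard API v1 and should be checked against current documentation.

```python
# Sketch: request bodies for POST /organizations/{organizationId}/admins.
# Names and email addresses are placeholders; orgAccess values are from
# Dashboard API v1 -- verify against current documentation.
def admin_payload(name, email, org_access):
    return {"name": name, "email": email, "orgAccess": org_access}

admins = [
    admin_payload("MSP Primary", "noc@msp.example", "full"),
    admin_payload("MSP Backup", "backup@msp.example", "full"),
    # Read-only here assumes the customer leases their equipment;
    # use "full" if the customer owns it.
    admin_payload("Customer", "it@customer.example", "read-only"),
]
```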


Two-factor authentication should always be used by MSPs for security purposes, and be sure to make a physical copy of your one-time 2FA recovery codes.

Creating New Organizations/Adding New Customers and Branding

Keep at least one completely empty organization to use for cloning if new organizations need to be created. This cloning organization should be used as a template to create new organizations that will include the source organization’s:



Branding must be applied organization-by-organization, so the fastest way to apply branding to all your customer organizations is to clone them from a source organization with your MSP’s branding.

Note that branding is not retroactively applied to all organizations under an MSP portal, so it must be applied to each individual organization if organization cloning is not used upon creation.



Cloned organizations will keep the source organization’s admins list, so you can easily copy MSP admins to each organization.


SAML Access

In the same way that admins carry over when an organization is cloned, SAML access is also carried over. Keep this in mind in case of privacy concerns, if you do not want certain SAML admins from the source organization to be able to log into customer organizations after cloning.

This organization must never have licenses or devices added to it, or it will eventually go out of licensing compliance. It should always be kept empty.

Adding an Organization to your MSP Portal

Adding an organization to your MSP Portal only requires adding your account email to the admins list for that organization. Keep in mind that this means that adding an individual account under an MSP domain will not add the same organization to that domain’s generic account. Email addresses are the only differentiator.

SAML Admin Accounts

Using SAML admin accounts will make it much easier to distribute permissions to your end users and network admins. Organization changelogs will record which SAML account was used to make changes, which is essential for most MSPs to keep track of network changes made by their network admins or end-customers.


Placing Orders

It is best to make separate orders for separate organizations. Even though devices can easily be split by order, licensing is generally bundled into a single key per order, so if a single order is made for multiple organizations, the licensing will not be separated as delivered and may require a call to Meraki Support to split or move.

Unused Devices and Inventory

Devices that are not being used should be kept in a network within a totally separate, shutdown organization. Devices are only protected from being claimed elsewhere when they are in a network; devices that are in inventory but not in a network can still be claimed into another organization. This organization should not have any licensing, and all devices should be kept in a network labeled something like “Unused Inventory.” The licensing for this organization will expire and go out of compliance, but the licensing warnings can be ignored, as the inventory can still be managed. This organization and network should only be used for unused inventory.

Support Cases

Cases opened with Meraki Support should always follow the following two rules:

  • MSPs calling/emailing support should always use their MSP customer number

  • Customers under an MSP calling/emailing support should always use their own customer number, not the MSP’s customer number

Cases are visible on a per-Meraki-Customer-Number basis. This means that if end-customers share a customer number, they will be able to see each others’ cases. This can be a privacy concern, so you should ensure that separate customers that should not be able to see each others’ cases do not have the same customer number.

Do not share your MSP customer number with your end customers. Always keep a record of your end customers’ Meraki Customer Numbers, and be ready to provide it if requested by support.

Adding/Acquiring new End Customers

When you acquire and add an end customer that is an existing Meraki customer under your MSP Portal, it is best to contact your sales representative and let them know. They can update their records to ensure your cases and accounts reflect this change, which ensures the correct contacts and accounts are listed for each group.


API Keys

Note that your API key is based on your account, as is your MSP portal. Each organization you create using the same account will share the same API key. Keep this in mind when considering the visibility and permissions of your API key.

Best Practice Design - MX Security and SD-WAN

General MX Best Practices

With the increasing popularity of and demand for SD-WAN architecture, planning and designing a secure and highly functional network can be challenging. With cloud technology on the rise and an increasing amount of user data in modern networks, it is no easy task to plan and accommodate growth while maintaining overall security. Cisco Meraki MX security appliances combine high-end performance with a robust feature set, providing an easy-to-manage security solution for environments of any size. From small form factor teleworker gateways to powerful datacenter appliances, the Cisco Meraki MX allows for flexible and functional network operations.


For branch offices that require communication with the corporate network, remote employees who need to view important documents on public networks, or administrators who require increased control over client devices, software-defined networking is an ever-growing key component of the modern data network. Typically this requires tedious interaction and configuration to ensure full functionality for end users and client devices, as well as constant monitoring. Because Cisco Meraki security appliances are managed entirely through the Meraki cloud, all of this can be done with our intuitive online dashboard, which provides the in-depth visibility needed for the modern network with integrated monitoring tools.

To find more in-depth information on what model of the Cisco Meraki MX best suits your needs, please refer to the MX sizing guide.

Deployment Options

The Cisco Meraki MX security appliance has a number of deployment options to meet the needs of your network and infrastructure. Whether as the main edge firewall for your network, or as a concentrator device in your data center, the MX security appliance can be easily integrated. The operational modes of the MX can be found on the Cisco Meraki dashboard under Security & SD-WAN > Configure > Addressing and VLANs.


Routed (NAT) Mode

Routed mode on a Cisco Meraki MX is best used when the security appliance will be connecting directly to your internet demarcation point. When this is the case, the MX will have a public IP address that is issued by the internet service provider. The MX will also be the device handling the routing for clients to the internet, and any other networks configured for the device to communicate to.  This mode is optimal for networking environments that require a security appliance with Layer 3 networking capabilities.


NAT Mode Considerations

  • A Cisco Meraki MX security appliance operating in NAT mode is best deployed when its WAN connection is directly connected to the ISP handoff
  • An MX can operate in NAT mode if it is behind another Layer 3 device that is also performing NAT, but you may run into complications with Meraki cloud connectivity, as well as some features such as Meraki Auto VPN

Passthrough/VPN Concentrator Mode

Passthrough mode on a Cisco Meraki MX configures the appliance as a Layer 2 bridge for the network.  The MX in this mode will not perform any routing or any network translations for clients on the network.  Passthrough/Concentrator Mode is best used when there is an existing Layer 3 device upstream handling network routing functions.  The MX in this instance would still act as a security appliance, but with less functionality for Layer 3 networking.

The recommended use case for the MX security appliance in passthrough mode is when it is acting as a VPN Concentrator for the Cisco Meraki Auto VPN feature.  Passthrough/VPN Concentrator mode ensures easy integration into an existing network that may already have layer 3 functionality and edge security in place.  With this mode, a Cisco Meraki MX security appliance can be integrated into the existing topology and allow for seamless site to site communication with minimal configuration needed.


Passthrough/VPN Concentrator Considerations

  • The Cisco Meraki MX will not perform layer 3 functions such as NAT or routing.
  • An MX in passthrough/VPN concentrator mode will act as a layer 2 firewall that will integrate into the existing LAN with a layer 3 routing appliance upstream.
  • VPN destined traffic will need to be directed to the MX security appliance for effective routing to the VPN endpoint. As such, static routes on other Layer 3 capable devices may be needed for full VPN functionality.
  • MX appliances in passthrough are able to allow IPv6 traffic to pass across the existing LAN if the traffic flows through the MX.

Redundancy and High Availability

With the network being a vital tool and resource for today's business operations, it is essential that network functionality and connectivity remain as consistent as possible. Planning for high availability in your network infrastructure is critical. Cisco Meraki MX security appliances allow for easy and seamless configuration and design of a highly available network. To ensure the maximum amount of uptime for your network, MX security appliances include a number of capabilities for a redundant design. These capabilities include the ability to form a High Availability (HA) pair, as well as multiple internet uplink ports with automatic failover functionality.

High Availability and Redundant WAN Connections

The recommended deployment to ensure the most possible uptime is an environment that combines two high availability paired MX appliances with multiple Internet Service Provider connections. In this architecture, there is full redundancy to minimize the possible downtime in the event of a failure in either an appliance or a service provider. 



High Availability (HA) Pair

When deploying two MX security appliances in high availability (HA), it is recommended that the HA management traffic traverse a downstream connection to a layer 2 switch, rather than a dedicated HA cable connected between the two appliances. A dedicated cable increases the potential for a spanning-tree loop when the MX appliances are also connected to the same layer 2 switch. The best topology is to have both MX appliances connected to the same downstream layer 2 switch.


[Figure: MX HA pair connected to a downstream layer 2 switch]

NOTE: The MX generates and sends VRRP heartbeats across all configured VLANs. For best high availability behavior, it is recommended to have all VLANs allowed on the downstream connections to the switches that are connecting the MX HA pair.  It is also recommended that any downstream switches that may be passing the VRRP traffic are configured to be aware of all the VLANs configured on the MX security appliance.
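Because VRRP heartbeats are sent on every configured VLAN, a trunk that prunes a VLAN silently breaks failover detection for that VLAN. The sketch below, with illustrative VLAN IDs that are not from this document, shows a simple check that the downstream trunk's allowed-VLAN list covers every VLAN configured on the MX pair:

```python
# Sketch: verify a downstream trunk carries every VLAN configured on the MX,
# so VRRP heartbeats (sent on all configured VLANs) are not filtered.
# VLAN IDs are illustrative assumptions, not values from the document.

def missing_heartbeat_vlans(mx_vlans, trunk_allowed_vlans):
    """Return MX VLANs whose VRRP heartbeats the trunk would drop."""
    return sorted(set(mx_vlans) - set(trunk_allowed_vlans))

mx_vlans = [1, 10, 20, 30]     # VLANs configured on the MX HA pair
trunk_allowed = [1, 10, 20]    # VLANs allowed on the downstream trunk

gaps = missing_heartbeat_vlans(mx_vlans, trunk_allowed)
print(gaps)  # VLAN 30 heartbeats would never reach the spare -> [30]
```

An empty result means the trunk satisfies the recommendation above; any VLAN in the list should be added to the switchport's allowed VLANs.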


For more information on HA configuration with VRRP on the Meraki MX, see the knowledge base document MX Warm Spare - HA Pair.

For more information on MX layer 2 connectivity, see the knowledge base document MX Layer 2 Functionality.

Cisco Meraki MX Networking

When planning and designing a network, an important consideration is whether the client and end-user devices should be on the same network or subnet. In most cases, having multiple subnets in your deployment is recommended, as it adds a layer of security against potentially malicious devices or users. This is especially recommended if you are planning on having a guest network set up. The MX security appliance is capable of supporting multiple subnets or VLANs so the user networks can be separated out.  

Addressing and VLANs

The best practice for planning and configuring a network is to have multiple VLANs for different traffic use cases. This is especially recommended for separating networks that host employee data from networks providing guest access, and is a critical consideration for ensuring the maximum possible security of your networking environment. Using multiple VLANs for different use cases will also decrease the amount of broadcast traffic within each subnet. When designing and configuring multiple VLANs, it is generally recommended to size each subnet for the number of devices intended to be in that particular network. Networks with /24 subnet masks are generally large enough for common deployments, while also providing room for expansion and simple subnetting scalability if more VLANs are to be added. However, it is always best to consider the needs of your deployment when planning your subnets, as some deployments may require larger addressing spaces.
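The sizing guidance above can be checked programmatically. This minimal sketch, using illustrative addresses and per-VLAN device counts, carves sequential /24 subnets out of a private supernet and asserts each has room for its expected devices:

```python
import ipaddress

# Sketch: carve per-VLAN /24 subnets from a private supernet and check each
# has capacity for the expected device count. Addresses and counts are
# illustrative assumptions.

supernet = ipaddress.ip_network("10.0.0.0/16")
vlans = {"Data": 150, "Voice": 60, "Guest": 200}   # expected devices per VLAN

subnets = iter(supernet.subnets(new_prefix=24))    # successive /24 blocks
plan = {}
for name, devices in vlans.items():
    net = next(subnets)
    usable = net.num_addresses - 2                 # minus network/broadcast
    assert devices <= usable, f"{name} needs a larger subnet than {net}"
    plan[name] = str(net)

print(plan)  # each VLAN gets its own /24 with room to grow
```

If a VLAN's device count exceeded 254 hosts, the assertion would flag the need for a larger addressing space, matching the caveat above.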


[Figure: VLAN configuration on the MX]


For more information on enabling and configuring VLANs please see our knowledge base document Configuring VLANs on the MX Security Appliance.

Routing and Layer 3 Connectivity

In certain deployments, the MX security appliance may not be the only device performing layer 3 functions for the data network. Examples include a deployment where the MX acts as the internet edge gateway while layer 3 routing is performed on downstream devices, or where another layer 3 capable device is added into the existing routing topology. In these cases, the MX security appliance must have static routes configured to allow effective communication with the other layer 3 destinations in the network, and the other layer 3 devices must likewise have static routes in place pointing back to the MX.

In the example below, an MX is set up as an Internet edge firewall, with the rest of the layer 3 routing taking place on a downstream switch stack. With this configuration, it is best to have a single subnet configured between the MX and the other layer 3 device, to minimize the amount of traffic and routing that will be taking place as well as to keep routing consistency. This single subnet will act as a transit VLAN for all routing that is to take place between the two layer 3 endpoints in the topology. 
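The transit-VLAN design keeps the MX's routing table simple: every downstream subnet points at the same next hop, the L3 switch's transit address. This sketch, with illustrative addresses, models that lookup as a longest-prefix match over the MX's static routes:

```python
import ipaddress

# Sketch: with a transit VLAN, every downstream subnet's static route on the
# MX points at the L3 switch's transit address. Addresses are illustrative.

transit_switch_ip = "10.0.0.2"        # L3 switch side of the transit VLAN
static_routes = {                     # downstream subnet -> next hop
    "192.168.10.0/24": transit_switch_ip,
    "192.168.20.0/24": transit_switch_ip,
}

def next_hop(dst_ip):
    """Longest-prefix match over the MX's static routes."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(n), hop)
               for n, hop in static_routes.items()
               if dst in ipaddress.ip_network(n)]
    if not matches:
        return "default (WAN uplink)"          # no static route -> internet
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("192.168.10.25"))   # downstream host -> 10.0.0.2 via transit
print(next_hop("8.8.8.8"))         # internet-bound -> default (WAN uplink)
```

The mirror-image routes (downstream switch pointing VPN or internet-bound prefixes at the MX's transit address) complete the bi-directional path.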


[Figure: MX connected to a downstream layer 3 switch]

For more information on MX routing and layer 3 connectivity, please refer to the documents MX and MS Basic Layer 3 Topology and MX Routing Behavior.

MX Security Functionality

The MX security appliance allows for simple yet effective security for your networks and deployments with the numerous security functions it has to offer. Configuring advanced security policies and features ensures a protected environment for your deployment. The MX has the following services and configuration options for security functionality:

  • Layer 3 Stateful Firewall
  • Layer 7 Firewall functionality
  • Port Forwarding and NAT
  • Advanced Malware Protection (Advanced Security License)
  • Intrusion Detection and Prevention Services (Advanced Security License)
  • IP Source Address Spoofing Protection

Layer 3 Firewall Rules

The MX security appliance allows custom outbound firewall rules to be configured to ensure precise and granular control over which networks are able to communicate with one another. The MX is a stateful firewall, meaning that all inbound connections are blocked unless they belong to a session that originated from within the network behind the MX or a forwarding rule is configured.

By default, all VLANs configured on the MX security appliance are able to communicate with one another. If there are VLANs that should not pass traffic between one another, firewall rules will need to be configured. It is best practice to restrict the traffic that can travel between subnets that are not closely related. For example, it is recommended to create firewall rules that block all traffic from a guest access VLAN to the VLANs used for business operations.


[Figure: firewall rule blocking traffic between the data and guest VLANs]


Additionally, if there are internal users that need internet access, but should be blocked from accessing a certain site or IP address, the firewall rules can be configured with IPs or URLs as the destinations. The use case for this is if you know of a website the users in that VLAN should not access.


[Figure: firewall rule blocking a VLAN's access to a specific site]

NOTE: The layer 3 firewall rules configured will be processed in top-down order.  If traffic matches a rule in the list, the MX will no longer process any further rules for that traffic.  
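The top-down, first-match behavior in the note above can be sketched as a short evaluation loop. Rules and addresses here are illustrative (a guest VLAN blocked from a data VLAN, with the default allow at the bottom):

```python
import ipaddress

# Sketch of the MX's top-down, first-match layer 3 rule evaluation.
# Rules and addresses are illustrative assumptions.

rules = [
    # (policy, src_cidr, dst_cidr) -- evaluated top-down; first match wins
    ("deny",  "192.168.50.0/24", "192.168.10.0/24"),  # guest -> data: block
    ("deny",  "192.168.50.0/24", "192.168.20.0/24"),  # guest -> voice: block
    ("allow", "0.0.0.0/0",       "0.0.0.0/0"),        # default allow
]

def evaluate(src_ip, dst_ip):
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    for policy, src_cidr, dst_cidr in rules:
        if src in ipaddress.ip_network(src_cidr) and \
           dst in ipaddress.ip_network(dst_cidr):
            return policy          # processing stops at the first match
    return "allow"

print(evaluate("192.168.50.10", "192.168.10.5"))   # guest to data -> deny
print(evaluate("192.168.50.10", "93.184.216.34"))  # guest to internet -> allow
```

Because evaluation stops at the first match, a broad allow rule placed above a deny rule would shadow it, which is why rule ordering matters.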

Layer 7 Firewall Rules

Best practice design for Layer 7 rules is to ensure that the category you have selected to block does not cover the traffic flows of applications you may use. For example, if you choose to block the "File Sharing" category and block all options within it, you may cause a disruption in service for an application such as Microsoft OneDrive. It is best to configure Layer 7 rules as granularly as possible to avoid such scenarios.

Using Layer 7 firewall rules to block traffic based on countries also has caveats. While it may seem more secure to block all countries other than the one the MX is located in, this can cause issues with traffic flows to resources that may actually be necessary for daily operations. Certain webpages and web applications can be hosted in a country that is not being blocked, yet pull supplementary data or resources from a server located in a country that is being blocked by the MX appliance. As a result, certain aspects may not function as intended or may fail to function altogether. Only block countries with a Layer 7 rule if you know traffic from those countries is malicious in nature.

For more information on MX layer 7 rules, please refer to the knowledge base documents MX Firewall Settings and Creating a Layer 7 Firewall Rule.

Port Forwarding and NAT Rules

Due to the nature of the Cisco Meraki MX, all inbound traffic that did not originate from within the networks configured on the appliance will be dropped by default. This is because the MX security appliance is a stateful firewall, so the MX must be aware of or expecting incoming traffic.  For connections that originate from outside of the MX and need to be allowed in, either a port forwarding rule or a NAT rule can be configured to permit this traffic.  

Port Forwarding Rules

It is important to plan your port forwarding rules according to the traffic you intend to allow in behind the firewall. The best configuration for port forwarding rules is as narrow a scope as possible: only create rules for the ports that are necessary. It is also not recommended to create port forwarding rules with "Any" for the allowed remote IP range; the appropriate use case for "Any" in the allowed remote IP field is a web server behind the MX, which must accept connections from arbitrary addresses. If a web server is in use for the port forwarding rule, it is best to use an obscure range for the public ports configured, as common web ports can lead to potential vulnerabilities.
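The scoping guidance above can be expressed as a small lint check run against a rule before it is applied. The field names below loosely mirror the dashboard form but are illustrative assumptions, not an exact API schema:

```python
# Sketch: flag port forwarding rules that are broader than the guidance
# above recommends. Field names are illustrative, not an exact schema.

def scope_warnings(rule):
    warnings = []
    allowed = [ip.lower() for ip in rule["allowed_ips"]]
    if "any" in allowed and not rule.get("web_server"):
        warnings.append("allowed remote IPs should not be 'Any' for non-web services")
    if rule.get("web_server") and rule["public_port"] in (80, 443, 8080):
        warnings.append("consider an obscure public port instead of a common web port")
    return warnings

rule = {
    "name": "RDP to admin host",
    "protocol": "tcp",
    "public_port": 3389,
    "lan_ip": "192.168.10.50",
    "allowed_ips": ["any"],        # too broad for an internal service
}
print(scope_warnings(rule))
```

A rule for a public web server with `"web_server": True` would instead be flagged only if it exposes a common web port such as 443.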




1:1 and 1:Many NAT Rules

A 1:1 NAT rule enables traffic destined for a particular public IP address to be forwarded to a particular internal host. This is similar in nature to a port forward, but in this case the traffic is sent to a public IP address other than that of the MX WAN interface. This is best used when multiple public IP addresses are available and you do not wish to have internet-based traffic for a web server destined to the public IP of the WAN interface on the MX. If a 1:1 NAT rule is configured for services other than a web-facing server, it is best practice to limit the range of ports being used, as well as the range of remote IPs allowed for the connection.


[Figure: 1:1 NAT rule example]




1:Many NAT Rules

In some instances, particularly if the number of public IPs available to you is limited, a 1:Many NAT rule can be put in place. This is also very similar to a port forwarding rule, but again the public IP address that traffic is destined to determines how the Cisco Meraki MX security appliance handles the traffic. If traffic destined for that specific IP address arrives on a particular public port, the MX will forward it to an internal host based on that port. This enables a single public IP address to be used for multiple services, allowing for a concise deployment that does not require multiple public IPs.
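The port-based dispatch a 1:Many rule performs can be sketched as a lookup table keyed on protocol and public port. Addresses and ports are illustrative assumptions:

```python
# Sketch of 1:Many NAT dispatch: one public IP, with the public port
# deciding which internal host receives the flow. Values are illustrative.

one_to_many = {
    "public_ip": "203.0.113.10",
    "port_rules": {
        # (protocol, public_port): (lan_ip, local_port)
        ("tcp", 8443): ("192.168.10.20", 443),   # web server
        ("tcp", 2222): ("192.168.10.30", 22),    # SSH jump host
    },
}

def dispatch(dst_ip, protocol, public_port):
    """Return the (lan_ip, local_port) target, or None if no rule matches."""
    if dst_ip != one_to_many["public_ip"]:
        return None                              # not the 1:Many public IP
    return one_to_many["port_rules"].get((protocol, public_port))

print(dispatch("203.0.113.10", "tcp", 8443))  # -> ('192.168.10.20', 443)
print(dispatch("203.0.113.10", "tcp", 25))    # no rule -> None (dropped)
```

Each added port rule exposes one more internal service without consuming another public IP, which is the economy the paragraph above describes.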




Advanced Malware Protection (AMP)

Advanced Malware Protection (AMP) is an adaptive and powerful tool that is incorporated on the Cisco Meraki MX security appliance with the Advanced Security license.  AMP scans and inspects HTTP downloads that are moving through the MX security appliance.  The MX then takes action based on the threat intelligence it receives from the AMP cloud.  If a download matches a known signature from the AMP cloud, then the security appliance will block the download.  With an MX security appliance, it is highly recommended to have AMP enabled and functioning so your networking environment is secure and safe from attacks. 

For more information about AMP, please refer to the knowledge base documents Advanced Malware Protection (AMP) and Threat Protection.

Intrusion Detection and Prevention (IDS/IPS)

With the Advanced Security license, Intrusion Detection and Prevention (IDS/IPS) enables the MX security appliance to inspect all incoming and outgoing traffic that is being routed internally and externally in the networking environment. IDS/IPS searches for various signature-based attacks as traffic flows through the security appliance. This is achieved with the various rules and signatures that are in the SNORT® intrusion detection engine. This enables the MX to detect malicious traffic and not only provides alerts that a known signature was detected, but also take action against that threat and prevent it from doing harm on the network. To ensure a secure and stable networking environment, it is recommended to have Intrusion Detection and Prevention services enabled and operating on the MX.  

For more information on IDS/IPS, please refer to the knowledge base document Threat Protection.

IP Source Address Spoofing Protection

IP Source Address Spoofing Protection is a security mechanism on the Cisco Meraki MX that protects the network from malicious users impersonating other hosts in an attempt to bypass security restrictions. The MX uses several methods of traffic verification to ensure client data is authentic and not spoofed by a malicious attacker. It is recommended to set this feature to "Block" mode so malicious IP spoofing events are mitigated by the MX security appliance as soon as they are identified. There is also a "Log" mode, but it only alerts that malicious traffic of this nature has been detected and does not provide protection against these attacks.


[Figure: IP source address spoofing protection settings]


If you would like more information about this feature on the MX security appliance, please see the knowledge base document IP Source Address Spoofing Protection.

Site to Site VPN

When connectivity and intercommunication are needed between networks that are separated geographically, a site-to-site VPN tunnel is the best solution. The MX security appliance is equipped with all the necessary functionality for VPN tunnel communication between sites and networks. The SD-WAN capabilities of the MX security appliance allow other MX devices in the same Cisco Meraki organization to easily establish VPN tunnels to one another with a quick and simple configuration. This is achieved with Cisco Meraki Auto VPN.

Meraki Auto VPN

The MX security appliance allows for simple and seamless integration and configuration of VPN tunnels among sites. This is achieved with Meraki's proprietary Auto VPN functionality, which allows for simple and fast configuration of site-to-site VPN tunnels. Additionally, Auto VPN enables maximum uptime of the site-to-site tunnels with concurrent VPN tunnels on all available WAN interfaces. VPN tunnels are built actively on all WAN interfaces on the MX that can reach the Meraki cloud. This allows for dynamic failover and built-in redundancy with no extra configuration needed. It is recommended to use Meraki Auto VPN between MX security appliances for essential inter-site communication. Note that Auto VPN can only be used for Meraki to Meraki communications, for Meraki devices in the same Meraki dashboard organization. Separate Meraki dashboard organizations generally represent separate SD-WAN environments.

[Figure: Meraki Auto VPN topology]

Auto VPN Hub and Spoke Operation

The Auto VPN feature allows an MX security appliance to operate as either a hub or a spoke in the VPN topology. An MX configured as a hub will build a VPN tunnel to every other MX operating as a hub, as well as to each spoke that has it configured as a hub. Spoke devices will build tunnels only to the MX appliances they have configured as hubs, and not to other spoke sites. It is important to take this behavior into consideration, as configuring every MX appliance as a hub creates a full mesh of tunnels that can cause a degradation of service in large deployments.


[Figure: Auto VPN hub and spoke tunnel behavior]


It is recommended to have a Cisco Meraki MX security appliance configured as a hub if it is essential for all other security appliances configured in the VPN topology to have communication to networks on the hub appliance.  Typically this is effective for MX appliances at locations where there are a large number of resources for business operations, such as at the corporate office or a datacenter.  

[Figure: hub and spoke topology example]

Client VPN

The MX security appliance includes the option to configure client VPN functionality for remote users that require access to resources hosted in your data network. The client VPN feature allows those users to establish a secure connection to the MX security appliance from their device as long as they have a valid internet connection. Though it is not required for full client VPN functionality, the feature gains functionality and ease of use when deployed alongside a profile pushed by Cisco Meraki Systems Manager to the user's remote device. Meraki Systems Manager allows a VPN configuration to be remotely pushed to the client device, so client VPN functionality is seamlessly integrated on the end device without end-user configuration.

It is highly recommended to deploy the client VPN feature together with a Systems Manager policy, as this allows for a better experience for end users, who will not have to do any configuration on their end. This is exceedingly useful in the event that an end user's client VPN service is having issues connecting to the MX security appliance: administrators can effectively troubleshoot the issue without needing the end user to engage in the technical troubleshooting process, creating a simpler experience for the end customer.


[Figure: client VPN deployment with Systems Manager]

For more information about the client VPN feature, please see the Client VPN knowledge base documentation.

SD-WAN & Traffic Shaping

The Cisco Meraki MX security appliance includes a multitude of built-in SD-WAN functions that are easy to deploy and configure.  This allows you to have built-in monitoring and simple configuration with an easy and intuitive interface.  

Under the SD-WAN settings, the uplink connections for the MX security appliance can be configured to best fit the needs of your networking environment. The uplink bandwidth limit can be adjusted to match the connection from your internet service provider. It is best practice to set the uplink bandwidth to the amount provisioned by your provider, so as to avoid saturating the connection.

For uplink monitoring, it is recommended to configure multiple uplink statistics test IPs on the Meraki dashboard. This is exceptionally useful for troubleshooting purposes, as it allows the dashboard to collect and monitor data for the specified connections. A few examples of connections to monitor would be a general connection to the internet (Google's public DNS server, 8.8.8.8, is configured by default), the connection to your service provider gateway, and the connection to any remote sites participating in site-to-site VPN tunneling.

Another feature the MX includes is automatic security list updates for features such as AMP, IDS/IPS, and content URL filtering. These lists can be configured to check for changes in security rules on an hourly, daily, or weekly basis. The best practice to ensure your environment stays secure is to set this interval to check the security lists hourly. The update frequency can be changed per WAN uplink, including the cellular uplink.




Included with the SD-WAN options is the ability to have the MX route traffic to different uplinks depending on certain scenarios. Uplink selection enables the ability of policy-based routing on the MX, which uses flow preferences for specific traffic as it heads out the WAN connection. With this tool, it is recommended to have the uplink connections be set to load balance the traffic if applicable. This is best used if there are redundant internet connections that have similar bandwidth capabilities.  

With flow preferences based on the source traffic, it is easy to shape traffic to best fit the nature of the network traffic as it traverses the WAN connection. An example of what a flow preference may be used for is guest traffic: to allow for a consistent and stable connection, the more cost-effective secondary WAN connection can be configured to handle guest-related traffic.

[Figure: uplink selection settings]

SD-WAN Policies

SD-WAN policies can be configured to control and modify the flows for specific VPN traffic. With multiple WAN uplinks, the MX proactively builds VPN tunnels over each available WAN interface. Where there are redundant WAN connections on the security appliance, traffic flows can be configured based on the type of traffic traversing the VPN connections to allow for the best performance. Custom policies can be set to ensure traffic flows take the appropriate path for your environment. If a WAN connection that normally handles traffic such as file transfers begins to have performance issues, the Cisco Meraki MX can dynamically move the VPN connection to an alternative WAN uplink. This is done with custom or predetermined policies on the dashboard. It is encouraged to configure these policies in your deployment to best fit the nature of the traffic and the capabilities of the WAN connections available on the MX.

If you do have questions about what policies are best for your deployment, you can always reach out to either a Meraki Sales Engineer or your Meraki partner for a consultation on what best fits your needs.

[Figure: SD-WAN policy settings]


Global Bandwidth Limits

In larger deployments, bandwidth limits for end-user devices can be put in place to ensure the uplinks do not become saturated. This largely depends on the type of traffic seen from users, how many users are on the network, and the throughput capabilities of the MX security appliance and its uplinks. In more substantial deployments with a large number of clients, it is recommended to set a bandwidth limit on all traffic. The amount of bandwidth required per client can vary depending on the type of traffic seen in the networking environment.

Speedburst allows clients a short window of time to exceed the configured bandwidth limit, allowing, for example, a large file to transfer faster. This is useful if there are users that frequently handle and upload large files on the network. Enabling Speedburst is recommended, with the caveat that it should be avoided if a large number of clients will need high amounts of bandwidth at the same time; it is best suited to deployments where only a select few users or devices require moments of higher bandwidth.
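The Speedburst idea (a brief allowance above the per-client cap, then fallback to the configured limit) can be sketched as below. The 4x multiplier and 5-second window here are illustrative assumptions for the sketch, not documented platform constants:

```python
# Sketch of the Speedburst concept: a client may briefly exceed its
# per-client limit, then falls back to the configured cap. The multiplier
# and window below are assumptions for illustration only.

LIMIT_KBPS = 5_000        # configured per-client bandwidth limit
BURST_MULTIPLIER = 4      # assumed burst headroom
BURST_WINDOW_S = 5        # assumed burst duration

def allowed_rate_kbps(seconds_since_flow_start):
    """Rate a client is allowed at a given point in a flow's lifetime."""
    if seconds_since_flow_start < BURST_WINDOW_S:
        return LIMIT_KBPS * BURST_MULTIPLIER   # burst window: exceed the cap
    return LIMIT_KBPS                          # steady state: configured cap

print(allowed_rate_kbps(1))    # during the burst window -> 20000
print(allowed_rate_kbps(30))   # steady state -> 5000
```

This also illustrates the caveat above: if many clients burst simultaneously, the aggregate burst allowance can momentarily exceed the uplink's capacity.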



Traffic Shaping Rules

Different types of traffic require different priorities on the network.  The MX is able to prioritize and shape traffic on the local network based on the traffic type.  The Meraki dashboard offers default traffic shaping rules that best fit the needs for most deployments.  These default rules ensure best performance for local voice traffic, software updates for end client devices, and collaboration applications.  It is recommended to enable these default traffic shaping rules on the MX as it allows for simple and fast configuration for the best performance of network traffic.

[Figure: default traffic shaping rules]

Meraki Auto VPN

Auto VPN Best Practices

The best practices listed here focus on the most common deployment scenario, but are not intended to preclude the use of alternative topologies. The recommended SD-WAN architecture for most deployments is as follows:

  • MX at the datacenter deployed as a one-armed concentrator

  • Warm spare/High Availability at the datacenter

  • OSPF route advertisement for scalable upstream connectivity to connected VPN subnets

  • Datacenter redundancy

  • Split tunnel VPN from the branches and remote offices

  • Dual WAN uplinks at all branches and remote offices

Auto VPN at the Branch

Before configuring and building Auto VPN tunnels, there are several configuration steps that should be reviewed.


WAN Interface Configuration

While automatic uplink configuration via DHCP is sufficient in many cases, some deployments may require manual uplink configuration of the MX security appliance at the branch. The procedure for assigning static IP addresses to WAN interfaces can be found in our MX IP assignment documentation.

Some MX models have only one dedicated Internet port; if two uplink connections are required on these models, a LAN port must be configured to act as a secondary Internet port. Models that currently require this include the MX64 line, MX67 line, and MX100. This can also be verified per model in our online installation guides. The configuration change is performed on the Configure tab of the device local status page.


Subnet Configuration

Auto VPN allows for the addition and removal of subnets from the Auto VPN topology with a few clicks. The appropriate subnets should be configured before proceeding with the site-to-site VPN configuration.


Hub Priorities

Hub priority is based on the position of individual hubs in the list from top to bottom. The first hub has the highest priority, the second hub the second highest priority, and so on. Traffic destined for subnets advertised from multiple hubs will be sent to the highest priority hub that a) is advertising the subnet and b) currently has a working VPN connection with the spoke. Traffic to subnets advertised by only one hub is sent directly to that hub.
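The selection rule above (highest-priority hub that both advertises the subnet and has a working tunnel) can be sketched as a top-down scan of the hub list. Hub names, subnets, and tunnel states are illustrative assumptions:

```python
import ipaddress

# Sketch of spoke-side hub selection: top-down over the configured hub list,
# picking the first hub that advertises the destination subnet AND has a
# working tunnel. Names and subnets are illustrative.

hubs = [  # configured priority order: first entry = highest priority
    {"name": "DC-1", "subnets": ["10.1.0.0/16"], "tunnel_up": False},
    {"name": "DC-2", "subnets": ["10.1.0.0/16"], "tunnel_up": True},
    {"name": "HQ",   "subnets": ["10.2.0.0/16"], "tunnel_up": True},
]

def select_hub(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    for hub in hubs:                       # top-down priority order
        advertises = any(dst in ipaddress.ip_network(s) for s in hub["subnets"])
        if advertises and hub["tunnel_up"]:
            return hub["name"]
    return None                            # no reachable hub advertises it

print(select_hub("10.1.5.9"))   # DC-1's tunnel is down, so -> 'DC-2'
print(select_hub("10.2.0.7"))   # only HQ advertises it -> 'HQ'
```

With DC-1's tunnel restored, traffic for 10.1.0.0/16 would fail back to DC-1 as the higher-priority hub.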


Configuring Allowed Networks

To allow a particular subnet to communicate across the VPN, locate the local networks section in the Site-to-site VPN page. The list of subnets is populated from the configured local subnets and static routes in the Addressing & VLANs page, as well as the Client VPN subnet if one is configured.


To allow a subnet to use the VPN, set the Use VPN drop-down to yes for that subnet.
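The same per-subnet "Use VPN" setting can also be driven through the Dashboard API. The sketch below only builds the request payload locally (no request is sent); the field names follow the v1 site-to-site VPN endpoint (`PUT /networks/{networkId}/appliance/vpn/siteToSiteVpn`) as I understand it, and should be verified against the current API documentation before use. The hub ID is a hypothetical placeholder:

```python
import json

# Sketch: payload for toggling "Use VPN" per subnet via the Dashboard API.
# Field names assume the v1 siteToSiteVpn endpoint; verify against current
# API documentation. "N_1234" is a hypothetical hub network ID.

payload = {
    "mode": "spoke",
    "hubs": [
        {"hubId": "N_1234", "useDefaultRoute": False},
    ],
    "subnets": [
        {"localSubnet": "192.168.10.0/24", "useVpn": True},   # in VPN
        {"localSubnet": "192.168.50.0/24", "useVpn": False},  # guest: local only
    ],
}

# Which subnets would participate in Auto VPN under this payload:
in_vpn = [s["localSubnet"] for s in payload["subnets"] if s["useVpn"]]
print(json.dumps(in_vpn))
```

Keeping guest subnets out of the VPN, as above, mirrors the earlier guidance to isolate guest traffic from business networks.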

Auto VPN at the Data Center

Deploying a One-Armed Concentrator

A one-armed concentrator is the recommended datacenter design choice for an SD-WAN deployment. The following diagram shows an example of a datacenter topology with a one-armed concentrator:



NAT Traversal

Whether to use Manual or Automatic NAT traversal is an important consideration for the VPN concentrator.


Use manual NAT traversal when:

  • There is an unfriendly NAT upstream

  • Stringent firewall rules are in place to control what traffic is allowed to ingress or egress the datacenter

  • It is important to know which port remote sites will use to communicate with the VPN concentrator


If manual NAT traversal is selected, it is highly recommended that the VPN concentrator be assigned a static IP address. Manual NAT traversal is intended for configurations where all traffic for a specified port can be forwarded to the VPN concentrator.


Use automatic NAT traversal when:

  • None of the conditions listed above that would require manual NAT traversal exist


If automatic NAT traversal is selected, the MX will automatically select a high numbered UDP port to source Auto VPN traffic from. The VPN concentrator will reach out to the remote sites using this port, creating a stateful flow mapping in the upstream firewall that will also allow traffic initiated from the remote side through to the VPN concentrator without the need for a separate inbound firewall rule.
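The stateful-flow behavior described above can be sketched as a tiny firewall state table: the concentrator's outbound UDP packet records a flow, and only return traffic matching that flow is admitted. Addresses and ports are illustrative assumptions:

```python
# Sketch of automatic NAT traversal's reliance on stateful firewalls: the
# concentrator reaches out first, creating a flow mapping that admits the
# remote side's return traffic. IPs and ports are illustrative.

firewall_flows = set()   # recorded as (src_ip, src_port, dst_ip, dst_port, proto)

def outbound(src_ip, src_port, dst_ip, dst_port):
    """Concentrator initiates; the upstream firewall records the flow."""
    firewall_flows.add((src_ip, src_port, dst_ip, dst_port, "udp"))

def inbound_allowed(src_ip, src_port, dst_ip, dst_port):
    """Return traffic is allowed only if it matches a recorded flow."""
    return (dst_ip, dst_port, src_ip, src_port, "udp") in firewall_flows

# Concentrator (10.0.0.5) sources Auto VPN from a high UDP port to a spoke.
outbound("10.0.0.5", 51234, "198.51.100.7", 9350)

print(inbound_allowed("198.51.100.7", 9350, "10.0.0.5", 51234))  # True
print(inbound_allowed("203.0.113.9", 9350, "10.0.0.5", 51234))   # False
```

This is why no separate inbound firewall rule is needed with automatic NAT traversal: the spoke's traffic rides the mapping the concentrator already created.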


Datacenter Redundancy (DC-DC Failover)

Meraki MX Security Appliances support datacenter to datacenter redundancy via our DC-DC failover implementation. The same steps used above can also be used to deploy one-armed concentrators at one or more additional data centers. For further information about VPN failover behavior and route prioritization, refer to our DC-DC Failover documentation.

This section outlines the steps required to configure and implement warm spare high availability (HA) for an MX Security Appliance operating in VPN concentrator mode.


The following is an example of a topology that leverages an HA configuration for VPN concentrators:




When configured for high availability (HA), one MX is active, serving as the primary, and the other MX operates in a passive, standby capacity. The VRRP protocol is leveraged to achieve failover. Check out our MX Warm Spare documentation for more information.

MX IP Assignment

In the datacenter, an MX Security Appliance can operate using a static IP address or an address from DHCP. MX appliances will attempt to pull DHCP addresses by default. It is highly recommended to assign static IP addresses to VPN concentrators.


Uplink IPs

Use Uplink IPs is selected by default for new network setups. In order to properly communicate in HA, VPN concentrator MXs must be set to use the virtual IP (vIP).


Virtual IP (vIP)

Virtual IP is an addressing option that uses an additional (third) IP address that is shared by the HA MXs. In this configuration, the MXs will send their cloud controller communications via their uplink IPs, but other traffic will be sent and received by the shared virtual IP address.

MX Data Center Routing

The MX acting as a VPN concentrator in the datacenter will be terminating remote subnets into the datacenter. In order for bi-directional communication to take place, the upstream network must have routes for the remote subnets that point back to the MX acting as the VPN concentrator.


If OSPF route advertisement is not being used, static routes directing traffic destined for remote VPN subnets to the MX VPN concentrator must be configured in the upstream routing infrastructure.


If OSPF route advertisement is enabled, upstream routers will learn routes to connected VPN subnets dynamically.

Failover Times

There are several important failover timeframes to be aware of:



  Feature                  Failover Time         Failback Time
  Auto VPN Tunnels         30-40 seconds         30-40 seconds
  DC-DC Failover           20-30 seconds         20-30 seconds
  Dynamic Path Selection   Up to 30 seconds      Up to 30 seconds
  Warm Spare               30 seconds or less    30 seconds or less
  WAN connectivity         300 seconds or less   15-30 seconds

Configuring OSPF Route Advertisement

MX Security Appliances support advertising routes to connected VPN subnets via OSPF. 

An MX with OSPF route advertisement enabled will only advertise routes via OSPF; it will not learn OSPF routes.

Note: MX devices in Routed mode only support OSPF on firmware versions 13.4+, when using the "Single LAN" LAN setting. OSPF is otherwise supported when the MX is in passthrough mode on any available firmware version. This can be set under Security & SD-WAN > Configure > Addressing & VLANs.

When spoke sites are connected to a hub MX, the routes to spoke sites are advertised using an LS Update message. These routes are advertised as type 2 external routes.

BGP and Auto VPN

BGP with Auto VPN is utilized for datacenter failover and load sharing. This is accomplished by placing VPN concentrators at each datacenter, with each VPN concentrator peering via BGP with the datacenter edge devices. BGP is utilized for its scalability and tuning capabilities.


More information about implementing BGP and its use cases can be found in our BGP documentation.

Auto VPN Technology Deep Dive

The MX Security Appliance makes use of several types of outbound communication. Configuration of the upstream firewall may be required to allow this communication.

Dashboard & Cloud

The MX Security Appliance is a cloud managed networking device. As such, it is important to ensure that the necessary firewall policies are in place to allow for monitoring and configuration via the Meraki dashboard. The relevant destination ports and IP addresses can be found under the Help > Firewall Info page in the dashboard.

VPN Registry

Meraki's Auto VPN technology leverages a cloud-based registry service to orchestrate VPN connectivity. In order for successful Auto VPN connections to establish, the upstream firewall must allow the VPN concentrator to communicate with the VPN registry service. The relevant destination ports and IP addresses may vary by region, and can be found under the Help > Firewall Info page in the dashboard.

The MX also performs periodic uplink health checks by reaching out to well-known Internet destinations using common protocols. The full behavior is outlined here. In order to allow for proper uplink monitoring, the following communications must also be allowed:

  • DNS test to

  • Internet test to

VPN Registry

In order to participate in Auto VPN, an MX device must register with the Meraki VPN registry. The VPN registry is a cloud-based system that stores the data needed to connect all MX devices into an orchestrated VPN system. The registry is always on and continuously updated, so no manual intervention is needed in the case of reboots, new public IP addresses, hardware failovers, etc. The VPN registry stores the following information for each MX:

  • Subnets (for creating the VPN route table)

  • Uplink IP (public or private)

  • Public IP

The process for adding a new MX into an infrastructure is as follows:

  1. A new MX reports its uplink IP address(es) and shared subnets to the registry

  2. The information is propagated to the other MXs in the infrastructure

  3. The MX establishes the proper VPN tunnels

    1. The MX will try the registry-reported private uplink IP of the peer first

    2. If a connection to the private uplink IP of the peer fails, the MX will try the public uplink IP of its peer
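The private-then-public connection preference in step 3 can be sketched as a small helper; the dictionary keys below are hypothetical illustrations, not the actual registry schema:

```python
def candidate_peer_ips(peer):
    """Order in which an MX attempts to reach an Auto VPN peer, based on the
    registry-reported addresses: private uplink IP first, then public IP.
    (Illustrative sketch; key names are assumptions.)"""
    order = []
    if peer.get("uplink_ip"):
        order.append(peer["uplink_ip"])          # step 3a: registry-reported uplink IP first
    if peer.get("public_ip") and peer["public_ip"] not in order:
        order.append(peer["public_ip"])          # step 3b: fall back to the public IP
    return order

candidate_peer_ips({"uplink_ip": "10.0.0.5", "public_ip": "203.0.113.9"})
# tries 10.0.0.5 first, then 203.0.113.9
```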



Auto VPN Routing

The VPN registry stores the relevant information, including the local routes participating in VPN, for a particular Meraki Auto VPN infrastructure. In the case of a failure, an additional VPN device, or a hub change, the system automatically reconverges without any end-user interaction. By updating all VPN routes on all devices in the system, Auto VPN acts like a routing protocol and converges the system to maintain stability.

High Availability



|                | Key use case | Cost consideration | Failover time |
| HW Redundancy  | Mitigate an MX HW failure using 2 devices on the same broadcast domain | Two devices are required but only a single license | Less than 30 seconds (for hardware failover, not necessarily VPN failover) |
| DC-DC Failover | Mitigate any problem that could prevent a spoke from reaching its primary hub | Two devices and two licenses are required | Between 30 seconds and 5 minutes (SD-WAN allows for faster failover) |

Hardware Redundancy in VPN Concentrator Mode

MX VPN Concentrator warm spare is used to provide high availability for a Meraki Auto VPN head-end appliance. Each concentrator has its own IP address to exchange management traffic with the Meraki Cloud. However, the concentrators also share a virtual IP address that is used for non-management communication.



MX VPN Concentrator - Warm Spare Setup

Before deploying MXs as one-arm VPN concentrators, place them into Passthrough or VPN Concentrator mode on the Addressing and VLANs page. In one-armed VPN concentrator mode, the units in the pair are connected to the network only via their respective Internet ports. Make sure they are NOT connected directly via their LAN ports. Both MXs must be within the same IP subnet and able to communicate with each other, as well as with the Meraki dashboard. Only VPN traffic is routed to the MX, and both ingress and egress packets are sent through the same interface.


MX VPN Concentrator - Virtual IP Assignment

The virtual IP address (vIP) is shared by both the primary and warm spare VPN concentrator. VPN traffic is sent to the vIP rather than the physical IP addresses of the individual concentrators. The virtual IP is configured by navigating to Security appliance > Appliance status when a warm spare is configured. It must be in the same subnet as the IP addresses of both appliances, and it must be unique. It cannot be the same as either the primary or warm spare's IP address.

The two concentrators share health information over the network via the VRRP protocol. Failure detection does not depend on connectivity to the Internet/Meraki dashboard.


MX VPN Concentrator - Failure Detection

In the event that the primary unit fails, the warm spare will assume the primary role until the original primary is back online. When the primary VPN concentrator is back online and the spare begins receiving VRRP heartbeats again, the warm spare concentrator will relinquish the active role back to the primary concentrator. The total time for failure detection, failover to the warm spare concentrator, and ability to start processing VPN packets is typically less than 30 seconds.


MX Warm Spare Alerting

There are a number of options available in the Meraki dashboard for email alerts to be sent when certain network or device events occur, such as a warm spare transition. This is a recommended configuration option and allows a network administrator to be informed in the event of a failover.

The event, “A warm spare failover occurs,” sends an email if the primary MX of a High Availability pair fails over to the spare, or vice-versa.


This alert and others can be referenced in our article on Configuring Network Alerts in Dashboard.


If you are having difficulties getting warm spare to function as expected, please refer to our MX Warm Spare - High Availability Pair document.


HW Redundancy in NAT mode

MX NAT Mode – Warm Spare

MX NAT Mode Warm Spare is used to provide redundancy for internet connectivity and appliance services when an MX Security Appliance is being used as a NAT gateway.


MX NAT Mode - Warm Spare Setup

In NAT mode, the units in the HA pair are connected to the ISP or ISPs via their respective Internet ports, and the internal networks are connected via the LAN ports.


WAN configuration: Each appliance must have its own IP address to exchange management traffic with the Meraki cloud. If the primary appliance is using a secondary uplink, the secondary uplink should also be in place on the warm spare. A shared virtual IP, while not required, will significantly reduce the impact of a failover on clients whose traffic is passing through the appliance. Virtual IPs can be configured for both uplinks.


LAN configuration: LAN IP addresses are configured based on the Appliance IPs in any configured VLANs. No virtual IPs are required on the LAN.


Additional warm spare configuration details can be found in our article, MX Warm Spare - High Availability Pair.


MX NAT Mode - Virtual IP Assignment

Virtual IP addresses (vIPs) are shared by both the primary and warm spare appliance. Inbound and outbound traffic uses this address to maintain the same IP address during a failover to reduce disruption. The virtual IPs are configured on the Security Appliance > Monitor > Appliance status page. If two uplinks are configured, a vIP can be configured for each uplink. Each vIP must be in the same subnet as the IP addresses of the appliance uplink it is configured for, and it must be unique. It cannot be the same as either the primary or warm spare's IP address.


MX NAT Mode - Failure Detection

There are two failure detection methods for NAT mode warm spare. Failure detection does not depend on connectivity to the Internet / Meraki dashboard.

WAN Failover: WAN monitoring is performed using the same internet connectivity tests that are used for uplink failover. If the primary appliance does not have a valid Internet connection based on these tests, it will stop sending VRRP heartbeats which will result in a failover. When uplink connectivity on the original primary appliance is restored and the warm spare begins receiving VRRP heartbeats again, it will relinquish the active role back to the primary appliance.

LAN Failover: The two appliances share health information over the network via the VRRP protocol. These VRRP heartbeats occur at layer 2 and are performed on all configured VLANs. If no advertisements reach the spare on any VLAN, it will trigger a failover. When the warm spare begins receiving VRRP heartbeats again, it will relinquish the active role back to the primary appliance.
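The LAN failover rule above (fail over only when heartbeats stop on every configured VLAN) can be sketched as a small decision function. The timeout and inputs here are illustrative only, not Meraki's actual VRRP timers:

```python
FAILOVER_TIMEOUT = 3.0  # seconds; illustrative value, not Meraki's actual VRRP timer

def spare_role(last_heartbeat_per_vlan, now):
    """Decide the warm spare's role: it transitions to active only when VRRP
    heartbeats have stopped on *all* configured VLANs (LAN failover sketch).
    last_heartbeat_per_vlan maps VLAN ID -> timestamp of last heartbeat seen."""
    silent_everywhere = all(
        now - last_seen > FAILOVER_TIMEOUT
        for last_seen in last_heartbeat_per_vlan.values()
    )
    return "active" if silent_everywhere else "passive"
```

A heartbeat still arriving on even one VLAN keeps the spare passive, matching the "no advertisements on any VLAN" trigger described above.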


MX NAT Mode – DHCP Synchronization

The MXs in a NAT mode high availability pair exchange DHCP state information over the LAN. This prevents a DHCP IP address from being handed out to a client after a failover if it has already been assigned to another client prior to the failover.


DC-DC Failover - Hub/Data Center Redundancy (Disaster Recovery)

Meraki's MX Datacenter Redundancy (DC-DC Failover) allows for network traffic sent across Auto VPN to failover between multiple geographically distributed datacenters.




DC Failover Architecture

A DC-DC failover architecture is as follows:

  • One-armed VPN concentrators or NAT mode concentrators in each DC

  • A subnet(s) or static route(s) advertised by two or more concentrators

  • Hub & Spoke or VPN Mesh topology

  • Split or full tunnel configuration


Operation and Failover

Deploying one or more MXs to act as VPN concentrators in additional data centers provides greater redundancy for critical network services. In a dual- or multi-datacenter configuration, identical subnets are advertised from each datacenter with a VPN concentrator mode MX.


In a DC-DC failover design, a remote site will form VPN tunnels to all configured VPN hubs for the network. For subnets that are unique to a particular hub, traffic will be routed directly to that hub. For subnets that are advertised from multiple hubs, spoke sites will send traffic to the highest priority hub that is reachable.


When an MX Security Appliance is configured to connect to multiple VPN concentrators advertising the same subnets, the routes to those subnets become tracked. Hello messages are periodically sent across the tunnels from the remote site to the VPN hubs to monitor connectivity. If the tunnel to the highest priority hub goes down, the route is removed from the route table and traffic is routed to the next highest priority hub that is reachable. This route failover operation only applies when identical routes are advertised from multiple Auto VPN hubs.
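The route-tracking behavior described above amounts to a priority lookup over reachable hubs. The sketch below illustrates the idea; hub names and the reachability set are hypothetical inputs, not actual MX state:

```python
def active_hub(hub_priority, reachable):
    """Pick the hub that traffic for a shared subnet should use: the
    highest-priority hub whose tunnel still answers hello messages.
    (Illustrative sketch of DC-DC failover route selection.)"""
    for hub in hub_priority:             # hubs listed in configured priority order
        if hub in reachable:
            return hub
    return None                          # no hub reachable: the route is withdrawn

active_hub(["dc-east", "dc-west"], {"dc-west"})
# dc-east's tunnel is down, so traffic fails over to dc-west
```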


Concentrator Priority

When multiple VPN hubs are configured for an organization, the concentrator priority can be configured for the organization. This concentrator priority setting determines the order in which VPN mesh peers will prefer to connect to subnets advertised by multiple VPN concentrators.

This setting does not apply to remote sites configured as VPN spokes.


Other Datacenter Considerations

When implementing a DC-DC architecture, a warm spare concentrator configuration (see the warm spare section above) and OSPF route advertisement should always be taken into consideration for MXs acting as VPN concentrators in a datacenter. Additionally, route flow logic should be considered for all applications in the deployment environment to ensure availability requirements are met. To assist with better understanding DC-initiated flows, please refer to the sections below.


Supported VPN Architectures

VPN Topologies

There are several options available for the structure of the VPN deployment.

Split Tunnel

In this configuration, branches will only send traffic across the VPN if it is destined for a specific subnet that is being advertised by another MX in the same dashboard organization. The remaining traffic will be checked against other available routes, such as static LAN routes and third-party VPN routes, and if not matched will be NATed and sent out from the branch MX unencrypted.



Full Tunnel  

In full tunnel mode all traffic that the branch or remote office does not have another route to is sent to a VPN hub.



Hub and Spoke

In a hub and spoke configuration, the MX security appliances at the branches and remote offices connect directly to specific MX appliances and will not form tunnels to other MX or Z1 devices in the organization. Communication between branch sites or remote offices is available through the configured VPN hubs. This is the recommended VPN topology for most SD-WAN deployments.

  • Hub and Spoke - Total Tunnel Count = (H x (H-1)/2) x L1 + (S x N) x L2 - Where H is the number of hubs, S is the number of spokes, N is the number of hubs each spoke has, and L is the number of uplinks the MX has (L1 for the hubs, L2 for the spokes). If each MX has a different number of uplinks then a sum series, as opposed to a multiplication, will be required.

For example, if all MXs have 2 uplinks, we have 4 hubs and 100 spokes, then the total number of VPN tunnels would be 12 + 1200 = 1212


Standard Hub & Spoke




How it works:

Utilizing the standard Meraki Auto VPN registry to ascertain how the configured VPN tunnels need to form (i.e. via public address space or via private interface address space), as described in Configuring Site-to-site VPN over MPLS.


When should it be used?

Whenever possible. It is strongly recommended that this model is the 1st, 2nd and 3rd option when designing a network. Only if the deployment has an exceptionally strong requirement should one of the following hub and spoke derivatives be considered.

VPN Mesh

It is also possible to use a VPN mesh configuration in an SD-WAN deployment.

In a mesh configuration, an MX appliance at the branch or remote office is configured to connect directly to any other MXs in the organization that are also in mesh mode, as well as any spoke MXs that are configured to use it as a hub.

  • Full Mesh - Total Tunnel Count = (N x (N-1)/2) x L - Where N is the number of MXs and L is the number of uplinks each MX has.

For example, if all MXs have 2 uplinks and there are 50 MXs, then the total number of VPN tunnels would be 2450. Every MX would have to be able to support 196 tunnels: the number of VPN peers for a single MX (49) multiplied by the number of VPN tunnels between each peer pair (4). In this case, we would need 50 MX100s as a minimum.
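Under the stated assumptions, the full mesh sizing can be checked with a quick calculation (a sketch; the per-MX count multiplies peers by the uplink combinations described above):

```python
def full_mesh_tunnels(n_mx, uplinks):
    """Total Auto VPN tunnels in a full mesh: (N x (N-1)/2) x L."""
    return (n_mx * (n_mx - 1) // 2) * uplinks

def tunnels_per_mx(n_peers, local_uplinks, peer_uplinks):
    """Tunnels a single MX must support: one per peer per uplink combination."""
    return n_peers * local_uplinks * peer_uplinks

full_mesh_tunnels(50, 2)     # 2450 tunnels across the mesh
tunnels_per_mx(49, 2, 2)     # 196 tunnels on each individual MX
```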


China Auto VPN

Due to regulatory constraints imposed by the Chinese government, specific architecture requirements must be met to deploy and interconnect the Auto VPN domain within China to the rest of the world.

Secure VPN technology provides the most cost-effective connectivity under most circumstances. Chinese regulations have placed restrictions affecting VPN technologies across international borders. For enterprises to achieve cross border connections, there are two options.


  1. The enterprise can directly lease international dedicated lines from the 3 Chinese telecom carriers (China Telecom, China Mobile, China Unicom) in China.

  2. Additionally, the enterprise can directly delegate a foreign telecom carrier with a presence in China to rent the international dedicated line (including VPN) from the 3 Chinese telecom carriers, and connect the corporate private network and equipment.

Note: The above cross-border connection methods must be used only for internal data exchange and office use. (Current as of 3 February 2018, subject to further regulatory developments.)


All devices located within mainland China will connect to Meraki China servers also located within China. Currently, only enterprise licensing is available for MX devices located within China.

China Auto VPN Architecture




In the above diagram, we are utilizing Meraki Auto VPN to connect the enterprise sites inside China. The above diagram also demonstrates the Chinese government-approved dedicated circuits connecting the Chinese parts of the enterprise to the rest of the global enterprise. Dynamic routing such as BGP or OSPF can be utilized to exchange routing information between the domains.


Meraki SD-WAN


All Cisco Meraki security appliances are equipped with SD-WAN capabilities that enable administrators to maximize network resiliency and bandwidth efficiency. This guide introduces the various components of Meraki SD-WAN and the possible ways in which to deploy a Meraki AutoVPN architecture to leverage SD-WAN functionality, with a focus on the recommended deployment architecture.

What is SD-WAN?

Software-defined WAN (SD-WAN) is a suite of features designed to allow the network to dynamically adjust to changing WAN conditions without the need for manual intervention by the network administrator. By providing granular control over how certain traffic types respond to changes in WAN availability and performance, SD-WAN can ensure optimal performance for critical applications and help to avoid disruptions of highly performance-sensitive traffic, such as VoIP. Additionally, SD-WAN can be a scalable and often much cheaper alternative to traditional WAN circuits like MPLS lines.

Key Concepts

Before deploying SD-WAN, it is important to understand several key concepts.

Concentrator Mode

All MXs can be configured in either NAT or VPN concentrator mode. There are important considerations for both modes. For more detailed information on concentrator modes, click here.

One-Armed Concentrator

In this mode, the MX is configured with a single Ethernet connection to the upstream network. All traffic will be sent and received on this interface. This is the recommended configuration for MX appliances serving as VPN termination points into the datacenter.

NAT Mode Concentrator 

It is also possible to take advantage of the SD-WAN feature set with an MX configured in NAT mode acting as the VPN termination point in the datacenter. 

VPN Topology

There are several topology options available for VPN deployment.

Split Tunnel

In this configuration, branches will only send traffic across the VPN if it is destined for a specific subnet that is being advertised by another MX in the same Dashboard organization. The remaining traffic will be checked against other available routes, such as static LAN routes and third-party VPN routes, and if not matched will be NATed to MX WAN IP address and sent out of WAN interface of the branch MX, unencrypted.

Full Tunnel 

In full tunnel mode all traffic that the branch or remote office does not have another route to is sent to a VPN hub.

Hub and Spoke

In a hub and spoke configuration, the MX security appliances at the branches and remote offices connect directly to specific MX appliances and will not form tunnels to other MX or Z1 devices in the organization. Communication between branch sites or remote offices is available through the configured VPN hubs. This is the recommended VPN topology for most SD-WAN deployments.

VPN Mesh

It is also possible to use a VPN "mesh" configuration in an SD-WAN deployment.


In a mesh configuration, an MX appliance at the branch or remote office is configured to connect directly to any other MXs in the organization that are also in mesh mode, as well as any spoke MXs that are configured to use it as a hub.

Datacenter Redundancy (DC-DC Failover)

Deploying one or more MXs to act as VPN concentrators in additional datacenters provides greater redundancy for critical network services. In a dual- or multi-datacenter configuration, identical subnets can be advertised from each datacenter with a VPN concentrator mode MX. 


In a DC-DC failover design, a spoke site will form VPN tunnels to all VPN hubs that are configured for that site. For subnets that are unique to a particular hub, traffic will be routed directly to that hub so long as tunnels between the spoke and hub are established successfully. For subnets that are advertised from multiple hubs, spoke sites will send traffic to the highest priority hub that is reachable.

Warm Spare (High Availability) for VPN concentrators

When configured for high availability (HA), one MX serves as the primary unit and the other MX operates as a spare. All traffic flows through the primary MX, while the spare operates as an added layer of redundancy in the event of failure.


Failover between MXs in an HA configuration leverages VRRP heartbeat packets. These heartbeat packets are sent from the Primary MX to the Spare MX out the singular uplink in order to indicate that the Primary is online and functioning properly. As long as the Spare is receiving these heartbeat packets, it functions in the passive state. If the Spare stops receiving these heartbeat packets, it will assume that the Primary is offline and will transition into the active state. In order to receive these heartbeats, both VPN concentrator MXs should have uplinks on the same subnet within the datacenter.


Only one MX license is required for the HA pair, as only a single device is in full operation at any given time.

Connection Monitor

Connection monitor is an uplink monitoring engine built into every MX Security Appliance. The mechanics of the engine are described in this article.

SD-WAN Technologies

The Meraki SD-WAN implementation is composed of several key features, built atop our AutoVPN technology.

Prior to the SD-WAN release, Auto VPN tunnels would form only over a single interface. With the SD-WAN release, it is now possible to form concurrent AutoVPN tunnels over both Internet interfaces of the MX.


The ability to form and send traffic over VPN tunnels on both interfaces significantly increases the flexibility of traffic path and routing decisions in AutoVPN deployments. In addition to providing administrators with the ability to load balance VPN traffic across multiple links, it also allows them to leverage the additional path to the datacenter in a variety of ways using the built-in Policy-based Routing and dynamic path selection capabilities of the MX.

Policy-Based Routing (PbR)

Policy-based Routing allows an administrator to configure preferred VPN paths for different traffic flows based on their source and destination IPs and ports.

Dynamic Path Selection

Dynamic path selection allows a network administrator to configure performance criteria for different types of traffic. Path decisions are then made on a per-flow basis based on which of the available VPN tunnels meet these criteria, determined by using packet loss, latency, and jitter metrics that are automatically gathered by the MX.

Performance Probes

Performance-based decisions rely on an accurate and consistent stream of information about current WAN conditions in order to ensure that the optimal path is used for each traffic flow. This information is collected via the use of performance probes.


The performance probe is a small payload (approximately 100 bytes) of UDP data sent over all established VPN tunnels every 1 second. MX appliances track the rate of successful responses and the time that elapses before receiving a response. This data allows the MX to determine the packet loss, latency, and jitter over each VPN tunnel in order to make the necessary performance-based decisions.
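The loss, latency, and jitter calculations can be illustrated with a short function. Meraki does not publish its exact formulas, so this is an approximation; jitter here is taken as the mean difference between consecutive round-trip times:

```python
import statistics

def probe_metrics(rtts_ms, probes_sent):
    """Derive loss, latency, and jitter from probe results, in the same spirit
    as the MX's per-tunnel measurements (illustrative calculation only).
    rtts_ms holds one round-trip time, in ms, per answered probe."""
    loss = 1 - len(rtts_ms) / probes_sent                  # unanswered probes count as lost
    latency = statistics.mean(rtts_ms)                     # average round-trip time
    jitter = statistics.mean(                              # variation between consecutive RTTs
        abs(a - b) for a, b in zip(rtts_ms, rtts_ms[1:])
    )
    return {"loss": loss, "latency_ms": latency, "jitter_ms": jitter}

probe_metrics([20.0, 30.0, 20.0, 30.0], probes_sent=5)
# one of five probes unanswered -> 20% loss, 25 ms latency, 10 ms jitter
```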

High-Level Architecture

This guide focuses on the most common deployment scenario but is not intended to preclude the use of alternative topologies. The recommended SD-WAN architecture for most deployments is as follows:


  • MX at the datacenter deployed as a one-armed concentrator
  • Warm spare/High Availability at the datacenter
  • OSPF route advertisement for scalable upstream connectivity to connected VPN subnets
  • Datacenter redundancy
  • Split tunnel VPN from the branches and remote offices
  • Dual WAN uplinks at all branches and remote offices

SD-WAN Objectives

This guide focuses on two key SD-WAN objectives:

  • Redundancy for critical network services

  • Dynamic selection of the optimal path for VoIP traffic

Example Topology

The following topology demonstrates a fully featured SD-WAN deployment, including DC-DC failover for redundancy.



Both tunnels from a branch or remote office location terminate at the single interface used on the one-armed concentrator.

High Level Traffic Flow

The decisions for path selection for VPN traffic are made based on a few key decision points:

  • Whether VPN tunnels can be established on both interfaces
  • Whether dynamic path selection rules are configured
  • Whether Policy-based Routing rules are configured
  • Whether load balancing is enabled

If tunnels are established on both interfaces, dynamic path selection is used to determine which paths meet the minimum performance criteria for a particular traffic flow. Those paths are then evaluated against the policy-based routing and load balancing configurations.
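The decision points above can be sketched roughly as follows. This is a simplified ordering for illustration only, not the MX's exact algorithm:

```python
def select_path(tunnels, meets_criteria, pbr_preferred=None):
    """Simplified sketch of SD-WAN path selection: filter candidate VPN paths
    by the dynamic path selection criteria (loss/latency/jitter predicate),
    then honor a policy-based routing preference if it survived the filter."""
    usable = [t for t in tunnels if meets_criteria(t)] or list(tunnels)  # fall back if none qualify
    if pbr_preferred in usable:
        return pbr_preferred                 # PbR rule wins among compliant paths
    return usable[0]                         # load balancing would otherwise choose here

select_path(["tunnel-wan1", "tunnel-wan2"], lambda t: t != "tunnel-wan1")
# wan1 fails the performance criteria, so the flow moves to wan2
```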


For a more detailed description of traffic flow with an SD-WAN configuration, please see the appendix.

Failover Times

There are several important failover timeframes to be aware of. Only the failovers called out as SD-WAN are SD-WAN failover times; the others apply to non-SD-WAN scenarios:


| Service | Failover Time | Failback Time | SD-WAN |
| AutoVPN Tunnels | 30-40 seconds | 30-40 seconds | No |
| DC-DC Failover | 20-30 seconds | 20-30 seconds | No |
| Dynamic path selection | Sub-second to up to 30 seconds* | Sub-second to up to 30 seconds* | Yes |
| Warm Spare | 30 seconds or less | 30 seconds or less | No |
| WAN connectivity | 300 seconds or less** | 15-30 seconds | No |

* - This is the only SD-WAN based failover time listed; the failover time depends on the policy type and policy configuration. Note that the 300-second WAN connectivity failover is NOT an SD-WAN failover, despite sometimes being presented as such by less knowledgeable competitors.

** - Note that 300 seconds is an absolute worst-case failover for an MX in OAC/VPNC mode experiencing an intermittent upstream WAN service degradation. In the vast majority of scenarios this failover takes 1-3 seconds, and with proper policy and probing it is less than 500 ms.

Datacenter Deployment

This section will outline the configuration and implementation of the SD-WAN architecture in the datacenter.

Deploying a One-Armed Concentrator

Example Topology

A one-armed concentrator is the recommended datacenter design choice for an SD-WAN deployment. The following diagram shows an example of a datacenter topology with a one-armed concentrator:

[Diagram: datacenter topology with a one-armed concentrator]

Dashboard Configuration

The Cisco Meraki Dashboard configuration can be done either before or after bringing the unit online.


  1. Begin by configuring the MX to operate in VPN Concentrator mode. This setting is found on the Security & SD-WAN > Configure > Addressing & VLANs page. The MX will be set to operate in Routed mode by default.

[Screenshot: Addressing & VLANs page, passthrough/VPN concentrator mode]


  2. Next, configure the Site-to-Site VPN parameters. This setting is found on the Security & SD-WAN > Configure > Site-to-site VPN page.

  3. Begin by setting the type to "Hub (Mesh)."
  4. Configure the local networks that are accessible upstream of this VPN concentrator.
    1. For the Name, specify a descriptive title for the subnet.

    2. For the Subnet, specify the subnet to be advertised to other AutoVPN peers using CIDR notation.

  5. NAT traversal can be set to either automatic or manual. See below for more details on these two options.

  6. An example screenshot is included below:

[Screenshot: Site-to-site VPN page, hub mode with local networks]
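The same hub configuration can also be driven through the Meraki Dashboard API. The payload shape below is an assumption based on the public site-to-site VPN endpoint and should be verified against the current API documentation before use:

```python
def site_to_site_payload(local_subnets):
    """Build a hub-mode site-to-site VPN payload from CIDR strings.
    (Assumed payload shape; confirm against the Dashboard API reference.)"""
    return {
        "mode": "hub",
        "subnets": [{"localSubnet": cidr, "useVpn": True} for cidr in local_subnets],
    }

payload = site_to_site_payload(["10.10.0.0/16"])
# A hypothetical call with the official `meraki` Python SDK could then look like:
# import meraki
# dashboard = meraki.DashboardAPI(api_key)
# dashboard.appliance.updateNetworkApplianceVpnSiteToSiteVpn(network_id, **payload)
```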

NAT Traversal

Whether to use Manual or Automatic NAT traversal is an important consideration for the VPN concentrator.


Use manual NAT traversal when:

  • There is an unfriendly NAT upstream
  • Stringent firewall rules are in place to control what traffic is allowed to ingress or egress the datacenter
  • It is important to know which port remote sites will use to communicate with the VPN concentrator


If manual NAT traversal is selected, it is highly recommended that the VPN concentrator be assigned a static IP address. Manual NAT traversal is intended for configurations in which all traffic for a specified port can be forwarded to the VPN concentrator.


Use automatic NAT traversal when:

  • None of the conditions listed above that would require manual NAT traversal exist


If automatic NAT traversal is selected, the MX will automatically select a high numbered UDP port to source AutoVPN traffic from. The VPN concentrator will reach out to the remote sites using this port, creating a stateful flow mapping in the upstream firewall that will also allow traffic initiated from the remote side through to the VPN concentrator without the need for a separate inbound firewall rule.
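The underlying idea can be illustrated with a plain UDP socket: the outbound flow from a high-numbered source port is what seeds the stateful mapping in the upstream firewall. This is only a sketch of the concept; the MX's actual port selection is internal to the appliance:

```python
import socket

# Sketch of automatic NAT traversal's premise: source VPN traffic from a
# high-numbered UDP port so the outbound flow creates a stateful mapping
# in the upstream firewall that return traffic can reuse.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 0))               # let the OS pick an ephemeral high port
source_port = sock.getsockname()[1]     # this port anchors the firewall's flow entry
# sock.sendto(handshake_bytes, (peer_ip, peer_port)) would establish the mapping
sock.close()
```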

Adding Warm Spare

This section outlines the steps required to configure and implement warm spare (HA) for an MX Security Appliance operating in VPN concentrator mode.

Warm Spare Topology

The following is an example of a topology that leverages an HA configuration for VPN concentrators:

[Diagram: warm spare (HA) topology for VPN concentrators]


When configured for high availability (HA), one MX is active, serving as the primary, and the other MX operates in a passive, standby capacity (spare mode). The VRRP protocol is leveraged to achieve failover. Please see here for more information.

Dashboard Configuration

High availability on MX Security appliances requires a second MX of the same model. The HA implementation is active/passive and will require the second MX also be connected and online for proper functionality. For more detailed information about MX warm spare, please see here.


High availability (also known as a warm spare) can be configured from Security & SD-WAN > Monitor > Appliance status. Begin by clicking "Configure warm spare" and then "Enabled". Next, enter the serial number of the warm spare MX or select one from the drop-down menu. Finally, select whether to use MX uplink IPs or virtual uplink IPs.


Uplink IPs

Use Uplink IPs is selected by default for new network setups. In order to properly communicate in HA, VPN concentrator MXs must be set to use the virtual IP (VIP).


Virtual IP (VIP)

The virtual uplink IPs option uses an additional IP address that is shared by the HA MXs. In this configuration, the MXs will send their cloud controller communications via their uplink IPs, but other traffic will be sent and received by the shared virtual IP address. 


[Screenshot: warm spare configuration]

Configuring OSPF Route Advertisement

MX Security Appliances support advertising routes to connected VPN subnets via OSPF.

Note: MX devices in Routed mode only support OSPF on firmware versions 13.4+, when using the "Single LAN" LAN setting. OSPF is otherwise supported when the MX is in passthrough mode on any available firmware version. This can be set under Security & SD-WAN > Configure > Addressing & VLANs. 


An MX with OSPF route advertisement enabled will only advertise routes via OSPF; it will not learn OSPF routes.

When spoke sites are connected to a hub MX with OSPF enabled, the routes to spoke sites are advertised using an LS Update message. These routes are advertised as type 2 external routes.


Dashboard Configuration

In order to configure OSPF route advertisement, navigate to the Security & SD-WAN > Configure > Site-to-Site VPN page. From this page:

  • Set Advertise remote routes to Enabled
  • Configure the Router ID
  • Configure the Area ID
  • Adjust the Cost, if desired
  • Adjust the Hello timer, if needed
  • Adjust the Dead timer, if needed
  • Enable and configure MD5 authentication, if needed




Other Datacenter Configuration

MX IP Assignment

In the datacenter, an MX Security Appliance can operate using a static IP address or an address from DHCP. MX appliances will attempt to pull DHCP addresses by default. It is highly recommended to assign static IP addresses to VPN concentrators.


Static IP assignment can be configured via the device local status page.

The local status page can also be used to configure VLAN tagging on the uplink of the MX. It is important to take note of the following scenarios:

  • If the upstream port is configured as an access port, VLAN tagging should not be enabled.
  • If the port upstream is configured as a trunk port and the MX should communicate on the native or default VLAN, VLAN tagging should be left as disabled.
  • If the port upstream is configured as a trunk and the MX should communicate on a VLAN other than the native or default VLAN, VLAN tagging should be configured for the appropriate VLAN ID.

Upstream Considerations

This section discusses configuration considerations for other components of the datacenter network.


The MX acting as a VPN concentrator in the datacenter will be terminating remote subnets into the datacenter. In order for bi-directional communication to take place, the upstream network must have routes for the remote subnets that point back to the MX acting as the VPN concentrator.


If OSPF route advertisement is not being used, static routes directing traffic destined for remote VPN subnets to the MX VPN concentrator must be configured in the upstream routing infrastructure.

If OSPF route advertisement is enabled, upstream routers will learn routes to connected VPN subnets dynamically.

Firewall Considerations

The MX Security Appliance makes use of several types of outbound communication. Configuration of the upstream firewall may be required to allow this communication.


Dashboard & Cloud

The MX Security Appliance is a cloud managed networking device. As such, it is important to ensure that the necessary firewall policies are in place to allow for monitoring and configuration via the Cisco Meraki Dashboard. The relevant destination ports and IP addresses can be found under the Help > Firewall Info page in the Dashboard.


VPN Registry

Cisco Meraki's AutoVPN technology leverages a cloud-based registry service to orchestrate VPN connectivity. In order for successful AutoVPN connections to establish, the upstream firewall must allow the VPN concentrator to communicate with the VPN registry service. The relevant destination ports and IP addresses can be found under the Help > Firewall Info page in the Dashboard.


Uplink Health Monitoring

The MX also performs periodic uplink health checks by reaching out to well-known Internet destinations using common protocols. The full behavior is outlined here. In order to allow for proper uplink monitoring, the following communications must also be allowed:

  • ICMP to Google's public DNS service
  • HTTP on port 80
  • DNS queries to the MX's configured DNS server(s)

Datacenter Redundancy (DC-DC Failover)

Cisco Meraki MX Security Appliances support datacenter to datacenter redundancy via our DC-DC failover implementation. The same steps used above can also be used to deploy one-armed concentrators at one or more additional datacenters. For further information about VPN failover behavior and route prioritization, please review this article.

Branch Deployment

This section will outline the configuration and implementation of the SD-WAN architecture in the branch.

Configuring AutoVPN at the Branch

Before configuring and building AutoVPN tunnels, there are several configuration steps that should be reviewed.

Subnet Configuration

AutoVPN allows for the addition and removal of subnets from the AutoVPN topology with a few clicks. The appropriate subnets should be configured before proceeding with the site-to-site VPN configuration.


Begin by configuring the subnets to be used at the branch from the Security & SD-WAN > Configure > Addressing & VLANs page.




By default, a single subnet is generated for the MX network, with VLANs disabled. In this configuration, a single subnet and any necessary static routes can be configured without the need to manage VLAN configurations.


If multiple subnets are required or VLANs are desired, the Use VLANs box should be ticked. This allows for the creation of multiple VLANs, as well as allowing for VLAN settings to be configured on a per-port basis.

Configuring AutoVPN

Once the subnets have been configured, Cisco Meraki's AutoVPN can be configured via the Security & SD-WAN > Configure > Site-to-site VPN page in Dashboard.

Configuring Hub and Spoke VPN

From the Security & SD-WAN > Configure > Site-to-Site VPN page: 

  • Select Spoke for the type
  • Under Hubs, select Add a hub and select the VPN concentrator configured in the datacenter deployment steps
  • Additional hubs can be added using the Add a hub link



Hub Priorities

Hub priority is based on the position of individual hubs in the list from top to bottom. The first hub has the highest priority, the second hub the second highest priority, and so on. Traffic destined for subnets advertised from multiple hubs will be sent to the highest priority hub that a) is advertising the subnet and b) currently has a working VPN connection with the spoke. Traffic to subnets advertised by only one hub is sent directly to that hub.

Configuring Allowed Networks

To allow a particular subnet to communicate across the VPN, locate the local networks section in the Site-to-site VPN page. The list of subnets is populated from the configured local subnets and static routes in the Addressing & VLANs page, as well as the Client VPN subnet if one is configured.


To allow a subnet to use the VPN, set the Use VPN drop-down to yes for that subnet.

NAT Traversal

Please refer to the datacenter deployment steps here for more information on NAT Traversal options.

Adding Performance and Policy Rules

Rules for routing of VPN traffic can be configured on the Security & SD-WAN > Configure > SD-WAN & traffic shaping page in the dashboard.


Settings to configure Policy-based Routing (PbR) and dynamic path selection are found under the SD-WAN policies heading.



The following sections contain guidance on configuring several example rules.

Best for VoIP

One of the most common uses of traffic optimization is for VoIP traffic, which is very sensitive to loss, latency, and jitter. The Cisco Meraki MX has a default performance rule in place for VoIP traffic, Best for VoIP.


To configure this rule, click Add a preference under the VPN traffic section.



In the Uplink selection policy dialogue, select Custom expressions, then UDP as the protocol and enter the appropriate source and destination IP address and ports for the traffic filter. Select the Best for VoIP policy for the preferred uplink, then save the changes.


This rule will evaluate the loss, latency, and jitter of established VPN tunnels and send flows matching the configured traffic filter over the optimal VPN path for VoIP traffic, based on the current network conditions.

Load Balance Video

Video traffic is increasingly prevalent as technologies like Cisco video conferencing continue to be adopted and integrated into everyday business operations. This branch site will leverage another pre-built performance rule for video streaming and will load balance traffic across both Internet uplinks to take full advantage of available bandwidth.


To configure this, click Add a preference under the VPN traffic section.



In the Uplink selection policy dialogue, select UDP as the protocol and enter the appropriate source and destination IP address and ports for the traffic filter. For the policy, select Load balance for the Preferred uplink. Next, set the policy to only apply on uplinks that meet the Video streaming performance category. Finally, save the changes.


This policy monitors loss, latency, and jitter over VPN tunnels and will load balance flows matching the traffic filter across VPN tunnels that match the video streaming performance criteria.

PbR with Performance Failover for Web traffic

Web traffic is another common type of traffic that a network administrator may wish to optimize or control. This branch will leverage a PbR rule to send web traffic over VPN tunnels formed on the WAN 1 interface, but only while those tunnels meet a custom-configured performance class.


To configure this, select Create a new custom performance class under the Custom performance classes section.



In the Name field, enter a descriptive title for this custom class. Specify the maximum latency, jitter, and packet loss allowed for this traffic filter. This branch will use a "Web" custom rule based on a maximum loss threshold. Then, save the changes.


Next, click Add a preference under the VPN traffic section.



In the Uplink selection policy dialogue, select TCP as the protocol and enter the appropriate source and destination IP address and ports for the traffic filter. For the policy, select WAN1 for the Preferred uplink. Next, configure the rule such that web traffic will Failover if there is Poor performance. For the Performance class, select "Web". Then, save the changes.


This rule will evaluate the packet loss of established VPN tunnels and send flows matching the traffic filter out of the preferred uplink. If the loss, latency, or jitter thresholds in the "Web" performance rule are exceeded, traffic can fail over to tunnels on WAN2 (assuming they meet the configured performance criteria).
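The evaluation described above can be sketched as follows. This is a simplified, hypothetical model (the `WEB_CLASS` thresholds, `meets_class`, and `choose_uplink` names are illustrative, not the MX's actual algorithm), assuming a performance class is a set of maximum thresholds and each tunnel reports measured loss, latency, and jitter:

```python
# Sketch of PbR with performance failover: stay on the preferred uplink
# while its tunnels meet the custom performance class; otherwise fail over
# to another uplink whose tunnels do meet it. Thresholds are hypothetical.
WEB_CLASS = {"max_loss_pct": 2.0, "max_latency_ms": 200.0, "max_jitter_ms": 50.0}

def meets_class(stats, perf_class):
    return (stats["loss_pct"]   <= perf_class["max_loss_pct"]
        and stats["latency_ms"] <= perf_class["max_latency_ms"]
        and stats["jitter_ms"]  <= perf_class["max_jitter_ms"])

def choose_uplink(preferred, stats_by_uplink, perf_class):
    if meets_class(stats_by_uplink[preferred], perf_class):
        return preferred
    for uplink, stats in stats_by_uplink.items():
        if uplink != preferred and meets_class(stats, perf_class):
            return uplink  # fail over to a compliant path
    return preferred  # no compliant path anywhere; stay on the preference

stats = {
    "WAN1": {"loss_pct": 5.0, "latency_ms": 80.0, "jitter_ms": 10.0},
    "WAN2": {"loss_pct": 0.5, "latency_ms": 90.0, "jitter_ms": 12.0},
}
print(choose_uplink("WAN1", stats, WEB_CLASS))  # -> WAN2 (WAN1 exceeds max loss)
```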

Layer 7 Classification
Best for VoIP

To configure this rule, click Add a preference under the VPN traffic section.



In the uplink selection policy dialogue, click Add+ to configure a new traffic filter. From the filter selection menu, click the VoIP & video conferencing category and then select the desired layer 7 rules. This example will use the SIP (Voice) rule.



Then, select the Best for VoIP performance class for the preferred uplink and save the changes. This rule will evaluate the loss, latency, and jitter of established VPN tunnels and send flows matching the configured traffic filter over the optimal VPN path for VoIP traffic, based on the current network conditions.

WAN Interface Configuration

While automatic uplink configuration via DHCP is sufficient in many cases, some deployments may require manual uplink configuration of the MX security appliance at the branch. The procedure for assigning static IP addresses to WAN interfaces can be found here.


Some MX models have only one dedicated Internet port; if two uplink connections are required, a LAN port must be configured to act as a secondary Internet port. This change can be performed on the Configure tab of the device local status page.

To use SD-WAN over cellular, the MX must be running firmware MX 16.2 or later, and the feature must be enabled on an integrated cellular MX (MX67C and MX68CW only).

With this feature in place, the cellular connection that was previously only enabled as a backup can be configured as an active uplink on the SD-WAN & traffic shaping page.


When this toggle is set to 'Enabled', the cellular interface details, found on the 'Uplink' tab of the 'Appliance status' page, will show as 'Active' even when a wired connection is also active.


At this point, the cellular connection inherits all the SD-WAN policies associated with WAN2 in the UI. Because this feature takes ownership of the WAN2 logic, the use of two wired uplinks is not supported while it is enabled, as only two WAN connections can be used concurrently.

When using this feature on an MX67C, the LAN2 port becomes unusable, because LAN2 is a multi-use port that can also operate as WAN2.

As such, to configure an SD-WAN policy to utilize the cellular connection, associate the policy with WAN2.



Does the MX support unencrypted AutoVPN tunnels?

No, currently AutoVPN always uses AES-128 encryption for VPN tunnels.

If traffic is encrypted, what about QoS or DSCP tags?

Both QoS and DSCP tags are maintained within the encapsulated traffic and are copied over to the IPsec header.

Can a non-Meraki device be used as a VPN hub?

While it is possible to establish VPN connections between Meraki and non-Meraki devices using standard IPsec VPN, SD-WAN requires that all hub and spoke devices be Meraki MXs.

How does this inter-operate with IWAN using Cisco ISR routers?

Both products use similar, but distinct, underlying tunnelling technologies (DMVPN vs. AutoVPN). A typical hybrid solution may entail using ISR devices at larger sites and MX devices at smaller offices or branches. This will require dedicated IWAN concentration for ISR, as well as a separate SD-WAN head-end for MXs, at the datacenter.

Is dual active AutoVPN available over a 3G or 4G modem?

No, a 3G or 4G modem cannot be used for this purpose. While the MX supports a range of 3G and 4G modem options, cellular uplinks are currently used only to ensure availability in the event of WAN failure and cannot be used for load balancing in conjunction with an active wired WAN connection or VPN failover scenarios.

How does SD-WAN inter-operate with warm spare (HA) at the branch?

SD-WAN can be deployed on branch MX appliances configured in a warm spare capacity; however, only the primary MX will build AutoVPN tunnels and route VPN traffic.


Please see the following references for supplemental information.

Auto VPN White Paper

For further information on how Cisco Meraki's AutoVPN technology functions, please see this article.

SD-WAN page

For further information on SD-WAN availability, please see our SD-WAN page.


Appendix 1: Detailed traffic flow for PbR and dynamic path selection
Complete Flowchart

The path selection logic of Meraki SD-WAN can be expressed as a flowchart of four decision points, which are broken down in detail in the subsequent sections.

Decision Point 1: Can we establish Tunnels over both uplinks?

The very first evaluation point in SD-WAN traffic flow is whether the MX has active AutoVPN tunnels established over both interfaces.


When VPN tunnels are not successfully established over both interfaces, traffic is forwarded over the uplink where VPN tunnels are successfully established.


If we can establish tunnels on both interfaces, processing proceeds to the next decision point.


Decision Point 2: Are performance rules for dynamic path selection defined?

If we can establish tunnels on both uplinks, the MX appliance will then check to see if any dynamic path selection rules are defined.


If dynamic path selection rules are defined, we evaluate each tunnel to determine which satisfy those rules.


If only one VPN path satisfies our performance requirements, traffic will be sent along that VPN path. The MX will not evaluate PbR rules if only one VPN path meets the performance rules for dynamic path selection.



If there are multiple VPN paths that satisfy our dynamic path selection requirements or if there are no paths that satisfy the requirements, or if no dynamic path selection rules have been configured, PbR rules will be evaluated.



After performance rules for dynamic path selection decisions are performed, the MX evaluates the next decision point.

Decision Point 3: Are PbR rules defined?

After checking dynamic path selection rules, the MX security appliance will evaluate PbR rules if multiple or no paths satisfied the performance requirements.


If a flow matches a configured PbR rule, then traffic will be sent using the configured path preference.


If the flow does not match a configured PbR rule, then traffic logically progresses to the next decision point.


Decision Point 4: Is VPN load balancing configured?

After evaluating dynamic path selection and PbR rules, the MX Security appliance will evaluate whether VPN load balancing has been enabled.


If VPN load balancing has not been enabled, traffic will be sent over a tunnel formed on the primary Internet interface. Which Internet interface is the primary can be configured from the Security & SD-WAN > Configure > SD-WAN & traffic shaping page in Dashboard.


If load balancing is enabled, flows will be load balanced across tunnels formed over both uplinks.


VPN load balancing uses the same load balancing methods as the MX's uplink load balancing. Flows are sent out in a round robin fashion with weighting based on the bandwidth specified for each uplink.
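The four decision points above can be condensed into a short sketch. This is a deliberately simplified, hypothetical model (the `select_path` function and its inputs are illustrative; the weighting of the round-robin by uplink bandwidth is omitted), assuming we already know which uplinks have tunnels, which paths met dynamic path selection rules, and any PbR preference:

```python
# Sketch of the SD-WAN decision flow: (1) tunnels on both uplinks?
# (2) exactly one path meets dynamic path selection rules? (3) PbR rule?
# (4) VPN load balancing? Otherwise default to the primary uplink.
def select_path(tunnels, dps_matches=None, pbr_choice=None,
                load_balance=False, primary="WAN1"):
    up = [u for u, ok in tunnels.items() if ok]
    if len(up) == 1:
        return up[0]                  # 1: tunnels on only one uplink
    if dps_matches is not None and len(dps_matches) == 1:
        return dps_matches[0]         # 2: exactly one path meets perf rules
    if pbr_choice is not None:
        return pbr_choice             # 3: PbR preference applies
    if load_balance:
        return "load-balance"         # 4: round-robin across both uplinks
    return primary                    # default: tunnels on the primary uplink

tunnels = {"WAN1": True, "WAN2": True}
print(select_path(tunnels))                        # -> WAN1
print(select_path(tunnels, dps_matches=["WAN2"]))  # -> WAN2
print(select_path(tunnels, load_balance=True))     # -> load-balance
```

Note that when `dps_matches` contains zero or multiple paths, the sketch falls through to PbR evaluation, mirroring Decision Point 3 above.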

MX Templates Best Practices

As a network deployment grows to span multiple sites, managing individual devices can become highly cumbersome and unnecessary. To help alleviate these operating costs, the Meraki MX Security Appliance offers the use of templates to quickly roll out new site deployments and make changes in bulk.

This guide will outline how to create and use MX templates on the dashboard.

It should be noted that service providers or deployments that rely heavily on network management via API are encouraged to consider cloning networks instead of using templates, as the API options available for cloning currently provide more granular control than the API options available for templates.

Planning a Template Deployment for MX

Before rolling out a template deployment (or enabling templates on a production network), it may be helpful to plan the "units" that make up your deployments. This involves asking questions such as:

  • What are my sites? (e.g. retail location, school, branch office, etc.)

  • Are the MXs going to be in HA?

  • Do I need local overrides?

Template Networks

A "site" in network deployment terms is usually the same as a "network" in dashboard terms; each site gets its own dashboard network. As such, when planning multiple sites to be configured the same way, they will share a template network.

A template network is a network configuration that is shared by multiple sites/networks. Individual site networks can be bound to a template network, so changes to the template will trickle down to all bound sites. A new network can also be created based on a template, making it easy to spin-up new sites of the same type.

When planning a template deployment, you should have one template network for each type of site.


The following sections walk through the configuration and use of MX templates in the dashboard.

Creating a Template Network

As outlined above, a template network should be created for each type of site to be deployed.

To create a template network:

  1. In the dashboard, navigate to Organization > Monitor > Configuration templates

  2. Choose Create a new template

  3. Select a descriptive name for your template. If this is a completely new template, select Create new

    • If this template should be based on an existing network, select Copy settings from and select an existing Security appliance network from the drop-down menu.

  4. Choose Add:




  5. If you would like to bind existing networks to this new template, select those networks as Target networks and choose Bind. Otherwise, choose Close.

Template VLAN Configuration
  1. In the dashboard, navigate to Security & SD-WAN > Configure > Addressing & VLANs

  2. Under the Routing section, in the LAN setting sub-section, click VLANs

  3. Choose Add VLAN under the Subnets sub-section

  4. Select a descriptive name for your VLAN

  5. Choose whether the subnetting should be Same or Unique for every network bound to this template.

    • If Same is chosen, all the networks bound to the template will share the exact same subnet. Subnets shared this way are not eligible for use in site-to-site VPN.

    • If Unique is chosen, each network bound to the template will get a unique subnet based on the configured options. The MX does allow local VLAN overrides on templates; however, the chosen subnet must come from the subnet pool assigned to the VLAN on the template, and the VLAN ID cannot be overridden.

      • Subnets are assigned randomly to each network bound to the template.




For more information about template IP range VLAN allocation, reference our article on Managing Networks with Configuration Templates.

Template Static Routes


In template-based MX deployments, static routes can be configured on the parent template and passed to child networks like other configuration parameters. The procedure for configuring a template-based static route is almost identical to the procedure for a regular network, with the exception of how next-hop IP addresses are defined, as the next-hop value may be network-specific.

  1. In the dashboard, navigate to Security & SD-WAN > Addressing & VLANs > Routing > Static Routes

  2. Choose Add Static Route

  • Name  

    • Text description for the static route (not parsed)

      • Ex: prodWirelessNet

  • Subnet 

    • Subnet reachable via static route specified in CIDR notation

      • Ex:  

  • Next Hop IP

    • Next-hop IP is the IP address of the device that connects the MX Security Appliance to this route.  There are two methods for specifying next-hop values on template-based networks. 

      • Option A: IP Assignment

        • Manually define next-hop value

        • Required Info: 

          • Next-hop IP address

      • Option B: IP offset

        • Calculate next-hop IP based on network address for specified VLAN 

          • NOTE: Next Hop IP will be calculated as Network Address + Offset and not MX IP + Offset

        • IP offset parameters: 

          • Select the desired VLAN from the dropdown

          • Offset value (a positive integer) 

        • Example:

          • VLAN configured:

          • VLAN MX IP:

          • Offset: 2 

          • Calculated route next-hop IP:

  • Active 

    • The active modifier controls conditions that must be met for the MX to deem the route usable and add the route to the local routing table. 

      • Always

        • The route will always be active in the MX's routing table

      • While next-hop responds to ping

        • The route is available as long as the configured next hop is responding to pings

      • While host responds to ping:

        • The route is available as long as the configured host is responding to pings



Template Firewall Rules

When configuring layer 3 firewall rules, CIDR notation, as well as the VLAN name, can be used. The VLAN name is used when the entire subnet needs to be specified whereas CIDR notation is used when more flexibility is needed to specify the subnets.


  1. Go to Security & SD-WAN > Configure > Firewall > Layer 3, click Add a rule

  2. Choose the policy to specify whether traffic matching the rule should be allowed or denied

  3. Select the protocol to match in outbound traffic

  4. Specify the IP address or range, using CIDR notation, to match in outbound traffic. The name of a VLAN can also be selected instead

  5. Choose the Src/dst port to match in outbound traffic



Template SD-WAN Policies
  1. SD-WAN policies can be configured to control and modify the flows for specific VPN traffic. For example, you can have a specific type of traffic prefer one uplink over the other. 

  2. Go to Security & SD-WAN > Configure > SD-WAN & traffic shaping > SD-WAN policies > VPN traffic, and choose Add a preference

  3. You'll be prompted with the Uplink selection policy dialog box. From this box, you can define the type of traffic that should adhere to the policy on the Traffic filters section. You can either add Custom expressions to select traffic based on Protocol/Source/Destination criteria, or you can select traffic based on pre-defined applications. 

    • To add a custom expression to select traffic. 

      • Choose Add +

      • The Custom expressions option should already be selected.

      • Choose the Protocol. You can choose either TCP, UDP, ICMP or Any

      • Choose Source to define the source address criteria. You can select one of the following: 

        • You can choose Any

        • You can type in the source in CIDR format and then choose Add

        • You can choose a VLAN from the drop-down menu with the list of VLANs and then choose Add VLAN

        • You can choose a VLAN from the drop-down menu with the list of VLANs, then click Host, type in the last octet of the host address, then choose Add host

      • Choose the Src port. The Source port could be 'Any', a port number (eg: 2000), or a port range (eg: 2000-3000) within 1-65535.

      • Choose Destination to define the destination address criteria. You can select one of the following: 

        • You can choose Any

        • You can type in the destination in CIDR format and then choose Add

        • You can choose a VLAN from the drop-down menu with the list of VLANs and then choose Add VLAN

        • You can choose a VLAN from the drop-down menu with the list of VLANs and then choose Host, type in the last octet of the host address, then choose Add host

      • Choose the Dst port. The Destination port could be 'Any', a port number (eg: 2000), or a port range (eg: 2000-3000) within 1-65535.

    •  To add a pre-defined application to select traffic.

      • Select the application type from the menu and then the desired application from the sub-menu (e.g., VoIP & video conferencing > Webex)

      • Add all the applications that should adhere to the policy, then choose Add+ to exit the applications menu

  4. Under the Policy section, you can select one of the following as the Preferred uplink

    • WAN1 or WAN2. If you choose WAN1 or WAN2, you'll have the opportunity to configure failover criteria under the Fail over if drop-down menu. You can select either Poor performance (and then choose one of the performance classes from the Performance class drop-down menu) or Uplink down. By default, VoIP is the only pre-defined performance class. Any additional performance classes have to be defined under Security & SD-WAN > Configure > SD-WAN & traffic shaping > SD-WAN policies > Custom performance classes.

    • Best for VoIP. The uplink that is best for VoIP traffic will be chosen.

    • Load balance. The MX will balance traffic across the uplinks that meet the performance class selected from On uplinks that meet performance class drop-down menu.

    • Global preference. The uplink will be chosen based on the configuration under Security & SD-WAN > Configure > SD-WAN & traffic shaping > Uplink selection > Global preferences. 

  5. Once you are done with configuring the criteria to apply the policy and the policy, choose Save



Local Overrides

Once an MX Security Appliance network has been bound to a template, some options can still be configured normally through the dashboard. Any local configuration changes made directly on the MX network will override the template configuration.

As an example, a bound MX can be directly configured with a custom Default VLAN. This change is made in the bound network, under Security & SD-WAN > Configure > Addressing & VLANs.




If a network is removed from a template, local overrides are lost, as is any template-related configuration; the MX will automatically take the configuration of the network it is in.


Note: Auto VPN hubs should not be added to templates at all. It is not possible to configure an MX as a spoke with an exit hub that is part of a template.

Note: Static Route local overrides are not supported at this moment for MX networks bound to templates.


DHCP Exceptions

The Meraki MX appliance provides a fully-featured DHCP service that can be enabled and configured on each VLAN individually. When bound to a template, local overrides can be made to the DHCP configurations under Security & SD-WAN > Configure > DHCP.




Forwarding Rules Overrides

To override forwarding rules, navigate under Security & SD-WAN > Configure > Firewall > Forwarding rules overrides.



Active-Active AutoVPN

To override the uplink selection rules for Active-Active AutoVPN, navigate to Security & SD-WAN > Configure > SD-WAN & traffic shaping  > Active-Active AutoVPN.



Templates with MXs of Different Port Counts

Port numbering can differ between MX models, which can cause confusion when assigning a configuration to a specific port number in a template. For example, a configuration on LAN 2 in a template doesn't affect any ports on an MX65.

The table presented in Port Mapping for different MX models outlines template port numbers and their corresponding physical port on some MX models.

You can toggle the LAN2 port between LAN and Internet through the Uplink configuration section of the device local status page.

Performing MX Templates Firmware Upgrades

Firmware upgrades scheduled on the template will automatically be applied to child networks according to each network's local time zone.


As a best practice, make sure that each MX has the correct local time zone configuration under Security & SD-WAN > Configure > General.




MX Replacement Walkthrough

Below are instructions for how to copy configurations from a failed MX bound to a template.

  1. On the Organization > Configure > Inventory page, claim the new MX.

  2. Navigate to the network that has the faulty MX and remove it under Security & SD-WAN > Monitor > Appliance Status > Remove appliance from network

  3. Add the replacement MX to the same network by navigating to Network-wide > Configure > Add devices

  4. Select the network and choose Add devices.


For more information on replacing an MX, refer to our MX Cold Swap article.


Best Practice Design - MS Switching

General Switching Best Practices

Layer 2 Features

  • STP

    • RSTP is enabled by default and should always be enabled. Disable only after careful consideration.

    • PVST interoperability (Catalyst/Nexus)

      • VLAN 1 should be allowed on a trunk between Catalyst and MS. This is crucial for RSTP.

      • Make Catalyst the root switch

    • Set root switch priority to “0 - likely root”

      • Higher-end models such as the MS410 or MS425 deployed at the core or aggregation layer are suitable candidates for the role

      • Ideally, the switch designated as the root should be one which sees minimal changes (config changes, link up/downs etc.) during daily operation




  • Keep the STP diameter under 7 hops, so that a packet never has to traverse more than 7 switches to travel from one point of the network to another
  • BPDU Guard should be enabled on all end-user/server access ports to prevent rogue switches from being introduced into the network
  • Loop Guard should be enabled on trunk ports that are connecting switches 
  • Root Guard should be enabled on ports connecting to switches outside of administrative control
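The diameter guideline above can be checked against a planned topology with a simple breadth-first search. The sketch below is illustrative (the `diameter` helper and the switch names are hypothetical), assuming the topology is described as an adjacency map of switch-to-switch links:

```python
# Sketch: compute the hop diameter of a switch topology via BFS from every
# node, to verify it stays within the recommended 7-hop STP diameter.
from collections import deque

def diameter(adjacency):
    def farthest(start):
        seen, queue, depth = {start}, deque([(start, 0)]), 0
        while queue:
            node, d = queue.popleft()
            depth = max(depth, d)
            for nbr in adjacency[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append((nbr, d + 1))
        return depth
    return max(farthest(n) for n in adjacency)

# A simple chain of five switches has a diameter of 4 hops -- within limits.
chain = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2", "s4"],
         "s4": ["s3", "s5"], "s5": ["s4"]}
print(diameter(chain))  # -> 4
```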


  • MTU

    • Recommended to keep at default of 9578 unless intermediate devices don’t support jumbo frames. This is useful to optimize server-to-server and application performance. Avoid fragmentation when possible.


  • Switchports

    • Trunk

      • Prune unnecessary VLANs off trunk ports using allowed VLAN list in order to reduce scope of flooding

      • Ensure that the native VLAN and allowed VLAN lists on both ends of trunks are identical. Mismatched native VLANs on either end can result in traffic being bridged between VLANs


      • For ease of management, assign tags to switch ports. For example, switch<->switch links can be assigned “trunk”, switch<->AP can be “wireless” etc

    • Aggregation

      • Only LACP is supported for link aggregation. Ensure the other end supports LACP

      • It is recommended to configure aggregation on the dashboard before physically connecting to a partner device




  • UDLD (Unidirectional Link Detection)
    • This should be enabled on fiber trunks - in “Alert Only” mode
  • Link Negotiation
    • This should be set to auto-negotiate for ports connecting Meraki devices
    • Use “forced” mode only if a device connected to the port does not support auto-negotiation



  • Switchport count in a network
    • It is recommended to keep the total switch port count in a network to fewer than 8000 ports for reliable loading of the switch port page.

Layer 3 Features

  • IP addressing and subnetting schema
    • Dedicate /24 or /23 subnets for end-user access
    • Avoid overlapping subnets as this may lead to inconsistent routing and forwarding
  • L3 Interfaces

    • Assign a dedicated management VLAN

    • Avoid configuring an L3 interface for the management VLAN; use L3 interfaces only for data VLANs. This helps separate management traffic from user data

In case of switch stacks, ensure that the management IP subnet does not overlap with the subnet of any configured L3 interface. Overlapping subnets on the management IP and L3 interfaces can result in packet loss when pinging or polling (via SNMP) the management IP of stack members. NOTE: This limitation does not apply to the MS390 series switches.

L3 configuration changes on MS210, MS225, MS250, MS350, MS355, MS410, MS425, MS450 require the flushing and rebuilding of L3 hardware tables. As such, momentary service disruption may occur. We recommend making such changes only during scheduled downtime/maintenance window.

  • Access Control Lists (ACLs)

    • Summarize IP addresses as much as possible (before-after examples below).


Screen Shot 2021-01-21 at 12.18.37 PM.png


Screen Shot 2021-01-21 at 12.19.31 PM.png


  • Maximum ACL limit is 128 access control entries (ACEs) per network
  • Take control over your network traffic. Review user and application traffic profiles and other permissible network traffic to determine the protocols and applications that should be granted access to the network. Ensure traffic to the Meraki dashboard is permitted (Help > Firewall Info)


  • OSPF
    • Found under Switch > Configure > OSPF Routing
    • All configured interfaces should use broadcast mode for hello messages
    • We recommend leaving the “hello” and “dead” timers to a default of 10s and 40s respectively. If more aggressive timers are required, ensure adequate testing is performed.
    • Ensure all areas are directly attached to the backbone Area 0. Virtual links are not supported
    • Configure a Router ID for ease of management




  • With multiple VLANs on a trunk, OSPF attempts to form neighbor relationships over each VLAN, which may be unnecessary. To exchange routing information, OSPF doesn’t need to form neighbor relationships over every VLAN. Instead, a dedicated transit VLAN can be defined and allowed on trunks, typically between the core and aggregation layers with OSPF enabled and “Passive” set to “no.” For all other subnets that need to be advertised, enable OSPF and set “Passive” to “Yes.” This will reduce unnecessary load on the CPU. If you follow this design, ensure that the management VLAN is also allowed on the trunks.




  • Configure MD5 authentication for added security




  • DHCP

    • Specify allowed DHCP servers to protect against rogue servers

    • In a warm spare configuration, the DHCP load balancing mechanism may, in some cases, be inefficient: devices may try to get an address from a member with no leases remaining. This issue does not occur in a stacked configuration.



  • We highly recommend keeping the total switch count in any dashboard network at or below 400 switches. Exceeding 400 switches is likely to slow the loading of the network topology and switch ports pages, or result in inconsistent output.



  • The most important consideration before deploying a multicast configuration is to determine which VLAN the multicast source and receivers should be placed in. If there are no constraints, it is recommended to put the source and receiver in the same VLAN and leverage IGMP snooping for simplified configuration and operational management.




  • Multicast Routing

    • Meraki switches support up to 30 multicast-routing-enabled L3 interfaces per switch

    • PIM SM requires the placement of a rendezvous point (RP) in the network to build the source and shared trees. It is recommended to place the RP as close to the multicast source as possible. Where feasible, connect the multicast source directly to the RP switch to avoid PIM’s source registration traffic which can be CPU intensive. Typically, core/aggregation switches are a good choice for RP placement

    • Ensure every multicast group in the network has an RP address configured on Dashboard

    • Ensure that the source IP address of the multicast sender is assigned an IP in the correct subnet. For example, if the sender is in VLAN 100, its IP address must fall within VLAN 100's subnet

    • Make sure that all Multicast Routing enabled switches can ping the RP address from all L3 interfaces that have Multicast Routing enabled

    • Configure an ACL to block non-critical groups such as (SSDP). As of MS 12.12, multicast routing is no longer performed for the SSDP group

  • IGMP Snooping

    • Disable IGMP snooping if there are no layer 2 multicast requirements. IGMP snooping is a CPU-dependent feature, so it is recommended to utilize it only when required (for example, for IPTV).

    • It is recommended to use the administratively scoped multicast address space ( for internal applications

    • Always configure an IGMP Querier if IGMP snooping is required and there are no Multicast routing enabled switches/routers in the network. A querier or PIM enabled switch/router is required for every VLAN that carries multicast traffic.


High Availability and Redundancy

Switch Stacking

The following steps explain how to prepare a group of switches for physical stacking, how to stack them together, and how to configure the stack in the dashboard:

  1. Add the switches into a dashboard network. This can be a new dashboard network for these switches, or an existing network with other switches. Do not configure the stack in the dashboard yet.

  2. Connect each switch with individual uplinks to bring them both online and ensure they can check in with the dashboard.

  3. Download the latest firmware build using the Firmware Upgrade Manager under Organization > Monitor > Firmware Upgrades. This helps ensure each switch is running the same firmware build.

  4. With all switches powered off and links disconnected, connect the switches together via stacking cables in a ring topology (as shown in the following image). To create a full ring, start by connecting switch 1/stack port 1 to switch 2/stack port 2, then switch 2/stack port 1 to switch 3/stack port 2 and so forth, with the bottom switch connecting to the top switch to complete the ring.
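The cabling pattern in step 4 can be expressed as a simple plan; an illustrative sketch that generates the stack-cable connections for an N-switch ring:

```python
def ring_cable_plan(num_switches):
    """Return stack-cable connections as ((switch, port), (switch, port)) pairs:
    port 1 of each switch connects to port 2 of the next, and the last
    switch wraps back to the first to complete the ring."""
    plan = []
    for i in range(1, num_switches + 1):
        nxt = 1 if i == num_switches else i + 1
        plan.append(((i, 1), (nxt, 2)))
    return plan

for (sw_a, p_a), (sw_b, p_b) in ring_cable_plan(4):
    print(f"switch {sw_a}/stack port {p_a} -> switch {sw_b}/stack port {p_b}")
```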




  5. Connect one uplink for the entire switch stack.

  6. Power on all the switches, then wait several minutes for them to download the latest firmware and updates from the dashboard. The switches may reboot during this process.

    • The power LEDs on the front of each switch will blink during this process.

    • Once the switches are done downloading and installing firmware, their power LEDs will stay solid white or green.

  7. Navigate to Switch > Monitor > Switch stacks.

  8. Configure the switch stack in the dashboard. If the dashboard has already detected the correct stack under Detected potential stacks, click Provision this stack to automatically configure the stack.

  9. Otherwise, to configure the stack manually:

  • Navigate to Switch > Monitor > Switch stacks.

  • Click add one / Add a stack:




  • Select the checkboxes of the switches you would like to stack, name the stack, and then click Create.




The configuration is complete and the stack should be up and running.

  • Use uplink ports on both the “top” and “bottom” switches of the stack for uplink connectivity and redundancy.
  • Configure cross-stack link aggregation for uplink connectivity.
Warm Spare for Layer 3 Switches

MS Series switches configured for layer 3 routing can also be configured with a “warm spare” for gateway redundancy. This allows two identical switches to be configured as redundant gateways for a given subnet, thus increasing network reliability for users.


Note that, while warm spare is a method to ensure reliability and high availability, generally, we recommend using switch stacking for layer 3 switches, rather than warm spare, for better redundancy and faster failover.


Warm Spare is built on VRRP to provide clients with a consistent gateway. The switch pair will share a virtual MAC address and IP address for each layer 3 interface. The MAC address will always begin with 00-00-5E-00-01, and the IP address will always be the configured interface IP address on the primary. Clients will always use this virtual IP and MAC address to communicate with their gateway.
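The virtual MAC can be derived directly from the VRRP group ID: per the VRRP convention, the final byte is the Virtual Router ID in hex (the VRID used below is a hypothetical example; the text above only states the fixed 00-00-5E-00-01 prefix):

```python
def vrrp_virtual_mac(vrid):
    """Build the IPv4 VRRP virtual MAC: the fixed prefix 00-00-5E-00-01
    followed by the one-byte Virtual Router ID (1-255)."""
    if not 1 <= vrid <= 255:
        raise ValueError("VRID must be between 1 and 255")
    return f"00-00-5E-00-01-{vrid:02X}"

print(vrrp_virtual_mac(10))  # -> 00-00-5E-00-01-0A
```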

  • For redundancy, ensure an alternate path exists for the exchange of VRRP messages between the Primary and Spare. A direct connection between the Primary and Spare is recommended

Any changes made to L3 interfaces of MS Switches in Warm Spare may cause VRRP Transitions for a brief period of time. This might result in a temporary suspension in the routing functionality of the switch for a few seconds. We recommend making any changes to L3 interfaces during a change window to minimize the impact of potential downtime.

Quality of Service

  • Classification

    • Identify different traffic classes within the network for prioritization. On a high level, traffic can be classified based on VLAN (user, voip, network control etc)

    • You can further classify traffic within a VLAN by adding a QoS rule based on protocol type, source port and destination port to distinguish data, voice, video, etc.

    • Typical enterprise traffic classes are listed below:


Screen Shot 2018-04-04 at 6.09.42 PM.png


  • Marking

    • Meraki MS supports trusting or remarking incoming DSCP values. Marking decisions are based on DSCP values only; CoS values carried within Dot1q headers are not acted upon. If the end device does not support automatic tagging with DSCP, configure a QoS rule to manually set the appropriate DSCP value.




  • CoS markings within a Dot1q header are not preserved by default since MS switches support DSCP markings only.
  • Queueing and Scheduling

    • Assign an appropriate Class-of-Service queue to each DSCP value




  • An MS network has 6 configurable CoS queues labeled 0-5. Each queue is serviced using FIFO. Without QoS enabled, all traffic is serviced in queue 0 (default class) using a FIFO model. The queues are weighted as follows:


CoS queue            Weight
0 (default class)    1
1                    2
2                    4
3                    8
4                    16
5                    32

Take, for example, a switched environment where VoIP traffic should be in CoS queue 3, an enterprise application in CoS queue 2, and all other traffic is unclassified. The percentage of bandwidth allocation can be calculated using the weight of the individual CoS queue as the numerator and the sum of all configured CoS queue weights as the denominator (in this example 8+4+1=13):

  • VoIP would be guaranteed 8/13, or ~62%, of the bandwidth. The switch would forward 8 frames from CoS queue 3 and move to CoS queue 2.

  • The enterprise application would be guaranteed 4/13, or ~31%, of the bandwidth. The switch would forward 4 frames from CoS queue 2 and move to the default queue.

  • All other traffic would receive 1/13, or ~8%, of the bandwidth. The switch would forward 1 frame from the default queue, then cycle back to CoS queue 3.


Based on the information above, determine the appropriate CoS queue for each class of traffic in your network. Remember, QoS kicks in only when there is congestion so planning ahead for capacity is always a best practice.
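The bandwidth-share arithmetic above can be sketched in a few lines (weights per the worked example: queue 3 = 8, queue 2 = 4, queue 0 = 1):

```python
def wrr_shares(weights):
    """Guaranteed bandwidth fraction per CoS queue under weighted round robin:
    each queue's weight divided by the sum of all active queue weights."""
    total = sum(weights.values())
    return {queue: weight / total for queue, weight in weights.items()}

# VoIP in queue 3, an enterprise application in queue 2, everything else in
# the default queue 0.
shares = wrr_shares({3: 8, 2: 4, 0: 1})
for queue in sorted(shares, reverse=True):
    print(f"CoS queue {queue}: {shares[queue]:.1%}")
```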


Cabling Best Practices for Multi-Gigabit operations


While Category-5e cables can support multigigabit data rates up to 2.5/5 Gbps, external factors such as noise and alien crosstalk, coupled with longer cable and cable-bundle lengths, can impede reliable link operation. Noise can originate from cable bundling, RFI, cable movement, lightning, power surges and other transient events. It is recommended to use Category-6a cabling for reliable multigigabit operation, as it mitigates alien crosstalk by design.



Large Campus Switching Best Practices

This guide provides information and guidance to help the network administrator deploy the Meraki Switch (MS) line in a Campus environment.

Campus networks typically adopt a tiered design, scaled according to the specific needs of the individual campus. These larger networks generally comprise WAN access, a core, an aggregation layer and an access/edge layer. This blueprint is used again and again because it has proven to scale across a wide range of use cases. An example outline of this template/blueprint can be found below.


Large Campus Switching Best Practices.png


Campus Design

This document will walk through the configuration of an ideal campus network design with Meraki hardware, using a theoretical, recommended environment.

The Aggregation Layer

The best networks have redundancy, so our recommended environment will leverage stackable switches capable of running layer 3 features, such as the MS425, at the aggregation layer. This allows 40 Gigabit connections to interconnect the two aggregation layer switches, providing physical redundancy as well as the resiliency of protocol failover. This way, the gateway remains up.


To begin, the MS425s should be online and connected to their gateway upstream. This will allow them to download any available firmware updates, in addition to grabbing any configuration changes made (before or after deployment). If the switches are online, the status LED should be solid white. If the LED is not turning white several minutes after being connected, the MS425 Series Installation Guide can be used to help troubleshoot. In the dashboard, the units should show a green status, meaning the device is connected, has received the latest configuration, and is ready to go.

Once the switches are online, the next step is to configure the dedicated QSFP+ stacking ports on the back of the switch to act as stacking ports. This can be configured across many switches in the dashboard via Switch > Switch Ports, or for each individual switch by clicking on the ports.




In the switch port configuration window, select stacking and save the configuration. This will push the change to the switches and the ports will be enabled for stacking. The next step is to configure the stack members in dashboard. This can be done on the Switch > Switch Stacks page in the dashboard. Switch stack members can either be selected from the Detected potential stacks section, or by selecting add one near the top of the page. Once the switches have been added to a stack in the dashboard, it will take about a minute for them to receive their configuration. Once the switches have received their stacking configuration from the dashboard, they will appear on the detected stacks list.




Name the stack and save changes to get the switches ready for adding interfaces and routes. The next step is to configure VLANs on a switch or stack basis under Switch > Routing and DHCP. From there, interfaces and static routes can be configured to keep traffic restricted. Click Add a static route or Add an interface and fill out the appropriate information, making sure to select the switch stack that was defined earlier.


After this configuration has been completed, OSPF can also optionally be enabled. The need for this depends on the size of the campus, with larger environments generally being more likely to require a dynamic routing protocol among sites/locations. OSPF is enabled via Switch > OSPF, and enabling it will provide failover capabilities for redundant internet paths as well as connections between buildings. If choosing to use OSPF, enable it, advertise the appropriate interfaces and set the Hello and Dead timers to the desired values. It is generally best practice to start with low timer values of 1 (Hello) and 3 (Dead), but your network may require higher values if there is a great deal of delay or distance between your devices.




This configuration will provide a foundation for the campus architecture at the aggregation layer. The next part to configure for the campus architecture is the Access Layer.

The Access Layer

Once the aggregation layer has been configured, a campus network architecture will require a way to connect end-users and clients to the network. It is generally recommended best practice to introduce physical stacking into the access or edge of the network. Physical stacking will provide a high-performance and redundant access layer to minimize any type of failure that might occur, while giving the network ample bandwidth for an enterprise deployment. The MS350 is an ideal example of an access layer switch that has been engineered for this purpose with fully redundant power supplies and fans, plus the ability to stack up to eight switches, providing up to 384 ports in a single stack. Stacked access layer switches like the MS350 should provide ample connectivity for a network closet to accommodate a floor or a wing of a floor, and coupled with 160 Gbps stack bandwidth, multiple uplinks can be used with cross-stack link aggregation to achieve more throughput to aggregation or core layers. High bandwidth uplinks through access-layer port stacking will allow integration with the stacked aggregation layer MS425s that were set up earlier to provide redundancy from access to core, minimizing any sort of failures that might occur.


Before configuring a switch stack, every switch should be added to the same dashboard network and individually brought online with separate, individual uplinks. This ensures that the switches are up-to-date with the latest (and same) firmware versions, and helps ensure the entire stack will come online. 
To begin setting up stacking, the first step is to connect the stack links. Make sure the switches are powered down for this. As best practice, stacking should be set up in a full ring topology by connecting stack port "one" of one switch to stack port "two" of the next switch, continuing this pattern the whole way through the stack, and finishing with the bottom switch connecting to the top switch to complete the ring.




Once the stack has been cabled correctly, the stack should be brought online with a single uplink for the entire stack. This will allow the switches to connect to the Meraki cloud and configure everything or sync with the configuration that was set up ahead of time. Once the switches come online from their single uplink, the Meraki cloud will automatically detect the stack if it’s not already configured and will ask to provision or name the stack:




Once the stack has been named and configured, clients can be connected and link aggregation can be configured for an uplink back to the aggregation layer or core. Enterprise Meraki switches are able to use Link Aggregation Control Protocol (LACP) to bundle up to eight links. It is best practice to accommodate additional bandwidth by using at least two links for the uplink in any stack.


When selecting ports for an aggregated stack uplink, it is best to space the links out so there are minimal hops across the stack for traffic to get to an uplink. In a stack with four switches, this would mean that switch 1 and switch 3 have uplinks. In a stack with eight switches and two uplink cables, this would mean that switch 1 and switch 4 have uplinks. This "maximum spacing" method will provide optimal coverage, during normal operation and in failures. This should help avoid having traffic unnecessarily traverse the whole stack to get to an uplink. Link aggregation is simple to configure using the dashboard, as all ports for all switches can be viewed from a single page. Simply select the ports to be in the link aggregate and click the Aggregate button in dashboard.
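The effect of spacing uplinks out can be quantified as the worst-case ring distance from any member to its nearest uplink switch; a sketch (assuming a full-ring stack, as recommended earlier):

```python
def max_hops_to_uplink(stack_size, uplink_switches):
    """Worst-case number of stack hops from any member to its nearest
    uplink-bearing switch, traversing the ring in either direction."""
    def ring_distance(a, b):
        d = abs(a - b)
        return min(d, stack_size - d)
    return max(
        min(ring_distance(member, up) for up in uplink_switches)
        for member in range(1, stack_size + 1)
    )

print(max_hops_to_uplink(4, [1, 3]))  # four switches, uplinks on 1 and 3 -> 1
print(max_hops_to_uplink(8, [1, 4]))  # eight switches, uplinks on 1 and 4 -> 2
```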


Aggregate Ports.png


For even higher-capacity needs, the Meraki MS355 switch family has a multigigabit ethernet model that pairs with the Meraki multigigabit MR access points to provide single-cable speeds greater than a gigabit. This allows a campus to have the stacking redundancy as well as greater throughput for end clients should it be needed. 



QoS Considerations in the Campus

In any campus deployment, traffic prioritization is key to keeping critical network applications running, even under heavy load. This is done through the use of Quality of Service (QoS) configuration. The simplest explanation of QoS is the prioritization of traffic, ensuring that important or latency-sensitive traffic will get bandwidth before less demanding traffic. To read the guide for QoS on Meraki refer to our MS QoS documentation.


The table below lists the commonly used DSCP values as described by RFC 2475. To keep things standardized, it’s recommended to utilize these values unless the deployment already uses a different set of values.







DSCP (binary)    Class                                      IP Precedence
101 110          High Priority Expedited Forwarding (EF)    101 - Critical
000 000          Best Effort                                000 - Routine
001 010          AF11                                       001 - Priority
001 100          AF12                                       001 - Priority
001 110          AF13                                       001 - Priority
010 010          AF21                                       010 - Immediate
010 100          AF22                                       010 - Immediate
010 110          AF23                                       010 - Immediate
011 010          AF31                                       011 - Flash
011 100          AF32                                       011 - Flash
011 110          AF33                                       011 - Flash
100 010          AF41                                       100 - Flash Override
100 100          AF42                                       100 - Flash Override
100 110          AF43                                       100 - Flash Override
001 000          CS1                                        001 - Priority
010 000          CS2                                        010 - Immediate
011 000          CS3                                        011 - Flash
100 000          CS4                                        100 - Flash Override
101 000          CS5                                        101 - Critical
110 000          CS6                                        110 - Internetwork Control
111 000          CS7                                        111 - Network Control





To help with the reasoning behind the above chart, here is the IP precedence priority from lowest to highest, as per RFC 791.



000 (0)

Routine or Best Effort

001 (1)


010 (2)


011 (3)

Flash - mainly used for Voice Signaling or for Video

100 (4)

Flash Override

101 (5)

Critical - mainly used for Voice RTP

110 (6)

Internetwork Control

111 (7)

Network Control
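The relationship between the DSCP table and the precedence list is positional: the legacy IP precedence is simply the top three bits of the six-bit DSCP field. A short sketch:

```python
def dscp_info(dscp):
    """Decode a DSCP value (0-63) into its six-bit binary form and the
    legacy IP precedence, i.e. the three most significant bits."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be between 0 and 63")
    return f"{dscp:06b}", dscp >> 3

print(dscp_info(46))  # EF   -> ('101110', 5), precedence 5 (Critical)
print(dscp_info(26))  # AF31 -> ('011010', 3), precedence 3 (Flash)
```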


We’ll want to mirror the rest of the network deployment on the access switches so that QoS is consistent across the network. To match default Cisco settings, we’ll configure the following:


DSCP Values    CoS Value
8, 10          1
16, 18         2
24, 26         3
32, 34         4
40, 46         5









You’ll configure these under Switch > Switch Settings and define the QoS through the user interface:


DSCP to COS Map.png
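The mapping can also be expressed as a lookup; a sketch assuming the default Cisco-style mapping, where each listed DSCP maps to the CoS equal to its top three bits (its class selector):

```python
# DSCP -> CoS pairs from the default Cisco-style mapping.
DSCP_TO_COS = {8: 1, 10: 1, 16: 2, 18: 2, 24: 3, 26: 3, 32: 4, 34: 4, 40: 5, 46: 5}

def cos_for_dscp(dscp):
    """Look up the CoS for a DSCP value, defaulting to 0 (best effort).
    Note each mapped value satisfies cos == dscp >> 3."""
    return DSCP_TO_COS.get(dscp, 0)

print(cos_for_dscp(46))  # EF -> CoS 5
print(cos_for_dscp(0))   # unmarked traffic -> CoS 0
```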

Security Settings

With QoS out of the way and traffic prioritized correctly, any network admin who wants maximum control of their network will want to implement security settings. To achieve this, we can use the DHCP server detection mechanism and set it to automatically block new DHCP servers on the network. This keeps the network resilient against people trying to reroute hosts, as well as those who have accidentally plugged in something they should not. We can then explicitly allow the servers that should be handing out DHCP addresses, keeping the network up and operational.


DHCP Servers.png


Another important area of campus deployment security is authentication. Most security-minded deployments will have a RADIUS server to authenticate clients within the network. The Meraki switch line can integrate with a RADIUS server to provide authentication via 802.1x or MAC Authentication Bypass (MAB) on any switch port; either method is supported. This allows control over who connects and the resources accessible to them. Alternatively, a deployment could use a hybrid mode, which first attempts 802.1x and falls back to MAB before either denying the client access or moving it into a guest or remediation VLAN. It is highly recommended to set up a remediation VLAN to isolate unauthorized, guest or non-compliant devices.


In addition to the authentication methods mentioned above, Meraki also includes a RADIUS server monitoring mechanism within the switch line to enable the use of Hybrid Auth: if the RADIUS server goes offline, clients will be bounced for re-authentication once the server is available again. This re-prompts clients to log in so that they can reach network resources when authentication is available, without manual intervention. Use of this feature is up to the design of the network, but it is extremely useful if the network relies on a data center or cloud-hosted RADIUS server that might become unreachable.


As with any of the above security considerations, no network is complete without Access Control Lists (ACLs) to restrict traffic between VLANs. For more information on configuring ACLs on MS switches, please refer to our article on Switch ACL Operation.

Multiple VLANs

With security in place, we can take some initiative in designing the campus to decrease the size of broadcast domains by limiting where VLANs traverse. This requires architecting for VLANs to be trunked to only certain floors of the building or even to only certain buildings depending on the physical environment. This reduces the flooding expanse of broadcast packets so that traffic doesn’t reach every corner of the network every time there’s a broadcast, reducing the potential impact of broadcast storms. For example, the network design may utilize VLAN 3 for Data and VLAN 4 for Voice on the first floor, then apply VLAN 5 for Data and VLAN 6 for Voice on the second floor.


When configuring trunks to the first floor Intermediate Distribution Frames (IDFs), only VLANs 3 and 4 would be required. Such a design can be easily repeated, scaling up to a multi-building campus, assigning one or more VLANs for traffic segregation. This also helps in troubleshooting, helping to locate a user based on the IP address received. Configuring VLANs allowed on a trunk is straightforward in dashboard by using a comma separated list, so to allow VLANs 1, 2, 3, 4 and 10 through 20, simply type 1-4, 10-20.
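The comma-separated allowed-VLAN syntax can be expanded programmatically, for example when auditing trunk configurations; a sketch (the grammar here is inferred from the example in the text, not Meraki's exact parser):

```python
def expand_vlan_list(spec):
    """Expand a string such as '1-4, 10-20' into the set of VLAN IDs it covers."""
    vlans = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = (int(x) for x in part.split("-"))
            vlans.update(range(lo, hi + 1))
        elif part:
            vlans.add(int(part))
    return vlans

print(sorted(expand_vlan_list("1-4, 10-20")))
```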


Note that Meraki switches do not require VLANs to be created or added in order for them to be accepted on trunk interfaces. Therefore, protocols such as Cisco VTP are not necessary, but are compatible with MS switches (i.e. VTP transparent mode).


Update Port.png


For any deployment using Voice over IP (VoIP), it’s a good idea to ensure the voice traffic gets assigned to the correct VLAN. Often, this voice VLAN is defined in addition to the normal traffic VLAN, and modern phones will often have a PC connected through them. With Meraki switches we can easily assign a voice VLAN that will be passed to the phone so that it can tag its traffic into the appropriate VLAN. This exchange is done via CDP or LLDP, depending on the phone model being used. The configuration of this VLAN is straightforward in the dashboard: configure the port as an access port, defining the normal VLAN and then, additionally, the Voice VLAN.


Update Port 2.png

Administration & Access Control

Administration visibility and access can be custom-defined within the Meraki dashboard. With different privilege levels and the ability to tag ports and devices to control who has access, visibility and control can be handed out as necessary. Within the dashboard, full access, restricted access, or read-only access can be defined on a per-individual basis. This allows your help desk staff to have visibility and control over user ports, allows your networking team to have control over a specific building or the entire organization, and allows stakeholders to have visibility without the ability to break anything.


To isolate or reduce the access certain admins have, switches and/or ports can be tagged, granting an admin access to only certain ports or switches and reducing the chance of an admin bringing down critical network functions. This can be done easily in the UI on the list of switch ports. Once the ports are tagged, just tie the tag to the admin.


Update Port3.png



At this point the campus network setup is complete, utilizing the cloud. The next step would be ensuring users transition smoothly to the brand new network. The dashboard can be used to provide excellent reporting and knowledge of what’s happening in the new campus deployment. The dashboard will provide an overview of what clients are doing with Meraki’s built-in traffic analytics.


Meraki’s layer 7 application visibility is enhanced to dynamically detect applications running on the network, and provide hostname and IP address visibility. This information can be used to understand user behavior on the network and make policy decisions, such as creating custom traffic shaping rules, or applying group policies to specific users.


Enabling Traffic Analytics

This is an opt-in feature and can be enabled under the Configure > Network-wide settings page by selecting the Enable Hostname Visibility function.


Traffic Analysis.png


Using Traffic Analytics

The enhanced Traffic analytics page will be visible under the Monitor menu whenever hostname visibility is enabled. This page will provide a total unique client count across an entire network over time. The view can be customized for different time periods (last 2 hours, day, week, and month), and on a per-SSID basis.


This page will show the following information on a per-network or per-SSID basis:

  • Application

  • Specific destination for broad application categories such as ‘Miscellaneous secure web’

  • Protocol information

  • Port information

  • Usage breakdowns by %, data sent and received, number of flows

  • Total active time on application across all clients

  • Number of clients

Signature or Application-Level Analytics

By clicking on an application signature (e.g. ‘Dropbox’ or ‘Non-web TCP’), it’s possible to see a complete breakdown of hostnames and IP addresses comprising this application category. Use this information to understand the communication patterns of certain types of traffic.


Application Analysis.png


This page will show you the following information on a per-application basis:

  • Application name, category, ports, description

  • Usage over time

  • List of users

  • Destination list - hostnames and IP addresses contributing to this application

  • Total number of clients per destination

  • Time spent per client

User Level Analytics

By clicking on a specific user, it’s possible to see a complete breakdown of hostnames and IP addresses this user has visited, including the time spent on each destination. Use this information to understand individual user behavior and apply policies on a per-user basis.

User level analytics.png



Summary reports can be mailed directly to the network administrator, relieving them of at least one task when their plate is full. These reports provide updates on the network and can be shared directly to key stakeholders with little to no effort.


Report Email.png


Troubleshooting with MS

Traffic analytics and summary reports emailed directly to the inbox help ease the burden of the engineer. That is, until the first trouble ticket comes in from someone unable to print to their local printer or unable to access a resource on the file server. While this used to be a headache, the task can be done quickly by utilizing the visibility of the dashboard. Simply open up the clients page and type in the name associated with the user or their PC. This will filter the list directly to the machine in question.


Client Search.png


Once the machine in question has been located, there’s more information that can be provided. By clicking on the device we’re interested in, we can get details such as what switch/port or even AP it’s connected to as well as IP address, MAC address and even firewall information.


Client Details.png


This screen allows us to quickly get an overview of the client and even try to ping it from the dashboard. We can also simply take a packet capture of the traffic while having the end user attempt the action that was failing. There’s also additional information if we’re tracking clients using Systems Manager (SM), enabling us to see what software is installed and making sure it’s in sync with any updates or policies that might be applied via SM.

If we want to rule out a physical layer issue, this can be done straight from dashboard using the cable test utility available on the switches.


Cable Test.png


This allows us to rule out the physical layer simply and easily without trying to find a cable tester or sending someone to a remote location with cables and a tester to verify. With the information provided by the cable test, extra cables or a new cable run can be implemented if the test comes back as failed.


Another dashboard tool is the topology view, which provides insight into the network and how it’s connected, even showing a redundant link that’s not plugged in. In the case of a hybrid deployment, information can be pulled from directly connected devices to see that things are wired properly in the infrastructure.




It should be noted that the dashboard topology view is only available on networks with at least one MS switch.

Templates for Switching Best Practices

As a network deployment grows to span multiple sites, managing individual devices can become highly cumbersome and unnecessary. To help alleviate these operating costs, the Meraki MS switch offers the use of templates to quickly roll out new site deployments and make changes in bulk.

This guide will outline how to create and use MS switch templates in Dashboard.

Planning a Template Deployment

Before rolling out a template deployment (or enabling templates on a production network), it may be helpful to plan the "units" that make up your deployments. This involves asking questions such as:

  • What are my sites? (e.g. retail location, school, branch office, etc.)
  • How many switches are at each site?
  • Are multiple switches at the same site configured the same way? (e.g. access switches, classroom switches, etc.)

The answers to these questions should directly affect your template deployment, specifically your use of template networks and switch profiles.

Layer 3 Routing & DHCP

Layer 3 settings for networks bound to a template act as exceptions to the template. The Routing & DHCP, OSPF routing, and DHCP servers & ARP pages will be configurable on each network bound to a template and behave the same as if the network was not bound to a template. The one difference is that setting email alerts for newly detected DHCP servers on the DHCP servers & ARP page is available from the parent template's Network-wide > Alerts page and therefore applies to all networks bound to the template. 

Template Networks

A "site" in network deployment terms is usually the same as a "network" in Dashboard terms; each site gets their own Dashboard network. As such, when planning multiple sites to be configured the same way, they will share a template network.

A template network is a network configuration that is shared by multiple sites/networks. Individual site networks can be bound to a template network, so changes to the template will trickle down to all bound sites. A new network can also be created based on a template, making it easy to spin up new sites of the same type.

When planning a template deployment, you should have one template network for each type of site.

Switch Profiles

If multiple switches on a site share the same port configuration, they can easily be deployed and updated using switch profiles. Within a template network, a switch profile defines the per-port configuration for a group of switches. For example, if all sites contain multiple MS220-24 switches that are all configured identically, an administrator can set up a switch profile for their MS220-24 switches. This way, whenever a new MS220-24 is installed at a site, it will automatically assume its switch profile and configure its ports accordingly.

When planning a template deployment, if multiple switches share the same configuration, consider using a switch profile for each type of switch.

Profiles can be overridden by a local port configuration. If the port of a profile-bound switch is manually configured, that manual configuration will be "sticky" and remain on the switch, even if the profile is changed.

Different MS switch models cannot share the same profile.

Guidelines and Limitations

1. STP bridge priority cannot be changed on switch stacks using templates. In a network template, switch profiles can be assigned STP bridge priority values from Switch > Switch settings > STP configuration. A value assigned to a switch profile, however, will only propagate to the standalone switches bound to that profile; switch stacks will retain the default STP priority of 32768.

If an MS switch stack must be the root bridge of an STP domain which has at least one other MS switch stack, it should be placed in a Dashboard network that is not bound to a network template for the STP priority value configured on it to take effect.


The following diagram shows the relationship between network templates and switch profiles:

Screen Shot 2016-03-15 at 2.32.51 PM.png


The following sections walk through configuration and use of switch templates in Dashboard:

Creating a Template Network

As outlined above, a template network should be created for each type of site to be deployed.

To create a template network:

  1. In Dashboard, navigate to Organization > Monitor > Configuration templates.
  2. Click Create a new template.
  3. Select a descriptive name for your template. If this is a completely new template, select Create new and Switch template.
    • If this template should be based on an existing network, select Copy settings from and an existing switch network.
  4. Click Add:
    Screen Shot 2016-11-11 at 2.28.10 PM.png
  5. If you would like to bind existing networks to this new template, select those networks as Target networks and click Bind. Otherwise, click Close.

Multiple device types can be managed within a single template, acting as a combined network. For more information on managing non-switch devices in a template, refer to our documentation.

Once a network has been bound to this template, the template network should appear in the network drop-down:

Screen Shot 2016-03-15 at 1.31.22 PM.png

Configuring a Template Network

Once a template has been created, it can be configured with some global switch settings. These settings will also apply to all networks bound to this template.

To configure a template network, select the template from the network drop-down and configure settings normally under the Switch > Configure menu options. This can include setting a global management VLAN, IPv4 ACLs, port schedules, etc. Switch profiles can also be configured here, as detailed below.

Please note that any configuration changes made here will apply to all bound networks.

Creating and Using Switch Profiles

Switch profiles can be used to bulk configure switches of the same model.

To create a switch profile:

  1. Select the appropriate network template from the network drop-down.
  2. Navigate to Switch > Configure > Profiles.
  3. If no profiles have been created, a Create Profile window will appear. Otherwise, click Create profile.
  4. Select a descriptive name and the switch model to be configured.
  5. Click Save:
    2017-07-13 13_39_22-Switch profiles - Meraki Dashboard.png
  6. Click on the newly created profile.
  7. To configure the profile, select View ports on this profile.
  8. The following port configuration page can be used identically to the normal switch port configuration page:
    2017-07-13 13_47_53-Switch profile ports - Meraki Dashboard.png
  9. Navigate back to the switch profile under Switch > Configure > Profiles > Profile name.
  10. Click Bind switches.
  11. Select one or more switches, then click Bind to profile:
    Screen Shot 2016-03-15 at 2.12.53 PM.png

All bound switches will now use the port configuration set in the switch profile. Any changes made in the profile will now affect all bound switches.

Local Overrides

Once a switch has been bound to a profile, it can still be configured normally through Dashboard. Any port configuration changes made directly on the switch will override the profile configuration and be reported as a local override. If there is a need to override template configuration on a large scale, it is recommended to keep the switch port count (with local overrides) per network at or below 2,500. For better performance when loading the switch ports page, reduce the count to 2,000.

In the example below, the bound switch was directly configured to have a custom VLAN set on port 3. In the template network, under Switch > Configure > Switch Profiles, this configuration change is shown under the Local overrides column:
Screen Shot 2016-03-15 at 2.22.24 PM.png
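The precedence rule can be sketched as a simple merge: profile settings provide the base per-port configuration, and any locally configured fields win. This is an illustrative model of the behavior only, not the Dashboard's actual data model; the port numbers and field names are hypothetical.

```python
def effective_port_config(profile_ports, local_overrides):
    """Per-port config a bound switch ends up with: profile plus overrides.

    Illustrative model of the precedence rule only; the port/field names
    here are hypothetical.
    """
    return {
        port: {**cfg, **local_overrides.get(port, {})}
        for port, cfg in profile_ports.items()
    }

# Mirroring the example above: port 3 gets a custom VLAN locally,
# while the rest of the profile configuration is retained.
profile = {3: {"type": "access", "vlan": 10}}
overrides = {3: {"vlan": 20}}
result = effective_port_config(profile, overrides)
```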

Auto-Binding Switches

When a new network is bound to an existing template with at least one switch profile, the option is available to "auto-bind" switches to that template's profiles. This dramatically reduces the amount of work necessary to set up a new site; if profiles exist for every switch model to be deployed, the only necessary configuration is binding the network to the template.

To auto-bind switches in a new network:

  1. Create a new switch network.
  2. Add devices to the network as normal.
  3. Navigate to Organization > Configuration templates > Template name.
  4. Click Bind additional networks.
  5. Select the newly created network, and check Auto-bind target devices.
  6. Click Bind:

Screen Shot 2016-03-15 at 2.39.16 PM.png

All switches in the new network will now automatically be bound to their appropriate profiles.
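For larger rollouts, binding can also be driven through the Meraki Dashboard API, which exposes a bind operation (POST /networks/{networkId}/bind) accepting an autoBind flag. The sketch below only builds the URL and request body; the network and template IDs are placeholders, and the request would be sent with any HTTP client using an X-Cisco-Meraki-API-Key header:

```python
def bind_network_request(network_id, template_id, auto_bind=True):
    """Build the URL and JSON body for a Dashboard API v1 network-bind call.

    The IDs passed in are placeholders for illustration; autoBind asks the
    cloud to auto-bind switches to matching profiles in the template.
    """
    url = f"https://api.meraki.com/api/v1/networks/{network_id}/bind"
    body = {"configTemplateId": template_id, "autoBind": auto_bind}
    return url, body

# Placeholder IDs; substitute your own network and template IDs.
url, body = bind_network_request("N_1234", "L_5678")
```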

Auditing a Template Deployment

Though templates can be used to consolidate most switch configuration, there may be exceptions for individual ports or settings that necessitate local configuration changes on the switch. Since local configuration changes override template and profile configurations, it is important to keep track of any local configuration changes across the organization.

To view local configuration overrides of a template:

  1. Navigate to Organization > Configuration templates > Template name.
  2. The Local overrides column will show if any networks are overriding the template configuration:


To view local configuration overrides of a switch profile:

  1. Select the appropriate template from the network drop-down.
  2. Navigate to Switch > Configure > Profiles > Profile name.
  3. The Local overrides column will show if any switches are overriding the profile configuration:

Screen Shot 2016-03-16 at 1.45.38 PM.png

Switch Replacement Walkthrough for Stacks

Below are instructions for copying the configuration from a failed switch that is part of a stack, in a network bound to a template.

  1. On the Organization > Configure > Inventory page, claim the new switch and then add the new switch to the existing network.


  2. Bind the new switch to the template profile.
  • Navigate to the parent template in Dashboard.

  • Navigate to Switch > Configure > Profiles within that template.

  • Click on the corresponding profile.

  • Click the Bind switches button.

  • Click the checkbox next to the new switch and click the Bind to profile button.


  3. Firmware upgrade for the new switch.
  • Provide the new switch a physical uplink connection and then power it on. The new switch needs to be brought online as a standalone device, not yet added to the stack, so that it can update its firmware.

  • Confirm via the connectivity graph or Support that the switch has upgraded its firmware.

  • While the new switch upgrades, you may proceed with the below steps, stopping before Step 8 until the new switch has had a chance to upgrade.


  4. Obtain the current configuration.
  • Navigate to Switch > Configure > Profiles within the parent template.

  • Click on the profile in question.

  • Filter in the Search switches… field for the name of the old switch.

  • Note the local override configuration. Save in a text editor for use in Step 5.


  • In the child network, navigate to the Switch > Monitor > Switch ports page.

  • In the Search switches… field, filter by the name of the old switch and select the below column options.


  • Then, take screenshots of the port configurations or copy and paste into a spreadsheet or text editor application.


  5. Configure the replacement switch.
  • On the Switch > Monitor > Switch ports page of the child network, configure the switch ports of the new switch based on the configuration gathered in Step 4.

  • Once complete, navigate back to the template profile details page from Step 4 and ensure that the local overrides between the old and new switch match.


  6. Power down the old switch.
  7. Unbind the old switch from the profile.
  • On the template profile details page, click the check box next to the old switch and then click the Unbind button.


  8. Add the new switch to the stack.
  • After confirming that the new switch has upgraded its firmware as mentioned in Step 3, power down the new switch.

  • In the child network, navigate to the Switch > Monitor > Switch stacks page.

  • Click on the stack in question.

  • Click the Manage members tab.

  • Under Add members, click the checkbox next to the new switch and then click the Add switches button.


  9. Remove the old switch from the stack.
  • In the child network, navigate to the Switch > Monitor > Switch stacks page.

  • Click on the stack in question.

  • Click the Manage members tab.

  • Click the checkbox next to the old switch and then click the Remove switches button.


  10. Physically cable and stack the new switch.
  11. Power on the new switch.

Additional Notes and Resources

If many networks are being deployed at once, the bulk network creation tool can be used to bind templates and profiles in bulk.

Please reference our documentation for more information on Meraki configuration templates.

Best Practice Design - MR Wireless

High Density Wi-Fi Deployments

High-density Wi-Fi is a design strategy for large deployments to provide pervasive connectivity when a high number of clients are expected to connect to access points within a small space. A location can be classified as high density if more than 30 clients are connecting to an AP. To better support high-density wireless, Cisco Meraki access points are built with a dedicated radio for RF spectrum monitoring, allowing the MR to handle high-density environments. Access points without this dedicated radio have to use proprietary methods for opportunistic scans to gauge the RF environment, which may result in suboptimal performance unless additional sensors or air monitors are added.


Large campuses with multiple floors, distributed buildings, office spaces, and large event spaces are considered high density due to the number of access points and devices connecting. More extreme examples of high-density environments include sports stadiums, university auditoriums, casinos, event centers, and theaters.


As Wi-Fi continues to become ubiquitous, an increasing number of devices are consuming an increasing amount of bandwidth. The increased need for pervasive connectivity can put additional strain on wireless deployments. Adapting to these changing needs will not always require more access points to support greater client density. As the needs for wireless connectivity have changed over time, the IEEE 802.11 wireless LAN standards have changed to adapt to greater density, from the earliest 802.11a and 802.11b standards in 1999 to the 802.11ac standard, introduced in 2013, and the new 802.11ax standard currently in development.


In the recent past, the process to design a Wi-Fi network centered around a physical site survey to determine the fewest number of access points that would provide sufficient coverage. By evaluating survey results against a predefined minimum acceptable signal strength, the design would be considered a success. While this methodology works well to design for coverage, it does not take into account requirements based on the number of clients, their capabilities, and their applications' bandwidth needs.


Understanding the requirements for the high-density design is the first step and helps ensure a successful design. This planning helps reduce the need for further site surveys after installation and the need to deploy additional access points over time. It is recommended to have the following details before moving on to the next steps in the design process:

  • Type of applications expected on the network

  • Supported technologies (802.11 a/b/g/n/ac/ax)

  • Type of clients to be supported (Number of spatial streams, technologies, etc.)

  • Areas to be covered

  • Expected number of simultaneous devices in each area

  • Aesthetic requirements (if any)

  • Cabling constraints (if any)

  • Power constraints (It’s best to have PoE+ capable infrastructure to support high performance APs)

Capacity Planning

Once the above-mentioned details are available, capacity planning can then be broken down into the following phases:

  • Estimate Aggregate Application Throughput

  • Estimate Device Throughput

  • Estimate Number of APs

Calculating the number of access points necessary to meet a site's bandwidth needs is the recommended way to start a design for any high density wireless network.

Estimate Aggregate Application Throughput

Usually there is a primary application that is driving the need for connectivity. Understanding the throughput requirements for this application and any other activities on the network will provide a per-user bandwidth goal. This required per-user bandwidth will be used to drive further design decisions. Throughput requirements for some popular applications are given below:



Application            Throughput
Web browsing           500 kbps
                       16 - 320 kbps
Video conferencing     1.5 Mbps
Streaming - Audio      128 - 320 kbps
Streaming - Video      768 kbps
Streaming - Video HD   768 kbps - 8 Mbps
Streaming - 4K         8 - 20 Mbps

Note: In all cases, it is highly advisable to test the target application and validate its actual bandwidth requirements. It is also important to validate applications on a representative sample of the devices that are to be supported in the WLAN. Additionally, not all browsers and operating systems enjoy the same efficiencies, and an application that runs fine in 100 kilobits per second (Kbps) on a Windows laptop with Microsoft Internet Explorer or Firefox may require more bandwidth when being viewed on a smartphone or tablet with an embedded browser and operating system.

Once the required bandwidth throughput per connection and application is known, this number can be used to determine the aggregate bandwidth required in the WLAN coverage area. It is recommended to have an aggregate throughput for different areas such as classrooms, lobby, auditorium, etc. as the requirements for these areas might be different.


As an example, we will design a high-density Wi-Fi network to support HD video streaming that requires 3 Mbps of throughput. Based on the capacity of the auditorium, there may be up to 600 users watching the HD video stream. The aggregate application throughput can be calculated using the following formula:

(Application Throughput) x (Number of concurrent Users) = Aggregate Application Throughput

3 Mbps x 600 users = 1800 Mbps 

Note that 1.8 Gbps exceeds the bandwidth offerings of almost all internet service providers. The total application bandwidth we are estimating is a theoretical demand upper bound, which will be used in subsequent calculations.
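The arithmetic above can be expressed as a tiny helper, using the numbers from this example:

```python
def aggregate_throughput_mbps(app_mbps, concurrent_users):
    """Aggregate application throughput: per-user demand times user count."""
    return app_mbps * concurrent_users

# 3 Mbps per HD stream x 600 concurrent users = 1800 Mbps (1.8 Gbps)
demand = aggregate_throughput_mbps(3, 600)
```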

Estimate Device Throughput

While Meraki APs support the latest technologies and the maximum data rates defined per the standards, the average device throughput available is often dictated by other factors such as client capabilities, simultaneous clients per AP, technologies to be supported, bandwidth, etc.


Client capabilities have a significant impact on throughput, as a client supporting only legacy rates will have lower throughput compared to a client supporting newer technologies. Additionally, the bands supported by the client may also have some impact on throughput. Meraki APs have a band steering feature that can be enabled to steer dual-band clients to 5 GHz.

Note: A client supporting only 2.4 GHz might have lower throughput compared to a dual-band client, since a higher noise level is expected on 2.4 GHz than on 5 GHz and the client might negotiate a lower data rate on 2.4 GHz.


In certain cases, having a dedicated SSID for each band is also recommended to better manage client distribution across bands; it also removes the possibility of any compatibility issues that may arise.

Note: The option to have a 2.4 GHz-only SSID is disabled by default. Please contact Meraki support to enable this feature.


To assess client throughput requirements, survey client devices and determine their wireless capabilities. It is important to identify the supported wireless bands (2.4 GHz vs 5 GHz), supported wireless standards (802.11a/b/g/n/ac), and the number of spatial streams each device supports. Since it isn’t always possible to find the supported data rates of a client device through its documentation, the Client details page on Dashboard can be used as an easy way to determine capabilities.


Example Client details listing


Wi-Fi is based on CSMA/CA and is half-duplex. That means only one device can talk at a time while the other devices connected to the same AP wait for their turn to access the channel. Hence, simultaneous client count also has an impact on AP throughput, as the available spectrum is divided among all clients connected to the AP. While Meraki has a client balancing feature to ensure clients are evenly distributed across APs in an area, an expected client count per AP should be known for capacity planning.

Note: In order to ensure quality of experience it is recommended to have around 25 clients per radio or 50 clients per AP in high-density deployments.


Starting with 802.11n, channel bonding is available to increase the throughput available to clients, but as a result the number of unique available channels for APs is reduced. Due to the reduced channel availability, co-channel interference can increase in bigger deployments as channel reuse is impacted, causing a negative impact on overall throughput.

Note: In a high-density environment, a channel width of 20 MHz is a common recommendation to reduce the number of access points using the same channel.


Client devices don’t always support the fastest data rates. Device vendors have different implementations of the 802.11ac standard. To increase battery life and reduce size, most smartphones and tablets are designed with one (most common) or two (most new devices) Wi-Fi antennas inside. This design has led to slower speeds on mobile devices by limiting them to fewer spatial streams than the standard supports. In the chart below, you can see the maximum data rates for single stream (433 Mbps), two stream (866 Mbps), and three stream (1300 Mbps). No devices on the market today support 4 spatial streams or wider 160 MHz channels, but these are often advertised as optional "Wave 2" features of the 802.11ac standard.



            20 MHz Channel Width   40 MHz Channel Width   80 MHz Channel Width
1 Stream    87 Mbps                200 Mbps               433 Mbps
2 Streams   173 Mbps               400 Mbps               866 Mbps
3 Streams   289 Mbps               600 Mbps               1300 Mbps

The actual device throughput is what matters to the end user, and this differs from the data rate. Data rates represent the rate at which data packets are carried over the medium. Packets contain a certain amount of overhead that is required to address and control them; the actual throughput is the payload data without that overhead. Based on the advertised data rate, next estimate the wireless throughput capability of the client devices. A common estimate of a device's actual throughput is about half of the data rate advertised by its manufacturer. As noted above, it is important to also reduce this value to the data rate for a 20 MHz channel width. Below are the most common data rates and the estimated device throughput (half of the advertised rate). Given the multiple factors affecting performance, it is a good practice to reduce the throughput further by 30%.



                     Data rate (Mbps)   Estimated throughput (1/2 advertised)   Throughput w/ overhead
802.11a or 802.11g   54 Mbps            27 Mbps                                 ~19 Mbps
1 stream 802.11n     72 Mbps            36 Mbps                                 ~25 Mbps
2 stream 802.11n     144 Mbps           72 Mbps                                 ~50 Mbps
3 stream 802.11n     216 Mbps           108 Mbps                                ~76 Mbps
1 stream 802.11ac    87 Mbps            44 Mbps                                 ~31 Mbps
2 stream 802.11ac    173 Mbps           87 Mbps                                 ~61 Mbps
3 stream 802.11ac    289 Mbps           145 Mbps                                ~102 Mbps
1 stream 802.11ax    143 Mbps           72 Mbps                                 ~50 Mbps
2 stream 802.11ax    287 Mbps           144 Mbps                                ~101 Mbps
3 stream 802.11ax    430 Mbps           215 Mbps                                ~151 Mbps
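The rule of thumb behind the table can be sketched as a small helper. The halving and ~30% reduction come from the text above; the round-half-up behavior is an assumption chosen so the results line up with the table's values:

```python
def estimated_throughput(advertised_mbps):
    """Estimate usable client throughput from an advertised data rate.

    Roughly half the advertised rate, then ~30% off for protocol and
    real-world overhead. Integer round-half-up is an assumption made to
    match the table values.
    """
    half = (advertised_mbps + 1) // 2        # ~1/2 the advertised rate
    with_overhead = (half * 7 + 5) // 10     # minus a further ~30%
    return half, with_overhead

# 3-stream 802.11ac at 20 MHz: 289 Mbps advertised → (145, 102),
# i.e. the ~102 Mbps per AP used in the example that follows
ac3 = estimated_throughput(289)
```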

Estimate the Number of APs

It's important to document and review the requirements and assumptions and confirm they are reasonable. Changing one assumption will significantly impact the number of access points and the costs. If you assumed just 1.5 Mbps for HD video chat (as recommended by Microsoft Skype and Cisco Spark) you would need half the number of access points. If you assumed 5 Mbps was required for HD video streaming (as recommended by Netflix) you would need more access points. If you were designing to support 600 1 stream devices instead of 600 3 stream laptops, you would need roughly 3 times the number of access points. For this example, we now have the following requirements and assumptions:

  • Video streaming requires 3 Mbps for HD quality video

  • There will be 600 concurrent users streaming video to their laptop

  • Every user has an Apple MacBook Pro or similar

  • All laptops support 802.11ac and are capable of 3 spatial streams

  • The network will be configured to use 20 MHz channels

  • Each access point can provide up to 101 Mbps of wireless throughput

We can now calculate roughly how many APs are needed to satisfy the application capacity. Round to the nearest whole number.


 Number of Access Points based on throughput = (Aggregate Application Throughput) / (Device Throughput)

Number of Access Points based on throughput = 1800 Mbps / 101 Mbps = ~18 APs


In addition to the number of APs based on throughput, it is also important to calculate the number of APs based on client count. The first step is to estimate the clients per band. With newer technologies, more devices now support dual-band operation, and hence, using the band steering implementation noted above, devices can be steered to 5 GHz.

Note: A common design strategy is to do a 30/70 split between 2.4 GHz and 5 GHz     

For this example, we now have the following requirements and assumptions:

  • There will be 600 concurrent users streaming video to their laptop

  • Concurrent 2.4 GHz clients = 600 * 0.3 = 180

  • Concurrent 5 GHz clients = 600 * 0.7 = 420

We can now calculate roughly how many APs are needed to satisfy the client count. Round to the nearest whole number.


Number of Access Points based on client count = (Concurrent 5 GHz clients) / 25

Number of Access Points based on client count = 420 / 25 = ~17 APs


Now the number of APs required can be calculated by taking the higher of the two AP counts.

Number of Access Points = Max (Number of Access Points based on throughput, Number of Access Points based on client count)

Number of Access Points = Max (18, 17) = 18 APs
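Both estimates can be combined into one sketch. The defaults encode the assumptions used in this example (a 30/70 band split and ~25 clients per radio); rounding to the nearest whole number follows the text:

```python
def required_aps(app_mbps, users, ap_throughput_mbps,
                 pct_5ghz=0.7, clients_per_radio=25):
    """Max of the throughput-based and client-count-based AP estimates.

    Defaults encode the assumptions above: a 30/70 split between
    2.4 GHz and 5 GHz, and ~25 clients per radio in high density.
    """
    by_throughput = round(app_mbps * users / ap_throughput_mbps)  # 1800/101 ≈ 18
    by_clients = round(users * pct_5ghz / clients_per_radio)      # 420/25 ≈ 17
    return max(by_throughput, by_clients)

# This example: 3 Mbps per user, 600 users, ~101 Mbps usable per AP
aps = required_aps(3, 600, 101)
```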

Site Survey and Design

Performing an active wireless site survey is a critical component of successfully deploying a high-density wireless network and helps to evaluate the RF propagation in the actual physical environment. The active site survey also gives you the ability to actively transmit data and get data rate coverage in addition to the range.


In addition to verifying the RF propagation in the actual environment, it is also recommended to have a spectrum analysis done as part of the site survey in order to locate any potential sources of RF interference and take steps to remediate them. Site surveys and spectrum analysis are typically performed using professional grade toolkits such as Ekahau Site Survey or Fluke Networks Airmagnet. Ensure a minimum of 25 dB SNR throughout the desired coverage area. Remember to survey for adequate coverage on 5GHz channels, not just 2.4 GHz, to ensure there are no coverage holes or gaps. Depending on how big the space is and the number of access points deployed, there may be a need to selectively turn off some of the 2.4GHz radios on some of the access points to avoid excessive co-channel interference between all the access points.

Note: It is recommended to have complete coverage for both bands.

Note: Read our guide on Conducting Site Surveys with MR Access Points for more help on conducting an RF site survey.

Mounting Access Points

The two main strategies for mounting Cisco Meraki access points are ceiling mounted and wall mounted. Each mounting solution has advantages.



Ceiling mounted MR, Cisco San Francisco

Ceiling mounted access points are placed on a ceiling tile, T-bar, roof, or conduit extending down from the roof. This brings advantages such as a clear line-of-sight to the user devices below and flexibility in where to place the access point. Access points can be easily placed with even spacing in a grid and at the intersection of hallways. The disadvantage is that the ceiling height and the height of the access point could negatively impact coverage and capacity.

  • If access points have to be installed below 8 feet (~3 meters), indoor access points with integrated omni antennas or external dipole/can omni antennas are recommended.

  • If access points have to be installed between 8 - 25 feet (3 - 8 meters), indoor access points with external downtilt omni antennas are recommended.



Wall mounted MRs, Cisco San Francisco


When ceiling heights are too high (25+ feet) or it is not feasible to mount access points on the ceiling (hard ceiling), a wall mounted design is recommended. The access points are mounted on drywall, concrete, or even metal on the exterior and interior walls of the environment. Access points are typically deployed 10-15 feet (3-5 meters) above the floor, facing away from the wall. Remember to install with the LED facing down so it remains visible while standing on the floor. Designing a network with wall mounted omnidirectional APs should be done carefully, and only if using directional antennas is not an option.



Pole mounted MR66 with Sector antennas, Cisco San Francisco

Directional Antennas

If there is no mounting solution to install the access point below 26 feet (8 meters), where ceilings are replaced by the stars and the sky (outdoors), or if directional coverage is needed, it is recommended to use directional antennas. When selecting a directional antenna, you should compare the horizontal/vertical beamwidth and gain of the antenna.


When using directional antennas on a ceiling mounted access point, point the antenna straight down. When using directional antennas on a wall mounted access point, tilt the antenna at an angle to the ground. Tilting a wall mounted antenna further, to point straight down, will limit its range.


Cisco Meraki offers 6 types of indoor-rated external antennas (available for MR42E and MR53E):




Note: C/D/E/F series Meraki antennas are smart antennas that are automatically detected when connected to Meraki APs and don’t need additional configuration within the dashboard.


Cisco Meraki offers 4 types of outdoor external antennas and supports 5 types of outdoor antennas. Cisco Meraki has certified the antennas for use with the Meraki MR84, MR74, MR72, MR66, and MR62 access points. AIR-ANT2514-P4M can only be used with MR84: 




Using 3rd party antennas with gain higher than 11 dBi on 2.4 GHz or 13 dBi on 5 GHz may violate regulations in some countries. Meraki certifies only Meraki antennas.

Access Point Placement

Once the number of access points has been established, the physical placement of the APs can take place. A site survey should be performed not only to ensure adequate signal coverage in all areas, but also to assure proper spacing of APs on the floor plan with minimal co-channel interference and proper cell overlap. It’s very important to consider the RF environment and the construction materials used when placing APs.


Review the designs below from the Cisco Meraki San Francisco office. The 4th Floor was constructed to support Cisco's sales team, customer briefings, and a cafe. In contrast, the 3rd floor was constructed to support Cisco's 24x7 technical support, our small IT department, and Cisco's Collaboration group with applications such as Telepresence and Cisco Spark HD video chat. The density of the 3rd floor is double that of the 4th floor.


High density with 30 access points, Cisco San Francisco, 4th Floor




Ultra High density with 60 access points, Cisco San Francisco, 3rd Floor

SSID Configuration

Following the best practices in this section for configuring SSIDs, IP assignment, radio settings, and traffic shaping rules will provide a significant improvement in overall throughput.

Number of SSIDs 

The maximum recommended number of SSIDs is 3, and in a high-density environment this recommendation becomes a requirement. If needed, the number of SSIDs can be increased to 5, but this should be done only when necessary. Using more than 5 SSIDs creates substantial airtime overhead from management frames, consuming 20% or more of the available bandwidth and limiting the maximum throughput to less than 80% of the planned capacity. Create a separate SSID for each type of authentication required (Splash, PSK, EAP) and consolidate any SSIDs that use the same type of authentication.

Adding several SSIDs has a negative impact on capacity and performance. See the article Multi-SSID Deployment Considerations for more detail.  
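The overhead claim above can be sanity-checked with simple airtime arithmetic. The sketch below is illustrative only: the beacon size, preamble time, and 10 beacons/second/SSID rate are common ballpark assumptions, not measured Meraki values.

```python
def beacon_airtime_fraction(num_ssids, num_aps_on_channel,
                            beacon_bytes=300, rate_mbps=1.0,
                            beacons_per_sec=10, preamble_us=192):
    """Fraction of each second consumed by beacons alone on one channel."""
    # Airtime for one beacon: long preamble plus the frame body at the
    # lowest basic rate (legacy 1 Mbps shown here).
    per_beacon_us = preamble_us + (beacon_bytes * 8) / rate_mbps
    total_us = per_beacon_us * beacons_per_sec * num_ssids * num_aps_on_channel
    return total_us / 1_000_000
```

With these assumptions, 5 SSIDs across 4 co-channel APs at legacy rates put over half the airtime into beacons, while 3 SSIDs on a single AP stay under 8%; raising the minimum bitrate shrinks the overhead further.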

Enable Bridge Mode

Bridge mode is recommended to improve roaming for voice over IP clients with seamless Layer 2 roaming. In bridge mode, the Meraki APs act as bridges, allowing wireless clients to obtain their IP addresses from an upstream DHCP server. Bridge mode works well in most circumstances and provides seamless roaming with the fastest transitions. When using bridge mode, all APs in the intended area (usually a floor or a set of APs in an RF profile) should support the same VLAN to allow devices to roam seamlessly between access points.


For seamless roaming in bridge mode, the wired network should be designed to provide a single wireless VLAN across a floor plan. If the network requires a user to roam between different subnets, using L3 roaming is recommended. Bridge mode will require a DHCP request when roaming between two subnets or VLANs. During this time, real-time video and voice calls will noticeably drop or pause, providing a degraded user experience.

NAT mode is not recommended for Voice over IP: with NAT mode enabled, devices request a new DHCP IP address on each roam, causing the connection to break when moving from AP to AP. Applications requiring continuous traffic streams, such as VoIP, VPN, or media streams, will be disrupted when roaming between APs. 

Layer 3 Roaming

Large wireless networks that need roaming across multiple VLANs may require layer 3 roaming to enable application and session persistence while a mobile client roams. With layer 3 roaming enabled, a client device will have a consistent IP address and subnet scope as it roams across multiple APs on different VLANs/subnets.


Cisco Meraki's Layer 3 roaming is a distributed, scalable way for Access Points to establish connections with each other without the need for a controller or concentrator. The first access point that a device connects to will become the anchor Access Point. The anchor access point informs all of the other Cisco Meraki access points within the network that it is the anchor for a particular client. Every subsequent roam to another access point will place the device/user on the VLAN defined by the anchor AP. This is ideal for high-density environments that require Layer 3 roaming, and there is no throughput limitation on the network.

The MR continues to support layer 3 roaming to a concentrator, which requires an MX security appliance or a VM concentrator to act as the mobility concentrator. Clients are tunneled to a specified VLAN at the concentrator, and all data traffic on that VLAN is then routed from the MR to the MX. The concentrator creates a choke point, and in a high-density environment the number of clients may be limited by the throughput of the MX concentrator.

Radio Settings & Auto RF

Cisco Meraki access points feature a third radio dedicated to continuously and automatically monitoring the surrounding RF environment to maximize Wi-Fi performance even in the highest density deployment. By measuring channel utilization, signal strength, throughput, signals from non-Meraki APs, and non-WiFi interference, Cisco Meraki APs automatically optimize the radio transmit power and selected operating channels of individual APs to maximize system-wide capacity.


Additionally, it is recommended to use RF profiles to better tune the wireless network to the performance requirements. A separate RF profile should be created for each area that needs a unique set of RF settings. The following details can be set in RF profiles:

Band Selection

If client devices require 2.4 GHz, enable 'Dual-band with band steering' so that clients can use both the 2.4 GHz and 5 GHz bands; devices will be steered toward the 5 GHz band. For more details refer to the Band Steering Overview article. If 2.4 GHz support is not needed, it is recommended to use “5 GHz band only”. Testing should be performed in all areas of the environment to ensure there are no coverage holes.



Set Minimum Bitrate

Using RF profiles, the minimum bitrate can be set on a per-band or per-SSID basis. For high-density networks, it is recommended to set minimum bitrates per band. If legacy 802.11b devices need to be supported on the wireless network, 11 Mbps is recommended as the minimum bitrate on 2.4 GHz. Adjusting the bitrates can reduce the overhead on the wireless network and improve roaming performance, but increasing this value requires proper coverage and RF planning. An administrator can improve the performance of clients on the 2.4 GHz and 5 GHz bands by disabling lower bitrates. Management frames are sent out at the lowest selected rate, and clients must use either the lowest selected rate or a faster one. Selecting a minimum bitrate of 12 Mbps or greater will prevent 802.11b clients from joining and will increase the efficiency of the RF environment by sending broadcast frames at a higher bitrate.

Note: As per standards, 6 Mbps, 12 Mbps and 24 Mbps are the mandatory data rates. Cisco's San Francisco office uses 18 Mbps as the Minimum bitrate.
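The effect of the minimum bitrate on 802.11b clients can be illustrated with the standard rate sets. This is a simple sketch; the rate lists are the standard 802.11b/g values, not a Meraki configuration object.

```python
# 802.11b (DSSS/CCK) and 802.11a/g (OFDM) data rates in Mbps
DSSS_RATES = [1, 2, 5.5, 11]
OFDM_RATES = [6, 9, 12, 18, 24, 36, 48, 54]

def allowed_rates(min_bitrate_mbps):
    """Rates that remain enabled once a minimum bitrate is set."""
    return [r for r in sorted(DSSS_RATES + OFDM_RATES) if r >= min_bitrate_mbps]

def blocks_80211b(min_bitrate_mbps):
    """True when no DSSS rate survives, i.e. 802.11b clients cannot join."""
    return not any(r in DSSS_RATES for r in allowed_rates(min_bitrate_mbps))
```

Setting the minimum to 12 Mbps removes every DSSS rate (so 802.11b clients cannot associate), while 11 Mbps still leaves one available.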




Auto Power Reduction

Every second, each access point's radios sample the signal-to-noise ratio (SNR) of neighboring access points. The SNR readings are compiled into neighbor reports which are sent to the Meraki cloud for processing. The cloud aggregates neighbor reports from each AP. Using the aggregated data, the cloud can determine each AP's direct neighbors and by how much each AP should adjust its radio transmit power so that coverage cells are optimized. When determining changes in TX power, the cloud tries to ensure that at least 3 APs are heard by each AP in the area. The calculations are done every 20 minutes and, once complete, the cloud instructs each AP to decrease or increase its transmit power. TX power can be reduced by 1-3 dB per iteration and is increased in 1 dB iterations.


Auto RF tries to reduce the TX power uniformly for all APs within a network, but in a complex high-density network it may be necessary to limit the range of values the APs can use. To better support complex environments, minimum and maximum TX power settings can be configured in RF profiles.


Note: For 2.4 GHz, the auto power reduction algorithm allows TX power to go down only as far as 5 dBm. For 5 GHz, the algorithm allows TX power to go down only as far as 8 dBm. If lower TX power is needed, APs can be statically set to lower power.
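A toy model of one auto power iteration, under the constraints described above (1-3 dB decrease, 1 dB increase, per-band floors, RF profile min/max), might look like the following. The real cloud algorithm weighs SNR data in ways not described here; this sketch only shows the stepping and clamping behavior.

```python
def next_tx_power(current_dbm, neighbors_heard, band,
                  profile_min_dbm=0, profile_max_dbm=30):
    """One simplified auto-power iteration for a single AP."""
    auto_floor = 5 if band == "2.4GHz" else 8   # per-band reduction floors
    lo = max(auto_floor, profile_min_dbm)        # RF profile can only raise the floor
    if neighbors_heard > 3:
        proposed = current_dbm - 3               # enough neighbors: back off up to 3 dB
    elif neighbors_heard < 3:
        proposed = current_dbm + 1               # too few neighbors: step up 1 dB
    else:
        proposed = current_dbm
    return max(lo, min(profile_max_dbm, proposed))
```

For example, an AP at 6 dBm on 2.4 GHz with many neighbors is clamped to the 5 dBm floor rather than dropping the full 3 dB.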


Auto Channel selection

Adding additional access points on the same channel with overlapping coverage does not increase capacity. To prevent nearby access points from sharing the same channel, Cisco Meraki access points automatically adjust the channels of their radios to avoid RF interference (both 802.11 and non-802.11) and develop a channel plan for the wireless network. Channels can be selectively assigned to each RF profile. By using channels selectively, network administrators can control co-channel interference more effectively.




Default Channel Width
Cisco Meraki provides the ability to configure the MR series access points using either 20-MHz (VHT20), 40-MHz (VHT40) or 80-MHz (VHT80) channels on the 5GHz band. When deploying within a high-density environment, it is recommended that access points be configured using 20-MHz (VHT20) channel widths for the following reasons:
  • In moving to 40-MHz or 80-MHz channels, you are effectively halving (40-MHz) or quartering (80-MHz) the number of non-overlapping 5 GHz channels, because channel bonding doubles or quadruples the channel width. This, in turn, increases the distance at which access points must be placed if co-channel interference (CCI) and adjacent channel interference (ACI) are to be kept to a minimum.

  • While using 40-MHz or 80-MHz channels might seem like an attractive way to increase overall throughput, one of the consequences is reduced spectral efficiency: legacy (20-MHz only) clients cannot take advantage of the wider channel width, resulting in idle spectrum on wider channels. Depending on the RF environment, even clients capable of 40 and 80 MHz may only use the 20 MHz base channel; this is often observed in highly contentious RF environments.

  • Due to the mix of clients usually seen in high-density deployments (laptops, mobile phones, tablets, etc.), the capabilities of clients in such environments also vary (some will support 20-MHz, some 40-MHz, and some 80-MHz channels). Because of this, it is better to have each client communicating at the lowest common channel width, giving each client equal access to the network. It is better to have 4 clients communicating at 20-MHz with 4 access points than 4 clients of mixed capability communicating with 1 access point at 80-MHz, which results in idle spectrum.
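The halving/quartering effect is simple arithmetic, sketched below with the 19-channel US figure used in the DFS discussion in this document; the exact base count depends on the regulatory domain.

```python
def non_overlapping_channels(width_mhz, base_channels=19):
    """Channel bonding divides the pool of non-overlapping 5 GHz channels.

    base_channels: number of 20-MHz channels available (19 in the US with
    DFS enabled, per this document; adjust per regulatory domain).
    """
    assert width_mhz in (20, 40, 80), "only 5 GHz widths discussed here"
    return base_channels // (width_mhz // 20)
```

With 19 base channels, moving to 40-MHz leaves 9 non-overlapping channels and 80-MHz leaves only 4, which is why 20-MHz is recommended for high density.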

DFS Channels and Channel Reuse
The UNII-2/UNII-2e band has additional channels that can be used for WLAN but that overlap with radar applications; these are commonly referred to as DFS channels. Cisco Meraki APs support 802.11h, which provides two key features: Dynamic Frequency Selection (DFS) and Transmit Power Control (TPC). By using these features, customers can use the additional DFS channels, bringing the total available 5 GHz channels to 19. Using all 19 channels increases the channel reuse distance, reducing CCI.
Note: Channel reuse is the process of using the same channel on APs within a geographic area that are separated by sufficient distance to cause minimal interference with each other.

For an example deployment with DFS channels enabled where channel reuse is not required, the grid below shows 12 access points with no channel reuse. As there are 19 channels in the US, once you reach 20 access points in the same space, the APs will need to reuse a channel.





For a deployment example where DFS is disabled and channel reuse is required, the diagram below shows 4 channels being reused in the same space. When channel reuse cannot be avoided, the best practice is to separate access points on the same channel as much as possible.




Using RX-SOP, the receive sensitivity of the AP can be controlled. The higher the RX-SOP level, the less sensitive the radio is and the smaller the receive cell size will be. The reduction in cell size ensures that clients connect to the nearest access point using the highest possible data rates. In a high-density environment, the smaller the cell size, the better. This should be used with caution, however, as setting it too high can create coverage holes. It is best to test and validate a site with varying types of clients prior to implementing RX-SOP in production.
The table below gives the recommended values for RX-SOP in high density deployments:

802.11 Band    High Threshold    Medium Threshold    Low Threshold
5 GHz          -76 dBm           -78 dBm             -80 dBm
2.4 GHz        -79 dBm           -82 dBm             -85 dBm

Note: RX-SOP is supported on 802.11ac Wave 2 and 802.11ax APs, i.e. MR30H/33/42/52/53/74/84/42E/53E/45/55
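The table above can be expressed as a simple lookup; the `frame_accepted` helper below is an illustration of the threshold behavior, not Meraki firmware logic.

```python
# Recommended RX-SOP thresholds (dBm) from the table above
RX_SOP_DBM = {
    "5GHz":   {"high": -76, "medium": -78, "low": -80},
    "2.4GHz": {"high": -79, "medium": -82, "low": -85},
}

def frame_accepted(rssi_dbm, band, threshold="medium"):
    """A frame arriving below the RX-SOP threshold is ignored by the radio,
    which effectively shrinks the receive cell."""
    return rssi_dbm >= RX_SOP_DBM[band][threshold]
```

For example, a client heard at -77 dBm on 5 GHz is inside the cell at the medium threshold but outside it at the high threshold.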

Client Balancing
Client balancing is recommended for high-density applications, as the feature tries to balance the number of users across APs. The feature is available starting with MR 25 firmware and is enabled by default.
Roaming in High Density



Client roaming between access points

Enable Fast Roaming

Cisco Meraki MR access points support a wide array of fast roaming technologies.  For a high-density network, roaming will occur more often, and fast roaming is important to reduce the latency of applications while roaming between access points. All of these features are enabled by default, except for 802.11r. 

  • 802.11r (Fast BSS Transition) - 802.11r allows encryption keys to be stored on all of the APs in a network. This way, a client doesn't need to perform the full re-authentication process to a RADIUS server every time it roams to a new access point within the network. This feature can be enabled from the Configure > Access control page under Network access > 802.11r. If this option does not appear, a firmware update may be required.
  • Opportunistic Key Caching (OKC) - 802.11r and OKC accomplish the same goal of reducing roaming time for clients, the key difference being that 802.11r is standard while OKC is proprietary. Client support for both of these protocols will vary but generally, most mobile phones will offer support for both 802.11r and OKC. 
  • 802.11i (PMKID caching) - PMK caching, defined by IEEE 802.11i, is used to increase roaming performance with 802.1X by eliminating the RADIUS exchange that would otherwise occur. From a high-level perspective, the client sends a PMKID to the AP, which has that PMKID stored. If it's a match, the AP knows that the client has previously been through 802.1X authentication and may skip that exchange.
  • 802.11k (Neighbor BSS) - 802.11k reduces the time required to roam by allowing the client to more quickly determine which AP it should roam to next and how. The AP the client is currently connected to provides it with information about neighboring APs and their channels.

Traffic Shaping

Set Bandwidth Limits 

Consider placing a per-client bandwidth limit on all network traffic. Prioritizing applications such as voice and video will have a greater impact if all other applications are limited. For more details, refer to the article Configuring Bandwidth Limitations and Enabling Speed Burst on Wireless Networks. 5 Mbps is a good per-client bandwidth limit for a high-density environment. You can override this limit for specific devices and applications.

Note: this does not limit the wireless data rate of the client, but the actual bandwidth, as the traffic is bridged to the wired infrastructure.

  1. Go to Wireless > Configure > Firewall & traffic shaping and choose the SSID from the SSID drop-down menu at the top of the screen.
  2. Set a 'Per-client bandwidth limit' to 5 Mbps with 'Speed Burst'. This will apply to all non-voice application traffic. This step in the guide is optional. 
  3. Set a 'Per-SSID bandwidth limit' to unlimited.
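The same change can be scripted against the Meraki Dashboard API. The sketch below is a hedged example: the endpoint path and the `perClientBandwidthLimit*`/`perSsidBandwidthLimit*` field names reflect the author's understanding of Dashboard API v1 (values in Kbps, with 0 meaning unlimited) and should be verified against the current API reference before use.

```python
import json
import urllib.request

BASE = "https://api.meraki.com/api/v1"

def ssid_bandwidth_payload(limit_mbps=5):
    """Per-client limit with an unlimited per-SSID cap (Kbps; 0 = unlimited)."""
    kbps = limit_mbps * 1000
    return {
        "perClientBandwidthLimitDown": kbps,
        "perClientBandwidthLimitUp": kbps,
        "perSsidBandwidthLimitDown": 0,
        "perSsidBandwidthLimitUp": 0,
    }

def set_ssid_bandwidth(api_key, network_id, ssid_number, limit_mbps=5):
    """PUT the bandwidth settings to an SSID (assumed v1 endpoint shape)."""
    url = f"{BASE}/networks/{network_id}/wireless/ssids/{ssid_number}"
    req = urllib.request.Request(
        url,
        data=json.dumps(ssid_bandwidth_payload(limit_mbps)).encode(),
        headers={"X-Cisco-Meraki-API-Key": api_key,
                 "Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Note that the API expresses limits in Kbps, so the dashboard's 5 Mbps recommendation becomes 5000 in the payload.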




Speed Burst allows bursts of up to four times the allotted bandwidth limit for five seconds.

Define Traffic Shaping Rules

Use traffic shaping to offer application traffic the necessary bandwidth. It is important to ensure that the application has enough bandwidth, as estimated in the capacity planning section. Traffic shaping rules can be implemented to allow real-time voice and video traffic to use additional bandwidth, and rules can also be used to block or throttle applications such as P2P and social networks.

  1. Go to Wireless > Configure > Firewall & traffic shaping and choose the SSID from the SSID drop-down menu at the top of the screen.
  2. Click the drop down menu next to Shape traffic and choose Shape traffic on this SSID, then click Create a new rule.
  3. Click Add + and select 'All voice & video conferencing'.
  4. Set Per-client bandwidth limit to 'Ignore SSID per-client limit (unlimited)' and click Save changes.



Convert Multicast to Unicast

Cisco Meraki APs automatically perform a multicast-to-unicast packet conversion using the IGMP protocol. The unicast frames are then sent at the client negotiated data rates rather than the minimum mandatory data rates, ensuring high-quality video transmission to large numbers of clients. This can be especially valuable in instances such as classrooms, where multiple students may be watching a high-definition video as part of a classroom learning experience.

Limit Broadcasts 
Cisco Meraki APs automatically limit duplicate broadcasts, protecting the network from broadcast storms. The MR access point limits the number of broadcasts to prevent them from taking up airtime. This also improves battery life on mobile devices by reducing the amount of traffic they must process.
Airtime Fairness 
Cisco Meraki APs have airtime fairness turned on by default, which ensures that co-existing clients connected to a single AP have equal access to the airtime in the AP's coverage area.


Wireless Layer 3 Roaming Best Practices

Large WLAN networks (for example, those found on large campuses) may require IP session roaming at layer 3 to enable application and session persistence while a mobile client roams across multiple VLANs. For example, when a user on a VoIP call roams between APs on different VLANs without layer 3 roaming, the user's session will be interrupted as the external server must re-establish communication with the client's new IP address. During this time, a VoIP call will noticeably drop for several seconds, providing a degraded user experience. In smaller networks, it may be possible to configure a flat network by placing all APs on the same VLAN.


However, on large networks filled with thousands of devices, configuring a flat architecture with a single native VLAN may be an undesirable network topology from a best practices perspective; it may also be challenging to configure legacy setups to conform to this architecture. A turnkey solution designed to enable seamless roaming across VLANs is therefore highly desirable when configuring a complex campus topology. Using Meraki's secure auto-tunneling technology, layer 3 roaming can be enabled using a mobility concentrator, allowing for bridging across multiple VLANs in a seamless and scalable fashion.

Typical Campus Architecture 

Large campuses are often designed with a multi-VLAN architecture to segment broadcast traffic. Typically, network best practices dictate a one-to-one mapping of an IP subnet to a VLAN, e.g., client devices joining VLAN 10 will be assigned an IP address out of that VLAN's subnet range. In this design, clients in different VLANs will receive IP addresses in different subnets via a DHCP server. Multi-VLAN architectures can vary to include multiple subnets within a building (e.g., one for each floor/area), or multiple subnets across a large site (e.g., one for each building/region in a large campus or enterprise environment).


As seen in the diagram below, the typical campus architecture has the core L3 switch connected to multiple L3 distribution switches (one per site), with each distribution switch then branching off to L2 access switches configured on different VLANs. In this fashion, each site is assigned a different VLAN to segregate traffic from different sites. Without an L3 roaming service, a client connected to an L2 access switch at Site A will not be able to seamlessly roam to a L2 access switch connected to Site B. Upon associating with an AP on Site B, the client would obtain a new IP address from the DHCP service running on the Site B scope. In addition, a particular route configuration or router NAT may also prevent clients from roaming, even if they do retain their original IP address.



With layer 3 roaming, a client device maintains a consistent IP address and subnet scope as it roams across multiple APs on different VLANs/subnets. Meraki's auto-tunneling technology achieves this by creating a persistent tunnel between the L3 enabled APs and, depending on the architecture, a mobility concentrator. The two layer 3 roaming architectures are discussed in detail below.  

Distributed Layer 3 Roaming

Distributed layer 3 roaming maintains layer 3 connections for end devices as they roam across layer 3 boundaries without a concentrator. The first access point that a device connects to will become the anchor access point. The anchor access point informs all of the other Meraki access points within the network that it is the anchor for a particular client. Every subsequent roam to another access point will place the device/user on the VLAN defined by the anchor AP.


Distributed layer 3 roaming is very scalable because the access points establish connections with each other without the need for a concentrator. The target access point looks up the client in the shared user database and contacts the anchor access point. This communication does not traverse the Meraki cloud and uses a proprietary protocol for secure access point to access point communication over UDP port 9358.



As shown in the diagram above, the anchor AP is the AP to which the client first connects. The AP to which the client is currently associated is called the hosting AP; when the hosting AP does not have access to the client's broadcast domain, it creates a tunnel to the anchor AP to maintain the client's IP address.

If the hosting AP has direct access to the broadcast domain of the client, the hosting AP becomes the anchor AP for that client.

A client's anchor AP will time out after the client has left the network for 30 seconds.

Broadcast Domain Mapping

Each Meraki access point sends layer 2 broadcast probes over its Ethernet uplink to discover broadcast domain boundaries on each VLAN that a client could be associated with when connected. This is done for multiple reasons. One reason is that there can be instances where AP1 is connected to an access port (no VLAN tag) and AP2 is connected to a trunk port where the same VLAN is used, but the VLAN ID is present and tagged on the uplink. These broadcast frames are of EtherType 0x0a89 and are sent every 150 seconds.


There could also be situations where the same VLAN ID is used in different buildings (representing different broadcast domains), so it’s important to ensure exactly which APs and VLAN IDs can be found on which broadcast domains. Apart from tunnel load balancing and resiliency, the broadcast domain mapping and discovery process also allows for anchor APs and hosting APs to have a real-time view into which VLANs are shared between the two APs. This allows for efficient decision making when it comes to layer 2 vs layer 3 roaming for a client, as described in the “VLAN Testing and Dynamic Configuration” section below.  This is important so that anchor APs for clients can be dynamically switched for load balancing reasons, or in failover situations where the original anchor AP is no longer available. 
Meraki APs will send out probes to discover the following broadcast domains:

  • The AP’s native VLAN
  • Any VLAN that is configured for the SSID on the AP
  • Any VLAN that is dynamically learned via a client policy 
  • Any VLAN that an AP has recently received a broadcast probe on from another Meraki AP in Dashboard network

The power of broadcast domain mapping is that it discovers broadcast domains agnostic of the VLAN IDs configured on an AP. As a result of this methodology, each AP on a broadcast domain will eventually gather exactly the AP/VLAN ID pairs that currently constitute the domain. Whenever a client connects to another SSID, the anchor AP for that client is updated.

Broadcast Domain Discovery

The following steps establish the AP/VLAN ID (VID) pairs that correspond to a broadcast domain:

  1. APs periodically broadcast a BCD announcement packet that contains the AP’s VLAN ID for that broadcast domain, giving a {sender AP,VID} pair on each broadcast domain the AP interacts with.
  2. APs create equivalence classes based on AP/VID pairs recently observed in BCD announcement packets on the same broadcast domain.

Additional notes:

  • Each AP on a broadcast domain will eventually gather exactly the AP/VID pairs that currently constitute the domain.
  • In principle, any AP/VID pair can be used to refer to a broadcast domain. Given AP1/VID1, as long as you know the full list of pairs for that broadcast domain, you can tell whether some other AP2/VID2 refers to the same domain or not.


An AP could theoretically broadcast BCD announcement packets on all 4095 potentially attached VLANs; however, it limits itself to the VLANs outlined above.
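The equivalence-class step above can be sketched as a small union-find over AP/VID pairs. This is an illustrative model of the merging logic, not Meraki's implementation.

```python
from collections import defaultdict

def broadcast_domains(observations):
    """observations: iterable of sets of (ap, vid) pairs whose BCD
    announcements were heard on the same wire. Returns the merged
    equivalence classes, one per broadcast domain."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for group in observations:
        group = list(group)
        for pair in group:
            find(pair)                       # register every observed pair
        for pair in group[1:]:
            union(group[0], pair)            # pairs heard together share a domain

    classes = defaultdict(set)
    for pair in parent:
        classes[find(pair)].add(pair)
    return list(classes.values())
```

Note how an untagged access port (VID 0 here) and a tagged trunk port end up in the same class once their announcements are seen on the same wire, matching the AP1/AP2 example earlier.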

Roaming with Broadcast Domains

The Meraki MRs leverage a distributed client database to allow for efficient storage of clients seen in the network and to easily scale for large networks where thousands of clients may be connecting. The client distributed database is accessed by APs in real-time to determine if a connecting client has been seen previously elsewhere in the network. This requires that the APs in the Meraki network have layer 3 IP connectivity with one another, communicating over UDP port 9358. Leveraging the Meraki Dashboard, the APs are able to dynamically learn about the other APs in the network (including those located on different management VLANs) to know whom they should communicate with to look up clients in the distributed client database. 

The following process describes how client roaming operates with distributed layer 3 roaming:

  1. Anchor APs have a full set of AP/VLAN ID pairs for each attached broadcast domain as described above.

  2. On client association, the hosting AP retrieves the client data from the distributed store.

    • If the hosting AP does not find an entry in the store:

      • The hosting AP then becomes the anchor AP for the client. It stores the client in the distributed database, adding a candidate anchor AP set. The candidate anchor set consists of the AP’s own AP/VLAN ID pair plus two randomly chosen pairs from the same anchor broadcast domain.

    • If the hosting AP does find an entry in the store:

      • It checks to see if the client’s VLAN is available locally, from the previous broadcast domain discovery process outlined above. If the associated VLAN ID is available, the hosting AP will become the anchor AP and the VLAN for that client will dynamically be provisioned for the client. See the section “VLAN Testing and Dynamic Configuration” below.

      • Otherwise, the hosting AP sets up an anchor AP for the client (picking a random pair from the candidate anchor set).

  3. As long as the hosting AP continues to host the client, it periodically receives updates to the candidate anchor set from the anchor AP. The anchor AP replaces any AP/VLAN ID pair in the candidate anchor set that disappears with another randomly chosen AP/VLAN ID pair for that broadcast domain. The hosting AP updates the distributed store's client entry with changes to the candidate anchor set.
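The association steps above can be condensed into a short sketch. The `AP` type and the dictionary standing in for the distributed client database are illustrative assumptions, not Meraki data structures.

```python
import random
from dataclasses import dataclass

@dataclass
class AP:
    name: str
    default_vlan: int     # VLAN this AP would place a new client on
    local_vlans: set      # broadcast domains discovered on its uplink

def handle_association(client, hosting_ap, store):
    """store maps client -> {"vlan": vid, "candidates": [ap names]},
    standing in for the distributed client database."""
    entry = store.get(client)
    if entry is None:
        # No prior entry: the hosting AP becomes the anchor AP.
        store[client] = {"vlan": hosting_ap.default_vlan,
                         "candidates": [hosting_ap.name]}
        return ("anchor", hosting_ap.name)
    if entry["vlan"] in hosting_ap.local_vlans:
        # Client's VLAN is locally available: layer 2 roam, no tunnel.
        return ("anchor", hosting_ap.name)
    # Otherwise tunnel to a randomly chosen candidate anchor.
    return ("tunnel", random.choice(entry["candidates"]))
```

A client first seen on an AP serving VLAN 15 stays anchored there; roaming to an AP without VLAN 15 produces a tunnel, while roaming to an AP that does have VLAN 15 locally makes that AP the new anchor.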

VLAN Testing and Dynamic Configuration

The anchor access point runs a test to the target access point to determine whether there is a shared layer 2 broadcast domain for every client-serving VLAN. If there is a VLAN match on both access points, the target access point will configure the device for the VLAN without establishing a tunnel to the anchor. This test dynamically configures the VLAN for the roaming device regardless of the VLAN configured for the target access point and the clients it serves. If the VLAN is not found on the target AP, either because it is pruned on the upstream switchport or because the access point is in a completely separate layer 3 network, the tunneling method described below is used.

Local VLAN testing and dynamic configuration is one method used to prevent all clients from tunneling to a single anchor AP. To prevent excess tunneling, the layer 3 roaming algorithm determines whether it can place the user on the same VLAN that the client was using on the anchor AP. In that case the client performs a layer 2 roam, as it would in bridge mode.


If necessary, the target access point will establish a tunnel to the anchor access point. Tunnels are established using Meraki-proprietary access point to access point communication. To load balance multiple tunnels amongst multiple APs, the tunneling selector chooses a random AP that has access to the original broadcast domain the client is roaming from. If the target AP detects a connectivity failure to the currently selected anchor AP, it will choose a new anchor AP as a failover mechanism. The hosting AP pings the anchor AP every second to ensure that the anchor AP has not failed; this ping is integrated as part of the L3 communication on UDP port 9358.
All APs must be able to communicate with each other via IP. This is required both for client data tunneling and for the distributed database. If a target access point is unable to communicate with the anchor access point, the layer 3 roam will time out and the end device will be required to DHCP on the new VLAN. Data packets are not encrypted between two anchor APs on the wired side, but control and management frames are encrypted.

Fast roaming protocols such as OKC and 802.11r are not currently supported with distributed layer 3 roaming. The best roaming performance will be achieved using layer 2 roaming with 802.11r.

Design Example

Let’s walk through an example of the distributed layer 3 roaming architecture from start to finish. In this example network, we’ll use the following configuration:

  1. 5x MRs on management VLAN 10 tagged with ‘Group A’

  2. 5x MRs on management VLAN 20 tagged with ‘Group B’

  3. SSID: Corporate

  4. Client IP Assignment: layer 3 roaming

  5. VLAN ID:

    • Group A: VLAN 15

    • Group B: VLAN 25

We will assume that all 10 APs are online, connected to Dashboard, and have IP connectivity with one another.


Client A associates with a ‘Group A’ AP on management VLAN 10, and receives an IP address in VLAN 15 as expected. This AP becomes the anchor AP & hosting AP for the client. The APs in the Meraki network have built out the broadcast domain mapping pairs (AP/VLAN ID) and are exchanging periodic updates.

Client A then roams to a ‘Group B’ AP on management VLAN 20, client VLAN 25. The ‘Group B’ AP is now considered the hosting AP and reads the distributed client database to see if the client has connected previously. It finds an entry for the client and checks locally to see if the client’s broadcast domain is available on the switchport. The broadcast domain is not available, and the hosting AP will now pick an anchor AP out of the candidate anchor set (supplied from the distributed client database check), which will be any AP that has advertised itself to the distributed client database as having access to client VLAN 15. Once the anchor AP is selected, along with two candidate anchors for resiliency, the tunnel is established and the hosting AP updates the distributed client database with this information.


The hosting AP will periodically refresh the anchor AP and distributed database. The anchor AP’s entry for a client has an expiration time of 30 seconds. If the client disconnects from the network for 45 seconds, as an example, it may connect back to a new anchor AP on the same broadcast domain associated with the client. The distributed database expiration timer for a client is the DHCP lease time. This effectively determines how long a client’s broadcast domain binding is remembered in the distributed database. If a client disconnects from the network, and then reconnects before the DHCP lease time has expired, then the client will still be bound to its original broadcast domain.


In another scenario, let’s imagine a large enterprise campus with 10 floors. Following common enterprise campus design, the customer has segmented one VLAN per floor for the users. To accommodate for client mobility and seamless roaming throughout the campus building, the customer wishes to leverage distributed layer 3 roaming. Using AP tags, the configuration will specify a VLAN ID assignment for a given SSID based on the tag. In this case, the following configuration will be used:

  • SSID: Corporate

  • Client IP Assignment: layer 3 Roaming

  • VLAN ID:

    • Floor 1 - VLAN 11

    • Floor 2 - VLAN 12

    • Floor 3 - VLAN 13

    • Floor 4 - VLAN 14

    • Floor 5 - VLAN 15

    • Floor 6 - VLAN 16

    • Floor 7 - VLAN 17

    • Floor 8 - VLAN 18

    • Floor 9 - VLAN 19

    • Floor 10 - VLAN 20

The switchports which the MRs will be connecting to will be configured as trunk ports. Switches on floors 1-5 will allow VLANs 11,12,13,14,15. Switches on floors 6-10 will allow VLANs 16,17,18,19,20. With this configuration, a user who associates on floor 1 will receive an IP address on VLAN 11. As they roam throughout the building, changing floors, the roams will be layer 2 only with no tunneling required.

Only when the client roams to the upper half of the building (or vice versa) will a tunnel be formed to keep the client in its original broadcast domain. Keep in mind that even if the client originally received IP addressing on VLAN 11, since APs on floor 5 have access to that broadcast domain (discovered via the broadcast domain mapping and discovery mechanism), the client will maintain its VLAN 11 IP addressing information and will simply use the AP on floor 5 as its new anchor.

This type of design allows for maximum flexibility by allowing for traditional layer 2 roams for users who spend the majority of their time in a specific section of the building, and allowing for continued seamless roaming for the most mobile clients.
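Under the floor-based design above, the per-AP switchport configuration is mechanical enough to script. The sketch below computes the allowed-VLAN list per floor; the commented Dashboard API call uses the official Python `meraki` library, with the serial and port number as placeholders:

```python
def allowed_vlans_for_floor(floor):
    """Trunk allowed-VLAN list for an AP switchport, mirroring the
    design above: floors 1-5 share VLANs 11-15, floors 6-10 share
    VLANs 16-20."""
    if not 1 <= floor <= 10:
        raise ValueError("floor must be 1-10")
    lo = 11 if floor <= 5 else 16
    return ",".join(str(v) for v in range(lo, lo + 5))

# The string plugs into a Dashboard API switchport update, e.g. with
# the official 'meraki' Python library (serial and port number below
# are placeholders, not real values):
#
#   dashboard.switch.updateDeviceSwitchPort(
#       "QQQQ-WWWW-EEEE", "7",
#       type="trunk", allowedVlans=allowed_vlans_for_floor(3))
```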

Repeaters don’t have their own IP address, so they cannot be anchor APs. When a client connects to a repeater, the repeater becomes the client’s hosting AP, and the repeater assigns its gateway as the client’s anchor AP.

Concentrator-Based Layer 3 Roaming

Any client that is connected to a layer 3 roaming enabled SSID is automatically bridged to the Meraki Mobility Concentrator. The Mobility Concentrator acts as a focal point to which all client traffic will be tunneled and anchored when the client moves between VLANs. In this fashion, any communication directed towards a client by third-party clients or servers will appear to originate at this central anchor. Any Meraki MX can act as a concentrator; please refer to the MX sizing guides to determine the appropriate MX appliance for the expected users and traffic.



The diagram below shows the traffic flow for a particular flow within a campus environment using concentrator-based layer 3 roaming.






Wireless VoIP QoS Best Practices

This article is a guide to optimizing Quality of Service (QoS) for wireless Voice over IP applications on Meraki MR wireless access points. Voice over IP (VoIP) has replaced traditional telephones in enterprise networks with IP-based phones. While the majority of VoIP desk phones require Ethernet, many voice applications and wireless VoIP phones operate over WiFi.

The Meraki MR series of WiFi access points has been tested by Cisco Meraki to provide the highest quality VoIP experience when using Cisco Jabber, Microsoft Lync, Microsoft Skype for Business, Broadsoft, Cisco 7900 Series phones, SpectraLINK phones, Ascom phones, and Apple iPhones. This guide provides recommendations for optimizing voice quality, followed by product-specific recommendations.

Measuring Voice Quality 

By following this guide, you can significantly improve quality of service for the wireless voice applications and reduce or eliminate dropped calls, choppy speech, fuzzy speech, buzzing, echoing, long pauses, one-way audio, and issues while roaming between access points. 

To develop this guide, we performed testing using Microsoft Lync's Pre-Call Diagnostics Tool. The endpoints used during testing were MacBook Pros running Office 365's cloud-hosted Skype for Business Online, also known as Lync Online. All tests were performed while connected to an MR32 access point inside Meraki's headquarters in San Francisco, a high-density corporate WiFi network. This tool measures three key metrics for voice quality:

  • Network MOS - The Network Mean Opinion Score (MOS) is the network’s impact on the listening quality of the VoIP conversation. The score ranges from 1 to 5, with 1 being the poorest quality and 5 being the highest quality.
  • Packet Loss Rate - The packet loss rate is the percent of packets that are lost during transmission.
  • Interarrival Jitter - Interarrival jitter measures the variation in arrival times of packets being received in milliseconds (ms).
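The interarrival jitter reported by such tools is typically the RFC 3550 running estimate, in which each new packet-spacing difference moves the estimate by one sixteenth. A minimal sketch, with timestamps in milliseconds:

```python
def interarrival_jitter(send_times, recv_times):
    """RFC 3550 interarrival jitter estimate in ms.

    For each consecutive packet pair, D is the difference between
    the receive spacing and the send spacing; the running estimate
    moves 1/16 of the way toward |D|.
    """
    j = 0.0
    for i in range(1, len(send_times)):
        d = ((recv_times[i] - recv_times[i - 1])
             - (send_times[i] - send_times[i - 1]))
        j += (abs(d) - j) / 16.0
    return j
```

A perfectly paced stream (constant one-way delay) yields zero jitter; any variation in packet spacing pushes the estimate above zero.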

By combining this guide with best practices for configuring your client devices, application servers, WAN links, and wired network, you can measure and improve voice quality end-to-end. For more information on configuring your wired network to support voice, please see the article on Configuring MS Access Switch for Standard VoIP Deployment.

Voice Quality Before this Guide

With the default settings on the MR, we see the baseline for quality. Voice calls with Lync on this network would be acceptable to some users, but not to others. The Lync test results show the Network Mean Opinion Score (MOS) dropping below 3.5; values below 3.5 are considered unacceptable by many users. Packet loss jumps to 8 percent during a period of network congestion simulated by running a speed test. Jitter fluctuates from 12 milliseconds to over 36 milliseconds; Cisco recommends a target of 10 ms of jitter and no more than 50 ms. Jitter is handled by buffering in voice/video applications, which adds a small delay; the human ear normally accepts up to about 140 milliseconds of delay without noticing it.

Voice Quality After this Guide

After making the changes exactly as described in this guide, we see a significant improvement in voice quality. The MOS score approaches 3.9, the packet loss is near zero, and the jitter is consistently below 6 milliseconds and always below 12 milliseconds. As the call starts, traffic shaping kicks in automatically with prioritization and QoS tagging. A speed test was performed with no impact to the voice traffic - packet loss increased to 0.5% during the congestion.


Wireless Voice Best Practices

A Cisco Meraki wireless network has built-in intelligence, using deep packet inspection to identify voice and video applications and prioritize the traffic with queuing and tagging that informs the rest of the network how to handle your voice traffic. Below is a summary of the best practices to provide the best voice quality over wireless.

  1. Perform a pre-install RF survey for overlapping 5 GHz voice-quality coverage with -67 dBm signal strength in all areas.
  2. If possible, create a new SSID dedicated to your voice over IP devices.
    1. Set Authentication type to 'Pre-shared key with WPA2'
    2. Set WPA encryption mode to 'WPA2 only'
    3. Enable '5 GHz band only'
    4. Set Minimum bitrate to '12 Mbps' or higher. 
  3. Enable Bridge mode. If you cannot provide a VLAN across the entire floor, use Layer 3 roaming.
  4. Enable 'VLAN tagging' and assign a VLAN dedicated to wireless voice. If you cannot dedicate an SSID to voice, assign a VLAN dedicated to wireless.
  5. Set a 'Per-client bandwidth limit' to 5 Mbps with 'Speed Burst' to limit all non-voice traffic
  6. Set a 'Per-SSID bandwidth limit' to unlimited.
  7. Enable 'Traffic shaping' on the SSID to prioritize all voice traffic
    1. Create a traffic shaping rule for 'All voice & video conferencing'
    2. Add 'Custom expressions' for the IP and ports used by your servers hosting Microsoft Lync / Skype for Business, Jabber, or Spark
    3. Set the Per-client bandwidth limit to 'Ignore SSID per-client limit (unlimited)' from the drop down.
    4. Set PCP to '6'
    5. Set DSCP to '46 (EF - Expedited Forwarding, Voice)'
  8. Verify the Voice VLAN is tagged correctly
  9. Verify your uplinks and Switches have Quality of Service defined with the maximum PCP and DSCP values
  10. Verify DSCP trust is enabled on switch ports to APs and uplinks
  11. Verify your Windows Group Policy to ensure your devices are tagging application traffic with DSCP (not on by default)
  12. Verify your voice server configuration to ensure Microsoft Lync / Skype and Call Manager have DSCP enabled (not on by default)
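Steps 7.1-7.5 above map to a single traffic-shaping rules payload in the Dashboard API. The sketch below builds that body; the field names follow the API v1 schema for updating SSID traffic shaping rules and the definitions format is simplified, so verify both against the current API reference before use. The voice category identifier is passed in rather than hard-coded, since the exact value is returned by the Dashboard:

```python
def voice_shaping_payload(voice_category_value):
    """Request body for the SSID traffic-shaping rules update,
    matching steps 7.1-7.5 above.

    voice_category_value: the 'All voice & video conferencing'
    application-category identifier as returned by the Dashboard
    (not hard-coded here).
    """
    return {
        "trafficShapingEnabled": True,
        "rules": [{
            # Step 7.1: rule for the built-in voice/video category.
            # Step 7.2's custom expressions (your server IPs and
            # ports) would be appended to this definitions list.
            "definitions": [
                {"type": "applicationCategory",
                 "value": voice_category_value},
            ],
            # Step 7.3: exempt voice from the SSID per-client limit.
            "perClientBandwidthLimits": {"settings": "ignore"},
            "pcpTagValue": 6,    # step 7.4: Layer 2 CoS for voice
            "dscpTagValue": 46,  # step 7.5: EF - Expedited Forwarding
        }],
    }
```

The body would be sent via PUT to /networks/{networkId}/wireless/ssids/{number}/trafficShaping/rules (updateNetworkWirelessSsidTrafficShapingRules in the official Python SDK).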
Summary of 802.11 Standards

All Meraki MR series access points support the most recent 802.11 standards implemented to assist devices to roam between access points and ensure voice calls maintain a quality user experience. 

  • 802.11r: Fast BSS Transition permits fast and secure hand-offs from one access point to another in a seamless manner.
  • 802.11i: Allows client devices authenticated via 802.1X to re-authenticate with decreased latency while roaming.
  • 802.11k: Assisted roaming allows clients to request neighbor reports for intelligent roaming across access points.
  • 802.11e: Wireless Multimedia Extensions (WMM) traffic prioritization ensures wireless VoIP phones receive higher priority.
  • WMM Power Save: Maximizes power conservation and battery life on devices without sacrificing Quality of Service.
  • 802.11u: Hotspot 2.0, also known as Passpoint, is a service provider feature that assists with carrier offload.

Pre-Install Survey

The design and layout of access points is critical to the quality of voice over WiFi. Configuration changes cannot overcome a flawed AP deployment. In a network designed for Voice, the wireless access points are grouped closer together and have more overlapping coverage, because voice clients should roam between access points before dropping a call. Designing with smaller cells and lower power settings on the access point are key elements to ensure the overlapping coverage from neighboring APs/cells. Set a clear requirement based on the device type when performing a survey. 

Pre-site surveys are useful for identifying and characterizing certain challenging areas and potential sources for interference, such as existing WiFi networks, rogues, and non-802.11 interference from sources such as microwave ovens and many cordless telephones. Post-site surveys should be performed at least 48 hours after installation to allow the network to settle on channel and power settings.

  • Prefer 5 GHz coverage for voice applications due to the lower noise floor compared to 2.4 GHz
  • Verify an AP can be seen from the phone at -67 dBm or better in all areas to be covered 
  • Verify that the AP sees the phone at -67 dBm or better in all areas as well
  • Signal-to-noise ratio (SNR) should always be 25 dB or more in all areas to provide coverage for voice applications
  • Channel utilization should be under 50%
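A survey walk produces one measurement per test point, so the pass/fail criteria above are easy to encode. A minimal illustrative helper (the function name and inputs are ours, not Meraki survey tooling):

```python
# Voice-readiness thresholds from the survey checklist above.
MIN_RSSI_DBM = -67
MIN_SNR_DB = 25
MAX_CHANNEL_UTIL_PCT = 50

def survey_point_ok(rssi_dbm, noise_dbm, channel_util_pct):
    """Check one survey measurement against the voice thresholds.

    SNR is derived as signal minus noise floor, both in dBm.
    """
    snr = rssi_dbm - noise_dbm
    return (rssi_dbm >= MIN_RSSI_DBM
            and snr >= MIN_SNR_DB
            and channel_util_pct < MAX_CHANNEL_UTIL_PCT)
```

For example, a point measuring -60 dBm against a -92 dBm noise floor (32 dB SNR) at 30% channel utilization passes, while -70 dBm at the same point fails the coverage requirement.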


For more guidelines on site surveys read our article on Performing a Wireless Site Survey. For more detailed guidelines on designing RF specifically for Cisco Voice over WiFi please read Cisco's Voice over WLAN Guide.

Network Configuration

Making the changes described in this section will provide a significant improvement in voice quality and user satisfaction by following the best practices for configuring your SSIDs, IP assignment, Radio Settings, and traffic shaping rules.

Add a Dedicated Voice SSID 

Voice optimization typically requires a different configuration, including access control and traffic shaping, to address device-specific recommendations. You should create a separate Voice SSID for devices dedicated to voice applications. While this is not a requirement, we recommend creating a separate network to follow this guide. In networks with VoIP handsets from two different manufacturers, it is common to create two voice SSIDs.

If you plan to deploy more than 4 SSIDs please read our guide on the Consequences of Multiple SSIDs.​

Authentication Type

Voice over WiFi devices are often mobile, moving between access points while passing voice traffic. The quality of the voice call is impacted by roaming between access points, and roaming is impacted by the authentication type. The authentication type depends on the device and its supported auth types. It is best to choose the auth type that is the fastest and supported by the device. If your devices do not support fast roaming, Pre-shared key with WPA2 is recommended. WPA2-Enterprise without fast roaming can introduce delay during roaming due to its requirement for full re-authentication. When fast roaming is utilized with WPA2-Enterprise, roaming times can be reduced from 400-500 ms to less than 100 ms, and the transition from one access point to another will not be audible to the user. The following list of auth types is in order of fastest to slowest.

  1. Open (no encryption)
  2. Pre-shared key with WPA2 and Fast roaming
  3. WPA2-Enterprise with Fast roaming
  4. ​Pre-shared key with WPA2 
  5. ​WPA2-Enterprise 
WPA2 only for Encryption Mode

Voice devices can benefit from having a single type of encryption used. ​By default, SSIDs on Cisco Meraki access points that are configured as WPA2 will utilize a combination of both WPA1 TKIP and WPA2 AES encryption. WPA2 (AES) is recommended and required in order to utilize caching or fast roaming. The WPA encryption setting is SSID specific, and can be found on the Wireless > Configure > Access control page.

  • If all Voice devices support WPA2, the 'WPA2 only' option is recommended for Voice over IP devices. 
  • If the device does not support AES, it is also possible to force TKIP only. Please contact Cisco Meraki support to configure this option.

For step-by-step instructions on changing the WPA encryption mode, see our document on Setting a WPA Encryption Mode.


802.11r fast roaming

Enabling 802.11r is recommended to improve voice quality while roaming. The 802.11r standard was designed to improve VoIP and voice applications on mobile devices connected to Wi-Fi, in addition to or instead of cellular networks. When mobile devices roam from one area to another, they disassociate from one access point and reassociate to the next access point. Enabling 802.11r benefits VoIP devices by reducing the roaming time spent changing between access points. Some client devices are not compatible with Fast BSS Transition (802.11r). You may wish to check your devices for compatibility.

This feature can be enabled from the Configure > Access control page under Network access > 802.11r. If this option does not appear, a firmware update may be required.  For more details on 802.11r, refer to our guide on Fast Roaming Technologies

It has been determined that configuring an SSID with WPA2-PSK and 802.11r fast roaming poses a security risk due to a vulnerability. The vulnerability allows potential attackers the ability to obtain the PSK for the SSID when a client fast roams to another AP. While 802.11r does improve VoIP quality, since the time for the necessary re-association process to the next AP is reduced, it is not recommended to use 802.11r fast roaming at this time with an SSID using WPA2-PSK.

You can read more on the 802.11r with WPA2-PSK vulnerability on this blog post.

Layer 2 and Layer 3 Roaming 

Bridge mode is recommended to improve roaming for voice over IP clients with seamless Layer 2 roaming. In bridge mode, the Meraki APs act as bridges, allowing wireless clients to obtain their IP addresses from an upstream DHCP server. Bridge mode works well in most circumstances, particularly for seamless roaming, and is the simplest option to put wireless clients on the LAN. To configure the client IP assignment modes please refer to our document on SSID Modes for Client IP Assignment.

When using Bridge mode, all APs on the same floor or area should support the same VLAN to allow devices to roam seamlessly between access points. Using Bridge mode will require a DHCP request when performing a Layer 3 roam between two subnets. For example, when a user on a VoIP call roams between APs on different VLANs without layer 3 roaming, the user's session will be interrupted as the external server must re-establish communication with the client's new IP address. During this time, a VoIP call will noticeably drop for several seconds, providing a degraded user experience. 

Large wireless networks with multiple VLANs per floor may require IP session roaming at layer 3 to enable application and session persistence while a mobile client roams across multiple VLANs. With layer 3 roaming enabled, a client device will have a consistent IP address and subnet scope as it roams across multiple APs on different VLANs/subnets. If layer 3 roaming is required on your network, please refer to our article on Layer 3 Roaming.

Note: It is strongly recommended to consult with a Cisco Meraki SE or Cisco Partner when considering layer 3 roaming options.

NAT mode is not recommended for Voice over IP: with NAT mode enabled, devices will request a new DHCP IP address on each roam, breaking the connection as they move from AP to AP. Applications requiring continuous traffic streams, such as VoIP, VPN, or media streams, will be disrupted when roaming between APs.

Segregate Traffic on a Voice VLAN 

Voice traffic tends to come in large amounts of two-way UDP communication. Since there is no overhead on UDP traffic ensuring delivery, voice traffic is extremely susceptible to bandwidth limitations, clogged links, or even just non-voice traffic on the same line. Separating out your voice traffic allows it to function independently of other network traffic, and allows for more granular control over different types of traffic.

If a voice VLAN is specified on a Meraki MS switch, the port will accept tagged traffic on the voice VLAN. In addition, the port will send out LLDP and CDP advertisements recommending devices use that VLAN for voice traffic.  The VLAN tagged on the wireless access point should match the Voice VLAN on your wired network.  For more information, please visit the article on Configuring MS Access Switch for Standard VoIP deployment.


Band Selection

The 2.4 GHz band has only 3 non-overlapping channels, while the 5 GHz band has up to 19 individual channels in the US. A wireless network will provide the best quality of service for wireless voice when designed correctly to support 5 GHz coverage for voice. This can be configured under Access Control > Wireless options > Band selection > '5 GHz band only'. After configuration, testing should be performed in all areas of your environment. If you do not have proper 5 GHz coverage after a post-install site survey, you can manually increase the power on the 5 GHz radio in Radio Settings > Channel Planning.


If you do not have a dedicated Voice SSID, enable 'Dual-band with band steering' so your voice devices can use both the 2.4 GHz and 5 GHz bands; devices will be steered to use the 5 GHz band. For more details refer to the Band Steering Overview article. With a dual-band network, client devices will be steered by the network; however, to improve voice quality, please follow our guide on configuring wireless band preference on client devices.

If you have devices that only support a 2.4 GHz network such as 802.11bgn devices, please contact Meraki Support to enable a 2.4 GHz only network.

Minimum Bitrate

For voice networks, 12 Mbps is recommended as the minimum bitrate. Increasing this value requires proper coverage in the RF planning. An administrator can improve the performance of clients on the 2.4 GHz and 5 GHz bands by disabling lower bitrates. Adjusting the bitrates can reduce the overhead on the wireless network and, in some cases, improve roaming performance.

The minimum bitrate is configured in Wireless > Radio Settings > RF Profiles > [Profile Name] and can be set on per band or on per SSID basis.

Bandwidth Limits

Consider placing a per-client bandwidth limit on the rest of your network traffic. Prioritizing voice will have a greater impact if all other applications are limited. For more details refer to the article Configuring Bandwidth Limitations and Enabling Speed Burst on Wireless Networks.

  1. Go to Wireless > Configure > Firewall & traffic shaping and choose your SSID from the SSID drop down menu at the top of the screen.
  2. Set a 'Per-client bandwidth limit' to 5 Mbps with 'Speed Burst'. This will apply to all non-voice application traffic. This step in the guide is optional. 
  3. Set a 'Per-SSID bandwidth limit' to unlimited.
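Steps 2 and 3 correspond to four fields on the SSID update call in the Dashboard API, which expresses limits in Kbps with 0 meaning unlimited. A sketch of the arguments (field names follow the API v1 updateNetworkWirelessSsid schema; verify them against the current reference, and note that Speed Burst is a separate setting not shown here):

```python
MBPS = 1000  # Dashboard API bandwidth limits are expressed in Kbps

def bandwidth_limit_kwargs():
    """Arguments for the SSID update call matching steps 2-3 above.

    0 means unlimited in the Dashboard API schema.
    """
    return {
        "perClientBandwidthLimitDown": 5 * MBPS,  # step 2: 5 Mbps
        "perClientBandwidthLimitUp": 5 * MBPS,
        "perSsidBandwidthLimitDown": 0,           # step 3: unlimited
        "perSsidBandwidthLimitUp": 0,
    }
```

With the official Python SDK, this would be applied as dashboard.wireless.updateNetworkWirelessSsid(network_id, ssid_number, **bandwidth_limit_kwargs()), where network_id and ssid_number are your own values.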


Speed Burst allows bursts of four times the allotted bandwidth limit for five seconds.

Traffic Shaping Rules

Use traffic shaping to offer voice traffic the necessary bandwidth. It is important to ensure that your voice traffic has enough bandwidth to operate. As such, traffic shaping rules can be implemented to allow voice traffic to use additional bandwidth, or limit other types of traffic to help prioritize voice traffic.

  1. Go to Wireless > Configure > Firewall & traffic shaping and choose your SSID from the SSID drop down menu at the top of the screen.
  2. Click the drop down menu next to Shape traffic and choose Shape traffic on this SSID, then click Create a new rule.
  3. Click Add + and select 'All VoIP & video conferencing'
  4. Set Per-client bandwidth limit to 'Ignore SSID per-client limit (unlimited)' and click Save changes.​



PCP, DSCP, and WMM mapping

Many devices support Quality of Service (QoS) tags to maintain traffic priority across the network. Meraki MR access points support WMM to improve the performance of real-time data such as voice and video. WMM improves the reliability of applications in progress by preventing oversubscription of bandwidth. WMM accomplishes this by prioritizing traffic into four access categories: voice, video, best effort, and background. The WMM mapped values can be found in this article. To configure the correct mapping for PCP and DSCP values, follow these steps:

  1. Go to Wireless > Configure > Firewall & traffic shaping and choose your SSID from the SSID drop down menu at the top of the screen.
  2. Under the traffic shaping rules, make sure Shape Traffic for this SSID is selected and that  there's a rule for All voice & video conferencing.
  3. Set PCP to '6' or the setting recommended by your device/application vendor (Note that PCP values can only be changed if the SSID has VLAN tagging enabled. This ensures there's a field to which the CoS value can be written).
  4. Set DSCP to 46 (EF - Expedited Forwarding, Voice) or the setting recommended by your device/application vendor.

Note: DSCP tag 46 (EF - Expedited Forwarding, Voice) maps to WMM Access Category AC_VO (Voice, Layer 2 CoS 6).


If no DSCP values are configured, the default DSCP-to-WMM mapping will be used. The access point maps between the LAN's Layer 2 priority and the radio's WMM class. Below is a table showing the mapping between common traffic types and their respective markings:

RFC 4594-Based Model      802.3 DSCP   802.3 DSCP [Decimal]   IEEE 802.11 Model [802.11e WMM-AC]
Voice + DSCP-Admit        EF + 44      46, 44                 Voice AC (AC_VO)
Broadcast Video           CS5          40                     Video AC (AC_VI)
Multimedia Conferencing   AF4n         34, 36, 38             Video AC (AC_VI)
Realtime Interactive      CS4          32                     Video AC (AC_VI)
Multimedia Streaming      AF3n         26, 28, 30             Video AC (AC_VI)
Signaling                 CS3          24                     Video AC (AC_VI)
Transactional Data        AF2n         18, 20, 22             Best Effort AC (AC_BE)
OAM                       CS2          16                     Best Effort AC (AC_BE)
Bulk Data                 AF1n         10, 12, 14             Background AC (AC_BK)
Scavenger                 CS1          8                      Background AC (AC_BK)
Best Effort               DF           0                      Best Effort AC (AC_BE)

* n, used in place of the drop-precedence digit of the assured forwarding classes, matches values 1-3.
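The table above reduces to a small lookup, which can be handy when sanity-checking captures. A sketch following that mapping (values not listed fall back to best effort in this sketch):

```python
# DSCP (decimal) -> WMM access category, following the table above.
DSCP_TO_WMM = {}
for dscps, ac in [
    ((46, 44), "AC_VO"),                               # voice classes
    ((40, 34, 36, 38, 32, 26, 28, 30, 24), "AC_VI"),   # video + signaling
    ((18, 20, 22, 16), "AC_BE"),                       # transactional, OAM
    ((10, 12, 14, 8), "AC_BK"),                        # bulk, scavenger
]:
    for dscp in dscps:
        DSCP_TO_WMM[dscp] = ac

def wmm_class(dscp):
    """Return the WMM access category for a DSCP decimal value.

    Unlisted values (including 0, best effort) map to AC_BE here.
    """
    return DSCP_TO_WMM.get(dscp, "AC_BE")
```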


For QoS prioritization to work end to end, ensure that upstream networking equipment supports QoS prioritization as well. The PCP and DSCP tags applied on the wireless access point should match the wired network configuration to ensure end-to-end QoS. For more information, please visit the article on Configuring MS Access Switch for Standard VoIP Deployment.

Custom Traffic Shaping 

If your voice traffic does not match the built-in application signatures or is not listed, you can create your own signature for traffic shaping.

  1. Add the IP and ports used by your servers hosting Microsoft Lync / Skype for Business, Jabber, Spark, or other voice application.
    1. In the Definition field click Add + and Custom expressions 
    2. In the text field, enter the IP address of each of your voice servers, or a range of server IPs.
    3. Also, add your servers as source addresses by using the CIDR notation localnet: for an individual server, or localnet: for a range of IPs.
    4. Click the Add + button again when finished.
  2. If you have a dedicated voice SSID and dedicated voice VLAN add the local subnet of the client devices.
    1. In the Definition field click Add + and Custom expressions 
    2. In the text field, enter localnet: indicating the source subnet of your client devices in CIDR notation.
    3. Click the Add + button again when finished.
  3. Set Per-client bandwidth limit to 'Ignore SSID per-client limit (unlimited)' and click Save changes.​

Product Specific Recommendations

Cisco Meraki works closely with device manufacturers, such as Apple, providing them with access points for interoperability testing. Meraki performs its own testing across the entire spectrum of devices, and our customer support team handles and reports bugs quickly. This section provides recommendations based on real-world deployments by Meraki customers, combined with best practices developed by Meraki and the vendors mentioned below.

Microsoft Lync / Skype for Business

This section will provide guidance on how to implement QoS for Microsoft Lync and Skype for Business. Microsoft Lync is a widely deployed enterprise collaboration application which connects users across many types of devices. This poses additional challenges because a separate SSID dedicated to the Lync application may not be practical. When you install Microsoft Lync Server / Skype for Business, Quality of Service (QoS) will not be enabled for any devices used in your organization that use an operating system other than Windows. For more guidance on deploying Lync over Wi-Fi, please read Microsoft's deployment guide, Delivering Lync 2013 Real-Time Communications over Wi-Fi.


Meraki's deep packet inspection can intelligently identify Lync calls made on your wireless network and apply traffic shaping policies to prioritize the Lync traffic - using the SIP Voice protocol. In addition to the Meraki built-in signatures for Skype and SIP, you should also identify each Lync server by IP and any custom ports used by your Lync clients or servers. Follow these steps to configure your traffic shaping rules for Lync / Skype.

  1. Go to Wireless > Configure > Firewall & traffic shaping and choose your SSID from the SSID drop down menu at the top of the screen.
  2. Click the drop down menu next to Shape traffic and choose Shape traffic on this SSID, then click Create a new rule.
  3. Consider setting a 'Per-client bandwidth limit' to 5 Mbps with 'Speed Burst'. This will apply to all non-voice application traffic
  4. Set the 'Per-SSID bandwidth limit' to unlimited
  5. Create a traffic shaping rule for All voice & video conferencing > 'Skype' and 'SIP (Voice)'
  6. Set the Per-client bandwidth limit to 'Ignore SSID per-client limit (unlimited)' from the drop down.
  7. Set PCP to '6'. The 802.1p parameter is no longer supported in Lync Server 2013. The parameter is still valid for backward compatibility with Microsoft Lync Server 2010; however, it has no effect on devices used with Lync Server 2013.
  8. Set DSCP to '46 (EF - Expedited Forwarding, Voice)'
  9. Add 'Custom expressions' for the IP and ports used by your servers hosting Microsoft Lync / Skype for Business
    1. For Cloud-hosted Lync / Skype, add the domain names from the table below
    2. Add the Port numbers from the table below or your own list of assigned port numbers
    3. Add the IP address of each of your on-premise Lync servers
  10. You can test that DSCP markings are applied using the Meraki packet capture tool. 


Lync Online / Skype for Business Online servers:
  • (Domain names)

Lync & Skype port numbers:
  • 444
  • 5060-5064
  • 5070-5072
  • 5086-5087
  • 8058-8061
  • 49152-57500

On-premise Lync / Skype for Business servers:
  • (Destination IP)
  • (Destination IP range)
  • localnet: (Source IP)
  • localnet: (Source IP range)

The ports provided in the above table are the standard ports provided by Microsoft. Enabling QoS requires configuring the client device to modify the port ranges and assign the DSCP value 46. Microsoft's best practices include configuring the port ranges on both your servers and client devices. For details on enabling QoS, refer to Microsoft's article Managing Quality of Service (QoS) in Lync Server 2013.

Cisco 7925G Phones

Cisco 7925G, 7925G-EX, and 7926G VoIP phones require specific settings to inter-operate with Meraki MR access points configured with WPA2-PSK association requirements. For more in-depth information on integrating Cisco 792xG with MR Access Points, see the Cisco Unified Wireless IP Phone 792xG + Cisco Meraki Wireless LAN Deployment Guide

  1. Cisco recommends setting band selection to 5 GHz only.
  2. Do not use the Dual band operation with Band Steering option. If the 2.4 GHz band needs to be used due to increased distance, select Dual band operation (2.4 GHz and 5 GHz).
  3. Set the Minimum bitrate to 11 Mbps or higher.
  4. By default, Cisco Meraki access points currently tag voice frames marked with DSCP EF (46) as WMM UP 6 and call control frames marked with DSCP CS3 (24) as WMM UP 4.


Cisco Meraki access points will trust DSCP tags by default. Administrators should ensure that upstream QoS is in place and that the QoS markings outlined below are applied for the 7925 phones. To rewrite QoS tags for certain traffic types or sources/destinations, create a traffic shaping rule as outlined in Custom Traffic Shaping above.


Below is the QoS and port information for voice and call control traffic used by the Cisco Unified Wireless IP Phone 7925G, 7925G-EX, and 7926G. For a full list of ports and protocols used by Cisco phones refer to the Cisco Unified Communications Manager TCP and UDP Port Usage Guide.

Traffic Type DSCP   PCP (802.1p) WMM Port Range
Voice (RTP, SRTP) EF (46) 5 6 UDP 16384 - 32767
Call Control (SCCP, SCCPS) CS3 (24) 3 4 TCP 2000, TCP 2443

Cisco Unified Communications Manager only uses the port range 24576-32767 although other devices use the full range 16384 - 32767.
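The 7925G markings above amount to a simple classification by protocol and port. An illustrative sketch (real classifiers would also match source and destination addresses):

```python
def classify_7925_traffic(protocol, port):
    """Classify a flow per the 7925G table above.

    Returns (traffic type, DSCP decimal, WMM UP). Simplified model
    for illustration only.
    """
    if protocol == "udp" and 16384 <= port <= 32767:
        return ("voice", 46, 6)          # RTP/SRTP -> EF, WMM UP 6
    if protocol == "tcp" and port in (2000, 2443):
        return ("call-control", 24, 4)   # SCCP/SCCPS -> CS3, WMM UP 4
    return ("other", 0, 0)               # best effort
```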

Apple iPhones

Apple and Cisco have created a partnership to better support iOS business users by optimizing Cisco and Meraki networks for iOS devices and apps. For more information on this partnership, please see Apple's website. Meraki's group policies can be easily configured to optimize Apple devices on a Meraki network. First, create a group policy to apply to Apple devices.

  1. Browse to Network-wide > Configure > Group Policies, scroll down and click Add Group
  2. Under traffic shaping, create a new rule, and add All VoIP & video conferencing
    1. Set the Per-client bandwidth limit to Ignore network per-client limit
    2. Set PCP to 6 (highest priority) and DSCP to 46 (EF - Expedited Forwarding, Voice). 
  3. Create a second rule, and add All Video & music
    1. Set the Per-client bandwidth limit to Ignore network per-client limit
    2. Set PCP to the appropriate value and DSCP to 34 (AF41 - Multimedia Conferencing, Low Drop).
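The same policy can also be created programmatically. Below is a hedged sketch of a "Create Group Policy" payload for the Meraki Dashboard API (POST /networks/{networkId}/groupPolicies); field names follow the public v1 API schema as best understood, and the PCP value for the second rule is assumed from the standard DSCP-to-UP mapping since the steps above do not state it. Verify against the current API reference before use.

```python
import json

def apple_policy_payload() -> dict:
    """Sketch of the Apple-optimization group policy as an API payload.
    Field names per the public Meraki Dashboard API v1 schema (verify
    against the current API reference before relying on this)."""
    def rule(category: str, dscp: int, pcp: int) -> dict:
        return {
            "definitions": [{"type": "applicationCategory", "value": category}],
            "perClientBandwidthLimits": {"settings": "ignore"},
            "dscpTagValue": dscp,
            "pcpTagValue": pcp,
        }

    return {
        "name": "Apple device optimization",
        "firewallAndTrafficShaping": {
            "settings": "custom",
            "trafficShapingRules": [
                # EF (46) / PCP 6 for VoIP & video conferencing, per step 2
                rule("All VoIP & video conferencing", 46, 6),
                # AF41 (34); PCP 4 assumed from the standard DSCP-to-UP
                # mapping (the steps above omit the value)
                rule("All Video & music", 34, 4),
            ],
        },
    }

print(json.dumps(apple_policy_payload(), indent=2))
```

The payload would be POSTed with an X-Cisco-Meraki-API-Key header to the group policies endpoint for the target network.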



To apply your new Apple device group policy, browse to Wireless > Configure > Access control and enable Assign group policies by device type. Click Add group policy for a device type for each Apple device type (iPhone, iPad, iPod, and Mac OS X) and assign the Apple device group policy you created. Click Save and your optimization is complete.


Vocera Badges

Vocera badges communicate with a Vocera server, which contains a mapping of AP MAC addresses to building areas. The server can then send an alert to security personnel to follow up at that reported location. Location accuracy requires a higher density of access points. In a high density deployment, you may need to manually reduce the transmit power of each AP to as low as 5 dBm on all supported radios. Vocera provides additional documentation on WLAN best practices to support Vocera badges. For more information, download their document on Vocera WLAN Requirements and Best Practices.

Some models of Vocera badges do not support 5 GHz or WPA2 AES encryption and require WPA1 TKIP. Please contact Cisco Meraki support to configure WPA1 TKIP on your network.

WiFi Calling

Mobile network operators (MNOs) now allow their customers to place phone calls over WiFi to save roaming costs and leverage WiFi coverage in buildings with poor cellular coverage. WiFi calling is expected to be supported by the majority of mobile devices and MSPs by the end of 2015. Apple maintains a list of carriers that offer WiFi calling and the devices that support it.

An enterprise WiFi infrastructure handles more than just carrier voice traffic; the limited spectrum is shared by other applications and services such as video streaming and web conferencing. The latency and jitter requirements of voice warrant a network with proper end-to-end QoS design and voice optimizations to prioritize delivery of WiFi calling packets in the presence of other applications.

Service Provider WiFi 

Service providers are using WiFi to offload data from cellular networks to meet the ever-increasing demands of mobile device users. Two technologies enabling WiFi to meet this demand are WiFi calling and Hotspot 2.0.

Hotspot 2.0

Hotspot 2.0, also known as Passpoint, is a service provider feature that assists with carrier offloading. Based on the 802.11u amendment to the 802.11 standard, additional information is included in Hotspot 2.0-configured SSIDs that capable client devices can analyze to determine whether they are able to join the network automatically.

Managed Service Providers (MSPs) can now enable Hotspot 2.0 options on Cisco Meraki MR access points. Meraki allows MSPs to customize the Hotspot 2.0 SSID advertisements so their subscribers can easily roam between networks. Hotspot 2.0 options are only available to qualified Managed Service Providers. Please contact Cisco Meraki Support to check eligibility.

Troubleshooting VoIP

We have created a detailed article focused on troubleshooting VoIP on Meraki. Please visit the article: VoIP on Cisco Meraki: F.A.Q. and Troubleshooting Tips

Best Practice Design - MV Cameras

Meraki MV Cameras - Introduction and Features

Industry Terminology


This article goes through basic camera industry terminology and introduces MV features. The following is an explanation of some terminology you may come across when deploying, designing, or installing security camera networks.

Focal Length

The focal length is a technical measurement of a camera lens and affects the Field of View (FoV). The longer the focal length (typically measured in millimeters), the more zoomed in the picture will be.

Varifocal Lens

A camera with a variable focal length, sometimes called varifocal, can be adjusted to optically magnify (or zoom) to enhance the detail of distant objects.

Fixed Lens

A camera with a fixed lens cannot have its focal length adjusted. Fixed lenses are most common in indoor, multi-imager, or fisheye cameras, although some outdoor cameras have fixed focal lengths as well.


Meraki MV cameras are available in models with both varifocal and fixed lenses.

The focal length of the MV12N is longer than that of the MV12W, and therefore the MV12N has a narrower field of view.

Field of View

Field of View (FoV) is the term used to describe how much of a scene a camera can see. A narrow FoV (in layman's terms, when the lens is more zoomed in) will show only a small part of a scene, e.g. the door entrance to a room. A wide FoV will show a large part of a scene, e.g. the entire room and not just the entrance door. FoV is often broken into horizontal and vertical components and expressed in degrees.
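The narrowing effect of a longer focal length can be sketched with the pinhole-camera approximation FoV = 2 * atan(d / 2f), where d is the sensor dimension and f the focal length. The sensor and lens figures below are illustrative, not MV datasheet values.

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal FoV in degrees via the pinhole-camera approximation."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Same (hypothetical) 6 mm wide sensor: a longer lens gives a narrower view.
print(round(horizontal_fov_deg(6.0, 3.0), 1))  # 90.0 degrees (wide)
print(round(horizontal_fov_deg(6.0, 8.0), 1))  # narrower, more zoomed in
```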




Depth of Field

Depth of field refers to the range of distance where objects appear acceptably sharp in an image. It varies depending on camera type, aperture and focusing distance. In security camera applications, it is almost always preferable to have as much of the image within the depth of field as possible. Cameras that are placed to cover both near and distant objects must reduce the aperture, softening the image due to diffraction (see circle of confusion for more information).


Aperture

The aperture describes the iris or hole that lets light into the camera's sensor. The larger the hole, the more light can enter the camera. The more light that can enter the camera, the better it can see in poor light and the brighter the picture will be. A smaller aperture (less light) increases the depth of field; a larger aperture (more light) decreases it.


Lux

The lux (symbol: lx) is the SI derived unit of illuminance and luminous emittance, measuring luminous flux per unit area. It is equal to one lumen per square meter. In photometry, this is used as a measure of the intensity, as perceived by the human eye, of light that hits or passes through a surface. It is analogous to the radiometric unit watt per square meter, but with the power at each wavelength weighted according to the luminosity function, a standardized model of human visual brightness perception. In English, "lux" is used as both the singular and plural form.

Dome Camera

A dome camera is a security camera form factor shaped as a dome, or half a sphere. The benefit of this form factor is that it can be easily and discreetly installed in many locations.

IP Rating

An Ingress Protection rating (or IP rating) is a standardized measure of a device's ability to withstand water and dust. An IP66 rating means the device is weatherproof. The official terminology states that it is completely protected from ingress of solid objects and from water projected in powerful jets (12.5 mm nozzle) against the camera from any direction, which covers rain. More information about IP Codes can be found in the IEC 60529 standard.


The Meraki MV71 camera is IP66 rated.

IK rating

An IK rating is a standardized measure of a device's impact resistance. IK ratings fall between 0 and 10+ and provide a means for specifying the capacity of an enclosure to protect its contents from external impacts. More information on IK ratings can be found in the IEC 62262 standard.


The Meraki MV71 has the second highest level of protection, IK10, and is protected against a 5 kg object dropped from a height of 40 cm.


PTZ

PTZ, or pan-tilt-zoom, describes a type of camera that allows the user to adjust the camera lens along three axes remotely. Panning a camera moves its field of view back and forth along a horizontal axis. Tilting commands move it up and down the vertical axis. Zooming a camera affects how close objects appear in the field of view.

Image Sensor

An image sensor or imaging sensor (also: imager) is a sensor that detects and conveys the information that constitutes an image. It does so by converting the variable attenuation of light waves (as they pass through or reflect off objects) into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, digital imaging tends to replace analog imaging.

Early analog sensors for visible light were video camera tubes. Current types are semiconductor charge-coupled devices (CCD) or active pixel sensors in complementary metal-oxide-semiconductor (CMOS) or N-type metal-oxide-semiconductor (NMOS, Live MOS) technologies. Analog sensors for invisible radiation tend to involve vacuum tubes of various kinds. Digital sensors include flat panel detectors.

Shutter Speed

Shutter speed describes how long the shutter stays open, allowing the camera to collect light when it is taking a picture. As video is a series of pictures (frames), this setting applies to the video frames. The longer the camera collects light, the better it can see in low light.


Meraki MV shutter speed is automatically controlled by the camera and can be between 1/5th and 1/32,000th of a second.

Infrared (IR) Illuminators

Infrared (IR) illuminators are lights used to illuminate dark scenes. The infrared range of wavelengths on the electromagnetic spectrum is invisible to the human eye but can be seen by cameras. Infrared illuminators allow cameras to see in the dark when humans cannot.


Meraki MV infrared illuminators are powerful for their size, with a range of up to 30 meters (or 98 feet) with the MV21/MV71 and up to 15 meters with the MV12.


Some security camera designs call for external IR illumination, especially where large or distant scenes need to be captured.  In these cases, separate IR “flood lights” are used to illuminate the scene.

Solid State Storage

Solid state storage is storage memory that has no physical moving parts. Some examples of solid state storage are the memory in a modern smartphone, flash memory on a thumb drive, or the SD card in a digital camera. The opposite of solid state storage would be magnetic storage; an example is a traditional hard disc with a spinning magnetic disc. Solid state storage is faster and more reliable than traditional spinning hard disks.

High Endurance

High endurance refers to the integrity of a camera's storage over an extended period of time and a large number of write cycles. Solid state storage wears out a little each time it is rewritten with new data. To ensure cameras can reliably store video, the MV uses state-of-the-art, high-endurance, high-capacity solid state memory technology. Other vendors' cameras sometimes offer swappable memory; however, users will often replace factory memory with consumer-grade storage, which has not been designed for high-frequency use (P/E cycles) and is more prone to failure.

Video Resolution

Video resolution is the number of distinct pixels in each dimension that can be displayed. It is usually quoted as width x height in pixels (for example, 1920x1080 means the width is 1920 pixels and the height is 1080 pixels). Resolution directly influences the amount of bandwidth consumed by video surveillance traffic: the required bandwidth is a function of image quality (itself a function of resolution) and frame rate, so as image quality and frame rate increase, so do bandwidth requirements.

Analog Video Resolutions

Video surveillance solutions use a set of standard resolutions. The National Television System Committee (NTSC) and Phase Alternating Line (PAL) are the two prevalent analog video standards. PAL is used mostly in Europe, China, and Australia and specifies 625 lines per-frame with a 50-Hz refresh rate. NTSC is used mostly in the United States, Canada, and portions of South America and specifies 525 lines per-frame with a 59.94-Hz refresh rate. These video standards are displayed in interlaced mode, which means that only half of the lines are refreshed in each cycle. Therefore, the refresh rate of PAL translates into 25 complete frames per second and NTSC translates into 30 (29.97) frames per second.



Digital Video Resolutions

User expectations for resolution of video surveillance feeds are increasing rapidly partially due to the introduction and adoption of high-definition television (HDTV) for consumer broadcast television. A 4CIF resolution, which is commonly deployed in video surveillance, is a 4/10th megapixel resolution. The HDTV formats are megapixel or higher.




Digital Video Surveillance Resolutions (in pixels)

While image quality is influenced by the resolution configured on the camera, the quality of the lens, sharpness of focus, and lighting conditions also come into play. For example, harshly lit areas may not offer a well-defined image, even if the resolution is very high. Bright areas may be washed out and shadows may offer little detail. Cameras that offer wide dynamic range processing, an algorithm that samples the image several times with different exposure settings and provides more detail to the very bright and dark areas, can offer a more detailed image.


As a best practice, do not assume that camera resolution is everything with regard to image quality. For a camera to operate in a day-night environment (where the absence of light is zero lux), its night mode must be sensitive to the infrared spectrum.

Video Compression Codecs

A codec is a device or program that performs encoding and decoding on a digital video stream. In IP networking, the term frame refers to a single unit of traffic across an Ethernet or other Layer-2 network. In this guide, however, frame primarily refers to one image within a video stream. A video frame can be made up of multiple IP packets or Ethernet frames.


A video stream is fundamentally a sequence of still images. In a video stream with fewer images per second, or a lower frame rate, motion can be perceived as choppy or broken. At higher frame rates up to 30 frames per second, the video motion appears smoother; however, 15 frames per second video may be adequate for viewing and recording purposes.


Some of the most common digital video formats include the following:

  • Motion JPEG (MJPEG) is a format consisting of a sequence of compressed Joint Photographic Experts Group (JPEG) images. These images only benefit from spatial compression within the frame; there is no temporal compression leveraging change between frames. For this reason, the level of compression reached cannot compare to codecs that use a predictive frame approach.

  • MPEG-1 and MPEG-2 formats are Discrete Cosine Transform-based with predictive frames and scalar quantization for additional compression. They are widely implemented, and MPEG-2 is still in common use on DVD and in most digital video broadcasting systems. Both formats consume a higher level of bandwidth for a comparable quality level than MPEG-4. These formats are not typically used in IP video surveillance camera deployments.

  • MPEG-4 introduced object-based encoding, which handles motion prediction by defining objects within the field of view. MPEG-4 offers an excellent quality level relative to network bandwidth and storage requirements. MPEG-4 is commonly deployed in IP video surveillance but will be replaced by H.264 as it becomes available. MPEG-4 may continue to be used for standard definition cameras.

  • H.264 is a technically equivalent standard to MPEG-4 part 10, and is also referred to as Advanced Video Codec (AVC). This emerging new standard offers the potential for greater compression and higher quality than existing compression technologies. It is estimated that the bandwidth savings when using H.264 is at least 25 percent over the same configuration with MPEG-4. The bandwidth savings associated with H.264 is important for high definition and megapixel camera deployments.

  • H.265, also known as MPEG-H Part 2, is a video compression standard, one of several potential successors to the widely used AVC (H.264 or MPEG-4 Part 10). In comparison to AVC, HEVC offers about double the data compression ratio at the same level of video quality, or substantially improved video quality at the same bit rate. It supports resolutions up to 8192×4320, including 8K UHD.  H.265 is more efficient than H.264, but its benefits are most often seen with higher resolution video, such as 4K.


As of October 2018, Meraki MV cameras use the H.264 codec.

HLS Streaming

HTTP Live Streaming (HLS) is a protocol originally developed by Apple for streaming media. It works by creating a continuous collection of small files which are downloaded by the web browser and played back seamlessly. Video delivered this way is simple for a browser to interpret and removes the need for special software or browser plugins to show the video. HLS provides excellent video quality and solves the video buffering issue seen in other protocols by using chunks to make streaming playback seamless. The trade-off for seamless playback is a few seconds of latency on video feeds, caused by distribution, encoding, decoding, and default playback buffers.


Meraki MV cameras use HLS streaming to provide frictionless viewing of live and recorded video within a browser.
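For illustration, the "continuous collection of small files" is coordinated by a short media playlist that the browser re-fetches as new segments appear. A minimal live playlist (RFC 8216 syntax; segment names are placeholders) looks like this:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:4.000,
segment120.ts
#EXTINF:4.000,
segment121.ts
#EXTINF:4.000,
segment122.ts
```

The player buffers a few segments before starting playback, which is where the few seconds of latency described above comes from.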

Frame Rate

Video is made up of still images played back in quick succession. Each still image is known as a frame, and the number of frames played per second (fps) dictates how smooth the motion in the video is. The higher the frame rate, the smoother moving objects will appear. TV shows are typically 30 fps, movies 24 fps, and security cameras are variable between 1 fps and 30 fps. For motion JPEG sources, the play rate is the number of frames per second (fps). For MPEG sources, the play rate is the number of megabits per second (Mbps) or kilobits per second (Kbps).


Frame rate control is a feature of some cameras that varies the frame rate depending on movement within the image.  Thus, when movement is detected, the frame rate is increased.

Bit Rate

Bit rate is the amount of data used to store one second of video. It is measured in bits per second, typically expressed in kilobits or megabits per second.


The bit rate determines the total amount of information that can be stored in one second of video, which is then divided across the frames in that second. The lower the frame rate for a given bit rate, the higher the quality of each frame.
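The division described above can be sketched numerically; the 2 Mbps figure here is illustrative, not an MV default.

```python
def bits_per_frame(bitrate_kbps: float, fps: float) -> float:
    """Average bit budget available to encode each frame."""
    return bitrate_kbps * 1000 / fps

# For an illustrative 2,000 kbps (2 Mbps) stream, halving the frame rate
# doubles the bits available per frame, so each frame can be higher quality.
print(round(bits_per_frame(2000, 30)))  # ~66,667 bits per frame at 30 fps
print(round(bits_per_frame(2000, 15)))  # ~133,333 bits per frame at 15 fps
```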

Constant Bit Rate

Constant bit rate (CBR) recording means that no matter what happens in the scene, the camera will encode video to satisfy the configured data bitrate.

Variable Bit Rate

With variable bitrate recording (VBR) a camera (or VMS) can adjust the amount of data in the bitrate to more efficiently record video. A target bitrate is normally chosen to serve as an average the camera will try to achieve. When the scene is empty or nothing is happening, the camera can reduce the bitrate. When a lot is happening in the scene, the camera can increase the bitrate.


Meraki MV cameras use CBR.

Dynamic Range (Wide and High)

High and wide dynamic range are camera techniques for capturing the same image at different exposures and then merging those images together to form a single image.  This is particularly useful where the image consists of very light and very dark areas (e.g., an indoor camera that faces a window to outside).


High dynamic range (HDR) is performed in software, and can be problematic in scenes with fast moving objects. Wide dynamic range (WDR) is a term more commonly used in the CCTV industry. Most often, HDR and WDR solve the problem of lack of detail in dark areas of an otherwise bright image and/or lack of detail in the bright areas of an image.


Meraki MV cameras use HDR in generation 2 (MV12, introduced 2018) and later cameras. Generation 1 cameras (MV21, MV71, introduced 2016) offer 69 dB, which is typical of a camera with standard dynamic range.

MV Features
Immersive Imaging

By using a wide-angle lens, the camera can span a much wider field of view than normal cameras (some camera lens designs even cover a full 360 degrees). Immersive imaging facilitates digital PTZ. The result is the ability to pan, tilt, and zoom digitally within a frame, even though the camera itself does not move.

Intelligent Video

“Intelligent video” is an industry-adopted term that refers to a camera solution analyzing an image and performing an action or actions based on what it “sees.” At Meraki this can refer to the MV family’s motion analytics, dynamic retention policies, or object counting/detection abilities.

Motion-Based Recording

In motion-based recording, a camera only records when it detects motion in the frame. Typically, recording is triggered by the amount of motion in the scene, e.g. a person walking through the door. Motion-based recording allows for longer video retention than continuous recording using the same quantity of storage; however, this technology is prone to false negatives (and a subsequent loss of video data) when an event does not trigger the minimum motion threshold.

Motion-Based Retention

Motion based retention differs from motion-based recording in that, instead of recording only when motion is detected, footage is deleted from the camera (using software) when there is no motion detected in the historical footage. This allows the camera to keep a few days of the most recent footage in its entirety, before removing older footage that does not contain motion, thus extending storage durations.

Video Transmission
Direct Streaming

In direct streaming (or local streaming), an MV camera sends video directly to a user's browser over the local network. This uses no WAN bandwidth when the user and camera are local to one another, and no manual configuration is needed to enable this functionality. The benefit is that it is faster and more efficient than cloud proxy streaming.

Cloud Proxy

Cloud proxy is used to stream video when the dashboard automatically determines that a user's device has no direct connection to an MV camera on the LAN. The video stream is then proxied through Meraki's cloud infrastructure, allowing the user to view live and historical video. This uses WAN bandwidth and is slower to load than local streaming.

Video Wall

The video wall is a dynamic video interface for viewing a collection of tiled camera feeds. It can show both live and historical video in a user's web browser, without the need for any software or browser plugins. All video tiles in a single video wall will remain synchronized throughout the process of reviewing historical footage (even while using the Motion Search tool).

Adaptive Bitrate Streaming

Depending on configured resolution and quality settings, streaming video from a MV camera can consume up to ~3 Mbps per camera. Adaptive Bitrate Streaming (ABS) reduces the overall bandwidth consumption by selectively streaming a lower-quality bitrate stream when the size of the video element is below 540p, or full quality when above. With ABS, the bandwidth required to stream a video wall with 16 high-quality 1080p cameras is reduced by ~40 Mbps!
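The ~40 Mbps figure can be reproduced with simple arithmetic. The sub-540p stream rate used below is an assumption for illustration; actual rates vary by model and configuration.

```python
FULL_STREAM_MBPS = 3.0  # up to ~3 Mbps per camera at full quality (from the text)
LOW_STREAM_MBPS = 0.5   # assumed rate of the lower-quality stream (illustrative)
CAMERAS = 16            # a 16-tile video wall

full_wall_mbps = CAMERAS * FULL_STREAM_MBPS  # without ABS: 48 Mbps
abs_wall_mbps = CAMERAS * LOW_STREAM_MBPS    # with ABS on small tiles: 8 Mbps

print(f"Savings: ~{full_wall_mbps - abs_wall_mbps:.0f} Mbps")  # ~40 Mbps
```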

Security and Architectures


Cyber attacks have become more and more prevalent, with attackers leveraging any insecure means of entry into the network. Recent attacks have used poorly secured security cameras and NVRs as the path into the network. As such, these devices should have the same level of security as traditional network devices.

By simplifying the architecture and making many best practice security features enabled by default, Meraki’s MV security cameras offer extensive security out of the box.

Let’s compare traditional camera systems to the Meraki MV Camera solution.

Camera Architectures
Traditional Architecture

Traditional systems typically rely on an onsite network video recorder (NVR) or server-based recording solution. Additionally, proprietary software packages requiring manual download and configuration are often necessary. These additional moving parts all need to be securely configured and managed, and they require continuous security patching and software updates for the life of the system if network security is a priority. A greater number of devices on the network also means more possible entry points for attackers if those devices are not properly secured and kept up to date.



Security patches must be manually managed and deployed for the following components:



  • VMS

  • Cameras

  • Operating system and OS modules (such as IIS) and database applications (such as MS Access)

Meraki Architecture

The Meraki MV's simplified architecture completely removes the need for a network video recorder (NVR), a video management system (VMS), servers, and other proprietary software by storing and processing video at the edge, on the camera itself (not in the cloud). No NVR means one less point of vulnerability, since the NVR/DVR is the second most targeted piece of the networking stack during cyber attacks. In conjunction with local storage, cloud management allows cameras to be configured and monitored from anywhere in the world with an internet connection. Metadata, thumbnails, and configuration data are stored in the cloud, though video data is not.


Security patch management is automatically handled and deployed by the Meraki dashboard. This means MV cameras are always up to date with the latest security fixes and new features. As a Meraki security camera solution does not require additional servers, software, or devices, there is no need to update or maintain other systems.




With regard to our data centers, the Meraki service is colocated in tier-1 data centers with certifications such as SAS70 type II / SSAE16 and ISO 27001. These data centers feature state of the art physical and cyber security and highly reliable designs. All Meraki services are replicated across multiple independent data centers. This means services fail over rapidly in the event of a catastrophic data center failure. More information about our data centers can be found on our Cisco Meraki data centers information page.

Passwords and Administrators

Traditional Administration

With a traditional camera system, passwords are required for the NVR/DVR, cameras, VMS, and server operating systems. Typically, no central repository exists for managing all of these passwords. Therefore, many administrators opt to keep the default password or create very simple, easy to guess passwords, like “password.”  Also, as employees leave the organization, the lack of centralized password management makes it difficult to ensure those who should no longer have access are removed from the system. Traditional systems do have the ability to create admin and user accounts with varying levels of permissions. If site administrators do decide to implement this security best practice, the lack of a central repository for account credentials means distributed environments with multiple NVR/DVRs must manage accounts by connecting to each NVR/DVR in their deployment. The pitfalls of traditional security camera deployments’ password and administrator access follow:


  • Often have a default username/password which can be found online

  • Weak passwords - no complex password enforcement

  • No central repository for system password management


Meraki Administration

The Meraki dashboard makes centrally managing administrator accounts very simple, while still providing the most secure options for password management. Unlike traditional systems, multiple passwords are not needed for the different devices and servers on the network. Each administrator uses his/her unique credentials to access dashboard, which offers visibility and control of only the cameras they have permission to manage. The inherent centralized management capabilities of the dashboard mean that access and permissions are easily audited, and administrators are easy to add and remove from the system. The dashboard supports SAML integration with existing databases to enable use of a directory service for usernames and passwords.


Dashboard has advanced security options like two-factor authentication, strong password requirements, and password expirations that can be configured to meet an organization's security policy. Two-factor authentication adds an extra layer of security to an organization's network. After an administrator enters his/her username and password, a one-time passcode is sent via SMS that must be entered to complete the login. In the event that a hacker guesses or learns an administrator's password, they still will not be able to access the organization's account, as the hacker does not have the administrator's phone.


Below are the security options built into the Meraki dashboard.




Role-based administration allows for the creation of administrators for certain subsets of an organization. These roles can be further customized with the specific levels of access an admin should have on the network. Role-based administration reduces the chance of accidental or malicious misconfiguration, and restricts errors to isolated parts of the network.


When a network admin with camera-only privileges logs into the dashboard, their view is restricted in terms of both devices and functionality. The menu is simplified for easy access to the cameras. Camera-only admins have view-only rights, so they are unable to make changes to the image settings, including focus and zoom, or quality and retention settings.


Camera-only admins are frequently used to give an individual access to only specific cameras. For example, a person could be given access to view live video footage from cameras on a building's 5th floor, allowing them to see who is walking on that floor before admitting them into certain parts of the building. In this scenario, the camera-only admin settings for the person "Daisy Leg" would be to view live footage for cameras with the "5th_floor" tag.



Secured Access and Encryption

Traditional Access and Encryption Solutions
Local Access

With traditional solutions, a camera continuously streams footage to an NVR for recording. This data stream is typically unencrypted and unsecured. By default, local access to view recorded video from the GUI is via unencrypted HTTP ports. Enabling secure HTTPS access requires deploying and managing certificates, which is often beyond the knowledge or skill set of many administrators, so data traversing the network is left unencrypted.

Remote Access  

Remote access with traditional camera solutions is not available out of the box and requires additional VPN and/or complex firewall configuration. If an organization chooses to use VPN, a head-end VPN device needs to be deployed and configured on-site. VPN software must be installed and configured on all administrators’ devices that will be used for remote access. Devices without VPN configurations will not be able to remotely access the system.


If remote access is set up through port forwarding or 1:1 NAT, then SSL/TLS certificates must be used to ensure encrypted access to the system. This requires managing, deploying, and renewing certificates for the camera system. Managing certificates can be complex, and proper configuration and management require specific knowledge and skill sets. Port forwarding can also expose systems to known vulnerabilities, such as hardcoded root usernames/passwords (set by vendors) that cannot be changed.


With traditional solutions, end-to-end encryption and security tend to be treated as optional. Some manufacturers have specific cameras and solutions with these features built in; they label them as “Cyber Secure” options.


End-to-end encryption is not always possible, as it requires that all components of a solution support this functionality. With traditional deployments, the cameras, NVRs, and VMS tend to be from different manufacturers.  Collaboration between manufacturers is required to implement an encryption solution integrated into all components.


Companies that support in-transit encryption from the NVR to the viewer offer it as an optional feature that is not enabled by default. If enabled, the encryption key needs to be managed and installed on all devices that will be used to view the cameras locally or remotely. Because of the need for manual installation and management, encryption in transit from the NVR to the user is rarely used.


Finally, with regard to encryption at rest, most manufacturers do not offer a solution to encrypt data stored on NVRs/DVRs. If unauthorized users obtain the drives, they may be able to access and view recorded footage.

The Meraki Access and Encryption Solution

MV cameras are easily accessed through any modern web browser, with no plug-ins to download. The Meraki dashboard intelligently determines whether the viewing computer is on the same local network as the cameras. If it is, video streams directly and securely over the LAN, saving WAN bandwidth. If not, the dashboard proxies video through the cloud to the remote client.

Local Access

If the client has an IP route to the private IP address of the camera, a secure connection will be established between the two. This occurs when the client is on the LAN or connected over site-to-site or client VPN. In the LAN scenario, no WAN bandwidth is used for the video streaming. Direct streaming is indicated by a green check mark in the bottom left corner of the video stream.
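The direct-versus-proxy decision hinges on whether the viewer can route to the camera's private IP address. As a rough illustration of that idea (not Meraki's actual logic), Python's `ipaddress` module can classify an address as privately routable:

```python
import ipaddress

def is_private_address(ip):
    """Rough illustration: a private (RFC 1918) address is a candidate
    for direct LAN streaming; a public one implies cloud proxy fallback."""
    return ipaddress.ip_address(ip).is_private
```

For example, `is_private_address("10.0.0.5")` is true, while a public address such as `"8.8.8.8"` is not.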

Remote Access

No special configuration (VPNs, port forwarding, etc.) is needed for remote access; the dashboard is accessible anywhere with internet access. If the client does not have an IP route to the private IP address of the camera, the dashboard automatically streams video via cloud proxy. Cloud proxy streaming is indicated by a cloud symbol in the bottom left corner of the video stream. Meraki can also detect upstream SSL inspection and potential man-in-the-middle attacks: because Meraki clients only trust certificates from the Meraki CA (certificate authority), no connection is established if a certificate is injected into the chain, adding another layer of protection for customers' data.
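The "only trust the Meraki CA" behavior is an instance of certificate pinning. The sketch below uses Python's standard `ssl` module to show the general idea of a pinned client context; it is not Meraki's implementation, and the CA bundle path is a hypothetical parameter:

```python
import ssl

def make_pinned_context(ca_bundle=None):
    """Client TLS context that rejects any chain not rooted in the pinned CA,
    which is how an injected SSL-inspection certificate would be refused."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED   # peer must present a valid cert
    ctx.check_hostname = True             # and it must match the hostname
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    if ca_bundle:
        # Trust ONLY this CA; any other issuer fails the handshake.
        ctx.load_verify_locations(cafile=ca_bundle)
    return ctx
```

A connection opened with this context fails closed when a middlebox substitutes its own certificate, rather than silently accepting it.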


Whether viewing cameras remotely or locally, access to the Meraki dashboard is only available over HTTPS. This ensures all communications between an administrator's browser, the MV management interface, and the camera are always encrypted.




Upon initial boot up, the MV camera goes through a full disk encryption process. This ensures all footage on the camera is encrypted at rest. Additionally, each camera automatically purchases, provisions, and renews its own publicly signed SSL certificate. As a result, all footage is encrypted in transit between the camera and the browser. Lastly, management data is encrypted using Meraki's secured mtunnel technology. All of this is enabled by default and cannot be turned off, ensuring that access to cameras and the dashboard is always encrypted and secured, regardless of an organization's security expertise.


The direct streaming certificate configuration is very secure and follows a popular format used by Google, Facebook, Yahoo, and others. It allows for high throughput without compromising on security.


Technical breakdown of certificates and encryption:

  • Streaming certificates:

    • Hashing algorithm is SHA256

    • Signing algorithm is RSA2048

    • Key parameters are secp384r1

    • Key exchange is Diffie-Hellman 2048

    • Cipher is AES128

  • Encryption at rest:

    • Hashing algorithm is SHA256

    • Key size is 256 bit

    • Cipher is AES256
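For intuition, the parameters above map onto standard primitives available in Python's standard library. This is an illustration of the key size and hash output length only, not the camera's code:

```python
import hashlib
import secrets

# A 256-bit key, as used for AES256 encryption at rest.
disk_key = secrets.token_bytes(32)
assert len(disk_key) * 8 == 256

# SHA256 produces a 256-bit digest (64 hex characters).
digest = hashlib.sha256(b"sample frame data").hexdigest()
assert len(digest) == 64
```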

Alerts and Logging

Traditional Alerts and Logging Solutions

In traditional systems, alerting requires integrating the NVR/DVR and/or VMS with an email server. This integration adds complexity and requires technical skill to deploy. If an organization has multiple NVRs/DVRs, the integration must be configured on every device. Because alerts originate from the NVR/DVR, alerting is usually only available for camera status. This means alerting will not function if an NVR/DVR goes offline, and organizations may not know there are issues with cameras or storage until there is a need to pull footage.

The Meraki Solution

The Meraki dashboard can send email alerts when network configuration changes are made, enabling the entire IT organization to stay abreast of new policies. Change alerts are particularly important for large or distributed IT organizations. Knowing when a camera is malfunctioning or offline is crucial, and alerts can be configured to enable proactive system maintenance and monitoring.




The Login Attempts page displays historical login information for the dashboard organization. A login event is generated any time a current organization or network administrator attempts to log in to the dashboard, whether via a regular dashboard login or SAML. These events record the following information about the login attempt:

  • Email: the email address that was used for the login attempt

  • IP Address: the IP address that sourced the login attempt

  • Location: approximate Geo-location of the IP that sourced the login attempt

  • Type: type of login attempt, either 'Login' (normal Dashboard login) or 'SAML'

  • Status: displays the success or failure of the login attempt

  • Timestamp: the timestamp of the login attempt
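When exporting or processing these events, the record has a natural shape. The structure below is a hypothetical illustration for filtering failed attempts, not a Meraki API schema:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    email: str      # address used for the attempt
    ip: str         # source IP address
    location: str   # approximate geo-location
    type: str       # 'Login' (normal dashboard) or 'SAML'
    status: str     # success or failure
    timestamp: str

def failed_attempts(events):
    """Surface failed logins for security review."""
    return [e for e in events if e.status != "Success"]
```

Regularly reviewing the failed subset is a simple way to spot brute-force attempts against administrator accounts.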




Additionally, Meraki provides a searchable configuration change log, which indicates what configuration changes were made, who they were made by, and which part of the organization the change occurred in. Auditing configuration and login information provides greater visibility into your network.



Designing Meraki MV Security Camera Solutions

There are a number of ways to design an IP surveillance system. The most important part of the design is identifying areas of security concern and positioning cameras to cover those areas. There are many ways to design camera coverage for the same building, and too many designs are built primarily with installation cost in mind rather than treating the system as an investment in protecting people and assets. The first steps to building a good system are analyzing the building or facility and conducting a site survey.

Site Survey

Conducting a site survey helps provide an understanding of the security needs of a building/facility, and determines the requirements to address those needs. At the conclusion of a site survey, there should be a clear understanding of what needs to be monitored, the materials/parts needed, labor required, total number and locations of cameras to be installed, and an estimated cost for deploying the solution.

Pre-site Survey Requirements

The list below is a starting point for identifying deployment needs and will help ensure a better site survey outcome:


  • Acquire to-scale floor plans of the building(s)

    • One purpose of the site survey is to determine the mounting locations of the cameras to be installed. Having floor plans helps both the site administrator and the installer fully understand the intent of the design and requirements. If outdoor camera coverage is also required, try to obtain external building plans as well.

  • Locate all network equipment closets

    • During the site survey it is necessary to understand existing network equipment, as the cameras will most likely be powered by and connected to the network. Identifying these locations beforehand is necessary.

  • Ask the simple questions:

    • Why is an IP video surveillance system needed?

    • What purpose does the system serve?

    • What are the requirements the system MUST have and why MUST the deployment have them?

    • When does the project need to be done?

    • Where is camera coverage required?

    • Who and/or what needs to be protected?

    • Are there any restrictions around mounting cameras to walls and/or ceilings?  

    • Are there any restrictions around running CAT5e/6/7 cabling throughout the building/facility?

  • Get keys and access to ALL parts of the building/facility

    • This may be difficult as some companies/entities have high levels of security. Be sure to coordinate with management/security/facilities, and explain the purpose of and requirements for the survey.

  • Schedule and treat the site survey like a project

    • Coordination and communication are key to making the site survey a success. Identifying issues in the pre-site survey meeting can save a lot of time during the site survey.

  • Understand and know the County/State/Local regulations around cameras/surveillance for each facility to be surveyed

Conducting a Site Survey

The site survey determines where to place the cameras. It may also uncover additional suggestions or recommendations that were not initially considered.


Site Survey Checklist

  • Have a system to take extensive notes and make recommendations

  • Have several copies of the floor plans handy for marking where cameras should be installed

  • Take lots of pictures! Pictures help convey the design much more easily and are extremely helpful for documentation.

  • Consider the following to determine placement for each camera:

    • Take into consideration camera position and areas of high contrast - bright natural light and shaded darker areas.

    • It is HIGHLY recommended to have at least two (2) vantage points on each ingress and egress point. Having multiple cameras covering the same area is a GOOD thing, as it provides redundancy.

    • Consider the following areas as examples:

      • Entrances, exits, loading, and/or delivery entrances to the building(s)

      • Any gateway(s) or parking areas/lots where employees/guests park vehicles

      • Cashier, ATM, and other locations handling monetary transactions

      • Highly used areas such as lunch rooms, reception areas, break rooms, waiting areas, hallways, etc.

      • Perimeter coverage of building, including walkways, fences, patio, and outdoor accessible areas used by employees, guests, and others

      • High value items and areas where security and visibility are a necessity

    • The closer a camera is positioned and the narrower its field of view, the easier subjects are to detect and recognize. General purpose coverage provides overall views.

    • Will camera placement require a man lift for installation?

      • How difficult and high is the mount? This may affect future maintenance.

    • What mounting equipment is required for each camera and how will they be mounted?

      • Height of installation

      • Distance to targets

      • Lighting

      • Angle of placement

      • Mounting style and mounts (ceiling vs. wall)

    • Determine if new cable runs are required and the distance of the cable run to the nearest networking closet

    • If vandalism of cameras is a concern, use the IP66-rated vandal-resistant cameras offered in the Meraki portfolio

  • When it comes to audio recording, be aware of local laws

  • Determine if existing networking equipment is adequate for new security cameras

    • Are there enough ports on the switch(es)?

    • Are additional network switches required?

    • Are these ports PoE and does the equipment have adequate PoE budget to compensate for security cameras?

    • Is the UPS adequate to handle the additional power draw and meet company policy standards for runtime during a power outage?

    • Are power injectors required?

    • Create network diagrams and document internet connections

    • What are the WAN design and LAN uplink connections, and are they adequate to support the necessary streaming?

  • Are there dedicated computers for video surveillance?

    • What are the specifications of these computers and do they meet the minimum requirements to run a dedicated video wall?

    • Which employees have access now and who will require access to the new security camera system?

  • Other notable considerations:

    • Use planning tools as resources for determining proper camera placement and illustrating what the outcome of an installation could look like

    • Consider using a live camera on a stick connected to a Meraki MX65 (or similar device with PoE ports and internet access via a USB cellular provider) and a UPS on a rolling cart, similar to an active wireless site survey. This is very time consuming, but it provides examples of what camera footage will look like after installation. It would require at least an MV21, MV12N, and/or MV12W (each of these models has different optics).

Post-Site Survey Report and Deliverables

The site survey process starts with information gathering, continues with the actual on-site survey, and concludes with documentation of the results and findings. The site survey report should be presented in a format that clearly shows the recommendations and design for the security system. This ensures all parties fully understand the requirements and deliverables for implementation. Below are recommendations for what should be included in the final report from a site survey:


  • A full write-up on findings, obstacles, and recommendations

    • Include pictures

    • Explain in detail what is needed from the network

      • Additional switches

      • Additional network cable drops

  • A floor plan indicating locations of all networking closets and recommendations on camera locations

  • LAN/WAN diagrams and information about Internet connection

    • Show ports and cable types connecting all network devices

    • Document VLANs, ACLs, Routing

  • Complete equipment/materials list necessary to complete design

  • Configuration details of existing network equipment

    • Is QoS adequate?

    • Will a new VLAN schema need to be built out?

    • Do firewall ports need to be opened to the Meraki cloud?

    • Is the existing security system computer adequate to handle video streams?


Best Practices for Designing a Meraki MV Security System

Meraki MV cameras are designed to simplify deployment and enable more efficient implementation of a security system. Many people are fully capable of installing an MV camera without a site survey, and there are many instances in which a site survey may not be necessary, such as smaller installations with fewer than 10-15 cameras or replacements of an existing system. However, it is always a good idea to have a plan, determine the scope of the project, and think through how a security system will perform and interact before deploying. This section covers best practices for developing and implementing a new physical security system.

Build a Physical Security System Requirements Plan

A physical security system requirements plan (PSSRP) is developed by an organization or company to outline requirements for a security system. Borrowing ideas from plans of other entities is okay, as long as they meet the security requirements for the organization installing the system. The goal of a PSSRP is to ensure consistency across multiple buildings, which helps with network security and accessibility.

Define a Network Layout in the Meraki Dashboard

A network in the Meraki dashboard is a logical separation of devices managed in a single container. A network can be defined in a number of ways, but many Meraki customers divide networks up geographically so that each building/location can be managed independently of other locations. There are many ways to divide up networks, and some customers may choose to create only one. When planning the network design within the Meraki dashboard, it is helpful to think about which users may need access to the system and which parts they will need to access. Within the dashboard, a user can be given rights at the organization or network level; separating each location into its own network makes it easy to grant an individual access to just the location they are responsible for. In addition to organization and network-wide administrators, the dashboard also offers a number of camera-only admin capabilities, as shown below.





Create a Naming and Tagging Schema for Cameras Being Installed

By default, Meraki MV cameras use the device's MAC address as the camera name. This helps identify each camera initially, but it is not easy to remember which camera is represented by which 12 characters. Naming and tagging become even more important the larger the deployment. Make it a point to give each camera a name that makes sense, is logical, and is easily understood when looking at a list of cameras. Tags are applied to cameras to group them with similar cameras, and multiple tags can be applied to a single camera. Tags are useful for sorting or finding a group of cameras and for applying features or administrative rights to a group of cameras.
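A consistent pattern makes camera names self-describing. A hypothetical naming schema built from site, floor, area, and index might be generated like this (the field choices are an illustration, not a Meraki convention):

```python
def camera_name(site, floor, area, index):
    """Build a predictable camera name from location fields.
    e.g. camera_name('HQ', 2, 'LOBBY', 3) -> 'HQ-F2-LOBBY-CAM03'"""
    return f"{site}-F{floor}-{area}-CAM{index:02d}"
```

Names generated this way sort cleanly in camera lists and make it obvious where a camera lives without consulting documentation.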

Design a VLAN Schema, Determine QoS Policies, and Apply Network Security

Creating layer 3 VLAN boundaries has been common in network security for years, as network administrators use VLAN separation to help protect devices and data. Creating a separate VLAN for security cameras only is highly recommended, and each camera's switch port should be configured as an access port (a trunk is not needed). This makes it possible to create access control lists on the layer 3 SVI to specify exactly what can access those devices. It also enables the use of QoS on a per-VLAN basis throughout the campus/building network. The QoS marking recommendation is a DSCP of 40 (CS5 - Broadcast Video). Be sure to have a QoS policy in place throughout the network, and analyze all aspects of the routed traffic, including switches and routers. Meraki MV cameras automatically tag for QoS, so upstream network devices only need to trust the tag.
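On a Cisco Catalyst-style switch, a camera-facing port might look like the sketch below. This is IOS-style pseudocode: the interface name and VLAN number are examples, and the exact QoS trust syntax varies by platform and software version, so treat it as a starting point rather than a verified configuration:

```
interface GigabitEthernet1/0/10
 description MV security camera
 switchport mode access
 switchport access vlan 40   ! dedicated camera VLAN (example number)
 mls qos trust dscp          ! trust the camera's DSCP 40 (CS5) marking
```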

Determine Administrative Access to the System

As mentioned previously, administrative access is an important consideration when designing an organization or network schema. Organization-level administrators have access to the entire dashboard: all networks and organization-wide pages, including licensing, inventory, creating/deleting networks, etc. Network-level administrators can access and manage everything within a network in an organization, meaning they can see and modify general settings. Organization- and network-level admins can be given read or write access. Camera-only admins have access only to cameras; access can be granted to all cameras, to individual cameras by name, or to groups of cameras by tag. Permissions can be set to allow the user to view/export all footage, view all footage, or view only live footage.

Determine Quality and Retention for Each Camera

It is important to create a policy for video quality and retention for each camera. Policies may differ based on camera location, purpose, and what is being recorded. It is possible that all cameras may have the same policy, but there may be different needs for different groups of cameras.




In most cases, having at least some amount of recorded video is recommended. However, some locations and/or countries have very specific privacy laws; if this is the case, it is important to configure cameras to adhere to these laws. By default, Meraki cameras are set to always record footage, and delete only when space runs out (files follow a first-in-first-out, or FIFO, methodology). When needed, cameras can be set to record based on a schedule, and delete footage after a certain period of time. Example configurations for footage deletion and recording schedules can be seen below:






Motion-based retention can be enabled when feasible to extend the storage capacity for each camera. When enabled, the camera will always retain the most recent 72 hours of continuous footage recorded. After three days, video clips which do not contain any motion are automatically deleted, allowing the camera to store only footage of interest and increase retention. Motion-based retention is disabled by default.
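Conceptually, motion-based retention works like the pruning function below. This is an illustration of the described policy, not camera firmware:

```python
from datetime import timedelta

def prune_motion_based(clips, now, keep_hours=72):
    """Keep the trailing 72 h of continuous footage; older clips
    survive only if they contain motion (FIFO applies beyond that)."""
    cutoff = now - timedelta(hours=keep_hours)
    return [c for c in clips if c["start"] >= cutoff or c["has_motion"]]
```

A recent motionless clip is retained, while an old motionless clip is dropped, stretching the effective retention for footage of interest.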




Video resolution isn’t a big mystery: the larger number in 1080p vs. 720p means there are more lines in the frame of video being displayed. Higher resolution means higher bandwidth and storage requirements, and the same holds true for frame rate (fps). Estimated retention is automatically calculated for each camera based on its current settings, including 24x7, scheduled, or motion-based retention, along with video resolution and quality. Higher video resolution and quality are recommended, but aren’t necessary in all deployments.
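The trade-off between quality and retention is simple arithmetic. The rough estimate below ignores codec variability and motion-based savings, and any bitrate you plug in is an assumption rather than an MV specification:

```python
def retention_days(storage_gb, bitrate_mbps, hours_per_day=24.0):
    """Approximate days of footage a given amount of storage can hold."""
    bytes_per_day = bitrate_mbps * 1e6 / 8 * 3600 * hours_per_day
    return storage_gb * 1e9 / bytes_per_day
```

For example, 256 GB at a steady 1 Mbps recorded 24x7 works out to roughly 24 days; halving the bitrate (lower resolution/quality) roughly doubles the retention.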




Some Meraki MV cameras, like the MV12, can record audio. Audio recording is disabled by default. Determine whether recording audio is a requirement and, if so, whether it is legal. Privacy laws vary from country to country, and not adhering to them can result in serious repercussions.




Night Mode Configuration

When ambient light falls below certain thresholds, the cameras can switch to night mode, making them more sensitive to infrared illumination. By default, night mode is set to “auto” and the built-in infrared illuminators of Meraki MV cameras are set to “on during night mode.” For first generation cameras, “Transition thresholds” can be adjusted to fine-tune night (and day) mode; by default, the transition thresholds are 0.5 lux for night and 3.0 lux for day. Keeping the default settings is recommended.

Second generation cameras automatically transition in and out of night mode based on optimal transition thresholds. They prefer night mode in low-light environments, as it provides much higher video quality for physical security purposes: night mode allows for easier detection and classification of objects in the scene when video is reviewed.




Motion Alerts

Motion alerts send notifications via email when motion is detected either within the frame or a selected region of a camera. These can be extremely useful in monitoring areas where tracking motion is important. Motion alerts can be configured to be sent always, or only during a certain schedule. Scheduling motion alerts is useful when monitoring an area after hours. Motion alerts are disabled by default.
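An after-hours alert schedule is essentially a time-window check, with the wrinkle that an overnight window wraps midnight. A minimal sketch of that logic (the 18:00 to 06:00 window is an example, not a default):

```python
from datetime import time

def in_alert_window(t, start=time(18, 0), end=time(6, 0)):
    """True when t falls inside the alerting window. A window that wraps
    midnight (start > end) is the union of [start, 24h) and [0, end)."""
    if start <= end:
        return start <= t < end
    return t >= start or t < end
```

With the example window, an 11 p.m. motion event would alert while a noon event would not.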




Create an Inventory List for All Cameras

This is multifaceted, as it helps with pre-installation setup and deployment as well as with post-installation documentation. Having something as simple as an Excel spreadsheet listing each camera by model number, serial number, name, MAC address, location description, and other details in the PSSRP is a big plus when coordinating efforts among multiple hands. This can easily be done with the CSV export tool within the Meraki dashboard from a network or organization inventory.
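An exported inventory can also be generated or post-processed programmatically. The sketch below renders camera records as CSV text; the column set mirrors the fields suggested above (the dict keys are illustrative, not a dashboard export format):

```python
import csv
import io

FIELDS = ["model", "serial", "name", "mac", "location"]

def inventory_csv(rows):
    """Render a list of camera dicts as CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```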

Define Necessary Video Walls for Monitoring Feeds

Video walls can be configured to allow for simultaneous viewing of multiple camera feeds. It may not be possible to determine all video wall needs prior to implementation, as customers may not know what they want or need. However, it is useful to have some basic video wall needs defined, as it is helpful in conveying how the system can operate/what it can do.

Determine which Users Need Alerts

A big advantage of the Meraki MV system is that the camera device (node) functions in relationship to other equipment in the dashboard. This enables alerting/notification of administrators in the event that the system goes offline. In many other systems, the security cameras, video management system, and storage solutions operate independently in silos. If one part of the system fails, there may not be a process to alert or notify administrators of the failure. This results in many administrators not knowing that a camera or storage system is offline until there is a need to pull footage. As part of the deployment, determine which users should be notified of issues, and configure alerts to be sent in the event of operational disruptions.

Dedicated Security or Monitoring Station(s)

Monitoring stations are common in instances where security guards need to watch multiple areas of a facility or campus. The Meraki dashboard can be configured with video walls to view up to 16 camera streams at once per browser. System requirements for machines running video walls are available in our Hardware Guidelines for MV Streaming Workstations document.

Implementation and Installation of Meraki MV Cameras

Pre-Install Preparation for MV Cameras
Powering MVs

MV cameras are powered by Power over Ethernet (PoE) via the Ethernet cable. Power consumption for the MV12 and MV21 is within the 802.3af standard (PoE); the outdoor MV71 requires 802.3at (PoE+).
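Before installation, it is worth sanity-checking the switch's PoE budget against the planned camera count. The per-camera wattages below are assumptions for illustration only; always confirm draws against the model datasheets and the switch's advertised budget:

```python
# Assumed worst-case draws in watts -- verify against datasheets.
POE_DRAW_W = {"MV12": 12.95, "MV21": 12.95, "MV71": 25.5}

def poe_budget_ok(camera_models, switch_budget_w, headroom=0.8):
    """Return (fits, total_draw_w). Reserves 20% headroom by default
    so the switch is not run at the edge of its PoE budget."""
    total = sum(POE_DRAW_W[m] for m in camera_models)
    return total <= switch_budget_w * headroom, total
```

For example, four MV12s fit comfortably within a 124 W budget, but six MV71s would not.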

Time Synchronization

A vital part of a security camera system is time synchronization for each camera. This is handled automatically through the Meraki dashboard; no local NTP server is necessary.

Assigning IP Addresses

Like any other network device, the MV camera requires an IP address. MV units must be added to a subnet that uses DHCP and has available DHCP addresses in order to operate correctly. At this time, MV cameras do not support static IP assignment; consider using DHCP reservations if you need the IP address to remain constant.
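With ISC dhcpd, for example, a reservation might look like the fragment below. The hostname, MAC address, and fixed address are hypothetical placeholders; substitute the values from your dashboard inventory and camera VLAN:

```
host mv-cam-01 {
  hardware ethernet 00:18:0a:aa:bb:cc;   # camera MAC from dashboard inventory
  fixed-address 10.40.0.21;              # address stays constant across renewals
}
```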

Check and Configure Firewall Settings

If a firewall is in place, it MUST allow outgoing connections on particular ports to the Meraki dashboard. The most current list of outbound ports and IP addresses for your particular organization can be found here.

DNS Configuration

Each MV generates a unique domain name to enable secure direct streaming functionality. These domain names resolve to an A record containing the private IP address of the camera, and any public recursive DNS server will resolve them. If using an on-site DNS server, please allow * or configure a conditional forwarder so that local domains are not appended to * and these domain requests are forwarded to Google public DNS.

Configuring a Network in Dashboard

All dashboard configurations should be done prior to installing any cameras. This allows the camera to associate with the correct organization/network in dashboard and download its configuration.


The following is a brief overview of the steps required to add an MV to your network. For detailed instructions on creating, configuring, and managing Meraki camera networks, refer to the online documentation.

  1. Log in to the Meraki dashboard. If this is your first time, create a new account.

  2. Find the network you want to add your cameras to, or create a new network.

  3. Add your cameras to your network. You will need your Meraki order number (found on your invoice) or the serial number of each camera, in an “xxxx-xxxx-xxxx” format, located on the bottom of the unit.

  4. Verify that the camera is now listed under Cameras > Monitor > Cameras.

  5. For an easy way to view and configure, install the Meraki App on your smartphone or tablet.


Note: During first-time setup, MV cameras will automatically update to the latest stable firmware. Some features may be unavailable until this automatic update is completed. This process may take up to 10 minutes, as it also includes whole-disk encryption. If you view cameras in dashboard during the setup, you may see the error message “Could not reach camera" while attempting autofocus. You will need to wait until the camera finishes upgrading to the latest stable firmware to use this feature. Please do not unplug cameras until they fully complete the upgrade process.

Installing and Configuring MV Cameras


Note:  Leave the plastic film cover on all lenses until installation is complete. Take photos of installed cameras for reference.  Add a name and physical address to each camera.

Installation Steps
  1. Depending on your camera model, please follow the installation instructions in the respective Installation Guide.

  2. Adjust and focus the camera, using either:

    1. Cisco Meraki App

    2. Computer (use any operating system and any browser - no plugins needed)

  3. Configure various video settings based on your design principles (see Design section):

    1. HDR (TBD)

    2. Sensor crop

    3. Video quality

    4. Scheduled recording

    5. Motion-based recording

    6. Motion alerts

    7. Night mode

  4. Restrict the video viewing and export permissions for administrators as described here.

Post-Implementation and Troubleshooting

Run Dark Mode

Run dark disables the LED lights on all MVs. This feature is useful in situations where the lights may be annoying, distracting, or overly conspicuous. For example, the LEDs can be disabled to prevent outdoor MVs from drawing attention at night. To enable dark mode, add the tag “run_dark” to the camera. More details can be found in our Dark Mode documentation.

Troubleshooting IR Reflections

All Meraki MVs, with the exception of the MV32, are equipped with infrared (IR) illuminators. These can be turned on in night mode and used to provide an illuminated view.  If the picture appears too blurry, please follow this Video Quality Troubleshooting guide.

Best Practice Design - Endpoint Management

General Systems Manager Deployment Guide

Cisco Meraki Systems Manager is a complete endpoint management solution that provides deep visibility and control over your Android, Chrome OS, iOS, macOS, and Windows devices. It unifies endpoint management into a single pane of glass through an easy-to-use, cloud-based Dashboard shared with the rest of the Meraki stack, and is the only solution to bring network level security and visibility to the endpoint.

Key capabilities include:

  • Manage and monitor endpoint devices
  • Manage and distribute mobile and enterprise applications
  • Investigate and change device functionality based on device status

Systems Manager Capabilities

Manage and Monitor Endpoint Devices

Cisco Meraki Systems Manager provides visibility into managed endpoint devices to help you better understand and react to changes. State changes include events such as a non-sanctioned application being installed, corporate applications being removed, and a device leaving a predetermined area. The dynamic tagging functionality in Systems Manager continually assesses the state of endpoints and automatically takes corrective action. Systems Manager profiles that are deployed on devices restrict or enable device capabilities based on your organization’s best practices.

Key capabilities include:

  • Enforcing endpoint-specific restrictions (screenshots, camera usage, password length, screen lock activation, etc.)
  • Applying network-based configuration (VPN, per-app VPN, wireless, etc.)
  • Applying security certificates
  • Monitoring and alerting when a device’s status changes (the device leaves a physical area, installs specific software, disconnects from a network, etc.)
Manage and Distribute Applications

Systems Manager application management enables the deployment of macOS, Windows, Android, and iOS applications. Organizing multi-device application management in a single interface allows you to monitor and set policies on applications holistically.

Key capabilities include:

  • Delivering the right applications to the right devices based on tagging mechanisms
  • Loading and distributing macOS and Windows applications from the cloud

  • Provisioning and managing licensing for iOS and macOS through Apple’s Volume Purchase Program

  • Provisioning Google Play Store applications through Android Enterprise

Apply and Investigate Device Security Status

Systems Manager applies and actively enforces the security status of macOS, Windows, iOS, and Android devices to maintain device security integrity. By setting and responding to security requirements, you can better understand and control device access and capabilities.

Key capabilities include:

  • Requiring that certain applications be loaded and running across all devices
  • Requiring minimum operating system levels on all devices
  • Requiring devices check in online with a specified frequency
  • Requiring devices to be locked with passwords
  • Requiring that devices not be jailbroken (iOS) or rooted (Android)
  • Requiring that firewalls be enabled (desktop devices)

Solution Design


Customers must set up a Cisco Meraki account and obtain licensing for Meraki Systems Manager. Systems Manager licensing is offered as a 1-, 3-, or 5-year subscription.

Logical Topology

The logical topology consists of endpoint devices enrolling in and communicating with the Cisco Meraki cloud, and at times with MDM services from the operating system provider, such as Google or Apple notification services. If communication between the endpoints and the cloud traverses a firewall, you will need to open ports as described here. Additional information on Systems Manager-specific firewall settings can be found here.

Best Practices

Deployment settings should vary depending on the level of control your organization has over devices (for example, BYOD versus company-owned devices). Consider the following settings, especially for company-owned or single-purpose devices.

  • Apply device security settings:
    • Allow all mandatory apps
    • Deny all non-sanctioned apps—for example, ensure that apps like Tor browsers aren’t installed on devices
    • Mandate passcodes on devices combined with minimum screen-locking periods
    • Mandate that firewalls are running on devices
    • Mandate that devices are not jailbroken or rooted
    • Mandate a minimum secure operating system for each device type
  • Monitor and remediate devices
    • Alert when devices install non-sanctioned software
    • Alert when mission-critical devices leave the network or go offline
    • Alert when devices violate mandated security settings
    • Restrict non-corporate cloud apps
    • Remove device profiles when devices leave their proper geographical area or violate device security rules
    • Remove device applications when devices leave their proper geographical area or violate predefined device security rules
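The posture checks above can be sketched as a simple evaluation function. This is an illustrative sketch only: the field names, app names, and version thresholds are hypothetical examples and do not reflect Systems Manager's actual data model.

```python
# Illustrative posture check for the best practices above. Field names,
# app names, and thresholds are hypothetical, not Systems Manager's
# actual data model.

MANDATORY_APPS = {"Corporate Mail", "AnyConnect"}
BANNED_APPS = {"Tor Browser"}
MIN_OS = {"ios": (16, 0), "android": (13, 0)}  # example minimum versions

def posture_violations(device):
    """Return a list of human-readable violations for one device record."""
    violations = []
    installed = set(device.get("apps", []))
    if not MANDATORY_APPS <= installed:
        violations.append("missing mandatory apps")
    if BANNED_APPS & installed:
        violations.append("non-sanctioned apps present")
    if not device.get("has_passcode"):
        violations.append("no passcode set")
    if not device.get("firewall_enabled", True):
        violations.append("firewall disabled")
    if device.get("jailbroken"):
        violations.append("device is jailbroken/rooted")
    platform = device.get("platform")
    if platform in MIN_OS and tuple(device.get("os_version", (0, 0))) < MIN_OS[platform]:
        violations.append("OS below minimum version")
    return violations

device = {
    "platform": "ios",
    "os_version": (15, 7),
    "apps": ["Corporate Mail", "AnyConnect", "Tor Browser"],
    "has_passcode": True,
    "firewall_enabled": True,
    "jailbroken": False,
}
print(posture_violations(device))
```

In Systems Manager itself, checks like these are configured as security policies in the dashboard rather than written as code; the sketch only makes the evaluation logic concrete.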

Solution Deployment Concepts

The following key concepts will be helpful in understanding how to set up your Cisco Meraki Systems Manager environment. Thinking about these steps beforehand will simplify initial deployment and ongoing management.

  • Enrollment: When devices enroll into Systems Manager, they give you full device access for setting user restrictions, managing applications, and enabling device visibility and management.
  • Tags: Systems Manager uses tags to verify that the right devices get the right applications, profiles, and restrictions. Tags can be applied manually, automatically, and even dynamically, based on a device’s status.
  • Profiles: Profiles control device configurations and enable or restrict device access, depending on the use case.
  • Applications: Systems Manager enables application delivery and management through public app stores and custom applications hosted in a cloud-based repository.
  • Security: Security-solution sets within Systems Manager include securing the device itself, verifying that only secure corporate applications are installed, and processes to wipe data in the event a device is lost or stolen.
  • Troubleshooting: Systems Manager has built-in tools to troubleshoot mobile and desktop devices.

Enrollment

Before devices can be managed within Systems Manager, they have to be enrolled in your Enterprise Mobility Management (EMM) network. Different types of enrollment can be used to meet the needs of different device types or deployment models. For example, while the simplicity of fully automated enrollment is ideal, this method does not suit Bring-Your-Own-Device (BYOD) deployments. It also isn’t compatible with all devices. Some of the available enrollment methods are described below.

This is a quick introduction to a few enrollment concepts. For comprehensive guides to enrolling devices for each operating system, refer to the Enrolling Devices article.


Systems Manager Enrollment Authentication

To provide an extra layer of security, you can require authentication upon enrollment through services such as Active Directory, Azure, Google, or Meraki authentication. Authentication is compatible with all types of enrollment, and it has additional benefits beyond security. First, enrollment authentication ties an owner to a device automatically. Second, enrollment authentication can tie in a user’s groups (either LDAP or those managed by Cisco Meraki solutions) to all of their devices as dynamic tags, for automatic grouping.

For a comprehensive guide to enrollment authentication, refer to the SM Enrollment Authentication article.


Fully Automated Enrollment

With fully automated enrollment, a device is enrolled into Systems Manager automatically. It can be configured so that the user has no option to cancel or prevent the enrollment. In addition, the device will automatically have apps, controls, and settings provisioned based on the person using the device (device owner) with no direct user or administrator configuration required.

This type of enrollment allows for the highest levels of EMM control. It is possible only with iOS and macOS devices that are eligible for Apple’s Device Enrollment Program (DEP). With DEP, devices can be directed by Apple to automatically enroll into Systems Manager when the user first powers on the device. This eliminates all prestaging and the need for Apple Configurator.

Automated enrollment can significantly reduce the administrative cost of deploying devices. This benefit increases in proportion to the number of devices being deployed.

Partially Automated Enrollment

Partially automated enrollment supports a wider range of devices, but workflows vary based on the operating system of the device. It can be completed by the end user or by an administrator who prestages the device. For example, iOS devices that are not in DEP can be provisioned in bulk through the Mac application Apple Configurator, or Android devices configured in Device Owner mode can be provisioned during the device’s initial setup process. 

As with automated enrollment, two core functions are performed: installation of the Systems Manager profile, app, or agent to the device; and configuration of apps, settings, and controls.

Meraki wireless products can be integrated with Systems Manager to simplify and automate this process for users. This feature, called Systems Manager Sentry Enrollment, checks all clients connecting to an SSID and forces them through the onboarding process if it detects that they are not enrolled in Systems Manager.

Manual Enrollment

Manual enrollment supports the widest range of devices since there is no reliance on vendor- or platform-specific features. This type of enrollment is often suitable for BYOD environments. Installation of Systems Manager can be performed by a user or administrator by visiting the enrollment page and following the instructions. Manual enrollment does not require a user database.

A unique Systems Manager network ID is used to identify which Systems Manager network the device should be joined to. The network ID and distribution options (email, text message (SMS), or QR code for the end user) can be found in the dashboard.
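Once devices are enrolled, the device list can also be retrieved programmatically. The sketch below is hedged: the `/networks/{networkId}/sm/devices` endpoint path and bearer-token header follow the public Meraki Dashboard API v1 documentation, but verify both (and the exact response shape) against the current API reference; the sample data at the end is made up.

```python
# Hedged sketch: pulling the Systems Manager device list via the Meraki
# Dashboard API v1. Verify the endpoint path and response fields against
# the current API reference before relying on them.
import json
import urllib.request

BASE = "https://api.meraki.com/api/v1"

def get_sm_devices(api_key, network_id):
    """Fetch the enrolled-device list for one dashboard network."""
    req = urllib.request.Request(
        f"{BASE}/networks/{network_id}/sm/devices",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def devices_with_tag(devices, tag):
    """Pure helper: filter a device list by a dashboard tag."""
    return [d for d in devices if tag in d.get("tags", [])]

# Demonstration on canned data (no API call is made here):
sample = [
    {"name": "ipad-01", "tags": ["Corporate", "Sales"]},
    {"name": "mac-02", "tags": ["BYOD"]},
]
print([d["name"] for d in devices_with_tag(sample, "Sales")])
```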



Tags

Tags are used to group devices and to inform you of a device’s current state within Systems Manager. These tags, once generated, can be used to define the apps, profiles, and settings that are provisioned by Systems Manager. For a comprehensive guide to tags, refer to Using Tags in Systems Manager.

There are three main types of tags:

  • Static tags (category: Device Tag) are generally applied manually to individual devices by the administrator.
  • Dynamic tags (category: Policy Tag) are automatically applied to devices based on their state and can change depending upon certain factors:
    • Time-based tags use time-of-day information to implement device configurations, enforce policy restrictions, and enable application access.
    • Geofence tags use location-based information to implement device configurations, enforce policy restrictions, and enable application access.
    • Security policy tags use a device’s posture to implement device configurations, enforce policy restrictions, and enable application access.
  • Owner tags (category: User Tag) use identity information to implement device configurations, enforce policy restrictions, and enable application access. Owner tags can be created or automatically imported from certain directory systems.

Scoping apps and profiles with tags is accomplished by using logical operators (AND and OR), which allows for a high degree of granularity when setting a device scope.
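The AND/OR scoping described above reduces to simple set operations. A minimal sketch with illustrative tag names; the dashboard expresses the same idea as scoping an item to devices with "all" or "any" of a list of tags.

```python
# Sketch of the AND/OR scoping logic: the dashboard scopes apps and
# profiles to devices carrying all (AND) or any (OR) of the scope tags.
# Tag names here are illustrative.

def in_scope(device_tags, scope_tags, mode="all"):
    """mode="all" -> every scope tag required (AND);
    mode="any" -> one matching tag suffices (OR)."""
    device_tags, scope_tags = set(device_tags), set(scope_tags)
    if mode == "all":
        return scope_tags <= device_tags
    return bool(scope_tags & device_tags)

assert in_scope({"Corporate", "Sales"}, {"Corporate", "Sales"}, "all")
assert not in_scope({"Corporate"}, {"Corporate", "Sales"}, "all")
assert in_scope({"BYOD"}, {"Sales", "BYOD"}, "any")
```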



Profiles

After a device is enrolled, additional configuration profiles can be installed via Systems Manager. These profiles contain settings that are installed onto your managed devices, such as Wi-Fi access, VPN access, device restrictions, app home-screen layouts, email ActiveSync settings, and much more. Profiles allow you to easily customize and secure your enrolled devices. For more information, review Configuration Profiles.

In the Cisco Meraki dashboard, the Systems Manager > MDM > Settings page allows you to create or delete configuration profiles. These profiles are not to be confused with management profiles, which are installed when a device is first enrolled into Systems Manager.

Profiles are scoped to devices based on tags. This allows you to specify the devices, users, and specific conditions required for a device to receive a profile.

In an enterprise environment, it is often necessary to preconfigure or limit the availability of some features. These can be restrictions like disabling the camera or configuring settings like email or Wi-Fi. Restrictions and settings can be collected together into a profile, and devices can have multiple profiles applied to them.


Multiple Profiles

Systems Manager also uses tags to scope which devices get which profiles. Tags allow for a highly granular or hierarchical approach to applying restrictions to devices. In an enterprise environment, you may want to create baseline or global profiles that apply to a larger group of devices, like BYOD or Corporate. You can then apply more specific profiles targeted at smaller groups, like Sales or Back Office. This eliminates the need for administrators to maintain the same global settings across multiple profiles for each device use case. A global profile can then be updated in the future, and all associated devices will update automatically. If a device receives multiple profiles that have conflicting settings, the most restrictive applies. The ability to use multiple profiles allows for granular device restrictions with simple management.
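The "most restrictive wins" behavior for conflicting profiles can be illustrated with a simple merge over boolean allow-flags. The setting names here are hypothetical, not actual profile payload keys.

```python
# Illustrative merge showing "most restrictive wins" when a device
# receives multiple profiles with conflicting settings. The boolean
# allow-flags are hypothetical, not real payload keys.

def effective_restrictions(profiles):
    """Combine allow-flags across profiles; a deny (False) always wins."""
    merged = {}
    for profile in profiles:
        for key, allowed in profile.items():
            merged[key] = merged.get(key, True) and allowed
    return merged

global_profile = {"allow_camera": True, "allow_app_store": True}
corporate_profile = {"allow_camera": False}  # more specific, more restrictive
print(effective_restrictions([global_profile, corporate_profile]))
```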

Assigning a profile to devices in the dashboard uses the same scoping method discussed earlier: profiles can be scoped to static or dynamic tags. Dynamic tags reduce the work required to manage a large number of devices, while also providing automated control. For example, if a device is not physically in the office, then it can have the office restrictions removed for home use through the geofencing feature. For more information, see Geofencing with Managed Devices.
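The geofencing idea behind dynamic tags reduces to a distance check between a device's reported location and the fence's center. A rough sketch; the coordinates and radius below are made-up example values, and Systems Manager performs this evaluation for you in the cloud.

```python
# Rough sketch of the distance check behind a geofence tag. The fence
# center, radius, and device coordinates are made-up example values.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(dev_lat, dev_lon, fence_lat, fence_lon, radius_km):
    return haversine_km(dev_lat, dev_lon, fence_lat, fence_lon) <= radius_km

# Device reported ~2.4 km from the fence center, so a 1 km fence fails:
print(inside_geofence(37.7763, -122.3918, 37.7749, -122.4194, 1.0))
```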



Settings

The Systems Manager > MDM > Settings page allows you to configure the specific settings associated with a particular configuration profile. These settings and profiles can be used to ensure that your devices meet business requirements and receive the configurations your users need to work.

After creating a new profile, click the Add Settings option on the left to begin adding settings payloads to your profile. Profiles can contain multiple payloads at once, and multiple profiles can be installed on a device. Your settings and profiles should be tailored to how your device deployment and tag structure are organized. For more information, see Configuration Settings.



Applications

Cisco Meraki Systems Manager can deliver both mobile and desktop applications through the dashboard. This includes apps found in public app stores (iOS, macOS, Android) and custom apps (iOS, macOS, Android, Windows) that can be hosted externally or on the Cisco Meraki cloud.


Mobile Apps—iOS and Android

For iOS apps, Systems Manager integrates directly with both public and enterprise app stores. It also supports deploying custom .ipa files. To silently install apps without Apple IDs or user prompting, please review how to use Systems Manager with Apple's Volume Purchase Program (VPP).

For Android apps, Systems Manager can be used to deploy Google Play apps or custom .apk files. Organizations using Android Enterprise can create managed stores that will display only approved applications. See Push Applications.

Desktop Apps—Windows and macOS

Systems Manager can install .msi, .exe, .pkg, .app, and .dmg files. For information on how to install software on Windows and Mac machines, see Installing Custom Apps. For macOS, VPP can also be used to license apps directly to an end user's Apple ID, which allows end users to install apps through the Mac App Store.

Application Delivery

For full guides on how to provision and update applications through Systems Manager, review the Systems Manager app deployment articles in the Meraki documentation.

App management scopes the applications that will be added to a device and installs them accordingly. Native (built-in) applications on devices provide functionality for managing everyday activities like email, calendars, contacts, and web browsing. For increased productivity and functionality on top of these native applications, hundreds of thousands of third-party apps are available in Apple’s App Store and the Google Play store for mobile devices, in addition to the many applications available for Windows and macOS desktop machines.

The Cisco Meraki dashboard provides several ways to distribute apps and app licenses to devices and to scope your app installation to certain devices. For mobile devices, there are public apps, like those found in the Apple App Store and Google Play store, and private apps, which are custom or enterprise iOS and Android apps. You can also scope traditional software apps for Windows and macOS platforms.

Step 1: To add applications to your Cisco Meraki dashboard, navigate to Systems Manager > MDM > Apps.

Step 2: Click the Add New button on the far right-hand side (Figure 10). Options are provided for iOS apps (publicly available), iOS enterprise apps, Android apps, macOS applications, and Windows applications.

Publicly available apps can be searched and downloaded from the respective platform’s app store. For other types of applications, the install file host is specified. You can upload the file to the dashboard or point to a URL accessible by the device. For more information on installing software for Windows or Mac devices, see Installing Custom Apps.



Security

Endpoint device security takes on multiple roles in a multi-cloud environment. By leveraging security profiles, you can verify that end users are using devices only in the manner you intend. By using a combination of notifications and remote wipe tools through Cisco Meraki Systems Manager, you can mitigate potential security breaches in the event a device is lost or stolen.

Breach Mitigation

By leveraging built-in notifications you can be alerted when a device:

  • Goes offline
  • Leaves a predefined geographical location
  • Fails security compliance

By leveraging built-in security tools you can:

  • Selectively wipe devices, removing all managed apps and managed profiles installed via Systems Manager without having to restore the device to factory settings
  • Erase the device, which restores the factory settings
  • Engage Lost Mode for supervised iOS devices

Using these two tools you can proactively address lost and stolen items to prevent potential security breaches.
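One way to combine these tools is a simple triage rule: prefer the least destructive response that still protects corporate data. A sketch of such a rule; the ownership labels below are illustrative examples, not dashboard fields.

```python
# Illustrative triage rule pairing the alert and wipe tools above:
# prefer the least destructive response that still protects corporate
# data. Ownership labels are examples, not dashboard fields.

def wipe_action(device):
    if device.get("supervised_ios"):
        return "lost_mode"       # recoverable: lock screen message, device kept
    if device.get("ownership") == "BYOD":
        return "selective_wipe"  # remove only managed apps and profiles
    return "erase"               # corporate asset: full factory reset

assert wipe_action({"supervised_ios": True}) == "lost_mode"
assert wipe_action({"ownership": "BYOD"}) == "selective_wipe"
assert wipe_action({"ownership": "Corporate"}) == "erase"
```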

Security Policies

Using geofencing and security policies allows you to constantly monitor devices for a myriad of compliance checks, including device location, root/jailbreak status, cellular data limits, and application deny lists. Security policies can be used to alert you of violations and automatically add or remove apps and profiles based on that status. For example, a security policy can remove secure email credentials and wireless settings if a user installs Tor software.

Once a policy is created, compliance information can be used to generate scheduled reports or control deployment of apps and profiles to clients through the use of dynamic tags generated by security policies. For example, in the case where a device becomes jailbroken or falls below the minimum OS version, a device’s access to internal VPN and wireless credentials can automatically be revoked and the apps uninstalled until the device is remediated and returns to compliance. Systems Manager can also send out automatic alerts to your administrative team to notify them when devices fall out of compliance or take actions such as removing management profiles.
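The policy-to-tag flow described above can be sketched as follows: a compliance check adds or removes a dynamic tag, and any apps or profiles scoped to that tag follow automatically. All names and thresholds in this sketch are illustrative, not Systems Manager internals.

```python
# Sketch of the policy-to-tag flow: a compliance check adds or removes a
# dynamic tag, and apps/profiles scoped to that tag follow automatically.
# All names and thresholds are illustrative.

def compliant(device, min_os=(16, 0)):
    return (not device.get("jailbroken")
            and tuple(device.get("os_version", (0, 0))) >= min_os)

def apply_dynamic_tag(device, tag="policy:compliant"):
    tags = set(device.get("tags", []))
    if compliant(device):
        tags.add(tag)
    else:
        tags.discard(tag)  # scoped VPN/Wi-Fi profiles drop off with the tag
    device["tags"] = tags
    return device

d = {"os_version": (15, 2), "jailbroken": False, "tags": {"policy:compliant"}}
print(apply_dynamic_tag(d)["tags"])
```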

For information on how to scope apps and profiles using security policy tags, see Using Tags in Systems Manager.



Troubleshooting

Systems Manager has built-in live tools that allow for device-level troubleshooting as well as live management of devices from the Cisco Meraki dashboard.

Mobile Live Tools

The live tools that are available depend on the device’s vendor and the level of access given to any Mobile Device Management (MDM) solution.


  • Mobile security: Mobile security gives remote control over devices, including:
    • Locking devices
    • Clearing passcodes
    • Selectively wiping devices (removing MDM-delivered applications and profiles)
    • Erasing devices (resetting devices to factory defaults)
    • Lock bypass, which allows you to disable the activation lock on an iOS device when you do not have access to the Apple ID account used to lock it
  • AirPlay (iOS only): The AirPlay tool can be used to initiate streaming to known AirPlay-supported devices using MAC or device name.
  • Power Control (iOS only): Power Control allows remote rebooting or shutdown of a device.
  • Send Notification: The Send Notification feature sends a pop-up notification to the device to alert an end user.
  • GPS Location (iOS only): GPS Location enables a request of the current location of an iOS device and delivers it back to the interface.
  • Single App Mode (iOS only): Single App Mode locks an iOS device into a specific mode in which only the app you select will be available for use on the device. This is also referred to as Kiosk mode.
  • Beacon (Android only): The Beacon feature enables an alarm to sound on an Android device to locate it.
  • OS Upgrade (iOS only): OS Upgrade supports remote upgrading of an iOS device.
  • Lost Mode (iOS only): When enabled, Lost Mode can push specific messages and contact information to the device on the lock screen.
Desktop Live Tools

The live tools that are available depend on the device’s vendor and the level of access given to the MDM solution.


  • Mobile Security (macOS only): Mobile security gives you remote control over devices, including:
    • Locking devices
    • Changing passwords
    • Selectively wiping devices (removing MDM-delivered applications and profiles)
    • Erasing devices (resetting devices to factory defaults)
  • Process List: The Process List tool provides an active list of all running processes on the device at the time of execution.
  • Command Line: The Command Line tool allows administrators to run shell commands on Windows and Mac devices.
  • Network Stats: Network Stats gives remote visibility into current network-specific information such as TCP connections, TCP statistics, and the device’s routing table.
  • Screenshot: The Screenshot feature enables access to a real-time screenshot of the desktop device.
  • Remote Desktop: The Remote Desktop feature enables full remote access to the MDM-managed desktop device.
  • Power Control: Power Control allows remote rebooting or shutting down of a device.
  • Send Notification: The Send Notification tool sends a pop-up notification to the device to alert the user.
  • OS Update (macOS only): OS Update supports remote updating of a macOS device.
