Bandwidth shaping ensures that users do not consume more bandwidth than they should. The Meraki cloud includes an integrated bandwidth shaping module that enforces upload and download limits. This setting can be used, for instance, to assign more bandwidth to VoIP handsets on one SSID and less bandwidth to data-only users on another SSID. The bandwidth limits are enforced by the Meraki APs so that they are applied consistently to a wireless client, even if that client roams from one AP to another.
The Meraki dashboard supports separate upload and download limits. Asymmetric upload and download limits are useful, for example, when a user only needs to periodically download large images (e.g., CAD drawings) but not upload them. Specific application requirements and available bandwidth should be considered to determine the optimum bandwidth settings.
Bandwidth limits can be applied per SSID or per user. To configure per SSID bandwidth limits, go to the Firewall and Traffic Shaping page under the Configure tab.
To provide a better user experience when using bandwidth shaping, an administrator can enable SpeedBurst using the checkbox in the Bandwidth Limits section on the Access Control page. SpeedBurst allows each client to exceed their assigned limit in a “burst” for a short period of time, making their experience feel snappier while still preventing any one user from using more than their fair share of bandwidth over the longer term. A user is allowed up to four times their allotted bandwidth limit for a period of up to five seconds.
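Burst-then-throttle behavior of this kind is typically built on a token bucket. The sketch below models it under assumptions taken from the text (a client may burst to four times its limit for roughly five seconds); the actual SpeedBurst implementation is not public, and the class and parameter names here are illustrative.

```python
# Illustrative token-bucket model of burst-capable rate limiting.
# Assumed from the text: burst factor 4x, burst duration ~5 seconds.

class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate    # sustained limit, bytes/sec
        self.depth = depth  # burst allowance, bytes
        self.tokens = depth

    def allow(self, nbytes, dt):
        """Refill for dt seconds, then try to pass nbytes through."""
        self.tokens = min(self.depth, self.tokens + self.rate * dt)
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

rate = 1_000_000  # the client's assigned limit
# Depth sized so a full bucket sustains 4x the limit for about 5 seconds:
# net drain while bursting at 4x is 3x the rate, so depth = 3 * rate * 5.
bucket = TokenBucket(rate, depth=3 * rate * 5)

# A client bursting at 4x its limit succeeds at first, then throttles.
results = [bucket.allow(4 * rate, dt=1.0) for _ in range(8)]
```

Once the burst allowance is spent, the client falls back to roughly its sustained limit until it idles long enough for the bucket to refill.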
The Meraki dashboard supports per-user bandwidth limits when a customer-hosted RADIUS server is used.
Administrators can create shaping policies to apply per-user controls on a per-application basis. This allows throttling of recreational applications such as peer-to-peer filesharing programs and prioritization of enterprise applications such as Salesforce.com, ensuring that business-critical application performance is not compromised.
Traffic shaping policies consist of a series of rules that are evaluated in the order in which they appear in the policy, similar to custom firewall rules. There are two main components to each rule: rule definitions and rule actions.
Rules can be defined in two ways. An administrator can select from various pre-defined application categories such as Video & Music, Peer-to-Peer, or Email. The second method of defining rules is to use custom rule definitions. Administrators can create rules by specifying HTTP hostnames (e.g., salesforce.com), port numbers (e.g., 80), IP ranges (e.g., 192.168.0.0/16), or IP range and port combinations (e.g., 192.168.0.0/16:80).
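Top-down, first-match evaluation over such rule definitions can be sketched as follows. The rule list, action names, and flow fields are illustrative assumptions, not Meraki's internal format.

```python
# Sketch of first-match evaluation over custom shaping rule definitions:
# HTTP hostnames, ports, IP ranges, and range:port combinations.
import ipaddress

rules = [
    {"host": "salesforce.com", "action": "prioritize"},
    {"net": "192.168.0.0/16", "port": 80, "action": "obey_ssid_limit"},
    {"port": 6881, "action": "throttle"},  # illustrative P2P port
]

def match(rule, flow):
    if "host" in rule and not flow.get("host", "").endswith(rule["host"]):
        return False
    if "net" in rule and ipaddress.ip_address(flow["dst_ip"]) \
            not in ipaddress.ip_network(rule["net"]):
        return False
    if "port" in rule and flow["dst_port"] != rule["port"]:
        return False
    return True

def classify(flow, rules):
    # Rules are evaluated in order; the first matching rule's action applies.
    for rule in rules:
        if match(rule, flow):
            return rule["action"]
    return "default"
```

For example, a flow to na1.salesforce.com matches the first rule and is prioritized, while HTTP traffic to 192.168.1.10 falls through to the second rule.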
Traffic matching specified rule sets can be shaped and/or prioritized. Bandwidth limits can be specified to:
Ignore any limits specified for a particular SSID on the Access Control page (allow unlimited bandwidth usage)
Obey the specified SSID limits
Apply limits more restrictive than the SSID limits. To specify asymmetric limits on uploads and downloads, click on the Details link next to the bandwidth slider control.
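Resolving the limit that actually applies to a flow from these three options can be sketched as below; the function and action names are illustrative, not the dashboard's exact option strings.

```python
def effective_limit(ssid_limit_kbps, rule_action, rule_limit_kbps=None):
    """Resolve the limit applied to a flow from the per-SSID limit and
    the matching shaping rule's action (illustrative names)."""
    if rule_action == "ignore":    # unlimited bandwidth for this traffic
        return None
    if rule_action == "obey":      # fall back to the SSID limit
        return ssid_limit_kbps
    if rule_action == "custom":    # apply the stricter of the two limits
        return min(ssid_limit_kbps, rule_limit_kbps)
    raise ValueError(rule_action)
```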
The Meraki MR supports the Wireless Multimedia Extensions (WMM) standard for traffic prioritization. WMM is a Wi-Fi Alliance standard based on the IEEE 802.11e specification, with a focus on the EDCA component to help ensure that devices such as wireless VoIP phones operate well when connected to a Meraki wireless network. WMM provides four different traffic classes: voice, video, best effort, and background. Devices that support WMM and request a higher level of service, such as Wi-Fi handsets, will receive higher priority on the Meraki wireless network.
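The way packet markings select one of the four WMM classes can be sketched as follows, using the common convention of taking the top three bits of the DSCP value as the 802.1D user priority. This 3-MSB mapping is a widespread default, not a documented Meraki-specific behavior, and some devices remap individual code points.

```python
# Common default mapping from 802.1D user priority (top 3 DSCP bits)
# to the four WMM access categories.
AC_BY_USER_PRIORITY = {
    1: "Background", 2: "Background",
    0: "Best Effort", 3: "Best Effort",
    4: "Video", 5: "Video",
    6: "Voice", 7: "Voice",
}

def wmm_access_category(dscp):
    return AC_BY_USER_PRIORITY[dscp >> 3]  # top 3 DSCP bits = user priority
```

Under this mapping, unmarked traffic (DSCP 0) lands in Best Effort, CS1 (DSCP 8) in Background, AF41 (DSCP 34) in Video, and CS6 (DSCP 48) in Voice.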
WMM Power Save allows devices to “sleep” differently when they receive critical vs. non-critical packets. Devices that support WMM Power Save should experience extended battery life when using a Meraki network.
QoS keeps latency, jitter, and loss for selected traffic types within acceptable boundaries. When providing QoS for downstream traffic (AP to client), upstream traffic (client to AP) is treated as best-effort. The effect of QoS features might not be noticeable on lightly loaded networks. If latency, jitter, and loss are noticeable when the network is lightly loaded, this indicates a system fault, a network design problem, or a mismatch between the latency, jitter, and loss requirements of the application and the network over which the application is being run. QoS features begin to affect application performance as the load on the network increases.
Quality of Service (QoS) prioritization can be applied to traffic at Layers 2 and 3. Layer 2 prioritization is accomplished by specifying a value for the PCP tag in the 802.1Q header on outgoing traffic from the access point. This feature is only available for SSIDs where VLAN tagging is enabled. To prioritize traffic at Layer 3, a value is selected for the DSCP tag in the IP header on all incoming and outgoing IP packets. This also affects the WMM priority of the traffic. To fully benefit from this feature, upstream wired switches and routers must be configured for QoS prioritization as well.
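As an endpoint-side illustration of Layer 3 marking: the DSCP value occupies the upper six bits of the IP TOS byte, so on Linux it can be set per socket as below. The choice of DSCP_EF and a UDP socket are illustrative assumptions, not a Meraki configuration step.

```python
import socket

DSCP_EF = 46  # Expedited Forwarding, commonly used for voice traffic

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# DSCP is the upper 6 bits of the TOS byte; the low 2 bits are ECN,
# so the code point is shifted left by two before being set.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
```

Packets sent on this socket carry DSCP 46; whether upstream switches and routers honor that marking depends on their own QoS configuration, as noted above.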