Link Aggregation is a nebulous term that covers several implementations and underlying technologies. In general, link aggregation combines (aggregates) multiple network connections in parallel to increase throughput and provide redundancy. While there are many approaches, this article aims to clarify the differences in terminology.
Link Bonding (a.k.a. teaming, bundling, etc.)
This is generally implemented using 2 or more links between two logical devices: 2 servers, 2 switches, a server and a switch, or various other combinations. Using a standard such as LACP, the links are combined into a single logical link, with traffic spread evenly across them. Since this is typically done at Layer 2, failure detection and isolation can happen quickly, limiting the impact of a link failure. Link bonding is also useful for increasing the available throughput between two devices without purchasing far more expensive hardware (2x 1Gbps vs. 1x 10Gbps). See Figure 1 below.
Load balancing can also be used to describe link bonding. Generally speaking, though, load balancing is a term reserved for Layer 3+ operations. While application load balancers can distribute load across an array of devices for a particular application or purpose, this article will concentrate on Layer 3. In that sense, load balancing is commonly defined as a (mostly) even distribution of IP traffic across 2 or more links. This can be done by providing a device multiple equal-cost routes to the same destination over equally sized links. See Figure 2 below.
Load sharing is loosely defined as spreading network traffic across 2 or more equal or unequal links/paths. Load sharing can exist, for example, between a 10Mbps WAN link and a 100Mbps WAN link. While load sharing often provides the slowest recovery time (depending on the implementation and the nature of the failure), it is the easiest to implement, the most flexible, and still provides levels of redundancy that link bonding and load balancing cannot. As an example, load sharing can allow the use of two different ISPs, with different link speeds, when NAT is implemented. Without NAT (as with some other vendors' products), this becomes more difficult and can result in outbound traffic traversing one link while inbound traffic for the same conversation uses another. See Figure 3 below.
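As a rough illustration of sharing across unequal links, new flows can be distributed in proportion to link capacity while each flow stays pinned to one link. The sketch below is a hypothetical helper, not any vendor's actual algorithm; real devices hash packet headers in the forwarding path rather than strings.

```python
import hashlib
from bisect import bisect
from itertools import accumulate

def pick_wan_link(flow_id: str, capacities_mbps: list) -> int:
    """Deterministically map a flow to a WAN link, weighted by link capacity.

    Illustrative sketch only: flow_id stands in for a flow's 5-tuple, and
    the hash keeps a given conversation on the same link every time.
    """
    cumulative = list(accumulate(capacities_mbps))   # e.g. [10, 110]
    digest = hashlib.sha256(flow_id.encode()).digest()
    point = int.from_bytes(digest[:4], "big") % cumulative[-1]
    return bisect(cumulative, point)                 # index of the chosen link
```

With a 10Mbps and a 100Mbps link, roughly one flow in eleven lands on the slower link, and because the mapping is deterministic, a given conversation always uses the same link.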
Implementation by Cisco Meraki
Cisco Meraki MS switches support the open standard LACP to provide Layer 2 link aggregation, in the form of link bonding as described above. The MS's LACP hashing algorithm uses a flow's source/destination IP addresses, MAC addresses, and ports to determine which bonded link to use. This provides resilient, even load distribution across 2 or more links between two logical devices, with rapid failure detection.
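Conceptually, this kind of header-based hashing can be sketched as follows. This is a simplified illustration; the actual MS hashing algorithm is not public, and the function and field names here are assumptions.

```python
import hashlib

def bonded_link_index(src_mac: str, dst_mac: str,
                      src_ip: str, dst_ip: str,
                      src_port: int, dst_port: int,
                      num_links: int) -> int:
    """Map a flow's L2-L4 header fields to one member of a bonded link.

    Simplified sketch: real switches hash these bits in hardware. The key
    property shown is that every packet of a given flow hashes to the same
    member link, so packets within a flow are never reordered.
    """
    key = "|".join(str(f) for f in (src_mac, dst_mac, src_ip, dst_ip,
                                    src_port, dst_port)).encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links
```

Because the index depends only on header fields, distinct flows spread across the members while any single flow stays on one link.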
Please refer to the MS Series Administration Guide for details on how to implement.
Link aggregation is supported on ports sharing similar characteristics, such as link speed and media type (SFP/copper).
Cisco Meraki security appliances use a proprietary algorithm to provide load balancing across two Layer 3 links (if configured). This can be customized to use different ratios and specific rules for outbound traffic. Because NAT is used, flows that are part of a particular conversation will remain on the link on which they were placed.
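The ratio-plus-pinning behavior can be modeled in a few lines. This is a hypothetical sketch, not Meraki's proprietary algorithm: new flows are assigned by weighted choice, and known flows keep their uplink so the NAT mapping stays valid.

```python
import random

class UplinkBalancer:
    """Toy model of ratio-based load balancing with per-flow pinning."""

    def __init__(self, weights=(1, 1)):
        self.weights = weights     # e.g. (1, 1) for 50/50, (3, 1) for 75/25
        self.flow_table = {}       # flow 5-tuple -> uplink index

    def uplink_for(self, flow):
        # New flows get a weighted-random uplink; existing flows stay put,
        # so each NAT'd conversation uses a single uplink end to end.
        if flow not in self.flow_table:
            choices = range(len(self.weights))
            self.flow_table[flow] = random.choices(choices,
                                                   weights=self.weights)[0]
        return self.flow_table[flow]
```

The flow table is the important part: inbound replies must arrive on the uplink whose public IP was used for the outbound NAT, so a flow can never be moved mid-conversation.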
Please refer to MX Load Balancing and Uplink Preferences for details on how to implement.
Configuring Link Aggregation between MS and Cisco Switches
You may want to set up a bonded link between your Meraki MS series switch and a Cisco switch. This is often referred to as link aggregation, link bonding, or EtherChannel.
To configure 2 or more ports (up to 8) as an aggregate, navigate to Switch > Monitor > Switch ports, select the target ports, and choose "Aggregate". It is recommended that the target ports not be physically connected to anything during this step.
On your Cisco switch, you must enable LACP by setting the EtherChannel mode to active or passive depending on the behavior you desire. For further information, please see Cisco's documentation on Configuring EtherChannel (this document is for the Catalyst 3000 series).
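On the Cisco IOS side, a minimal configuration looks like the following (the interface range and channel-group number are placeholders; adjust them for your hardware):

```
Switch(config)# interface range GigabitEthernet1/0/1 - 2
Switch(config-if-range)# channel-group 1 mode active
Switch(config-if-range)# end
Switch# show etherchannel summary
```

Here "mode active" initiates LACP negotiation, while "mode passive" only responds to a peer that initiates. Once the bundle is up, "show etherchannel summary" should list the port-channel and its member ports as bundled.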