This document walks through the step-by-step configuration for deploying Google Network Connectivity Center (NCC) with Meraki vMXs as router appliance attachments for the NCC hub.
Why Google Network Connectivity Center
Customers are increasingly moving towards multi-region deployments in the cloud and require a more dynamic way to connect from branch sites to cloud workloads across different regions. While this can be done using VPC peering across regions with static routes, that approach quickly becomes cumbersome and difficult to manage.
With NCC, we can leverage BGP to dynamically exchange routes between the customer's on-prem sites and their cloud workloads, letting them easily extend their SD-WAN fabric to the cloud.
Moreover, the NCC hub resource reduces operational complexity through a simple, centralized connectivity management model. The hub, combined with Google's network, delivers reliable connectivity on demand.
vMXs deployed as router appliances work with Network Connectivity Center to enable dynamic route exchange with Google's network using BGP. This enables connectivity to a Virtual Private Cloud (VPC) network, letting customers seamlessly extend their SD-WAN fabric to the cloud.
Additionally, when adding new sites or even new subnets to existing sites you no longer need to manually update the routes on the GCP routing table.
Step 1) Prepare GCP environment
1. Log into your Google Cloud Console and navigate to the project selector page and select the appropriate project.
2. Create the required VPC resources for the SD-WAN VPC as mentioned here.
For more information about setting up your GCP environment for NCC deployment, please refer to the GCP documentation:
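As a reference point, the SD-WAN VPC and its subnet can also be created from the gcloud CLI. This is a hedged sketch; the network name (vmx-network), subnet name, region, and IP range below are illustrative and should match your own deployment plan.

```shell
# Create a custom-mode VPC for the SD-WAN / vMX deployment.
# All names, the region, and the range are illustrative examples.
gcloud compute networks create vmx-network --subnet-mode=custom

# Create the subnet the vMX instances will live in.
gcloud compute networks subnets create vmx-subnet \
    --network=vmx-network \
    --region=us-central1 \
    --range=192.168.4.0/24
```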
Step 2) Deploy vMXs from Google Marketplace
The steps for deploying virtual MXs from the GCP marketplace are out of scope for this document. For more information on deploying virtual MXs from the Google Marketplace please reference the following link: https://documentation.meraki.com/MX/MX_Installation_Guides/vMX_Setup_Guide_for_Google_Cloud_Platform_(GCP)
These vMX instances will be used as Router Appliance instances in the subsequent steps.
As part of post-deployment, make a note of the following values; they will be used in the subsequent steps:
Name: The name of the vMX instance, example: simar-vmx-gcp-ncc-1
Network: The network used to deploy the vMX, example: vmx-network
IP Address: The private IP address of the vMX instance, example: 192.168.4.3
Update the firewall rules for your vMX instance to allow TCP/179 for BGP.
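If you prefer the CLI, a firewall rule for BGP can be sketched as follows. The rule name, network name, and source range are illustrative assumptions; scope the source range to your Cloud Router interface subnet rather than opening it broadly.

```shell
# Allow BGP (TCP/179) within the SD-WAN VPC so the Cloud Router
# interfaces can peer with the vMX. Names/ranges are examples.
gcloud compute firewall-rules create allow-bgp-to-vmx \
    --network=vmx-network \
    --allow=tcp:179 \
    --source-ranges=192.168.4.0/24
```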
Step 3) Deploy the NCC Hub and Create Router Appliance Spokes
Currently, Network Connectivity Center permits only one hub per project.
Start by deploying an NCC Hub for your project, using the following steps.
- Login to the GCP Cloud Console and select the appropriate project from the project pull-down menu.
- Navigate to Hybrid Connectivity > Network Connectivity Center from the GCP console
- Enter a Hub name.
- Enter an optional Description.
- Verify the Project ID. If the project ID is incorrect, select a different project by using the pull-down menu at the top of the screen.
- Click Continue to Add Spokes.
- In the New spoke form, enter a Spoke name and optionally, a Description.
- Enter router appliance details.
- Set the Spoke type drop-down list to Router appliance.
- Select the Region for the Spoke.
- Choose a router appliance instance:
- Click Add Instance
- From the Instance drop-down menu, select the vMX instance deployed in the previous steps.
- To add more router appliance instances to this spoke, repeat the preceding step. When you are finished, click Done and continue to Save your spoke.
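For reference, the hub and router appliance spoke created in the console steps above can also be expressed with gcloud. This is a sketch: the hub and spoke names, region, zone, and instance URI are illustrative, and the instance name and IP reuse the example values noted in Step 2.

```shell
# Create the NCC hub (one hub per project).
gcloud network-connectivity hubs create ncc-hub \
    --description="NCC hub for vMX router appliances"

# Attach the vMX as a router appliance spoke. Replace the project,
# zone, instance name, and IP with the values noted in Step 2.
gcloud network-connectivity spokes linked-router-appliances create vmx-spoke \
    --hub=ncc-hub \
    --region=us-central1 \
    --router-appliance=instance="https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-central1-a/instances/simar-vmx-gcp-ncc-1",ip=192.168.4.3
```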
Step 4) Create a Cloud Router and Configure BGP Peering
- Navigate to Hybrid Connectivity > Network Connectivity Center > Spokes from the GCP console.
- In the Spoke name column, select a spoke to view the Spoke details page.
- In the Name column, click the expand icon to display the Configure BGP sessions links. Click any of these links to configure the Cloud Router and BGP sessions.
- Under the Cloud Router settings, choose Create new router, enter the required details, and click Create & Continue.
- Under BGP Sessions, set up two BGP sessions. For each router appliance instance, two BGP peers need to be configured, one for each Cloud Router interface. Fill in the form by entering a Name, a Peer ASN (your Meraki Org ASN), and an Advertised route priority (MED). Click Save and continue. Repeat the steps for the second peer and click Create.
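The Cloud Router and BGP peer configuration above can be sketched with gcloud as well. All names, IPs, and ASNs here are illustrative: 65000 stands in for the Google-side ASN and 64513 for your Meraki Org ASN, and the second interface/peer follows the same pattern.

```shell
# Create the Cloud Router in the SD-WAN VPC.
gcloud compute routers create ncc-cloud-router \
    --network=vmx-network \
    --region=us-central1 \
    --asn=65000

# Add a Cloud Router interface in the vMX subnet.
gcloud compute routers add-interface ncc-cloud-router \
    --region=us-central1 \
    --interface-name=int0 \
    --subnetwork=vmx-subnet \
    --ip-address=192.168.4.10

# Add a BGP peer on that interface pointing at the vMX router appliance.
gcloud compute routers add-bgp-peer ncc-cloud-router \
    --region=us-central1 \
    --peer-name=vmx-peer-0 \
    --interface=int0 \
    --peer-ip-address=192.168.4.3 \
    --peer-asn=64513 \
    --instance=simar-vmx-gcp-ncc-1 \
    --instance-zone=us-central1-a

# Repeat add-interface/add-bgp-peer for the second interface (e.g. int1,
# 192.168.4.11, vmx-peer-1) to complete both sessions.
```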
Step 5) Configure BGP peering on the vMX
The next step is to enable Auto VPN (set the vMX to be an Auto VPN hub on the Site-to-site VPN page) and configure the BGP settings on the GCP vMXs. Before we can configure the BGP settings in the Meraki dashboard, we need to obtain the BGP peer settings for the Cloud Router (interface IPs and ASN).
- Navigate to Hybrid Connectivity > Network Connectivity Center > Cloud Routers from the GCP console.
- Select the appropriate Cloud Router by clicking on it.
- From the Router Details section, obtain the Google ASN.
- From the BGP Sessions section, obtain the Cloud Router BGP IP.
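The same values can be read from the CLI; the router name and region below are illustrative and should match the Cloud Router created in Step 4.

```shell
# Google-side ASN configured on the Cloud Router.
gcloud compute routers describe ncc-cloud-router \
    --region=us-central1 \
    --format="value(bgp.asn)"

# Cloud Router side IPs of the BGP sessions.
gcloud compute routers get-status ncc-cloud-router \
    --region=us-central1 \
    --format="flattened(result.bgpPeerStatus[].ipAddress)"
```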
Once these values have been obtained, navigate to your virtual appliance(s) in the Meraki dashboard, open the Site-to-site VPN page, enable Auto VPN by selecting Hub, and then scroll down to the BGP settings.
As of MX17.10.2, BGP is only supported in Passthrough/VPN Concentrator vMXs, so you may need to first update your vMX configuration under Addressing & VLANs - Deployment Settings.
Select the dropdown to enable BGP and configure your local ASN (the Meraki Auto VPN autonomous system; make sure this matches the peer ASN used in Step 4), and then configure two eBGP peers with the values obtained above. Below is a screenshot of what the BGP config should look like for both your vMXs:
Step 6) Adding workload customer VPCs
There are two manual steps needed each time a new customer workload VPC is added.
1. Set up VPC Network Peering with the SD-WAN VPC
Set up VPC peering between the new workload VPC and the SD-WAN VPC, using the instructions mentioned here.
Make sure to enable "Exchange of Custom Routes" while configuring the VPC peering.
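As a sketch, the peering (with custom route exchange enabled in both directions) looks like this in gcloud; the peering and network names are illustrative assumptions.

```shell
# Peer the workload VPC with the SD-WAN VPC, exchanging custom routes.
gcloud compute networks peerings create workload-to-sdwan \
    --network=workload-vpc \
    --peer-network=vmx-network \
    --export-custom-routes \
    --import-custom-routes

# The reverse direction must be created from the SD-WAN VPC side.
gcloud compute networks peerings create sdwan-to-workload \
    --network=vmx-network \
    --peer-network=workload-vpc \
    --export-custom-routes \
    --import-custom-routes
```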
2. Add custom route on the Cloud Router for the workload VPC
Add a custom route range for the newly added VPC subnet.
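A hedged CLI equivalent for the custom advertisement: the router name, region, and the workload subnet range (10.10.0.0/24) are illustrative. Note that switching the router to custom advertisement mode replaces the default behavior, so the existing subnets are re-included via the all_subnets group.

```shell
# Advertise the new workload subnet from the Cloud Router so the vMX
# (and the Auto VPN branches behind it) learn the route.
gcloud compute routers update ncc-cloud-router \
    --region=us-central1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=10.10.0.0/24
```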
BGP Session Establishment
Post-deployment, the first thing to check is that the BGP session is established between the vMXs and the NCC Cloud Router. This can be done by navigating to Hybrid Connectivity > Network Connectivity Center > Cloud Routers from the GCP console and clicking on the BGP session to check its Status.
On the Meraki side, the event log for the vMX network can be checked to confirm that the BGP session has been established.
To further verify that the BGP session is established and route exchange is happening as expected, look at the branch-side MX route table. For the branch MX network, navigate to Security & SD-WAN > Route Table (click View new version in the upper right if not already on the new version of the route table).
Similarly, on the GCP console you should be able to see the Auto VPN branch routes learned in the VPC route table with the next hop as the vMX. To check the VPC route table, navigate to VPC Networks > Routes > Dynamic.
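The same checks can be sketched from the CLI; the router name and region are illustrative. A healthy session shows a state of Established, and the best routes should list the Auto VPN branch prefixes with the vMX IP as next hop.

```shell
# BGP session names and states on the Cloud Router.
gcloud compute routers get-status ncc-cloud-router \
    --region=us-central1 \
    --format="table(result.bgpPeerStatus[].name, result.bgpPeerStatus[].state)"

# Dynamic routes learned by the Cloud Router (next hop should be the vMX).
gcloud compute routers get-status ncc-cloud-router \
    --region=us-central1 \
    --format="flattened(result.bestRoutes[].destRange, result.bestRoutes[].nextHopIp)"
```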