RFC 9014 | Interconnect for EVPN-Overlays | May 2021
Rabadan, et al. | Standards Track
This document describes how Network Virtualization Overlays (NVOs) can be connected to a Wide Area Network (WAN) in order to extend the Layer 2 connectivity required for some tenants. The solution analyzes the interaction between NVO networks running Ethernet Virtual Private Networks (EVPNs) and other Layer 2 VPN (L2VPN) technologies used in the WAN, such as Virtual Private LAN Services (VPLSs), VPLS extensions for Provider Backbone Bridging (PBB-VPLS), EVPN, or PBB-EVPN. It also describes how the existing technical specifications apply to the interconnection and extends the EVPN procedures needed in some cases. In particular, this document describes how EVPN routes are processed on Gateways (GWs) that interconnect EVPN-Overlay and EVPN-MPLS networks, as well as the Interconnect Ethernet Segment (I-ES), to provide multihoming. This document also describes the use of the Unknown MAC Route (UMR) to avoid Media Access Control (MAC) address scale issues on Data Center Network Virtualization Edge (NVE) devices.¶
This is an Internet Standards Track document.¶
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 7841.¶
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc9014.¶
Copyright (c) 2021 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.¶
[RFC8365] discusses the use of Ethernet Virtual Private Networks (EVPNs) [RFC7432] as the control plane for Network Virtualization Overlays (NVOs), where VXLAN [RFC7348], NVGRE [RFC7637], or MPLS over GRE [RFC4023] can be used as possible data plane encapsulation options.¶
While this model provides a scalable and efficient multitenant solution within the Data Center, it might not be easily extended to the Wide Area Network (WAN) in some cases, due to the requirements and existing deployed technologies. For instance, a Service Provider might have an already deployed Virtual Private LAN Service (VPLS) [RFC4761] [RFC4762], VPLS extensions for Provider Backbone Bridging (PBB-VPLS) [RFC7041], EVPN [RFC7432], or PBB-EVPN [RFC7623] network that has to be used to interconnect Data Centers and WAN VPN users. A Gateway (GW) function is required in these cases. In fact, [RFC8365] discusses two main Data Center Interconnect (DCI) solution groups: "DCI using GWs" and "DCI using ASBRs". This document specifies the solutions that correspond to the "DCI using GWs" group.¶
It is assumed that the NVO GW and the WAN Edge functions can be decoupled into two separate systems or integrated into the same system. The former option will be referred to as "decoupled interconnect solution" throughout the document, whereas the latter one will be referred to as "integrated interconnect solution".¶
The specified procedures are local to the redundant GWs connecting a DC to the WAN. The document does not preclude any combination across different DCs for the same tenant. For instance, a "Decoupled" solution can be used in GW1 and GW2 (for DC1), and an "Integrated" solution can be used in GW3 and GW4 (for DC2).¶
While the Gateways and WAN Provider Edges (PEs) use existing specifications in some cases, the document also defines extensions that are specific to DCI. In particular, those extensions are:¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
This section describes the interconnect solution when the GW and WAN Edge functions are implemented in different systems. Figure 1 depicts the reference model described in this section. Note that, although not shown in Figure 1, GWs may have local Attachment Circuits (ACs).¶
The following section describes the interconnect requirements for this model.¶
The decoupled interconnect architecture is intended to be deployed in networks where the EVPN-Overlay and WAN providers are different entities and a clear demarcation is needed. This solution solves the following requirements:¶
Support for the following optimizations at the GW:¶
In this option, the handoff between the GWs and the WAN Edge routers is based on VLANs [IEEE.802.1Q]. This is illustrated in Figure 1 (between the GWs in NVO-1 and the WAN Edge routers). Each MAC-VRF in the GW is connected to a different VSI/MAC-VRF instance in the WAN Edge router by using a different C-TAG VLAN ID or a different combination of S/C-TAG VLAN IDs that matches at both sides.¶
This option provides the best possible demarcation between the DC and WAN providers, and it does not require control plane interaction between both providers. The disadvantage of this model is the provisioning overhead, since the service has to be mapped to a C-TAG or S/C-TAG VLAN ID combination at both GW and WAN Edge routers.¶
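The per-service mapping described above can be sketched as follows. This is a minimal illustration, not an implementation; the service names and VLAN IDs are hypothetical.¶

```python
# Each service (MAC-VRF on the GW, VSI/MAC-VRF on the WAN Edge) is keyed
# by an (S-TAG, C-TAG) combination that must match on both sides of the
# VLAN handoff.
gw_handoff = {"tenant-red": (100, 10), "tenant-blue": (100, 20)}
wan_handoff = {"tenant-red": (100, 10), "tenant-blue": (100, 20)}

def mismatched_services(gw_map, wan_map):
    """Services whose VLAN combination differs between the GW and the
    WAN Edge router; these would fail to interconnect."""
    return sorted(s for s in gw_map if wan_map.get(s) != gw_map[s])

errors = mismatched_services(gw_handoff, wan_handoff)
```

The need to keep both maps consistent per service is exactly the provisioning overhead this option trades for its clean demarcation.¶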
In this model, the GW acts as a regular Network Virtualization Edge (NVE) towards the DC. Its control plane, data plane procedures, and interactions are described in [RFC8365].¶
The WAN Edge router acts as a (PBB-)VPLS or (PBB-)EVPN PE with Attachment Circuits (ACs) to the GWs. Its functions are described in [RFC4761], [RFC4762], [RFC6074], [RFC7432], and [RFC7623].¶
If MPLS between the GW and the WAN Edge router is an option, a PW-based interconnect solution can be deployed. In this option, the handoff between both routers is based on FEC128-based PWs [RFC4762] or FEC129-based PWs (for a greater level of network automation) [RFC6074]. Note that this model still provides a clear demarcation between DC and WAN (since there is a single PW between each MAC-VRF and peer VSI), and security/QoS policies may be applied on a per-PW basis. This model provides better scalability than a C-TAG-based handoff and less provisioning overhead than a combined C/S-TAG handoff. The PW-based handoff interconnect is illustrated in Figure 1 (between the NVO-2 GWs and the WAN Edge routers).¶
In this model, besides the usual MPLS procedures between GW and WAN Edge router [RFC3031], the GW MUST support an interworking function in each MAC-VRF that requires extension to the WAN:¶
If a PW-based handoff is used, the GW's AC (or point of attachment to the EVPN instance) uses a combination of a PW label and VLAN IDs. PWs are treated as service interfaces, defined in [RFC7432].¶
EVPN single-active multihoming -- i.e., per-service load-balancing multihoming -- is required in this type of interconnect.¶
The GWs will be provisioned with a unique ES for each WAN interconnect, and the handoff attachment circuits or PWs between the GW and the WAN Edge router will be assigned an ESI for each such ES. The ESI will be administratively configured on the GWs according to the procedures in [RFC7432]. This ES will be referred to as the "Interconnect ES" (I-ES) hereafter, and its identifier will be referred to as the "I-ESI". Different ESI types are described in [RFC7432]. The use of Type 0 for the I-ESI is RECOMMENDED in this document.¶
The solution (on the GWs) MUST follow the single-active multihoming procedures as described in [RFC8365] for the provisioned I-ESI -- i.e., Ethernet A-D routes per ES and per EVI will be advertised to the DC NVEs for the multihoming functions, and ES routes will be advertised so that ES discovery and Designated Forwarder (DF) procedures can be followed. The MAC addresses learned (in the data plane) on the handoff links will be advertised with the I-ESI encoded in the ESI field.¶
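The Type 0 I-ESI above can be sketched as follows: a 10-octet value whose first octet is the type (0x00) and whose remaining 9 octets are administratively chosen. The value bytes and MAC address below are illustrative, not taken from this document.¶

```python
def make_type0_esi(value9: bytes) -> bytes:
    """Build a 10-octet Type 0 ESI: type octet 0x00 followed by a
    9-octet administratively configured value (RFC 7432, Section 5)."""
    if len(value9) != 9:
        raise ValueError("ESI value field must be 9 octets")
    return b"\x00" + value9

# Illustrative I-ESI for the WAN interconnect ES.
i_esi = make_type0_esi(b"\x00" * 8 + b"\x01")

# A MAC learned in the data plane on the handoff link is advertised
# with the I-ESI in the ESI field.
mac_route = {"mac": "aa:bb:cc:00:00:01", "esi": i_esi}
```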
The following GW features are optional and optimize the control plane and data plane in the DC.¶
The use of EVPN in NVO networks brings a significant number of benefits, as described in [RFC8365]. However, if multiple DCs are interconnected into a single EVI, each DC will have to import all of the MAC addresses from each of the other DCs.¶
Even if optimized BGP techniques like RT constraint [RFC4684] are used, the number of MAC addresses to advertise or withdraw (in case of failure) by the GWs of a given DC could overwhelm the NVEs within that DC, particularly when the NVEs reside in the hypervisors.¶
The solution specified in this document uses the Unknown MAC Route (UMR) that is advertised into a given DC by each of the DC's GWs. This route is defined in [RFC7543] and is a regular EVPN MAC/IP Advertisement route in which the MAC Address Length is set to 48, the MAC address is set to 0, and the ESI field is set to the DC GW's I-ESI.¶
An NVE within that DC that understands and processes the UMR will send unknown unicast frames to one of the DC's GWs, which will then forward that packet to the correct egress PE. Note that, because the ESI is set to the DC GW's I-ESI, all-active multihoming can be applied to unknown unicast MAC addresses. An NVE that does not understand the Unknown MAC Route will handle unknown unicast as described in [RFC7432].¶
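The UMR and the resulting NVE forwarding decision can be sketched as follows. The route is simplified to a dict of the fields named above, and the I-ESI value is illustrative.¶

```python
# Unknown MAC Route per RFC 7543: a regular MAC/IP Advertisement with
# MAC Address Length 48, an all-zero MAC, and the GW's I-ESI.
I_ESI = bytes(9) + b"\x01"          # illustrative 10-octet ESI
ZERO_MAC = "00:00:00:00:00:00"

unknown_mac_route = {
    "mac_addr_len": 48,             # length stays 48 despite the zero MAC
    "mac": ZERO_MAC,
    "esi": I_ESI,
}

def next_hops(dest_mac, mac_table, umr_gws):
    """Known MACs follow the MAC table; unknown unicast goes to a GW
    that advertised the UMR. Because all GWs use the same I-ESI,
    all-active multihoming applies to unknown unicast traffic."""
    if dest_mac in mac_table:
        return [mac_table[dest_mac]]
    return umr_gws                  # unknown unicast: send to a DC GW

hops = next_hops("aa:bb:cc:00:00:01", {}, ["GW1", "GW2"])
```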
This document proposes that local policy determine whether MAC addresses and/or the UMR are advertised into a given DC. As an example, when all the DC MAC addresses are learned in the control/management plane, it may be appropriate to advertise only the UMR. Advertising all the DC MAC addresses in the control/management plane is usually the case when the NVEs reside in hypervisors. Refer to [RFC8365], Section 7.¶
It is worth noting that the UMR usage in [RFC7543] and the UMR usage in this document are different. In the former, a Virtual Spoke (V-spoke) does not necessarily learn all the MAC addresses pertaining to hosts in other V-spokes of the same network. The communication between two V-spokes is done through the Default MAC Gateway (DMG) until the V-spokes learn each other's MAC addresses. In this document, it is recommended that two NVEs in the same DC learn each other's MAC addresses for the same EVI; the NVE-to-NVE communication is always direct and does not go through the GW.¶
Another optimization mechanism, naturally provided by EVPN in the GWs, is the Proxy ARP/ND function. The GWs should build a Proxy ARP/ND cache table, as per [RFC7432]. When the active GW receives an ARP/ND request/solicitation coming from the WAN, the GW does a Proxy ARP/ND table lookup and replies as long as the information is available in its table.¶
This mechanism is especially recommended on the GWs, since it protects the DC network from external ARP/ND-flooding storms.¶
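The Proxy ARP behavior on the GW can be sketched as follows: reply from the cache when the target IP is known, and otherwise fall back to normal forwarding. The addresses used are illustrative.¶

```python
# Proxy ARP/ND cache built from EVPN MAC/IP routes, per RFC 7432.
proxy_arp_cache = {"10.0.0.1": "aa:bb:cc:00:00:01"}

def handle_arp_request(target_ip):
    """Return an (ip, mac) reply if the binding is cached; None means
    the request is handled by the normal procedures instead of being
    answered locally (and DC flooding is avoided on a hit)."""
    mac = proxy_arp_cache.get(target_ip)
    return (target_ip, mac) if mac else None

reply = handle_arp_request("10.0.0.1")   # answered from the cache
miss = handle_arp_request("10.0.0.99")   # not cached
```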
Link/PE failures are handled on the GWs as specified in [RFC7432]. The GW detecting the failure will withdraw the EVPN routes, as per [RFC7432].¶
Individual AC/PW failures may be detected by OAM mechanisms. For instance:¶
When the DC and the WAN are operated by the same administrative entity, the Service Provider can decide to integrate the GW and WAN Edge PE functions in the same router in order to save on Capital Expenditure (CAPEX) and Operating Expenses (OPEX). This is illustrated in Figure 2. Note that this model no longer provides an explicit demarcation link between DC and WAN. Although not shown in Figure 2, note that the GWs may have local ACs.¶
The integrated interconnect solution meets the following requirements:¶
Regular MPLS tunnels and Targeted LDP (tLDP) / BGP sessions will be set up to the WAN PEs and RRs as per [RFC4761], [RFC4762], and [RFC6074], and overlay tunnels and EVPN will be set up as per [RFC8365]. Note that different route targets for the DC and the WAN are normally required (unless [RFC4762] is used in the WAN, in which case no WAN route target is needed). A single type-1 RD per service may be used.¶
In order to support multihoming, the GWs will be provisioned with an I-ESI (see Section 3.4), which will be unique for each interconnection. In this case, the I-ES will represent the group of PWs to the WAN PEs and GWs. All the [RFC7432] procedures are still followed for the I-ES -- e.g., any MAC address learned from the WAN will be advertised to the DC with the I-ESI in the ESI field.¶
A MAC-VRF per EVI will be created in each GW. The MAC-VRF will have two different types of tunnel bindings instantiated in two different split-horizon groups:¶
Attachment circuits are also supported on the same MAC-VRF (although not shown in Figure 2), but they will not be part of any of the above split-horizon groups.¶
Traffic received in a given split-horizon group will never be forwarded to a member of the same split-horizon group.¶
As far as BUM flooding is concerned, a flooding list will be composed of the sublist created by the inclusive multicast routes and the sublist created for VPLS in the WAN. BUM frames received from a local Attachment Circuit (AC) will be forwarded to the flooding list. BUM frames received from the DC or the WAN will be forwarded to the flooding list, observing the split-horizon group rule described above.¶
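The flooding rule above can be sketched as follows. Group membership names are illustrative; the point is that a frame is never flooded back into the split-horizon group it arrived on.¶

```python
# Two split-horizon groups in the MAC-VRF: EVPN-Overlay bindings (DC)
# and VPLS PWs (WAN), plus local ACs outside both groups.
groups = {
    "dc": ["overlay-nve1", "overlay-nve2"],
    "wan": ["pw-pe1", "pw-pe2"],
}
local_acs = ["ac1"]

def flood_list(ingress_group):
    """ingress_group is 'dc', 'wan', or None for a local AC. A frame
    reaches the other group (and local ACs), never its own group."""
    out = [m for g, members in groups.items() if g != ingress_group
           for m in members]
    if ingress_group is not None:
        out += local_acs
    return out

from_wan = flood_list("wan")   # reaches DC bindings and local ACs only
from_ac = flood_list(None)     # reaches both groups
```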
Note that the GWs are not allowed to have an EVPN binding and a PW to the same far end within the same MAC-VRF, so that loops and packet duplication are avoided. If a GW can successfully establish both an EVPN binding and a PW to the same far-end PE, the EVPN binding will prevail, and the PW will be brought down operationally.¶
The optimization procedures described in Section 3.5 can also be applied to this model.¶
This model supports single-active multihoming on the GWs. All-active multihoming is not supported by VPLS; therefore, it cannot be used on the GWs.¶
In this case, for a given EVI, all the PWs in the WAN split-horizon group are assigned to the I-ES. All the single-active multihoming procedures as described by [RFC8365] will be followed for the I-ES.¶
The non-DF GW for the I-ES will block the transmission and reception of all the PWs in the WAN split-horizon group for BUM and unicast traffic.¶
In this case, there is no impact on the procedures described in [RFC7041] for the B-component. However, the I-component instances become EVI instances with EVPN-Overlay bindings and potentially local attachment circuits. A number of MAC-VRF instances can be multiplexed into the same B-component instance. This option provides significant savings in terms of PWs to be maintained in the WAN.¶
The I-ESI concept described in Section 4.2.1 will also be used for the PBB-VPLS-based interconnect.¶
B-component PWs and I-component EVPN-Overlay bindings established to the same far end will be compared. The following rules will be observed:¶
The optimization procedures described in Section 3.5 can also be applied to this interconnect option.¶
This model supports single-active multihoming on the GWs. All-active multihoming is not supported by this scenario.¶
The single-active multihoming procedures as described by [RFC8365] will be followed for the I-ES for each EVI instance connected to the B-component. Note that in this case, for a given EVI, all the EVPN bindings in the I-component are assigned to the I-ES. The non-DF GW for the I-ES will block the transmission and reception of all the I-component EVPN bindings for BUM and unicast traffic. When learning MACs from the WAN, the non-DF MUST NOT advertise EVPN MAC/IP routes for those MACs.¶
If EVPN for MPLS tunnels (referred to as "EVPN-MPLS" hereafter) is supported in the WAN, an end-to-end EVPN solution can be deployed. The following sections describe the proposed solution as well as its impact on the procedures from [RFC7432].¶
The GWs MUST establish separate BGP sessions for sending/receiving EVPN routes to/from the DC and to/from the WAN. Normally, each GW will set up one BGP EVPN session to the DC RR (or two BGP EVPN sessions if there are redundant DC RRs) and one session to the WAN RR (or two sessions if there are redundant WAN RRs).¶
In order to facilitate separate BGP processes for DC and WAN, EVPN routes sent to the WAN SHOULD carry a different Route Distinguisher (RD) than the EVPN routes sent to the DC. In addition, although reusing the same value is possible, different route targets are expected to be handled for the same EVI in the WAN and the DC. Note that the EVPN service routes sent to the DC RRs will normally include a [RFC9012] BGP encapsulation extended community with a different tunnel type than the one sent to the WAN RRs.¶
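A minimal sketch of this per-domain rewrite follows: a route that becomes active in the MAC-VRF is re-advertised with a different RD, the other domain's route target, the corresponding BGP encapsulation, the GW's own next hop, and the I-ESI. All RD, RT, and address values are hypothetical.¶

```python
I_ESI = bytes(9) + b"\x01"   # illustrative 10-octet I-ESI

def readvertise_to_wan(dc_route, wan_rd, wan_rt, gw_ip):
    """Rewrite an active DC MAC/IP route for advertisement to the WAN RRs."""
    route = dict(dc_route)       # the DC-side route itself is untouched
    route["rd"] = wan_rd         # different RD per domain
    route["rt"] = wan_rt         # WAN-specific route target
    route["encap"] = "mpls"      # RFC 9012 tunnel type for the WAN side
    route["next_hop"] = gw_ip    # GW sets itself as next hop
    route["esi"] = I_ESI         # DC routes represented by the I-ES
    return route

dc_route = {"mac": "aa:bb:cc:00:00:01", "rd": "192.0.2.1:10",
            "rt": "64500:10", "encap": "vxlan",
            "next_hop": "192.0.2.10", "esi": bytes(10)}
wan_route = readvertise_to_wan(dc_route, "198.51.100.1:10",
                               "64500:20", "198.51.100.1")
```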
As in the other discussed options, an I-ES and its assigned I-ESI will be configured on the GWs for multihoming. This I-ES represents the WAN EVPN-MPLS PEs to the DC but also the DC EVPN-Overlay NVEs to the WAN. Optionally, different I-ESI values are configured for representing the WAN and the DC. If different EVPN-Overlay networks are connected to the same group of GWs, each EVPN-Overlay network MUST get assigned a different I-ESI.¶
Received EVPN routes will never be reflected on the GWs but instead will be consumed and re-advertised (if needed):¶
MAC/IP advertisement routes will be received and imported, and if they become active in the MAC-VRF, the information will be re-advertised as new routes with the following fields:¶
The GWs will also generate the following local EVPN routes that will be sent to the DC and WAN, with their corresponding RTs and [RFC9012] BGP encapsulation extended community values:¶
Assuming GW1 and GW2 are peer GWs of the same DC, each GW will generate two sets of the above local service routes: set-DC will be sent to the DC RRs and will include an A-D per EVI, Inclusive Multicast, and MAC/IP routes for the DC encapsulation and RT. Set-WAN will be sent to the WAN RRs and will include the same routes but using the WAN RT and encapsulation. GW1 and GW2 will receive each other's set-DC and set-WAN. This is the expected behavior on GW1 and GW2 for locally generated routes:¶
The procedure explained at the end of the previous section will make sure there are no loops or packet duplication between the GWs of the same EVPN-Overlay network (for frames generated from local ACs), since only one EVPN binding per EVI (or per Ethernet Tag in the case of VLAN-aware bundle services) will be set up in the data plane between the two nodes. That binding will by default be added to the EVPN-MPLS flooding list.¶
As for the rest of the EVPN tunnel bindings, they will be added to one of the two flooding lists that each GW sets up for the same MAC-VRF:¶
Each flooding list will be part of a separate split-horizon group: the WAN split-horizon group or the DC split-horizon group. Traffic generated from a local AC can be flooded to both split-horizon groups. Traffic from a binding of a split-horizon group can be flooded to the other split-horizon group and local ACs, but never to a member of its own split-horizon group.¶
When either GW1 or GW2 receives a BUM frame on an MPLS tunnel, including an ESI label at the bottom of the stack, they will perform an ESI label lookup and split-horizon filtering as per [RFC7432], in case the ESI label identifies a local ESI (I-ESI or any other nonzero ESI).¶
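The ESI-label check can be sketched as follows. Label values and ES names are illustrative; the rule is simply that a BUM frame carrying the ESI label of a local ES must not be forwarded back into that ES.¶

```python
# ESI labels advertised by this GW for its local Ethernet Segments
# (here, just the I-ES), per the RFC 7432 split-horizon procedures.
local_esi_labels = {3001: "I-ESI-1"}

def egress_allowed(esi_label, egress_es):
    """Block egress toward the ES identified by the received ESI label;
    all other egress points are unaffected."""
    src_es = local_esi_labels.get(esi_label)
    return src_es != egress_es

to_overlay = egress_allowed(3001, "I-ESI-1")   # back into the I-ES: blocked
to_local_ac = egress_allowed(3001, "ES-ac1")   # a different local ES: allowed
```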
This model supports single-active as well as all-active multihoming.¶
All the [RFC7432] multihoming procedures for the DF election on I-ES(s), as well as the backup-path (single-active) and aliasing (all-active) procedures, will be followed on the GWs. Remote PEs in the EVPN-MPLS network will follow regular [RFC7432] aliasing or backup-path procedures for MAC/IP routes received from the GWs for the same I-ESI. So will NVEs in the EVPN-Overlay network for MAC/IP routes received with the same I-ESI.¶
As far as the forwarding plane is concerned, by default, the EVPN-Overlay network will behave analogously to the access ACs in [RFC7432] multihomed Ethernet Segments.¶
The forwarding behavior on the GWs is described below:¶
Single-active multihoming; assuming a WAN split-horizon group (comprised of EVPN-MPLS bindings), a DC split-horizon group (comprised of EVPN-Overlay bindings), and local ACs on the GWs:¶
All-active multihoming; assuming a WAN split-horizon group (comprised of EVPN-MPLS bindings), a DC split-horizon group (comprised of EVPN-Overlay bindings), and local ACs on the GWs:¶
The example in Figure 3 illustrates the forwarding of BUM traffic originated from an NVE on a pair of all-active multihoming GWs.¶
GW2 is the non-DF for the I-ES and blocks the BUM forwarding. GW1 is the DF and forwards the traffic to PE1 and GW2. Packets sent to GW2 will include the ESI label for the I-ES. Based on the ESI label, GW2 identifies the packets as I-ES-generated packets and will only forward them to local ACs (CE in the example) and not back to the EVPN-Overlay network.¶
MAC Mobility procedures described in [RFC7432] are not modified by this document.¶
Note that an intra-DC MAC move still leaves the MAC attached to the same I-ES, so under the rules of [RFC7432], this is not considered a MAC Mobility event. Only when the MAC moves from the WAN domain to the DC domain (or from one DC to another) will the MAC be learned from a different ES, and the MAC Mobility procedures will kick in.¶
The sticky-bit indication in the MAC Mobility extended community MUST be propagated between domains.¶
All the Gateway optimizations described in Section 3.5 MAY be applied to the GWs when the interconnect is based on EVPN-MPLS.¶
In particular, the use of the Unknown MAC Route, as described in Section 3.5.1, solves some transient packet-duplication issues in cases of all-active multihoming, as explained below.¶
Consider the diagram in Figure 2 for EVPN-MPLS interconnect and all-active multihoming, and the following sequence:¶
The "DCI using ASBRs" solution described in [RFC8365] and the GW solution with EVPN-MPLS interconnect may be seen as similar, since they both retain the EVPN attributes between Data Centers and throughout the WAN. However, the EVPN-MPLS interconnect solution on the GWs has significant benefits compared to the "DCI using ASBRs" solution:¶
PBB-EVPN [RFC7623] is yet another interconnect option. It requires the use of GWs where I-components and associated B-components are part of EVI instances.¶
EVPN will run independently in both components, the I-component MAC-VRF and B-component MAC-VRF. Compared to [RFC7623], the DC customer MACs (C-MACs) are no longer learned in the data plane on the GW but in the control plane through EVPN running on the I-component. Remote C-MACs coming from remote PEs are still learned in the data plane. B-MACs in the B-component will be assigned and advertised following the procedures described in [RFC7623].¶
An I-ES will be configured on the GWs for multihoming, but its I-ESI will only be used in the EVPN control plane for the I-component EVI. No unreserved ESIs will be used in the control plane of the B-component EVI, as per [RFC7623]. That is, the I-ES will be represented to the WAN PBB-EVPN PEs using shared or dedicated B-MACs.¶
The rest of the control plane procedures will follow [RFC7432] for the I-component EVI and [RFC7623] for the B-component EVI.¶
From the data plane perspective, the I-component and B-component EVPN bindings established to the same far end will be compared, and the I-component EVPN-Overlay binding will be kept down following the rules described in Section 4.3.1.¶
This model supports single-active as well as all-active multihoming.¶
The forwarding behavior of the DF and non-DF will be changed based on the description outlined in Section 4.4.3, substituting the B-component for the WAN split-horizon group and using [RFC7623] procedures for the traffic sent or received on the B-component.¶
C-MACs learned from the B-component will be advertised in EVPN within the I-component EVI scope. If the C-MAC was previously known in the I-component database, EVPN would advertise the C-MAC with a higher sequence number, as per [RFC7432]. From the perspective of Mobility and the related procedures described in [RFC7432], the C-MACs learned from the B-component are considered local.¶
All the considerations explained in Section 4.4.5 are applicable to the PBB-EVPN interconnect option.¶
If EVPN for Overlay tunnels is supported in the WAN, and a GW function is required, an end-to-end EVPN solution can be deployed. While multiple Overlay tunnel combinations at the WAN and the DC are possible (MPLSoGRE, NVGRE, etc.), VXLAN is described here, given its popularity in the industry. This section focuses on the specific case of EVPN for VXLAN (EVPN-VXLAN hereafter) and the impact on the [RFC7432] procedures.¶
The procedures described in Section 4.4 apply to this section, too, substituting EVPN-VXLAN for the EVPN-MPLS control plane specifics and using the [RFC8365] "Local Bias" procedures instead of those in Section 4.4.3. Since there are no ESI labels in VXLAN, GWs need to rely on "Local Bias" to apply split horizon on packets generated from the I-ES and sent to the peer GW.¶
This use case assumes that NVEs need to use the VNIs or VSIDs as globally unique identifiers within a Data Center, and a Gateway needs to be employed at the edge of the Data-Center network to translate the VNI or VSID when crossing the network boundaries. This GW function provides VNI and tunnel-IP-address translation. The use case in which local downstream-assigned VNIs or VSIDs can be used (like MPLS labels) is described by [RFC8365].¶
While VNIs are globally significant within each DC, there are two possibilities in the interconnect network:¶
In both options, NVEs inside a DC only have to be aware of a single VNI space, and only GWs will handle the complexity of managing multiple VNI spaces. In addition to VNI translation above, the GWs will provide translation of the tunnel source IP for the packets generated from the NVEs, using their own IP address. GWs will use that IP address as the BGP next hop in all the EVPN updates to the interconnect network.¶
The following sections provide more details about these two options.¶
Considering Figure 2, if a host H1 in NVO-1 needs to communicate with a host H2 in NVO-2, and assuming that different VNIs are used in each DC for the same EVI (e.g., VNI-10 in NVO-1 and VNI-20 in NVO-2), then the VNIs MUST be translated to a common interconnect VNI (e.g., VNI-100) on the GWs. Each GW is provisioned with a VNI translation mapping so that it can translate the VNI in the control plane when sending BGP EVPN route updates to the interconnect network. In other words, GW1 and GW2 MUST be configured to map VNI-10 to VNI-100 in the BGP update messages for H1's MAC route. This mapping is also used to translate the VNI in the data plane in both directions: that is, VNI-10 to VNI-100 when the packet is received from NVO-1 and the reverse mapping from VNI-100 to VNI-10 when the packet is received from the remote NVO-2 network and needs to be forwarded to NVO-1.¶
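The translation step above can be sketched with the VNI values from the example (VNI-10 in NVO-1, VNI-100 on the interconnect). The same mapping table drives both the control plane rewrite and the data plane translation in each direction.¶

```python
# GW1's translation table for one EVI: DC VNI <-> interconnect VNI.
dc_to_ic = {10: 100}
ic_to_dc = {v: k for k, v in dc_to_ic.items()}

def translate(vni, toward_interconnect):
    """Translate a VNI at the DC/interconnect boundary."""
    table = dc_to_ic if toward_interconnect else ic_to_dc
    return table[vni]

# Control plane: GW1 rewrites VNI-10 to VNI-100 in the BGP EVPN update
# for H1's MAC route before sending it to the interconnect network.
adv_vni = translate(10, toward_interconnect=True)

# Data plane: a packet from remote NVO-2 arrives with VNI-100 and is
# forwarded into NVO-1 with VNI-10.
fwd_vni = translate(100, toward_interconnect=False)
```

GW2 would hold the analogous table mapping VNI-20 to VNI-100 for NVO-2.¶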
The procedures described in Section 4.4 will be followed, considering that the VNIs advertised/received by the GWs will be translated accordingly.¶
In this case, if a host H1 in NVO-1 needs to communicate with a host H2 in NVO-2, and assuming that different VNIs are used in each DC for the same EVI, e.g., VNI-10 in NVO-1 and VNI-20 in NVO-2, then the VNIs MUST be translated as in Section 4.6.1. However, in this case, there is no need to translate to a common interconnect VNI on the GWs. Each GW can translate the VNI received in an EVPN update to a locally assigned VNI advertised to the interconnect network. Each GW can use a different interconnect VNI; hence, this VNI does not need to be agreed upon on all the GWs and PEs of the interconnect network.¶
The procedures described in Section 4.4 will be followed, taking into account the considerations above for the VNI translation.¶
This document applies existing specifications to a number of interconnect models. The security considerations included in those documents, such as [RFC7432], [RFC8365], [RFC7623], [RFC4761], and [RFC4762], apply to this document whenever those technologies are used.¶
As discussed in the Introduction, [RFC8365] describes two main DCI solution groups: "DCI using GWs" and "DCI using ASBRs". This document specifies the solutions that correspond to the "DCI using GWs" group. It is important to note that the use of GWs provides a superior level of security on a per-tenant basis, compared to the use of ASBRs, because GWs need to perform a MAC lookup on the frames received from the WAN and can apply security procedures, such as filtering of undesired frames, filtering of frames with a source MAC that matches a protected MAC in the DC, or application of the MAC-duplication procedures defined in [RFC7432]. On ASBRs, though, traffic is forwarded based on a label or VNI swap, and there is usually no visibility of the encapsulated frames, which can carry malicious traffic.¶
In addition, the GW optimizations specified in this document provide additional protection of the DC tenant systems. For instance, the MAC-address advertisement control and Unknown MAC Route defined in Section 3.5.1 protect the DC NVEs from being overwhelmed with an excessive number of MAC/IP routes being learned on the GWs from the WAN. The ARP/ND flooding control described in Section 3.5.2 can reduce/suppress broadcast storms being injected from the WAN.¶
Finally, the reader should be aware of the potential security implications of designing a DCI with the decoupled interconnect solution (Section 3) or the integrated interconnect solution (Section 4). In the decoupled interconnect solution, the DC is typically easier to protect from the WAN, since each GW has a single logical link to one WAN PE, whereas in the Integrated solution, the GW has logical links to all the WAN PEs that are attached to the tenant. In either model, proper control plane and data plane policies should be put in place in the GWs in order to protect the DC from potential attacks coming from the WAN.¶
This document has no IANA actions.¶
The authors would like to thank Neil Hart, Vinod Prabhu, and Kiran Nagaraj for their valuable comments and feedback. We would also like to thank Martin Vigoureux and Alvaro Retana for their detailed reviews and comments.¶
In addition to the authors listed on the front page, the following coauthors have also contributed to this document:¶