RFC 9469 | EVPN Applicability for NVO3 | September 2023
Rabadan, et al. | Informational
An Ethernet Virtual Private Network (EVPN) provides a unified control plane that solves the issues of Network Virtualization Edge (NVE) auto-discovery, tenant Media Access Control (MAC) / IP dissemination, and advanced features in a scalable way, as required by Network Virtualization over Layer 3 (NVO3) networks. EVPN is a scalable solution for NVO3 networks and keeps the independence of the underlay IP Fabric; i.e., there is no need to enable Protocol Independent Multicast (PIM) in the underlay network and maintain multicast states for tenant Broadcast Domains. This document describes the use of EVPN for NVO3 networks and discusses its applicability to basic Layer 2 and Layer 3 connectivity requirements and to advanced features such as MAC Mobility, MAC Protection and Loop Protection, multihoming, Data Center Interconnect (DCI), and much more. No new EVPN procedures are introduced.¶
This document is not an Internet Standards Track specification; it is published for informational purposes.¶
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are candidates for any level of Internet Standard; see Section 2 of RFC 7841.¶
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc9469.¶
Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
In Network Virtualization over Layer 3 (NVO3) networks, Network Virtualization Edge (NVE) devices sit at the edge of the underlay network and provide Layer 2 and Layer 3 connectivity among Tenant Systems (TSes) of the same tenant. The NVEs need to build and maintain mapping tables so they can deliver encapsulated packets to their intended destination NVE(s). While there are different options to create and disseminate the mapping table entries, NVEs may exchange that information directly among themselves via a control plane protocol, such as Ethernet Virtual Private Network (EVPN). EVPN provides an efficient, flexible, and unified control plane option that can be used for Layer 2 and Layer 3 Virtual Network (VN) service connectivity. This document does not introduce any new procedures in EVPN.¶
In this document, we assume that the EVPN control plane module resides in the NVEs. The NVEs can be virtual switches in hypervisors, Top-of-Rack (ToR) switches or Leaf switches, or Data Center Gateways. As described in [RFC7365], Network Virtualization Authorities (NVAs) may be used to provide the forwarding information to the NVEs, and in that case, EVPN could be used to disseminate the information across multiple federated NVAs. The applicability of EVPN would then be similar to the one described in this document. However, for simplicity, the description assumes control plane communication among NVE(s).¶
This document uses the terminology of [RFC7365] in addition to the terms that follow.¶
Data Centers have adopted NVO3 architectures mostly due to the issues discussed in [RFC7364]. The architecture of a Data Center is nowadays based on a Clos design, where every Leaf is connected to a layer of Spines and there are a number of Equal-Cost Multipath (ECMP) paths between any two Leaf nodes. All the links between Leaf and Spine nodes are routed links, forming what is also known as an underlay IP Fabric. The underlay IP Fabric does not have issues with loops or flooding (like old Spanning Tree Data Center designs did), convergence is fast, and ECMP generally distributes utilization well across all the links.¶
In this architecture, and as discussed in [RFC7364], multi-tenant intra-subnet and inter-subnet connectivity services are provided by NVO3 tunnels. VXLAN [RFC7348] and Geneve [RFC8926] are two examples of such NVO3 tunnels.¶
Why is a control plane protocol along with NVO3 tunnels helpful? There are three main reasons:¶
"Flood and learn" is a possible approach to achieve points (a) and (b) above for multipoint Ethernet services. "Flood and learn" refers to "flooding" BUM traffic from the ingress NVE to all the egress NVEs attached to the same Broadcast Domain instead of using a specific control plane on the NVEs. The egress NVEs may then use data path source MAC address "learning" on the frames received over the NVO3 tunnels. When the destination host replies and the frames arrive at the NVE that initially flooded BUM frames, the NVE will also "learn" the source MAC address of the frame encapsulated on the NVO3 tunnel. This approach has the following drawbacks:¶
In order to flood a given BUM frame, the ingress NVE must know the IP addresses of the remote NVEs attached to the same Broadcast Domain. This may be done as follows:¶
EVPN provides a unified control plane that solves the issues of NVE auto-discovery, tenant MAC/IP dissemination, and advanced features in a scalable way and keeps the independence of the underlay IP Fabric; i.e., there is no need to enable PIM in the underlay network and maintain multicast states for tenant Broadcast Domains.¶
Section 4 describes how EVPN can be used to meet the control plane requirements in an NVO3 network.¶
This section discusses the applicability of EVPN to NVO3 networks. The intent is not to provide a comprehensive explanation of the protocol itself but to give an introduction and point to the corresponding reference documents so that the reader can easily find more details if needed.¶
EVPN supports multiple Route Types, and each type has a different function. For convenience, Table 1 shows a summary of all the existing EVPN Route Types and their usages. In this document, we may refer to these route types as RT-x routes, where x is the type number included in the first column of Table 1.¶
Type | Description | Usage |
---|---|---|
1 | Ethernet Auto-Discovery | Multihoming: Used for MAC mass-withdraw when advertised per Ethernet Segment and for aliasing/backup functions when advertised per EVI. |
2 | MAC/IP Advertisement | Host MAC/IP dissemination; supports MAC Mobility and protection. |
3 | Inclusive Multicast Ethernet Tag | NVE discovery and BUM flooding tree setup. |
4 | Ethernet Segment | Multihoming: ES auto-discovery and DF election. |
5 | IP Prefix | IP Prefix dissemination. |
6 | Selective Multicast Ethernet Tag | Indicate interest for a multicast S,G or *,G. |
7 | Multicast Join Synch | Multihoming: S,G or *,G state synch. |
8 | Multicast Leave Synch | Multihoming: S,G or *,G leave synch. |
9 | Per-Region I-PMSI A-D | BUM tree creation across regions. |
10 | S-PMSI A-D | Multicast tree for S,G or *,G states. |
11 | Leaf A-D | Used for responses to explicit tracking. |
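As an informal, non-normative aid, the Python sketch below simply mirrors the route type codes of Table 1 as named constants; the enum and constant names are illustrative and are not defined by any EVPN specification.¶

```python
from enum import IntEnum

class EvpnRouteType(IntEnum):
    """EVPN Route Types summarized in Table 1 (BGP EVPN route type codes)."""
    ETHERNET_AUTO_DISCOVERY = 1           # Multihoming: mass-withdraw, aliasing/backup
    MAC_IP_ADVERTISEMENT = 2              # Host MAC/IP dissemination, Mobility/protection
    INCLUSIVE_MULTICAST_ETHERNET_TAG = 3  # NVE discovery, BUM flooding tree setup
    ETHERNET_SEGMENT = 4                  # ES auto-discovery and DF election
    IP_PREFIX = 5                         # IP Prefix dissemination
    SELECTIVE_MULTICAST_ETHERNET_TAG = 6  # Interest in an (S,G) or (*,G)
    MULTICAST_JOIN_SYNCH = 7              # Multihoming: (S,G)/(*,G) join state synch
    MULTICAST_LEAVE_SYNCH = 8             # Multihoming: (S,G)/(*,G) leave synch
    PER_REGION_I_PMSI_A_D = 9             # BUM tree creation across regions
    S_PMSI_A_D = 10                       # Multicast tree for (S,G)/(*,G) states
    LEAF_A_D = 11                         # Responses to explicit tracking
```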
Although the applicability of EVPN to NVO3 networks spans multiple documents, EVPN's baseline specification is [RFC7432]. [RFC7432] allows multipoint Layer 2 VPNs to be operated as IP VPNs [RFC4364], where MACs and the information to set up flooding trees are distributed by Multiprotocol BGP (MP-BGP) [RFC4760]. Based on [RFC7432], [RFC8365] describes how to use EVPN to deliver Layer 2 services specifically in NVO3 networks.¶
Figure 1 represents a Layer 2 service deployed with an EVPN Broadcast Domain in an NVO3 network.¶
In a simple NVO3 network, such as the example of Figure 1, these are the basic constructs that EVPN uses for Layer 2 services (or Layer 2 Virtual Networks):¶
Auto-discovery is one of the basic capabilities of EVPN. The provisioning of EVPN components in NVEs is significantly automated, simplifying the deployment of services and minimizing manual operations that are prone to human error.¶
These are some of the auto-discovery and auto-provisioning capabilities available in EVPN:¶
Automation on Ethernet Segments (ESes): An Ethernet Segment is defined as a group of NVEs that are attached to the same Tenant System or network. An Ethernet Segment is identified by an Ethernet Segment Identifier (ESI) in the control plane, but neither the ESI nor the NVEs that share the same Ethernet Segment are required to be manually provisioned in the local NVE.¶
Auto-discovery via MP-BGP [RFC4760] is used to discover the remote NVEs attached to a given Broadcast Domain, the NVEs participating in a given redundancy group, the tunnel encapsulation types supported by an NVE, etc.¶
In particular, when a new MAC-VRF and Broadcast Domain are enabled, the NVE will advertise a new Inclusive Multicast Ethernet Tag route. Among other fields, the Inclusive Multicast Ethernet Tag route will encode the IP address of the advertising NVE, the Ethernet Tag (which is zero in the case of VLAN-based and VLAN-bundle interfaces), and a PMSI Tunnel Attribute (PTA) that indicates how BUM traffic for the Broadcast Domain is intended to be delivered.¶
When BD1 is enabled in the example of Figure 1, NVE1 will send an Inclusive Multicast Ethernet Tag route including its own IP address, an Ethernet-Tag for BD1, and the PMSI Tunnel Attribute to the remote NVEs. Assuming Ingress Replication (IR) is used, the Inclusive Multicast Ethernet Tag route will include an identification for Ingress Replication in the PMSI Tunnel Attribute and the VNI that the other NVEs in the Broadcast Domain must use to send BUM traffic to the advertising NVE. The other NVEs in the Broadcast Domain will import the Inclusive Multicast Ethernet Tag route and will add NVE1's IP address to the flooding list for BD1. Note that the Inclusive Multicast Ethernet Tag route is also sent with a BGP encapsulation attribute [RFC9012] that indicates what NVO3 encapsulation the remote NVEs should use when sending BUM traffic to NVE1.¶
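The following sketch (Python, purely illustrative) shows how an NVE such as NVE2 or NVE3 could fold NVE1's Inclusive Multicast Ethernet Tag route into its flood list for BD1. The class, field, and function names, the example IP address, and the VNI are assumptions made for the sketch; the Ingress Replication tunnel type value is the one registered in the IANA PMSI Tunnel Types registry.¶

```python
from dataclasses import dataclass

INGRESS_REPLICATION = 6  # PMSI Tunnel Type for Ingress Replication (IANA PMSI Tunnel Types registry)

@dataclass(frozen=True)
class ImetRoute:
    """Subset of Inclusive Multicast Ethernet Tag route contents used in this sketch."""
    originating_nve_ip: str
    ethernet_tag: int        # zero for VLAN-based and VLAN-bundle interfaces
    pmsi_tunnel_type: int
    vni: int                 # VNI that remote NVEs must use for BUM toward the originator
    encapsulation: str       # from the BGP encapsulation attribute [RFC9012], e.g., "vxlan"

def import_imet_route(flood_list, route):
    """Add the advertising NVE to the Broadcast Domain's flooding list."""
    if route.pmsi_tunnel_type == INGRESS_REPLICATION:
        flood_list[route.originating_nve_ip] = (route.vni, route.encapsulation)

# Example: NVE2 imports NVE1's route for BD1 (address and VNI are placeholders).
bd1_flood_list = {}
import_imet_route(bd1_flood_list, ImetRoute("192.0.2.1", 0, INGRESS_REPLICATION, 10010, "vxlan"))
```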
Refer to [RFC7432] for more information about the Inclusive Multicast Ethernet Tag route and forwarding of BUM traffic. See [RFC8365] for its considerations on NVO3 networks.¶
Tenant MAC/IP information is advertised to remote NVEs using MAC/IP Advertisement routes. Following the example of Figure 1:¶
Refer to [RFC7432] and [RFC8365] for more information about the MAC/IP Advertisement route and the forwarding of known unicast traffic.¶
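To make the above concrete, here is a minimal, hypothetical Python sketch of how a remote NVE could install a received MAC/IP Advertisement route into its forwarding table for the Broadcast Domain; all names and example values are assumptions of the sketch, not structures defined in [RFC7432].¶

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MacIpRoute:
    """Subset of MAC/IP Advertisement route fields used in this sketch."""
    mac: str
    ip: str                  # may be empty when only the MAC is advertised
    vni: int
    next_hop_nve: str        # BGP next hop: the NVE where the MAC is attached
    sequence_number: int = 0

def import_mac_route(bd_fdb, route):
    """Program the remote MAC so that known unicast frames are sent on the
    NVO3 tunnel toward the advertising NVE, using the signaled VNI."""
    bd_fdb[route.mac] = {"nve": route.next_hop_nve, "vni": route.vni}

# Example: NVE2 learns TS1's MAC/IP (advertised by NVE1) for BD1.
bd1_fdb = {}
import_mac_route(bd1_fdb, MacIpRoute("00:00:5e:00:53:01", "192.0.2.101", 10010, "192.0.2.1"))
```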
[RFC9136] and [RFC9135] are the reference documents that describe how EVPN can be used for Layer 3 services. Inter-subnet forwarding in EVPN networks is implemented via IRB interfaces between Broadcast Domains and IP-VRFs. An EVPN Broadcast Domain corresponds to an IP subnet. When IP packets generated in a Broadcast Domain are destined to a different subnet (different Broadcast Domain) of the same tenant, the packets are sent to the IRB attached to the local Broadcast Domain in the source NVE. As discussed in [RFC9135], depending on how the IP packets are forwarded between the ingress NVE and the egress NVE, there are two forwarding models: Asymmetric and Symmetric.¶
The Asymmetric model is illustrated in the example of Figure 2, and it requires the configuration of all the Broadcast Domains of the tenant in all the NVEs attached to the same tenant. That way, there is no need to advertise IP Prefixes between NVEs since all the NVEs are attached to all the subnets. It is called "Asymmetric" because the ingress and egress NVEs do not perform the same number of lookups in the data plane. In Figure 2, if TS1 and TS2 are in different subnets and TS1 sends IP packets to TS2, the following lookups are required in the data path: at the ingress NVE1, a MAC lookup in BD1's table, an IP lookup in the IP-VRF, and a MAC lookup in BD2's table; at the egress NVE, only a MAC lookup. The two IP-VRFs in Figure 2 are not connected by tunnels, and all the connectivity between the NVEs is done based on tunnels between the Broadcast Domains.¶
In the Symmetric model, depicted in Figure 3, the same number of data path lookups is needed at the ingress and egress NVEs. For example, if TS1 sends IP packets to TS3, the following data path lookups are required: a MAC lookup at NVE1's BD1 table, an IP lookup at NVE1's IP-VRF, and an IP lookup and MAC lookup at NVE2's IP-VRF and BD3, respectively. In the Symmetric model, the inter-subnet connectivity between NVEs is done based on tunnels between the IP-VRFs.¶
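The difference between the two models can be summarized with the following illustrative Python sketch of the data path lookups; the table contents and names are placeholders and do not correspond to any specific implementation.¶

```python
def asymmetric_forward(bd1_fdb, ip_vrf, bd2_fdb, dst_mac, dst_ip):
    """Asymmetric IRB (Figure 2): three lookups at the ingress NVE (MAC in BD1,
    IP in the IP-VRF, MAC in BD2); the egress NVE only performs a MAC lookup.
    NVO3 tunnels run between the Broadcast Domain instances."""
    bd1_fdb[dst_mac]                  # 1. MAC lookup: frame is addressed to BD1's IRB MAC
    host = ip_vrf[dst_ip]             # 2. IP lookup: resolves the destination host (in BD2)
    return bd2_fdb[host["mac"]]       # 3. MAC lookup: tunnel/VNI of the remote BD2 instance

def symmetric_forward(bd1_fdb, ip_vrf, dst_mac, dst_ip):
    """Symmetric IRB (Figure 3): MAC + IP lookups at the ingress NVE; the egress
    NVE repeats an IP lookup in its IP-VRF and a MAC lookup in the destination BD.
    NVO3 tunnels run between the IP-VRFs."""
    bd1_fdb[dst_mac]                  # 1. MAC lookup: frame is addressed to BD1's IRB MAC
    return ip_vrf[dst_ip]             # 2. IP lookup: tunnel/VNI of the remote IP-VRF

# Example tables at the ingress NVE1 (all values are placeholders).
bd1_fdb = {"IRB1-MAC": "local IRB interface"}
ip_vrf = {"TS2-IP": {"mac": "TS2-MAC", "via": "BD2"}}
bd2_fdb = {"TS2-MAC": ("NVE2", "VNI-2")}
assert asymmetric_forward(bd1_fdb, ip_vrf, bd2_fdb, "IRB1-MAC", "TS2-IP") == ("NVE2", "VNI-2")
```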
The Symmetric model scales better than the Asymmetric model because it does not require the NVEs to be attached to all the tenant's subnets. However, it requires the use of NVO3 tunnels on the IP-VRFs and the exchange of IP Prefixes between the NVEs in the control plane. EVPN uses MAC/IP Advertisement routes for the exchange of host IP routes and IP Prefix routes for the exchange of prefixes of any length, including host routes. As an example, in Figure 3, NVE2 needs to advertise TS3's host route and/or TS3's subnet so that the IP lookup on NVE1's IP-VRF succeeds.¶
[RFC9135] specifies the use of MAC/IP Advertisement routes for the advertisement of host routes. Section 4.4.1 of [RFC9136] specifies the use of IP Prefix routes for the advertisement of IP Prefixes in an "Interface-less IP-VRF-to-IP-VRF Model". The Symmetric model for host routes can be implemented following either approach:¶
[RFC8365] describes how to use EVPN for NVO3 encapsulations, such as VXLAN, NVGRE, or MPLSoGRE. The procedures are easily applicable to any other NVO3 encapsulation, particularly Geneve.¶
Geneve [RFC8926] is the proposed standard encapsulation specified in the IETF Network Virtualization Overlays Working Group. The EVPN control plane can signal the Geneve encapsulation type in the BGP Tunnel Encapsulation Extended Community (see [RFC9012]).¶
Geneve requires a control plane [NVO3-ENCAP] to:¶
The EVPN control plane can easily extend the BGP Tunnel Encapsulation attribute sub-TLV [RFC9012] to specify the Geneve tunnel options that can be received or transmitted over a Geneve tunnel by a given NVE. [BESS-EVPN-GENEVE] describes the EVPN control plane extensions to support Geneve.¶
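As a rough illustration of encapsulation signaling (not the procedure defined in [BESS-EVPN-GENEVE]), the sketch below picks a common NVO3 encapsulation between the local NVE's preferences and the tunnel types a remote NVE advertised in its BGP Tunnel Encapsulation attribute [RFC9012]; the names and the preference order are assumptions of the sketch.¶

```python
# Ordered by local preference; in practice, the supported encapsulations are
# derived from the BGP Tunnel Encapsulation attribute carried with EVPN routes.
LOCAL_SUPPORTED_ENCAPS = ["geneve", "vxlan"]

def select_encapsulation(remote_advertised):
    """Return the first locally preferred encapsulation also advertised by the
    remote NVE, or None if there is no common NVO3 encapsulation."""
    for encap in LOCAL_SUPPORTED_ENCAPS:
        if encap in remote_advertised:
            return encap
    return None

# Example: the remote NVE signaled support for both VXLAN and Geneve.
assert select_encapsulation(["vxlan", "geneve"]) == "geneve"
```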
EVPN Operations, Administration, and Maintenance (OAM), as described in [BESS-EVPN-LSP-PING], defines mechanisms to detect data plane failures in an EVPN deployment over an MPLS network. These mechanisms detect failures related to point-to-point (P2P) and Point-to-Multipoint (P2MP) connectivity, for multi-tenant unicast and multicast Layer 2 traffic, between multi-tenant access nodes connected to EVPN PE(s), and in a single-homed, Single-Active, or All-Active redundancy model.¶
In general, EVPN OAM mechanisms defined for EVPN deployed in MPLS networks are equally applicable for EVPN in NVO3 networks.¶
EVPN can be used to signal the security protection capabilities of a sender NVE, as well as what portion of an NVO3 packet (taking a Geneve packet as an example) can be protected by the sender NVE, to ensure the privacy and integrity of tenant traffic carried over the NVO3 tunnels [BESS-SECURE-EVPN].¶
This section describes how EVPN can be used to deliver advanced capabilities in NVO3 networks.¶
[RFC7432] replaces the classic Ethernet "flood and learn" behavior among NVEs with BGP-based MAC learning. In turn, this provides more control over the location of MAC addresses in the Broadcast Domain and, consequently, enables advanced features such as MAC Mobility. If we assume that Virtual Machine (VM) Mobility means the VM's MAC and IP addresses move with the VM, EVPN's MAC Mobility is the procedure required to facilitate VM Mobility. According to Section 15 of [RFC7432], when a MAC is advertised for the first time in a Broadcast Domain, all the NVEs attached to the Broadcast Domain will store Sequence Number zero for that MAC. When the MAC "moves" to a remote NVE within the same Broadcast Domain, the NVE that just learned the MAC locally increases the Sequence Number in the MAC/IP Advertisement route's MAC Mobility extended community to indicate that it now owns the MAC. That makes all the NVEs in the Broadcast Domain update their tables immediately, with no need to wait for any aging timer. EVPN guarantees fast MAC Mobility without flooding or packet drops in the Broadcast Domain.¶
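The Sequence Number comparison can be sketched as follows (illustrative Python only; equal Sequence Numbers and the tie-breaking rules of [RFC7432] are ignored for brevity).¶

```python
def process_mac_advertisement(fdb, mac, nve_ip, seq):
    """A route with a higher Sequence Number immediately replaces the stored
    entry for the MAC; routes with a lower Sequence Number are ignored.
    Returns True if the table was updated."""
    current = fdb.get(mac)
    if current is None or seq > current["seq"]:
        fdb[mac] = {"nve": nve_ip, "seq": seq}   # no aging timer needed
        return True
    return False

# Example: a VM moves from NVE1 to NVE2; NVE2 re-advertises the MAC with seq=1.
fdb = {}
process_mac_advertisement(fdb, "00:00:5e:00:53:01", "192.0.2.1", 0)  # first advertisement
process_mac_advertisement(fdb, "00:00:5e:00:53:01", "192.0.2.2", 1)  # MAC moved to NVE2
assert fdb["00:00:5e:00:53:01"]["nve"] == "192.0.2.2"
```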
The advertisement of MACs in the control plane allows advanced features such as MAC Protection, Duplication Detection, and Loop Protection.¶
In a MAC/IP Advertisement route, MAC Protection refers to EVPN's ability to indicate that a MAC must be protected by the NVE receiving the route [RFC7432]. The Protection is indicated in the "Sticky bit" of the MAC Mobility extended community sent along with the MAC/IP Advertisement route for a MAC. An NVE may set the Sticky bit on the MAC/IP Advertisement routes sent for MACs associated with Attachment Circuits that connect to servers or VMs whose addresses need to be protected. Also, statically configured MAC addresses should be advertised as Protected MAC addresses since they are not subject to MAC Mobility procedures.¶
MAC Duplication Detection refers to EVPN's ability to detect duplicate MAC addresses [RFC7432]. A "MAC move" is a relearn event that happens at an access Attachment Circuit or through a MAC/IP Advertisement route with a Sequence Number that is higher than the stored one for the MAC. When a MAC moves a number of times (N) within an M-second window between two NVEs, the MAC is declared as a duplicate and the detecting NVE does not re-advertise the MAC anymore.¶
[RFC7432] provides MAC Duplication Detection, and with an extension, the same Sequence-Number-based principle can also protect the Broadcast Domain against loops created by backdoor links between NVEs. When a MAC is detected as a duplicate, the NVE may install it as a drop-MAC and discard received frames whose source or destination MAC address matches the duplicate MAC. The MAC Duplication extension to support Loop Protection is described in Section 15.3 of [BESS-RFC7432BIS].¶
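A minimal sketch of the duplicate detection logic follows; N and M below use the default values suggested in [RFC7432] (5 moves within 180 seconds), and all class and function names are hypothetical.¶

```python
import time

N_MOVES = 5             # default N in [RFC7432]: moves that trigger duplicate detection
M_WINDOW_SECONDS = 180  # default M in [RFC7432]: detection window in seconds

class MacMoveDetector:
    """Flags a MAC as a duplicate once it has moved N times within the M-second
    window; a duplicate MAC is no longer re-advertised and may be installed as
    a drop-MAC for Loop Protection."""
    def __init__(self):
        self.moves = {}          # MAC -> timestamps of recent moves
        self.duplicates = set()  # MACs declared duplicate

    def record_move(self, mac, now=None):
        now = time.time() if now is None else now
        recent = [t for t in self.moves.get(mac, []) if now - t <= M_WINDOW_SECONDS]
        recent.append(now)
        self.moves[mac] = recent
        if len(recent) >= N_MOVES:
            self.duplicates.add(mac)   # stop re-advertising; optionally drop its traffic
        return mac in self.duplicates
```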
In Broadcast Domains with a significant amount of flooding due to Unknown Unicast and broadcast frames, EVPN may help reduce and sometimes even suppress the flooding.¶
In Broadcast Domains where most of the broadcast traffic is caused by the Address Resolution Protocol (ARP) and the Neighbor Discovery Protocol (NDP) on the Tenant Systems, EVPN's Proxy ARP and Proxy ND capabilities may reduce the flooding drastically. The use of Proxy ARP/ND is specified in [RFC9161].¶
Proxy ARP/ND procedures, along with the assumption that Tenant Systems always issue a Gratuitous ARP (GARP) or an unsolicited Neighbor Advertisement message when they come up in the Broadcast Domain, may drastically reduce the Unknown Unicast flooding in the Broadcast Domain.¶
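A simplified sketch of the Proxy ARP behavior is shown below (conceptual only; the actual procedures, including the reply and flooding rules, are specified in [RFC9161]).¶

```python
class ProxyArpTable:
    """Minimal Proxy ARP sketch: IP-to-MAC bindings learned from GARPs and from
    MAC/IP Advertisement routes; known requests are answered locally instead of
    being flooded in the Broadcast Domain."""
    def __init__(self):
        self.bindings = {}   # tenant IP -> MAC

    def learn(self, ip, mac):
        self.bindings[ip] = mac

    def handle_arp_request(self, target_ip):
        mac = self.bindings.get(target_ip)
        if mac is not None:
            return ("reply", mac)        # answered locally: no flooding
        return ("flood", None)           # unknown binding: fall back to flooding

# Example: NVE1 learned TS2's binding from a GARP or a MAC/IP Advertisement route.
proxy = ProxyArpTable()
proxy.learn("192.0.2.102", "00:00:5e:00:53:02")
assert proxy.handle_arp_request("192.0.2.102") == ("reply", "00:00:5e:00:53:02")
```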
The flooding caused by Tenant Systems' IGMP / Multicast Listener Discovery (MLD) or PIM messages in the Broadcast Domain may also be suppressed by the use of IGMP/MLD and PIM Proxy functions, as specified in [RFC9251] and [BESS-EVPN-PIM-PROXY]. These two documents also specify how to forward IP multicast traffic efficiently within the same Broadcast Domain, translate soft state IGMP/MLD/PIM messages into hard state BGP routes, and provide fast convergence redundancy for IP multicast on multihomed ESes.¶
When an NVE attached to a given Broadcast Domain needs to send BUM traffic for the Broadcast Domain to the remote NVEs attached to the same Broadcast Domain, Ingress Replication is a very common option in NVO3 networks since it is completely independent of the multicast capabilities of the underlay network. Also, if the optimization procedures to reduce or suppress the flooding in the Broadcast Domain are enabled (Section 4.7.3), Ingress Replication may be good enough in spite of creating multiple copies of the same frame at the ingress NVE. However, in Broadcast Domains where Multicast (or Broadcast) traffic is significant, Ingress Replication may be very inefficient and cause performance issues on virtual switch-based NVEs.¶
[BESS-EVPN-OPTIMIZED-IR] specifies the use of Assisted Replication (AR) NVO3 tunnels in EVPN Broadcast Domains. AR retains the independence of the underlay network while providing a way to forward Broadcast and multicast traffic efficiently. AR uses AR-REPLICATORs that can replicate the broadcast/multicast traffic on behalf of the AR-LEAF NVEs. The AR-LEAF NVEs are typically virtual switches or NVEs with limited replication capabilities. AR can work in a single-stage replication mode (Non-Selective Mode) or in a dual-stage replication mode (Selective Mode). Both modes are detailed in [BESS-EVPN-OPTIMIZED-IR].¶
In addition, [BESS-EVPN-OPTIMIZED-IR] describes a procedure to avoid sending BUM to certain NVEs that do not need that type of traffic. This is done by enabling Pruned Flood Lists (PFLs) on a given Broadcast Domain. For instance, a virtual switch NVE that learns all its local MAC addresses for a Broadcast Domain via a Cloud Management System does not need to receive the Broadcast Domain's Unknown Unicast traffic. PFLs help optimize the BUM flooding in the Broadcast Domain.¶
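The effect of Ingress Replication combined with Pruned Flood Lists can be sketched as follows; the function, the flood-list structure, and the frame-type classification are illustrative assumptions, not the encoding defined in [BESS-EVPN-OPTIMIZED-IR].¶

```python
def flood_bum(frame_type, flood_list, pruned_for_unknown_unicast):
    """Ingress Replication sketch: one copy of the BUM frame per remote NVE in the
    Broadcast Domain's flood list, skipping NVEs that signaled (via a Pruned Flood
    List) that they do not need Unknown Unicast traffic."""
    copies = []
    for nve_ip, (vni, encap) in flood_list.items():
        if frame_type == "unknown-unicast" and nve_ip in pruned_for_unknown_unicast:
            continue   # e.g., a virtual switch that learns all local MACs from the CMS
        copies.append((nve_ip, vni, encap))
    return copies

# Example: NVE3 asked to be pruned from Unknown Unicast flooding in BD1.
bd1_flood_list = {"192.0.2.2": (10010, "vxlan"), "192.0.2.3": (10010, "vxlan")}
assert flood_bum("unknown-unicast", bd1_flood_list, {"192.0.2.3"}) == [("192.0.2.2", 10010, "vxlan")]
```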
Another fundamental concept in EVPN is multihoming. A given Tenant System can be multihomed to two or more NVEs for a given Broadcast Domain, and the set of links connected to the same Tenant System is defined as an ES. EVPN supports Single-Active and All-Active multihoming. In Single-Active multihoming, only one link in the Ethernet Segment is active. In All-Active multihoming, all the links in the Ethernet Segment are active for unicast traffic. Both modes support load-balancing:¶
There are two key aspects in the EVPN multihoming procedures:¶
While [RFC7432] describes the default algorithm for the Designated Forwarder election, [RFC8584] and [BESS-EVPN-PREF-DF] specify other algorithms and procedures that optimize the Designated Forwarder election.¶
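For illustration, the default ("service carving") Designated Forwarder election of [RFC7432] can be sketched as below, where the candidate list is built from the Ethernet Segment routes received for the ES; the variable names and example addresses are placeholders.¶

```python
import ipaddress

def default_df_election(candidate_nve_ips, vlan):
    """Default ("service carving") DF election of [RFC7432]: the NVEs attached to
    the Ethernet Segment are ordered by IP address, and the NVE whose ordinal
    equals (VLAN modulo number of NVEs) is the Designated Forwarder for that VLAN."""
    ordered = sorted(candidate_nve_ips, key=ipaddress.ip_address)  # learned via RT-4 routes
    return ordered[vlan % len(ordered)]

# Example: two NVEs share the ES; the DF for VLAN 10 is the NVE with the lowest IP.
assert default_df_election(["192.0.2.2", "192.0.2.1"], 10) == "192.0.2.1"
```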
The split-horizon function is specified in [RFC7432], and it is carried out by using a special ESI-label that identifies, in the data path, all the BUM frames originating from a given NVE and Ethernet Segment. Since the ESI-label is an MPLS label, it cannot be used with non-MPLS NVO3 encapsulations. Therefore, [RFC8365] defines a modified split-horizon procedure, known as "Local-Bias", that is based on the source IP address of the NVO3 tunnel. It is worth noting that Local-Bias only works for All-Active multihoming and not for Single-Active multihoming.¶
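The Local-Bias check on the egress NVE can be sketched as follows (illustrative only; see [RFC8365] for the actual procedure, including the behavior of the originating NVE toward its local Ethernet Segments).¶

```python
def forward_bum_to_local_es(tunnel_source_ip, local_es_peers):
    """Local-Bias split-horizon sketch: BUM received over an NVO3 tunnel is NOT
    forwarded to a local Ethernet Segment if the originating NVE (identified by
    the tunnel source IP) shares that Ethernet Segment, since the originator has
    already forwarded the frame locally (All-Active multihoming only)."""
    return tunnel_source_ip not in local_es_peers

# Example: NVE1 and NVE2 share ES-1. NVE2 must not send BUM received from NVE1
# back to ES-1, but it does forward BUM received from NVE3.
es1_peers = {"192.0.2.1"}   # other NVEs attached to ES-1, learned via RT-4/RT-1 routes
assert forward_bum_to_local_es("192.0.2.1", es1_peers) is False
assert forward_bum_to_local_es("192.0.2.3", es1_peers) is True
```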
Section 4.3 describes how EVPN can be used for inter-subnet forwarding among subnets of the same tenant. MAC/IP Advertisement routes allow the advertisement of host routes, and IP Prefix routes allow the advertisement of IP Prefixes of any length. The procedures outlined in Section 4.3 are similar to the ones in [RFC4364], only applied to NVO3 tunnels. However, [RFC9136] also defines advanced inter-subnet forwarding procedures that allow the resolution of IP Prefix routes not only to BGP next hops but also to "overlay indexes", which can be a MAC, a Gateway IP (GW-IP), or an ESI, all of them in the tenant space.¶
Figure 4 illustrates an example that uses Recursive Resolution to a GW-IP as per Section 4.4.2 of [RFC9136]. In this example, IP-VRFs in NVE1 and NVE2 are connected by a Supplementary Broadcast Domain (SBD). An SBD is a Broadcast Domain that connects all the IP-VRFs of the same tenant via IRB and has no Attachment Circuits. NVE1 advertises the host route TS2-IP/L (IP address and Prefix Length of TS2) in an IP Prefix route with overlay index GW-IP=IP1. Also, IP1 is advertised in a MAC/IP Advertisement route associated with M1, VNI-S, and BGP next-hop NVE1. Upon importing the two routes, NVE2 installs TS2-IP/L in the IP-VRF with a next hop that is the GW-IP IP1. NVE2 also installs M1 in the Supplementary Broadcast Domain, with VNI-S and NVE1 as next hop. If TS3 sends a packet with IP DA=TS2, NVE2 will perform a Recursive Resolution of the IP Prefix route prefix information to the forwarding information of the correlated MAC/IP Advertisement route. The IP Prefix route's Recursive Resolution has several advantages, such as better convergence in scaled networks (since multiple IP Prefix routes can be invalidated with a single withdrawal of the overlay index route) or the ability to advertise multiple IP Prefix routes from an overlay index that can move or change dynamically. [RFC9136] describes a few use cases.¶
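A minimal Python sketch of the Recursive Resolution step at NVE2 follows; the table layout and names (for example, the sbd_host_routes dictionary) are assumptions made for the example, not structures defined in [RFC9136].¶

```python
def resolve_ip_prefix_route(prefix, gw_ip, sbd_host_routes):
    """Recursive Resolution sketch (Section 4.4.2 of [RFC9136]): the IP Prefix route
    for 'prefix' carries an overlay index GW-IP; the forwarding information is taken
    from the MAC/IP Advertisement route that advertised that GW-IP on the SBD."""
    overlay = sbd_host_routes.get(gw_ip)      # e.g., {"mac": M1, "vni": VNI-S, "nve": NVE1}
    if overlay is None:
        return None                           # prefix stays unresolved until the GW-IP is known
    return {"prefix": prefix, **overlay}

# Example (Figure 4): NVE2 resolves TS2-IP/L via GW-IP IP1, which NVE1 advertised
# with MAC M1 and VNI-S on the Supplementary Broadcast Domain.
sbd = {"IP1": {"mac": "M1", "vni": "VNI-S", "nve": "NVE1"}}
assert resolve_ip_prefix_route("TS2-IP/L", "IP1", sbd)["nve"] == "NVE1"
```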
The concept of the Supplementary Broadcast Domain described in Section 4.7.6 is also used in [BESS-EVPN-IRB-MCAST] for the procedures related to inter-subnet multicast forwarding across Broadcast Domains of the same tenant. For instance, [BESS-EVPN-IRB-MCAST] allows the efficient forwarding of IP multicast traffic from any Broadcast Domain to any other Broadcast Domain (or even to the same Broadcast Domain where the source resides). The [BESS-EVPN-IRB-MCAST] procedures are supported along with EVPN multihoming and for any tree allowed on NVO3 networks, including IR or AR. [BESS-EVPN-IRB-MCAST] also describes the interoperability between EVPN and other multicast technologies such as Multicast VPN (MVPN) and PIM for inter-subnet multicast.¶
[BESS-EVPN-MVPN-SEAMLESS-INTEROP] describes another potential solution to support EVPN to MVPN interoperability.¶
Tenant Layer 2 and Layer 3 services deployed on NVO3 networks must often be extended to remote NVO3 networks that are connected via non-NVO3 Wide Area Networks (WANs) (mostly MPLS-based WANs). [RFC9014] defines some architectural models that can be used to interconnect NVO3 networks via MPLS WANs.¶
When NVO3 networks are connected by MPLS WANs, [RFC9014] specifies how EVPN can be used end to end in spite of using a different encapsulation in the WAN. [RFC9014] also supports the use of NVO3 or Segment Routing (encoding 32-bit or 128-bit Segment Identifiers into labels or IPv6 addresses, respectively) transport tunnels in the WAN.¶
Even if EVPN can also be used in the WAN for Layer 2 and Layer 3 services, there may be a need to provide a Gateway function between EVPN for NVO3 encapsulations and IP VPN for MPLS tunnels if the operator uses IP VPN in the WAN. [BESS-EVPN-IPVPN-INTERWORKING] specifies the interworking function between EVPN and IP VPN for unicast inter-subnet forwarding. If inter-subnet multicast forwarding is also needed across an IP VPN WAN, [BESS-EVPN-IRB-MCAST] describes the required interworking between EVPN and MVPNs.¶
This document does not introduce any new procedure or additional signaling in EVPN and relies on the security considerations of the individual specifications used as a reference throughout the document. In particular, and as mentioned in [RFC7432], control plane and forwarding path protection are aspects to secure in any EVPN domain when applied to NVO3 networks.¶
The security techniques mentioned in [RFC7432], such as those discussed in [RFC5925] to authenticate BGP messages and those included in [RFC4271], [RFC4272], and [RFC6952] to secure BGP, are relevant for EVPN in NVO3 networks as well.¶
This document has no IANA actions.¶
The authors thank Aldrin Isaac for his comments.¶