<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc SYSTEM "rfc2629-xhtml.ent">
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" number="8670" category="info" consensus="true" submissionType="IETF" docName="draft-ietf-spring-segment-routing-msdc-11" ipr="trust200902" obsoletes="" updates="" xml:lang="en" tocInclude="true" symRefs="true" sortRefs="true" version="3">
<front>
<title abbrev="BGP Prefix-SID in Large-Scale DCs">BGP Prefix Segment in Large-Scale Data Centers</title>
<seriesInfo name="RFC" value="8670"/>
<author fullname="Clarence Filsfils" initials="C." role="editor" surname="Filsfils">
<organization>Cisco Systems, Inc.</organization>
<address>
<postal>
<street/>
<city>Brussels</city>
<region/>
<code/>
<country>Belgium</country>
</postal>
<email>cfilsfil@cisco.com</email>
</address>
</author>
<author fullname="Stefano Previdi" initials="S." surname="Previdi">
<organization>Cisco Systems, Inc.</organization>
<address>
<postal>
<street/>
<city/>
<code/>
<country>Italy</country>
</postal>
<email>stefano@previdi.net</email>
</address>
</author>
<author fullname="Gaurav Dawra" initials="G." surname="Dawra">
<organization>LinkedIn</organization>
<address>
<postal>
<street/>
<city/>
<code/>
<country>United States of America</country>
</postal>
<email>gdawra.ietf@gmail.com</email>
</address>
</author>
<author fullname="Ebben Aries" initials="E." surname="Aries">
<organization>Arrcus, Inc.</organization>
<address>
<postal>
<street>2077 Gateway Place, Suite #400</street>
<city>San Jose</city>
<code>CA 95119</code>
<country>United States of America</country>
</postal>
<email>exa@arrcus.com</email>
</address>
</author>
<author fullname="Petr Lapukhov" initials="P." surname="Lapukhov">
<organization>Facebook</organization>
<address>
<postal>
<street/>
<city/>
<code/>
<country>United States of America</country>
</postal>
<email>petr@fb.com</email>
</address>
</author>
<date month="December" year="2019"/>
<workgroup>Network Working Group</workgroup>
<keyword>example</keyword>
<abstract>
<t>This document describes the motivation for, and benefits of, applying Segment Routing (SR) in BGP-based large-scale data centers. It describes the design to deploy SR in those data centers for both the MPLS and IPv6 data planes.</t>
</abstract>
</front>
<middle>
<section anchor="INTRO" numbered="true" toc="default">
<name>Introduction</name>
<t>Segment Routing (SR), as described in <xref target="RFC8402" format="default"/>, leverages the source-routing paradigm. A node steers a packet through an ordered list of instructions called "segments". A segment can represent any instruction, topological or service based. A segment can have a local semantic to an SR node or a global semantic within an SR domain.
SR allows the enforcement of a flow through any topological path while maintaining per-flow state only from the ingress node to the SR domain. SR can be applied to the MPLS and IPv6 data planes.</t>
<t>The use cases described in this document should be considered in the context of the BGP-based large-scale data-center (DC) design described in <xref target="RFC7938" format="default"/>. This document extends it by applying SR both with IPv6 and MPLS data planes.</t>
</section>
<section anchor="LARGESCALEDC" numbered="true" toc="default">
<name>Large-Scale Data-Center Network Design Summary</name>
<t>This section provides a brief summary of the Informational RFC <xref target="RFC7938" format="default"/>, which outlines a practical network design suitable for data centers of various scales:</t>
<ul spacing="normal">
<li>Data-center networks have highly symmetric topologies with multiple parallel paths between two server-attachment points. The well-known Clos topology is most popular among the operators (as described in <xref target="RFC7938" format="default"/>). In a Clos topology, the minimum number of parallel paths between two elements is determined by the "width" of the "Tier-1" stage. See <xref target="FIGLARGE" format="default"/> for an illustration of the concept.</li>
<li>Large-scale data centers commonly use a routing protocol, such as BGP-4 <xref target="RFC4271" format="default"/>, in order to provide endpoint connectivity. Therefore, recovery after a network failure is driven either by local knowledge of directly available backup paths or by distributed signaling between the network devices.</li>
<li>Within data-center networks, traffic is load shared using the Equal Cost Multipath (ECMP) mechanism. With ECMP, every network device implements a pseudorandom decision, mapping packets to one of the parallel paths by means of a hash function calculated over certain parts of the packet, typically a combination of various packet header fields.</li>
</ul>
<t>The following is a schematic of a five-stage Clos topology with four devices in the "Tier-1" stage. Notice that the number of paths between Node1 and Node12 equals four; the paths have to cross all of the Tier-1 devices. At the same time, the number of paths between Node1 and Node2 equals two, and the paths only cross Tier-2 devices. Other topologies are possible, but for simplicity, only the topologies that have a single path from Tier-1 to Tier-3 are considered below.
The rest could be treated similarly, with a few modifications to the logic.</t>
<section anchor="REFDESIGN" numbered="true" toc="default">
<name>Reference Design</name>
<figure anchor="FIGLARGE">
<name>5-Stage Clos Topology</name>
<artwork name="" type="" align="left" alt=""><![CDATA[
                                Tier-1
                               +-----+
                               |NODE |
                            +->|  5  |--+
                            |  +-----+  |
                    Tier-2  |           |   Tier-2
                   +-----+  |  +-----+  |  +-----+
     +------------>|NODE |--+->|NODE |--+--|NODE |-------------+
     |       +-----|  3  |--+  |  6  |  +--|  9  |-----+       |
     |       |     +-----+     +-----+     +-----+     |       |
     |       |                                         |       |
     |       |     +-----+     +-----+     +-----+     |       |
     | +-----+---->|NODE |--+  |NODE |  +--|NODE |-----+-----+ |
     | |     | +---|  4  |--+->|  7  |--+--| 10  |---+ |     | |
     | |     | |   +-----+  |  +-----+  |  +-----+   | |     | |
     | |     | |            |           |            | |     | |
   +-----+ +-----+          |  +-----+  |          +-----+ +-----+
   |NODE | |NODE |  Tier-3  +->|NODE |--+  Tier-3  |NODE | |NODE |
   |  1  | |  2  |             |  8  |             | 11  | | 12  |
   +-----+ +-----+             +-----+             +-----+ +-----+
     | |     | |                                     | |     | |
     A O     B O            <- Servers ->            Z O     O O
]]></artwork>
</figure>
<t>In the reference topology illustrated in <xref target="FIGLARGE" format="default"/>, it is assumed:</t>
<ul spacing="normal">
<li>
<t>Each node is its own autonomous system (AS) (Node X has AS X). 4-byte AS numbers are recommended (<xref target="RFC6793" format="default"/>).</t>
<ul spacing="normal">
<li>For simple and efficient route propagation filtering, Node5, Node6, Node7, and Node8 use the same AS; Node3 and Node4 use the same AS; and Node9 and Node10 use the same AS.</li>
<li>In the case in which 2-byte autonomous system numbers are used and for efficient usage of the scarce 2-byte Private Use AS pool, different Tier-3 nodes might use the same AS.</li>
<li>Without loss of generality, these details will be simplified in this document. It is to be assumed that each node has its own AS.</li>
</ul>
</li>
<li>Each node peers with its neighbors with a BGP session. If not specified, external BGP (EBGP) is assumed. In a specific use case, internal BGP (IBGP) will be used, but this will be called out explicitly in that case.</li>
<li>
<t>Each node originates the IPv4 address of its loopback interface into BGP and announces it to its neighbors.</t>
<ul spacing="normal">
<li>The loopback of Node X is 192.0.2.x/32.</li>
</ul>
</li>
</ul>
<t>In this document, the Tier-1, Tier-2, and Tier-3 nodes are referred to as "Spine", "Leaf", and "ToR" (top of rack) nodes, respectively. When a ToR node acts as a gateway to the "outside world", it is referred to as a "border node".</t>
</section>
</section>
<section anchor="OPENPROBS" numbered="true" toc="default">
<name>Some Open Problems in Large Data-Center Networks</name>
<t>The data-center-network design summarized above provides means for moving traffic between hosts with reasonable efficiency. There are few open performance and reliability problems that arise in such a design:</t>
<ul spacing="normal">
<li>ECMP routing is most commonly realized per flow. This means that large, long-lived "elephant" flows may affect performance of smaller, short-lived "mouse" flows and may reduce efficiency of per-flow load sharing. In other words, per-flow ECMP does not perform efficiently when flow-lifetime distribution is heavy tailed. Furthermore, due to hash-function inefficiencies, it is possible to have frequent flow collisions where more flows get placed on one path over the others.</li>
<li>Shortest-path routing with ECMP implements an oblivious routing model that is not aware of the network imbalances. If the network symmetry is broken, for example, due to link failures, utilization hotspots may appear. For example, if a link fails between Tier-1 and Tier-2 devices (e.g., Node5 and Node9), Tier-3 devices Node1 and Node2 will not be aware of that since there are other paths available from the perspective of Node3. They will continue sending roughly equal traffic to Node3 and Node4 as if the failure didn't exist, which may cause a traffic hotspot.</li>
<li>Isolating faults in the network with multiple parallel paths and ECMP-based routing is nontrivial due to lack of determinism. Specifically, the connections from HostA to HostB may take a different path every time a new connection is formed, thus making consistent reproduction of a failure much more difficult.
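The per-flow ECMP decision discussed above can be sketched as follows. This is a hypothetical illustration only: the 5-tuple key and the MD5-based hash are assumptions for readability, whereas real devices use vendor-specific hardware hash functions over the wire-format header fields.

```python
import hashlib

def ecmp_path(five_tuple, num_paths):
    """Map a flow to one of num_paths parallel paths (illustrative sketch).

    The same flow always hashes to the same path, which is why a few
    long-lived "elephant" flows can pin themselves to one path.
    """
    key = "|".join(str(field) for field in five_tuple).encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# A flow from HostA to Node11's subnet: every packet of this flow takes
# the same one of the four Tier-1 paths.
flow = ("192.0.2.1", "192.0.2.11", 6, 49152, 80)
assert ecmp_path(flow, 4) == ecmp_path(flow, 4)
```

Because path choice depends only on the hash of the flow key, two distinct flows may collide onto the same path, which is the imbalance described above.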
This complexity scales linearly with the number of parallel paths in the network and stems from the random nature of path selection by the network devices.</li>
</ul>
</section>
<section anchor="APPLYSR" numbered="true" toc="default">
<name>Applying Segment Routing in the DC with MPLS Data Plane</name>
<section anchor="BGPREFIXSEGMENT" numbered="true" toc="default">
<name>BGP Prefix Segment (BGP Prefix-SID)</name>
<t>A BGP Prefix Segment is a segment associated with a BGP prefix. A BGP Prefix Segment is a network-wide instruction to forward the packet along the ECMP-aware best path to the related prefix.</t>
<t>The BGP Prefix Segment is defined as the BGP Prefix-SID Attribute in <xref target="RFC8669" format="default"/>, which contains an index. Throughout this document, the BGP Prefix Segment Attribute is referred to as the "BGP Prefix-SID" and the encoded index as the label index.</t>
<t>In this document, the network design decision has been made to assume that all the nodes are allocated the same SRGB (Segment Routing Global Block), e.g., [16000, 23999]. This provides operational simplification as explained in <xref target="SINGLESRGB" format="default"/>, but this is not a requirement.</t>
<t>For illustration purposes, when considering an MPLS data plane, it is assumed that the label index allocated to prefix 192.0.2.x/32 is X. As a result, a local label (16000+x) is allocated for prefix 192.0.2.x/32 by each node throughout the DC fabric.</t>
<t>When the IPv6 data plane is considered, it is assumed that Node X is allocated IPv6 address (segment) 2001:DB8::X.</t>
</section>
<section anchor="eBGP8277" numbered="true" toc="default">
<name>EBGP Labeled Unicast (RFC 8277)</name>
<t>Referring to <xref target="FIGLARGE" format="default"/> and <xref target="RFC7938" format="default"/>, the following design modifications are introduced:</t>
<ul spacing="normal">
<li>Each node peers with its neighbors via an EBGP session with extensions defined in <xref target="RFC8277" format="default"/> (named "EBGP8277" throughout this document) and with the BGP Prefix-SID attribute extension as defined in <xref target="RFC8669" format="default"/>.</li>
<li>The forwarding plane at Tier-2 and Tier-1 is MPLS.</li>
<li>The forwarding plane at Tier-3 is either IP2MPLS (if the host sends IP traffic) or MPLS2MPLS (if the host sends MPLS-encapsulated traffic).</li>
</ul>
<t><xref target="FIGSMALL" format="default"/> zooms into a path from ServerA to ServerZ within the topology of <xref target="FIGLARGE" format="default"/>.</t>
<figure anchor="FIGSMALL">
<name>Path from A to Z via Nodes 1, 4, 7, 10, and 11</name>
<artwork name="" type="" align="left" alt=""><![CDATA[
                   +-----+     +-----+     +-----+
     +------------>|NODE |     |NODE |     |NODE |
     |             |  4  |--+->|  7  |--+--| 10  |---+
     |             +-----+     +-----+     +-----+   |
     |                                               |
  +-----+                                         +-----+
  |NODE |                                         |NODE |
  |  1  |                                         | 11  |
  +-----+                                         +-----+
     |                                               |
     A                  <- Servers ->                Z
]]></artwork>
</figure>
<t>Referring to Figures <xref target="FIGLARGE" format="counter"/> and <xref target="FIGSMALL" format="counter"/>, and assuming the IP address with the AS and label-index allocation previously described, the following sections detail the control-plane operation and the data-plane states for the prefix 192.0.2.11/32 (loopback of Node11).</t>
<section anchor="CONTROLPLANE" numbered="true" toc="default">
<name>Control Plane</name>
<t>Node11 originates 192.0.2.11/32 in BGP and allocates to it a BGP Prefix-SID with label-index: index11 <xref target="RFC8669" format="default"/>.</t>
<t>Node11 sends the following EBGP8277 update to Node10:</t>
<ul empty="true">
<li>
<dl>
<dt>IP Prefix:</dt> <dd>192.0.2.11/32</dd>
<dt>Label:</dt> <dd>Implicit NULL</dd>
<dt>Next hop:</dt> <dd>Node11's interface address on the link to Node10</dd>
<dt>AS Path:</dt> <dd>{11}</dd>
<dt>BGP Prefix-SID:</dt> <dd>Label-Index 11</dd>
</dl>
</li>
</ul>
<t>Node10 receives the above update. As it is SR capable, Node10 is able to interpret the BGP Prefix-SID; therefore, it understands that it should allocate the label from its own SRGB block, offset by the label index received in the BGP Prefix-SID (16000+11, hence, 16011) to the Network Layer Reachability Information (NLRI) instead of allocating a nondeterministic label out of a dynamically allocated portion of the local label space.
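The SRGB offset rule just described amounts to a one-line computation. The sketch below is illustrative (the function and variable names are not from any real implementation); it only encodes the rule that the local label is the SRGB base plus the received label index.

```python
SRGB_BASE = 16000   # start of the common SRGB [16000, 23999] assumed above
SRGB_SIZE = 8000    # number of labels in the SRGB

def local_label(label_index):
    """Derive a node's local (incoming) label for a BGP Prefix-SID index."""
    if not 0 <= label_index < SRGB_SIZE:
        raise ValueError("label index outside the SRGB")
    return SRGB_BASE + label_index

# Node10 derives the label for Node11's prefix (label index 11):
assert local_label(11) == 16011
```

Because every node applies the same rule over the same SRGB, the prefix 192.0.2.11/32 maps to label 16011 fabric-wide.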
The implicit NULL label in the NLRI tells Node10 that it is the penultimate hop and that it must pop the top label on the stack before forwarding traffic for this prefix to Node11.</t>
<t>Then, Node10 sends the following EBGP8277 update to Node7:</t>
<ul empty="true">
<li>
<dl>
<dt>IP Prefix:</dt> <dd>192.0.2.11/32</dd>
<dt>Label:</dt> <dd>16011</dd>
<dt>Next hop:</dt> <dd>Node10's interface address on the link to Node7</dd>
<dt>AS Path:</dt> <dd>{10, 11}</dd>
<dt>BGP Prefix-SID:</dt> <dd>Label-Index 11</dd>
</dl>
</li>
</ul>
<t>Node7 receives the above update. As it is SR capable, Node7 is able to interpret the BGP Prefix-SID; therefore, it allocates the local (incoming) label 16011 (16000 + 11) to the NLRI (instead of allocating a "dynamic" local label from its label manager). Node7 uses the label in the received EBGP8277 NLRI as the outgoing label (the index is only used to derive the local/incoming label).</t>
<t>Node7 sends the following EBGP8277 update to Node4:</t>
<ul empty="true">
<li>
<dl>
<dt>IP Prefix:</dt> <dd>192.0.2.11/32</dd>
<dt>Label:</dt> <dd>16011</dd>
<dt>Next hop:</dt> <dd>Node7's interface address on the link to Node4</dd>
<dt>AS Path:</dt> <dd>{7, 10, 11}</dd>
<dt>BGP Prefix-SID:</dt> <dd>Label-Index 11</dd>
</dl>
</li>
</ul>
<t>Node4 receives the above update. As it is SR capable, Node4 is able to interpret the BGP Prefix-SID; therefore, it allocates the local (incoming) label 16011 to the NLRI (instead of allocating a "dynamic" local label from its label manager). Node4 uses the label in the received EBGP8277 NLRI as an outgoing label (the index is only used to derive the local/incoming label).</t>
<t>Node4 sends the following EBGP8277 update to Node1:</t>
<ul empty="true">
<li>
<dl>
<dt>IP Prefix:</dt> <dd>192.0.2.11/32</dd>
<dt>Label:</dt> <dd>16011</dd>
<dt>Next hop:</dt> <dd>Node4's interface address on the link to Node1</dd>
<dt>AS Path:</dt> <dd>{4, 7, 10, 11}</dd>
<dt>BGP Prefix-SID:</dt> <dd>Label-Index 11</dd>
</dl>
</li>
</ul>
<t>Node1 receives the above update. As it is SR capable, Node1 is able to interpret the BGP Prefix-SID; therefore, it allocates the local (incoming) label 16011 to the NLRI (instead of allocating a "dynamic" local label from its label manager). Node1 uses the label in the received EBGP8277 NLRI as an outgoing label (the index is only used to derive the local/incoming label).</t>
</section>
<section anchor="DATAPLANE" numbered="true" toc="default">
<name>Data Plane</name>
<t>Referring to <xref target="FIGLARGE" format="default"/>, and assuming all nodes apply the same advertisement rules described above and all nodes have the same SRGB (16000-23999), here are the IP/MPLS forwarding tables for prefix 192.0.2.11/32 at Node1, Node4, Node7, and Node10.</t>
<table anchor="NODE1FIB">
<name>Node1 Forwarding Table</name>
<tbody>
<tr>
<td align="center">Incoming Label or IP Destination</td>
<td align="center">Outgoing Label</td>
<td align="center">Outgoing Interface</td>
</tr>
<tr>
<td align="center">16011</td>
<td align="center">16011</td>
<td align="center">ECMP{3, 4}</td>
</tr>
<tr>
<td align="center">192.0.2.11/32</td>
<td align="center">16011</td>
<td align="center">ECMP{3, 4}</td>
</tr>
</tbody>
</table>
<table anchor="NODE4FIB">
<name>Node4 Forwarding Table</name>
<tbody>
<tr>
<td align="center">Incoming Label or IP Destination</td>
<td align="center">Outgoing Label</td>
<td align="center">Outgoing Interface</td>
</tr>
<tr>
<td align="center">16011</td>
<td align="center">16011</td>
<td align="center">ECMP{7, 8}</td>
</tr>
<tr>
<td align="center">192.0.2.11/32</td>
<td align="center">16011</td>
<td align="center">ECMP{7, 8}</td>
</tr>
</tbody>
</table>
<table anchor="NODE7FIB">
<name>Node7 Forwarding Table</name>
<tbody>
<tr>
<td align="center">Incoming Label or IP Destination</td>
<td align="center">Outgoing Label</td>
<td align="center">Outgoing Interface</td>
</tr>
<tr>
<td align="center">16011</td>
<td align="center">16011</td>
<td align="center">10</td>
</tr>
<tr>
<td align="center">192.0.2.11/32</td>
<td align="center">16011</td>
<td align="center">10</td>
</tr>
</tbody>
</table>
<table anchor="NODE10FIB">
<name>Node10 Forwarding Table</name>
<tbody>
<tr>
<td align="center">Incoming Label or IP Destination</td>
<td align="center">Outgoing Label</td>
<td align="center">Outgoing Interface</td>
</tr>
<tr>
<td align="center">16011</td>
<td align="center">POP</td>
<td align="center">11</td>
</tr>
<tr>
<td align="center">192.0.2.11/32</td>
<td align="center">N/A</td>
<td align="center">11</td>
</tr>
</tbody>
</table>
</section>
<section anchor="VARIATIONS" numbered="true" toc="default">
<name>Network Design Variation</name>
<t>A network design choice could consist of switching all the traffic through Tier-1 and Tier-2 as MPLS traffic. In this case, one could filter away the IP entries at Node4, Node7, and Node10. This might be beneficial in order to optimize the forwarding table size.</t>
<t>A network design choice could consist of allowing the hosts to send MPLS-encapsulated traffic based on the Egress Peer Engineering (EPE) use case as defined in <xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>.
For example, applications at HostA would send their Z-destined traffic to Node1 with an MPLS label stack where the top label is 16011 and the next label is an EPE peer segment (<xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>) at Node11 directing the traffic to Z.</t>
</section>
<section anchor="FABRIC" numbered="true" toc="default">
<name>Global BGP Prefix Segment through the Fabric</name>
<t>When the previous design is deployed, the operator enjoys global BGP Prefix-SID and label allocation throughout the DC fabric.</t>
<t>A few examples follow:</t>
<ul spacing="normal">
<li>Normal forwarding to Node11: A packet with top label 16011 received by any node in the fabric will be forwarded along the ECMP-aware BGP best path towards Node11, and the label 16011 is penultimate popped at Node10 (or at Node9).</li>
<li>Traffic-engineered path to Node11: An application on a host behind Node1 might want to restrict its traffic to paths via the Spine node Node5. The application achieves this by sending its packets with a label stack of {16005, 16011}. BGP Prefix-SID 16005 directs the packet up to Node5 along the path (Node1, Node3, Node5). BGP Prefix-SID 16011 then directs the packet down to Node11 along the path (Node5, Node9, Node11).</li>
</ul>
</section>
<section anchor="INCRDEP" numbered="true" toc="default">
<name>Incremental Deployments</name>
<t>The design previously described can be deployed incrementally. Let us assume that Node7 does not support the BGP Prefix-SID, and let us show how the fabric connectivity is preserved.</t>
<t>From a signaling viewpoint, nothing would change; even though Node7 does not support the BGP Prefix-SID, it does propagate the attribute unmodified to its neighbors.</t>
<t>From a label-allocation viewpoint, the only difference is that Node7 would allocate a dynamic (random) label to the prefix 192.0.2.11/32 (e.g., 12345) instead of the "hinted" label as instructed by the BGP Prefix-SID. The neighbors of Node7 adapt automatically as they always use the label in the BGP8277 NLRI as an outgoing label.</t>
<t>Node4 does understand the BGP Prefix-SID; therefore, it allocates the indexed label in the SRGB (16011) for 192.0.2.11/32.</t>
<t>As a result, all the data-plane entries across the network would be unchanged except the entries at Node7 and its neighbor Node4 as shown in the figures below.</t>
<t>The key point is that the end-to-end Label Switched Path (LSP) is preserved because the outgoing label is always derived from the received label within the BGP8277 NLRI.
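This label-programming behavior can be sketched as follows. All names here are illustrative (no real implementation is implied); the sketch only captures the rule that the outgoing label is copied from the received NLRI, while the Prefix-SID index merely hints how to pick the incoming label.

```python
def install_fib_entry(received_label, label_index, supports_prefix_sid,
                      allocate_dynamic_label):
    """Return (incoming_label, outgoing_label) for a received BGP8277 route.

    The outgoing label is always the label carried in the received NLRI;
    the BGP Prefix-SID index only influences the incoming-label choice.
    """
    if supports_prefix_sid:
        incoming = 16000 + label_index       # indexed label in the SRGB
    else:
        incoming = allocate_dynamic_label()  # e.g., Node7 picks 12345
    return incoming, received_label

# Node7 (no Prefix-SID support) still forwards with the received label 16011,
# so the end-to-end LSP is preserved:
incoming, outgoing = install_fib_entry(16011, 11, False, lambda: 12345)
assert (incoming, outgoing) == (12345, 16011)
```

An SR-capable neighbor such as Node4 simply learns Node7's dynamic label as its outgoing label, while keeping its own SRGB-indexed incoming label.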
The index in the BGP Prefix-SID is only used as a hint on how to allocate the local label (the incoming label) but never for the outgoing label.</t>
<table anchor="NODE7FIBINC">
<name>Node7 Forwarding Table</name>
<tbody>
<tr>
<td align="center">Incoming Label or IP Destination</td>
<td align="center">Outgoing Label</td>
<td align="center">Outgoing Interface</td>
</tr>
<tr>
<td align="center">12345</td>
<td align="center">16011</td>
<td align="center">10</td>
</tr>
</tbody>
</table>
<table anchor="NODE4FIBINC">
<name>Node4 Forwarding Table</name>
<tbody>
<tr>
<td align="center">Incoming Label or IP Destination</td>
<td align="center">Outgoing Label</td>
<td align="center">Outgoing Interface</td>
</tr>
<tr>
<td align="center">16011</td>
<td align="center">12345</td>
<td align="center">7</td>
</tr>
</tbody>
</table>
<t>The BGP Prefix-SID can thus be deployed incrementally, i.e., one node at a time.</t>
<t>When deployed together with a homogeneous SRGB (the same SRGB across the fabric), the operator incrementally enjoys the global prefix segment benefits as the deployment progresses through the fabric.</t>
</section>
</section>
<section anchor="iBGP3107" numbered="true" toc="default">
<name>IBGP Labeled Unicast (RFC 8277)</name>
<t>The same exact design as EBGP8277 is used with the following modifications:</t>
<ul spacing="normal">
<li>All nodes use the same AS number.</li>
<li>Each node peers with its neighbors via an internal BGP session (IBGP) with extensions defined in <xref target="RFC8277" format="default"/> (named "IBGP8277" throughout this document).</li>
<li>Each node acts as a route reflector for each of its neighbors and with the next-hop-self option. Next-hop-self is a well-known operational feature that consists of rewriting the next hop of a BGP update prior to sending it to the neighbor. Usually, it's a common practice to apply next-hop-self behavior towards IBGP peers for EBGP-learned routes. In the case outlined in this section, it is proposed to use the next-hop-self mechanism also to IBGP-learned routes.</li>
</ul>
<figure anchor="IBGPFIG">
<name>IBGP Sessions with Reflection and Next-Hop-Self</name>
<artwork name="" type="" align="left" alt=""><![CDATA[
                       Cluster-1
                    +-----------+
                    |  Tier-1   |
                    |  +-----+  |
                    |  |NODE |  |
                    |  |  5  |  |
        Cluster-2   |  +-----+  |   Cluster-3
       +---------+  |           |  +---------+
       | Tier-2  |  |           |  | Tier-2  |
       | +-----+ |  |  +-----+  |  | +-----+ |
       | |NODE | |  |  |NODE |  |  | |NODE | |
       | |  3  | |  |  |  6  |  |  | |  9  | |
       | +-----+ |  |  +-----+  |  | +-----+ |
       |         |  |           |  |         |
       |         |  |           |  |         |
       | +-----+ |  |  +-----+  |  | +-----+ |
       | |NODE | |  |  |NODE |  |  | |NODE | |
       | |  4  | |  |  |  7  |  |  | | 10  | |
       | +-----+ |  |  +-----+  |  | +-----+ |
       +---------+  |           |  +---------+
                    |           |
                    |  +-----+  |
                    |  |NODE |  |
        Tier-3      |  |  8  |  |      Tier-3
   +-----+ +-----+  |  +-----+  |  +-----+ +-----+
   |NODE | |NODE |  +-----------+  |NODE | |NODE |
   |  1  | |  2  |                 | 11  | | 12  |
   +-----+ +-----+                 +-----+ +-----+
]]></artwork>
</figure>
<ul spacing="normal">
<li>
<t>For simple and efficient route propagation filtering and as illustrated in <xref target="IBGPFIG" format="default"/>:</t>
<ul spacing="normal">
<li>Node5, Node6, Node7, and Node8 use the same Cluster ID (Cluster-1).</li>
<li>Node3 and Node4 use the same Cluster ID (Cluster-2).</li>
<li>Node9 and Node10 use the same Cluster ID (Cluster-3).</li>
</ul>
</li>
<li>The control-plane behavior is mostly the same as described in the previous section; the only difference is that the EBGP8277 path propagation is simply replaced by an IBGP8277 path reflection with next hop changed to self.</li>
<li>The data-plane tables are exactly the same.</li>
</ul>
</section>
</section>
<section anchor="IPV6" numbered="true" toc="default">
<name>Applying Segment Routing in the DC with IPv6 Data Plane</name>
<t>The design described in <xref target="RFC7938" format="default"/> is reused with one single modification. It is highlighted using the example of the reachability to Node11 via Spine node Node5.</t>
<t>Node5 originates 2001:DB8::5/128 with the attached BGP Prefix-SID for IPv6 packets destined to segment 2001:DB8::5 (<xref target="RFC8402" format="default"/>).</t>
<t>Node11 originates 2001:DB8::11/128 with the attached BGP Prefix-SID advertising the support of the Segment Routing Header (SRH) for IPv6 packets destined to segment 2001:DB8::11.</t>
<t>The control-plane and data-plane processing of all the other nodes in the fabric is unchanged.
Specifically, the routes to 2001:DB8::5 and 2001:DB8::11 are installed in the FIB along the EBGP best path to Node5 (Spine node) and Node11 (ToR node) respectively.</t> <t>An application on HostA that needs to send traffic to HostZ via only Node5 (Spine node) can do so by sending IPv6 packets with a Segment Routing Header (SRH, <xref target="I-D.ietf-6man-segment-routing-header" format="default"/>). The destination address and active segment is set to 2001:DB8::5. The next and last segment is set to 2001:DB8::11.</t> <t>The application must only use IPv6 addresses that have been advertised as capable for SRv6 segment processing (e.g., for which the BGP Prefix Segment capability has been advertised). How applications learn this (e.g., centralized controller and orchestration) is outside the scope of this document.</t> </section> <section anchor="COMMHOSTS" numbered="true" toc="default"> <name>Communicating Path Information to the Host</name> <t>There are two general methods for communicating path information to the end-hosts: "proactive" and "reactive", aka "push" and "pull" models. There are multiple ways to implement either of these methods. Here, it is noted that one way could be using a centralized controller: the controller either tells the hosts of the prefix-to-path mappings beforehand and updates them as needed (network event driven push) or responds to the hosts making requests for a path to a specific destination (host event driven pull). It is also possible to use a hybrid model, i.e., pushing some state from the controller in response to particular network events, while the host pulls other state on demand.</t> <t>Note also that when disseminating network-related data to the end-hosts, a trade-off is made to balance the amount of information vs.
the level of visibility in the network state. This applies to both push and pull models. In the extreme case, the host would request path information on every flow and keep no local state at all. On the other end of the spectrum, information for every prefix in the network along with available paths could be pushed and continuously updated on all hosts.</t> </section> <section anchor="BENEFITS" numbered="true" toc="default"> <name>Additional Benefits</name> <section anchor="MPLSIMPLE" numbered="true" toc="default"> <name>MPLS Data Plane with Operational Simplicity</name> <t>As required by <xref target="RFC7938" format="default"/>, no new signaling protocol is introduced. The BGP Prefix-SID is a lightweight extension to BGP Labeled Unicast <xref target="RFC8277" format="default"/>. It applies either to EBGP- or IBGP-based designs.</t> <t>Specifically, LDP and RSVP-TE are not used. These protocols would drastically impact the operational complexity of the data center and would not scale. This is in line with the requirements expressed in <xref target="RFC7938" format="default"/>.</t> <t>Provided the same SRGB is configured on all nodes, all nodes use the same MPLS label for a given IP prefix. This is simpler from an operation standpoint, as discussed in <xref target="SINGLESRGB" format="default"/>.</t> </section> <section anchor="MINFIB" numbered="true" toc="default"> <name>Minimizing the FIB Table</name> <t>The designer may decide to switch all the traffic at Tier-1 and Tier-2 based on MPLS, thereby drastically decreasing the IP table size at these nodes.</t> <t>This is easily accomplished by encapsulating the traffic either directly at the host or at the source ToR node.
The encapsulation is done by pushing the BGP Prefix-SID of the destination ToR for intra-DC traffic, or by pushing the BGP Prefix-SID for the border node for inter-DC or DC-to-outside-world traffic.</t> </section> <section anchor="EPE" numbered="true" toc="default"> <name>Egress Peer Engineering</name> <t>It is straightforward to combine the design illustrated in this document with the Egress Peer Engineering (EPE) use case described in <xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>.</t> <t>In such a case, the operator is able to engineer its outbound traffic on a per-host-flow basis, without incurring any additional state at intermediate points in the DC fabric.</t> <t>For example, the controller only needs to inject a per-flow state on the HostA to force it to send its traffic destined to a specific Internet destination D via a selected border node (say Node12 in <xref target="FIGLARGE" format="default"/> instead of another border node, Node11) and a specific egress peer of Node12 (say peer AS 9999 of local PeerNode segment 9999 at Node12 instead of any other peer that provides a path to the destination D). Any packet matching this state at HostA would be encapsulated with SR segment list (label stack) {16012, 9999}. 16012 would steer the flow through the DC fabric, leveraging any ECMP, along the best path to border node Node12. Once the flow gets to border node Node12, the active segment is 9999 (because of Penultimate Hop Popping (PHP) on the upstream neighbor of Node12). This EPE PeerNode segment forces border node Node12 to forward the packet to peer AS 9999 without any IP lookup at the border node. There is no per-flow state for this engineered flow in the DC fabric.
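The label arithmetic of this EPE example can be made concrete with a short sketch. This is illustrative only and not part of the design: it assumes the fabric-wide SRGB starting at 16000 and the convention, used in this document's examples, that Node K is assigned BGP Prefix-SID index K.

```python
# Illustrative sketch (not part of the RFC design): derive the EPE
# label stack used in the example above. Assumes the shared SRGB
# starts at 16000 and Node K carries BGP Prefix-SID index K.

SRGB_BASE = 16000  # assumed fabric-wide SRGB start


def prefix_sid_label(node_index: int) -> int:
    """Global MPLS label for a node's Prefix-SID under the shared SRGB."""
    return SRGB_BASE + node_index


def epe_stack(border_node_index: int, peer_node_segment: int) -> list:
    """Two-label stack: steer to the border node along the ECMP-aware
    best path, then out of the chosen egress peer via the local EPE
    PeerNode segment, which becomes the active segment after PHP on
    the border node's upstream neighbor."""
    return [prefix_sid_label(border_node_index), peer_node_segment]


# HostA's flow to destination D, forced via border node Node12 and
# its egress peer AS 9999:
print(epe_stack(12, 9999))  # [16012, 9999]
```

Only the source (HostA) holds this two-label state; every transit node forwards on the first label alone.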
A benefit of SR is that the per-flow state is only required at the source.</t> <t>As well as allowing full traffic-engineering control, such a design also offers FIB table-minimization benefits as the Internet-scale FIB at border node Node12 is not required if all FIB lookups are avoided there by using EPE.</t> </section> <section anchor="ANYCAST" numbered="true" toc="default"> <name>Anycast</name> <t>The design presented in this document preserves the availability and load-balancing properties of the base design presented in <xref target="RFC8402" format="default"/>.</t> <t>For example, one could assign an anycast loopback 192.0.2.20/32 and associate segment index 20 to it on the border nodes Node11 and Node12 (in addition to their node-specific loopbacks). Doing so, the EPE controller could express a default "go-to-the-Internet via any border node" policy as segment list {16020}.
Indeed, from any host in the DC fabric or from any ToR node, 16020 steers the packet towards the border nodes Node11 or Node12 leveraging ECMP where available along the best paths to these nodes.</t> </section> </section> <section anchor="SINGLESRGB" numbered="true" toc="default"> <name>Preferred SRGB Allocation</name> <t>In the MPLS case, it is recommended to use the same SRGBs at each node.</t> <t>Different SRGBs in each node likely increase the complexity of the solution both from an operational viewpoint and from a controller viewpoint.</t> <t>From an operational viewpoint, it is much simpler to have the same global label at every node for the same destination (the MPLS troubleshooting is then similar to the IPv6 troubleshooting where this global property is a given).</t> <t>From a controller viewpoint, this allows us to construct simple policies applicable across the fabric.</t> <t>Let us consider two applications, A and B, respectively connected to Node1 and Node2 (ToR nodes). Application A has two flows, FA1 and FA2, destined to Z. B has two flows, FB1 and FB2, destined to Z. The controller wants FA1 and FB1 to be load shared across the fabric while FA2 and FB2 must be respectively steered via Node5 and Node8.</t> <t>Assuming a consistent unique SRGB across the fabric as described in this document, the controller can simply do it by instructing A and B to use {16011} respectively for FA1 and FB1 and by instructing A and B to use {16005 16011} and {16008 16011} respectively for FA2 and FB2.</t> <t>Let us assume a design where the SRGB is different at every node and where the SRGB of each node is advertised using the Originator SRGB TLV of the BGP Prefix-SID as defined in <xref target="RFC8669" format="default"/>: SRGB of Node K starts at value K*1000, and the SRGB length is 1000 (e.g., Node1's SRGB is [1000, 1999], Node2's SRGB is [2000, 2999], ...).</t> <t>In this case, the controller would need to collect and store all of these different SRGBs (e.g., through the Originator SRGB TLV of the BGP Prefix-SID); furthermore, it would also need to adapt the policy for each host. Indeed, the controller would instruct A to use {1011} for FA1 while it would have to instruct B to use {2011} for FB1 (while with the same SRGB, both policies are the same {16011}).</t> <t>Even worse, the controller would instruct A to use {1005, 5011} for FA2 while it would instruct B to use {2008, 8011} for FB2 (while with the same SRGB, the second segment is the same across both policies: 16011). When combining segments to create a policy, one needs to carefully update the label of each segment. This is obviously more error prone, more complex, and more difficult to troubleshoot.</t> </section> <section anchor="IANA" numbered="true" toc="default"> <name>IANA Considerations</name> <t>This document has no IANA actions.</t> </section> <section anchor="MANAGE" numbered="true" toc="default"> <name>Manageability Considerations</name> <t>The design and deployment guidelines described in this document are based on the network design described in <xref target="RFC7938" format="default"/>.</t> <t>The deployment model assumed in this document is based on a single domain where the interconnected DCs are part of the same administrative domain (which, of course, is split into different autonomous systems).
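The contrast drawn in the "Preferred SRGB Allocation" section above can be sketched in a few lines. This is illustrative only: it reuses the hypothetical per-node scheme where Node K's SRGB starts at K*1000, against a shared SRGB starting at 16000.

```python
# Illustrative sketch (not part of the RFC design): compare controller
# policies under a shared SRGB vs. per-node SRGBs ("SRGB of Node K
# starts at K*1000", as in the example above).

COMMON_SRGB_BASE = 16000  # assumed shared SRGB start


def label_shared(node_index: int) -> int:
    """Label under one fabric-wide SRGB: identical at every node."""
    return COMMON_SRGB_BASE + node_index


def label_per_node(advertising_node: int, node_index: int) -> int:
    """Label under per-node SRGBs: depends on which node forwards."""
    return advertising_node * 1000 + node_index


# Shared SRGB: the flows steered via Node5 and Node8 towards Node11
# share their second label, and the stacks are host-independent.
print([label_shared(5), label_shared(11)])   # [16005, 16011]
print([label_shared(8), label_shared(11)])   # [16008, 16011]

# Per-node SRGBs: each label must be translated into the SRGB of the
# node that will forward on it (Node1/Node2 first, Node5/Node8 next).
print([label_per_node(1, 5), label_per_node(5, 11)])  # [1005, 5011]
print([label_per_node(2, 8), label_per_node(8, 11)])  # [2008, 8011]
```

The per-node variant forces the controller to recompute every label per advertising node, which is the source of the added complexity noted above.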
The operator has full control of the whole domain, and the usual operational and management mechanisms and procedures are used in order to prevent any information related to internal prefixes and topology from being leaked outside the domain.</t> <t>As recommended in <xref target="RFC8402" format="default"/>, the same SRGB should be allocated in all nodes in order to facilitate the design, deployment, and operations of the domain.</t> <t>When EPE (<xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>) is used (as explained in <xref target="EPE" format="default"/>), the same operational model is assumed. EPE information is originated and propagated throughout the domain towards an internal server, and unless explicitly configured by the operator, no EPE information is leaked outside the domain boundaries.</t> </section> <section anchor="SEC" numbered="true" toc="default"> <name>Security Considerations</name> <t>This document proposes to apply SR to a well-known scalability requirement expressed in <xref target="RFC7938" format="default"/> using the BGP Prefix-SID as defined in <xref target="RFC8669" format="default"/>.</t> <t>It has to be noted, as described in <xref target="MANAGE" format="default"/>, that the design illustrated in <xref target="RFC7938" format="default"/> and in this document refers to a deployment model where all nodes are under the same administration. In this context, it is assumed that the operator doesn't want to leak outside of the domain any information related to internal prefixes and topology. The internal information includes Prefix-SID and EPE information.
In order to prevent such leaking, the standard BGP mechanisms (filters) are applied on the boundary of the domain.</t> <t>Therefore, the solution proposed in this document does not introduce any additional security concerns beyond what is expressed in <xref target="RFC7938" format="default"/> and <xref target="RFC8669" format="default"/>. It is assumed that the security and confidentiality of the prefix and topology information is preserved by outbound filters at each peering point of the domain as described in <xref target="MANAGE" format="default"/>.</t> </section> </middle> <back> <displayreference target="I-D.ietf-spring-segment-routing-central-epe" to="SR-CENTRAL-EPE"/> <displayreference target="I-D.ietf-6man-segment-routing-header" to="IPv6-SRH"/> <references> <name>References</name> <references> <name>Normative References</name> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.8277.xml"/> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.4271.xml"/> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.7938.xml"/> <!-- I-D.ietf-spring-segment-routing became RFC 8402 --> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.8402.xml"/> <!-- I-D.ietf-idr-bgp-prefix-sid-27: companion document --> <reference anchor='RFC8669' target='https://www.rfc-editor.org/info/rfc8669'> <front> <title>Segment Routing Prefix Segment Identifier Extensions for BGP</title> <author initials='S' surname='Previdi' fullname='Stefano Previdi'> <organization/> </author> <author initials='C' surname='Filsfils' fullname='Clarence Filsfils'> <organization/> </author> <author initials='A' surname='Lindem' fullname='Acee Lindem' role="editor"> <organization/> </author> <author initials='A' surname='Sreekantiah' fullname='Arjun Sreekantiah'> <organization/> </author> <author initials='H'
surname='Gredler' fullname='Hannes Gredler'> <organization/> </author> <date month='December' year='2019'/> </front> <seriesInfo name='RFC' value='8669'/> <seriesInfo name="DOI" value="10.17487/RFC8669"/> </reference> </references> <references> <name>Informative References</name> <xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-spring-segment-routing-central-epe.xml"/> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.6793.xml"/> <xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-6man-segment-routing-header.xml"/> <!-- I-D.ietf-6man-segment-routing-header: I-D exists --> </references> </references> <section anchor="Acknowledgements" numbered="false" toc="default"> <name>Acknowledgements</name> <t>The authors would like to thank Benjamin Black, Arjun Sreekantiah, Keyur Patel, Acee Lindem, and Anoop Ghanwani for their comments and review of this document.</t> </section> <section anchor="Contributors" numbered="false" toc="default"> <name>Contributors</name> <artwork name="" type="" align="left" alt=""><![CDATA[Gaya Nagarajan
Facebook
United States of America

Email: gaya@fb.com]]></artwork> <artwork name="" type="" align="left" alt=""><![CDATA[Gaurav Dawra
Cisco Systems
United States of America

Email: gdawra.ietf@gmail.com]]></artwork> <artwork name="" type="" align="left" alt=""><![CDATA[Dmitry Afanasiev
Yandex
Russian Federation

Email: fl0w@yandex-team.ru]]></artwork> <artwork name="" type="" align="left" alt=""><![CDATA[Tim Laberge
Cisco
United States of America

Email: tlaberge@cisco.com]]></artwork> <artwork name="" type="" align="left" alt=""><![CDATA[Edet Nkposong
Salesforce.com Inc.
United States of America

Email: enkposong@salesforce.com]]></artwork> <artwork name="" type="" align="left" alt=""><![CDATA[Mohan Nanduri
Microsoft
United States of America

Email: mohan.nanduri@oracle.com]]></artwork> <artwork name="" type="" align="left" alt=""><![CDATA[James Uttaro
ATT
United States of America

Email: ju1738@att.com]]></artwork> <artwork name="" type="" align="left" alt=""><![CDATA[Saikat Ray
Unaffiliated
United States of America

Email: raysaikat@gmail.com]]></artwork> <artwork name="" type="" align="left" alt=""><![CDATA[Jon Mitchell
Unaffiliated
United States of America

Email: jrmitche@puck.nether.net]]></artwork> </section> </back> </rfc>