NETCONF Efficiency Extensions
YumaWorks, Inc.
andy@yumaworks.com
This document describes protocol extensions to improve
the efficiency of the Network Configuration Protocol (NETCONF).
Protocol capabilities and operations are defined to reduce
network usage and transaction complexity.
There is a need for standard mechanisms to
allow NETCONF application designers to
manage NETCONF servers more efficiently when used
in network environments with poor connectivity,
low bandwidth, and/or high latency. In such conditions,
it is desirable to minimize network usage with respect to
the size of protocol messages and the number of protocol
operations required to perform a network management function.
The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in BCP
14.
The following terms are defined in :
candidate configuration datastore
client
configuration data
datastore
configuration datastore
protocol operation
running configuration datastore
server
startup configuration datastore
The following terms are defined in :
container
data node
key leaf
leaf
leaf-list
list
The following terms are defined in :
data resource
datastore resource
The following term is defined in :
YANG Patch
The following terms are defined:
config ID: An opaque string identifier that represents
the state of the running datastore contents on the server.
A new config ID is chosen by the server each time the
server running configuration datastore is altered in any way.
depth filter: A mechanism implemented within the NETCONF
server to allow a client to retrieve only a limited number
of levels within a subtree, instead of retrieving
the entire subtree.
time filter: A mechanism implemented within the NETCONF
server to allow a client to retrieve only data that has been
modified since a specified date and time.
A simplified graphical representation of the data model is used in
this document. The meaning of the symbols in these
diagrams is as follows:
Brackets "[" and "]" enclose list keys.
Abbreviations before data node names: "rw" means configuration
(read-write) and "ro" means state data (read-only).
Symbols after data node names: "?" means an optional node and "*"
denotes a "list" or "leaf‑list".
Parentheses enclose choice and case nodes, and case nodes are also
marked with a colon (":").
Ellipsis ("...") stands for contents of subtrees that are not shown.
This document attempts to address the following
problems with NETCONF protocol procedures.
A client application often needs to retrieve the entire
running configuration datastore contents, usually at the
start of an editing session. The <rpc‑reply> for
this <get‑config> request can be very large (e.g., greater
than 250,000 bytes).
If a large number of server connections are
lost and then restarted, the quantity of large <rpc‑reply>
messages from every server could severely impact network
performance.
It would be useful if the <hello> message exchange
could be enhanced so an entity-tag value for the current
running datastore configuration is included in
the server <hello> message. A client can cache the
server configuration identifier and omit an
initial <get‑config> operation if the value from the
server <hello> message matches the cached value.
NETCONF uses a hard-wired message encoding format, namely XML.
However, XML tends to be verbose, especially for YANG data models
that have long data node identifiers.
There is no reason for the NETCONF message encoding to be
hardwired, except for the <hello> message.
It would be useful if the NETCONF protocol could
support other message encoding formats, such as JSON .
The <hello> message exchange could be enhanced so
the client and server negotiated the message encoding to use
for all other messages via a capability exchange
included in both <hello> messages.
There are several deficiencies with the NETCONF editing
procedures that could be improved.
Multi-operation functions can be required. A single edit can
take up to 9 operations (e.g., 3 <lock> requests, an <edit‑config>,
a <commit>, a <copy‑config>, and 3 <unlock> requests). Several
operations are required to complete a set of 1 or more edits
on a NETCONF server.
Each operation uses 1 request and 1 response message.
If the candidate datastore is used, then 1 extra operation
is required (for the <commit> operation) to activate the edit(s).
If the startup datastore is used then 1 extra operation is
required (for the <copy‑config> operation) to save the
running datastore contents in non-volatile storage.
If global locking is used, then 2 extra operations
are required for each datastore involved (candidate, running, startup).
Since the datastore is locked at the start and unlocked at the
end of the entire edit operation, these extra round trips
are intervals in which the datastore is locked but
no datastore access is being done.
Obtaining locks can be expensive. If the server has more than
1 datastore (e.g., candidate + running or running + startup),
then multiple lock requests are required, since the <lock>
and <unlock> operations only affect 1 datastore at a time.
This can cause a long delay or even deadlock if multiple
clients are attempting to obtain global locks at once.
E.g., client 1 holds a lock on the candidate datastore
and is trying to lock the running datastore. At the same
time, client 2 holds a lock on the running datastore
and is trying to lock the candidate datastore.
Using locks can be brittle. NETCONF clients are
intended to be programmatic, so it is not likely that locks
will be long-lived. Global locks are designed to be
short-lived since they block write access to the entire datastore.
If lock collisions do occur, they are likely to be cleared
very quickly. It would be useful if the client could request
how long to wait for locks to clear instead of immediately
rejecting an edit request due to an 'in‑use' error.
Edit operations are implied by <config> content.
NETCONF uses a default operation and an explicit operation attribute
within an arbitrarily complete XML subtree to represent
a configuration datastore. There are several corner cases
that are not standardized and are very implementation-dependent.
Edit operations are not protected against multi-client alterations.
It is a simple and common practice to retrieve a
configuration data resource, change 1 or more fields,
and then update the resource on the server.
Since retrieval and edit operations are separate, there is always a
chance that another client has altered the resource after
the first client's <get‑config> operation but before its
<edit‑config> operation. Each client could be protected if there was
an entity tag associated with each data resource, and an
edit request could be rejected if the client attempted to
edit a different version of the data resource than expected.
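This lost-update race, and the entity-tag check that would prevent it, can be sketched with a small compare-and-swap example. The `Resource` class and its etag derivation are invented for illustration and are not part of any NETCONF implementation:

```python
import hashlib

class Resource:
    """Hypothetical data resource guarded by an entity tag."""
    def __init__(self, value):
        self.value = value

    @property
    def etag(self):
        # Opaque tag derived from the current contents
        return hashlib.sha1(repr(self.value).encode()).hexdigest()[:8]

def conditional_edit(resource, if_match, new_value):
    """Apply the edit only if the client-supplied etag still matches."""
    if resource.etag != if_match:
        return "error: in-use/conflict"   # another client changed it first
    resource.value = new_value
    return "ok"

r = Resource({"mtu": 1500})
tag = r.etag                      # client 1 retrieves the resource and etag
r.value = {"mtu": 9000}           # client 2 edits behind client 1's back
result = conditional_edit(r, tag, {"mtu": 1400})   # rejected: etag is stale
```

With a fresh etag the same edit would succeed; without the etag check, client 1's edit would silently overwrite client 2's change.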
There is no bulk-edit support. If the same edit is needed in
multiple instances of a particular data resource, then the
data must be repeated for each instance in the <edit‑config>
or <copy‑config> request. The request message size could be
minimized if there was a way to apply a set
of edits to multiple target nodes at once.
There is no confirmed commit support for the running datastore.
The ability to back up the running datastore, change it,
and revert it unless the client confirms the changes
does not depend on the candidate datastore.
A NETCONF server with limited memory is not likely to support
the candidate datastore.
This feature is useful for any type of network-wide
configuration change, regardless of device size.
NETCONF data retrieval via the <get> and <get‑config> operations
can be very inefficient. Some vendors do not even support <get>
because it can be such a resource-intensive operation
and return an enormous amount of data,
especially if all server data is requested at once.
A client cannot retrieve just the non-configuration data.
The NETCONF <get> operation allows a client to
retrieve data from the server but it returns all data,
including configuration datastore nodes. The <get‑config>
operation already returns all configuration datastore nodes.
It was originally thought that <get> should return all nodes
so the client would not have to correlate configuration
and non-configuration data nodes, since they would be
mixed together in the reply.
Operational experience has shown that the <get> operation
without reasonable filters to reduce the returned data
can significantly degrade device performance and return
enormous XML instance documents in the <rpc‑reply>.
There is no "last‑modified" indication or time filtering.
The NETCONF protocol has no standard mechanisms to indicate
to a client when a datastore was last modified, or to allow
a client to retrieve data only if it has been modified
since a specified time. This makes polling applications
very inefficient because they will regularly burden the
server, the network, and themselves with retrieval and
processing requests for data that has not changed.
There is no simple list instance discovery mechanism.
Sometimes the client application wants to discover what
data exists on the server, particularly list entries.
There is a need for a simple mechanism to retrieve
just the key leaf nodes within a subtree.
The NETCONF subtree filtering mechanism does provide
a very complex way for the client to request just key leafs
for specific list entries. A simpler mechanism is needed
which will allow the client to discover the list instances
present.
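The intended outcome of such a key-leaf-only retrieval can be sketched as follows; the interface entries and key names are invented for illustration:

```python
def keys_only(entries, key_leafs):
    """Reduce list entries to just their key leafs for instance discovery."""
    return [{k: entry[k] for k in key_leafs if k in entry}
            for entry in entries]

# Hypothetical list entries as they might exist on a server
interfaces = [
    {"name": "eth0", "mtu": 1500, "enabled": True},
    {"name": "eth1", "mtu": 9000, "enabled": False},
]

# The client learns which list instances exist without
# retrieving the full entries
instance_ids = keys_only(interfaces, ["name"])
```

`instance_ids` holds only the key leafs, so the reply stays small no matter how large each list entry is.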
There is no subtree depth control.
NETCONF filters allow the client to select specific
sub-trees within the conceptual datastore on the server.
However, sometimes the client does not really need the
entire subtree, which may contain many nested list entries,
and be very large.
There is sometimes a need to limit the depth of the sub-trees
retrieved from the server. A consistent and simple algorithm
for determining what data nodes start a new level is needed.
The content filter specification is not extensible.
The NETCONF <get> and <get‑config> operations use
a hard-coded content filtering mechanism.
They use a "type" XML attribute to indicate which of two
filter specification types they support, and a "select"
XML attribute if the :xpath capability is supported and
an XPath expression filter specification is provided.
This design does not allow additional content filter specification
types to be supported by an implementation. It does not
allow the standard to be easily extended in a modular fashion.
In addition, this design does not allow YANG statements to be used
to properly describe the protocol operation.
The special "get‑filter‑element‑attributes" YANG extension in
the ietf-netconf module is not extensible, and it does not
really count as proper YANG, since this
extension is outside the YANG language definition.
There is no standard metadata or standard way to retrieve metadata.
The <with‑defaults> parameter allows 1 specific type of metadata
to be returned (i.e., 'report‑all‑tagged' mode). This ad-hoc approach
does not scale well and is not extensible. It would be useful
if standard and vendor-specific metadata could be identified
and retrieved with standard operations.
This document defines some NETCONF protocol operations
and new capabilities to reduce network usage and increase
functionality at the same time.
All NETCONF efficiency extensions are completely backward-compatible
with the current definitions in .
An old client will ignore any new <capability> URIs sent by the server,
and will not use the new operations. No existing operations are affected
by the new operations, so the extensions will be transparent to
an existing NETCONF client.
A new capability called "config‑id" is defined to
identify the current running datastore configuration contents
with an opaque string. A client
can cache this value for each server that supports this
capability, along with a copy of its running configuration.
When a new session is started, the client can examine
the "config‑id" <capability> URI sent by the server.
If it is the same as the cached value then the client
can use the cached running datastore copy instead of
sending an initial <get‑config> operation to the server.
The :config-id capability is ignored in the calculation
of the :capability-id capability.
Refer to for details on configuration ID advertisement.
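The client-side decision can be sketched as below. The capability URN is a placeholder (the actual capability string is defined later in this document), and the helper names are invented:

```python
from urllib.parse import parse_qs

# Placeholder URN: the real config-id capability string is defined
# elsewhere in this document and is not reproduced here
CONFIG_ID_CAP = "urn:example:capability:config-id:1.0"

def extract_config_id(hello_capabilities):
    """Return the "id" argument of the config-id capability URI, if any."""
    for uri in hello_capabilities:
        base, _, query = uri.partition("?")
        if base == CONFIG_ID_CAP:
            return parse_qs(query).get("id", [None])[0]
    return None

def need_initial_get_config(cached_id, hello_capabilities):
    """Skip the initial <get-config> only when the cached ID still matches."""
    server_id = extract_config_id(hello_capabilities)
    return server_id is None or server_id != cached_id

caps = ["urn:ietf:params:netconf:base:1.1",
        CONFIG_ID_CAP + "?id=42f9"]
```

A client that cached the ID "42f9" along with the running configuration can skip the large initial retrieval; any other cached value (or a server without the capability) forces a fresh <get‑config>.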
A new capability called "encoding" is defined to allow a client
to request that an alternate message encoding be used for the NETCONF
session. The capability is encoded as a comma-separated list of
media types. The client orders this list with the
highest preference first. The server list is unordered.
The first match (evaluated in client priority order) is the message
encoding used for the rest of the session.
Refer to for details on message encoding negotiation.
A new NETCONF protocol operation called <edit2> is defined
to address the deficiencies described in .
This operation allows the entire NETCONF edit procedure
to be accomplished with 1 request message. The editing
procedures are aligned with the resource model defined in .
Refer to for details on the <edit2> operation.
The "confirmed‑commit" procedure has been integrated into
the <edit2> operation, and can be supported by any server
without requiring support for the candidate datastore.
It is optional to implement, based on the "confirmed‑edit"
capability defined in .
Refer to for details on the <complete‑commit>
operation and for details on the <revert‑commit>
operation.
A new NETCONF protocol operation called <get2> is defined
to address the deficiencies described in .
This operation allows several filter types to be combined
to control the data that is returned in the <rpc‑reply>
message, and provides an extensible framework for retrieving
metadata associated with datastore or data resources.
Refer to for details on the <get2> operation.
This section defines the NETCONF efficiency extensions:
The :config-id capability indicates that the server maintains a config
ID for the running configuration datastore. This identifier value is
selected by the server and treated as an opaque string by the client.
The server SHOULD save the config ID for the running
datastore in non-volatile storage. When the server boots
or restarts, the initial configuration ID SHOULD be the same
as the last instantiation, if the server does not support
the :startup capability (so the non-volatile stored version
mirrors the running datastore). If the server does support
the :startup capability, then the initial configuration ID
SHOULD be the same as the version last saved to
non-volatile storage.
The :config-id capability is sent in every server <hello> message.
The "id" parameter for the :config-id capability is set to
the current config ID for the running datastore on the server:
The :config-id capability is not dependent on any other capabilities.
The :config-id capability is identified by the following
capability string:
This capability MUST be advertised in every server <hello> message.
The :config-id capability URI MUST contain an "id" argument
assigned an opaque string value indicating the current
config ID value for the running datastore.
For example:
The current config ID value MUST be updated any time
a "netconf‑config‑change" event would be generated by the server.
If is supported, then the "config‑id" leaf
defined in MUST be included in <netconf‑config‑change>
event notifications.
If the "with‑metadata" parameter in the <get2> operation
specifies the "config‑id" identity, then the server MUST
return the current config ID for the running datastore,
if the "source" parameter identifies the running datastore.
The server MAY maintain config IDs for other datastores
as well.
The :config-id capability does not introduce any
new protocol operations.
The :config-id capability does not modify any existing
protocol operations.
The :config-id capability does not interact with any other
capabilities.
The :encoding capability is used by the client to request
an alternate message encoding be used instead of XML.
The client and server both send a list of media types
for the message encodings they support, encoded as a
comma-separated list (with no whitespace).
The client list is ordered by preference.
The server list is unordered.
Both the client and server will examine the other's <hello>
message for the "encoding" <capability> URI. If not present,
then the default encoding is used, which is XML.
The client list is compared against the server list,
checked in the client specified order. If the same
media type appears in the server list, then that is
the encoding that will be active for the remainder
of the session (i.e., starting with the first <rpc> request).
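The matching rule described above can be sketched directly; the function name and the fallback behavior when a capability is absent are illustrative:

```python
def negotiate_encoding(client_types, server_types,
                       default="application/xml"):
    """First match in client preference order wins; fall back to XML."""
    if not client_types or not server_types:
        return default                   # capability absent on either side
    server_set = set(server_types)       # server list is unordered
    for media_type in client_types:      # client list is ordered
        if media_type in server_set:
            return media_type
    return default                       # both sides must support XML anyway

client = ["application/json", "application/xml"]
server = ["application/xml", "application/json"]
negotiated = negotiate_encoding(client, server)
```

Because both peers are required to support "application/xml", the search always terminates with a usable encoding even when the lists share nothing else.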
All <rpc>, <rpc‑reply>, and <notification> messages MUST
be encoded in the negotiated encoding.
Both the client and server MUST support the "application/xml" media type
to be backward-compatible with .
If "application/json" encoding is used, then the encoding defined
in MUST be used so namespaces
will be properly identified. Any metadata that needs to be
encoded MUST be encoded according to the procedure defined
in , section 4.4.
The message framing used for the session is unaffected by this
capability. The "base1.0" vs. "base1.1" negotiation defined
in determines the message framing that is used for
the entire session.
In this example, the client supports the following message encodings,
shown in the preferred order.
Some extra whitespace has been added for display purposes only.
The server supports the following encodings:
Since the most preferred media type in common is "application/json",
the JSON encoding is used for the remainder of the session.
In this example, the server sends a full <hello> message to the client,
truncated for brevity. Extra whitespace has been added for
display purposes only.
At this point, both the client and server switch to JSON encoding:
The :encoding capability is not dependent on any other capabilities.
The :encoding capability is identified by the following
capability string:
This capability MUST be advertised in every server <hello> message.
The "encoding" capability URI MUST contain a "types" argument
containing a comma-separated list of media types that represent
the message encoding formats supported by the server.
If the client supports the :encoding capability, it SHOULD
include an "encoding" <capability> URI in its <hello> message.
The client MAY omit this capability if XML encoding is desired.
For example (line wrapped for display purposes only)
The :encoding capability does not introduce any
new protocol operations.
The :encoding capability does not modify any
existing protocol operations.
The :encoding capability does not interact with any
other capabilities.
The <edit2> operation is specified with a YANG "rpc"
statement, defined in . This operation allows
the entire NETCONF transaction procedure to be performed
in a single operation or multiple operations, depending
on the input parameters used.
There are no XML attributes used (e.g., "operation" from
RFC 6241, "insert", "value" from RFC 6020).
Instead, configuration edits are specified with an edit list,
using the YANG Patch mechanism defined in .
This is used instead of a complete XML instance document
(e.g., a <config> element) representing an unordered
patch list inferred from the diffs. (Although YANG Patch
can be used in this mode if the client wants to merge or
replace the entire configuration datastore.)
target: name of the configuration datastore being edited
target-resource: XPath node-set expression representing
1 or more target resources within the datastore to edit.
yang-patch: container of ordered edits to apply to
the target resource(s).
test-only: flag to request that the edit request be validated
but no edits should actually be applied
if-match: if the entity tag for the target resource(s) does
not exactly match the supplied value then the edit request
is rejected.
with-locking: if present then the server will provide exclusive
write access to this <edit2> operation
and possible confirmed-commit procedure.
max-lock-wait: amount of time the client is willing to wait for
locks to clear, if "with‑locking" parameter is present.
activate-now: if present and the target is the candidate
datastore, then an implicit <commit> operation will be performed
if the edit operation is successfully applied.
nvstore-now: if present and the server supports the startup
datastore, and the edits have been activated in the running
datastore, then an implicit <copy‑config> operation (from the
running to the startup datastore) will be attempted by the server.
confirmed: request that a confirmed commit be started or
extended.
confirm-timeout: the amount of time for the server to wait
for an <edit2> request that extends,
a <complete‑commit> request to finish,
or a <revert‑commit> request to cancel
a confirmed commit procedure in progress.
persist: identifier string to use in the "persist‑id" parameter
to extend, complete, or cancel a confirmed commit procedure.
persist-id: identifier string to extend a confirmed commit
procedure in progress.
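The way several of these parameters interact can be sketched as a server-side flow. The `StubServer` class and every method name here are invented stand-ins (only a subset of the parameters is modeled), not a real NETCONF API:

```python
import time

class StubServer:
    """Invented stand-in for a NETCONF server (illustration only)."""
    def __init__(self):
        self.locked = False
        self.saved = False

    def try_lock(self, target):
        if self.locked:
            return False
        self.locked = True
        return True

    def unlock(self, target):
        self.locked = False

    def etag(self, resource):
        return "abc1"                      # current entity tag (fixed here)

    def apply_patch(self, patch):
        return "ok"                        # pretend the edits succeed

    def copy_running_to_startup(self):
        self.saved = True                  # implicit <copy-config>

def process_edit2(server, params):
    """Hypothetical ordering of the <edit2> parameter checks."""
    if params.get("with-locking"):
        deadline = time.monotonic() + params.get("max-lock-wait", 0)
        while not server.try_lock(params["target"]):
            if time.monotonic() >= deadline:
                return "error: lock-denied"
            time.sleep(0.01)               # wait for locks to clear
    try:
        if "if-match" in params and \
                server.etag(params["target-resource"]) != params["if-match"]:
            return "error: in-use"         # resource changed since retrieval
        status = server.apply_patch(params["yang-patch"])
        if params.get("nvstore-now"):
            server.copy_running_to_startup()
        return status
    finally:
        if params.get("with-locking"):
            server.unlock(params["target"])

srv = StubServer()
params = {"target": "running", "target-resource": "/forests",
          "if-match": "abc1", "yang-patch": {}, "with-locking": True,
          "max-lock-wait": 1, "nvstore-now": True}
result = process_edit2(srv, params)
```

The whole lock/verify/edit/save sequence happens inside 1 request, which is the point of the operation: no round trips elapse while the datastore is held locked.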
Positive Response:
This operation returns data containing a "yang‑patch‑status"
report (defined in ) instead of an "ok" element.
This report contains an "ok" element that is present if
the entire operation succeeded.
Error Response:
The <rpc‑error> element can be returned,
e.g., if the message contains invalid parameter syntax.
The server MUST report editing errors in the "edit"
list within the "yang‑patch‑status" container.
In this example, an "all‑in‑one" YANG Patch edit is shown.
The following conditions apply:
The starting state of the "/forests" data structure is described
in . The client is adding an "oak" tree
and changing the location of the "birch" tree in the "north" forest.
The edit succeeds, and the "yang‑patch‑status" container
is returned to the client with the <ok/> status for both
tree edits:
Refer to for additional <edit2> protocol
operation examples.
A new NETCONF protocol operation called <complete‑commit> is defined
to complete a confirmed commit procedure.
There is one optional parameter for this protocol operation:
persist-id: an identifier string that MUST match the "persist"
value, if it was used in the confirmed-commit procedure.
Positive Response:
If there is a confirmed-commit procedure in progress and it
is successfully completed, then an <ok/> element is returned.
Negative Response: An <rpc‑error> response is sent if the
request cannot be completed for any reason.
In this example, the client has previously started a
confirmed commit procedure using the "persist" parameter
set to the value "abcdef".
A new NETCONF protocol operation called <revert‑commit> is defined
to cancel a confirmed commit procedure and revert the running datastore.
The <cancel‑commit> operation in cannot be used because
it requires the implementation of the candidate capability.
There is one optional parameter for this protocol operation:
persist-id: an identifier string that MUST match the "persist"
value, if it was used in the confirmed-commit procedure.
Positive Response:
If there is a confirmed-commit procedure in progress and it
is successfully cancelled, and the running datastore
successfully reverted, then an <ok/> element is returned.
Negative Response: An <rpc‑error> response is sent if the
request cannot be completed for any reason.
In this example, the client has previously started a
confirmed commit procedure using the "persist" parameter
set to the value "abcdef".
The <get2> operation is specified with a YANG "rpc"
statement, defined in .
A specific datastore is selected for
the source of the retrieval operation. Several
different types of filters are provided. Filters
are combined in a conceptual "logical‑AND" operation,
and are optional to use by the client. Not all filtering
mechanisms are mandatory-to-implement for the server.
A depth filter indicates how many subtree levels
should be returned in the <rpc‑reply>. This filter
is specified with the "depth" input parameter for
the <get2> protocol operation. The default "0" indicates
that all levels from the requested subtrees should be returned.
A new level is started for each YANG data node
within the requested subtree.
All top level data nodes are considered to be
child nodes (level 1) of a conceptual <config> root.
If no content filters are provided, then level 1 is
considered to include all top-level data nodes
within the source datastore. Otherwise only the
levels in selected subtrees will be considered,
and not any additional top-level data nodes.
If the depth requested is equal to "1", then only the
requested data nodes (or top-level data nodes) will
be returned. This mechanism can be used to detect
the existence of containers and list entries within
a particular subtree, without returning any of the
descendant nodes.
Higher depth values indicate how many levels of the subtree
to include in the response. For example, if the depth
requested is equal to "2", then only the
requested data nodes (or top-level data nodes) and
their immediate child data nodes will be returned.
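The pruning rule can be sketched over a nested-dict representation of a subtree; the "forests" data and the helper name are illustrative:

```python
def depth_filter(node, depth):
    """Prune a requested subtree to "depth" levels.

    depth == 0 means unlimited; depth == 1 keeps only the requested
    node itself, confirming existence without any descendants.
    """
    if not isinstance(node, dict):
        return node                        # leaf: no deeper levels to prune
    if depth == 1:
        return {}                          # existence only
    next_depth = 0 if depth == 0 else depth - 1
    return {name: depth_filter(child, next_depth)
            for name, child in node.items()}

# Illustrative subtree: "forests" is the requested node (level 1)
forests = {"forest": {"name": "north",
                      "trees": {"tree": {"name": "oak"}}}}
```

`depth_filter(forests, 2)` keeps the "forest" node but none of its children, `depth_filter(forests, 3)` adds the immediate child nodes, and `depth_filter(forests, 0)` returns the whole subtree.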
A time filter specifies that data should only be returned
if the last-modified timestamp for the target datastore
is more recent than the timestamp specified in
the "if‑modified‑since" parameter.
If this feature is supported, then the server will maintain
a "last‑modified" timestamp for the running datastore. The server
MAY support additional nested timestamps for data nodes within
the datastore. The server MAY support timestamps for other datastores.
When a request containing the "if‑modified‑since"
parameter is received, the server will compare that
timestamp to the "last‑modified" timestamp for the source
datastore. If the datastore timestamp is more recent than the
specified value, then data may be returned (depending on other filters).
If the datastore timestamp value is less than or
equal to the specified value,
then an empty <data> element will be returned in the <rpc‑reply>.
If the "full‑delta" parameter is present,
and the server maintains "last‑modified" timestamps for any data nodes
within the source datastore, then the same type of comparison
will be done for the data node to determine if it should be
included in the response. If no "last‑modified" timestamp
is maintained for a data node, then the server will use
the "last‑modified" timestamp for its nearest ancestor,
or for the datastore itself if there are none.
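The per-node comparison, including the fallback to the nearest maintained timestamp, can be sketched as below; the function name and sample timestamps are illustrative:

```python
from datetime import datetime, timezone

def include_node(node_ts, ancestor_ts, datastore_ts, if_modified_since):
    """True when a data node passes the time filter.

    Falls back to the nearest maintained "last-modified" timestamp:
    the node's own, else its nearest ancestor's, else the datastore's.
    """
    effective = node_ts or ancestor_ts or datastore_ts
    return effective > if_modified_since

ds_ts = datetime(2014, 4, 1, tzinfo=timezone.utc)     # datastore last-modified
node_ts = datetime(2014, 4, 20, tzinfo=timezone.utc)  # node last-modified
since = datetime(2014, 4, 10, tzinfo=timezone.utc)    # if-modified-since
```

A node with its own recent timestamp is included; a node with no timestamp of its own inherits the older datastore timestamp and is filtered out.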
source: A container indicating the conceptual
datastore for the retrieval request.
filter-spec: A choice indicating the content filter
specification for the retrieval request.
keys-only: A leaf indicating that only the key leafs,
combined with other filtering criteria, should be returned.
if-modified-since: A leaf indicating the time filter
specification for the retrieval request, according to
the procedures in .
full-delta: If present, and the "if‑modified‑since" parameter
is also present, then individual data nodes will be filtered
by their last-modification time, not just the datastore as a whole.
depth: A leaf indicating the subtree depth level
for the retrieval request, according to the procedures
in .
with-defaults: A leaf indicating the type of defaults
handling requested, according to procedures in .
with-metadata: A leaf-list indicating the specific
metadata that the server should add to the response,
such as "last‑modified" or "etag", encoded in XML
according to the schema in .
with-locking: if present then the server will provide exclusive
write access to this <get2> operation
so the target datastore is not modified during
the entire retrieval operation.
max-lock-wait: amount of time the client is willing to wait for
locks to clear, if "with‑locking" parameter is present.
Positive Response: A <data> element is returned which contains
the data corresponding to the input parameters specified
in the request. The child nodes of the <data> container
correspond to top-level YANG data nodes.
If the server supports the "timestamps" YANG feature,
and the target is the running datastore,
then a "last‑modified" attribute SHOULD be included
in the <rpc‑reply> element.
Negative Response: An <rpc‑error> response is sent if the
request cannot be completed for any reason.
In this example, the retrieval of the "forests" resource is shown.
The following conditions apply:
The starting state of the "/forests" data structure is described
in . The client is retrieving just the
"forests" node, along with the "last‑modified" and "etag"
metadata for that node. The "config‑id" for the datastore
is also requested. Locking is requested (with a maximum lock wait time
of 5 seconds), just to make sure the metadata does not change
during the request.
The server has a "forests" node so this node is returned along
with the requested metadata for the node. Note that the
XML namespace for the "ncex" metadata is the XSD target namespace
defined in , not the YANG namespace URI
defined in .
Refer to for additional <get2>
protocol operation examples.
This module imports the "with‑defaults‑parameters" grouping
from .
Several YANG features are imported from . These correspond
to the NETCONF capabilities (e.g., candidate, url, startup, xpath)
but defined as YANG features instead of URIs.
Some data types are imported from :
Two YANG groupings are imported from :
One notification is augmented from .
RFC Ed.: update the date below with the date of RFC publication and
remove this note.
<CODE BEGINS> file "ietf-netconf-ex@2014-04-21.yang"
<CODE ENDS>
The following XML Schema document defines
the "last‑modified" and "etag" attributes, described within this document.
The "last‑modified" attribute is only relevant if the server supports
the "timestamps" YANG feature within the "ietf‑netconf‑ex"
YANG module.
The "last‑modified" attribute uses the XSD data type "dateTime",
in accordance with Section 3.2.7.1 of XML Schema Part 2: Datatypes.
This is equivalent to the YANG data type "date‑and‑time".
The "etag" attribute uses the XSD data type "string",
in accordance with the "yang‑entity‑tag" YANG typedef
defined in .
The "config‑id" attribute uses the XSD data type "string".
<CODE BEGINS> file "netconf‑ex.xsd"
<CODE ENDS>
This document registers a URI in the IETF XML registry
. Following the format in RFC 3688, the following
registration is requested:
This document registers a URI for the NETCONF XML schema in the IETF
XML registry .
This document registers 1 YANG module in the YANG Module Names
registry .
This document does not introduce any new security concerns
in addition to those specified in , section 9.
The :capability-id URI exchange was removed because the
NETCONF protocol does not allow the server to delay its <hello>
message, so the client cannot choose the full or abbreviated <hello>.
This makes the :capability-id URI exchange unworkable for several reasons:
Since the client cannot select the server hello format based
on its own notion of the cached capability set, the server must be
configured to always use the full or always use the abbreviated
<hello> message.
All clients must support the new capability exchange or the
server cannot practically be configured to use the abbreviated <hello>
message.
Since the client will not know the capability-id value for
a server the first time the particular value is seen, the "schema"
list in the "ietf‑netconf‑monitoring" YANG module would have
to be mandatory-to-implement by both client and server,
and mandatory-to-use by the client.
Forcing the client to perform a <get> request and wait for
an <rpc‑reply> before using the NETCONF session introduces
1 round-trip of extra latency into the protocol.
Forcing the client to perform a <get> request and wait for
an <rpc‑reply> before using the NETCONF session introduces
extra complexity into the protocol.
The YANG module was updated to align with new RESTCONF
and YANG Patch drafts. The "location" leaf has been removed
from the "yang‑patch‑status" grouping.
Modeling JSON Text with YANG. CZ.NIC.
RESTCONF Protocol. YumaWorks, Tail-f Systems, Juniper Networks, Cisco.
YANG Patch Media Type. YumaWorks, Tail-f Systems, Juniper Networks, Cisco.
The JSON Data Interchange Format.
Key words for use in RFCs to Indicate Requirement Levels. Harvard University.
The IETF XML Registry.
YANG - A Data Modeling Language for the Network Configuration Protocol (NETCONF).
Network Configuration Protocol (NETCONF).
With-defaults Capability for NETCONF.
Network Configuration Protocol (NETCONF) Base Notifications.
Common YANG Data Types.
XML Path Language (XPath) Version 1.0.
XML Schema Part 2: Datatypes Second Edition.
The resource-identifier-type typedef from yang-patch
is a RESTCONF path expression, not an XPath path expression.
The error-path parameter also uses RESTCONF path strings.
Should either or both of these be XPath instead?
The YANG module of the node is needed for JSON encoding,
but there is no YANG schema definition for the <rpc>,
<rpc‑reply>, or <notification> elements. The namespace
for <rpc> and <rpc‑reply> is "ietf‑netconf", but no module
name at all exists for the <notification> element.
Should the "config‑id" (etag for the running datastore root)
be returned in every <get2> response or only if requested?
(Currently only if requested.)
Should there be a retrieval mode for <get2> where only
the nodes in an XPath node-set are returned? NETCONF returns
all ancestor nodes and all ancestor or sibling key leafs as well.
Sometimes the XPath designer knows the context of the result node-set
(e.g., a path expression for one instance of a nested list).
The XML scaffolding can add a lot of extra bytes to the <rpc‑reply>.
The "example‑ex" YANG module models a collection of forests.
Each forest has a collection of trees. For simplicity,
only 1 tree of each type is allowed in a forest.
The following instances are assumed in the examples below.
The forests and trees are configuration data, representing
trees the company has planted and grown over time.
The operational data (tree height) represents the
data that the company monitors for each tree over time.
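As an illustrative sketch of the assumed instance data (the
namespace URI, list keys, and values shown here are placeholders,
since the "example-ex" module itself is not reproduced in this
section):

```xml
<!-- Hypothetical instance data for the "example-ex" module.
     Namespace, node names, and values are placeholders. -->
<forests xmlns="urn:example:example-ex">
  <forest>
    <name>north</name>
    <trees>
      <tree>
        <type>oak</type>
        <location>east</location>
        <height>7</height>  <!-- operational data (tree height) -->
      </tree>
    </trees>
  </forest>
  <forest>
    <name>south</name>
    <trees>
      <tree>
        <type>palm</type>
        <location>west</location>
        <height>5</height>
      </tree>
    </trees>
  </forest>
</forests>
```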
In this example, the server supports the :writable-running
and :startup capabilities:
The edit succeeds, and the "yang‑patch‑status" container
is returned to the client with the <location> path expression
of the new oak tree resource. The candidate and running
datastores remain locked after this operation because a
confirmed commit procedure is in progress. The startup
datastore was not locked during this operation because
the "nvstore‑now" parameter was not provided.
After configuration verification (e.g., 20 seconds),
the client decides to keep these configuration changes
and sends a <complete‑commit> request.
The server completes the confirmed commit procedure
and returns an "ok" element to indicate success:
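A sketch of this exchange, assuming the <complete-commit> operation
is carried in this document's extension namespace (the namespace URI
shown is a placeholder):

```xml
<rpc message-id="102"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <complete-commit xmlns="urn:example:netconf-efficiency"/>
</rpc>

<rpc-reply message-id="102"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <ok/>
</rpc-reply>
```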
After the operation succeeds, the server releases all locks
that were being held to
allow exclusive write access for the entire confirmed commit
procedure.
The client can now save the activated configuration changes
to the startup configuration using the <copy‑config>
protocol operation, as described in RFC 6241, section 8.7.5.1.
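For example, the standard <copy-config> exchange from RFC 6241
would look like this:

```xml
<rpc message-id="103"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <copy-config>
    <target>
      <startup/>
    </target>
    <source>
      <running/>
    </source>
  </copy-config>
</rpc>
```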
In this example the client is going to change the
location of the "palm" tree in the "south" forest.
The entity tag for the tree resource is first retrieved:
The server returns a subtree containing data nodes representing
the "palm" tree. The "etag" attribute is returned for this
resource and its ancestors. Only the "tree" node itself is
returned, as requested with the "depth" parameter.
The client then edits the list entry (e.g.,
reassigns the tree location) and submits an "if-match"
parameter with the "etag" value it received for
the tree resource being edited:
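A rough sketch of such a request (the namespace URI, the etag value,
and the payload structure are placeholders; the exact <edit2>
parameters are defined elsewhere in this document):

```xml
<rpc message-id="104"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit2 xmlns="urn:example:netconf-efficiency">
    <if-match>a74eefc3</if-match>  <!-- placeholder etag value -->
    <!-- edit payload reassigning the palm tree's
         location would appear here -->
  </edit2>
</rpc>
```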
In this example the tree resource has been edited
by another client since this client's <get2> reply,
so the edit request is not even attempted. Instead,
an "operation-failed" error is returned:
In this example, the server supports the :candidate
and :startup capabilities, so all three datastores (including running)
are locked for the <edit2> operation. A new pine tree is being
created for each forest and sent to the greenhouse.
The edit succeeds, and the "yang‑patch‑status" container
is returned to the client with the status information.
In this example, the client is checking if it can
change the location field in the "palm" tree list entry
by using the "test‑only" parameter:
Since "riverside" is not a supported location, an "invalid‑value"
error is returned for the requested edit operation:
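Using the standard RFC 6241 error reporting format, such a reply
might look like the following sketch (the error-message text is
illustrative only):

```xml
<rpc-reply message-id="105"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <rpc-error>
    <error-type>application</error-type>
    <error-tag>invalid-value</error-tag>
    <error-severity>error</error-severity>
    <error-message xml:lang="en">
      riverside is not a supported location
    </error-message>
  </rpc-error>
</rpc-reply>
```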
In this example, the running datastore was last
modified at "2012‑09‑09T01:43:27Z" because the
forest named "north" was modified at this time.
The forest named "north" was last modified after
the specified "if‑modified‑since" timestamp.
The forest named "south" was last modified before
the specified "if‑modified‑since" timestamp.
The server maintains a last-modified timestamp for
the running datastore and the "forest" list entries.
The client is requesting only the changed entries
after 2012-09-09T01:43:27Z, so the "full‑delta" parameter
is set.
The client is also requesting that timestamps be
returned within the data nodes.
If any part of the "forest" subtree is modified
then this timestamp will be updated.
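A sketch of such a request (parameter element names follow the
narrative above; the namespace URI and the name of the
timestamp-request parameter are placeholders):

```xml
<rpc message-id="106"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get2 xmlns="urn:example:netconf-efficiency">
    <if-modified-since>2012-09-09T01:43:27Z</if-modified-since>
    <full-delta/>
    <with-timestamps/>  <!-- hypothetical parameter name -->
  </get2>
</rpc>
```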
In this example the client has changed the
"if-modified-since" timestamp to a time in the future.
No "forest" list entry has been modified since
this time, so an empty data node is returned.
Note that the "last‑modified" timestamp is returned for
the node representing the datastore, even though
no data nodes have been modified since the specified
time. This allows the client to easily retrieve the
last-modified timestamp for the entire datastore.
This example retrieves only the names
from the "forests" subtree in the running
datastore.
The default source (running) is used.
The default depth="0" is used to retrieve all subtree
levels.
The "keys-only" leaf is set.
The "forests" subtree is selected. The xpath-filter
is used instead of the subtree-filter.
Whitespace added to xpath-filter element for display
purposes only.
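A sketch of this request (namespace URIs are placeholders;
whitespace inside the xpath-filter element is for display
purposes only):

```xml
<rpc message-id="107"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get2 xmlns="urn:example:netconf-efficiency">
    <keys-only/>
    <xpath-filter
        xmlns:ex="urn:example:example-ex">
      /ex:forests
    </xpath-filter>
  </get2>
</rpc>
```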
This example retrieves the "trees" node to determine
which forests have any trees.
Only one subtree level is requested,
instead of the default of all levels.
The default source (running) is used.
The "trees" subtree is selected.
The "depth" parameter is set to "1" to retrieve only
the requested "trees" layer, its ancestor nodes,
and the configuration leaf nodes from each "forest" entry.
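A sketch of this request (the extension namespace URI is a
placeholder; the subtree filter follows RFC 6241 subtree
filtering conventions):

```xml
<rpc message-id="108"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get2 xmlns="urn:example:netconf-efficiency">
    <depth>1</depth>
    <subtree-filter>
      <forests xmlns="urn:example:example-ex">
        <forest>
          <trees/>
        </forest>
      </forests>
    </subtree-filter>
  </get2>
</rpc>
```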
This example retrieves only the name leafs
from the "forest" list within the "forests" subtree, in the running
datastore.
The "source" leaf is set to the "operational" data source
The "forests" subtree is selected