<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE rfc SYSTEM "rfc2629-xhtml.ent"> <rfc xmlns:xi="http://www.w3.org/2001/XInclude" submissionType="IETF" consensus="true" category="std" ipr="trust200902" number="8846" docName="draft-ietf-clue-data-model-schema-17" obsoletes="" updates="" xml:lang="en" tocInclude="true" sortRefs="true" symRefs="true" version="3"> <!-- xml2rfc v2v3 conversion 2.37.3 --> <front> <title abbrev="An XML Schema for the CLUE Data Model"> An XML Schema for the Controlling Multiple Streams for Telepresence (CLUE) Data Model </title> <seriesInfo name="RFC" value="8846"/> <author initials="R." surname="Presta" fullname="Roberta Presta"> <organization>University of Napoli</organization> <address> <postal> <street>Via Claudio 21</street> <code>80125</code> <city>Napoli</city> <country>Italy</country> </postal> <email>roberta.presta@unina.it</email> </address> </author> <author initials="S.P." surname="Romano" fullname="Simon Pietro Romano"> <organization>University of Napoli</organization> <address> <postal> <street>Via Claudio 21</street> <code>80125</code> <city>Napoli</city> <country>Italy</country> </postal> <email>spromano@unina.it</email> </address> </author> <date month="January" year="2021"/> <area>ART</area> <workgroup>CLUE Working Group</workgroup> <keyword>CLUE</keyword> <keyword>Telepresence</keyword> <keyword>Data Model</keyword> <keyword>Framework</keyword> <abstract> <t> This document provides an XML schema file for the definition of CLUE data model types.
The term "CLUE" stands for "Controlling Multiple Streams for Telepresence" and is the name of the IETF working group in which this document, as well as other companion documents, has been developed. The document defines a coherent structure for information associated with the description of a telepresence scenario. </t> </abstract> </front> <middle> <section anchor="sec-intro" numbered="true" toc="default"> <name>Introduction</name> <t> This document provides an XML schema file for the definition of CLUE data model types. For the benefit of the reader, the term "CLUE" stands for "Controlling Multiple Streams for Telepresence" and is the name of the IETF working group in which this document, as well as other companion documents, has been developed. A thorough definition of the CLUE framework can be found in <xref target="RFC8845" format="default"/>. </t> <t> The schema is based on information contained in <xref target="RFC8845" format="default"/>. It encodes information and constraints defined in the aforementioned document in order to provide a formal representation of the concepts therein presented. </t> <t> The document specifies the definition of a coherent structure for information associated with the description of a telepresence scenario. Such information is used within the CLUE protocol messages <xref target="RFC8847" format="default"/>, enabling the dialogue between a Media Provider and a Media Consumer.
CLUE protocol messages, indeed, are XML messages allowing (i) a Media Provider to advertise its telepresence capabilities in terms of media captures, capture scenes, and other features envisioned in the CLUE framework, according to the format herein defined and (ii) a Media Consumer to request the desired telepresence options in the form of capture encodings, represented as described in this document. </t> </section> <section anchor="sec-teminology" numbered="true" toc="default"> <name>Terminology</name> <t> The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>", "<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they appear in all capitals, as shown here. </t> </section> <section anchor="sec-definitions" numbered="true" toc="default"> <name>Definitions</name> <t>This document refers to the same definitions used in <xref target="RFC8845" format="default"/>, except for the "CLUE Participant" definition. We briefly recall herein some of the main terms used in the document. </t> <dl newline="false" spacing="normal"> <dt>Audio Capture:</dt> <dd>Media Capture for audio.
Denoted as "ACn" in the examples in this document.</dd> <dt>Capture:</dt> <dd>Same as Media Capture.</dd> <dt>Capture Device:</dt> <dd>A device that converts physical input, such as audio, video, or text, into an electrical signal, in most cases to be fed into a media encoder.</dd> <dt>Capture Encoding:</dt> <dd>A specific encoding of a Media Capture, to be sent by a Media Provider to a Media Consumer via RTP.</dd> <dt>Capture Scene:</dt> <dd>A structure representing a spatial region captured by one or more Capture Devices, each capturing media representing a portion of the region. The spatial region represented by a Capture Scene may correspond to a real region in physical space, such as a room. A Capture Scene includes attributes and one or more Capture Scene Views, with each view including one or more Media Captures.</dd> <dt>Capture Scene View (CSV):</dt> <dd> A list of Media Captures of the same media type that together form one way to represent the entire Capture Scene.</dd> <dt>CLUE Participant:</dt> <dd> This term is imported from the CLUE protocol document <xref target="RFC8847" format="default"/>.
</dd> <dt>Consumer:</dt> <dd>Short for Media Consumer.</dd> <dt>Encoding or Individual Encoding:</dt> <dd> A set of parameters representing a way to encode a Media Capture to become a Capture Encoding.</dd> <dt>Encoding Group:</dt> <dd> A set of encoding parameters representing a total media encoding capability to be subdivided across potentially multiple Individual Encodings.</dd> <dt>Endpoint:</dt> <dd>A CLUE-capable device that is the logical point of final termination through receiving, decoding and rendering, and/or initiation through capturing, encoding, and sending of media streams. An endpoint consists of one or more physical devices that source and sink media streams, and exactly one participant <xref target="RFC4353" format="default"/> (which, in turn, includes exactly one SIP User Agent). Endpoints can be anything from multiscreen/multicamera rooms to handheld devices.</dd> <dt>Media:</dt> <dd>Any data that, after suitable encoding, can be conveyed over RTP, including audio, video, or timed text.</dd> <dt>Media Capture:</dt> <dd>A source of Media, such as from one or more Capture Devices or constructed from other media streams.</dd> <dt>Media Consumer:</dt> <dd> A CLUE-capable device that intends to receive Capture Encodings.</dd> <dt>Media Provider:</dt> <dd> A CLUE-capable device that intends to send Capture Encodings.</dd> <dt>Multiple Content Capture (MCC):</dt> <dd> A Capture that mixes and/or switches other Captures of a single type (for example, all audio or all video). Particular Media Captures may or may not be present in the resultant Capture Encoding depending on time or space.
Denoted as "MCCn" in the example cases in this document.</dd> <dt>Multipoint Control Unit (MCU):</dt> <dd>A CLUE-capable device that connects two or more endpoints together into one single multimedia conference <xref target="RFC7667" format="default"/>. An MCU includes a Mixer, similar to those in <xref target="RFC4353" format="default"/>, but without the requirement to send media to each participant.</dd> <dt>Plane of Interest:</dt> <dd> The spatial plane within a scene containing the most-relevant subject matter.</dd> <dt>Provider:</dt> <dd>Same as a Media Provider.</dd> <dt>Render:</dt> <dd>The process of generating a representation from Media, such as displayed motion video or sound emitted from loudspeakers.</dd> <dt>Scene:</dt> <dd>Same as a Capture Scene.</dd> <dt>Simultaneous Transmission Set:</dt> <dd> A set of Media Captures that can be transmitted simultaneously from a Media Provider.</dd> <dt>Single Media Capture:</dt> <dd> A capture that contains media from a single source capture device, e.g., an audio capture from a single microphone or a video capture from a single camera.</dd> <dt>Spatial Relation:</dt> <dd> The arrangement of two objects in space, in contrast to relation in time or other relationships.</dd> <dt>Stream:</dt> <dd> A Capture Encoding sent from a Media Provider to a Media Consumer via RTP <xref target="RFC3550" format="default"/>.
</dd> <dt>Stream Characteristics:</dt> <dd>The media stream attributes commonly used in non-CLUE SIP/SDP environments (such as media codec, bitrate, resolution, profile/level, etc.) as well as CLUE-specific attributes, such as the Capture ID or a spatial location.</dd> <dt>Video Capture:</dt> <dd>A Media Capture for video.</dd> </dl> </section> <section anchor="sec-schema" numbered="true" toc="default"> <name>XML Schema</name> <t> This section contains the XML schema for the CLUE data model definition. </t> <t> The element and attribute definitions are formal representations of the concepts needed to describe the capabilities of a Media Provider and the streams that are requested by a Media Consumer given the Media Provider's ADVERTISEMENT <xref target="RFC8847" format="default"/>.
</t> <t>The main groups of information are:</t> <ul empty="true" spacing="normal"> <li> <dl newline="false" spacing="normal"> <dt>&lt;mediaCaptures&gt;:</dt> <dd>the list of media captures available (<xref target="sec-media-captures" format="default"/>)</dd> <dt>&lt;encodingGroups&gt;:</dt> <dd>the list of encoding groups (<xref target="sec-encoding-groups" format="default"/>)</dd> <dt>&lt;captureScenes&gt;:</dt> <dd>the list of capture scenes (<xref target="sec-capture-scenes" format="default"/>)</dd> <dt>&lt;simultaneousSets&gt;:</dt> <dd>the list of simultaneous transmission sets (<xref target="sec-simultaneous-sets" format="default"/>)</dd> <dt>&lt;globalViews&gt;:</dt> <dd>the list of global views sets (<xref target="sec-global-views" format="default"/>)</dd> <dt>&lt;people&gt;:</dt> <dd>metadata about the participants represented in the telepresence session (<xref target="sec-participants" format="default"/>)</dd> <dt>&lt;captureEncodings&gt;:</dt> <dd>the list of instantiated capture encodings (<xref target="sec-capture-encodings" format="default"/>)</dd> </dl></li> </ul> <t> All of the above refer to concepts that have been introduced in <xref target="RFC8845" format="default"/> and further detailed in this document.
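</t>
<t>To give a feel for how these groups fit together, the following non-normative skeleton sketches a hypothetical CLUE information document; all identifier values (such as "AC0", "EG0", and "CS1") are invented for illustration, and most child elements are omitted:</t>
<sourcecode type="xml"><![CDATA[
<clueInfo xmlns="urn:ietf:params:xml:ns:clue-info"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          clueInfoID="exampleClueInfo">
  <mediaCaptures>
    <mediaCapture xsi:type="audioCaptureType"
                  captureID="AC0" mediaType="audio">
      <!-- capture details omitted -->
    </mediaCapture>
  </mediaCaptures>
  <encodingGroups>
    <encodingGroup encodingGroupID="EG0">
      <!-- encoding details omitted -->
    </encodingGroup>
  </encodingGroups>
  <captureScenes>
    <captureScene sceneID="CS1" scale="unknown">
      <!-- scene details omitted -->
    </captureScene>
  </captureScenes>
</clueInfo>
]]></sourcecode>
<t>The complete schema against which such a document would be validated follows.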
</t> <sourcecode type="xml"><![CDATA[ <?xml version="1.0" encoding="UTF-8" ?> <xs:schema targetNamespace="urn:ietf:params:xml:ns:clue-info" xmlns:tns="urn:ietf:params:xml:ns:clue-info" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="urn:ietf:params:xml:ns:clue-info" xmlns:xcard="urn:ietf:params:xml:ns:vcard-4.0" elementFormDefault="qualified" attributeFormDefault="unqualified" version="1.0"> <!-- Import xCard XML schema --> <xs:import namespace="urn:ietf:params:xml:ns:vcard-4.0" schemaLocation="https://www.iana.org/assignments/xml-registry/schema/vcard-4.0.xsd"/> <!-- ELEMENT DEFINITIONS --> <xs:element name="mediaCaptures" type="mediaCapturesType"/> <xs:element name="encodingGroups" type="encodingGroupsType"/> <xs:element name="captureScenes" type="captureScenesType"/> <xs:element name="simultaneousSets" type="simultaneousSetsType"/> <xs:element name="globalViews" type="globalViewsType"/> <xs:element name="people" type="peopleType"/> <xs:element name="captureEncodings" type="captureEncodingsType"/> <!-- MEDIA CAPTURES TYPE --> <!-- envelope of media captures --> <xs:complexType name="mediaCapturesType"> <xs:sequence> <xs:element name="mediaCapture" type="mediaCaptureType" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <!-- DESCRIPTION element --> <xs:element name="description"> <xs:complexType> <xs:simpleContent> <xs:extension base="xs:string"> <xs:attribute name="lang" type="xs:language"/> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:element> <!-- MEDIA CAPTURE TYPE --> <xs:complexType name="mediaCaptureType" abstract="true"> <xs:sequence> <!-- mandatory fields --> <xs:element name="captureSceneIDREF" type="xs:IDREF"/> <xs:choice> <xs:sequence> <xs:element name="spatialInformation" type="tns:spatialInformationType"/> </xs:sequence> <xs:element name="nonSpatiallyDefinable" type="xs:boolean" fixed="true"/> </xs:choice> <!-- for
handling multicontent captures: --> <xs:choice> <xs:sequence> <xs:element name="synchronizationID" type="xs:ID" minOccurs="0"/> <xs:element name="content" type="contentType" minOccurs="0"/> <xs:element name="policy" type="policyType" minOccurs="0"/> <xs:element name="maxCaptures" type="maxCapturesType" minOccurs="0"/> <xs:element name="allowSubsetChoice" type="xs:boolean" minOccurs="0"/> </xs:sequence> <xs:element name="individual" type="xs:boolean" fixed="true"/> </xs:choice> <!-- optional fields --> <xs:element name="encGroupIDREF" type="xs:IDREF" minOccurs="0"/> <xs:element ref="description" minOccurs="0" maxOccurs="unbounded"/> <xs:element name="priority" type="xs:unsignedInt" minOccurs="0"/> <xs:element name="lang" type="xs:language" minOccurs="0" maxOccurs="unbounded"/> <xs:element name="mobility" type="mobilityType" minOccurs="0" /> <xs:element ref="presentation" minOccurs="0" /> <xs:element ref="embeddedText" minOccurs="0" /> <xs:element ref="view" minOccurs="0" /> <xs:element name="capturedPeople" type="capturedPeopleType" minOccurs="0"/> <xs:element name="relatedTo" type="xs:IDREF" minOccurs="0"/> </xs:sequence> <xs:attribute name="captureID" type="xs:ID" use="required"/> <xs:attribute name="mediaType" type="xs:string" use="required"/> </xs:complexType> <!-- POLICY TYPE --> <xs:simpleType name="policyType"> <xs:restriction base="xs:string"> <xs:pattern value="([a-zA-Z0-9])+[:]([0-9])+"/> </xs:restriction> </xs:simpleType> <!-- CONTENT TYPE --> <xs:complexType name="contentType"> <xs:sequence> <xs:element name="mediaCaptureIDREF" type="xs:string" minOccurs="0" maxOccurs="unbounded"/> <xs:element name="sceneViewIDREF" type="xs:string" minOccurs="0" maxOccurs="unbounded"/> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:anyAttribute namespace="##other" processContents="lax"/> </xs:complexType> <!-- MAX CAPTURES TYPE --> <xs:simpleType name="positiveShort">
<xs:restriction base="xs:unsignedShort"> <xs:minInclusive value="1"> </xs:minInclusive> </xs:restriction> </xs:simpleType> <xs:complexType name="maxCapturesType"> <xs:simpleContent> <xs:extension base="positiveShort"> <xs:attribute name="exactNumber" type="xs:boolean"/> </xs:extension> </xs:simpleContent> </xs:complexType> <!-- CAPTURED PEOPLE TYPE --> <xs:complexType name="capturedPeopleType"> <xs:sequence> <xs:element name="personIDREF" type="xs:IDREF" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <!-- PEOPLE TYPE --> <xs:complexType name="peopleType"> <xs:sequence> <xs:element name="person" type="personType" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <!-- PERSON TYPE --> <xs:complexType name="personType"> <xs:sequence> <xs:element name="personInfo" type="xcard:vcardType" maxOccurs="1" minOccurs="0"/> <xs:element ref="personType" minOccurs="0" maxOccurs="unbounded" /> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:attribute name="personID" type="xs:ID" use="required"/> <xs:anyAttribute namespace="##other" processContents="lax"/> </xs:complexType> <!-- PERSON TYPE ELEMENT --> <xs:element name="personType" type="xs:string"> <xs:annotation> <xs:documentation> Acceptable values (enumerations) for this type are managed by IANA in the "CLUE Schema <personType>" registry, accessible at https://www.iana.org/assignments/clue. </xs:documentation> </xs:annotation> </xs:element> <!-- VIEW ELEMENT --> <xs:element name="view" type="xs:string"> <xs:annotation> <xs:documentation> Acceptable values (enumerations) for this type are managed by IANA in the "CLUE Schema <view>" registry, accessible at https://www.iana.org/assignments/clue.
</xs:documentation> </xs:annotation> </xs:element> <!-- PRESENTATION ELEMENT --> <xs:element name="presentation" type="xs:string"> <xs:annotation> <xs:documentation> Acceptable values (enumerations) for this type are managed by IANA in the "CLUE Schema <presentation>" registry, accessible at https://www.iana.org/assignments/clue. </xs:documentation> </xs:annotation> </xs:element> <!-- SPATIAL INFORMATION TYPE --> <xs:complexType name="spatialInformationType"> <xs:sequence> <xs:element name="captureOrigin" type="captureOriginType" minOccurs="0"/> <xs:element name="captureArea" type="captureAreaType" minOccurs="0"/> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:anyAttribute namespace="##other" processContents="lax"/> </xs:complexType> <!-- POINT TYPE --> <xs:complexType name="pointType"> <xs:sequence> <xs:element name="x" type="xs:decimal"/> <xs:element name="y" type="xs:decimal"/> <xs:element name="z" type="xs:decimal"/> </xs:sequence> </xs:complexType> <!-- CAPTURE ORIGIN TYPE --> <xs:complexType name="captureOriginType"> <xs:sequence> <xs:element name="capturePoint" type="pointType"></xs:element> <xs:element name="lineOfCapturePoint" type="pointType" minOccurs="0"> </xs:element> </xs:sequence> <xs:anyAttribute namespace="##any" processContents="lax"/> </xs:complexType> <!-- CAPTURE AREA TYPE --> <xs:complexType name="captureAreaType"> <xs:sequence> <xs:element name="bottomLeft" type="pointType"/> <xs:element name="bottomRight" type="pointType"/> <xs:element name="topLeft" type="pointType"/> <xs:element name="topRight" type="pointType"/> </xs:sequence> </xs:complexType> <!-- MOBILITY TYPE --> <xs:simpleType name="mobilityType"> <xs:restriction base="xs:string"> <xs:enumeration value="static" /> <xs:enumeration value="dynamic" /> <xs:enumeration value="highly-dynamic" /> </xs:restriction> </xs:simpleType> <!-- TEXT CAPTURE TYPE --> <xs:complexType name="textCaptureType">
<xs:complexContent> <xs:extension base="tns:mediaCaptureType"> <xs:sequence> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:anyAttribute namespace="##other" processContents="lax"/> </xs:extension> </xs:complexContent> </xs:complexType> <!-- OTHER CAPTURE TYPE --> <xs:complexType name="otherCaptureType"> <xs:complexContent> <xs:extension base="tns:mediaCaptureType"> <xs:sequence> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:anyAttribute namespace="##other" processContents="lax"/> </xs:extension> </xs:complexContent> </xs:complexType> <!-- AUDIO CAPTURE TYPE --> <xs:complexType name="audioCaptureType"> <xs:complexContent> <xs:extension base="tns:mediaCaptureType"> <xs:sequence> <xs:element ref="sensitivityPattern" minOccurs="0" /> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:anyAttribute namespace="##other" processContents="lax"/> </xs:extension> </xs:complexContent> </xs:complexType> <!-- SENSITIVITY PATTERN ELEMENT --> <xs:element name="sensitivityPattern" type="xs:string"> <xs:annotation> <xs:documentation> Acceptable values (enumerations) for this type are managed by IANA in the "CLUE Schema <sensitivityPattern>" registry, accessible at https://www.iana.org/assignments/clue.
</xs:documentation> </xs:annotation> </xs:element> <!-- VIDEO CAPTURE TYPE --> <xs:complexType name="videoCaptureType"> <xs:complexContent> <xs:extension base="tns:mediaCaptureType"> <xs:sequence> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:anyAttribute namespace="##other" processContents="lax"/> </xs:extension> </xs:complexContent> </xs:complexType> <!-- EMBEDDED TEXT ELEMENT --> <xs:element name="embeddedText"> <xs:complexType> <xs:simpleContent> <xs:extension base="xs:boolean"> <xs:attribute name="lang" type="xs:language"/> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:element> <!-- CAPTURE SCENES TYPE --> <!-- envelope of capture scenes --> <xs:complexType name="captureScenesType"> <xs:sequence> <xs:element name="captureScene" type="captureSceneType" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <!-- CAPTURE SCENE TYPE --> <xs:complexType name="captureSceneType"> <xs:sequence> <xs:element ref="description" minOccurs="0" maxOccurs="unbounded"/> <xs:element name="sceneInformation" type="xcard:vcardType" minOccurs="0"/> <xs:element name="sceneViews" type="sceneViewsType" minOccurs="0"/> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:attribute name="sceneID" type="xs:ID" use="required"/> <xs:attribute name="scale" type="scaleType" use="required"/> <xs:anyAttribute namespace="##other" processContents="lax"/> </xs:complexType> <!-- SCALE TYPE --> <xs:simpleType name="scaleType"> <xs:restriction base="xs:string"> <xs:enumeration value="mm"/> <xs:enumeration value="unknown"/> <xs:enumeration value="noscale"/> </xs:restriction> </xs:simpleType> <!-- SCENE VIEWS TYPE --> <!-- envelope of scene views of a capture scene --> <xs:complexType name="sceneViewsType"> <xs:sequence> <xs:element name="sceneView" type="sceneViewType" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <!-- SCENE VIEW TYPE --> <xs:complexType 
name="sceneViewType"> <xs:sequence> <xs:element ref="description" minOccurs="0" maxOccurs="unbounded"/> <xs:element name="mediaCaptureIDs" type="captureIDListType"/> </xs:sequence> <xs:attribute name="sceneViewID" type="xs:ID" use="required"/> </xs:complexType> <!-- CAPTURE ID LIST TYPE --> <xs:complexType name="captureIDListType"> <xs:sequence> <xs:element name="mediaCaptureIDREF" type="xs:IDREF" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <!-- ENCODING GROUPS TYPE --> <xs:complexType name="encodingGroupsType"> <xs:sequence> <xs:element name="encodingGroup" type="tns:encodingGroupType" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <!-- ENCODING GROUP TYPE --> <xs:complexType name="encodingGroupType"> <xs:sequence> <xs:element name="maxGroupBandwidth" type="xs:unsignedLong"/> <xs:element name="encodingIDList" type="encodingIDListType"/> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:attribute name="encodingGroupID" type="xs:ID" use="required"/> <xs:anyAttribute namespace="##any" processContents="lax"/> </xs:complexType> <!-- ENCODING ID LIST TYPE --> <xs:complexType name="encodingIDListType"> <xs:sequence> <xs:element name="encodingID" type="xs:string" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <!-- SIMULTANEOUS SETS TYPE --> <xs:complexType name="simultaneousSetsType"> <xs:sequence> <xs:element name="simultaneousSet" type="simultaneousSetType" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <!-- SIMULTANEOUS SET TYPE --> <xs:complexType name="simultaneousSetType"> <xs:sequence> <xs:element name="mediaCaptureIDREF" type="xs:IDREF" minOccurs="0" maxOccurs="unbounded"/> <xs:element name="sceneViewIDREF" type="xs:IDREF" minOccurs="0" maxOccurs="unbounded"/> <xs:element name="captureSceneIDREF" type="xs:IDREF" minOccurs="0" maxOccurs="unbounded"/> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:attribute 
name="setID" type="xs:ID" use="required"/> <xs:attribute name="mediaType" type="xs:string"/> <xs:anyAttribute namespace="##any" processContents="lax"/> </xs:complexType> <!-- GLOBAL VIEWS TYPE --> <xs:complexType name="globalViewsType"> <xs:sequence> <xs:element name="globalView" type="globalViewType" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <!-- GLOBAL VIEW TYPE --> <xs:complexType name="globalViewType"> <xs:sequence> <xs:element name="sceneViewIDREF" type="xs:IDREF" maxOccurs="unbounded"/> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:attribute name="globalViewID" type="xs:ID"/> <xs:anyAttribute namespace="##any" processContents="lax"/> </xs:complexType> <!-- CAPTURE ENCODINGS TYPE --> <xs:complexType name="captureEncodingsType"> <xs:sequence> <xs:element name="captureEncoding" type="captureEncodingType" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <!-- CAPTURE ENCODING TYPE --> <xs:complexType name="captureEncodingType"> <xs:sequence> <xs:element name="captureID" type="xs:string"/> <xs:element name="encodingID" type="xs:string"/> <xs:element name="configuredContent" type="contentType" minOccurs="0"/> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:attribute name="ID" type="xs:ID" use="required"/> <xs:anyAttribute namespace="##any" processContents="lax"/> </xs:complexType> <!-- CLUE INFO ELEMENT --> <xs:element name="clueInfo" type="clueInfoType"/> <!-- CLUE INFO TYPE --> <xs:complexType name="clueInfoType"> <xs:sequence> <xs:element ref="mediaCaptures"/> <xs:element ref="encodingGroups"/> <xs:element ref="captureScenes"/> <xs:element ref="simultaneousSets" minOccurs="0"/> <xs:element ref="globalViews" minOccurs="0"/> <xs:element ref="people" minOccurs="0"/> <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:attribute name="clueInfoID" type="xs:ID" 
use="required"/> <xs:anyAttribute namespace="##other" processContents="lax"/> </xs:complexType> </xs:schema>]]></sourcecode> <t>The following sections describe the XML schema in more detail. As a general remark, please notice that optional elements that don't define what their absence means are intended to be associated with undefined properties. </t> </section> <section anchor="sec-media-captures" numbered="true" toc="default"> <name>&lt;mediaCaptures&gt;</name> <t> &lt;mediaCaptures&gt; represents the list of one or more media captures available at the Media Provider's side. Each media capture is represented by a &lt;mediaCapture&gt; element (<xref target="sec-media-capture" format="default"/>). </t> </section> <section anchor="sec-encoding-groups" numbered="true" toc="default"> <name>&lt;encodingGroups&gt;</name> <t> &lt;encodingGroups&gt; represents the list of the encoding groups organized on the Media Provider's side. Each encoding group is represented by an &lt;encodingGroup&gt; element (<xref target="sec-encoding-group" format="default"/>). </t> </section> <section anchor="sec-capture-scenes" numbered="true" toc="default"> <name>&lt;captureScenes&gt;</name> <t> &lt;captureScenes&gt; represents the list of the capture scenes organized on the Media Provider's side. Each capture scene is represented by a &lt;captureScene&gt; element (<xref target="sec-capture-scene" format="default"/>). </t> </section> <section anchor="sec-simultaneous-sets" numbered="true" toc="default"> <name>&lt;simultaneousSets&gt;</name> <t> &lt;simultaneousSets&gt; contains the simultaneous sets indicated by the Media Provider.
Each simultaneous set is represented by a &lt;simultaneousSet&gt; element (<xref target="sec-simultaneous-set" format="default"/>). </t> </section> <section anchor="sec-global-views" numbered="true" toc="default"> <name>&lt;globalViews&gt;</name> <t> &lt;globalViews&gt; contains a set of alternative representations of all the scenes that are offered by a Media Provider to a Media Consumer. Each alternative is named "global view", and it is represented by a &lt;globalView&gt; element (<xref target="sec-global-view" format="default"/>). </t> </section> <section anchor="sec-capture-encodings" numbered="true" toc="default"> <name>&lt;captureEncodings&gt;</name> <t> &lt;captureEncodings&gt; is a list of capture encodings. It can represent the list of the desired capture encodings indicated by the Media Consumer or the list of instantiated captures on the provider's side. Each capture encoding is represented by a &lt;captureEncoding&gt; element (<xref target="sec-capture-encoding" format="default"/>). </t> </section> <section anchor="sec-media-capture" numbered="true" toc="default"> <name>&lt;mediaCapture&gt;</name> <t> A media capture is the fundamental representation of a media flow that is available on the provider's side. Media captures are characterized by (i) a set of features that are independent from the specific type of medium and (ii) a set of features that are media specific. The features that are common to all media types appear within the media capture type, which has been designed as an abstract complex type. Media-specific captures, such as video captures, audio captures, and others, are specializations of that abstract media capture type, as in a typical generalization-specialization hierarchy.
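</t>
<t>As a non-normative sketch of such a specialization, the fragment below instantiates the abstract media capture type as a video capture; all identifier values, coordinates, and the description are invented for illustration:</t>
<sourcecode type="xml"><![CDATA[
<mediaCapture xmlns="urn:ietf:params:xml:ns:clue-info"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:type="videoCaptureType"
              captureID="VC0" mediaType="video">
  <captureSceneIDREF>CS1</captureSceneIDREF>
  <spatialInformation>
    <captureArea>
      <bottomLeft><x>0.0</x><y>0.0</y><z>0.0</z></bottomLeft>
      <bottomRight><x>2000.0</x><y>0.0</y><z>0.0</z></bottomRight>
      <topLeft><x>0.0</x><y>0.0</y><z>1000.0</z></topLeft>
      <topRight><x>2000.0</x><y>0.0</y><z>1000.0</z></topRight>
    </captureArea>
  </spatialInformation>
  <individual>true</individual>
  <encGroupIDREF>EG0</encGroupIDREF>
  <description lang="en">left camera video capture</description>
</mediaCapture>
]]></sourcecode>
<t>Note that the order of the child elements in the instance follows the sequence declared by the type.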
</t>
<t>The following is the XML schema definition of the media capture type:
</t>
<sourcecode type="xml"><![CDATA[
<!-- MEDIA CAPTURE TYPE -->
<xs:complexType name="mediaCaptureType" abstract="true">
 <xs:sequence>
  <!-- mandatory fields -->
  <xs:element name="captureSceneIDREF" type="xs:IDREF"/>
  <xs:choice>
   <xs:sequence>
    <xs:element name="spatialInformation"
                type="tns:spatialInformationType"/>
   </xs:sequence>
   <xs:element name="nonSpatiallyDefinable" type="xs:boolean"
               fixed="true"/>
  </xs:choice>
  <!-- for handling multicontent captures: -->
  <xs:choice>
   <xs:sequence>
    <xs:element name="synchronizationID" type="xs:ID"
                minOccurs="0"/>
    <xs:element name="content" type="contentType" minOccurs="0"/>
    <xs:element name="policy" type="policyType" minOccurs="0"/>
    <xs:element name="maxCaptures" type="maxCapturesType"
                minOccurs="0"/>
    <xs:element name="allowSubsetChoice" type="xs:boolean"
                minOccurs="0"/>
   </xs:sequence>
   <xs:element name="individual" type="xs:boolean" fixed="true"/>
  </xs:choice>
  <!-- optional fields -->
  <xs:element name="encGroupIDREF" type="xs:IDREF" minOccurs="0"/>
  <xs:element ref="description" minOccurs="0" maxOccurs="unbounded"/>
  <xs:element name="priority" type="xs:unsignedInt" minOccurs="0"/>
  <xs:element name="lang" type="xs:language" minOccurs="0"
              maxOccurs="unbounded"/>
  <xs:element name="mobility" type="mobilityType" minOccurs="0"/>
  <xs:element ref="presentation" minOccurs="0"/>
  <xs:element ref="embeddedText" minOccurs="0"/>
  <xs:element ref="view" minOccurs="0"/>
  <xs:element name="capturedPeople" type="capturedPeopleType"
              minOccurs="0"/>
  <xs:element name="relatedTo" type="xs:IDREF" minOccurs="0"/>
 </xs:sequence>
 <xs:attribute name="captureID" type="xs:ID" use="required"/>
 <xs:attribute name="mediaType" type="xs:string" use="required"/>
</xs:complexType>
]]></sourcecode>
<section anchor="sec-captureID" numbered="true" toc="default">
<name>captureID Attribute</name>
<t>The "captureID" attribute is a mandatory field containing the identifier of the media capture. Such an identifier serves as the way the capture is referenced from other data model elements (e.g., simultaneous sets, capture encodings, and others via <mediaCaptureIDREF>).
</t>
</section>
<section numbered="true" toc="default">
<name>mediaType Attribute</name>
<t>The "mediaType" attribute is a mandatory attribute specifying the media type of the capture. Common standard values are "audio", "video", and "text", as defined in <xref target="RFC6838" format="default"/>. Other values can be provided. It is assumed that implementations agree on the interpretation of those other values. The "mediaType" attribute is as generic as possible. Here is why: (i) the basic media capture type is an abstract one; (ii) "concrete" definitions for the standard audio, video, and text capture types <xref target="RFC6838" format="default"/> have been specified; (iii) a generic "otherCaptureType" type has been defined; and (iv) the "mediaType" attribute has been generically defined as a string, with no particular template. From the considerations above, it is clear that if one chooses to rely on a brand new media type and wants to interoperate with others, an application-level agreement is needed on how to interpret such information.
</t>
</section>
<section numbered="true" toc="default">
<name><captureSceneIDREF></name>
<t><captureSceneIDREF> is a mandatory field containing the value of the identifier of the capture scene the media capture is defined in, i.e., the value of the sceneID attribute (<xref target="sec-sceneID" format="default"/>) of that capture scene. Indeed, each media capture <bcp14>MUST</bcp14> be defined within one and only one capture scene.
When a media capture is spatially definable, some spatial information is provided along with it in the form of point coordinates (see <xref target="sec-spatial-info" format="default"/>). Such coordinates refer to the space of coordinates defined for the capture scene containing the capture.
</t>
</section>
<section numbered="true" toc="default">
<name><encGroupIDREF></name>
<t><encGroupIDREF> is an optional field containing the identifier of the encoding group the media capture is associated with, i.e., the value of the encodingGroupID attribute (<xref target="sec-encodingGroupID" format="default"/>) of that encoding group. Media captures that are not associated with any encoding group cannot be instantiated as media streams.
</t>
</section>
<section anchor="sec-spatial-info" numbered="true" toc="default">
<name><spatialInformation></name>
<t>Media captures are divided into two categories: (i) non spatially definable captures and (ii) spatially definable captures.
</t>
<t>Captures are spatially definable when it is possible to provide at least (i) the coordinates of the device position within the telepresence room of origin (capture point), together with its capturing direction specified by a second point (point on line of capture), or (ii) the represented area within the telepresence room, by listing the coordinates of the four coplanar points identifying the plane of interest (area of capture). The coordinates of the above mentioned points <bcp14>MUST</bcp14> be expressed according to the coordinate space of the capture scene the media captures belong to.
</t>
<t>Non spatially definable captures cannot be characterized within the physical space of the telepresence room of origin.
Captures of this kind are, for example, those related to recordings, text captures, DVDs, registered presentations, or external streams that are played in the telepresence room and transmitted to remote sites.
</t>
<t>
Spatially definable captures represent a part of the telepresence room. The captured part of the telepresence room is described by means of the <spatialInformation> element. By comparing the <spatialInformation> element of different media captures within the same capture scene, a consumer can better determine the spatial relationships between them and render them correctly. Non spatially definable captures do not embed such elements in their XML description: they are instead characterized by having the <nonSpatiallyDefinable> tag set to "true" (see <xref target="sub-sec-nonspatiallydef" format="default"/>).
</t>
<t>The definition of the spatial information type is the following:</t>
<sourcecode type="xml"><![CDATA[
<!-- SPATIAL INFORMATION TYPE -->
<xs:complexType name="spatialInformationType">
 <xs:sequence>
  <xs:element name="captureOrigin" type="captureOriginType"
              minOccurs="0"/>
  <xs:element name="captureArea" type="captureAreaType"
              minOccurs="0"/>
  <xs:any namespace="##other" processContents="lax"
          minOccurs="0" maxOccurs="unbounded"/>
 </xs:sequence>
 <xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>
]]></sourcecode>
<t>The <captureOrigin> contains the coordinates of the capture device that is taking the capture (i.e., the capture point) as well as, optionally, the pointing direction (i.e., the point on line of capture); see <xref target="sec-capture-origin" format="default"/>.
</t>
<t>
The <captureArea> is an optional field containing four points defining the captured area covered by the capture (see <xref target="sec-capture-area" format="default"/>).
</t>
<t>The scale of the points coordinates is specified in the scale attribute (<xref target="sec-scale" format="default"/>) of the capture scene the media capture belongs to. Indeed, all the spatially definable media captures referring to the same capture scene share the same coordinate system and express their spatial information according to the same scale.</t>
<section anchor="sec-capture-origin" numbered="true" toc="default">
<name><captureOrigin></name>
<t>
The <captureOrigin> element is used to represent the position and optionally the line of capture of a capture device. <captureOrigin> <bcp14>MUST</bcp14> be included in spatially definable audio captures, while it is optional for spatially definable video captures.</t>
<t>
The XML schema definition of the <captureOrigin> element type is the following:
</t>
<sourcecode type="xml"><![CDATA[
<!-- CAPTURE ORIGIN TYPE -->
<xs:complexType name="captureOriginType">
 <xs:sequence>
  <xs:element name="capturePoint" type="pointType"/>
  <xs:element name="lineOfCapturePoint" type="pointType"
              minOccurs="0"/>
 </xs:sequence>
 <xs:anyAttribute namespace="##any" processContents="lax"/>
</xs:complexType>

<!-- POINT TYPE -->
<xs:complexType name="pointType">
 <xs:sequence>
  <xs:element name="x" type="xs:decimal"/>
  <xs:element name="y" type="xs:decimal"/>
  <xs:element name="z" type="xs:decimal"/>
 </xs:sequence>
</xs:complexType>
]]></sourcecode>
<t>
The point type contains three spatial coordinates (x,y,z) representing a point in the space associated with a certain capture scene.
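</t>
<t>
By way of a non-normative illustration, a <captureOrigin> instance for a capture device placed at (0.5, 1.0, 0.5) and pointing towards (0.5, 0.0, 0.5) might look as follows (the coordinate values are invented for the sake of the example):
</t>
<sourcecode type="xml"><![CDATA[
<captureOrigin>
 <capturePoint>
  <x>0.5</x>
  <y>1.0</y>
  <z>0.5</z>
 </capturePoint>
 <lineOfCapturePoint>
  <x>0.5</x>
  <y>0.0</y>
  <z>0.5</z>
 </lineOfCapturePoint>
</captureOrigin>
]]></sourcecode>
<t>
Such coordinate values are interpreted according to the scale attribute of the capture scene the capture belongs to.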
</t>
<t>
The <captureOrigin> element includes a mandatory <capturePoint> element and an optional <lineOfCapturePoint> element, both of the type "pointType". <capturePoint> specifies the three coordinates identifying the position of the capture device. <lineOfCapturePoint> is another pointType element representing the "point on line of capture", which gives the pointing direction of the capture device.
</t>
<t>
The coordinates of the point on line of capture <bcp14>MUST NOT</bcp14> be identical to the capture point coordinates. For a spatially definable video capture, if the point on line of capture is provided, it <bcp14>MUST</bcp14> belong to the region between the point of capture and the capture area. For a spatially definable audio capture, if the point on line of capture is not provided, the sensitivity pattern should be considered omnidirectional.
</t>
</section>
<section anchor="sec-capture-area" numbered="true" toc="default">
<name><captureArea></name>
<t>
<captureArea> is an optional element that can be contained within the spatial information associated with a media capture. It represents the spatial area captured by the media capture. <captureArea> <bcp14>MUST</bcp14> be included in the spatial information of spatially definable video captures, while it <bcp14>MUST NOT</bcp14> be associated with audio captures.
</t>
<t>
The XML representation of that area is provided through a set of four point-type elements, <bottomLeft>, <bottomRight>, <topLeft>, and <topRight>, that <bcp14>MUST</bcp14> be coplanar. The four coplanar points are identified from the perspective of the capture device.
The XML schema definition is the following:
</t>
<sourcecode type="xml"><![CDATA[
<!-- CAPTURE AREA TYPE -->
<xs:complexType name="captureAreaType">
 <xs:sequence>
  <xs:element name="bottomLeft" type="pointType"/>
  <xs:element name="bottomRight" type="pointType"/>
  <xs:element name="topLeft" type="pointType"/>
  <xs:element name="topRight" type="pointType"/>
 </xs:sequence>
</xs:complexType>
]]></sourcecode>
</section>
</section>
<section anchor="sub-sec-nonspatiallydef" numbered="true" toc="default">
<name><nonSpatiallyDefinable></name>
<t>When media captures are non spatially definable, they <bcp14>MUST</bcp14> be marked with the boolean <nonSpatiallyDefinable> element set to "true", and no <spatialInformation> <bcp14>MUST</bcp14> be provided. Indeed, <nonSpatiallyDefinable> and <spatialInformation> are mutually exclusive tags, according to the <choice> section within the XML schema definition of the media capture type.
</t>
</section>
<section anchor="sub-sec-content" numbered="true" toc="default">
<name><content></name>
<t>
A media capture can be (i) an individual media capture or (ii) a multiple content capture (MCC). An MCC is made by different captures that can be arranged spatially (by a composition operation), or temporally (by a switching operation), or that can result from the orchestration of both the techniques. If a media capture is an MCC, then it <bcp14>MAY</bcp14> show in its XML data model representation the <content> element. It is composed by a list of media capture identifiers ("mediaCaptureIDREF") and capture scene view identifiers ("sceneViewIDREF"), where the latter ones are used as shortcuts to refer to multiple capture identifiers.
The referenced captures are used to create the MCC according to a certain strategy. If the <content> element does not appear in an MCC, or it has no child elements, then the MCC is assumed to be made of multiple sources, but no information regarding those sources is provided.
</t>
<sourcecode type="xml"><![CDATA[
<!-- CONTENT TYPE -->
<xs:complexType name="contentType">
 <xs:sequence>
  <xs:element name="mediaCaptureIDREF" type="xs:string"
              minOccurs="0" maxOccurs="unbounded"/>
  <xs:element name="sceneViewIDREF" type="xs:string"
              minOccurs="0" maxOccurs="unbounded"/>
  <xs:any namespace="##other" processContents="lax"
          minOccurs="0" maxOccurs="unbounded"/>
 </xs:sequence>
 <xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>
]]></sourcecode>
</section>
<section numbered="true" toc="default">
<name><synchronizationID></name>
<t><synchronizationID> is an optional element for multiple content captures that contains a numeric identifier. Multiple content captures marked with the same identifier in the <synchronizationID> contain at all times captures coming from the same sources. It is the Media Provider that determines what the source is for the captures. In this way, the Media Provider can choose how to group together single captures for the purpose of keeping them synchronized according to the <synchronizationID> element.
</t>
</section>
<section numbered="true" toc="default">
<name><allowSubsetChoice></name>
<t><allowSubsetChoice> is an optional boolean element for multiple content captures. It indicates whether or not the Provider allows the Consumer to choose a specific subset of the captures referenced by the MCC.
If this attribute is true, and the MCC references other captures, then the Consumer <bcp14>MAY</bcp14> specify in a CONFIGURE message a specific subset of those captures to be included in the MCC, and the Provider <bcp14>MUST</bcp14> then include only that subset. If this attribute is false, or the MCC does not reference other captures, then the Consumer <bcp14>MUST NOT</bcp14> select a subset. If <allowSubsetChoice> is not shown in the XML description of the MCC, its value is to be considered "false".
</t>
</section>
<section numbered="true" toc="default">
<name><policy></name>
<t>
<policy> is an optional element that can be used only for multiple content captures. It indicates the criteria applied to build the multiple content capture using the media captures referenced in the <mediaCaptureIDREF> list. The <policy> value is in the form of a token that indicates the policy and an index representing an instance of the policy, separated by a ":" (e.g., SoundLevel:2, RoundRobin:0, etc.). The XML schema defining the type of the <policy> element is the following:
</t>
<sourcecode type="xml"><![CDATA[
<!-- POLICY TYPE -->
<xs:simpleType name="policyType">
 <xs:restriction base="xs:string">
  <xs:pattern value="([a-zA-Z0-9])+[:]([0-9])+"/>
 </xs:restriction>
</xs:simpleType>
]]></sourcecode>
<t>At the time of writing, only two switching policies are defined; they are in <xref target="RFC8845" format="default"/> as follows:</t>
<blockquote>
<dl newline="false" spacing="normal">
<dt>SoundLevel:</dt><dd>
This indicates that the content of the MCC is determined by a sound-level-detection algorithm.
The loudest (active) speaker (or a previous speaker, depending on the index value) is contained in the MCC. Index 0 represents the most current instance of the policy, i.e., the currently active speaker, 1 represents the previous instance, i.e., the previous active speaker, and so on.</dd>
<dt>RoundRobin:</dt><dd>This indicates that the content of the MCC is determined by a time-based algorithm. For example, the Provider provides content from a particular source for a period of time and then provides content from another source, and so on.</dd>
</dl>
</blockquote>
<t>Other values for the <policy> element can be used. In this case, it is assumed that implementations agree on the meaning of those other values and/or those new switching policies are defined in later documents.</t>
</section>
<section anchor="sub-sec-maxCaptures" numbered="true" toc="default">
<name><maxCaptures></name>
<t>
<maxCaptures> is an optional element that can be used only for MCCs. It provides information about the number of media captures that can be represented in the multiple content capture at a time. If <maxCaptures> is not provided, all the media captures listed in the <content> element can appear at a time in the capture encoding. The type definition is provided below.
</t>
<sourcecode type="xml"><![CDATA[
<!-- MAX CAPTURES TYPE -->
<xs:simpleType name="positiveShort">
 <xs:restriction base="xs:unsignedShort">
  <xs:minInclusive value="1">
  </xs:minInclusive>
 </xs:restriction>
</xs:simpleType>

<xs:complexType name="maxCapturesType">
 <xs:simpleContent>
  <xs:extension base="positiveShort">
   <xs:attribute name="exactNumber" type="xs:boolean"/>
  </xs:extension>
 </xs:simpleContent>
</xs:complexType>
]]></sourcecode>
<t>When the "exactNumber" attribute is set to "true", it means the <maxCaptures> element carries the exact number of the media captures appearing at a time. Otherwise, the number of the represented media captures <bcp14>MUST</bcp14> be considered "<=" the <maxCaptures> value.
</t>
<t>
For instance, an audio MCC having the <maxCaptures> value set to 1 means that a media stream from the MCC will only contain audio from a single one of its constituent captures at a time. On the other hand, if the <maxCaptures> value is set to 4 and the exactNumber attribute is set to "true", it would mean that the media stream received from the MCC will always contain a mix of audio from exactly four of its constituent captures.
</t>
</section>
<section numbered="true" toc="default">
<name><individual></name>
<t>
<individual> is a boolean element that <bcp14>MUST</bcp14> be used for single-content captures. Its value is fixed and set to "true". Such an element indicates that the capture being described is not an MCC. Indeed, <individual> and the aforementioned tags related to MCC attributes (from Sections <xref target="sub-sec-content" format="counter"/> to <xref target="sub-sec-maxCaptures" format="counter"/>) are mutually exclusive, according to the <choice> section within the XML schema definition of the media capture type.
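</t>
<t>
As a non-normative sketch (identifiers are invented for the sake of the example, and optional fields are omitted), an individual audio capture and an MCC switching between two captures might be described as follows:
</t>
<sourcecode type="xml"><![CDATA[
<!-- an individual capture -->
<mediaCapture xsi:type="audioCaptureType"
              captureID="AC1" mediaType="audio">
 <captureSceneIDREF>CS1</captureSceneIDREF>
 <nonSpatiallyDefinable>true</nonSpatiallyDefinable>
 <individual>true</individual>
</mediaCapture>

<!-- an MCC switching between AC1 and AC2 -->
<mediaCapture xsi:type="audioCaptureType"
              captureID="MCC1" mediaType="audio">
 <captureSceneIDREF>CS1</captureSceneIDREF>
 <nonSpatiallyDefinable>true</nonSpatiallyDefinable>
 <content>
  <mediaCaptureIDREF>AC1</mediaCaptureIDREF>
  <mediaCaptureIDREF>AC2</mediaCaptureIDREF>
 </content>
 <policy>SoundLevel:0</policy>
 <maxCaptures exactNumber="true">1</maxCaptures>
</mediaCapture>
]]></sourcecode>
<t>
The first description carries the <individual> element and none of the MCC-related elements, while the second one carries the MCC-related elements and no <individual> element, in accordance with the <choice> constraint.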
</t>
</section>
<section anchor="sec-description" numbered="true" toc="default">
<name><description></name>
<t>
<description> is used to provide human-readable textual information. This element is included in the XML definition of media captures, capture scenes, and capture scene views to provide human-readable descriptions of, respectively, media captures, capture scenes, and capture scene views. According to the data model definition of a media capture (<xref target="sec-media-capture" format="default"/>), zero or more <description> elements can be used, each providing information in a different language. The <description> element definition is the following:
</t>
<sourcecode type="xml"><![CDATA[
<!-- DESCRIPTION element -->
<xs:element name="description">
 <xs:complexType>
  <xs:simpleContent>
   <xs:extension base="xs:string">
    <xs:attribute name="lang" type="xs:language"/>
   </xs:extension>
  </xs:simpleContent>
 </xs:complexType>
</xs:element>
]]></sourcecode>
<t>As can be seen, <description> is a string element with an attribute ("lang") indicating the language used in the textual description. Such an attribute is compliant with the Language-Tag ABNF production from <xref target="RFC5646" format="default"/>.
</t>
</section>
<section numbered="true" toc="default">
<name><priority></name>
<t>
<priority> is an optional unsigned integer field indicating the importance of a media capture according to the Media Provider's perspective. It can be used on the receiver's side to automatically identify the most relevant contribution from the Media Provider. The higher the importance, the lower the contained value.
If no priority is assigned, no assumptions regarding the relative importance of the media capture can be made.</t>
</section>
<section numbered="true" toc="default">
<name><lang></name>
<t>
<lang> is an optional element containing the language used in the capture. Zero or more <lang> elements can appear in the XML description of a media capture. Each such element has to be compliant with the Language-Tag ABNF production from <xref target="RFC5646" format="default"/>.
</t>
</section>
<section numbered="true" toc="default">
<name><mobility></name>
<t>
<mobility> is an optional element indicating whether or not the capture device originating the capture may move during the telepresence session. That optional element can assume one of the three following values:
</t>
<ul empty="true"><li>
<dl newline="false" spacing="normal">
<dt>static:</dt>
<dd><bcp14>SHOULD NOT</bcp14> change for the duration of the CLUE session, across multiple ADVERTISEMENT messages.</dd>
<dt>dynamic:</dt>
<dd><bcp14>MAY</bcp14> change in each new ADVERTISEMENT message. Can be assumed to remain unchanged until there is a new ADVERTISEMENT message.</dd>
<dt>highly-dynamic:</dt>
<dd><bcp14>MAY</bcp14> change dynamically, even between consecutive ADVERTISEMENT messages. The spatial information provided in an ADVERTISEMENT message is simply a snapshot of the current values at the time when the message is sent.</dd>
</dl></li>
</ul>
</section>
<section numbered="true" toc="default">
<name><relatedTo></name>
<t>
The optional <relatedTo> element contains the value of the <xref target="sec-captureID" format="default">captureID attribute</xref> of the media capture to which the considered media capture refers.
The media capture marked with a <relatedTo> element can be, for example, the translation of the referred media capture in a different language.
</t>
</section>
<section anchor="sec-view" numbered="true" toc="default">
<name><view></name>
<t>The <view> element is an optional tag describing what is represented in the spatial area covered by a media capture. It has been specified as a simple string with an annotation pointing to an IANA registry that is defined ad hoc:
</t>
<sourcecode type="xml"><![CDATA[
<!-- VIEW ELEMENT -->
<xs:element name="view" type="xs:string">
 <xs:annotation>
  <xs:documentation>
   Acceptable values (enumerations) for this type are managed
   by IANA in the "CLUE Schema <view>" registry, accessible at
   https://www.iana.org/assignments/clue.
  </xs:documentation>
 </xs:annotation>
</xs:element>
]]></sourcecode>
<t>
The current possible values, as per the CLUE framework document <xref target="RFC8845" format="default"/>, are: "room", "table", "lectern", "individual", and "audience".
</t>
</section>
<section anchor="sec-presentation" numbered="true" toc="default">
<name><presentation></name>
<t>The <presentation> element is an optional tag used for media captures conveying information about presentations within the telepresence session. It has been specified as a simple string with an annotation pointing to an IANA registry that is defined ad hoc:
</t>
<sourcecode type="xml"><![CDATA[
<!-- PRESENTATION ELEMENT -->
<xs:element name="presentation" type="xs:string">
 <xs:annotation>
  <xs:documentation>
   Acceptable values (enumerations) for this type are managed
   by IANA in the "CLUE Schema <presentation>" registry,
   accessible at https://www.iana.org/assignments/clue.
  </xs:documentation>
 </xs:annotation>
</xs:element>
]]></sourcecode>
<t>
The current possible values, as per the CLUE framework document <xref target="RFC8845" format="default"/>, are "slides" and "images".
</t>
</section>
<section anchor="sec-embedded-text" numbered="true" toc="default">
<name><embeddedText></name>
<t>
The <embeddedText> element is a boolean element indicating that there is text embedded in the media capture (e.g., in a video capture). The language used in such an embedded textual description is reported in the <embeddedText> "lang" attribute.
</t>
<t>
The XML schema definition of the <embeddedText> element is:
</t>
<sourcecode type="xml"><![CDATA[
<!-- EMBEDDED TEXT ELEMENT -->
<xs:element name="embeddedText">
 <xs:complexType>
  <xs:simpleContent>
   <xs:extension base="xs:boolean">
    <xs:attribute name="lang" type="xs:language"/>
   </xs:extension>
  </xs:simpleContent>
 </xs:complexType>
</xs:element>
]]></sourcecode>
</section>
<section anchor="sec-participantIDs" numbered="true" toc="default">
<name><capturedPeople></name>
<t>This optional element is used to indicate which telepresence session participants are represented within the media captures. For each participant, a <personIDREF> element is provided.</t>
<section numbered="true" toc="default">
<name><personIDREF></name>
<t>
<personIDREF> contains the identifier of the represented person, i.e., the value of the related <xref target="sub-sec-participantID" format="default">personID attribute</xref>. Metadata about the represented participant can be retrieved by accessing the <people> list (<xref target="sec-participants" format="default"/>).
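</t>
<t>
For instance, a capture framing two participants might carry the following non-normative fragment (the person identifiers are invented for the sake of the example and are assumed to be defined in the <people> list):
</t>
<sourcecode type="xml"><![CDATA[
<capturedPeople>
 <personIDREF>alice</personIDREF>
 <personIDREF>bob</personIDREF>
</capturedPeople>
]]></sourcecode>
<t>
Each <personIDREF> value is meant to match the personID attribute of the corresponding entry in the <people> list.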
</t>
</section>
</section>
</section>
<section numbered="true" toc="default">
<name>Audio Captures</name>
<t>Audio captures inherit all the features of a generic media capture and present further audio-specific characteristics. The XML schema definition of the audio capture type is reported below:
</t>
<sourcecode type="xml"><![CDATA[
<!-- AUDIO CAPTURE TYPE -->
<xs:complexType name="audioCaptureType">
 <xs:complexContent>
  <xs:extension base="tns:mediaCaptureType">
   <xs:sequence>
    <xs:element ref="sensitivityPattern" minOccurs="0"/>
    <xs:any namespace="##other" processContents="lax"
            minOccurs="0" maxOccurs="unbounded"/>
   </xs:sequence>
   <xs:anyAttribute namespace="##other" processContents="lax"/>
  </xs:extension>
 </xs:complexContent>
</xs:complexType>
]]></sourcecode>
<t>
An example of audio-specific information that can be included is represented by the <sensitivityPattern> element (<xref target="sec-sensitivity-pattern" format="default"/>).
</t>
<section anchor="sec-sensitivity-pattern" numbered="true" toc="default">
<name><sensitivityPattern></name>
<t>
The <sensitivityPattern> element is an optional field describing the characteristics of the nominal sensitivity pattern of the microphone capturing the audio signal.
It has been specified as a simple string with an annotation pointing to an IANA registry that is defined ad hoc:
</t>
<sourcecode type="xml"><![CDATA[
<!-- SENSITIVITY PATTERN ELEMENT -->
<xs:element name="sensitivityPattern" type="xs:string">
 <xs:annotation>
  <xs:documentation>
   Acceptable values (enumerations) for this type are managed
   by IANA in the "CLUE Schema <sensitivityPattern>" registry,
   accessible at https://www.iana.org/assignments/clue.
  </xs:documentation>
 </xs:annotation>
</xs:element>
]]></sourcecode>
<t>
The current possible values, as per the CLUE framework document <xref target="RFC8845" format="default"/>, are "uni", "shotgun", "omni", "figure8", "cardioid", and "hyper-cardioid".
</t>
</section>
</section>
<section numbered="true" toc="default">
<name>Video Captures</name>
<t>Video captures, similarly to audio captures, extend the information of a generic media capture with video-specific features.</t>
<t>
The XML schema representation of the video capture type is provided in the following:
</t>
<sourcecode type="xml"><![CDATA[
<!-- VIDEO CAPTURE TYPE -->
<xs:complexType name="videoCaptureType">
 <xs:complexContent>
  <xs:extension base="tns:mediaCaptureType">
   <xs:sequence>
    <xs:any namespace="##other" processContents="lax"
            minOccurs="0" maxOccurs="unbounded"/>
   </xs:sequence>
   <xs:anyAttribute namespace="##other" processContents="lax"/>
  </xs:extension>
 </xs:complexContent>
</xs:complexType>
]]></sourcecode>
</section>
<section numbered="true" toc="default">
<name>Text Captures</name>
<t>Similar to audio captures and video captures, text captures can be described by extending the generic media capture information.</t>
<t>There are no known properties of a text-based media that aren't already covered by the generic mediaCaptureType. Text captures are hence defined as follows:</t>
<sourcecode type="xml"><![CDATA[
<!-- TEXT CAPTURE TYPE -->
<xs:complexType name="textCaptureType">
 <xs:complexContent>
  <xs:extension base="tns:mediaCaptureType">
   <xs:sequence>
    <xs:any namespace="##other" processContents="lax"
            minOccurs="0" maxOccurs="unbounded"/>
   </xs:sequence>
   <xs:anyAttribute namespace="##other" processContents="lax"/>
  </xs:extension>
 </xs:complexContent>
</xs:complexType>
]]></sourcecode>
<t>Text captures <bcp14>MUST</bcp14> be marked as non spatially definable (i.e., they <bcp14>MUST</bcp14> present in their XML description the <xref target="sub-sec-nonspatiallydef" format="default"><nonSpatiallyDefinable></xref> element set to "true").
</t>
</section>
<section numbered="true" toc="default">
<name>Other Capture Types</name>
<t>
Other media capture types can be described by using the CLUE data model. They can be represented by exploiting the "otherCaptureType" type. This media capture type is conceived to be filled in with elements defined within extensions of the current schema, i.e., with elements defined in other XML schemas (see <xref target="sec-extension" format="default"/> for an example). The otherCaptureType inherits all the features envisioned for the abstract mediaCaptureType.
</t> <t>The XML schema representation of the otherCaptureType is the following:</t>
<sourcecode type="xml"><![CDATA[
<!-- OTHER CAPTURE TYPE -->
<xs:complexType name="otherCaptureType">
  <xs:complexContent>
    <xs:extension base="tns:mediaCaptureType">
      <xs:sequence>
        <xs:any namespace="##other" processContents="lax"
          minOccurs="0" maxOccurs="unbounded"/>
      </xs:sequence>
      <xs:anyAttribute namespace="##other" processContents="lax"/>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
]]></sourcecode>
<t>When defining new media capture types that are going to be described by means of the <otherMediaCapture> element, spatial properties of such new media capture types <bcp14>SHOULD</bcp14> be defined (e.g., whether or not they are spatially definable and whether or not they should be associated with an area of capture or other properties that may be defined).</t>
</section>
<section anchor="sec-capture-scene" numbered="true" toc="default">
<name><captureScene></name>
<t>A Media Provider organizes the available captures in capture scenes in order to help the receiver in both the rendering and the selection of the group of captures. Capture scenes are made of media captures and capture scene views, which are sets of media captures of the same media type.
Each capture scene view is an alternative to completely represent a capture scene for a fixed media type.</t>
<t>The XML schema representation of a <captureScene> element is the following: </t>
<sourcecode type="xml"><![CDATA[
<!-- CAPTURE SCENE TYPE -->
<xs:complexType name="captureSceneType">
  <xs:sequence>
    <xs:element ref="description"
      minOccurs="0" maxOccurs="unbounded"/>
    <xs:element name="sceneInformation"
      type="xcard:vcardType" minOccurs="0"/>
    <xs:element name="sceneViews"
      type="sceneViewsType" minOccurs="0"/>
    <xs:any namespace="##other" processContents="lax"
      minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="sceneID" type="xs:ID" use="required"/>
  <xs:attribute name="scale" type="scaleType" use="required"/>
  <xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>
]]></sourcecode>
<t> Each capture scene is identified by a "sceneID" attribute. The <captureScene> element can contain zero or more textual <description> elements, as defined in <xref target="sec-description" format="default"/>. Besides <description>, there is the optional <sceneInformation> element (<xref target="sec-scene-info" format="default"/>), which contains structured information about the scene in the vCard format, and the optional <sceneViews> element (<xref target="sec-scene-views" format="default"/>), which is the list of the capture scene views. When no <sceneViews> is provided, the capture scene is assumed to be made of all the media captures that contain the value of its sceneID attribute in their mandatory captureSceneIDREF attribute.
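</t>
<t>As a non-normative illustration, a <captureScene> element conforming to the type above could look like the following sketch; all identifiers (CS1, SE1, VC0, VC1) and the description text are invented for the purpose of the example:</t>
<sourcecode type="xml"><![CDATA[
<captureScene scale="unknown" sceneID="CS1">
  <description lang="en">main conference room</description>
  <sceneViews>
    <sceneView sceneViewID="SE1">
      <mediaCaptureIDs>
        <mediaCaptureIDREF>VC0</mediaCaptureIDREF>
        <mediaCaptureIDREF>VC1</mediaCaptureIDREF>
      </mediaCaptureIDs>
    </sceneView>
  </sceneViews>
</captureScene>
]]></sourcecode>
<t>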
</t>
<section anchor="sec-scene-info" numbered="true" toc="default">
<name><sceneInformation></name>
<t> The <sceneInformation> element contains optional information about the capture scene according to the vCard format, as specified in the xCard specification <xref target="RFC6351" format="default"/>. </t>
</section>
<section anchor="sec-scene-views" numbered="true" toc="default">
<name><sceneViews></name>
<t> The <sceneViews> element is a mandatory field of a capture scene containing the list of scene views. Each scene view is represented by a <sceneView> element (<xref target="sec-scene-view" format="default"/>). </t>
<sourcecode type="xml"><![CDATA[
<!-- SCENE VIEWS TYPE -->
<!-- envelope of scene views of a capture scene -->
<xs:complexType name="sceneViewsType">
  <xs:sequence>
    <xs:element name="sceneView" type="sceneViewType"
      maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
]]></sourcecode>
</section>
<section anchor="sec-sceneID" numbered="true" toc="default">
<name>sceneID Attribute</name>
<t>The sceneID attribute is a mandatory attribute containing the identifier of the capture scene. </t>
</section>
<section anchor="sec-scale" numbered="true" toc="default">
<name>scale Attribute</name>
<t> The scale attribute is a mandatory attribute that specifies the scale of the coordinates provided in the spatial information of the media capture belonging to the considered capture scene. The scale attribute can assume three different values: </t>
<ul empty="true" spacing="normal"><li>
<dl newline="false" spacing="normal">
<dt>"mm":</dt><dd>the scale is in millimeters.
Systems that know their physical dimensions (for example, professionally installed telepresence room systems) should always provide such real-world measurements.</dd>
<dt>"unknown":</dt><dd>the scale is the same for every media capture in the capture scene, but the unit of measure is undefined. Systems that are not aware of specific physical dimensions yet still know relative distances should select "unknown" in the scale attribute of the capture scene to be described.</dd>
<dt>"noscale":</dt><dd>there is no common physical scale among the media captures of the capture scene. That means the scale could be different for each media capture.</dd></dl></li>
</ul>
<sourcecode type="xml"><![CDATA[
<!-- SCALE TYPE -->
<xs:simpleType name="scaleType">
  <xs:restriction base="xs:string">
    <xs:enumeration value="mm"/>
    <xs:enumeration value="unknown"/>
    <xs:enumeration value="noscale"/>
  </xs:restriction>
</xs:simpleType>
]]></sourcecode>
</section>
</section>
<section anchor="sec-scene-view" numbered="true" toc="default">
<name><sceneView></name>
<t> A <sceneView> element represents a capture scene view, which contains a set of media captures of the same media type describing a capture scene. </t>
<t>A <sceneView> element is characterized as follows.
</t>
<sourcecode type="xml"><![CDATA[
<!-- SCENE VIEW TYPE -->
<xs:complexType name="sceneViewType">
  <xs:sequence>
    <xs:element ref="description"
      minOccurs="0" maxOccurs="unbounded"/>
    <xs:element name="mediaCaptureIDs"
      type="captureIDListType"/>
  </xs:sequence>
  <xs:attribute name="sceneViewID" type="xs:ID" use="required"/>
</xs:complexType>
]]></sourcecode>
<t> One or more optional <description> elements provide human-readable information about what the scene view contains. <description> is defined in <xref target="sec-description" format="default"/>. </t>
<t>The remaining child elements are described in the following subsections.</t>
<section numbered="true" toc="default">
<name><mediaCaptureIDs></name>
<t>The <mediaCaptureIDs> element is the list of the identifiers of the media captures included in the scene view. It is an element of the captureIDListType type, which is defined as a sequence of <mediaCaptureIDREF>, each containing the identifier of a media capture listed within the <mediaCaptures> element: </t>
<sourcecode type="xml"><![CDATA[
<!-- CAPTURE ID LIST TYPE -->
<xs:complexType name="captureIDListType">
  <xs:sequence>
    <xs:element name="mediaCaptureIDREF" type="xs:IDREF"
      maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
]]></sourcecode>
</section>
<section numbered="true" toc="default">
<name>sceneViewID Attribute</name>
<t>The sceneViewID attribute is a mandatory attribute containing the identifier of the capture scene view represented by the <sceneView> element.</t>
</section>
</section>
<section anchor="sec-encoding-group" numbered="true" toc="default">
<name><encodingGroup></name>
<t> The <encodingGroup> element represents an encoding group, which is made up of a
set of one or more individual encodings and some parameters that apply to the group as a whole. Encoding groups contain references to individual encodings that can be applied to media captures. The definition of the <encodingGroup> element is the following: </t>
<sourcecode type="xml"><![CDATA[
<!-- ENCODING GROUP TYPE -->
<xs:complexType name="encodingGroupType">
  <xs:sequence>
    <xs:element name="maxGroupBandwidth"
      type="xs:unsignedLong"/>
    <xs:element name="encodingIDList"
      type="encodingIDListType"/>
    <xs:any namespace="##other" processContents="lax"
      minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="encodingGroupID" type="xs:ID"
    use="required"/>
  <xs:anyAttribute namespace="##any" processContents="lax"/>
</xs:complexType>
]]></sourcecode>
<t> In the following subsections, the contained elements are further described. </t>
<section numbered="true" toc="default">
<name><maxGroupBandwidth></name>
<t><maxGroupBandwidth> is an optional field containing the maximum bitrate expressed in bits per second that can be shared by the individual encodings included in the encoding group. </t>
</section>
<section anchor="sec-encodingIDList" numbered="true" toc="default">
<name><encodingIDList></name>
<t><encodingIDList> is the list of the individual encodings grouped together in the encoding group. Each individual encoding is represented through its identifier contained within an <encodingID> element.
</t>
<sourcecode type="xml"><![CDATA[
<!-- ENCODING ID LIST TYPE -->
<xs:complexType name="encodingIDListType">
  <xs:sequence>
    <xs:element name="encodingID" type="xs:string"
      maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
]]></sourcecode>
</section>
<section anchor="sec-encodingGroupID" numbered="true" toc="default">
<name>encodingGroupID Attribute</name>
<t>The encodingGroupID attribute contains the identifier of the encoding group.</t>
</section>
</section>
<section anchor="sec-simultaneous-set" numbered="true" toc="default">
<name><simultaneousSet></name>
<t><simultaneousSet> represents a simultaneous transmission set, i.e., a list of captures of the same media type that can be transmitted at the same time by a Media Provider. There are different simultaneous transmission sets for each media type.
</t>
<sourcecode type="xml"><![CDATA[
<!-- SIMULTANEOUS SET TYPE -->
<xs:complexType name="simultaneousSetType">
  <xs:sequence>
    <xs:element name="mediaCaptureIDREF" type="xs:IDREF"
      minOccurs="0" maxOccurs="unbounded"/>
    <xs:element name="sceneViewIDREF" type="xs:IDREF"
      minOccurs="0" maxOccurs="unbounded"/>
    <xs:element name="captureSceneIDREF" type="xs:IDREF"
      minOccurs="0" maxOccurs="unbounded"/>
    <xs:any namespace="##other" processContents="lax"
      minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="setID" type="xs:ID" use="required"/>
  <xs:attribute name="mediaType" type="xs:string"/>
  <xs:anyAttribute namespace="##any" processContents="lax"/>
</xs:complexType>
]]></sourcecode>
<t> Besides the identifiers of the captures (<mediaCaptureIDREF> elements), the identifiers of capture scene views and capture scenes can also be exploited as shortcuts (<sceneViewIDREF> and <captureSceneIDREF> elements). As an example, let's consider the situation where there are two capture scene views (S1 and S7). S1 contains captures AC11, AC12, and AC13. S7 contains captures AC71 and AC72. Provided that AC11, AC12, AC13, AC71, and AC72 can be simultaneously sent to the Media Consumer, instead of having 5 <mediaCaptureIDREF> elements listed in the simultaneous set (i.e., one <mediaCaptureIDREF> for AC11, one for AC12, and so on), there can be just two <sceneViewIDREF> elements (one for S1 and one for S7). </t>
<section numbered="true" toc="default">
<name>setID Attribute</name>
<t> The "setID" attribute is a mandatory field containing the identifier of the simultaneous set. </t>
</section>
<section numbered="true" toc="default">
<name>mediaType Attribute</name>
<t> The "mediaType" attribute is an optional attribute containing the media type of the captures referenced by the simultaneous set.
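</t>
<t>As a non-normative sketch, the S1/S7 scenario discussed above could hence be encoded as follows; the set identifier "SS1" and the scene view identifiers are invented for the example:</t>
<sourcecode type="xml"><![CDATA[
<simultaneousSet setID="SS1" mediaType="audio">
  <sceneViewIDREF>S1</sceneViewIDREF>
  <sceneViewIDREF>S7</sceneViewIDREF>
</simultaneousSet>
]]></sourcecode>
<t>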
</t> <t>When only capture scene identifiers are listed within a simultaneous set, the media type attribute <bcp14>MUST</bcp14> appear in the XML description in order to determine which media captures can be simultaneously sent together. </t>
</section>
<section numbered="true" toc="default">
<name><mediaCaptureIDREF></name>
<t><mediaCaptureIDREF> contains the identifier of the media capture that belongs to the simultaneous set. </t>
</section>
<section numbered="true" toc="default">
<name><sceneViewIDREF></name>
<t><sceneViewIDREF> contains the identifier of the scene view containing a group of captures that are able to be sent simultaneously with the other captures of the simultaneous set. </t>
</section>
<section numbered="true" toc="default">
<name><captureSceneIDREF></name>
<t><captureSceneIDREF> contains the identifier of the capture scene where all the included captures of a certain media type are able to be sent together with the other captures of the simultaneous set. </t>
</section>
</section>
<section anchor="sec-global-view" numbered="true" toc="default">
<name><globalView></name>
<t><globalView> is a set of captures of the same media type representing a summary of the complete Media Provider's offer. The content of a global view is expressed by leveraging only scene view identifiers, put within <sceneViewIDREF> elements. Each global view is identified by a unique identifier within the "globalViewID" attribute.
</t>
<sourcecode type="xml"><![CDATA[
<!-- GLOBAL VIEW TYPE -->
<xs:complexType name="globalViewType">
  <xs:sequence>
    <xs:element name="sceneViewIDREF" type="xs:IDREF"
      maxOccurs="unbounded"/>
    <xs:any namespace="##other" processContents="lax"
      minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="globalViewID" type="xs:ID"/>
  <xs:anyAttribute namespace="##any" processContents="lax"/>
</xs:complexType>
]]></sourcecode>
</section>
<section anchor="sec-participants" numbered="true" toc="default">
<name><people></name>
<t> Information about the participants that are represented in the media captures is conveyed via the <people> element. As can be seen from the XML schema depicted below, for each participant, a <person> element is provided. </t>
<sourcecode type="xml"><![CDATA[
<!-- PEOPLE TYPE -->
<xs:complexType name="peopleType">
  <xs:sequence>
    <xs:element name="person" type="personType"
      maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
]]></sourcecode>
<section anchor="sub-sec-participantInfo" numbered="true" toc="default">
<name><person></name>
<t><person> includes all the metadata related to a person represented within one or more media captures. Such element provides the vCard of the subject (via the <personInfo> element; see <xref target="sub-sec-vcard" format="default"/>) and its conference role(s) (via one or more <personType> elements; see <xref target="sub-sec-participantType" format="default"/>). Furthermore, it has a mandatory "personID" attribute (<xref target="sub-sec-participantID" format="default"/>).
</t>
<sourcecode type="xml"><![CDATA[
<!-- PERSON TYPE -->
<xs:complexType name="personType">
  <xs:sequence>
    <xs:element name="personInfo" type="xcard:vcardType"
      maxOccurs="1" minOccurs="0"/>
    <xs:element ref="personType" minOccurs="0"
      maxOccurs="unbounded"/>
    <xs:any namespace="##other" processContents="lax"
      minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="personID" type="xs:ID" use="required"/>
  <xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>
]]></sourcecode>
<section anchor="sub-sec-participantID" numbered="true" toc="default">
<name>personID Attribute</name>
<t> The "personID" attribute carries the identifier of a represented person. Such an identifier can be used to refer to the participant, as in the <capturedPeople> element in the media captures representation (<xref target="sec-participantIDs" format="default"/>). </t>
</section>
<section anchor="sub-sec-vcard" numbered="true" toc="default">
<name><personInfo></name>
<t>The <personInfo> element is the XML representation of all the fields composing a vCard as specified in the xCard document <xref target="RFC6351" format="default"/>. The vcardType is imported by the xCard XML schema provided in <xref target="RFC7852" sectionFormat="of" section="A" format="default" derivedLink="https://rfc-editor.org/rfc/rfc7852#appendix-A" derivedContent="RFC7852"/>. As such schema specifies, the <fn> element within <vcard> is mandatory.
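</t>
<t>For illustration only, a <person> element carrying a minimal vCard might look as sketched below; the identifier, name, and role are invented, and the "ns2" prefix is assumed to be bound to the xCard namespace "urn:ietf:params:xml:ns:vcard-4.0":</t>
<sourcecode type="xml"><![CDATA[
<person personID="alice">
  <personInfo>
    <ns2:fn><ns2:text>Alice Example</ns2:text></ns2:fn>
  </personInfo>
  <personType>presenter</personType>
</person>
]]></sourcecode>
<t>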
</t>
</section>
<section anchor="sub-sec-participantType" numbered="true" toc="default">
<name><personType></name>
<t>The value of the <personType> element determines the role of the represented participant within the telepresence session organization. It has been specified as a simple string with an annotation pointing to an IANA registry that is defined ad hoc: </t>
<sourcecode type="xml"><![CDATA[
<!-- PERSON TYPE ELEMENT -->
<xs:element name="personType" type="xs:string">
  <xs:annotation>
    <xs:documentation>
      Acceptable values (enumerations) for this type
      are managed by IANA in the
      "CLUE Schema <personType>" registry,
      accessible at https://www.iana.org/assignments/clue.
    </xs:documentation>
  </xs:annotation>
</xs:element>
]]></sourcecode>
<t> The current possible values, as per the CLUE framework document <xref target="RFC8845" format="default"/>, are: "presenter", "timekeeper", "attendee", "minute taker", "translator", "chairman", "vice-chairman", and "observer".</t>
<t> A participant can play more than one conference role. In that case, more than one <personType> element will appear in its description. </t>
</section>
</section>
</section>
<section anchor="sec-capture-encoding" numbered="true" toc="default">
<name><captureEncoding></name>
<t>A capture encoding is given by the association of a media capture with an individual encoding, to form a capture stream as defined in <xref target="RFC8845" format="default"/>. Capture encodings are used within CONFIGURE messages from a Media Consumer to a Media Provider for representing the streams desired by the Media Consumer.
For each desired stream, the Media Consumer needs to be allowed to specify: (i) the capture identifier of the desired capture that has been advertised by the Media Provider; (ii) the encoding identifier of the encoding to use, among those advertised by the Media Provider; and (iii) optionally, in case of multicontent captures, the list of the capture identifiers of the desired captures. All the mentioned identifiers are intended to be included in the ADVERTISEMENT message that the CONFIGURE message refers to. The XML model of <captureEncoding> is provided in the following. </t>
<sourcecode type="xml"><![CDATA[
<!-- CAPTURE ENCODING TYPE -->
<xs:complexType name="captureEncodingType">
  <xs:sequence>
    <xs:element name="captureID" type="xs:string"/>
    <xs:element name="encodingID" type="xs:string"/>
    <xs:element name="configuredContent" type="contentType"
      minOccurs="0"/>
    <xs:any namespace="##other" processContents="lax"
      minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="ID" type="xs:ID" use="required"/>
  <xs:anyAttribute namespace="##any" processContents="lax"/>
</xs:complexType>
]]></sourcecode>
<section numbered="true" toc="default">
<name><captureID></name>
<t><captureID> is the mandatory element containing the identifier of the media capture that has been encoded to form the capture encoding.</t>
</section>
<section numbered="true" toc="default">
<name><encodingID></name>
<t><encodingID> is the mandatory element containing the identifier of the applied individual encoding. </t>
</section>
<section numbered="true" toc="default">
<name><configuredContent></name>
<t><configuredContent> is an optional element to be used in the case of the configuration of an MCC. It contains the list of capture identifiers and capture scene view identifiers the Media Consumer wants within the MCC.
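</t>
<t>As a rough, non-normative sketch, a capture encoding selecting a capture and an individual encoding could be represented as follows; the identifiers "CE0", "VC2", and "ENC1" are invented for the example and would need to match identifiers advertised by the Media Provider:</t>
<sourcecode type="xml"><![CDATA[
<captureEncoding ID="CE0">
  <captureID>VC2</captureID>
  <encodingID>ENC1</encodingID>
</captureEncoding>
]]></sourcecode>
<t>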
That element is structured as the <content> element used to describe the content of an MCC. The total number of media captures listed in the <configuredContent> <bcp14>MUST</bcp14> be lower than or equal to the value carried within the <maxCaptures> attribute of the MCC. </t>
</section>
</section>
<section numbered="true" toc="default">
<name><clueInfo></name>
<t>The <clueInfo> element includes all the information needed to represent the Media Provider's description of its telepresence capabilities according to the CLUE framework. Indeed, it is made up of:</t>
<ul empty="false" spacing="normal">
<li>the list of the available media captures (see "<mediaCaptures>", <xref target="sec-media-captures" format="default"/>)</li>
<li>the list of encoding groups (see "<encodingGroups>", <xref target="sec-encoding-groups" format="default"/>)</li>
<li>the list of capture scenes (see "<captureScenes>", <xref target="sec-capture-scenes" format="default"/>)</li>
<li>the list of simultaneous transmission sets (see "<simultaneousSets>", <xref target="sec-simultaneous-sets" format="default"/>)</li>
<li>the list of global views sets (see "<globalViews>", <xref target="sec-global-views" format="default"/>)</li>
<li>metadata about the participants represented in the telepresence session (see "<people>", <xref target="sec-participants" format="default"/>)</li>
</ul>
<t> It has been conceived only for data model testing purposes, and though it resembles the body of an ADVERTISEMENT message, it is not
actually used in the CLUE protocol message definitions. The telepresence capabilities descriptions compliant with this data model specification that can be found in Sections <xref target="sec-XML-sample" format="counter"/> and <xref target="sec-MCC-sample" format="counter"/> are provided by using the <clueInfo> element. </t>
<sourcecode type="xml"><![CDATA[
<!-- CLUE INFO TYPE -->
<xs:complexType name="clueInfoType">
  <xs:sequence>
    <xs:element ref="mediaCaptures"/>
    <xs:element ref="encodingGroups"/>
    <xs:element ref="captureScenes"/>
    <xs:element ref="simultaneousSets" minOccurs="0"/>
    <xs:element ref="globalViews" minOccurs="0"/>
    <xs:element ref="people" minOccurs="0"/>
    <xs:any namespace="##other" processContents="lax"
      minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="clueInfoID" type="xs:ID" use="required"/>
  <xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>
]]></sourcecode>
</section>
<section anchor="sec-extension" numbered="true" toc="default">
<name>XML Schema Extensibility</name>
<t> The telepresence data model defined in this document is meant to be extensible. Extensions are accomplished by defining elements or attributes qualified by namespaces other than "urn:ietf:params:xml:ns:clue-info" and "urn:ietf:params:xml:ns:vcard-4.0" for use wherever the schema allows such extensions (i.e., where the XML schema definition specifies "anyAttribute" or "anyElement"). Elements or attributes from unknown namespaces <bcp14>MUST</bcp14> be ignored. Extensibility was purposefully favored as much as possible based on expectations about custom implementations. Hence, the schema offers enough flexibility to define custom extensions, without losing compliance with the standard.
This is achieved by leveraging <xs:any> elements and <xs:anyAttribute> attributes, which is a common approach with schemas, while still matching the Unique Particle Attribution (UPA) constraint. </t>
<section numbered="true" toc="default">
<name>Example of Extension</name>
<t>When extending the CLUE data model, a new schema with a new namespace associated with it needs to be specified. </t>
<t> In the following, an example of extension is provided. The extension defines a new audio capture attribute ("newAudioFeature") and an attribute for characterizing the captures belonging to an "otherCaptureType" defined by the user. An XML document compliant with the extension is also included. The XML file has been validated against the current XML schema for the CLUE data model. </t>
<sourcecode type="xml"><![CDATA[
<?xml version="1.0" encoding="UTF-8" ?>
<xs:schema targetNamespace="urn:ietf:params:xml:ns:clue-info-ext"
  xmlns:tns="urn:ietf:params:xml:ns:clue-info-ext"
  xmlns:clue-ext="urn:ietf:params:xml:ns:clue-info-ext"
  xmlns:xs="http://www.w3.org/2001/XMLSchema"
  xmlns="urn:ietf:params:xml:ns:clue-info-ext"
  xmlns:xcard="urn:ietf:params:xml:ns:vcard-4.0"
  xmlns:info="urn:ietf:params:xml:ns:clue-info"
  elementFormDefault="qualified"
  attributeFormDefault="unqualified">

  <!-- Import xCard XML schema -->
  <xs:import namespace="urn:ietf:params:xml:ns:vcard-4.0"
    schemaLocation=
      "https://www.iana.org/assignments/xml-registry/schema/
       vcard-4.0.xsd"/>

  <!-- Import CLUE XML schema -->
  <xs:import namespace="urn:ietf:params:xml:ns:clue-info"
    schemaLocation="clue-data-model-schema.xsd"/>

  <!-- ELEMENT DEFINITIONS -->
  <xs:element name="newAudioFeature" type="xs:string"/>
  <xs:element name="otherMediaCaptureTypeFeature"
    type="xs:string"/>

</xs:schema>
]]></sourcecode>
<sourcecode type="xml"><![CDATA[
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<clueInfo xmlns="urn:ietf:params:xml:ns:clue-info"
          xmlns:ns2="urn:ietf:params:xml:ns:vcard-4.0"
          xmlns:ns3="urn:ietf:params:xml:ns:clue-info-ext"
          clueInfoID="NapoliRoom">
  <mediaCaptures>
    <mediaCapture
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:type="audioCaptureType" captureID="AC0"
      mediaType="audio">
      <captureSceneIDREF>CS1</captureSceneIDREF>
      <nonSpatiallyDefinable>true</nonSpatiallyDefinable>
      <individual>true</individual>
      <encGroupIDREF>EG1</encGroupIDREF>
      <ns3:newAudioFeature>newAudioFeatureValue
      </ns3:newAudioFeature>
    </mediaCapture>
    <mediaCapture
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:type="otherCaptureType" captureID="OMC0"
      mediaType="other media type">
      <captureSceneIDREF>CS1</captureSceneIDREF>
      <nonSpatiallyDefinable>true</nonSpatiallyDefinable>
      <encGroupIDREF>EG1</encGroupIDREF>
      <ns3:otherMediaCaptureTypeFeature>OtherValue
      </ns3:otherMediaCaptureTypeFeature>
    </mediaCapture>
  </mediaCaptures>
  <encodingGroups>
    <encodingGroup encodingGroupID="EG1">
      <maxGroupBandwidth>300000</maxGroupBandwidth>
      <encodingIDList>
        <encodingID>ENC4</encodingID>
        <encodingID>ENC5</encodingID>
      </encodingIDList>
    </encodingGroup>
  </encodingGroups>
  <captureScenes>
    <captureScene scale="unknown" sceneID="CS1"/>
  </captureScenes>
</clueInfo>
]]></sourcecode>
</section>
</section>
<section anchor="sec_security" numbered="true" toc="default">
<name>Security Considerations</name>
<t> This document defines, through an XML schema, a data model for telepresence scenarios. The modeled information is identified in the CLUE framework as necessary in order to enable a full-fledged media stream negotiation and rendering.
Indeed, the XML elements herein defined are used within CLUE protocol messages to describe both the media streams representing the Media Provider's telepresence offer and the desired selection requested by the Media Consumer. Security concerns described in <xref target="RFC8845" sectionFormat="comma" section="15"/> apply to this document. </t>
<t> Data model information carried within CLUE messages <bcp14>SHOULD</bcp14> be accessed only by authenticated endpoints. Indeed, authenticated access is strongly advisable, especially when information about individuals (<personInfo>) and/or scenes (<sceneInformation>) is conveyed. There might be more exceptions, depending on the level of criticality that is associated with the setup and configuration of a specific session. In principle, one might even decide that no protection at all is needed for a particular session; this is why authentication has not been identified as a mandatory requirement. </t>
<t> Going deeper into details, some information published by the Media Provider might reveal sensitive data about who and what is represented in the transmitted streams. The vCard included in the <personInfo> elements (<xref target="sub-sec-participantInfo" format="default"/>) mandatorily contains the identity of the represented person. Optionally, vCards can also carry the person's contact addresses, together with their photo and other personal data. Similar privacy-critical information can be conveyed by means of <sceneInformation> elements (<xref target="sec-scene-info" format="default"/>) describing the capture scenes. The <description> elements (<xref target="sec-description" format="default"/>) also can specify details about the content of media captures, capture scenes, and scene views that should be protected.
</t>
<t> Integrity attacks on the data model information encapsulated in CLUE messages can invalidate the success of the telepresence session's setup by misleading the Media Consumer's and Media Provider's interpretation of the offered and desired media streams.</t>
<t> The assurance of the authenticated access and of the integrity of the data model information is up to the involved transport mechanisms, namely the CLUE protocol <xref target="RFC8847" format="default"/> and the CLUE data channel <xref target="RFC8850" format="default"/>. </t>
<t> XML parsers need to be robust with respect to malformed documents. Reading malformed documents from unknown or untrusted sources could result in an attacker gaining privileges of the user running the XML parser. In an extreme situation, the entire machine could be compromised. </t>
</section>
<section numbered="true" toc="default">
<name>IANA Considerations</name>
<t> This document registers a new XML namespace, a new XML schema, the media type for the schema, and four new registries associated, respectively, with acceptable <view>, <presentation>, <sensitivityPattern>, and <personType> values.
</t> <section numbered="true" toc="default"> <name>XML Namespace Registration</name> <t> This section registers a new XML namespace: </t> <dl newline="false" spacing="normal"> <dt>URI:</dt><dd>urn:ietf:params:xml:ns:clue-info</dd> <dt>Registrant Contact:</dt><dd>IETF CLUE Working Group <clue@ietf.org>, Roberta Presta <roberta.presta@unina.it></dd> <dt>XML:</dt><dd></dd> </dl> <sourcecode type="xml" markers="true"><![CDATA[ <?xml version="1.0"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML Basic 1.0//EN" "http://www.w3.org/TR/xhtml-basic/xhtml-basic10.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="content-type" content="text/html;charset=iso-8859-1"/> <title>CLUE Data Model Namespace</title> </head> <body> <h1>Namespace for CLUE Data Model</h1> <h2>urn:ietf:params:xml:ns:clue-info</h2> <p>See <a href="https://www.rfc-editor.org/rfc/rfc8846.txt">RFC 8846</a>. </p> </body> </html> ]]></sourcecode> </section> <section numbered="true" toc="default"> <name>XML Schema Registration</name> <t> This section registers an XML schema per the guidelines in <xref target="RFC3688" format="default"/>. </t> <dl newline="false" spacing="normal"> <dt>URI:</dt><dd>urn:ietf:params:xml:schema:clue-info</dd> <dt>Registrant Contact:</dt> <dd>CLUE Working Group (clue@ietf.org), Roberta Presta (roberta.presta@unina.it).
</dd> <dt>Schema:</dt> <dd>The XML for this schema can be found in its entirety in <xref target="sec-schema" format="default"/> of this document.</dd> </dl> </section> <section numbered="true" toc="default"> <name>Media Type Registration for "application/clue_info+xml"</name> <t>This section registers the "application/clue_info+xml" media type. </t> <dl newline="false" spacing="normal"> <dt>To:</dt><dd>ietf-types@iana.org</dd> <dt>Subject:</dt><dd>Registration of media type application/clue_info+xml</dd> <dt>Type name:</dt><dd>application</dd> <dt>Subtype name:</dt><dd>clue_info+xml</dd> <dt>Required parameters:</dt><dd>(none)</dd> <dt>Optional parameters:</dt><dd>charset Same as the charset parameter of "application/xml" as specified in <xref target="RFC7303" sectionFormat="comma" section="3.2"/>.</dd> <dt>Encoding considerations:</dt><dd>Same as the encoding considerations of "application/xml" as specified in <xref target="RFC7303" sectionFormat="comma" section="3.2"/>. </dd> <dt>Security considerations:</dt><dd>This content type is designed to carry data related to telepresence information. Some of the data could be considered private. This media type does not provide any protection and thus other mechanisms such as those described in <xref target="sec_security" format="default"/> are required to protect the data.
This media type does not contain executable content.</dd> <dt>Interoperability considerations:</dt><dd>None.</dd> <dt>Published specification:</dt><dd>RFC 8846</dd> <dt>Applications that use this media type:</dt><dd>CLUE-capable telepresence systems.</dd> <dt>Additional Information:</dt> <dd> <t><br/></t> <dl newline="false" spacing="compact"> <dt>Magic Number(s):</dt><dd>none</dd> <dt>File extension(s):</dt><dd>.clue</dd> <dt>Macintosh File Type Code(s):</dt><dd>TEXT</dd> </dl> </dd> <dt>Person & email address to contact for further information:</dt><dd>Roberta Presta (roberta.presta@unina.it).</dd> <dt>Intended usage:</dt><dd>LIMITED USE</dd> <dt>Author/Change controller:</dt><dd>The IETF</dd> <dt>Other information:</dt><dd>This media type is a specialization of "application/xml" <xref target="RFC7303" format="default"/>, and many of the considerations described there also apply to "application/clue_info+xml". </dd></dl> </section> <section numbered="true" toc="default"> <name>Registry for Acceptable <view> Values</name> <t> IANA has created a registry of acceptable values for the <view> tag as defined in <xref target="sec-view" format="default"/>. The initial values for this registry are "room", "table", "lectern", "individual", and "audience".
</t> <t> New values are assigned by Expert Review per <xref target="RFC8126" format="default"/>. This reviewer will ensure that the requested registry entry conforms to the prescribed formatting. </t> </section> <section numbered="true" toc="default"> <name>Registry for Acceptable <presentation> Values</name> <t> IANA has created a registry of acceptable values for the <presentation> tag as defined in <xref target="sec-presentation" format="default"/>. The initial values for this registry are "slides" and "images". </t> <t> New values are assigned by Expert Review per <xref target="RFC8126" format="default"/>. This reviewer will ensure that the requested registry entry conforms to the prescribed formatting. </t> </section> <section numbered="true" toc="default"> <name>Registry for Acceptable <sensitivityPattern> Values</name> <t> IANA has created a registry of acceptable values for the <sensitivityPattern> tag as defined in <xref target="sec-sensitivity-pattern" format="default"/>. The initial values for this registry are "uni", "shotgun", "omni", "figure8", "cardioid", and "hyper-cardioid". </t> <t> New values are assigned by Expert Review per <xref target="RFC8126" format="default"/>. This reviewer will ensure that the requested registry entry conforms to the prescribed formatting.
</t> </section> <section numbered="true" toc="default"> <name>Registry for Acceptable <personType> Values</name> <t> IANA has created a registry of acceptable values for the <personType> tag as defined in <xref target="sub-sec-participantType" format="default"/>. The initial values for this registry are "presenter", "timekeeper", "attendee", "minute taker", "translator", "chairman", "vice-chairman", and "observer". </t> <t> New values are assigned by Expert Review per <xref target="RFC8126" format="default"/>. This reviewer will ensure that the requested registry entry conforms to the prescribed formatting. </t> </section> </section> <section anchor="sec-XML-sample" numbered="true" toc="default"> <name>Sample XML File</name> <t>The following XML document represents a schema-compliant example of a CLUE telepresence scenario. Taking inspiration from the examples described in the framework specification <xref target="RFC8845" format="default"/>, the XML representation of an endpoint-style Media Provider's ADVERTISEMENT is provided. </t> <t> There are three cameras, where the central one is also capable of capturing a zoomed-out view of the overall telepresence room. Besides the three video captures coming from the cameras, the Media Provider makes available a further multicontent capture of the loudest segment of the room, obtained by switching the video source across the three cameras.
For the sake of simplicity, only one audio capture is advertised for the audio of the whole room.</t> <t> The three cameras are placed in front of three participants (Alice, Bob, and Ciccio), whose vCard and conference role details are also provided. </t> <t> Media captures are arranged into four capture scene views: </t> <ol spacing="normal" type="1"> <li>(VC0, VC1, VC2) - left, center, and right camera video captures</li> <li>(VC3) - video capture associated with loudest room segment</li> <li>(VC4) - video capture zoomed-out view of all people in the room</li> <li>(AC0) - main audio</li> </ol> <t> There are two encoding groups: (i) EG0, for video encodings, and (ii) EG1, for audio encodings. </t> <t>As to the simultaneous sets, VC1 and VC4 cannot be transmitted simultaneously since they are captured by the same device, i.e., the central camera (VC4 is a zoomed-out view while VC1 is a focused view of the front participant). On the other hand, VC3 and VC4 cannot be simultaneous either, since VC3, the loudest segment of the room, might be at a certain point in time focusing on the central part of the room, i.e., the same as VC1.
The simultaneous sets would then be the following:</t> <ul empty="true"> <li> <dl newline="false" spacing="normal"> <dt>SS1:</dt> <dd>made by VC3 and all the captures in the first capture scene view (VC0, VC1, and VC2) </dd> <dt>SS2:</dt> <dd>made by VC0, VC2, and VC4</dd> </dl> </li> </ul> <sourcecode type="xml"><![CDATA[ <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <clueInfo xmlns="urn:ietf:params:xml:ns:clue-info" xmlns:ns2="urn:ietf:params:xml:ns:vcard-4.0" clueInfoID="NapoliRoom"> <mediaCaptures> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="audioCaptureType" captureID="AC0" mediaType="audio"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureOrigin> <capturePoint> <x>0.0</x> <y>0.0</y> <z>10.0</z> </capturePoint> <lineOfCapturePoint> <x>0.0</x> <y>1.0</y> <z>10.0</z> </lineOfCapturePoint> </captureOrigin> </spatialInformation> <individual>true</individual> <encGroupIDREF>EG1</encGroupIDREF> <description lang="en">main audio from the room </description> <priority>1</priority> <lang>it</lang> <mobility>static</mobility> <view>room</view> <capturedPeople> <personIDREF>alice</personIDREF> <personIDREF>bob</personIDREF> <personIDREF>ciccio</personIDREF> </capturedPeople> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC0" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureOrigin> <capturePoint> <x>-2.0</x> <y>0.0</y> <z>10.0</z> </capturePoint> </captureOrigin> <captureArea> <bottomLeft> <x>-3.0</x> <y>20.0</y> <z>9.0</z> </bottomLeft> <bottomRight> <x>-1.0</x> <y>20.0</y> <z>9.0</z> </bottomRight> <topLeft> <x>-3.0</x> <y>20.0</y> <z>11.0</z> </topLeft> <topRight> <x>-1.0</x> <y>20.0</y> <z>11.0</z> </topRight> </captureArea> </spatialInformation>
<individual>true</individual> <encGroupIDREF>EG0</encGroupIDREF> <description lang="en">left camera video capture </description> <priority>1</priority> <lang>it</lang> <mobility>static</mobility> <view>individual</view> <capturedPeople> <personIDREF>ciccio</personIDREF> </capturedPeople> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC1" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureOrigin> <capturePoint> <x>0.0</x> <y>0.0</y> <z>10.0</z> </capturePoint> </captureOrigin> <captureArea> <bottomLeft> <x>-1.0</x> <y>20.0</y> <z>9.0</z> </bottomLeft> <bottomRight> <x>1.0</x> <y>20.0</y> <z>9.0</z> </bottomRight> <topLeft> <x>-1.0</x> <y>20.0</y> <z>11.0</z> </topLeft> <topRight> <x>1.0</x> <y>20.0</y> <z>11.0</z> </topRight> </captureArea> </spatialInformation> <individual>true</individual> <encGroupIDREF>EG0</encGroupIDREF> <description lang="en">central camera video capture </description> <priority>1</priority> <lang>it</lang> <mobility>static</mobility> <view>individual</view> <capturedPeople> <personIDREF>alice</personIDREF> </capturedPeople> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC2" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureOrigin> <capturePoint> <x>2.0</x> <y>0.0</y> <z>10.0</z> </capturePoint> </captureOrigin> <captureArea> <bottomLeft> <x>1.0</x> <y>20.0</y> <z>9.0</z> </bottomLeft> <bottomRight> <x>3.0</x> <y>20.0</y> <z>9.0</z> </bottomRight> <topLeft> <x>1.0</x> <y>20.0</y> <z>11.0</z> </topLeft> <topRight> <x>3.0</x> <y>20.0</y> <z>11.0</z> </topRight> </captureArea> </spatialInformation> <individual>true</individual> <encGroupIDREF>EG0</encGroupIDREF> <description lang="en">right camera video capture </description> <priority>1</priority> <lang>it</lang> <mobility>static</mobility> <view>individual</view> 
<capturedPeople> <personIDREF>bob</personIDREF> </capturedPeople> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC3" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureArea> <bottomLeft> <x>-3.0</x> <y>20.0</y> <z>9.0</z> </bottomLeft> <bottomRight> <x>3.0</x> <y>20.0</y> <z>9.0</z> </bottomRight> <topLeft> <x>-3.0</x> <y>20.0</y> <z>11.0</z> </topLeft> <topRight> <x>3.0</x> <y>20.0</y> <z>11.0</z> </topRight> </captureArea> </spatialInformation> <content> <sceneViewIDREF>SE1</sceneViewIDREF> </content> <policy>SoundLevel:0</policy> <encGroupIDREF>EG0</encGroupIDREF> <description lang="en">loudest room segment</description> <priority>2</priority> <lang>it</lang> <mobility>static</mobility> <view>individual</view> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC4" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureOrigin> <capturePoint> <x>0.0</x> <y>0.0</y> <z>10.0</z> </capturePoint> </captureOrigin> <captureArea> <bottomLeft> <x>-3.0</x> <y>20.0</y> <z>7.0</z> </bottomLeft> <bottomRight> <x>3.0</x> <y>20.0</y> <z>7.0</z> </bottomRight> <topLeft> <x>-3.0</x> <y>20.0</y> <z>13.0</z> </topLeft> <topRight> <x>3.0</x> <y>20.0</y> <z>13.0</z> </topRight> </captureArea> </spatialInformation> <individual>true</individual> <encGroupIDREF>EG0</encGroupIDREF> <description lang="en">zoomed-out view of all people in the room</description> <priority>2</priority> <lang>it</lang> <mobility>static</mobility> <view>room</view> <capturedPeople> <personIDREF>alice</personIDREF> <personIDREF>bob</personIDREF> <personIDREF>ciccio</personIDREF> </capturedPeople> </mediaCapture> </mediaCaptures> <encodingGroups> <encodingGroup encodingGroupID="EG0"> <maxGroupBandwidth>600000</maxGroupBandwidth> <encodingIDList>
<encodingID>ENC1</encodingID> <encodingID>ENC2</encodingID> <encodingID>ENC3</encodingID> </encodingIDList> </encodingGroup> <encodingGroup encodingGroupID="EG1"> <maxGroupBandwidth>300000</maxGroupBandwidth> <encodingIDList> <encodingID>ENC4</encodingID> <encodingID>ENC5</encodingID> </encodingIDList> </encodingGroup> </encodingGroups> <captureScenes> <captureScene scale="unknown" sceneID="CS1"> <sceneViews> <sceneView sceneViewID="SE1"> <mediaCaptureIDs> <mediaCaptureIDREF>VC0</mediaCaptureIDREF> <mediaCaptureIDREF>VC1</mediaCaptureIDREF> <mediaCaptureIDREF>VC2</mediaCaptureIDREF> </mediaCaptureIDs> </sceneView> <sceneView sceneViewID="SE2"> <mediaCaptureIDs> <mediaCaptureIDREF>VC3</mediaCaptureIDREF> </mediaCaptureIDs> </sceneView> <sceneView sceneViewID="SE3"> <mediaCaptureIDs> <mediaCaptureIDREF>VC4</mediaCaptureIDREF> </mediaCaptureIDs> </sceneView> <sceneView sceneViewID="SE4"> <mediaCaptureIDs> <mediaCaptureIDREF>AC0</mediaCaptureIDREF> </mediaCaptureIDs> </sceneView> </sceneViews> </captureScene> </captureScenes> <simultaneousSets> <simultaneousSet setID="SS1"> <mediaCaptureIDREF>VC3</mediaCaptureIDREF> <sceneViewIDREF>SE1</sceneViewIDREF> </simultaneousSet> <simultaneousSet setID="SS2"> <mediaCaptureIDREF>VC0</mediaCaptureIDREF> <mediaCaptureIDREF>VC2</mediaCaptureIDREF> <mediaCaptureIDREF>VC4</mediaCaptureIDREF> </simultaneousSet> </simultaneousSets> <people> <person personID="bob"> <personInfo> <ns2:fn> <ns2:text>Bob</ns2:text> </ns2:fn> </personInfo> <personType>minute taker</personType> </person> <person personID="alice"> <personInfo> <ns2:fn> <ns2:text>Alice</ns2:text> </ns2:fn> </personInfo> <personType>presenter</personType> </person> <person personID="ciccio"> <personInfo> <ns2:fn> <ns2:text>Ciccio</ns2:text> </ns2:fn> </personInfo> <personType>chairman</personType> <personType>timekeeper</personType> </person> </people> </clueInfo> ]]></sourcecode> </section> <section anchor="sec-MCC-sample" numbered="true" toc="default"> <name>MCC Example</name> <t> Enhancing the scenario presented in the previous example, the Media Provider is able to advertise a composed capture VC7 made by a big picture representing the current speaker (VC3) and two picture-in-picture boxes representing the previous speakers (the previous one, VC5, and the oldest one, VC6). The provider does not want to instantiate and send VC5 and VC6, so it does not associate any encoding group with them. Their XML representations are provided for enabling the description of VC7. </t> <t>A possible description for that scenario could be the following:</t> <sourcecode type="xml"><![CDATA[ <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <clueInfo xmlns="urn:ietf:params:xml:ns:clue-info" xmlns:ns2="urn:ietf:params:xml:ns:vcard-4.0" clueInfoID="NapoliRoom"> <mediaCaptures> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="audioCaptureType" captureID="AC0" mediaType="audio"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureOrigin> <capturePoint> <x>0.0</x> <y>0.0</y> <z>10.0</z> </capturePoint> <lineOfCapturePoint> <x>0.0</x> <y>1.0</y> <z>10.0</z> </lineOfCapturePoint> </captureOrigin> </spatialInformation> <individual>true</individual> <encGroupIDREF>EG1</encGroupIDREF> <description lang="en">main audio from the room </description> <priority>1</priority> <lang>it</lang> <mobility>static</mobility> <view>room</view> <capturedPeople> <personIDREF>alice</personIDREF> <personIDREF>bob</personIDREF> <personIDREF>ciccio</personIDREF> </capturedPeople> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC0" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureOrigin> <capturePoint> <x>0.5</x> <y>1.0</y> <z>0.5</z> </capturePoint> <lineOfCapturePoint>
<x>0.5</x> <y>0.0</y> <z>0.5</z> </lineOfCapturePoint> </captureOrigin> </spatialInformation> <individual>true</individual> <encGroupIDREF>EG0</encGroupIDREF> <description lang="en">left camera video capture </description> <priority>1</priority> <lang>it</lang> <mobility>static</mobility> <view>individual</view> <capturedPeople> <personIDREF>ciccio</personIDREF> </capturedPeople> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC1" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureOrigin> <capturePoint> <x>0.0</x> <y>0.0</y> <z>10.0</z> </capturePoint> </captureOrigin> <captureArea> <bottomLeft> <x>-1.0</x> <y>20.0</y> <z>9.0</z> </bottomLeft> <bottomRight> <x>1.0</x> <y>20.0</y> <z>9.0</z> </bottomRight> <topLeft> <x>-1.0</x> <y>20.0</y> <z>11.0</z> </topLeft> <topRight> <x>1.0</x> <y>20.0</y> <z>11.0</z> </topRight> </captureArea> </spatialInformation> <individual>true</individual> <encGroupIDREF>EG0</encGroupIDREF> <description lang="en">central camera video capture </description> <priority>1</priority> <lang>it</lang> <mobility>static</mobility> <view>individual</view> <capturedPeople> <personIDREF>alice</personIDREF> </capturedPeople> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC2" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureOrigin> <capturePoint> <x>2.0</x> <y>0.0</y> <z>10.0</z> </capturePoint> </captureOrigin> <captureArea> <bottomLeft> <x>1.0</x> <y>20.0</y> <z>9.0</z> </bottomLeft> <bottomRight> <x>3.0</x> <y>20.0</y> <z>9.0</z> </bottomRight> <topLeft> <x>1.0</x> <y>20.0</y> <z>11.0</z> </topLeft> <topRight> <x>3.0</x> <y>20.0</y> <z>11.0</z> </topRight> </captureArea> </spatialInformation> <individual>true</individual> <encGroupIDREF>EG0</encGroupIDREF> <description lang="en">right camera video capture 
</description> <priority>1</priority> <lang>it</lang> <mobility>static</mobility> <view>individual</view> <capturedPeople> <personIDREF>bob</personIDREF> </capturedPeople> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC3" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureArea> <bottomLeft> <x>-3.0</x> <y>20.0</y> <z>9.0</z> </bottomLeft> <bottomRight> <x>3.0</x> <y>20.0</y> <z>9.0</z> </bottomRight> <topLeft> <x>-3.0</x> <y>20.0</y> <z>11.0</z> </topLeft> <topRight> <x>3.0</x> <y>20.0</y> <z>11.0</z> </topRight> </captureArea> </spatialInformation> <content> <sceneViewIDREF>SE1</sceneViewIDREF> </content> <policy>SoundLevel:0</policy> <encGroupIDREF>EG0</encGroupIDREF> <description lang="en">loudest room segment</description> <priority>2</priority> <lang>it</lang> <mobility>static</mobility> <view>individual</view> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC4" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureOrigin> <capturePoint> <x>0.0</x> <y>0.0</y> <z>10.0</z> </capturePoint> </captureOrigin> <captureArea> <bottomLeft> <x>-3.0</x> <y>20.0</y> <z>7.0</z> </bottomLeft> <bottomRight> <x>3.0</x> <y>20.0</y> <z>7.0</z> </bottomRight> <topLeft> <x>-3.0</x> <y>20.0</y> <z>13.0</z> </topLeft> <topRight> <x>3.0</x> <y>20.0</y> <z>13.0</z> </topRight> </captureArea> </spatialInformation> <individual>true</individual> <encGroupIDREF>EG0</encGroupIDREF> <description lang="en">zoomed-out view of all people in the room </description> <priority>2</priority> <lang>it</lang> <mobility>static</mobility> <view>room</view> <capturedPeople> <personIDREF>alice</personIDREF> <personIDREF>bob</personIDREF> <personIDREF>ciccio</personIDREF> </capturedPeople> </mediaCapture> <mediaCapture
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC5" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureArea> <bottomLeft> <x>-3.0</x> <y>20.0</y> <z>9.0</z> </bottomLeft> <bottomRight> <x>3.0</x> <y>20.0</y> <z>9.0</z> </bottomRight> <topLeft> <x>-3.0</x> <y>20.0</y> <z>11.0</z> </topLeft> <topRight> <x>3.0</x> <y>20.0</y> <z>11.0</z> </topRight> </captureArea> </spatialInformation> <content> <sceneViewIDREF>SE1</sceneViewIDREF> </content> <policy>SoundLevel:1</policy> <description lang="en">previous loudest room segment per the most recent iteration of the sound level detection algorithm </description> <lang>it</lang> <mobility>static</mobility> <view>individual</view> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC6" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureArea> <bottomLeft> <x>-3.0</x> <y>20.0</y> <z>9.0</z> </bottomLeft> <bottomRight> <x>3.0</x> <y>20.0</y> <z>9.0</z> </bottomRight> <topLeft> <x>-3.0</x> <y>20.0</y> <z>11.0</z> </topLeft> <topRight> <x>3.0</x> <y>20.0</y> <z>11.0</z> </topRight> </captureArea> </spatialInformation> <content> <sceneViewIDREF>SE1</sceneViewIDREF> </content> <policy>SoundLevel:2</policy> <description lang="en">previous loudest room segment per the second most recent iteration of the sound level detection algorithm </description> <lang>it</lang> <mobility>static</mobility> <view>individual</view> </mediaCapture> <mediaCapture xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="videoCaptureType" captureID="VC7" mediaType="video"> <captureSceneIDREF>CS1</captureSceneIDREF> <spatialInformation> <captureArea> <bottomLeft> <x>-3.0</x> <y>20.0</y> <z>9.0</z> </bottomLeft> <bottomRight> <x>3.0</x> <y>20.0</y> <z>9.0</z> </bottomRight> <topLeft> <x>-3.0</x> <y>20.0</y>
<z>11.0</z> </topLeft> <topRight> <x>3.0</x> <y>20.0</y> <z>11.0</z> </topRight> </captureArea> </spatialInformation> <content> <mediaCaptureIDREF>VC3</mediaCaptureIDREF> <mediaCaptureIDREF>VC5</mediaCaptureIDREF> <mediaCaptureIDREF>VC6</mediaCaptureIDREF> </content> <maxCaptures exactNumber="true">3</maxCaptures> <encGroupIDREF>EG0</encGroupIDREF> <description lang="en">big picture of the current speaker + pips about previous speakers</description> <priority>3</priority> <lang>it</lang> <mobility>static</mobility> <view>individual</view> </mediaCapture> </mediaCaptures> <encodingGroups> <encodingGroup encodingGroupID="EG0"> <maxGroupBandwidth>600000</maxGroupBandwidth> <encodingIDList> <encodingID>ENC1</encodingID> <encodingID>ENC2</encodingID> <encodingID>ENC3</encodingID> </encodingIDList> </encodingGroup> <encodingGroup encodingGroupID="EG1"> <maxGroupBandwidth>300000</maxGroupBandwidth> <encodingIDList> <encodingID>ENC4</encodingID> <encodingID>ENC5</encodingID> </encodingIDList> </encodingGroup> </encodingGroups> <captureScenes> <captureScene scale="unknown" sceneID="CS1"> <sceneViews> <sceneView sceneViewID="SE1"> <description lang="en">participants' individual videos</description> <mediaCaptureIDs> <mediaCaptureIDREF>VC0</mediaCaptureIDREF> <mediaCaptureIDREF>VC1</mediaCaptureIDREF> <mediaCaptureIDREF>VC2</mediaCaptureIDREF> </mediaCaptureIDs> </sceneView> <sceneView sceneViewID="SE2"> <description lang="en">loudest segment of the room</description> <mediaCaptureIDs> <mediaCaptureIDREF>VC3</mediaCaptureIDREF> </mediaCaptureIDs> </sceneView> <sceneView sceneViewID="SE5"> <description lang="en">loudest segment of the room + pips</description> <mediaCaptureIDs> <mediaCaptureIDREF>VC7</mediaCaptureIDREF> </mediaCaptureIDs> </sceneView> <sceneView sceneViewID="SE4"> <description lang="en">room audio</description> <mediaCaptureIDs> <mediaCaptureIDREF>AC0</mediaCaptureIDREF> </mediaCaptureIDs> </sceneView> <sceneView sceneViewID="SE3"> <description lang="en">room 
video</description> <mediaCaptureIDs> <mediaCaptureIDREF>VC4</mediaCaptureIDREF> </mediaCaptureIDs> </sceneView> </sceneViews> </captureScene> </captureScenes> <simultaneousSets> <simultaneousSet setID="SS1"> <mediaCaptureIDREF>VC3</mediaCaptureIDREF> <mediaCaptureIDREF>VC7</mediaCaptureIDREF> <sceneViewIDREF>SE1</sceneViewIDREF> </simultaneousSet> <simultaneousSet setID="SS2"> <mediaCaptureIDREF>VC0</mediaCaptureIDREF> <mediaCaptureIDREF>VC2</mediaCaptureIDREF> <mediaCaptureIDREF>VC4</mediaCaptureIDREF> </simultaneousSet> </simultaneousSets> <people> <person personID="bob"> <personInfo> <ns2:fn> <ns2:text>Bob</ns2:text> </ns2:fn> </personInfo> <personType>minute taker</personType> </person> <person personID="alice"> <personInfo> <ns2:fn> <ns2:text>Alice</ns2:text> </ns2:fn> </personInfo> <personType>presenter</personType> </person> <person personID="ciccio"> <personInfo> <ns2:fn> <ns2:text>Ciccio</ns2:text> </ns2:fn> </personInfo> <personType>chairman</personType> <personType>timekeeper</personType> </person> </people> </clueInfo> ]]></sourcecode> </section>
</middle> <back> <references> <name>References</name> <references> <name>Normative References</name> <!--draft-ietf-clue-framework-25 is 8845 --> <reference anchor='RFC8845' target='https://www.rfc-editor.org/info/rfc8845'> <front> <title>Framework for Telepresence Multi-Streams</title> <author initials='M' surname='Duckworth' fullname='Mark Duckworth' role='editor'> <organization /> </author> <author initials='A' surname='Pepperell' fullname='Andrew Pepperell'> <organization /> </author> <author initials='S' surname='Wenger' fullname='Stephan Wenger'> <organization /> </author> <date month='January' year='2021' /> </front> <seriesInfo name='RFC' value='8845' /> <seriesInfo name='DOI' value='10.17487/RFC8845' /> </reference> <!--draft-ietf-clue-datachannel-18 is 8850 --> <reference anchor="RFC8850" target="https://www.rfc-editor.org/info/rfc8850"> <front> <title>Controlling Multiple Streams for Telepresence (CLUE) Protocol Data Channel</title> <author initials="C." surname="Holmberg" fullname="Christer Holmberg"> <organization/> </author> <date month="January" year="2021"/> </front> <seriesInfo name="RFC" value="8850"/> <seriesInfo name="DOI" value="10.17487/RFC8850"/> </reference> <!--draft-ietf-clue-protocol-19 is 8847 --> <reference anchor='RFC8847' target='https://www.rfc-editor.org/info/rfc8847'> <front> <title>Protocol for Controlling Multiple Streams for Telepresence (CLUE)</title> <author initials='R' surname='Presta' fullname='Roberta Presta'> <organization /> </author> <author initials='S P.'
surname='Romano' fullname='Simon Pietro Romano'> <organization /> </author> <date month='January' year='2021' /> </front> <seriesInfo name="RFC" value="8847" /> <seriesInfo name='DOI' value='10.17487/RFC8847' /> </reference> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml"/> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.6351.xml"/> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.5646.xml"/> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.7303.xml"/> <!--draft-ietf-ecrit-additional-data-38 is now RFC 7852--> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.7852.xml"/> <!--RFC 5226 was replaced with RFC 8126--> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.8126.xml"/> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.8174.xml"/> </references> <references> <name>Informative References</name> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.3688.xml"/> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.4353.xml"/> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.3550.xml"/> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.6838.xml"/> <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.7667.xml"/> </references> </references> <section numbered="false" toc="default"> <name>Acknowledgements</name> <t>The authors thank all the CLUE contributors for their valuable feedback and support.
Thanks also to <contact fullname="Alissa Cooper"/>, whose AD review helped us improve the quality of the document. </t> </section> </back> </rfc>