
Virtualization and Network Evolution Virtual Provisioning Interfaces

Virtual Provisioning Interfaces Technical Report

VNE-TR-VPI-V01-170424 RELEASED

Notice

This Virtualization and Network Evolution technical report is the result of a cooperative effort undertaken at the direction of Cable Television Laboratories, Inc. for the benefit of the cable industry and its customers. You may download, copy, distribute, and reference the documents herein only for the purpose of developing products or services in accordance with such documents, and educational use. Except as granted by CableLabs® in a separate written license agreement, no license is granted to modify the documents herein, or to use, copy, modify or distribute the documents for any other purpose.

This document may contain references to other documents not owned or controlled by CableLabs. Use and understanding of this document may require access to such other documents. Designing, manufacturing, distributing, using, selling, or servicing products, or providing services, based on this document may require intellectual property licenses from third parties for technology referenced in this document. To the extent this document contains or refers to documents of third parties, you agree to abide by the terms of any licenses associated with such third-party documents, including open source licenses, if any.

Cable Television Laboratories, Inc. 2016-2017


DISCLAIMER

This document is furnished on an "AS IS" basis and neither CableLabs nor its members provides any representation or warranty, express or implied, regarding the accuracy, completeness, noninfringement, or fitness for a particular purpose of this document, or any document referenced herein. Any use or reliance on the information or opinion in this document is at the risk of the user, and CableLabs and its members shall not be liable for any damage or injury incurred by any person arising out of the completeness, accuracy, or utility of any information or opinion contained in the document.

CableLabs reserves the right to revise this document for any reason including, but not limited to, changes in laws, regulations, or standards promulgated by various entities, technology advances, or changes in equipment design, manufacturing techniques, or operating procedures described, or referred to, herein.

This document is not to be construed to suggest that any company modify or change any of its products or procedures, nor does this document represent a commitment by CableLabs or any of its members to purchase any product whether or not it meets the characteristics described in the document. Unless granted in a separate written agreement from CableLabs, nothing contained herein shall be construed to confer any license or right to any intellectual property. This document is not to be construed as an endorsement of any product or company or as the adoption or promulgation of any guidelines, standards, or recommendations.


Document Status Sheet

Document Control Number: VNE-TR-VPI-V01-170424

Document Title: Virtual Provisioning Interfaces Technical Report

Revision History: V01 – 04/24/17

Date: April 24, 2017

Status: Work in Progress / Draft / Released / Closed

Distribution Restrictions: Author Only / CL/Member / CL/Member/Vendor / Public

Trademarks

CableLabs® is a registered trademark of Cable Television Laboratories, Inc. Other CableLabs marks are listed at http://www.cablelabs.com/certqual/trademarks. All other marks are the property of their respective owners.


Contents

1 INTRODUCTION AND PURPOSE .......... 9
   1.1 Overall Virtualization Vision .......... 10
   1.2 VPI Scope and Objectives .......... 13
2 INFORMATIVE REFERENCES .......... 15
   2.1 Reference Acquisition .......... 16
3 TERMS AND DEFINITIONS .......... 18
4 ABBREVIATIONS AND ACRONYMS .......... 19
5 VPI ARCHITECTURE .......... 20
   5.1 Architectural Assumptions and Agreements .......... 20
      5.1.1 Assumptions .......... 20
      5.1.2 Working Agreements .......... 21
   5.2 High-Level Architecture .......... 22
      5.2.1 User-to-Service Interface .......... 23
      5.2.2 Service Layer .......... 23
      5.2.3 Controller Layer .......... 25
      5.2.4 Network Layer .......... 26
   5.3 System Operation .......... 26
      5.3.1 Service Provisioning State Management .......... 27
      5.3.2 Service Provisioning API Detail .......... 29
      5.3.3 Management Considerations .......... 30
      5.3.4 Topology Resolution .......... 32
6 USE CASES .......... 33
   6.1 IP High Speed Data Use Cases .......... 34
      6.1.1 Initial CM/ONU Bootup and Service Creation for New Customer .......... 34
      6.1.2 CM/ONU Device Reboot - Resume Services .......... 37
      6.1.3 Delete Service .......... 38
      6.1.4 Dynamically Change Services .......... 38
   6.2 Layer 2 VPN Use Cases .......... 40
      6.2.1 Initial CM/ONU Bootup and Service Creation for New Customer .......... 40
      6.2.2 CM/ONU Device Reboot - Resume Services .......... 44
      6.2.3 Delete Existing "L2VPN" Between Two Endpoints .......... 46
7 PROTOCOLS .......... 49
   7.1 Overview .......... 49
   7.2 REST .......... 49
   7.3 NETCONF .......... 49
   7.4 RESTCONF .......... 50
   7.5 Service-to-Controller Protocols .......... 50
   7.6 Controller-to-Network Protocols .......... 51
   7.7 Application of RESTCONF, YANG for VPI .......... 51
   7.8 Application of Asynchronous Notifications for VPI .......... 52
   7.9 Application of RPC to VPI .......... 54
8 INFORMATION AND DATA MODELS .......... 55
   8.1 Component Model .......... 55
   8.2 Service-Controller Model .......... 56
      8.2.1 Service-Controller (NorthBound) Information Model .......... 56
      8.2.2 Type Definitions .......... 64


      8.2.3 Service-Controller Data Model .......... 65
   8.3 Controller-Network Model .......... 65
      8.3.1 Controller-Network (SouthBound) Information Model .......... 65
      8.3.2 CmServicesCfg Model .......... 67
      8.3.3 QosCfg Model .......... 68
      8.3.4 L2vpnCfg Model .......... 73
      8.3.5 McastJoinAuthorization Model .......... 76
      8.3.6 SubscriberMgmt Model .......... 77
      8.3.7 CpeMgmt Model .......... 78
      8.3.8 CmtsStatus Model .......... 80
      8.3.9 CMTS DocsQosCfg Model .......... 82
      8.3.10 Controller-Network Data Model .......... 84
9 DISAGGREGATED DPOE ARCHITECTURE .......... 85
   9.1 State of PON and DPoE .......... 85
   9.2 Problem Statement .......... 85
      9.2.1 Industry Drivers .......... 86
   9.3 Evolution of DPoE .......... 86
      9.3.1 Current DPoE Architecture .......... 86
      9.3.2 DPoE Evolution: Next Steps/Beyond the Chassis .......... 87
   9.4 Next Step: Disaggregated DPoE with Defined Tunnel Interfaces .......... 91
      9.4.1 Disaggregated DPoE Architecture .......... 91
      9.4.2 Generalized Disaggregated DPoE Architecture .......... 92
      9.4.3 Disaggregated DPoE Architecture - Defined Tunnel Interfaces - Proposals .......... 93
      9.4.4 System Operation (A Day in the Life of a Packet) .......... 101
10 CONCLUSIONS .......... 103
   10.1 Security Threat Analysis .......... 103
   10.2 Challenges and Gaps .......... 103
      10.2.1 Operational .......... 103
      10.2.2 Dependencies .......... 104
      10.2.3 Technical .......... 104
   10.3 Next Steps .......... 105
      10.3.1 VPI Requirements Imposed on Other Specifications .......... 105
      10.3.2 Virtualized Network Architecture .......... 105
      10.3.3 Disaggregated EPON Architecture .......... 105
      10.3.4 Security Architecture .......... 105
   10.4 Summary .......... 105
APPENDIX I VPI RESTCONF ENDPOINTS .......... 107
   I.1 Service-to-Controller VPI Endpoints .......... 107
   I.2 Controller-to-Network VPI Endpoints .......... 109
APPENDIX II TRANSPORT BETWEEN R-OLT AND DPOE SYSTEM VNF(S) .......... 116
   II.1 Disaggregated EPON Transport Technology Evaluation Criteria .......... 116
      II.1.1 Provisioning .......... 116
      II.1.2 Operations .......... 116
      II.1.3 Network Layer and Data Link Layer Protocol Encapsulation .......... 117
      II.1.4 Quality of Service Differentiation .......... 117
      II.1.5 Security .......... 118
      II.1.6 Distribution Limitations .......... 118
      II.1.7 Deployment Readiness .......... 118
   II.2 Transport Protocols Evaluated .......... 118
      II.2.1 VLAN or Q-in-Q (802.1ad) .......... 119
      II.2.2 L2TPv3 .......... 120
      II.2.3 GRE / SoftGRE .......... 122


      II.2.4 VxLAN .......... 122
      II.2.5 NVGRE .......... 124
      II.2.6 MPLS .......... 125
      II.2.7 Segment Routing .......... 125
      II.2.8 IPsec .......... 126
   II.3 Transport Technologies Analysis .......... 128
APPENDIX III PROOF OF CONCEPT (INFORMATIVE) .......... 129
   III.1 Summary .......... 129
   III.2 The Problem .......... 129
   III.3 The Solution .......... 129
   III.4 The Demonstration .......... 130
   III.5 Demo Screenshots .......... 130
   III.6 Expected Benefits .......... 132
APPENDIX IV DPOE OVERVIEW .......... 133
APPENDIX V ACKNOWLEDGEMENTS .......... 135

Figures

Figure 1 - Virtualized Access Network Architecture .......... 11
Figure 2 - High-Level SDN Architecture .......... 13
Figure 3 - High-Level Service Provider Architecture with VPI Architecture Focus .......... 20
Figure 4 - High-Level Service Provider Architecture .......... 23
Figure 5 - Service Provisioning Flow State Management .......... 29
Figure 6 - VPI System Operation Overview .......... 30
Figure 7 - IPHSD Service - Initial Bootup and Service Creation Use Case Flows: DOCSIS Variation .......... 35
Figure 8 - IPHSD Service - Initial Bootup and Service Creation Use Case Flows: DPoE Variation .......... 36
Figure 9 - IPHSD Service - Service Delete and Change Use Case Flows .......... 39
Figure 10 - DOCSIS Network L2VPN Service - Initial Bootup and Service Creation Use Case Flows .......... 41
Figure 11 - DPoE Network L2VPN Service - Initial Bootup and Service Creation Use Case Flows .......... 43
Figure 12 - DOCSIS Network L2VPN Service - Service Deletion Use Case Flow .......... 47
Figure 13 - DPoE Network L2VPN Service - Service Deletion Use Case Flow .......... 48
Figure 14 - VPI Architecture Component Model .......... 55
Figure 15 - Service-Controller Information Model .......... 57
Figure 16 - Controller to Network Information Model - High Level: CmServicesCfg .......... 66
Figure 17 - CMServices .......... 67
Figure 18 - Controller to Network Information Model - QoSCfg .......... 68
Figure 19 - Controller to Network Information Model - L2vpnCfg Class .......... 73
Figure 20 - MulticastJoinAuthorization Object .......... 76
Figure 21 - SubscriberMgmt Object .......... 77
Figure 22 - CpeMgmt Object .......... 78
Figure 23 - Controller to Network Information Model - CMTS Status .......... 80
Figure 24 - CMTS DocsQosCfg Objects .......... 83
Figure 25 - DPoEv2.0 Interfaces and Reference Points .......... 87
Figure 26 - Fully Disaggregated DPoE Architecture Option 1 .......... 88
Figure 27 - Proprietary Disaggregated Architecture .......... 88
Figure 28 - Disaggregated Architecture - Defined Tunnel .......... 89
Figure 29 - DPoE to Native EPON Provisioning Migration .......... 90
Figure 30 - Synthesized Solution .......... 92
Figure 31 - Option 1 Disaggregated DPoE Architecture .......... 93
Figure 32 - Option 1 Partially Distributed Disaggregated DPoE Architecture .......... 94
Figure 33 - Option 1 Fully Distributed Disaggregated DPoE Architecture .......... 94
Figure 34 - Disaggregated DPoE Architecture Option 2 .......... 95
Figure 35 - Disaggregated DPoE Architecture Option 3 .......... 98
Figure 36 - Disaggregated DPoE Architecture Option 4 - OLT in the Headend .......... 99
Figure 37 - Disaggregated DPoE Architecture Option 4 - Remote OLT .......... 99
Figure 38 - Infrastructure over which the data packets will flow .......... 101
Figure 39 - A Day in the Life of a Packet on an IPHSD Service Flow .......... 102
Figure 40 - A Day in the Life of a Packet on an L2VPN Service Flow .......... 102
Figure 41 - Q-in-Q Topology Example .......... 120
Figure 42 - VLAN and Q-in-Q Header Comparison .......... 120
Figure 43 - L2TPv3 Example .......... 121
Figure 44 - L2TPv3 Frame Format .......... 121
Figure 45 - GRE Tunnel Configuration Example .......... 122
Figure 46 - GRE Header .......... 122
Figure 47 - VxLAN Example .......... 123
Figure 48 - VxLAN Header .......... 123
Figure 49 - VTEP Example .......... 123
Figure 50 - VxLAN and NVGRE Header Example .......... 124
Figure 51 - MPLS Label Format .......... 125
Figure 52 - Segment Routing Example .......... 126
Figure 53 - Segment Routing Header Example .......... 126
Figure 54 - IPsec Example .......... 127
Figure 55 - AH Header Example .......... 127
Figure 56 - ESP Header Example .......... 128
Figure 57 - Layered Service Provisioning Architecture with Standardized Interfaces .......... 129
Figure 58 - Demo Components Communicate across VPI-Defined Interfaces .......... 130
Figure 59 - WebPortal of Application to Provision Services .......... 130
Figure 60 - OpenDaylight Controller: Status of Current Services .......... 131
Figure 61 - RESTCONF Transactions: Between Application and SDN Controller .......... 131
Figure 62 - WebPortal: Showing Customer/Service Status .......... 132
Figure 63 - DPoE Architecture .......... 133

Page 8: Virtualization and Network Evolution Virtual Provisioning ...

VNE-TR-VPI-V01-170424 Virtualization and Network Evolution

8 CableLabs 04/24/17

Tables

Table 1 - VPI Model Status Reported in Response to Queries .......... 31
Table 2 - Example VPI RESTCONF Use Case Transactions: Service Layer to Controller Layer .......... 51
Table 3 - Example VPI RESTCONF Use Case Transactions: Controller Layer to Network Layer .......... 52
Table 4 - VPI RESTCONF Server Event Streams .......... 53
Table 5 - VPI Remote Procedure Calls .......... 54
Table 6 - Product Object .......... 58
Table 7 - Subscriber Object .......... 58
Table 8 - Service Object .......... 59
Table 9 - ServiceStatus Object .......... 59
Table 10 - VpnParamSet Object .......... 60
Table 11 - L2vpnEncapsulation Object .......... 60
Table 12 - EndpointParameters Object .......... 61
Table 13 - Flow Object .......... 61
Table 14 - FlowStatus Object .......... 62
Table 15 - CPE Object .......... 63
Table 16 - CPE Object .......... 63
Table 17 - Data Types for Service-Controller Information Model .......... 64
Table 18 - CmServicesCfg Object .......... 67
Table 19 - AggregatedSF Object .......... 69
Table 20 - DpoeAsf Object .......... 69
Table 21 - DocsisAsf Object .......... 69
Table 22 - Serviceflow Object .......... 70
Table 23 - Serviceflow Object .......... 70
Table 24 - DpoeParamset Object .......... 71
Table 25 - DpoeParamset Object .......... 71
Table 26 - DpoeParamset Object .......... 72
Table 27 - DpoeL2vpnCfg Object .......... 74
Table 28 - DpoeL2NetworkCfg Object .......... 74
Table 29 - L2vpnNsiEncap Object .......... 75
Table 30 - CmtsStatus Object .......... 80
Table 31 - CmtsList Object .......... 81
Table 32 - MacDomain Status Object .......... 81
Table 33 - CM Status Object .......... 82
Table 34 - MESP Reference Table Object .......... 83
Table 35 - Protocols Proposed for Disaggregated DPoE Architecture Option 1 .......... 94
Table 36 - Summary of Disaggregated DPoE Architecture Options .......... 100


1 INTRODUCTION AND PURPOSE

DOCSIS® is well established as a highly effective technology for delivering high-speed IP data service over the cable industry's Hybrid Fiber/Coax (HFC) network. The success of the DOCSIS platform has led to deployment of data networks on a very large scale, connecting millions of cable modems to the Internet around the world. Deploying, operating, and maintaining data networks at this scale brings correspondingly large challenges. Operational challenges include the need to provision, monitor, and maintain multiple services over hundreds of network-connected devices. The industry continually strives to improve operational efficiency in order to improve the quality, reliability, and availability of its services to customers.

Recent developments in new data networking technology paradigms, focused on improving operational efficiency by leveraging the flexibility of software to automate network functions, offer potential to radically change how data services are delivered. Software Defined Networking (SDN) and Network Functions Virtualization (NFV) are emerging technologies that have been applied to great effect in data centers and are beginning to find applications in data distribution networks.

CableLabs, its member cable operator companies, and the cable industry vendor community have begun investigating and experimenting with use of SDN and NFV technology to improve efficiency of high-speed data service operations. CableLabs published the SDN Architecture for Cable Access Networks [SDN TR] that describes an architecture for leveraging benefits of SDN to enable automated on-demand provisioning of services on the cable operator’s access network. There are several capabilities enabled by SDN technology with potentially great value to the delivery of data services over the cable industry’s access network:

• Enable a software-programmable network for an operator across various access technologies.

• Enable on-demand, minimal-human-intervention ("low touch") provisioning and management of various network devices from a centralized controller.

• Respond instantaneously to events arising in the data plane, such as session setup or termination, that require changes in network state. Event-driven actions include those taken in response to signaling inputs.

• Improve automation using common APIs to abstract the underlying access network technologies such as DOCSIS, Ethernet Passive Optical Networks (EPON), and wireless networks.

• Allow rapid creation and testing of new services based on the platform created.
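As a loose illustration of the event-driven behavior called out in the list above, the sketch below shows a controller reacting to session setup and teardown events by updating network state. The event names, handler registry, and state store are illustrative assumptions, not part of any CableLabs model.

```python
# Hypothetical sketch of event-driven controller behavior. Event names and
# the handler/state structures are assumptions for illustration only.
from typing import Callable, Dict


class MiniController:
    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[[dict], None]] = {}
        self.network_state: Dict[str, str] = {}  # flow-id -> state

    def on(self, event: str, handler: Callable[[dict], None]) -> None:
        """Register a handler for a named data-plane event."""
        self.handlers[event] = handler

    def emit(self, event: str, data: dict) -> None:
        """Dispatch an event (e.g., arriving from a signaling input)."""
        if event in self.handlers:
            self.handlers[event](data)


controller = MiniController()

def session_setup(data: dict) -> None:
    # Install network state in response to a session-setup event.
    controller.network_state[data["flow_id"]] = "active"

def session_teardown(data: dict) -> None:
    # Release network state when the session terminates.
    controller.network_state[data["flow_id"]] = "released"

controller.on("session-setup", session_setup)
controller.on("session-teardown", session_teardown)

controller.emit("session-setup", {"flow_id": "flow-42"})
print(controller.network_state["flow-42"])  # active
```

The point of the sketch is only the shape of the control loop: network state changes are driven by events rather than by periodic, static provisioning.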

Studies show that residential and commercial consumer demand for bandwidth continues to increase. These customers will fuel demand for improved service velocity and drive changes to customer service interactions (e.g., adding or changing services dynamically through a real-time application instead of calls to Customer Support). Satisfying these sophisticated customers also reduces operator OPEX through fewer service calls.

The Virtual Provisioning Interfaces (VPI) Technical Report extends concepts introduced in the CableLabs SDN Architecture TR [SDN TR]. The VPI architecture defines data models and interfaces that abstract the functions of the underlying access network components. This allows the provisioning of those components to be automated and, eventually, implemented as virtual functions. These network functions, when implemented as virtual functions, are intended to be disaggregated from traditional equipment and deployed on generic computing platforms distributed throughout the network, instantiated where and when they are needed.

The focus of the VPI architecture is to determine the flow logistics needed to implement the use cases, to define and collect the required parameters, and to create the corresponding data models and APIs. Once the APIs for the underlying network are created, service providers and vendors can propose solutions that make use of the APIs and be assured of interoperability.

Section 1.1 provides a vision for virtualization for the cable telecommunications industry. Section 1.2 provides an overview and scope of the VPI model within this industry vision.

The VPI Technical Report also explores how the DOCSIS Provisioning of EPON architecture, described in [DPoE Arch] and related specifications, could be disaggregated to leverage SDN and NFV concepts and the VPI architecture for deployment of services on EPON access networks.


1.1 Overall Virtualization Vision

This section aims to describe how SDN and NFV will be used in a cable operator's network. A logical virtualized network architecture is shown in Figure 1. The figure includes three new and important components in a virtualized network: the Service Orchestrator, the NFV Orchestrator and the SDN Controllers in the service operator’s network. These components will orchestrate, provision and manage services on the access network. This section leverages terms and architectural concepts described in [ETSI NFV] and other related ETSI specifications. Note, however, that the scope of the VPI architecture described in this technical report is limited to the provisioning of services (for now, IP High-Speed Data and L2VPN service) on a single operator’s HFC and EPON access networks and does not encompass the complete virtualization architecture described in [ETSI NFV]. ETSI NFV concepts such as Management and Orchestration (MANO) and policy management are out of scope for the VPI TR.

There is already a new wave of "cloud native" NFV platforms that are not ETSI-NFV based. ONAP (Open Network Automation Platform) and CORD (Central Office Re-architected as a Datacenter), for example, are not entirely ETSI-NFV compliant, so a service provider will need to do some alignment as it builds its SDN/NFV solution.

In the architecture shown in Figure 1, a master service orchestrator, or "orchestrator of orchestrators," exchanges information with and receives commands from the operator's BSS and OSS, translating those commands into network configuration commands for the SDN Controller and into virtual component commands for the NFV Orchestrator (NFVO).

The SDN Controller translates the infrastructure/network request from the Service Orchestrator to specific network configuration commands for the network devices.

A service provider might also implement a hierarchy of SDN Controllers (a Master SDN Controller with underlying SDN Controllers), depending on the needs of the infrastructure at each layer. Responsibility for network control is pushed toward the edge of the network to the extent possible. Devices at the edge have more information, and more timely information, about network topology, performance, and health than controllers near the network core do. Examples of functions that might sensibly be controlled near the edge include address assignment and edge and network device provisioning.

Our vision of the virtualized access network combines the NFV and SDN pieces seamlessly. The service orchestrator plays the master orchestrator role, calling upon the NFVO and the SDN Controller to set up the services it needs, along with the underlying network.


Figure 1 - Virtualized Access Network Architecture

The NFVO manages and controls the virtualization platform and the VNFs which are running on that platform. A VNF manager and a VIM handle the lifecycle management of the VNFs and the setup of the underlying hardware, respectively. The VIM establishes a virtual machine (VM) environment on available hardware platforms, and the VNF manager instantiates the required functions, shown figuratively in the green boxes in Figure 1, using resources of the deployed VMs. The SDN Controller is the master of the networking domain; it interacts with physical network devices as well as the VNFs. The SDN Controller establishes required connectivity between the VNFs and maintains a record of the network topology.

The VPI TR describes the interfaces listed below:

• between the application or service orchestrator (Service Layer) and the SDN Controller (Controller Layer), and

• between the Controller Layer and the VNFs (or physical access network elements) (Network Layer)

These interfaces currently cover the establishment and maintenance of IP High Speed Data (IPHSD) services and Layer 2 VPN (L2VPN) services. The VPI TR also describes the lifecycle of individual control and data flows, shown as dotted and dashed lines, respectively, in Figure 1.
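To make the Service Layer to Controller Layer interface concrete, the sketch below builds a RESTCONF-style request for creating a service. The URL path, module prefix, and field names are assumptions for illustration; the authoritative structure is defined by the VPI YANG models and the RESTCONF transactions cataloged in this report.

```python
import json

# Illustrative RESTCONF-style request from the Service Layer to the
# Controller Layer. The endpoint path ("vpi:services"), the field names,
# and the hostname are hypothetical placeholders, not the real VPI model.
def build_service_request(subscriber_id: str, product: str) -> dict:
    body = {
        "vpi:service": {
            "subscriber-id": subscriber_id,
            "product": product,          # e.g., an IPHSD service tier
            "admin-state": "enabled",
        }
    }
    return {
        "method": "POST",
        "url": "https://controller.example.net/restconf/data/vpi:services",
        "headers": {"Content-Type": "application/yang-data+json"},
        "payload": json.dumps(body),
    }


req = build_service_request("sub-1001", "iphsd-gold")
print(req["method"], req["url"])
```

In an actual deployment the Service Layer would send this request over HTTPS and track the returned service status; here only the message construction is shown.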

BSS/OSS is a suite of service provider operations, administration, and management utilities and applications, hosted on network application servers, operating on data hosted by a database or a larger data center, and exchanging


information with and issuing commands to networked elements to ensure the network is operating correctly and delivering services to customers.

Introducing an Orchestrator and an SDN Controller abstracts the underlying network for the OSS and BSS, assuming some of the support systems’ responsibilities and presenting the OSS/BSS with a more unified interface.

Some BSS/OSS functions may shift from interfacing directly with servers, appliances, network infrastructure equipment, and customer premises equipment to interfacing with the orchestrator, the SDN Controller, or both. The operator may need to buy or develop software components for the orchestration platform and for the SDN Controller in order to adapt them to the operator's OSS and BSS; for example, an OSS-specific plug-in for the SDN Controller.

A service orchestrator is responsible for receiving business logic messages from the operator’s BSS, translating them to infrastructure commands that can be interpreted and acted upon by the SDN Controller, and keeping track of the state of the service.

An orchestrator receives service requests and maps them onto resources within its scope, which likely includes the ability to negotiate with peer or subordinate domains for the use of resources beyond its control. A service orchestrator serves as the master of the virtualization system and represents the network to the application layer. It receives instructions from the operator's BSS, such as a request to deliver a service with a defined set of attributes to a customer. The service orchestrator interprets the request and, if necessary, translates it into a series of control commands to the NFV MANO to schedule and instantiate the virtual infrastructure and virtual functions required by the requested service. The service orchestrator also translates BSS and OSS requests into instructions for the SDN Controller, to configure the equipment needed for service delivery and establish the connections needed between them.

Note, however, that while a single Service Layer, a single orchestrator, and/or a single network Controller Layer may suffice in sufficiently constrained networks, the true value of a virtualized system lies in the scale and reusability enabled when there are multiple instances of each. In such a multiple-layer system, a peer or subordinate orchestration domain receives a service request, which it then decomposes onto the resources within its scope. Orchestration and SDN control thus occur at every level, recursively.

SDN Controllers are the entities that manage network connectivity for devices, physical or virtual. The SDN Controller abstracts the underlying network for the service provider and the applications running above it: it manages and controls the devices "below" (via southbound APIs) and serves the applications and business logic "above" (via northbound APIs) to create intelligent, dynamically programmable networks.

The SDN Controller receives instructions from the Service Orchestrator to configure networked devices and the connections between them to enable and facilitate delivery of the service ordered by the service provider's subscriber. The SDN Controller is used both for connectivity within the NFV infrastructure (NFVI) and for provisioning and management of flows within the physical access network. The SDN Controller is perceived as the center of knowledge of the resources present in the physical and virtual networks, and also as the service-oriented driver for lifecycle management as needed. It has a view of network topology and load that permits it to optimize resource placement and use according to service demand. SDN Controllers accept user and network input along with policy, and continually optimize network state according to policy criteria and network conditions. The SDN Controller isolates each customer from the others, polices customer requests to ensure they do not exceed the limits of their agreement with the service provider, and optimizes network resources within its control according to policy to rationalize network state, service demands, and traffic flows.
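The policing role described above can be sketched as a simple admission check against the customer's agreement. The agreement structure and rate fields below are illustrative assumptions, not part of the VPI data models.

```python
# Minimal sketch of the request-policing role of the SDN Controller.
# The agreement table and field names are hypothetical placeholders.
AGREEMENTS = {
    "cust-a": {"max_down_mbps": 300, "max_up_mbps": 30},
}


def police_request(customer: str, down_mbps: int, up_mbps: int) -> bool:
    """Return True only if the requested rates fit the customer's agreement."""
    limits = AGREEMENTS.get(customer)
    if limits is None:
        return False  # unknown customers are rejected by default
    return (down_mbps <= limits["max_down_mbps"]
            and up_mbps <= limits["max_up_mbps"])


print(police_request("cust-a", 200, 20))  # True: within agreement
print(police_request("cust-a", 500, 20))  # False: exceeds downstream limit
```

A real controller would of course combine checks like this with topology and load information before committing network state.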

The NFVO builds the NFV services, including on-boarding of new Network Service (NS) and VNF packages. It also handles NS lifecycle management (instantiation, scale-out/in, performance measurement, event correlation, and termination), validates and authorizes NFVI resource requests, and manages policy for NS instances.

As described above, the SDN and NFV frameworks support a virtualized, programmable network. This is a broad area, of which the VPI architecture tackles mainly the dynamic provisioning and configuration of services on the network. The project also touches on virtualization in the disaggregated EPON architecture area.


Ultimately the Virtualized Provisioning Interface concept extends to include on-demand provisioning based on sources of input beyond the operator’s traditional business and operations support systems. However, that is beyond the scope of this phase of the VPI Technical Report, and is left for future projects.

1.2 VPI Scope and Objectives

The scope of the VPI architecture described in this Technical Report is the definition of interfaces between the Service Layer and the Controller Layer, and between the Controller Layer and the Network Layer, within a single service provider administrative domain, to enable on-demand, programmable provisioning of IPHSD service and Layer 2 VPN service to endpoints on customers' premises. The VPI architecture addresses service provisioning only, not device provisioning. See Figure 2.

The main goal of the VPI architecture is to make the access network programmable. The idea is to abstract the underlying network and its complex configuration, presenting a simple interface to higher-layer applications that need to implement services on the network. This abstraction is done by using an SDN controller, which acts as a translation layer between any application and any type of underlying access network. For example, provisioning an IPHSD service or a gaming service should be the same regardless of whether the underlying network is DOCSIS or EPON. The VPI architecture creates the building blocks for this by defining common data models and interfaces for the SDN controller and the underlying access network devices (DOCSIS or DPoE).
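The access-technology abstraction just described can be sketched as an adapter pattern: the same northbound call provisions a service whether the network below is DOCSIS or EPON. The class and method names below are illustrative assumptions, not the VPI interface itself.

```python
# Sketch of the access-technology abstraction: one northbound call, with
# technology-specific translation hidden behind adapters. All names here
# are hypothetical placeholders for the real VPI data models and APIs.
from abc import ABC, abstractmethod


class AccessNetworkAdapter(ABC):
    @abstractmethod
    def provision_iphsd(self, subscriber_id: str, tier: str) -> str: ...


class DocsisAdapter(AccessNetworkAdapter):
    def provision_iphsd(self, subscriber_id: str, tier: str) -> str:
        # Would translate to DOCSIS service-flow configuration.
        return f"docsis: service flows configured for {subscriber_id} ({tier})"


class EponAdapter(AccessNetworkAdapter):
    def provision_iphsd(self, subscriber_id: str, tier: str) -> str:
        # Would translate to DPoE/EPON configuration.
        return f"epon: logical links configured for {subscriber_id} ({tier})"


def provision(adapter: AccessNetworkAdapter, subscriber_id: str, tier: str) -> str:
    # The caller's view is identical regardless of access technology.
    return adapter.provision_iphsd(subscriber_id, tier)


print(provision(DocsisAdapter(), "sub-7", "gold"))
print(provision(EponAdapter(), "sub-7", "gold"))
```

In the VPI architecture this translation role is played by the SDN controller, with the common data models taking the place of the `provision_iphsd` signature sketched here.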

The SDN and NFV concepts can be applied to solve problems in the access network. This can become a huge body of work spanning areas such as SDN control of network elements, virtualization of network functions, orchestration of virtual infrastructure, management of service and virtual function lifecycles, instantiation of the hardware and software elements required to realize the network architecture, and establishment of the connections between them. The VPI architecture focuses mainly on the dynamic service provisioning aspects within this bigger SDN/NFV framework and is designed to fit into any such framework.

Figure 2 - High-Level SDN Architecture

The VPI architecture is scoped to address some of the shortcomings of the existing provisioning process in use by cable operators to deliver IPHSD and L2VPN services, as well as the need to migrate to a virtualized networking environment in the industry. Some of these shortcomings are listed below:

• Service orchestrator platforms and software defined networking controller platforms being developed by various industry groups, including OpenDaylight, Open Networking Operating System (ONOS), and other proprietary controllers, do not focus on migrating the provisioning of services from the current static methods used for DOCSIS, PacketCable, and EPON networks to the dynamic provisioning model cable operators need to deploy.

• Service establishment models being defined and developed by open source projects and standards development organizations such as IETF and MEF are largely focused on enterprise networks, commercial services, and data center networking. These models do not meet the cable industry's needs for delivering residential services such as high-speed IP data connectivity, for supporting commercial services specifically over DOCSIS/EPON access networks, or for IP video streaming and other DOCSIS network-based applications such as DOCSIS 3.1 profile management.


• Virtualization solutions being developed for Passive Optical Networks (PON) are largely focused on Gigabit PON (GPON) network architectures, but the cable industry needs a solution for Ethernet PON (EPON) and DPoE.

• Provisioning systems for services provided by cable operators are typically separate systems for each type of access network on which the service is implemented. These present different interfaces to the operator’s Operations Support System (OSS), resulting in the operator managing different vertical silos to provision similar kinds of service.

This project does not aspire to solve all of the problems/gaps in the Virtualization and SDN space. Some relevant topics may be addressed in future revisions or new projects. It is appropriate to clarify some of the assumptions that bound the VPI architecture. Some of these assumptions are listed below:

• The intended environment is internal to a single MSO administration. Questions of multi-company information hiding, policy enforcement, orchestration, service assurance, contention for shared resources, and the like may need to be addressed by an upper-level service orchestrator or BSS, but lie beyond the scope of the VPI architecture.

• Because the scope is internal to a single MSO, security across interfaces can be designed to internal company standards, which may require mutual authentication but assume full trust in the messages exchanged between validated counterparties.

• Because the scope is comparatively narrow, recursive abstraction of resources or recursive decomposition of entities (like the SDN Controller) are unnecessary.

• Because the scope of this work is basic L2VPN and IPHSD service, added-value functions that would require the provisioning of customer- or flow-specific attributes in a possibly virtualized network function are out of scope.

Another objective of this technical report is to describe disaggregation of a DPoE System and a distributed EPON architecture (Section 9). This report describes a disaggregated architecture for EPON in an MSO network and aligns it with concepts from Distributed Access Architectures such as Remote PHY and Remote MACPHY for DOCSIS. The architecture will include separation of the management plane and data plane components and describe how they connect and interact.


2 INFORMATIVE REFERENCES

This technical report uses the following informative references. References are either specific (identified by date of publication, edition number, version number, etc.) or non-specific. For a non-specific reference, the latest version applies.

[AGG SF YANG] docsis-aggregated-service-flow.yang, http://www.cablelabs.com/YANG/DOCSIS/VPI

[CCAP-OSSIv3.1] CCAP Operations Support System Interface Specification, CM-SP-CCAP-OSSIv3.1-I08-170111, January 11, 2017, Cable Television Laboratories, Inc.

[CCAPv3.1 YANG] [email protected], http://www.cablelabs.com/YANG/DOCSIS/3.1/

[CLASS YANG] docsis-classifier.yang, http://www.cablelabs.com/YANG/DOCSIS/VPI

[CM CPE MGMT YANG] docsis-cm-cpe-mgmt.yang, http://www.cablelabs.com/YANG/DOCSIS/VPI

[CM OSSIv3.1] DOCSIS 3.1 CM OSSI Specification, CM-SP-CM-OSSIv3.1-I08-170111, January 11, 2017, Cable Television Laboratories, Inc.

[CM SERVICES CFG YANG]

docsis-cm-services-cfg.yang, http://www.cablelabs.com/YANG/DOCSIS/VPI

[CMTS STATUS YANG] docsis-cmts-status.yang, http://www.cablelabs.com/YANG/DOCSIS/VPI

[DCA-MHAv2] Modular Headend Architecture v2 Technical Report, CM-TR-MHAv2-V01-150615, June 15, 2015, Cable Television Laboratories, Inc.

[DOCS TYPES YANG] cl-docsis-types.yang, http://www.cablelabs.com/YANG/DOCSIS/VPI

[DOCSIS NSI] CMTS Network Side Interface, SP-CMTS-NSI-I01-960702, July 2, 1996, Cable Television Laboratories, Inc.

[DPoE Arch] DPoE Architecture Specification, DPoE-SP-ARCHv2.0-I05-160602, June 2, 2016, Cable Television Laboratories, Inc.

[DPoE MEF] DOCSIS Provisioning of EPON, DPoE Metro Ethernet Forum Specification, DPoE-SP-MEFv2.0-I05-170111, January 11, 2017. Cable Television Laboratories, Inc.

[DPoE MULPI] DPoE MAC and Upper Layer Protocols Interface Specification, DPoE-SP-MULPIv2.0-I11-170111, January 11, 2017. Cable Television Laboratories, Inc.

[DPoE OAM] DPoE OAM Extensions Specification, DPoE-SP-OAMv2.0-I10-170111, January 11, 2017, Cable Television Laboratories, Inc.

[DPoE OSSI] DPoE Operations and Support System Interface Specification, DPoE-SP-OSSIv2.0-I10-170111, January 11, 2017, Cable Television Laboratories, Inc.

[ETSI NFV] Network Functions Virtualisation (NFV); Architectural Framework. ETSI GS NFV 002 V1.1.1 (2013-10). European Telecommunications Standards Institute Network Functions Virtualisation (NFV) Industry Specification Group (ISG). 2013. http://www.etsi.org/deliver/etsi_gs/nfv/001_099/002/01.01.01_60/gs_nfv002v010101p.pdf

[Fielding 2000] Architectural Styles and the Design of Network-based Software Architectures, Roy Thomas Fielding, 2000, https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm

[IEEE 802.3] IEEE Std 802.3-2015 - IEEE Standard for Ethernet, September 2015.


[IEEE 1904.2] IEEE draft 1904.2 - Standard for Management Channel for Customer-Premises Equipment Connected to Ethernet-based Subscriber Access Networks

[ISO 10040] ISO/IEC 10040:1998 Information technology -- Open Systems Interconnection -- Systems management overview, October 1998. International Organization for Standardization.

[JOIN AUTH YANG] docsis-mcast-join-authorization.yang, http://www.cablelabs.com/YANG/DOCSIS/

[L2VPN CFG YANG] docsis-l2vpn-cfg.yang, http://www.cablelabs.com/YANG/DOCSIS/

[L2VPN] Layer 2 Virtual Private Networks Specification, CM-SP-L2VPN-I15-150528, May 28, 2015, Cable Television Laboratories, Inc.

[MIB YANG] docsis-snmp-mib.yang, http://www.cablelabs.com/YANG/DOCSIS/

[MULPIv3.1] MAC and Upper Layer Protocols Specification, CM-SP-MULPIv3.1-I10-170111, January 11, 2017, Cable Television Laboratories, Inc.

[Remote PHY] Remote PHY Specification, CM-SP-R-PHY-I06-170111, January 11, 2017, Cable Television Laboratories, Inc.

[RFC 4001] IETF RFC 4001, Textual Conventions for Internet Network Addresses, February 2005

[RFC 4448] IETF RFC 4448, Encapsulation Methods for Transport of Ethernet over MPLS Networks, April 2006.

[RFC 5277] IETF RFC 5277, NETCONF Event Notifications, July 2008.

[RFC 6020] IETF RFC 6020, YANG - A Data Modeling Language for the Network Configuration Protocol (NETCONF), October 2010.

[RFC 6241] IETF RFC 6241, Network Configuration Protocol (NETCONF), June 2011.

[RFC 8040] IETF RFC 8040, RESTCONF protocol, January 2017.

[SDN TR] Software Defined Networking Architecture for Cable Access Networks Technical Report, VNE-TR-SDN-ARCH-V01-150625, June 25, 2015, Cable Television Laboratories, Inc.

[SF YANG] docsis-service-flow.yang, http://www.cablelabs.com/YANG/DOCSIS/VPI

[SSE] World Wide Web Consortium (W3C), Server-Sent Events, W3C Working Draft, 22 December 2009, https://www.w3.org/TR/2009/WD-eventsource-20091222/.

[SUB MGT YANG] docsis-subscriber-mgmt.yang, http://www.cablelabs.com/YANG/DOCSIS/VPI

[VPI TYPES YANG] cl-vpi-types.yang, http://www.cablelabs.com/YANG/DOCSIS/VPI

[VPI YANG] vpi.yang, http://www.cablelabs.com/YANG/DOCSIS/VPI

2.1 Reference Acquisition

• Cable Television Laboratories, Inc., 858 Coal Creek Circle, Louisville, CO 80027; Phone +1-303-661-9100; Fax +1-303-661-9199; http://www.cablelabs.com

• Internet Engineering Task Force (IETF) Secretariat, 46000 Center Oak Plaza, Sterling, VA 20166, Phone +1-571-434-3500, Fax +1-571-434-3535, http://www.ietf.org


• European Telecommunications Standards Institute (ETSI), 650 Route des Lucioles, Sophia Antipolis, 06560, Valbonne, FRANCE, Phone +33 (0)4 92 94 42 00, Fax +33 (0)4 93 65 47 16 (ETSI Reception) http://www.etsi.org/standards-search#Pre-defined Collections


3 TERMS AND DEFINITIONS

This document uses the following terms:

Converged Interconnect Network

The network (generally gigabit Ethernet) that connects a CCAP Core in a headend to a Remote PHY Device (RPD) (at a node location) for DOCSIS networks. The same network is also expected to connect a Remote OLT device (at a node location) to the virtualized DPoE System function in the headend.

DPoE System Controller / DPoE System VNF

The DPoE System Controller is composed of one or more VNFs supporting the vCMTS and vCM functionality. This is essentially all the control and management plane functionality of an Integrated DPoE System.

vCM

Implements the vCM functionality as defined in the DPoE v2.0 specifications.

vCMTS

Implements a RESTCONF interface to expose the underlying DPoE network to the SDN controller and supports the Controller-to-Network YANG data model described in Section 8.3.10. The vCMTS also implements other CCAP functionality as defined in the DPoE v2.0 specifications.

Optical Distribution Network

The physical tree of optical fiber and optical devices that distribute signals from an Optical Line Terminal to users. Dual-rooted trees are recognized for some applications such as for protection.

Optical Line Terminal

A device that terminates the root of one Optical Distribution Network. An Optical Line Terminal serves as the service provider endpoint of a passive optical network. It provides two main functions:

1. perform conversion between the electrical signals used by the service provider's equipment and the fiber optic signals used by the passive optical network.

2. coordinate the multiplexing between the conversion devices on the other end of that network (called either optical network terminals or optical network units).

Also referred to as Optical Line Termination.

Optical Network Unit

A CPE that converts optical signals from the EPON interface to electrical signals on the customer-facing ports (i.e., UNIs).


4 ABBREVIATIONS AND ACRONYMS

This document uses the following abbreviations:

API	Application Programming Interface
BSS	Business Support System
CCAP™	Converged Cable Access Platform
CIN	Converged Interconnect Network
DPoE™	DOCSIS Provisioning of EPON
EPON	Ethernet Passive Optical Network
FCAPS	Fault, Configuration, Accounting, Performance, Security
GPON	Gigabit Passive Optical Network
HFC	Hybrid Fiber-Coaxial
IP	Internet Protocol
IPHSD	Internet Protocol High-Speed Data
JSON	JavaScript Object Notation
L2VPN	Layer-2 Virtual Private Network
LLID	Logical Link Identifier
MANO	Management & Orchestration
MSO	Multiple System Operator
NETCONF	Network Configuration Protocol
NS	Network Service
NFV	Network Functions Virtualization
NFVI	NFV Infrastructure
NFVO	NFV Orchestrator
ODN	Optical Distribution Network
OLT	Optical Line Terminal / Optical Line Termination
ONOS	Open Networking Operating System
ONU	Optical Network Unit
OSS	Operations Support System
PON	Passive Optical Networks
SDN	Software Defined Networking
SNMP	Simple Network Management Protocol
UNI	User Network Interface
URL	Uniform Resource Locator
vCM	Virtual Cable Modem
VIM	Virtual Infrastructure Manager
VNF	Virtual Network Function
VPI	Virtual Provisioning Interface
XML	eXtensible Markup Language


5 VPI ARCHITECTURE

In an SDN-enabled access network architecture, a software defined network controller (Control Layer) is responsible for managing the underlying network (Network Layer). The SDN Controller has the means to access and communicate with underlying network devices so that applications and services (Service Layer) do not have to be aware of how devices are configured, or where they are in the network topology.

The goal of the VPI architecture is to define data models and Application Programming Interfaces (APIs) enabling and facilitating programmatic configuration, control, and monitoring of service on the access network equipment. Specifically, the project enables dynamic configuration and monitoring of CCAP, cable modems, DPoE System components, OLTs, and ONUs. Defining standard APIs and data models enables the operators and their software application developers to create software utilities and suites of applications that can provision, configure, and monitor various services on any type of access network. This communication happens through an SDN Controller using a specified set of VPI transactions exchanging information defined by a specified information model and implemented by a specified set of data models.

Common APIs for multiple access networks enable an application to be built once and be able to run on all the types of access networks. This simplifies the service provider’s provisioning and configuration systems compared to having to run separate tools for each type of access network.

APIs providing access to network capabilities, such as setting and changing service flows between the CCAP (or DPoE System) and cable modem (or ONU), enable dynamic configuration of the DOCSIS (or EPON) network without having to create new configuration files and interrupt the customer’s service by forcing modems to reboot and reload a new configuration.
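As an illustration of such an API call, the sketch below builds a RESTCONF-style JSON payload that changes one service flow's rate in place. The module and leaf names are invented for illustration and are not taken from the published VPI YANG modules.

```python
import json

def build_flow_update(cm_mac, flow_id, max_rate_bps):
    """Build a RESTCONF-style payload that raises one service flow's
    maximum sustained rate without a new modem configuration file.
    Module and leaf names below are illustrative, not normative."""
    return {
        "docsis-service-flow:service-flow": [{
            "cm-mac": cm_mac,
            "flow-id": flow_id,
            "max-sustained-rate": max_rate_bps,
        }]
    }

# A speed-change request for one modem, serialized for an HTTP PATCH body.
payload = build_flow_update("00:11:22:33:44:55", 3, 100_000_000)
body = json.dumps(payload)
```

Because the change is applied through the controller rather than through a new configuration file, the modem keeps forwarding traffic while the flow parameters change.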

Figure 3 illustrates the cable telecommunications systems service provider architecture with the interfaces described above, highlighting the main interfaces which are the focus of the VPI architecture.

Figure 3 - High-Level Service Provider Architecture with VPI architecture Focus

5.1 Architectural Assumptions and Agreements

5.1.1 Assumptions

Assumptions made when defining the VPI architecture are listed below:

• The architecture is limited to a single service provider’s administrative domain and is not applied across service provider domains.


• The Controller Layer learns and maintains topology of the Network Layer to distribute the required configurations to physical (or virtual) network devices.

• The service provider maintains a data store which provides an association between the user and the access network device (e.g., CM) to which the user is connected. This data store is accessible to and used by both the service application and the SDN Controller.

• The service provider maintains a data store which provides an association between the products offered to customers and services and flows comprising the products.

• The service provider provides its customers and potential customers with the means to select and order products, which in turn result in a request to initiate services to be delivered over the service provider’s access network.

• A Service Level Agreement (SLA) exists that defines the level of service established between the service provider and its customers and for which the customer compensates the service provider.

• The service provider maintains a data store for configuring, monitoring, and maintaining the network.

5.1.1.1 Service Provisioning System Assumptions

Some system components necessary for the provisioning and delivery of service are not the focus of the VPI architecture but are assumed to exist and be accessible to system components as needed. The list of out-of-scope components includes but is not necessarily limited to the following:

• Web Portal

• OSS/BSS

• TFTP, DHCP, DNS and other provisioning servers

• Service orchestrator (coordinating both VNFs and legacy PNFs)

• NFVO, VNFM, VIM to coordinate service instantiation in the virtual domain

• Product-to-Service Mapping Applications in the Service Layer and/or an orchestration function are not defined by VPI. The mapping of the Product ID to one or more Service IDs is assumed to be handled by the operator. Each Service ID corresponds to a service application listed in the Service Catalog. The Service Catalog can link a service with a Service ID.

• Subscriber-to-CPE Mapping Applications that will map the subscriber who ordered the product with the CPE serving the subscriber are out of scope; the assumption is this is handled by the operator.

• Connection Management Applications that manage connecting all of the multiple points that form a layer 2 VPN connection for L2VPN service are out of scope. Scope of the VPI includes provisioning devices in the operator’s network to be capable of participating in L2VPN service, but not connecting devices end-to-end.
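The operator-handled Product-to-Service mapping assumed above can be sketched as a pair of lookups. All IDs and attribute names here are invented, since VPI assumes only that such mappings exist.

```python
# Illustrative operator-maintained mappings; the IDs and attribute
# names are invented for this sketch.
CATALOG = {
    "PROD-GOLD": ["SVC-IPHSD-100", "SVC-VOICE"],
}
SERVICES = {
    "SVC-IPHSD-100": {"down_mbps": 100, "up_mbps": 10},
    "SVC-VOICE": {"priority": 5},
}

def services_for_product(product_id):
    """Resolve a Product ID into its Service IDs and per-service
    attributes, as the Service Catalog lookup is assumed to do."""
    return [{"service_id": sid, **SERVICES[sid]}
            for sid in CATALOG.get(product_id, [])]
```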

5.1.2 Working Agreements

The VPI architecture, interfaces, and information models are based on the architectural agreements developed by the VPI working group and listed below:

• The VPI architecture has three layers: the Service Layer, the Controller Layer, and the Network Layer.

• Each layer in the architecture provides an abstraction to the layer above it and hides the details of its implementation.

• Each layer is responsible for optimizing the resources in its scope.

• The architecture can use a single SDN Controller, a multiple-SDN-controller flat architecture, or a hierarchical SDN Controller architecture, where a Master SDN Controller orchestrates the operations of the individual domain controllers. In the case where a flat controller architecture is used, the controllers need to use a protocol to maintain a consistent view of the network topology and flow states. This protocol is not defined by the VPI architecture.

• The architecture could be implemented with real or virtual components. The description of specific virtual implementation of a specific network function is outside the scope of this report.

• The SDN Controller configures the tunnels between infrastructure devices and the servers that host the VNFs. The SDN Controller programs the physical and virtual switches to run traffic through the chain of Virtual Network Functions in order to deliver the required services. It also connects traffic from the CCAP/DPoE system over the switches to the desired destinations.

• The SDN Controller provisions all of the service flows and flow-specific attributes into every resource, virtual or physical, whether directed by the service applications or by signaling or by self-driven feedback optimization.

• The controller receives service attributes from an application, service orchestration function, or other function in the Service Layer.

• The controller creates identifications for required flows and maintains the states of the required flows to implement the service.

• The controller optimizes network resources according to changing service demands, network traffic, and the state of the network.

The following features and capabilities are out of scope for the initial phase of the VPI architecture definition:

• Dynamic detection of user mobility

• Resource and service orchestration as defined by ETSI MANO

• Network virtualization using network hypervisors

5.2 High-Level Architecture

Service provider service delivery platforms leveraging an SDN Controller can be modeled as a system of applications exchanging control and user data through interfaces as depicted in Figure 4. A user interacts with the VPI architecture through a web portal, which receives the user's commands, passes the commands to a service application, receives responses from service applications, and returns responses to the user. The user may be a representative of the service provider, or the user could be the consumer of the service. Messages between the web portal and the service applications pass through an interface shown as the User-to-Service interface in Figure 4. The web portal exercises the User-to-Service interface with software calls and data payload specific to the user's request.

The application in the Service Layer initiates provisioning of the service requested by the user across the service provider's access network. The application should not have to know what type of underlying access network delivers the service to the user or how to find the user's device on the network: it just requests the service and the Controller Layer is responsible for provisioning the service in the Network Layer. The Controller Layer abstracts the details of the access network from the Service Layer, by learning and maintaining the network topology and exposing to the Service Layer an interface allowing it to configure, initiate, manage, or monitor service to the customer. The interface between the service applications and the network controller framework is abstracted as the Service-to-Controller Interface in Figure 4.

The Controller Layer uses what it determines to be the most appropriate network path to communicate with the customer device(s) and the intermediate network devices that deliver service to the customer. The controller also provides responses to the service application, including success or failure reports and responses from the user. Messages between the Controller Layer and the Network Layer are exchanged through the interface referred to in Figure 4 as the Controller-to-Network Interface.


Figure 4 - High-Level Service Provider Architecture

5.2.1 User-to-Service Interface

The User-to-Service interface is out of scope for the VPI. This interface is defined by the service provider, the service application developer, third-party integration software developer, or a combination of these. Ultimately the service requested by the user is provided by service applications and the underlying access network. The knowledge of and control over the access network is left to the lower layers.

The User-to-Service information model, which defines information exchanged between the User Layer and the Service Layer, is proprietary to the service provider and is out of scope for this report.

5.2.2 Service Layer

The Service Layer is a conceptual architectural component responsible for receiving customer requests for, initiating configuration of, originating, and maintaining information about services delivered to the service provider’s customers over the service provider’s access network. The Service Layer exchanges information with the service provider and/or consumers through the User-to-Service interface.

The Service Layer exchanges information with the Controller Layer through the Service-to-Controller Interface, and information exchanged between the Service Layer and the Controller Layer is defined by the Service-to-Controller information model. Each component of the Service Layer is listed below with a description of its characteristics and functions.

An end user requests a set of services from the service provider in the form of a product request. A product consists of multiple services, and each service is passed from the service application in the Service Layer to the Controller Layer which provisions the service. When provisioned, service is delivered from the service provider to the customer by means of data flows. Each flow is uniquely identified and has a set of attributes describing specifics about how the service is delivered to the end user. Flow identifiers and attributes are used to establish, control, track, and monitor delivery of service to the end user. The service application does not need to know details about services and flows that comprise the ordered product. It can simply pass the product identifier to the Controller Layer which can then look up the services associated with the requested product and the flows associated with each service. Alternatively, the service application can pass all the required flow characteristics for the requested product to the Controller Layer, making it unnecessary for the Controller Layer to look up service and flow parameters. The Service Layer receives and saves status information about flows for the products and services within its scope.

Coordination of service provisioning, delivery, and assurance within the service provider’s network and, where applicable, between the service provider’s network and other service providers’ networks is managed by a service orchestration function in the Service Layer as noted in Section 1.1.

Service layer responsibilities include the functions listed below. The VPI architecture is defined to enable and support each of these functions:

• Expose available services to the end user, provide the user the means to select a service and specify service characteristics

• Enable creation, storage, and retrieval of information about each available service and associated flows

• Pass application data and control information to and from the Controller Layer

• Pass management information to and receive management information from the Operations Support System

• In addition to the Service Layer features above that are supported by VPI, the Service Layer includes other functions such as those listed below:

• Billing systems

• Customer support systems

• Inventory management systems

• Workforce management systems

5.2.2.1 Service Catalog

The Service Catalog is a record of products and services made available to customers by the service provider. Each product an end user consumes can consist of a set of services, and each service is stored with a set of attributes. The product-service catalog model is unique to each service provider and perhaps even different within a single service provider across geographical locations, business units, or other logical divisions. No assumption is made about how the Service Catalog is implemented, and although the service catalog model is generally out of scope for the VPI architecture, the project makes assumptions about fundamental information associated with the Service Catalog, such as the existence of a Service ID and the existence of attributes for each service.

5.2.2.2 Customer Information Datastore

The Customer Information Datastore maintains records about each of the service provider's customers and the service(s) each customer subscribes to. The Customer Information Datastore is unique to each service provider. No assumption is made about how the datastore is implemented, and although it is generally out of scope for the VPI architecture, the VPI architecture makes assumptions about fundamental information associated with the Customer Information Datastore. Assumptions include the existence of an identifier for the customer, the association between a customer and one or more service(s), the association between a customer and one or more customer premises equipment (CPE) devices, existence of attributes for each service subscribed to by a customer, and existence of customer transactions for each subscribed service.
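A minimal sketch of the assumed associations, with invented field names:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    """One Customer Information Datastore entry. Field names are
    illustrative; VPI assumes only that these associations exist."""
    customer_id: str
    cpe_macs: list = field(default_factory=list)     # customer -> CPE devices
    service_ids: list = field(default_factory=list)  # customer -> services

def cpe_for_customer(store, customer_id):
    """Look up the CPE serving a customer, the association both the
    service application and the SDN Controller are assumed to query."""
    rec = store.get(customer_id)
    return rec.cpe_macs if rec else []

store = {
    "CUST-42": CustomerRecord("CUST-42",
                              cpe_macs=["00:11:22:33:44:55"],
                              service_ids=["SVC-IPHSD-100"]),
}
```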

5.2.2.3 Operations Support System (OSS)

The service provider's OSS provides a variety of functions supporting delivery of services to the customer. OSS includes device configuration; service configuration; device, network, service, and service-use status monitoring and reporting; and other applications implemented by the operator enabling and guaranteeing delivery of service to the customer according to the customer's agreement with the service provider.


Another view of the role of OSS is fault management, i.e., monitoring and reporting of CM/ONU connectivity to the service provider and to other network elements. It also includes configuration management, accounting management, performance management, and security management.

5.2.2.4 IP High Speed Data Application

The IP High Speed Data (IPHSD) application is a software entity responsible for delivering high-speed data service to an end customer behind a CM or ONU. Delivery of IPHSD service to a CM or ONU requires a business agreement between the service provider and the customer, connectivity between the service provider's network and the customer's CM/ONU, and configuration of service/device parameters in the CM/ONU. The configured parameters enable and enforce service options and details defined by the business agreement. This application is responsible for initiating the provisioning of services associated with the customer's IPHSD product request, by communicating with the SDN Controller.

The service provider might offer several IPHSD service options, varying by characteristics such as maximum upstream and downstream data rates, minimum upstream and downstream data rates, total bytes per month, and service quality or priority classes.

5.2.2.5 Layer 2 VPN Application

The Layer 2 VPN (L2VPN) application is a software entity responsible for establishing a private network connecting two or more customer endpoints using OSI layer-2 addressing and data forwarding protocols and messaging. Delivery of L2VPN service to a CM or ONU requires a business agreement between the service provider and the customer, connectivity between the service provider's network and the customer's CM/ONU, and configuration of device/service parameters in the CM/ONU. The configured parameters enable and enforce service options and details defined by the business agreement. This application is responsible for initiating the provisioning of services associated with the customer's L2VPN product request, by communicating with the SDN Controller.

The service provider might offer several L2VPN service options, varying by characteristics such as maximum upstream and downstream data rates, minimum upstream and downstream data rates, total bytes per month, type of connectivity (point-to-point, point-to-multipoint, or multipoint-to-multipoint), and quality of service characteristics.

5.2.2.6 Service-to-Controller Interface

Service providers may offer many different services delivered over the access network to their customers. Some examples include high-speed IP data service, video service, telephony/voice service, business services, gaming service, security service, and a variety of cloud-based services. Devices responsible for delivering the service need to be configured or provisioned to be able to successfully deliver the service ordered by the customer. Therefore, the Service-to-Controller Interface must allow and support delivery of the service by enabling configuration of networked equipment.

Two of the primary objectives of the VPI architecture are to define the messaging interface and the models for the data to be exchanged across the Service-to-Controller Interface.

The Service-to-Controller Interface exposes to service applications the underlying access network capabilities and translates applications' requests and parameters into appropriate commands to the network controller framework.

The VPI architecture Service-to-Controller interface is modeled by a YANG data model [VPI YANG].

5.2.3 Controller Layer

The Controller Layer is responsible for configuration and management of the network. In the VPI architecture, the Controller Layer is implemented as a Software Defined Network (SDN) Controller which exposes application programming interfaces (API) to program the underlying network. The Controller Layer receives requests for service from the Service Layer, interprets the request, obtains additional information as needed from system data stores, and formulates configuration or management messages for network elements. The Controller layer issues configuration or management commands to network elements as needed through the Controller-to-Network Interface to enable the delivery of requested service within the constraints of defined policy and system conditions.


It receives requests and responses from network elements, interprets the responses or requests, forwards information from networked elements to the Service Layer, and stores information, including state and status information, in system datastores. The Controller Layer performs translation between the Service Layer product and service definitions and Network Layer device flow definitions.

The Controller Layer maintains information about elements connected to the network within its scope of control and the connectivity between them (the network topology). The Controller Layer also maintains information about the mapping of subscribers to the appropriate physical devices and the location of those devices.

The Controller is also responsible for the continuing maintenance of all services under its purview in light of changing service demands, network traffic, failures, and the like, including any necessary logging and reporting, especially when it is unable to satisfy all service expectations placed upon it. Monitoring and maintenance functions of the Controller may be defined by another forum such as Open Network Operating System (ONOS) or Open Networking Foundation (ONF), or by the Controller software provider. Based on a restore-on-reboot configuration attribute in the service definition, the Controller will re-establish some services without Service Layer intervention and will not automatically re-establish others. This signaling mechanism allows the Controller to quickly re-establish a service such as IPHSD, which may be acceptable to interrupt and resume without the need for the user to re-initiate it, while not attempting to re-establish a service such as a voice call, which should be re-initiated by the user or application when interrupted.
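The restore-on-reboot behavior described above reduces to a per-service flag check. A minimal sketch, with an invented attribute name:

```python
def should_restore(service):
    """Return True when the Controller should re-establish this service
    after an interruption without Service Layer intervention. The
    attribute name 'restore-on-reboot' is illustrative."""
    return bool(service.get("restore-on-reboot", False))

# IPHSD resumes silently; a voice call waits for the user to redial.
iphsd = {"service-id": "SVC-IPHSD-100", "restore-on-reboot": True}
voice_call = {"service-id": "SVC-VOICE-CALL", "restore-on-reboot": False}
```

Defaulting to no restore when the attribute is absent is a design choice for this sketch: it keeps the Controller from silently resuming a service whose definition never declared the behavior.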

5.2.3.1 Controller-to-Network Interface

The Controller-to-Network Interface shown in Figure 3 is actually a collection of interfaces and is modeled by a set of YANG modules. The Controller Layer provides an interface to one or more access networks. For example, a service provider may deliver services over the HFC network via the DOCSIS infrastructure, over an optical network such as an Ethernet Passive Optical Network (EPON), or over a wireless network. Therefore, the controller may communicate with a DOCSIS CCAP, a DPoE System, or a wireless access network transmitter and receiver.

The Controller Layer implements the protocols necessary to allow it to interface with the various access network devices. The VPI architecture defines the information model, YANG data models, and programming interface to standardize the Controller-to-Network Interface for the DOCSIS and DPoE Network elements so data exchange and functionality supported between the controller and each of the access networks will be as common as possible.

5.2.4 Network Layer

The Network Layer is the set of all network endpoints and intermediate network devices which connect the end user to the operator's network. The operator delivers services over these network elements. For the cable operator access network, this includes DOCSIS CCAPs, CMs, DPoE Systems, and ONUs.

The CCAP and DPoE System will expose a RESTCONF interface to the SDN Controller for service configuration and status monitoring. The CCAP RESTCONF interface will proxy for the CMs connected to it, and the DPoE System RESTCONF interface will proxy for the vCMs connected to it.
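RFC 8040 fixes the shape of the datastore resource URLs a controller would use against such a RESTCONF interface. The sketch below forms one, assuming the default /restconf root and using hypothetical module and list names:

```python
def restconf_data_url(host, module, resource, keys=None):
    """Form an RFC 8040 datastore resource URL of the shape
    https://<host>/restconf/data/<module>:<resource>[=<key>[,<key>...]]
    The module and resource names used below are hypothetical."""
    url = "https://{}/restconf/data/{}:{}".format(host, module, resource)
    if keys:
        # RFC 8040 addresses a list entry by appending its key values.
        url += "=" + ",".join(str(v) for v in keys)
    return url

# e.g., addressing one service-flow list entry on a CCAP by its key
flow_url = restconf_data_url("ccap1.example.net",
                             "docsis-service-flow", "service-flow", keys=[3])
```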

VPI requires DPoE Systems and DOCSIS CCAPs to support the same protocols and interfaces. However, DPoE is a virtual DOCSIS provisioning interface for EPON and requires additional considerations with regard to VPI. See Appendix IV for a DPoE overview.

5.3 System Operation

The goal of the VPI architecture is to standardize the Service-to-Controller Interface and the Controller-to-Network Interface for provisioning services delivered over a cable operator’s access network, when a customer orders a product offered by the service provider, changes the attributes or features of a previously-ordered product, or discontinues the use of the product.

The VPI architecture description above defines the high-level framework enabling the delivery of a service or set of services to the customer over an access network. This section provides additional context by introducing the concept of a product, its relationship to services and flows, and an overview of the provisioning process enabled by the VPI architecture. More detail on the specific actions and messages for each type of service is provided in the Use Cases section.


A product is provided to the customer by an application or set of applications running in the operator’s network and is understood to be composed of one or more individual services. A service is a description of the parameters of the SLA between the operator and the customer. Each service is implemented through data communication channels known as flows, which are the physical implementation of the service parameters on the network. There are typically at least two flows: one or more flows in the direction from the network to the customer premises equipment (downstream) and one or more from the CPE/CM/ONU to the operator network (upstream). Therefore, each service has two or more defined flows associated with it. Each flow may have one or more classification definitions describing how system components are to handle the packet flows for that service. Some types of service will also include parameters for the endpoints of that service, which could include encapsulation parameters the endpoint will need.
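The service-flow-classifier relationship described above can be sketched as a nested structure; all identifiers here are invented.

```python
# A service realized as one downstream and one upstream flow, each with
# classifiers steering packets onto it. All identifiers are invented.
service = {
    "service-id": "SVC-IPHSD-100",
    "flows": [
        {"flow-id": 1, "direction": "downstream",
         "classifiers": [{"dest-ip": "203.0.113.10/32"}]},
        {"flow-id": 2, "direction": "upstream",
         "classifiers": [{"src-ip": "203.0.113.10/32"}]},
    ],
}

def flows_by_direction(svc, direction):
    """Select the flows of a service running in the given direction."""
    return [f for f in svc["flows"] if f["direction"] == direction]
```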

Establishing the flows entails configuring the network equipment to forward data packets so that the service is delivered to the customer in accordance with the service guarantees and characteristics associated with the ordered product.

Configuring network equipment to deliver the product consistent with the business relationship established between the service provider and the customer is referred to as service provisioning.

Many separate actions occur in order to provision service to a customer, and each action taken results in an updated state of the network. It is critical for the system to monitor and manage the actions and state of the system to ensure proper delivery of the product to the customer.

The VPI architecture focuses on interfaces for provisioning network elements for delivery of two services, IPHSD and L2VPN, over two access network architectures: HFC/DOCSIS, and PON/DOCSIS Provisioning of EPON (DPoE). A set of use cases is defined to describe actions and information exchange required for the provisioning of each service. The use case descriptions validate the information model for parameters passed between the layers of the architecture and the protocols defined for the parameter exchange.

The Service Layer, Service-to-Controller interface, Controller Layer, and service provisioning processes for DOCSIS and DPoE systems are largely common, but there are some distinctions in the parameters for the access network technology types in the Controller-to-Network interface and Network Layer. Additionally, the provisioning of service for DPoE systems requires some additional parameters and message exchanges that differ from DOCSIS over an HFC system. These differences are highlighted in the use case descriptions in Section 6.

The Controller-to-Network interface terminates on the CCAP for the DOCSIS network and it terminates on a virtual cable modem (vCM) on the DPoE System when the service is delivered over a fiber network using DOCSIS Provisioning of EPON (DPoE) [DPoE Arch].

Cable modems receive their configuration and channel assignment, establishing connectivity with the CCAP using processes defined in CableLabs DOCSIS specifications. Once connectivity is established between the CCAP and CM, in a VPI environment services subscribed to by the customer are provisioned using the framework and mechanisms described in this report.

The vCM is defined as a virtual function, separate and distinct from a DOCSIS-compliant cable modem, that provides the service provisioning interface for OLTs in a DPoE System and provides provisioning and management for the ONUs. Details of the EPON architecture and the migration from DPoE to native EPON provisioning are provided in Section 9.

5.3.1 Service Provisioning State Management

The service provisioning architecture partitions the service state information across the Service, Controller, and Network layers. Here we consider state information to be the identity, status, and statistics associated with the service implementation. In particular, we consider the state of the flows that are the fundamental components of the Service, as defined in the VPI information model in Section 8. Since the Network Layer implements the actual service delivery and by definition contains the detailed flow state specific to the particular access network technology, the tradeoffs discussed here are primarily between the Service Layer and Controller Layer. Ultimately, this decision is up to the operator deploying the provisioning architecture, but this section discusses some of the tradeoffs to be considered and provides recommendations that can be applied to many deployment scenarios.


As described previously, the purpose of the SDN Controller in the VPI architecture is to provide an abstract view of the access network to service provisioning applications to help facilitate service velocity and automation. This network abstraction provides advantages including those listed below:

• A common network view independent of the access network technology (DOCSIS, DPoE, or Wireless)

• Network topology hiding: the SDN Controller takes care of mapping the subscriber identification (e.g., CM or ONU MAC address) to the CCAP or DPoE System that is providing service to the subscriber device.

• Network device protocol independence: the controller maps the Service-to-Controller Interface operations into the protocols supported by the Network Layer devices. The primary examples are RESTCONF, NETCONF, SNMP, etc.

Achieving these abstractions is one of the overriding goals in determining how to distribute the flow state across the layers of the VPI provisioning architecture. At the same time, the controller must also provide detailed service and network state information to enable OSS applications to effectively monitor the service status. Another consideration is the impact on service recovery following a failure of the CM/ONU, the CCAP/DPoE System, the SDN Controller, or the service application.

Two main approaches for flow state management were evaluated for this report:

1. In one approach, all service and flow state is maintained in the Service Layer, such as by an application (or an orchestration function). This approach makes the service application (or service orchestrator) responsible for mapping the service definition into its constituent flows and attributes and consequently restricts the SDN Controller to predominantly a protocol converter function, along with topology resolution.

2. In an alternative approach, the SDN Controller (Controller Layer) is responsible for maintaining the identity and state of all flows associated with each service instance. The service application need only provide the service ID and subscriber CM/ONU identity to provision the service and the controller need only return the status of the provisioning operation to the service application. The SDN Controller maps that service ID into its constituent flows and attributes (with the help of an MSO-provided database) and configures each flow into the appropriate Network Layer devices. In this case, the SDN Controller must also provide access to lower level Network Layer status and statistics to facilitate appropriate service monitoring and management.

Note that there are minor variations of each approach. Both approaches have their pros and cons.

The advantages of maintaining flow state in the Service Layer are primarily the simplicity of the SDN Controller implementation (it acts primarily as a protocol converter rather than an active flow state manager) and the availability of detailed flow information in the Service Layer for service monitoring and other OSS/BSS functions. The Service Layer also knows the persistence and other service-specific attributes of the flows and can manage them appropriately. Maintaining flow state in the Service Layer also reduces complexity for the SDN Controller: tracking flow state for each flow associated with each customer over a plethora of network devices represents a significant processing load for an SDN Controller and likely increases the development effort for an SDN Controller implementation.

The disadvantages of Service Layer flow state management are as follows. The service application is responsible for creating and managing the flows associated with each instance of the service, and this functionality must be replicated in each of the different service applications. Scalability also impacts the Service Layer when applications are required to maintain flow state for multiple controllers. Each service application becomes responsible for restoring flows after Network Layer failures such as CM/ONU resets and CCAP restarts, and thus must be aware of the state of all the Network Layer elements that implement the service. Coordination of service restoration among the different service applications may be challenging. The additional complexity of the service applications may increase the time to deploy new services.

The advantages of maintaining flow state in the SDN Controller layer are that each service application needs to pass only the service ID, subscriber CM/ONU ID (MAC address), and persistence attributes of the flows (for service recovery purposes) to the controller, which uses an external data source to map the service ID into the necessary flows. Restoration of flows after Network Layer failures such as CM/ONU resets and CCAP restarts is centralized in


the Controller Layer, which can properly sequence the flow restoration by service type (for example, IP video before HSD). Removing this complexity from the service applications helps facilitate rapid deployment of new services.

The main disadvantage of maintaining flow state in the Controller Layer is the additional controller complexity and its impact on scalability. The scalability impact arises because the controller maintains flow state for every application, across all services for all subscribers, which is significantly larger in magnitude than dividing flow state management among the individual applications in the Service Layer. Most SDN Controllers available today are integrated with a clustering technology that allows a single controller to be distributed across multiple servers while maintaining data consistency, which can address many scalability issues. The SDN Controller must also provide lower-level access to the underlying Network Layer flow state and statistics to enable proper service monitoring and other OSS/BSS functions.

As mentioned above, the decision on where to maintain service state is ultimately up to the operator deploying the provisioning architecture.

• Based on the tradeoffs described above, the recommended approach is to centralize flow state and other service state in the Controller Layer, abstracting Network Layer details from the service applications and centralizing flow recovery operations at the SDN Controller on behalf of all services. The scalability problem can be mitigated by implementing a multiple SDN Controller architecture.

• Scalability issues for a single SDN Controller architecture can be mitigated by maintaining flow state at each of the applications in the Service Layer.

5.3.2 Service Provisioning API Detail

There is also a choice around the definition of the APIs between the Service Layer and the Controller Layer. Two options exist for this API definition: the SDN Controller implementation can expose the full service provisioning information model (see Section 8.1) in a single interface, or it can expose only a simple model of the interface (Service ID, subscriber, class of service) and obtain the service-specific information via another interface for database access.
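The two API options can be contrasted with a small sketch of the request payloads each would carry. The field names below are illustrative assumptions, not definitions from [VPI YANG].

```python
def full_model_request(service_id, subscriber_mac, flows):
    """Option 1: the application passes the full service provisioning
    model, including flow definitions, in a single request."""
    return {
        "service-id": service_id,
        "subscriber": {"cm-mac": subscriber_mac},
        "flows": flows,                  # explicit flow definitions
    }

def simple_model_request(service_id, subscriber_mac, class_of_service):
    """Option 2: the application passes only identifiers; the controller
    looks up flow details from an operator database via another interface."""
    return {
        "service-id": service_id,
        "subscriber": {"cm-mac": subscriber_mac},
        "class-of-service": class_of_service,
    }

req_full = full_model_request("IPHSD-gold", "aa:bb:cc:00:00:01",
                              [{"direction": "downstream"},
                               {"direction": "upstream"}])
req_simple = simple_model_request("IPHSD-gold", "aa:bb:cc:00:00:01", "gold")
```

The tradeoff mirrors the state-management discussion in Section 5.3.1: the simple model keeps service applications thin but makes the controller responsible for resolving service detail.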

Figure 5 - Service Provisioning Flow State Management

Figure 5 illustrates a model for implementing flow state maintenance in the Controller layer.

1. The Service layer application receives a request from the Web Portal to enable a new product for a subscriber.

2. The Service application looks up the individual services associated with the requested product from an operator data store, determines the CM/ONU associated with the subscriber, and passes the CPE ID and service IDs to the SDN Controller in a request. In addition, the Service application could send the service parameters as well.

3. The SDN Controller determines to which CCAP/DPoE System the subscriber CPE is registered. The SDN Controller retrieves the flow parameters and other attributes associated with each service (possibly from the same data store used by the application to determine the product-service mapping) or these parameters are passed in directly from the application to the controller.

4. The SDN Controller assigns a FlowID for each needed flow, and establishes the flows on the CCAP or on the DPoE system. The SDN Controller may determine the CM/ONU to CCAP/DPoE System mapping based on prior topology resolution.

5. If the access network is DOCSIS, the CCAP receives the FlowIDs and the parameters to be set up and creates the network flows and returns the status of the operations to the SDN Controller. The CCAP maintains the FlowID to Service Flow mapping for future reference by the SDN Controller flow-related operations (e.g., delete flow).

6. If the access network is EPON, the vCM in the DPoE System receives the FlowIDs and the parameters to be set up, configures the OLT and ONUs with the network flows, and returns the status of the operations to the SDN Controller. The vCM/DPoE System maintain the FlowID to Service Flow/LLID mapping for future reference by the SDN Controller.

The SDN Controller updates its internal Flow state data based on the information returned by the CCAP/DPoE System and responds to the service requests from the Service application with the status of the operation, the FlowIDs created, and possibly additional information.
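The controller-side portion of the steps above (service lookup, topology resolution, FlowID assignment, and flow-state recording) can be sketched as follows. The data-store contents, names, and FlowID format are hypothetical; a real controller would also issue the POST Flow-Entry operations to the CCAP/DPoE System and update state from the responses.

```python
import itertools

# Hypothetical operator data store: service ID -> flow templates
SERVICE_CATALOG = {
    "IPHSD-gold": [{"direction": "downstream", "max-rate": 1_000_000_000},
                   {"direction": "upstream", "max-rate": 50_000_000}],
}
# Hypothetical topology map from prior topology resolution: CM MAC -> element
TOPOLOGY = {"aa:bb:cc:00:00:01": "ccap-denver-1"}

_flow_counter = itertools.count(1)
flow_state = {}  # FlowID -> {element, cm, params}; the controller's state

def provision_service(service_id, cm_mac):
    element = TOPOLOGY[cm_mac]                    # step 3: topology resolution
    created = []
    for template in SERVICE_CATALOG[service_id]:  # step 3: expand into flows
        fid = f"flow-{next(_flow_counter)}"       # step 4: assign a FlowID
        flow_state[fid] = {"element": element, "cm": cm_mac,
                           "params": template}
        # Steps 5/6 would POST each flow to the CCAP or DPoE System here.
        created.append(fid)
    return created

flow_ids = provision_service("IPHSD-gold", "aa:bb:cc:00:00:01")
```

This is a minimal sketch of the controller-layer flow state model, not a definitive implementation.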

Figure 6 presents a sequence-based view of the VPI system operation described above.

Figure 6 - VPI System Operation Overview

5.3.3 Management Considerations

Management in the context of a data communications network includes many functions, sometimes categorized as fault management, configuration management, accounting management, performance management, and security management (FCAPS) [ISO 10040]. As VPI is focused on Service Provisioning, many of the management considerations are out of scope. Management considerations applicable to the VPI architecture are notifications, status reporting, and topology resolution.


5.3.3.1 Notifications

Notifications are messages sent asynchronously from one entity to another in a system to provide status updates, warnings, exception conditions, and potentially other message types. Because these messages are reported or “pushed” by network elements, a monitoring system need not consume bandwidth needlessly polling or querying elements for status even when there is nothing to report. Examples of notification types are SNMP traps sent asynchronously to a trap receiver and events reported to a system log (syslog) server to record the occurrence of a status change or exception condition. Both are examples of notifications implemented by current DOCSIS-compliant CCAPs and CMs.

Notifications are an important tool in the implementation of management functions, providing a means of exchanging messages alerting monitoring systems about changes to the network. This includes asynchronous alerts about events that have a negative impact to the network, which is one of the functions of Fault Management [ISO 10040].

The VPI model defines the use of notifications for the SDN Controller to report changes in service status and changes in flow status. In the VPI model, notifications are also used by the CCAP and DPoE System in the Network Layer to report changes in CPE (CM or ONU) status, and DOCSIS-defined events [CCAP-OSSIv3.1]. A notification reporting a change in CPE status, such as a change in registration status, is a form of Fault Management since it alerts the management system to an event with negative impact to the network.

Section 7.8 describes the asynchronous notification mechanism defined for use by the VPI model and identifies the types of notifications reported by the SDN Controller in the Controller Layer and by the CCAP and DPoE System in the Network Layer.

5.3.3.2 Status Reporting

Status reporting is a fundamental function for each of the management categories defined by the OSI Systems Management model [ISO 10040]: Fault Management, Configuration Management, Accounting Management, Performance Management, and Security Management. Status reporting communicates the current state of components and functions in a system, which is especially important when the state changes.

Elements in the VPI model report status for some functions using asynchronous notifications following a “push” model as described in Sections 5.3.3.1 and 7.8, but for other functions VPI model components report status using the “pull” model, that is, by responding to queries from the system management interface or from the SDN Controller responsible for managing and controlling the Network Layer and keeping the state of the network as described in Sections 5.2.3 and 5.3.1.

Components of the VPI model respond to queries with the status information listed below. The mechanism used to query components for status over the Service to Controller interface is described in Section 7.5, and the method for querying status over the Controller to Network interface is described in Section 7.6.

Table 1 - VPI Model Status Reported in Response to Queries

Querying Component       Reporting Component    Status Attribute
Management application   SDN Controller         List of CCAP and DPoE Systems
Management application   SDN Controller         Status of link with CCAP and DPoE Systems
Management application   SDN Controller         Interface byte counts
Management application   CCAP or DPoE System    Interface byte counts
SDN Controller           CCAP or DPoE System    CM or ONU online/offline status
SDN Controller           CCAP or DPoE System    Flow status
SDN Controller           CCAP or DPoE System    List of registered CMs or ONUs
SDN Controller           CCAP or DPoE System    Status of link with CMs or ONUs
SDN Controller           CCAP or DPoE System    Interface byte counts

In addition to the status attributes returned over the VPI model Service to Controller and Controller to Network interfaces as listed in Table 1, components compliant with CableLabs DOCSIS or DPoE specifications will also respond to management queries as defined in the respective specifications, for example, responses provided by DOCSIS-compliant cable modems to SNMP Get requests as defined in the DOCSIS OSSI specifications.

5.3.3.3 Remote Procedure Calls (RPC)

The management system is responsible for initiating maintenance actions on the network, such as upgrading software in a device. RESTCONF and NETCONF define remote procedure call (RPC) operations to initiate data model-specific operations, and these operations can be used for initiating stateless network maintenance operations. Section 7.9 identifies RPC operations defined by VPI data models [CM SERVICES CFG YANG] [CMTS STATUS YANG].

5.3.4 Topology Resolution

The SDN Controller is responsible for resolving and maintaining network topology. This includes the association of cable modems to CCAPs and the association of ONUs to DPoE systems. The method employed by cable operators to inform the Controller of the CCAPs and DPoE Systems it is responsible for controlling is outside the scope of the VPI. The Controller may be directly configured, e.g., via CLI, DHCP response, configuration file, or other configuration mechanism, with the IP addresses of its CCAPs and DPoE Systems. The Controller may also be provided with the address of a database where it looks up CCAP and DPoE System identifying information.

When the Controller first joins the network it will obtain a list of CMs and ONUs by executing a GET Device List against each of its CCAPs’ and DPoE Systems’ RESTCONF interfaces. This will allow the Controller to associate CMs with CCAPs and ONUs with DPoE Systems.
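A minimal sketch of this topology-resolution step: the Controller issues a GET Device List to each of its elements and builds a device-to-element map. The RESTCONF resource path and response shape below are assumptions for illustration only.

```python
def device_list_path(element_host):
    # Hypothetical RESTCONF resource path for the device list query;
    # the actual path is defined by the element's YANG model.
    return f"https://{element_host}/restconf/data/device-list"

def build_topology(device_lists):
    """device_lists: {element_name: [cm_or_onu_mac, ...]}, i.e., the parsed
    GET Device List responses from each CCAP/DPoE System."""
    topology = {}
    for element, macs in device_lists.items():
        for mac in macs:
            topology[mac] = element   # subscriber device -> serving element
    return topology

topo = build_topology({"ccap-1": ["aa:bb:cc:00:00:01"],
                       "dpoe-1": ["aa:bb:cc:00:00:02"]})
```

With this map, the Controller can later resolve a subscriber CM or ONU MAC address to the CCAP or DPoE System serving it, as Section 5.2 describes.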

The Controller will subscribe to its CCAPs’ and its DPoE Systems’ Device Status Notification service to receive status of connected CMs and ONUs, respectively via Server Sent Events (SSE) on CCAPs’ and DPoE Systems’ RESTCONF interfaces.
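Server-Sent Events arrive as `text/event-stream` blocks of `field: value` lines. The following simplified parser (multi-line `data` fields are not handled) illustrates how a Controller might decode one Device Status Notification event; the event name and JSON payload are illustrative assumptions.

```python
import json

def parse_sse_event(raw):
    """Parse one text/event-stream event block into its fields
    (simplified; a production client would follow the full SSE rules)."""
    fields = {}
    for line in raw.strip().splitlines():
        key, _, value = line.partition(":")
        fields.setdefault(key, "")
        fields[key] += value.lstrip()
    return fields

event_text = ('event: cable-modem-status\n'
              'data: {"cm-mac": "aa:bb:cc:00:00:01", "status": "online"}')
parsed = parse_sse_event(event_text)
status = json.loads(parsed["data"])["status"]
```

A real client would hold the HTTPS connection to the element's RESTCONF event-stream resource open and feed each received event block through a parser like this.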


6 USE CASES

The VPI information models and APIs will enable many services and use cases, but in order to focus development and make rapid progress, the VPI architecture team identified two services to concentrate on for initial information model and API definition. The two initial VPI services are IP High Speed Data service and Layer 2 VPN service. For each service a set of use cases is identified for initial analysis. Information models, protocols, and interface models are derived from the needs identified in these use cases.

Assumptions for each use case are listed below:

• System architecture is as described in Section 5.

• The CM and ONU are physical devices. In the DPoE case, the virtual CM (vCM) logically represents the ONU to the controller (and the back office) as a CM.

• Service provider has established a Service Catalog and list of flows and parameters needed for each offered service and made this information available to the Controller layer via a database interface.

• The SDN Controller learns of the CCAPs, DPoE Systems, cable modems, and ONUs in its scope of control using one or more of several possible topology resolution methods. Refer to Section 5.3.4.

• The SDN Controller will subscribe to CCAP Cable Modem Status event streams to be notified when modems join and leave the network. Refer to Table 3.

• The communication channel between the Service Layer and the customer’s physical CM or vCM (ONU) is established and traffic can be passed between them. This includes communication between CM/ONUs and OSS services in the Service Layer required for CM/ONUs to become operational.

• When power is applied to a CM, it ranges and registers with the CCAP.

• When power is applied to an ONU, it registers with the OLT according to Multipoint Control Protocol [IEEE 802.3] and the DPoE system assigns a single LLID to the ONU [DPoE Arch]. Once the ONU is successfully registered the DPoE system instantiates a vCM.

• The CM or vCM receives a basic initial configuration from a TFTP configuration file composed of Type-Length-Value fields detailing the configuration.

• The Service Layer maintains state of services delivered to the customer. The CCAP or DPoE System does not maintain state of service(s). The Service layer maintains the association between the CM/ONU and the customer.

• Controller maintains state of service(s) implemented on all physical and virtual elements it manages in the network.

• Each service is identified with a Service ID. The Flow ID associated with each of the flows within a service is globally unique for all customers within an SDN Controller’s domain.

• Multiple transactions can occur between the customer’s CM/ONU and the service provider within each service. Each service transaction is monitored and recorded by the Service Layer. Each service transaction requires configuration of a set of service parameters, e.g., QoS classifiers.

• The CCAP and DPoE System are capable of reporting device-level state and logging events.

• References to Network Side Interface (NSI) apply to the CCAP NSI defined in [DOCSIS NSI] for the DOCSIS network or to the DPoE D-Port defined in [DPoE Arch] for the EPON.

• The DPoE Architecture specification refers to the MN interface, which provides the equivalent function of a MEF Internal Network to Network Interface (INNI) or Layer 2 VPN Network Systems Interface (NSI). The MNE (MEF INNI External) interface is defined as a substitute for the MN interface for DPoE [DPoE Arch]. In the VPI architecture the role of the MNE interface is satisfied by the D interface, or the operator network-facing interface of the DPoE System.


6.1 IP High Speed Data Use Cases

System operators defined four key use cases for IP High Speed Data (IPHSD) service: initial CM/ONU bootup for a new customer, dynamic addition of new service, CM/ONU reboot, and deletion of a service. These four use cases represent typical IPHSD service delivery operations well enough that they are used to validate the information and data models defined in Section 8 and the interface protocols described in Section 7.

Note: In the context of IPHSD service, the configuration mainly applies to the CM for DOCSIS networks or to the ONU for EPON as applicable, and not to customer-owned equipment downstream from the CM/ONU such as home routers, personal computers, smartphones, tablets, and IoT devices.

6.1.1 Initial CM/ONU Bootup and Service Creation for New Customer

Initial bootup is the process of a new customer’s CM or ONU transitioning from an inoperative state to a configured and operative state for the first time. In the case of EPON, a vCM will be instantiated when the ONU becomes operative.

This use case includes establishment of physical and logical connectivity between the Service Layer and devices in the Network Layer required to deliver service. It does not include actual transfer of data associated with the service. Physical and logical connectivity beyond the CCAP NSI or DPoE D-Interface is out of scope for VPI.

6.1.1.1 Use Case Objectives

Objectives of the Initial CM/ONU Bootup for New Customer use case are as follows:

• CM/ONU is provisioned with operating parameters enabling delivery of the ordered IPHSD service(s) to the customer. This will be achieved with no human intervention, with the customer requesting the service via a portal application in the Service Layer and one or more Service Layer applications converting the request to RESTful commands to the Controller Layer.

• Controller Layer establishes, saves, and maintains the state of the CM/ONU connectivity and location in the network topology

• Service Layer receives, saves, and maintains

o the state of the customer’s service(s)

o information about the device including association with the customer, means of communicating with it, its state, and types of service it supports.

6.1.1.2 Use Case Operation

1. Preconditions
   a. The Controller Layer has recorded that the CM or the vCM is online.
   b. Service has not been previously established with the customer.

2. The customer completes a service order for IPHSD service with the service provider (e.g., from a web portal). The IPHSD application takes this request and issues a Product POST to the Controller Layer’s RESTCONF server, passing in the product ID for the requested IPHSD product, the corresponding Service parameters, and optionally the flow parameters along with the subscriber information. Refer to Table 2 and [VPI YANG].

3. The Controller Layer receives and stores the Service ID, CM/ONU information, and subscriber information.

4. The Controller Layer returns a success message (200 OK) via RESTCONF to the Service Layer. <TBD: confirm RESTCONF response>

5. The Controller creates a new FlowID as a global handle for each of the flows associated with this Service request.

6. The Controller Layer retrieves from a data store in the Service Layer the parameters and flow information required to instantiate the requested IPHSD product, if it was not already in the request from the Service Layer.

7. The Controller Layer instantiates the required flows for IPHSD service on the CCAP or DPoE System managing the subscriber’s cable modem or ONU by issuing one or more POST Flow-Entry messages as shown in Table 3 with the flow parameters corresponding to the required service flows.
   a. DOCSIS Network


Figure 7 - IPHSD Service - Initial Bootup and Service Creation Use Case Flows: DOCSIS Variation

i) The CCAP receives the Flow-Entry POST message [CM SERVICES CFG YANG] with Flow parameters from the Controller Layer.

ii) The CCAP initiates a DSx message exchange with the customer’s CM to establish one or more downstream service flow(s) and one or more upstream service flow(s) for exchange of configuration, management, and user data traffic with the CM, using the flows with classifiers (or not) according to the Flow information obtained from the Controller Layer. The CCAP generates a Service Flow ID as a reference for each service flow it creates between itself and the CM. A one-to-one correspondence exists between Flows defined by the Service Layer and Service Flows created by the CCAP.

iii) If the Controller Layer subscribes to Flow-Event notifications, the CCAP returns a success response indicating successful creation of the Flow. Alternatively, the Controller Layer can perform a GET Flow-Status on the CCAP, and the CCAP will respond with a success response for Flow creation.


b. DPoE Network

Figure 8 - IPHSD Service - Initial Bootup and Service Creation Use Case Flows: DPoE Variation

i) The DPoE System receives the Product POST RESTCONF message with service flow identification and parameters [CM SERVICES CFG YANG] from the Controller Layer.

ii) The DPoE System translates the YANG parameters to an internal format and provides the parameters to the vCM.

iii) The vCM translates the parameters into eOAM commands.

Note: The following steps address the configuration of the OLT and vCM but, because the internal DPoE interfaces are proprietary, the actual steps could be different (e.g., the vCM could directly configure the ONU and OLT).

(1) The vCM provisions the OLT with the IPHSD downstream Service Flow classifiers and upstream QoS enforcement rules.

(2) The vCM configures the ONU with the upstream Service Flow classification and forwarding rules.

iv) The DPoE System returns a success (200 OK) or failure status code to the Controller Layer.

v) The Controller Layer retrieves service flow status information from the DPoE System using the GET method. Refer to Table 3.

8. The Controller Layer updates flow information in its internal Flow State data store.

9. The Controller Layer notifies the Service Layer about the change in service status. Refer to Table 4, VPI RESTCONF Server Event Streams.


10. The IPHSD application updates its Service State data store.

11. The IPHSD application begins monitoring CM/ONU and CCAP/DPoE System status.

At the conclusion of this use case the following services have been established:

• Customer has IPHSD service, with connectivity established between the customer’s network connected devices and the Internet.

• System operator/service provider has management (fault management, configuration management, accounting management, performance management, and security management) responsibility for the customer’s service and is monitoring CM/ONUs and network for fault and performance problems.

Parameters Exchanged

• Flow parameters associated with IPHSD service including Flow ID and Service Flow ID

• Flow success/failure status

• ONU link status

• Service success/failure status
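The Product POST that starts the use case above (step 2) might be constructed as in the following sketch. The RESTCONF resource path and JSON field names are illustrative assumptions; [VPI YANG] defines the actual data model.

```python
import json

def build_product_post(product_id, cm_mac, subscriber_id):
    """Build a hypothetical Product POST from the Service Layer to the
    Controller Layer's RESTCONF server (step 2 of the use case above)."""
    path = "/restconf/data/vpi:products"   # assumed resource path
    body = {
        "product": {
            "product-id": product_id,
            "subscriber": {"id": subscriber_id, "cm-mac": cm_mac},
        }
    }
    return path, json.dumps(body)

path, payload = build_product_post("IPHSD-gold", "aa:bb:cc:00:00:01",
                                   "sub-42")
decoded = json.loads(payload)
```

On receipt, the Controller Layer would store the identifiers, return 200 OK, and proceed to assign FlowIDs as described in steps 3 through 5.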

6.1.2 CM/ONU Device Reboot - Resume Services

This section describes the use case when the CM/ONU reboots. The basic idea is to restore relevant services at the SDN Controller layer while keeping the same FlowID for the flows associated with the service.

6.1.2.1 Use Case Operation

See the bottom part of Figure 7 and Figure 8 for the reboot use case data flow.

1. Preconditions:
   a. The Initial CM/ONU bootup and service creation use case has been completed as described in Section 6.1.1.
   b. The CM/ONU is already configured to provide IPHSD service to the customer.
   c. The established service is configured with the restore on reboot attribute set to ‘true’.
   d. The SDN Controller is subscribed to applicable event streams as described in Sections 5.3.3.1 and 7.8.

2. The CM or ONU reboots for any number of reasons. It could be that the user power cycled the modem, or the operator chose to reboot the modem for maintenance reasons. The CCAP/DPoE System sends a Cable Modem Status event to subscribed entities including the SDN Controller when the CCAP/DPoE System detects that the customer’s CM/ONU went offline. Refer to Table 4.

3. The SDN Controller keeps track of all the services which have been instantiated on a cable modem and now, because the CM/ONU has gone offline, the SDN Controller updates its Flow/Service State data store and sends a Flow-Event notification to subscribers to notify them of the change in state.

4. The IP HSD Application in the Service layer updates its own state of the network. 5. Once the CM/ONU Reboots and comes back online, the CCAP/DPoE System sends the Cable Modem Status

event to subscribers to notify them that the customer’s CM is back online. 6. Since the CM/ONU has rebooted, ranged, and registered and no RESTCONF messages have come from the

application to make any changes to the service and since the restore on reboot attribute is true, the SDN Controller re-instantiates services that were running on the modem prior to the reboot. The Controller Layer maintains the same FlowID as a global handle to the flows associated with the IP HSD service that it previously established for this CM. To re-instantiate services to the CM/ONU the Controller again issues the Create Service Flow POST method to the CCAP/DPoE System with the previously-configured parameters for the services assigned to that CM/ONU. From this point re-establishment of service closely follows the Service Creation use case described in Section 6.1.1.

7. The Controller Layer sends a Create Service Flow RESTCONF POST message to the CCAP or DPoE System with parameters re-initiating the same flows as previously established for this CM/ONU.

a. DOCSIS Network: steps are the same as Section 6.1.1, Initial CM/ONU Bootup and Service Creation for New Customer, step 7a.

b. DPoE Network: steps are the same as Section 6.1.1, Initial CM/ONU Bootup and Service Creation for New Customer, step 7b.

See the bottom part of Figure 8 for the reboot use case data flow.

Page 38: Virtualization and Network Evolution Virtual Provisioning ...

VNE-TR-VPI-V01-170424 Virtualization and Network Evolution

38 CableLabs 04/24/17

8. When it re-establishes flows with the CCAP or DPoE System for the CM or ONU that came back online, the Controller layer updates the Flow Status and Service Status data stores. The Controller notifies the IP HSD application of the change in state by sending a Flow Status event and a Service Status event to subscribers, indicating that Flows and Service are active for the CM/ONU.
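The restore-on-reboot behavior in steps 2 through 8 can be sketched as a small state machine in the Controller. This is a minimal sketch only; the class, field, and event names below are illustrative assumptions and do not come from the VPI YANG models.

```python
# Minimal sketch of Controller Layer restore-on-reboot handling.
# Class, field, and event names are illustrative assumptions only.

class Controller:
    def __init__(self):
        self.flow_state = {}       # FlowID -> {"cm": ..., "params": ..., "status": ...}
        self.notifications = []    # events sent to Service Layer subscribers

    def add_flow(self, flow_id, cm, params):
        self.flow_state[flow_id] = {"cm": cm, "params": params, "status": "active"}

    def on_cm_status(self, cm, online):
        """Handle a Cable Modem Status event from the CCAP/DPoE System."""
        for flow_id, flow in self.flow_state.items():
            if flow["cm"] != cm:
                continue
            if not online:
                flow["status"] = "down"                       # steps 2-3
                self.notifications.append(("flow-event", flow_id, "down"))
            elif flow["params"].get("restore-on-reboot"):
                # Step 6: re-issue the Create Service Flow POST with the
                # same FlowID and the previously configured parameters.
                flow["status"] = "active"
                self.notifications.append(("flow-event", flow_id, "active"))


ctrl = Controller()
ctrl.add_flow("flow-1", cm="cm-42", params={"restore-on-reboot": True})
ctrl.on_cm_status("cm-42", online=False)   # CM reboots: flow marked down
ctrl.on_cm_status("cm-42", online=True)    # CM back online: flow re-instantiated
print(ctrl.flow_state["flow-1"]["status"])
```

Note that the same FlowID is retained across the reboot, mirroring the global-handle behavior described in step 6.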

6.1.3 Delete Service

This section describes the use case in which the user/application removes the services configured for a subscriber.

See Figure 9 for the Delete Service use case data flow.

6.1.3.1 Use Case Operation

1. Preconditions

a. The Initial CM bootup use case has been completed as described in Section 6.1.1, and the CM/ONU is already running with a set of new services.

2. The user chooses to drop a service from the web portal; the request is sent to the IPHSD application. The Service layer/IPHSD Application sends a Delete Service RESTCONF DELETE message, containing the Product ID for the dropped service and the subscriber information, to the Controller layer. Refer to Table 2.

3. The Controller layer receives the DELETE message for the particular Service ID and sends a RESTCONF DELETE message with the corresponding Flow ID to the CCAP or DPoE System.

a. DOCSIS Network

i. The CCAP receives the DELETE message and initiates DSx messages with the CM to delete the service flows marked for deletion.

ii. The CCAP sends back a success or failure return code to the Controller layer.

iii. The Controller Layer sends a Retrieve Service Flow Status GET message to the CCAP to obtain service flow status.

b. DPoE Network

i. The DPoE System receives the RESTCONF DELETE message and forwards it to the vCM. The vCM initiates eOAM messages with the customer’s ONU to delete upstream service flows associated with the Flow ID(s) marked for deletion.

ii. The vCM initiates eOAM messages with the customer’s OLT to delete downstream service flows associated with the Flow ID(s) marked for deletion.

iii. The OLT de-registers the LLID(s) associated with the deleted service flow(s) on the ONU.

iv. The DPoE System sends back a success or failure return code to the Controller Layer.

v. The Controller Layer retrieves service status from the DPoE System using a Retrieve Service Flow Status GET method.

4. The Controller Layer updates the Service Status and Flow State data stores, marking the service and flows as deleted.

5. The Controller Layer sends a Service Status event and a Flow Status event to the Service layer to notify the IPHSD application about the change in service.

At the conclusion of this use case the following conditions exist:

• The CM/ONU is online but providing one less service.

Note: The CM’s primary upstream and downstream service flows will remain active, allowing the service provider to monitor status, initiate service later if needed, etc. These remaining service flows are for service provider access to the CM/ONU management interface only and provide no services to the customer.

• The Flows previously associated with IPHSD service no longer exist in the Flow State database.

• No IPHSD service is associated with the CM/ONU in the Service Status database.
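The Delete Service exchange above can be illustrated by constructing the RESTCONF DELETE target URIs. The `/restconf/data` prefix and the key-in-path encoding follow [RFC 8040], but the `vpi-services` and `vpi-flows` module and node names below are invented for illustration; the real endpoints are derived from [VPI YANG].

```python
from urllib.parse import quote

# Hypothetical Delete Service / Delete Flow-Entry URI construction.
# "vpi-services" and "vpi-flows" are illustrative, not the published VPI YANG.

def delete_service_uri(host, service_id):
    # RFC 8040 addresses a list entry by appending "=<key>" to the list name.
    return f"https://{host}/restconf/data/vpi-services:services/service={quote(service_id, safe='')}"

def delete_flow_uri(host, flow_id):
    return f"https://{host}/restconf/data/vpi-flows:flows/flow-entry={quote(flow_id, safe='')}"

print(delete_service_uri("controller.example.net", "svc-1001"))
print(delete_flow_uri("ccap.example.net", "flow-7"))
```

The Service layer would issue an HTTP DELETE on the first URI toward the Controller (step 2), and the Controller an HTTP DELETE on the second toward the CCAP or DPoE System (step 3).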

6.1.4 Dynamically Change Services

Dynamic change of services is the process of providing the end user with new IPHSD-based services according to an agreement with the service provider and deleting the old services. An example is an upgrade from 50 Mbps IPHSD service to 100 Mbps IPHSD service. See Figure 9 for the Dynamically Change Services use case data flow.


6.1.4.1 Use Case Objectives

Objectives of the Dynamically Change Services use case are listed below:

• Receive the customer’s request for a change in service. Ultimately this will be achieved with no human intervention, with the customer submitting the change request via a portal application in the Service Layer and one or more Service Layer applications converting the request to RESTful commands to the Control Layer.

• Previous service is discontinued.

• New requested service is initiated.

• The CCAP/DPoE System is reconfigured with new service flow parameters and classifiers as applicable to enable the new requested service.

• Service disruption for the customer is minimized.

• Service status is updated in the data store for Service Layer and Control Layer reference.

• New requested service SLA parameters are validated and validation confirmation is made available to the customer.

6.1.4.2 Use Case Operation

Figure 9 - IPHSD Service - Service Delete and Change Use Case Flows

1. Preconditions

a. The Initial CM bootup use case has been completed as described in Section 6.1.1, and the CM/ONU is already running with a set of new services.

2. The user chooses a new product from the web portal which gets sent to the IP HSD application.


3. The application layer initiates a delete-add operation as detailed below:

a. Delete current service: follow the steps outlined in Section 6.1.3.

b. Add new service: follow the steps outlined in Section 6.1.1.

4. The Controller Layer sends a Service Event notification to the Service Layer to notify the application about the newly established flow.

5. The Service Layer updates Service information in the data store.
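The delete-add sequence in step 3 can be expressed as a simple composition. This is a sketch only; `delete_service()` and `add_service()` are hypothetical helpers assumed to wrap the RESTCONF exchanges of Sections 6.1.3 and 6.1.1.

```python
# Dynamic service change as delete-then-add. The two helpers passed in are
# assumed to wrap the Section 6.1.3 and 6.1.1 RESTCONF exchanges.

def change_service(delete_service, add_service, old_service_id, new_params):
    results = []
    results.append(("deleted", delete_service(old_service_id)))  # Section 6.1.3 steps
    results.append(("added", add_service(new_params)))           # Section 6.1.1 steps
    return results

# Usage with stub helpers standing in for the real RESTCONF calls:
log = change_service(
    delete_service=lambda sid: sid,
    add_service=lambda p: p["product-id"],
    old_service_id="svc-50mbps",
    new_params={"product-id": "svc-100mbps"},
)
print(log)
```

Ordering matters here: deleting before adding mirrors the use case and avoids both service flow sets being active at once, at the cost of a brief service gap.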

6.2 Layer 2 VPN Use Cases

System operators defined three key use cases for L2VPN service: initial CM/ONU bootup and service creation for a new customer, CM/ONU reboot, and deletion of a service. These three use cases represent typical L2VPN service delivery operations well enough to be used to validate the information and data models defined in Section 8 and the interface protocols described in Section 7.

Note: In the context of L2VPN service, the configuration mainly applies to the CM for DOCSIS networks or to the ONU for EPON as applicable, and not to customer-owned equipment downstream from the CM/ONU such as Network Interface devices (NID).

This section focuses on the EPL provisioning L2VPN use case.

6.2.1 Initial CM/ONU Bootup and Service Creation for New Customer

6.2.1.1 Use Case Objectives

The use case objectives for initial bootup provided in Section 6.1.1 for IP HSD service also apply to this use case. The following additional objectives also apply to the Initial CM/ONU Bootup and Service Creation use case for L2VPN service:

• The CCAP is configured to encapsulate layer 2 frames from the customer’s CM assigned to a specific L2VPN with some NSI Encapsulation option (e.g., IEEE 802.1Q tagged Ethernet packets).

• The ONU is provisioned either to add the Service VLAN tag or to classify and forward packets already tagged by a UNI-attached NID. The tagged packets are forwarded by the DPoE System.

• The CCAP/DPoE System serving the customer’s CM/ONU is configured to forward the layer 2 packets tagged for the customer’s L2VPN to the correct NSI port, if the destination CM/ONU is served by a different CCAP or DPoE System.

• If the destination CM/ONU is served by the same CCAP/DPoE System as the source CM/ONU, and if supported by the CCAP/DPoE System, the CCAP/DPoE System serving the customer’s CMs is configured to bridge Layer 2 traffic between the customer’s CM/ONU.


Figure 10 - DOCSIS Network L2VPN Service - Initial Bootup and Service Creation Use Case Flows

6.2.1.2 Use Case Operation

1. Preconditions

a. The Controller Layer has recorded that the CM or the vCM (ONU) is online.

b. L2VPN service has not been previously established with the customer.

c. The CCAP/DPoE System serving the customer’s CM/ONUs is DOCSIS L2VPN compliant or DPoE v2.0 compliant. The NSI is capable of distinguishing downstream L2VPN traffic from non-L2VPN traffic and determining to which VPN the packets belong, and the CCAP or DPoE System is capable of forwarding (bridging) traffic received from the customer’s CM/ONU to a configured 802.1Q VLAN ID on the NSI port.

2. The customer completes a service order for L2VPN service with the service provider (e.g., from a web portal). The L2VPN application in the Service layer takes this request and issues a Product RESTCONF POST message carrying the Product ID for L2VPN service, the corresponding Service parameters, and optionally the flow parameters, along with the subscriber information, and sends the message to the Controller layer. The Product POST optionally includes service parameters defined in Section 8.2.1.

3. The Controller Layer receives the POST; stores the Product ID, the Service ID, and subscriber information; and creates a new FlowID as a global handle for each of the flows associated with the service request.


4. The Controller Layer looks up any additional parameters and flow information required to instantiate the requested L2VPN service, if it was not already in the request from the Service layer. This includes the customer’s assigned NSI encapsulation and L2VPN ID.

5. The Controller Layer instantiates the required flows for L2VPN service on the CCAP or DPoE System managing the subscriber’s cable modem or ONU by issuing one or more Flow-Entry POST messages to the controlling entity (CCAP or DPoE System). Each message carries the service flow information and L2VPN information for the requested L2VPN service.

a. DOCSIS Network

i. The CCAP receives the Flow-Entry POST message with service flow and VPN parameters [CM SERVICES CFG YANG] from the Controller Layer.

ii. The CCAP uses the L2VPNID to configure itself to forward (bridge) between the NSI Encapsulation (e.g., 802.1Q VLAN ID) -tagged frames on the NSI port and the RF interface serving the customer’s CM.

iii. The CCAP exchanges DSx messages with the customer’s CM to establish at least one downstream service flow and one upstream service flow, and the appropriate classifiers. The packets from the upstream service flow are configured to be tagged with the NSI encapsulation as they pass through the NSI of the CCAP.

iv. The CCAP returns a Flow-Event notification to the Controller Layer with success or failure status.

b. DPoE Network


Figure 11 - DPoE Network L2VPN Service - Initial Bootup and Service Creation Use Case Flows

i. The DPoE System receives the RESTCONF Flow-Entry POST message with service flow and VPN parameters [CM SERVICES CFG YANG] from the Controller Layer.

ii. The DPoE System translates the YANG parameters to an internal format and provides the parameters to the vCM.

iii. The vCM configures the ONU to encapsulate upstream packets with, e.g., the Service VLAN ID (S-Tag) assigned to the customer and, optionally, the PBB VLAN ID (I-Tag or B-Tag), based on the TLV parameters.

iv. The vCM configures the OLT to forward downstream packets to the ONU(s) configured to encapsulate as defined above.

v. The vCM configures the VSI on the DPoE System.

vi. The vCM exchanges eOAM messages with the ONU to establish at least one downstream service flow and one upstream service flow, along with the appropriate classifiers, between the ONU and DPoE System.

vii. The DPoE System returns a Flow-Event notification to the Controller Layer with success or failure status.


6. The Controller Layer enters the flow information associated with the customer’s L2VPN service in its internal Flow State data store.

7. The Controller Layer enters the customer’s L2VPN service information in the Service State data store.

8. The Controller Layer sends a Service-Event notification to the L2VPN application in the Service Layer indicating the establishment of the Flow and the L2VPN service.

9. The Service Layer updates its Service Status data store.

The same sequence of events occurs for each of the customer’s ONUs, in different locations, with the same L2VPN ID and NSI Encapsulation, to create a complete L2VPN service. This could involve two OLTs and/or two DPoE Systems within the service provider’s network. The scope of this report is a single service provider’s administrative domain.

At the conclusion of this use case the following services have been established:

• Customer has L2VPN service, providing private Layer 2 connectivity between the customer’s two (or more) network connected devices in the same administrative domain.

• System operator/service provider has management (fault management, configuration management, accounting management, performance management, and security management) responsibility for the customer’s service and is monitoring the network for fault and performance problems.

Parameters Exchanged

• Flow parameters associated with L2VPN service

• L2VPN ID

• NSI Encapsulation (VLAN ID, etc.)

• CM/ONU/Service Status according to the service provider’s fault and performance management strategy
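The parameters exchanged above can be pictured as a JSON Flow-Entry POST body. The field names below are invented for illustration only; the normative structure is defined by [CM SERVICES CFG YANG] and is not reproduced here.

```python
import json

# Illustrative Flow-Entry POST body for L2VPN service. Field names are
# assumptions for illustration; see [CM SERVICES CFG YANG] for the real model.

def l2vpn_flow_entry(flow_id, l2vpn_id, vlan_id):
    return {
        "flow-entry": {
            "flow-id": flow_id,
            "l2vpn-id": l2vpn_id,
            # NSI Encapsulation, here an IEEE 802.1Q VLAN ID
            "nsi-encapsulation": {"ieee-802-1q": {"vlan-id": vlan_id}},
        }
    }

body = json.dumps(l2vpn_flow_entry("flow-9", "l2vpn-acme", 201))
print(body)
```

The Controller would POST one such body per flow in step 5, and re-POST the same body with the retained FlowID, L2VPN ID, and NSI Encapsulation in the reboot use case.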

6.2.2 CM/ONU Device Reboot - Resume Services

This use case is similar to Section 6.2.1, with the exception that service has previously been established for the customer, resulting in the following differences from the Initial Bootup use case:

• Service parameters (SLA parameters) for service ordered by the customer are already stored in the Service database in the Service Layer.

• Customer’s L2VPN Service remains in active state in Service State database.

• Customer’s L2VPN ID and NSI Encapsulation (VLAN ID) are retained in the Service State database and local to the Controller Layer.

At the initiation of the Endpoint Reboot – Resume Service use case, the CM/vCM (ONU) link status is ‘down’ and the L2VPN service status is ‘down’. Refer to Figure 10 and Figure 11 for the DOCSIS and DPoE use case flow diagrams, respectively.

6.2.2.1 Use Case Objectives

The objectives for the CM/ONU Reboot – Resume Service use case are to restore L2VPN service for the customer following a fault or other service interruption event. The Controller Layer and Service Layer retain customer information and L2VPN service information. Flow information and L2VPN service information at the CCAP/DPoE System have been lost and need to be re-created or replaced.

6.2.2.2 Use Case Operation

See the bottom parts of Figure 10 and Figure 11 for the reboot use case data flow.

1. Preconditions


a. L2VPN service had been established so the Controller Layer is configured with L2VPN ID, NSI Encapsulation and Flow IDs and the Service Database has L2VPN service parameters for the customer. The restore on reboot attribute is set to ‘true’ for L2VPN service.

b. The Controller Layer is subscribed to CCAP/DPoE System event reporting, including for when a CM/ONU in its administrative domain is offline or online.

c. The CM/ONU has rebooted, losing its connection with the CCAP or DPoE System and discarding the Service Flow ID, NSI Encapsulation, and L2VPN ID for the L2VPN service.

d. The CM/ONU initiated the re-registration process with the CCAP/DPoE System.

e. The Controller Layer has not received any changes from the Service Layer for the customer’s L2VPN service.

2. When the CCAP/DPoE System detects that the CM/ONU is offline, it sends a CM-Event notification to the Controller layer to notify the Controller of the loss of service with the CM/ONU.

3. When it is notified the CM/ONU is down, the Controller Layer updates its Flow State database to indicate the flow is down, updates its Service State database to indicate the customer’s L2VPN service is down, and sends a Service-Event notification to the L2VPN application in the Service Layer to inform the application the service is down.

4. The Service Status data store in the Service Layer is updated with the change in L2VPN service.

5. When the CM/ONU re-registers, the CCAP/DPoE System sends a CM-Event notification to the Controller Layer and the Controller updates its local Cable Modem Status data store.

6. The Controller Layer sends a RESTCONF Flow-Entry POST message with the original L2VPN service flow parameters and L2VPN service parameters including the previously-used NSI Encapsulation and L2VPN ID. The Controller Layer re-uses the unique Flow ID previously-created for the customer’s L2VPN service.

a. DOCSIS Network

i. The CCAP receives the Flow-Entry POST.

ii. The CCAP uses the NSI Encapsulation (e.g., VLAN ID) to re-configure itself to forward (bridge) frames between the NSI Encapsulation (e.g., 802.1Q VLAN ID) tagged frames on the NSI port and the RF interface serving the customer’s CM.

iii. The CCAP exchanges DSx messages with the customer’s CM to re-establish at least one downstream service flow and one upstream service flow for the L2VPN service.

iv. The CCAP issues a Flow-Event notification to the Controller Layer indicating success or failure of Service Flow creation.

b. DPoE Network

i. The DPoE System receives the Flow-Entry POST.

ii. The DPoE System translates the YANG parameters to an internal format and provides the parameters to the vCM for each ONU.

iii. The vCM configures the customer’s NSI Encapsulation (e.g., VLAN ID) on the ONU to be used to encapsulate layer 2 frames from the new customer’s LAN onto NSI Encapsulation (e.g., IEEE 802.1Q) tagged Ethernet frames before bridging it to the OLT for upstream transmission.

iv. The vCM configures the NSI Encapsulation (e.g., VLAN ID) and L2VPN ID to the DPoE System, which will re-establish bridging between the NSI Encapsulation (e.g. 802.1Q VLAN ID) tagged frames on the NSI port and the port serving the customer’s ONU.

v. The vCM exchanges eOAM messages with the ONU to re-establish at least one downstream service flow and one upstream service flow between the ONU and DPoE System.

vi. The DPoE System issues a Flow-Event notification to the Controller Layer indicating success or failure of Flow creation.


7. The Controller Layer updates its Service State database to indicate the customer’s L2VPN service is up.

8. The Controller Layer updates its Flow State database to indicate the Flow associated with the customer’s L2VPN service is up.

9. The Controller Layer sends a Service-Event notification to the L2VPN application in the Service Layer indicating the re-establishment of the L2VPN service.

10. The Service Layer updates its Service Status data store.

6.2.3 Delete Existing “L2VPN” Between Two Endpoints

This use case is the deletion of an Ethernet Private Line (EPL) transparent LAN service previously established between two CMs within a single service provider’s administrative domain. Refer to Figure 12 for a diagram of the use case information flow.

6.2.3.1 Use Case Objectives

The objectives for the Delete Existing L2VPN use case are to tear down the EPL connection between the CMs, free resources, update the service databases, and notify the Service Layer about the status of the service.

6.2.3.2 Use Case Operation

1. Preconditions

a. L2VPN service is established between two CM/ONUs, so the Controller Layer is configured with the NSI Encapsulation, L2VPN ID, and Flow IDs, and the Service Database has L2VPN service parameters for the customer, with Service status in the ‘active’ or ‘up’ state.

2. The customer initiates a Delete Service request with the service provider, via the service provider’s service portal or through another means.

3. The Service Layer L2VPN application constructs a new RESTCONF Delete Service DELETE message with the product ID, Service ID, Flow IDs of the L2VPN service to be deleted and the subscriber ID of the subscriber requesting the service to be deleted, and sends the message to the Controller Layer.

4. The Controller Layer receives the DELETE message.

5. The Controller Layer creates a RESTCONF Delete Service message with L2VPN Service ID & Flow ID and sends the message to the CCAP or DPoE System managing the subscriber’s CM or ONU.

a. DOCSIS Network


Figure 12 - DOCSIS Network L2VPN Service - Service Deletion Use Case Flow

i. The CCAP receives the Delete Flow-Entry message.

ii. The CCAP reconfigures itself to delete the bridge between the NSI Encapsulation (e.g., 802.1Q VLAN ID) tagged frames on the NSI port and the RF interface serving the customer’s CM.

iii. The CCAP exchanges DSx messages with the customer’s CM to delete the service flows associated with the L2VPN service Flow ID.

iv. The CCAP issues a Flow-Event notification to the Controller Layer indicating success or failure of Service Flow deletion.


b. DPoE Network

Figure 13 - DPoE Network L2VPN Service - Service Deletion Use Case Flow

i. The DPoE System receives the DELETE Flow-Entry message.

ii. The DPoE System reconfigures itself to delete the bridge between the NSI Encapsulation (e.g., 802.1Q VLAN ID) tagged frames on the NSI port and the RF interface serving the customer’s ONU.

iii. The vCM exchanges eOAM messages with the customer’s ONU to delete the service flows associated with the L2VPN service Flow ID.

iv. The DPoE System issues a Flow-Event notification to the Controller Layer indicating success or failure of the Flow deletion.

6. The Controller Layer deletes the flow information for the customer’s L2VPN service from the Flow State database.

7. The Controller Layer deletes the customer’s L2VPN service entry from the Service State database.

8. The Controller Layer sends a Service-Event notification to the L2VPN application in the Service Layer indicating the deletion of the L2VPN service.

9. The Service Layer updates its Service Status data store.


7 PROTOCOLS

7.1 Overview

System protocols enable the exchange of information (data) and control messages between the layers of the architecture. Protocols are designed so that each architectural component can correctly receive and parse the information sent to it, and correctly send information to other components. Protocols may be subject to constraints to satisfy system objectives. Constraints may include latency or other timing constraints, scope constraints such as ISO OSI layer of operation, and data type constraints.

7.2 REST

Representational State Transfer (REST) [Fielding 2000] is a client-server based architectural style providing a stateless, cacheable, and redundant system with a uniform, client-discoverable interface which exposes data resources directly to clients. Web services that comply with some or all of the above conditions are considered RESTful interfaces.

REST interfaces are modeled on the CREATE, READ, UPDATE, and DELETE (CRUD) storage functions and implement them using the HTTP methods POST, GET, PUT, and DELETE. Other HTTP methods such as OPTIONS, PATCH, and HEAD can also be supported. REST is a set of pull-based interface principles layered on top of the HTTP communication standard. It inherits the requirements and limitations of HTTP but is loose in how the HTTP mechanisms apply to data resources. REST does not have a native notification mechanism, so most developers layer in WebSockets, Server-Sent Events, or long polling for asynchronous client-notification support.

Authorization for RESTful transactions is typically done using HTTP Basic Auth, OAuth, or a shared pre-defined key. Since this information is otherwise passed in an unsecured manner, it is recommended that REST interfaces always be accessed over TLS (HTTPS).

REST operations follow a typical URL pattern of host, port, resource and HTTP method, followed by an optional data body typically carrying either an XML or JSON representation of the target resource or operation parameters, e.g., when implementing a remote procedure call via REST. URL encoding of operation parameters or resource selection criteria is also seen frequently in lieu of encoding in the message body, typically in relation to GET and PUT methods.
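The URL pattern and CRUD-to-HTTP mapping described above can be illustrated as follows; the host and resource path in the example are hypothetical.

```python
import json

# CRUD storage functions mapped onto HTTP methods, plus the typical REST
# URL pattern of host, port, and resource. Host and resource are invented.
CRUD_TO_HTTP = {"create": "POST", "read": "GET", "update": "PUT", "delete": "DELETE"}

def build_request(op, host, resource, body=None, port=443):
    return {
        "method": CRUD_TO_HTTP[op],
        "url": f"https://{host}:{port}/{resource.lstrip('/')}",
        # Optional data body, here a JSON representation of the resource.
        "body": json.dumps(body) if body is not None else None,
    }

req = build_request("create", "api.example.net", "/subscribers", {"id": "sub-42"})
print(req["method"], req["url"])
```

A read of the same resource would reuse the URL with the GET method and no body, matching the pattern of encoding resource selection in the URL rather than the message body.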

The major advantage of a REST API is platform independence. Since all communications are via HTTP, leveraging the web services infrastructure, RPC calls are machine-independent and data encoding formats are well defined, especially when using either XML or JSON. REST APIs are also easy to expose to web browser AJAX calls, so they are often seen in the context of Internet single-page applications.

REST APIs also make good translation-layer APIs, bridging system-, language-, or solution-specific programming interfaces and granting access to those interfaces via a universally accessible front end.

Drawbacks of the REST model include a lack of consistency in implementations. Since there is no single standard, each solution that offers a REST API can have specific and unique usage requirements that force the consumer to develop custom interfaces for each supported API. This can lead to confusion in terminology, usage, and implementation, so careful analysis of each REST API’s documentation is required.

7.3 NETCONF

The NETCONF protocol [RFC 6241] provides mechanisms to install, manipulate, and delete the configuration of network devices. It is designed to reduce the programming effort involved in automating device configuration. NETCONF is based on secure transport and uses Extensible Markup Language (XML) based data encoding for configuration and state data as well as for protocol messages. The choice of data modeling language is independent of the protocol. YANG [RFC 6020] is a recommended NETCONF modeling language, which introduces advanced language features for configuration management.


The NETCONF protocol is based on a Remote Procedure Call (RPC) model. The base protocol specifies a set of RPCs that clients invoke to manipulate configuration data stores, e.g., get-config, edit-config, copy-config, etc. The data modules implemented by the managed device specify additional RPCs that can be used by clients to manipulate specific device configuration and state data. The RPCs applicable to each data module are specified as part of the YANG definition of the module. NETCONF provides mechanisms for multi-action transaction management and two-phase commit, which assure data consistency at the cost of implementing a stateful session-based protocol with transaction semantics at the server side.

NETCONF can run over several transports including SSHv2, SOAP, and TLS. SSHv2 is the transport required by the base specification, with others being optional. NETCONF is defined in [CCAP-OSSIv3.1] as an optional configuration protocol for CCAP configuration management.

While NETCONF is a viable option, the industry favors RESTCONF for management and configuration of network devices.

7.4 RESTCONF

RESTCONF is an HTTP-based, REST-like protocol used to access data defined by YANG models using the data stores defined in NETCONF. RESTCONF is a proposed IETF standard [RFC 8040]. RESTCONF supports both XML and JSON for encoding of the YANG-defined data models. RESTCONF relies on Transport Layer Security (TLS) to provide privacy and data integrity between the application and server.

RESTCONF provides the CRUD operations on YANG-defined models via the HTTP methods OPTIONS, HEAD, GET, POST, PUT, PATCH, and DELETE. Configuration and operational data are represented as resources addressed via a URI, which can be retrieved with the GET method. Resources representing configuration data can be modified with the DELETE, PATCH, POST, and PUT methods. RESTCONF also provides YANG-defined Server-Sent event notifications for asynchronous notifications to clients.

The RESTful interface implemented by RESTCONF contrasts with the RPC-based method of NETCONF. The RESTCONF protocol operates on a hierarchy of resources, each of which represents a manageable component within the device. RESTCONF resources are accessed via well-defined URIs and the HTTP methods mentioned above. RESTCONF significantly reduces the transaction complexity of NETCONF when each transaction succeeds. RESTCONF increases in complexity relative to NETCONF as the application becomes responsible for handling error recovery. Each action on a resource is assumed to commit automatically on successful application, and RESTCONF removes the option of two-phase commit.

For example, a NETCONF <get-config> RPC operation is implemented with a RESTCONF HTTP GET method and a NETCONF <edit-config operation=create> RPC operation is implemented with a RESTCONF HTTP POST operation. For compatibility with YANG managed data modules that export application-specific RPC actions for NETCONF, RESTCONF supports use of the HTTP POST method to invoke those RPC calls. This Technical Report will present only RESTCONF, though the YANG models are also compatible with NETCONF.
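The NETCONF-to-RESTCONF correspondence above can be tabulated, together with the shape of an RFC 8040 data-resource URI. The module and container names in the example are hypothetical.

```python
# NETCONF RPC operations and the RESTCONF methods that implement them,
# per the correspondence described above.
NETCONF_TO_RESTCONF = {
    "get-config": "GET",
    "edit-config (create)": "POST",
    "edit-config (replace)": "PUT",
    "edit-config (merge)": "PATCH",
    "delete-config": "DELETE",
}

def data_resource_uri(host, path):
    # RFC 8040 roots configuration and operational data under /restconf/data.
    return f"https://{host}/restconf/data/{path}"

# "example-cmts:cable-modems" is an invented module:container pair.
print(NETCONF_TO_RESTCONF["get-config"],
      data_resource_uri("ccap.example.net", "example-cmts:cable-modems"))
```

Application-specific RPC actions exported by a YANG module are instead invoked with POST, as noted above.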

7.5 Service-to-Controller Protocols

The VPI model assumes that Service layer applications implement a RESTCONF client and the Controller layer implements a RESTCONF server in accordance with [RFC 8040], including its integrity and confidentiality requirements. Applications use the RESTCONF server interface in the Controller to request service creation, modification, and deletion.

Section 5.3.3.1 describes VPI model use of RESTCONF notifications for the Controller Layer to report status about services to the Service Layer, and for the CCAP and DPoE System in the Network Layer to report status about services to the Controller Layer.

Table 2 provides examples of Service-to-Controller VPI operations required to initiate IP High Speed Data use cases detailed in Section 6.1. [VPI YANG] models the VPI Service-to-Controller interface. See Appendix I for a full listing of the Service-to-Controller RESTCONF endpoints derived from [VPI YANG].

Note that the SDN Controller is the RESTCONF server for all operations, with an application in the Service layer acting as the RESTCONF client. Any push-type operations toward the application are implemented using NETCONF notifications as required by [RFC 8040]. A new notification “stream” specific to VPI notifications needs to be defined (see YANG model).

The payload for the Service Layer to Controller Layer interface will be composed of information defined in Section 8.2 and formatted as JavaScript Object Notation (JSON).

7.6 Controller-to-Network Protocols

The Virtual Provisioning Interface model defines use of RESTCONF [RFC 8040] for the Controller-to-Network protocol. The VPI architecture requires that system components (Service Layer, Controller, CCAP, and DPoE System) support RESTCONF.

RESTCONF leverages the wide pool of developers familiar with the RESTful web-client development model and the rich toolset available for developing and debugging HTTP-based applications.

RESTCONF is currently supported by many open source implementations. RESTCONF is already well established in virtualization frameworks, e.g., RESTCONF is an integral protocol in Linux Foundation’s OpenDaylight open source project.

The VPI model assumes that the Controller Layer implements a RESTCONF client and the CCAP implements a RESTCONF server in accordance with [RFC 8040], including its integrity and confidentiality requirements. The SDN Controller uses the RESTCONF server interface in the CCAP to request service creation, modification, and deletion, and to request information (GET) such as a list of registered cable modems.

The payload defined for the VPI Controller-Network interface is described in Section 8.3.1. The payload for the Controller Layer to Network Layer interface is composed of information defined in Section 8.3 and formatted as JavaScript Object Notation (JSON).

The VPI model defines the use of notifications for the CCAP to report status about service flows and cable modems to the Controller Layer. The Controller will subscribe with the CCAP to receive event streams of interest.

7.7 Application of RESTCONF, YANG for VPI

In the VPI Architecture, the Controller Layer implements a RESTCONF server for the Service Layer and a RESTCONF client to communicate with the CCAP/DPoE System.

Table 2 provides examples of Service-to-Controller VPI RESTCONF operations required for lifecycle management of IP High Speed Data use cases detailed in Section 6.1. The VPI YANG model [VPI YANG] defines the Service-to-Controller interface.

Table 2 - Example VPI RESTCONF Use Case Transactions: Service Layer to Controller Layer

VPI Operation: Create Service
RESTCONF Method: POST
Target URI: /restconf/data/config/vpi:vpi/subscriber={subscriber-id}/product={product-id}/
Example Payload:
{ "product": [ { "product-id": 16, "product-name": "Miami_Gold_50", "product-defn": "IPHSD 50 Mbps QoS Gold", "restore-on-reboot": true } ] }

VPI Operation: Delete Service
RESTCONF Method: DELETE
Target URI: /restconf/data/config/vpi:vpi/subscriber={subscriber-id}/product={product-id}/service={service-id}
Example Payload: N/A. The payload is not necessary because the service ID specified in the endpoint (URI) is sufficient for the SDN Controller to delete the service.
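As an illustration of the Create Service row above, the following sketch builds the target URI and payload a Service Layer application might POST to the SDN Controller. The host name, subscriber ID, and helper name are hypothetical; the payload mirrors the Table 2 example.

```python
import json

# Hypothetical helper: build the Create Service POST target URI and payload
# from Table 2. The "ctrl.example.net" host is an assumption for illustration.
def build_create_service(subscriber_id: int, product: dict,
                         host: str = "ctrl.example.net") -> tuple[str, str]:
    uri = (f"https://{host}/restconf/data/config/vpi:vpi/"
           f"subscriber={subscriber_id}/product={product['product-id']}/")
    body = json.dumps({"product": [product]})
    return uri, body

uri, body = build_create_service(
    1001,  # hypothetical subscriber-id
    {"product-id": 16,
     "product-name": "Miami_Gold_50",
     "product-defn": "IPHSD 50 Mbps QoS Gold",
     "restore-on-reboot": True})
```

A real client would then issue the POST over HTTPS with Content-Type application/yang-data+json.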

Table 3 provides examples of Controller-to-Network VPI RESTCONF operations required for lifecycle management of IP High Speed Data use cases detailed in Section 6.1. [CM SERVICES CFG YANG] and [CMTS STATUS YANG] model the VPI Controller-to-Network interface.

See Appendix I for a full listing of the Service-to-Controller RESTCONF endpoints derived from [VPI YANG] and Controller-to-Network RESTCONF endpoints derived from [CM SERVICES CFG YANG] and [CMTS STATUS YANG].

Page 52: Virtualization and Network Evolution Virtual Provisioning ...

VNE-TR-VPI-V01-170424 Virtualization and Network Evolution

52 CableLabs 04/24/17

Any push-type operations toward the SDN Controller are implemented as RESTCONF server event notifications. Refer to Section 7.8 for descriptions of VPI asynchronous notifications.

Note: The term CCAP is used interchangeably with "DPoE System" in this table unless noted otherwise.

Table 3 - Example VPI RESTCONF Use Case Transactions: Controller Layer to Network Layer

VPI Operation: Create Service Flow
RESTCONF Method: POST
Target URI: /restconf/data/config/docsis-cm-services-cfg:cm-services-cfg/cm-entry={cm-mac-address}/
Payload: { "global-flow-id": 9239 }

VPI Operation: Delete Service Flow
RESTCONF Method: DELETE
Target URI: /restconf/data/config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/{global-flow-id}
Payload: N/A. The payload is not necessary because the global flow ID specified in the endpoint (URI) is sufficient for the CCAP to delete the service.

VPI Operation: Retrieve Global Flow Status
RESTCONF Method: GET
Target URI: /restconf/data/operations/cm-services-cfg:cm-srv-cfg/cm-entry/{cm-entry}/flow-entry/{global-flow-id}/
Payload: This describes the status of the Global Flow as set by the SDN Controller. The Global Flow is one of the following subtypes:
• Upstream service flow
• Downstream service flow
• Upstream drop classifier
• L2VPN connection
• Multicast authorization
• Subscriber management settings
• CPE management settings

VPI Operation: CM Software Image Update (RPC)
RESTCONF Method: POST
Target URI: /restconf/operations/cm-services-cfg:upgrade-cm-software-image
Payload:
{ "input": { "cm-mac-address": "0xABCDEFABCDEF", "sw-upgrade-filename": "newSwVer2017.bin", "sw-upgrade-tftp-server": "10.10.1.25" } }
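The CM Software Image Update row above is an RPC invocation rather than a configuration edit. The sketch below is a hypothetical client-side helper that assembles the /restconf/operations POST from Table 3:

```python
import json

# Hypothetical helper: assemble the upgrade-cm-software-image RPC invocation
# (URI and "input" payload per Table 3; RPCs are invoked with HTTP POST).
def build_upgrade_rpc(cm_mac: str, filename: str, tftp_server: str):
    uri = "/restconf/operations/cm-services-cfg:upgrade-cm-software-image"
    payload = {"input": {"cm-mac-address": cm_mac,
                         "sw-upgrade-filename": filename,
                         "sw-upgrade-tftp-server": tftp_server}}
    return uri, json.dumps(payload)

uri, body = build_upgrade_rpc("0xABCDEFABCDEF", "newSwVer2017.bin", "10.10.1.25")
```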

7.8 Application of Asynchronous Notifications for VPI

As described in Section 7, it may be inefficient for an SDN Controller to constantly poll the CCAP to determine when the state of the network changes, such as when a CM changes state. As an alternative, the CCAP can support a "push" model, where event notifications are sent asynchronously to interested listeners as these network-impacting events occur.

One method to implement CCAP event notifications utilizes RESTCONF Notifications, which in turn are based on W3C Server-Sent Events. In this model, the RESTCONF server (CCAP) makes a list of event streams that it supports available to RESTCONF clients at the stream list URL. The client retrieves the supported event streams list using the HTTP GET operation for the stream list URL, to which the CCAP server responds with the list of supported event streams, their characteristics, and their locations (URIs). The client then “subscribes” to a desired event stream by issuing a GET request for the location URL with the "Accept" type "text/event-stream". The CCAP then responds with a stream of events, continuing until either side closes the associated TCP connection.
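The event-stream mechanics described above follow W3C Server-Sent Events framing: each event is a run of `data:` lines terminated by a blank line. The minimal parser below illustrates that framing only; it is a hypothetical sketch, not a VPI-defined component.

```python
def parse_sse(stream_text: str):
    """Yield the payload of each event in a text/event-stream body.

    Per Server-Sent Events framing, an event is one or more 'data:' lines,
    and a blank line terminates the event; multiple data lines in one event
    are joined with newlines.
    """
    data_lines = []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line == "" and data_lines:
            yield "\n".join(data_lines)
            data_lines = []
    if data_lines:  # stream closed without a trailing blank line
        yield "\n".join(data_lines)

events = list(parse_sse("data: <notification>...</notification>\n\n"
                        "data: a\ndata: b\n\n"))
```

A controller would feed each yielded payload (e.g., an XML `<notification>` element) to its event handler.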

The application in the Service Layer acting as a RESTCONF client subscribes to the SDN Controller (RESTCONF server) event streams. The RESTCONF client implemented by the SDN Controller subscribes to the CCAP/DPoE System RESTCONF server event streams.

Table 4 lists the event streams created by the RESTCONF servers in the VPI model:


Table 4 - VPI RESTCONF Server Event Streams

Service Layer to Controller Layer (RESTCONF Server: SDN Controller)
• Service Event: urn:cablelabs:params:xml:ns:yang:controller:service-event
• Error Message: urn:cablelabs:params:xml:ns:yang:controller:error-message

Controller Layer to Network Layer (RESTCONF Server: Access Network Controller, CCAP or DPoE System)
• Cable Modem/vCM Events: urn:cablelabs:params:xml:ns:yang:ccap:docsis-cm-services-cfg:cm-events
• Access Network Device Event: urn:cablelabs:params:xml:ns:yang:ccap:docsis-cmts-status:access-netwk-device-events
• Global Flow Event: urn:cablelabs:params:xml:ns:yang:ccap:docsis-cm-services-cfg:global-flow-event
• Error Message: urn:cablelabs:params:xml:ns:yang:ccap:docsis-cm-services-cfg:error-message
• New CM Online Event: urn:cablelabs:params:xml:ns:yang:ccap:docsis-cmts-status:new-cm-event

For example, consider an event stream of “Cable Modem Events” that the CCAP publishes to interested listeners such as the SDN Controller.

The SDN Controller finds the location of the "Cable Modem Events" stream with the following GET request:

GET /restconf/data/ietf-restconf-monitoring:restconf-state/streams HTTP/1.1
Host: ccap
Accept: application/yang.data+xml

The CCAP sends the following response:

HTTP/1.1 200 OK
Content-Type: application/yang.api+xml

<streams
    xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring">
  <stream>
    <name>CM-Event</name>
    <description>CM event stream</description>
    <replay-support>false</replay-support>
    <access>
      <encoding>xml</encoding>
      <location>https://ccap/cm-srv-cfg:cm-event</location>
    </access>
  </stream>
  ... more available streams ...
</streams>

The SDN Controller then subscribes to the "Cable Modem Events" stream with a GET request to the CCAP stream URL:

GET /streams/CM-Event HTTP/1.1
Host: ccap
Accept: text/event-stream
Cache-Control: no-cache
Connection: keep-alive

The CCAP can then stream events to the SDN Controller for as long as the TCP connection remains open:


HTTP/1.1 200 OK
Content-Type: text/event-stream

<notification
    xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf">
  <event-time>2016-05-27T00:01:00Z</event-time>
  <event xmlns="urn:cablelabs:params:xml:ns:yang:ccap:cm-events">
    <event-class>cm-event</event-class>
    <cm-mac-address>00:1D:CE:01:23:45</cm-mac-address>
    <cm-ip-address>10.10.212.75</cm-ip-address>
    <cm-status-message>up</cm-status-message>
  </event>
</notification>

[TCP connection closed]

Note: The event stream and event notification structure shown above is only an example. Definition of actual event streams, events, and their structure to be supported by a CCAP for SDN Controller are for future study.

7.9 Application of RPC to VPI

VPI YANG models define remote procedure calls to enact stateless operations on the network device.

As described in Section 5.3.3.3, RPCs are used when the intent is not to change the configuration but initiate an action on the network device. Typically RPCs are stateless in the sense that they do not affect the configuration settings for the services the device delivers.

Network layer RPCs are implemented by defining remote procedure calls in the YANG. RPCs are initiated in RESTCONF using the HTTP POST operation to the RPC’s URI. The POST passes input parameters in a payload, and once the operation is complete the server may return parameters in the response payload.

The application in the Service Layer acting as RESTCONF client initiates RPCs on the SDN Controller (RESTCONF server). The RESTCONF client implemented by the SDN Controller initiates RPCs on the network device (RESTCONF server).

Table 5 lists RPCs in the VPI model:

Table 5 - VPI Remote Procedure Calls

Service Layer to Controller Layer (RESTCONF Server: SDN Controller)
• N/A

Controller Layer to Network Layer (RESTCONF Server: Access Network Controller, CCAP or DPoE System)
• Reboot CM/ONU: /cm-srv-cfg:reboot-cm
• List all configured CM/ONU: /cm-srv-cfg:list-all-configured-CMs
• Upgrade software on CM/ONU: /cm-srv-cfg:upgrade-cm-software-image
• List all active CM/ONU: /cmts-status:list-active-CMs
• List all active Service Class Names: /cmts-status:list-active-SCNs


8 INFORMATION AND DATA MODELS

8.1 Component Model

Organization of the information exchanged between the layers in the VPI Architecture described in Section 5.2 begins with identifying the system components that will send and/or receive information with other components. The component model establishes logical system functions forming the basis for information grouping expressed as classes in the information models. Figure 14 illustrates the VPI Architecture component model.

Figure 14 - VPI Architecture Component Model


The main components in the VPI Component model are the Business Application which is associated with the Service Layer, the SDN Control and Management which is associated with the Controller Layer, and the CCAP/DPoE which is associated with the Network Layer. The interfaces shown in the diagram correspond to the Service to Controller interface described in Section 7.5 and the Controller to Network interface described in Section 7.6. The information models describing the information exchanged between the components are described in Sections 8.2 and 8.3.

8.2 Service-Controller Model

8.2.1 Service-Controller (NorthBound) Information Model

The Service-Controller information model describes the information passed between the Service Layer and the Controller Layer. The information model includes all the parameters needed for the Service Layer and the Controller Layer to deliver service to the customer and ensure proper operation of the network, along with any relationships among the parameters. The Service-to-Controller information model is depicted in Figure 15.

Several different data models and APIs are envisioned for this effort. Data models are derived from information models that define the grouping and classification of VPI attributes (taxonomy) and the relationships between VPI attributes (ontology).

The VPI information model defines attribute classes and relationships for products that an MSO wishes to offer subscribers. The model is rooted at the subscriber. The subscriber object points to a product object describing the product the subscriber consumes. The product consists of one or more services. The subscriber is also associated with the CPE data, i.e., the information for the CM or ONU at the customer premises. Each service consists of one or more flows, and each flow is described by flow parameters and classifiers. L2VPN services additionally include parameters for the endpoints of the service and, if needed, encapsulation parameters.

The CPE, the service and the flow all have associated status objects which maintain the status for those entities.

The VPI Service-Controller information model is described in UML in Figure 15.


Figure 15 - Service-Controller Information Model


These objects provide the foundation for the communication between an application that is defined to manage services like high-speed data or voice and video services, and the SDN Controller.

The data model and resulting API between the IP HSD application and the SDN Controller are realized by the VPI YANG model [VPI YANG]. Objects comprising the VPI Service-to-Controller information model include some defined in other CableLabs specifications and some introduced in this document. References to the defining documents are provided for objects defined elsewhere, and definitions are provided below for objects defined for VPI.

8.2.1.1 Product Object

The Product object uniquely identifies a product that can be ordered by a subscriber and is used to look up the service(s) for which devices will be provisioned.

Table 6 - Product Object

Attribute Name Type Required Attribute Access Type Constraints Units Default

ProductId AdminString Key R/W SIZE (1..255) N/A N/A

ProductName AdminString Yes R/W SIZE (1..255) N/A N/A

MarketId AdminString No R/W SIZE (1..255) N/A N/A

8.2.1.1.1 ProductId

The product identifier attribute is a string assigned by the service provider to uniquely identify a specific product offering. It is recommended that the VPI Service Layer application enforce uniqueness of the product identifier. The product comprises one or more services.

8.2.1.1.2 Product Name

The product name attribute is a string one to 255 characters in length assigned by the service provider as reference for the Product.

8.2.1.1.3 MarketId

This attribute is a string one to 255 characters in length assigned by the service provider as an identification for the market location where the service is deployed.

8.2.1.2 Subscriber Object

The Subscriber object uniquely identifies the person or business entity receiving one or more products from the service provider and is used to identify the CPE to be provisioned for one or more services associated with the product ordered by the subscriber.

Table 7 - Subscriber Object

Attribute Name Type Required Attribute Access Type Constraints Units Default

SubscriberID uint32 Key R/W N/A N/A N/A

AccountId AdminString Yes R/W SIZE (1..255) N/A N/A

FullName AdminString No R/W SIZE (1..255) N/A N/A

8.2.1.2.1 SubscriberID

This attribute is a service provider-specified identifier for the subscriber. It is expected to be unique within the service provider's backoffice systems, but uniqueness is not enforced by the VPI architecture.


8.2.1.2.2 FullName

This attribute is a name string associated with the subscriber.

8.2.1.2.3 AccountId

This attribute is an identifier for the subscriber’s account with the service provider. This attribute is expected to be unique within the service provider’s backoffice systems, but uniqueness is not expected to be enforced by the VPI architecture.

8.2.1.3 Service Object

Table 8 - Service Object

Attribute Name Type Required Attribute Access Type Constraints Units Default

ServiceId AdminString Key R/W SIZE (1..255) N/A N/A

serviceName AdminString Yes R/W SIZE (1..255) N/A N/A

serviceDefn Enum No R/W 0..9 N/A N/A

restoreOnReboot Boolean No R/W N/A N/A True

8.2.1.3.1 ServiceId

This attribute uniquely identifies a service comprising a product ordered by a subscriber. ServiceId is passed from the Service layer to the Controller layer and is used by the SDN Controller to look up the attributes and flow characteristics associated with the specific service.

8.2.1.3.2 ServiceName

This optional attribute is provided to enable the operator to assign a human-readable name in string format for the service.

8.2.1.3.3 ServiceDefn

This attribute defines the type of the service and is used by the SDN Controller to translate parameters to the Controller-to-Network interface.

8.2.1.3.4 RestoreOnReboot

This attribute indicates to the controller whether it is required to restore the service autonomously. If the value of this attribute is false, the controller is required to not initiate action to restore flows associated with the corresponding serviceId when the CPE returns from reboot. If the value of this attribute is true, the controller is required to initiate action to restore flows associated with the corresponding serviceId when the CPE returns from reboot. The default value of this attribute is ‘true’.
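The RestoreOnReboot behavior above reduces to a small decision rule. The sketch below is a hypothetical controller-side check (the function name and dict shape are assumptions), defaulting to true when the attribute is absent, as the definition specifies:

```python
def should_restore_on_reboot(service: dict) -> bool:
    """Decide whether the controller restores a service's flows after a CPE reboot.

    Per the RestoreOnReboot definition: restore when the attribute is true,
    do not restore when it is false, and default to true when it is absent.
    """
    return bool(service.get("restore-on-reboot", True))

# Attribute absent: the default of 'true' applies.
assert should_restore_on_reboot({"service-id": "svc-1"}) is True
# Explicitly disabled: the controller must not restore the flows.
assert should_restore_on_reboot({"restore-on-reboot": False}) is False
```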

8.2.1.4 ServiceStatus Object

This object provides current information about the specified service.

Table 9 - ServiceStatus Object

Attribute Name Type Required Attribute Access Type Constraints Units Default

SrvStatus Status Yes R 0..5 N/A N/A

LastConfigUpdate DateTime Yes R N/A N/A N/A


8.2.1.4.1 SrvStatus

This attribute indicates whether the service associated with the specified ServiceId is operational (up) or not operational (down, fail, offline, or pending).

8.2.1.4.2 LastConfigUpdate

This attribute indicates the day and time of day when the service associated with the specified ServiceId was last configured.

8.2.1.5 VpnParamSet Object

This object defines parameters for the tunnel protocol configured for secure delivery of the service.

Table 10 - VpnParamSet Object

Attribute Name Type Required Attribute Access Type Constraints Units Default

VpnId UnsignedInt Key R/W N/A N/A N/A

L2vpnEncapsulation Enum No R/W 1-5 N/A N/A

Topology Enum No R/W 1-7 N/A N/A

Policy AdminString No R/W N/A N/A N/A

8.2.1.5.1 VpnId

This attribute is a numeric identifier associated with the corresponding Virtual Private Network created for the service.

8.2.1.5.2 L2vpnEncapsulation

Defines the type of encapsulation for this L2VPN.

8.2.1.5.3 Topology

This attribute is the requested L2VPN topology (e.g., EVPL). If the value is other than none(0), the Encapsulation object is instantiated.

8.2.1.5.4 Policy

This attribute is the set of MSO-defined policy parameters for this L2VPN service. For example, a policy could contain Service Level Agreement, data rates, filtering, availability settings and other service attributes sold to the customer. The SDN Controller will conduct a lookup on this attribute and obtain the specific policy parameters required to implement the service.

8.2.1.6 L2vpnEncapsulation Object

This object defines L2VPN Encapsulation details.

Table 11 - L2vpnEncapsulation Object

Attribute Name Type Required Attribute Access Type Constraints Units Default

VLANId UnsignedInt No R/W N/A N/A N/A

Pseudowire Type Enum No

MiscL2VPN Parameters


8.2.1.6.1 VLANId

This attribute is the virtual LAN identifier associated with the corresponding L2VPN connection.

8.2.1.6.2 Pseudowire Type

Defines the method of encapsulation or tagging.

8.2.1.6.3 MiscL2VPN Parameters

Defines a set of miscellaneous L2VPN parameters that the operator wants to use for this L2VPN setup.

8.2.1.7 EndpointParameters Object

Table 12 - EndpointParameters Object

Attribute Name Type Required Attribute Access Type Constraints Units Default

EndpointIpv4Address hexBinary No R/W

EndpointIpv6Address hexBinary No R/W

8.2.1.7.1 EndpointIpv4Address

This attribute is the VLAN tunnel endpoint IPv4 address. This attribute is null if the endpoint does not have an IPv4 address.

8.2.1.7.2 EndpointIpv6Address

This attribute is the VLAN tunnel endpoint IPv6 address. This attribute is null if the endpoint does not have an IPv6 address.

8.2.1.8 Flow Object

The flow object, defined by the values of its attributes, uniquely describes each service flow providing transport of data packets from the Service Layer to the Controller Layer and shapes, polices, and prioritizes traffic according to QoS traffic parameters defined for the flow [MULPIv3.1].

Table 13 - Flow Object

Attribute Name Type Required Attribute Access Type Constraints Units Default

FlowReference UnsignedInt Key R/W N/A N/A N/A

Direction Direction Yes R/W 0..3 N/A N/A

Latency UnsignedInt No R/W N/A Milliseconds N/A

MaxRate UnsignedLong No R/W N/A Bits per second N/A
MinRate UnsignedLong No R/W N/A Bits per second N/A

8.2.1.8.1 FlowReference

This attribute is an identifier for a specific service flow (see definition for Service Flow in [MULPIv3.1]). The flow reference is assigned by the SDN Controller and maps to parameters that are specifically designed to fully provide a service to a specific cable modem or virtual cable modem. The FlowReference is meaningful only within the context of the service being provisioned.


8.2.1.8.2 Direction

This attribute enables configuration and reporting of the direction of the unidirectional service flow of packets between the Service Layer and the Network Layer. Possible values are upstream and downstream, where upstream is the direction from the Network Layer to the Service Layer, and downstream is the direction from the Service Layer to the Network Layer.

8.2.1.8.3 Latency

This attribute specifies the upper limit of the round-trip time for the system to pass a packet between the source, at either the Service Layer (downstream direction) or the Network Layer (upstream direction), and the destination.

8.2.1.8.4 MaxRate

This parameter specifies the maximum data rate the system transfers packets between the Service Layer and the Network Layer, in bits per second (bps). The CCAP is required to not transfer packets at a rate higher than MaxRate.

8.2.1.8.5 MinRate

This parameter specifies the minimum data rate the system transfers packets between the Service Layer and the Network Layer, in bits per second (bps). The CCAP is required to not transfer packets at a rate lower than MinRate.

8.2.1.9 FlowStatus Object

This object provides current information about the specified flow.

Table 14 - FlowStatus Object

Attribute Name Type Required Attribute Access Type Constraints Units Default

FlowReference UnsignedInt Yes R N/A N/A N/A

FlowStatus Enum Yes R 0..5 N/A N/A

LastConfigUpdate DateTime Yes R N/A N/A N/A

AccessNetworkSFID UnsignedInt Yes R N/A N/A N/A

8.2.1.9.1 FlowReference

See the Flow Object definition.

8.2.1.9.2 FlowStatus

This enumerated attribute indicates whether the flow associated with the specified FlowReference is operational (up) or not operational (down, fail, offline, or pending).

8.2.1.9.3 LastConfigUpdate

This attribute indicates the day and time of day when the flow associated with the specified FlowReference was last configured.

8.2.1.9.4 AccessNetworkSFID

This attribute is the Service Flow Identifier [MULPIv3.1] assigned by the CCAP or DPoE System to the service flow between the CCAP and the cable modem or between the DPoE System and the vCM. Each SFID is associated to the flow specified by the Flow Reference.


8.2.1.10 ClassifierParameters Object

Refer to the PktClass object description in [CM OSSIv3.1].

8.2.1.11 CPE Object

This object defines the Customer Premises Equipment (CPE) at the customer's location to which services comprising the product are delivered.

Note: In the context of IPHSD and L2VPN services, references to CPE apply to the CM for DOCSIS networks or to the vCM /ONU for DPoE/EPON networks as applicable, and not to customer-owned equipment downstream from the CM/ONU such as home routers, personal computers, smartphones, tablets, and IoT devices.

Table 15 - CPE Object

Attribute Name Type Required Attribute

Access Type Constraints Units Default

MacAddress MacAddress Key R/W N/A N/A N/A

IpAddress InetAddress Yes R/W N/A N/A N/A

IpAddressType InetAddressType Yes R/W

IpAddressPrefixLength InetAddressPrefixLength Yes R/W

DeviceType TechVersion Yes R/W 0 | 10 | 11 | 12 | 13 | 20 | 21 N/A N/A

VendorID AdminString Yes R/W N/A N/A N/A

8.2.1.11.1 MacAddress

This attribute is the Media Access Control address assigned to the CPE. The subscriber’s cable modem MAC address or ONU MAC address is used as the unique identifier for the subscriber, so services delivered to a subscriber are referenced to the MAC address of the CM providing the subscriber access to the cable operator’s services. See also [CCAP-OSSIv3.1], section B.1.1 DOCSIS Service Requirements, item 2.

8.2.1.11.2 IpAddress

This attribute identifies the IP address assigned to the CPE receiving the service provider’s service at the subscriber’s premises.

8.2.1.11.3 DeviceType

This attribute identifies the type of CPE receiving the service provider’s service at the subscriber’s premises.

8.2.1.11.4 VendorID

This attribute identifies the manufacturer of the CPE receiving the service provider’s service at the subscriber’s premises.

8.2.1.12 CPEStatus Object

This object provides current information about the specified CPE.

Table 16 - CPEStatus Object

Attribute Name Type Required Attribute Access Type Constraints Units Default

CPEStatus Status Yes R N/A N/A N/A

LastRebootTime DateTime Yes R N/A N/A N/A


8.2.1.12.1 CPEStatus

This attribute indicates whether the CPE associated with the specified MAC address is operational (up) or not operational (down, fail, offline, or pending).

8.2.1.12.2 LastRebootTime

This attribute indicates the day and time of day when the CPE associated with the specified MAC address last became operational following a device reboot.

8.2.2 Type Definitions

This section defines data types used in the object definitions for the Service-Controller information model. Refer to Annex E of CableLabs DOCSIS 3.1 Cable Modem OSSI specification [CM OSSIv3.1] for all types that are not defined in Table 17.

Table 17 - Data Types for Service-Controller Information Model

Data Type Name Base Type Permitted Values

ServiceDefn Unsigned Integer other (0) generic (1) iphsdTier1 (2) iphsdTier2 (3) gamingTierA (4) gamingTierB (5) videoSD (6) videoHD (7) voiceG711 (8) voiceHD (9)

Status Unsigned Integer other (0) down (1) up (2) fail (3) offline (4) pending (5)

EncapsulationType Unsigned Integer other (1) IEEE 802.1Q (2) IEEE 802.1ad (3) MPLSPW (4) IEEE 802.1ah (5)

Direction Unsigned Integer other (0) Downstream (1) Upstream (2) Bidirectional (3)

PseudowireType Unsigned Integer other(1) EnetTaggedMode(2) EnetRawMode(3)

Topology UnsignedInteger other (1) epl(2) evpl(3) eplan(4) evplan(5) eptree(6) evptree(7)


TechVersion Unsigned Integer other (0) DOCSIS 2.0 (10) DOCSIS 3.0 (11) DOCSIS 3.1 (12) Full Duplex DOCSIS (13) DPoE 1.0 (20) DPoE 2.0 (21)

InetAddressPrefixLength This data type corresponds to the InetAddressPrefixLength textual description defined in [RFC 4001]. It is the number of contiguous "1" bits from the most significant bit of an InetAddress.
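The enumerations in Table 17 can be transcribed directly. The sketch below encodes two of them (Status and Direction) as Python IntEnums with the numeric values from the table; the `is_operational` helper is a hypothetical illustration of how a status value would be interpreted (see Section 8.2.1.4.1).

```python
from enum import IntEnum

# Status enumeration transcribed from Table 17.
class Status(IntEnum):
    OTHER = 0
    DOWN = 1
    UP = 2
    FAIL = 3
    OFFLINE = 4
    PENDING = 5

# Direction enumeration transcribed from Table 17.
class Direction(IntEnum):
    OTHER = 0
    DOWNSTREAM = 1
    UPSTREAM = 2
    BIDIRECTIONAL = 3

def is_operational(status: Status) -> bool:
    """A service, flow, or CPE is operational only in the 'up' state."""
    return status == Status.UP
```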

8.2.3 Service-Controller Data Model

The information model above has been translated into a YANG data model, which is published in [VPI YANG].

8.3 Controller-Network Model

8.3.1 Controller-Network (SouthBound) Information Model

The Controller-Network information model describes the information passed between the Controller Layer and the Network Layer. The information model includes all the parameters needed for the Controller Layer and the Network Layer to deliver service to the customer and ensure proper operation of the network, and relationships among the parameters, if any.

Each of the underlying network elements exposes functionality upstream to the SDN Controller. This is the API implemented by the access network devices (CCAP/DPoE System) for use by the SDN Controller. The SDN Controller uses the value of ServiceId from the IP HSD application and the associated flow parameters, including any classification parameters that define the service's flows. The Controller assigns a global FlowId that identifies a specific flow on a specific subscriber device. The Controller then passes the global FlowId and all needed flow parameters and classifiers to the access network device (e.g., CCAP/DPoE System). The network device creates the service flow in the access network and passes the service flow identifiers (DOCSIS SFID, DPoE LLID) back to the Controller, which maintains the mapping state between the global FlowId and the access network service flow. The CCAP device then returns a result code, the global FlowId, and the ServiceFlowId.
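The mapping state described above (global FlowId to access-network service flow identifier) can be sketched as a small controller-side table. The class and method names below are hypothetical, not part of the VPI models:

```python
class GlobalFlowMap:
    """Hypothetical controller-side state: global FlowId -> access-network ID.

    The SDN Controller assigns the global FlowId; the CCAP/DPoE System
    returns the access-network identifier (DOCSIS SFID or DPoE LLID) when
    the service flow is created, and the controller records the mapping.
    """
    def __init__(self):
        self._next_flow_id = 1
        self._sfid_by_flow = {}

    def assign_flow_id(self) -> int:
        """Allocate a new global FlowId, unique within this controller."""
        flow_id = self._next_flow_id
        self._next_flow_id += 1
        return flow_id

    def record_sfid(self, flow_id: int, sfid: int) -> None:
        """Record the SFID/LLID the network device returned for this flow."""
        self._sfid_by_flow[flow_id] = sfid

    def sfid_for(self, flow_id: int) -> int:
        """Look up the access-network identifier for a global FlowId."""
        return self._sfid_by_flow[flow_id]
```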

Controller-Network information is specific to the access network type, as different data communication technologies used for different access network types require different parameters to configure and manage the communication channel and equipment. The VPI TR addresses hybrid fiber/coax (HFC) and optical fiber access networks. Because initial focus is on service provisioning defined by DOCSIS specifications for both access network types, the Controller-Network information model for DOCSIS and DOCSIS Provisioning of EPON (DPoE) is the same. The DOCSIS/DPoE Controller-Network information model is depicted in Figure 16 and objects and attributes are described in sections following.

The high-level Controller to Network information model for provisioning IPHSD service and L2VPN service is shown in Figure 16. Figure 18 and Figure 19 detail the sub-classes that comprise the CmServicesCfg class. Where applicable, the VPI Controller to Network information model leverages classes and attributes already defined by the DOCSIS family of specifications, and the model below references those DOCSIS specifications for the definitions of these existing classes. Classes and attributes introduced in the VPI Technical Report are described below.

Page 66: Virtualization and Network Evolution Virtual Provisioning ...

VNE-TR-VPI-V01-170424 Virtualization and Network Evolution

66 CableLabs 04/24/17

Figure 16 - Controller to Network Information Model - High Level: CmServicesCfg

The CmServicesCfg object is a composition of the following classes to dynamically configure services on a cable modem or virtual cable modem via the CCAP or DPoE System.

CmtsStatus provides operational information pertaining to the provisioned flows represented by the CmServicesCfg object.


8.3.2 CmServicesCfg Model

Figure 17 - CMServices

This object contains the attributes defining a subscriber for provisioning IPHSD & L2VPN services on a DOCSIS or DPoE network.

Table 18 - CmServicesCfg Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| MacAddress | MacAddress | Key | R/W | N/A | N/A | N/A |
| GlobalFlowId | UnsignedLong | Yes | R/W | N/A | N/A | N/A |

8.3.2.1.1 MACAddress

This attribute is the MAC address assigned to the CM to which the service provider delivers services for the subscriber.

8.3.2.1.2 GlobalFlowId

The unique identifier for a flow, generated by the SDN Controller. The Global Flow Id is unique within an SDN Controller, whereas a DOCSIS Service Flow is unique per MAC Domain, per CMTS or DPoE System.
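Because a Service Flow ID is only unique within a MAC Domain on a given CMTS or DPoE System, a controller keying its state on GlobalFlowId effectively maps each one to a fuller tuple. The sketch below illustrates the scoping; all device names and IDs are hypothetical.

```python
# GlobalFlowId (controller-wide) -> (device, MAC Domain, ServiceFlowId).
# The same ServiceFlowId can legitimately recur in different MAC Domains
# or on different devices; the GlobalFlowId never recurs.
flow_state = {
    1: ("cmts-a", "md-0", 101),
    2: ("cmts-a", "md-1", 101),   # same SFID, different MAC Domain
    3: ("cmts-b", "md-0", 101),   # same SFID, different CMTS
}

def locate(global_flow_id):
    """Resolve a GlobalFlowId to its access-network flow."""
    return flow_state[global_flow_id]
```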


8.3.3 QosCfg Model

Figure 18 - Controller to Network Information Model – QoSCfg


These objects model the configuration objects for QoS provisioning on DOCSIS and DPoE cable modems and vCMs respectively.

The data model for the configuration and status objects is realized by the VPI YANG data model [VPI YANG]. Differences between DOCSIS and DPoE are specified as 'specialization' objects (i.e., an object is the union of the primary object and the specialization object).
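The 'union' convention can be sketched as a simple merge; the helper function below is hypothetical, and the attribute names are taken from Tables 19 and 21.

```python
def specialize(primary: dict, specialization: dict) -> dict:
    """Return the union of a primary object and its access-network
    specialization (an illustrative helper, not a VPI-defined operation)."""
    merged = dict(primary)
    merged.update(specialization)   # specialization attributes extend the base
    return merged

# Base AggregatedSF object plus its DOCSIS specialization (DocsisAsf).
aggregated_sf = {"AggregatedSF": "asf-gold", "Direction": "downstream"}
docsis_asf = {"AsfQosProfileName": "gold-profile", "ApplicationId": 7}

effective = specialize(aggregated_sf, docsis_asf)
```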

8.3.3.1 AggregatedSF Object

An Aggregate Service Flow (ASF) is a grouping of one or more Service Flows mapped to a single CM or vCM. The DpoeAsf and DocsisAsf objects are access network specializations.

Table 19 - AggregatedSF Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| AggregatedSF | AdminString | Key | R/W | SIZE (1..255) | N/A | N/A |
| Direction | Direction | Yes | R/W | N/A | N/A | N/A |

8.3.3.2 DpoeAsf Object

Table 20 - DpoeAsf Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| ServingGroupName | AdminString | Key | R/W | SIZE (1..255) | N/A | N/A |
| MespReference | UnsignedShort | Yes | R/W | N/A | N/A | N/A |

8.3.3.3 MespRef Object

Refer to [DPoE MULPI] Section C.7.3 for the Serving Group Name attribute description.

8.3.3.4 DocsisAsf Object

Table 21 - DocsisAsf Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| AsfQosProfileName | String | Key | R/W | SIZE (2..16) | N/A | N/A |
| ApplicationId | UnsignedByte | Yes | R/W | N/A | N/A | N/A |
| ServiceClassName | AdminString | Yes | R/W | N/A | N/A | N/A |
| TrafficPriorityRange | UnsignedShort | Yes | R/W | N/A | N/A | N/A |

8.3.3.5 AsfQosProfileName Object

Refer to [MULPIv3.1] Section C.2.2.7.12 for the AsfQosProfileName attribute description.

8.3.3.6 ApplicationId Object

Refer to [MULPIv3.1] C.2.2.5.9 for the Application Identifier description.


8.3.3.7 ServiceClassName Object

Refer to [MULPIv3.1] C.2.2.7.13.2 for the Service Flow to ASF Matching by Service Class Name description.

8.3.3.8 TrafficPriorityRange Object

Refer to [MULPIv3.1] C.2.2.7.13.3 for the Service Flow to ASF Matching by Traffic Priority Range description.

8.3.3.9 ServiceFlow Object

Table 22 - ServiceFlow Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| Id | UnsignedInt | Key | R/W | SIZE (2..16) | N/A | N/A |
| Direction | Boolean | Yes | R/W | N/A | N/A | N/A |
| Primary | Boolean | Yes | R/W | N/A | N/A | N/A |
| AsfReference | UnsignedShort | No | R/W | N/A | N/A | N/A |

8.3.3.9.1 Id

Service Flow Id assigned by the CMTS or DPoE system during flow creation.

8.3.3.9.2 Direction

This attribute defines the upstream or downstream service flow direction.

8.3.3.9.3 AsfReference

This attribute associates the service flow with an ASF.

8.3.3.9.4 Primary

This attribute defines the flow as the primary service flow for the CM or vCM.

Refer to annex G.2.3.3 ServiceFlow Object of CableLabs DOCSIS 3.1 CM OSSI specification [CM OSSIv3.1] for detailed descriptions of attributes in the ServiceFlow object.

8.3.3.10 FlowStatistics Object

Table 23 - FlowStatistics Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| Pkts | Counter64 | Key | R/W | SIZE (2..16) | N/A | N/A |
| TimeCreated | TimeStamp | Yes | R/W | N/A | N/A | N/A |
| AttrAssignSuccess | Boolean | Yes | R/W | N/A | N/A | N/A |

8.3.3.10.1 Pkts

The number of packets carried on this service flow.

8.3.3.10.2 TimeCreated

The creation time of this service flow.


8.3.3.10.3 AttrAssignSuccess

Indicates whether the attribute update was successful or unsuccessful.

8.3.3.11 Paramset Object

Refer to [CM OSSIv3.1] for the complete list of QoS parameters.

8.3.3.12 Pkt Class Object

Refer to [CM OSSIv3.1] for the complete list of QoS parameters.

8.3.3.13 DpoeParamset Object

The following table lists the two Paramset attributes not supported by DOCSIS.

Table 24 - DpoeParamset Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| SfCollection | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |
| MespName | AdminString | No | R/W | N/A | N/A | N/A |

8.3.3.13.1 SfCollection

Refer to [DPoE MULPI] for Service Flow Collection description.

8.3.3.13.2 MespName Object

Refer to [DPoE MULPI] C.5.3 for MESP Name description.

8.3.3.14 DocsisParamset Object

The following table lists the DOCSIS-only attributes not supported by DPoE.

Table 25 - DocsisParamset Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| MaxReqPerSidCluster | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |
| MultiplierContentionReqWindow | UnsignedByte | No | R/W | N/A | N/A | N/A |
| TolPollJitter | UnsignedInt | No | R/W | N/A | N/A | N/A |
| MaxTimeInSidCluster | UnsignedShort | No | R/W | N/A | N/A | N/A |
| MaxOutstandingBytesPerSidCluster | UnsignedInt | No | R/W | N/A | N/A | N/A |
| MaxLatency | UnsignedInt | No | R/W | N/A | N/A | N/A |
| MaxTotBytesReqPerSidCluster | UnsignedInt | No | R/W | N/A | N/A | N/A |
| UnsolicitGrantSize | UnsignedShort | No | R/W | N/A | N/A | N/A |
| MaxConcatBurst | UnsignedShort | No | R/W | N/A | N/A | N/A |
| NomGrantInterval | UnsignedInt | No | R/W | N/A | N/A | N/A |
| TolGrantJitter | UnsignedInt | No | R/W | N/A | N/A | N/A |
| GrantsPerInterval | UnsignedByte | No | R/W | N/A | N/A | N/A |
| PeakTrafficRate | UnsignedInt | No | R/W | N/A | N/A | N/A |
| AttrAggrRuleMask | AttrAggrRuleMask | No | R/W | N/A | N/A | N/A |
| BitMap | ParamSetBitMapType | No | R/W | N/A | N/A | N/A |
| AppId | UnsignedInt | No | R/W | N/A | N/A | N/A |
| AqmDisabled | Boolean | No | R/W | N/A | N/A | N/A |
| AqmLatencyTarget | UnsignedShort | No | R/W | N/A | N/A | N/A |
| AqmAlgInUse | AqmAlgType | No | R/W | N/A | N/A | N/A |
| DsResequencing | DsRequencingType | No | R/W | N/A | N/A | N/A |
| RequestPolicyOct | RequestPolicyOct | No | R/W | N/A | N/A | N/A |
| MultiplierBytesReq | MultiplierBytesReq | No | R/W | N/A | N/A | N/A |

Refer to [MULPIv3.1] for descriptions of the attributes listed above.

8.3.3.15 L2vpnClassifiers Object

The following table lists the L2VPN classifier attributes.

Table 26 - L2vpnClassifiers Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| CVid | Vid | No | R/W | SIZE (2..16) | N/A | N/A |
| CTpid | UnsignedShort | No | R/W | SIZE (2..16) | N/A | N/A |
| CPcp | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |
| CCfi | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |
| CTci | UnsignedShort | No | R/W | SIZE (2..16) | N/A | N/A |
| STpid | UnsignedShort | No | R/W | SIZE (2..16) | N/A | N/A |
| SVid | Vid | No | R/W | SIZE (2..16) | N/A | N/A |
| SPcp | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |
| SDei | Boolean | No | R/W | SIZE (2..16) | N/A | N/A |
| Tci | UnsignedLong | No | R/W | SIZE (2..16) | N/A | N/A |
| ITpid | UnsignedShort | No | R/W | SIZE (2..16) | N/A | N/A |
| ISid | UnsignedLong | No | R/W | SIZE (2..16) | N/A | N/A |
| ITci | UnsignedLong | No | R/W | SIZE (2..16) | N/A | N/A |
| IPcp | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |
| IDei | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |
| IUca | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |
| BTpid | UnsignedShort | No | R/W | SIZE (2..16) | N/A | N/A |
| BTci | UnsignedShort | No | R/W | SIZE (2..16) | N/A | N/A |
| BDei | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |
| BVid | UnsignedShort | No | R/W | SIZE (2..16) | N/A | N/A |
| BDa | MacAddress | No | R/W | SIZE (2..16) | N/A | N/A |
| BSa | MacAddress | No | R/W | SIZE (2..16) | N/A | N/A |
| MplsTcBits | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |
| MplsLabel | UnsignedLong | No | R/W | SIZE (2..16) | N/A | N/A |

Refer to [L2VPN] for descriptions of the attributes listed above.
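Several of the classifier attributes above (for example CPcp, CCfi, CVid, and CTci) correspond to fields of the standard IEEE 802.1Q Tag Control Information word. A small sketch of that bit layout, for orientation only:

```python
def pack_tci(pcp: int, dei: int, vid: int) -> int:
    """Pack an IEEE 802.1Q TCI: 3-bit PCP, 1-bit DEI/CFI, 12-bit VLAN ID."""
    assert 0 <= pcp < 8 and dei in (0, 1) and 0 <= vid < 4096
    return (pcp << 13) | (dei << 12) | vid

def unpack_tci(tci: int) -> tuple:
    """Split a 16-bit TCI back into (PCP, DEI, VID)."""
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF

# Example: a C-tag with priority 5, DEI clear, on VLAN 100.
c_tci = pack_tci(5, 0, 100)
```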


8.3.4 L2vpnCfg Model

Figure 19 - Controller to Network Information Model – L2vpnCfg Class

Refer to annex B.3 L2VPN Encoding of CableLabs Layer 2 VPN specification [L2VPN] for descriptions of attributes in the L2vpnCfg object.

The UML above models the L2VPN Service Flow (left side of the diagram) and the L2VPN CM configuration components (right side of the diagram).

8.3.4.1 CmL2vpnCfg Object

This object contains only the VpnIdentifier. It serves as the association between an L2VPN service flow and the encoding/forwarding of the traffic into the MSO's network.

This object associates L2VPN parameters and values with a Service Flow for DOCSIS and DPoE. The attributes are described in [L2VPN].

8.3.4.2 L2vpnFlowCfg

This object contains the L2VPN access network side definitions common to DOCSIS and DPoE. It contains a reference to the Service Flow model defined previously.

8.3.4.3 DpoeL2vpnCfg

This object contains flow options applicable only to DPoE.


Table 27 - DpoeL2vpnCfg Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| L2VpnMode | Boolean | No | R/W | SIZE (2..16) | N/A | N/A |
| VpnServingGroup | UnsignedShort | No | R/W | SIZE (2..16) | N/A | N/A |
| NetworkTimingProfile | | No | R/W | N/A | N/A | N/A |
| NetworkTimingProfileRef | String | No | R/W | N/A | N/A | N/A |
| L2MultiptFwdingMode | Boolean | No | R/W | N/A | N/A | N/A |

This object is a specialization of the L2vpnFlowCfg object.

8.3.4.3.1 L2VpnMode

Refer to [DPoE MULPI] for the L2VpnMode description.

8.3.4.3.2 VpnServingGroup

Refer to [DPoE MEF] for VpnServingGroup description.

8.3.4.3.3 NetworkTimingProfile

Refer to [DPoE MULPI] for the Network Timing Protocol attributes.

8.3.4.3.4 NetworkTimingProfileRef

Refer to [DPoE MULPI] for the Network Timing Reference attributes.

8.3.4.3.5 L2MultiptFwdingMode

Refer to [DPoE MULPI] for the L2 Multipoint Forwarding Mode description.

8.3.4.3.6 TPID Translation Object

Refer to annex B.3.15 Tag Protocol Identifier (TPID) Translation of CableLabs Layer 2 VPN specification [L2VPN] for descriptions of attributes in the TPID Translation object.

8.3.4.3.7 ONU Encapsulation Object

This object describes ONU encapsulation associated with a specific service flow. The 802.1ad tag is specified as an [L2VPN] encapsulation TLV.

8.3.4.4 L2vpnNetworkCfg Object

This object anchors the network-side configuration of the L2VPN (i.e., traffic to/from the NSI port).

Table 28 - DpoeL2NetworkCfg Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| UpstreamUserPriority | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |
| DownStreamUsrPri | UnsignedByte | No | R/W | SIZE (2..16) | N/A | N/A |

Refer to [L2VPN] for the UpstreamUserPriority and DownStreamUsrPri descriptions.

8.3.4.5 L2VPN SOAM Object

Refer to annex B.3.24 L2VPN SOAM Subtype of the CableLabs Layer 2 VPN specification [L2VPN] for descriptions of attributes in the L2VPN SOAM object.


8.3.4.6 MepCfg Object

This object provides configuration parameters and values for both the local and remote MEP.

8.3.4.7 L2CP Processing Object

Refer to annex B.3.16 L2CP Processing of the CableLabs Layer 2 VPN specification [L2VPN] for descriptions of attributes in the L2CP Processing object.

8.3.4.8 L2vpnNsiEncap Object

Table 29 - L2vpnNsiEncap Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| EncapType | Enum | Yes | R/W | SIZE (2..16) | N/A | N/A |

Refer to annex B.3.2 of [L2VPN] for the following encapsulation methods.

8.3.4.9 802.1QEncapsulation Object

Refer to annex B.3.23 of [L2VPN] for the 802.1Q (dot1Q) encapsulation description.

8.3.4.10 802.1adEncapsulation Object

This object specifies 802.1ad (Provider Bridge) encapsulation.

8.3.4.11 802.1ahEncapsulation Object

This object specifies 802.1ah (Provider Backbone Bridge) encapsulation.

8.3.4.12 MPLS Encapsulation Object

Refer to annex B.3.23 Pseudowire Signaling of the CableLabs Layer 2 VPN specification [L2VPN] for descriptions of attributes in the MPLS Settings object.

8.3.4.13 BGP Attributes Object

Refer to annex B.3.21 BGP Attribute sub-TLV of the CableLabs Layer 2 VPN specification [L2VPN] for descriptions of attributes in the BGP Attributes object.

8.3.4.14 DPoEL2vpnNsiEncap

This object is a specialization of the L2vpnNsiEncap class.

8.3.4.15 DpoeServiceDelimiter Object

Refer to annex B.3.19 Service Delimiter of the CableLabs Layer 2 VPN specification [L2VPN] for descriptions of attributes in the Service Delimiter object.

8.3.4.16 DpoeVsiEncoding Object

Refer to annex B.3.20 Virtual Switch Instance Encoding of the CableLabs Layer 2 VPN specification [L2VPN] for descriptions of attributes in the VSIEncoding object.


8.3.5 McastJoinAuthorization Model

Figure 20 - MulticastJoinAuthorization Object

8.3.5.1 CmMcastJoinAuthorizationRules Object

Refer to [MULPIv3.1] for the Multicast Join Authorization attribute descriptions in this object.

8.3.5.1.1 McastJoinAuthorization Object

Refer to [MULPIv3.1] for the Multicast Profile Name attribute in this object.


8.3.6 SubscriberMgmt Model

Figure 21 - SubscriberMgmt Object

8.3.6.1 SubMgmt Object

Refer to C.1.1.19.1 in [MULPIv3.1] and [CCAP-OSSIv3.1] for additional information pertaining to the following attributes.

MgmtCpeCtrlActive – enables/disables CMTS subscriber management of a cable modem

SubscriberCpeLearnable – enables CMTS learning of CPE IP addresses behind the CM

CpeMaxIpv4 – sets the maximum number of simultaneous IPv4 addresses behind a CM

CpeMaxIpv6 – sets the maximum number of simultaneous IPv6 addresses behind a CM

Ipv4Addr – IPv4 address of the CPE

Ipv6Addr – IPv6 address of the CPE

Ipv6Prefix – IPv6 prefix associated with the CPE (SubMgmtPrefix object)
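As a rough illustration of how these attributes interact, the sketch below models a CMTS-side admission check; the class and the enforcement logic are assumptions for illustration, not behavior specified by this report.

```python
class SubMgmtCfg:
    """Illustrative holder for the SubMgmt attributes listed above."""
    def __init__(self, mgmt_active=True, learnable=True, max_ipv4=1):
        self.MgmtCpeCtrlActive = mgmt_active
        self.SubscriberCpeLearnable = learnable
        self.CpeMaxIpv4 = max_ipv4
        self.learned_ipv4 = set()   # CPE IPv4 addresses currently known

    def learn_ipv4(self, addr):
        """Admit a CPE IPv4 address only if subscriber management is active,
        learning is enabled, and the simultaneous-address limit is honored."""
        if not (self.MgmtCpeCtrlActive and self.SubscriberCpeLearnable):
            return False
        if addr in self.learned_ipv4:
            return True
        if len(self.learned_ipv4) >= self.CpeMaxIpv4:
            return False
        self.learned_ipv4.add(addr)
        return True

cfg = SubMgmtCfg(max_ipv4=2)
```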


8.3.7 CpeMgmt Model

Figure 22 - CpeMgmt Object

8.3.7.1.1 CmCpeMgmt

Refer to C.1.1.3 in [MULPIv3.1] for Network Access Control description.

8.3.7.1.2 DocsisCmCpeMgmt Object

This object is a DOCSIS specialization of the CmCpeMgmt Object.

Refer to C.1.1.1 in [MULPIv3.1] for the DsChannelFreq (Downstream Frequency Configuration) description.

Refer to C.1.1.2 in [MULPIv3.1] for the UsChannelFreq (Upstream Frequency Configuration) description.

8.3.7.1.3 SecurityParameters

For each of the following attributes, refer to the specified [MULPIv3.1] location.

Privacy Enable: Refer to [MULPIv3.1] C.1.1.7 for the description.


Manufacturer Code Verification: Refer to [MULPIv3.1] C.1.2.10 for the description.

Cosigner Code Verification certificate: Refer to [MULPIv3.1] C.1.2.11 for the description.

Software File Upgrade Name: Refer to [MULPIv3.1] C.1.1.3 for the description.

Software Upgrade IP TFTP Server IpAddrType: Refer to [MULPIv3.1] C.1.1.3 for the description.

Software Upgrade IP TFTP Server: Refer to [MULPIv3.1] C.1.2.7 or C.1.2.8 for the description.

8.3.7.1.4 SrcAddrVerification

Refer to [MULPIv3.1] C.1.1.18.1.7 for the SAV Group Name.

Refer to [MULPIv3.1] C.1.1.18.1.7.2 SAV Static Prefix Rule Subtype for further information.

8.3.7.1.5 CmAttributeMasks

Refer to [MULPIv3.1] C.1.1.18.1.8 Cable Modem Attribute Masks for further information.

8.3.7.1.6 UdcGroupIds

Refer to [MULPIv3.1] C.1.1.26 Upstream Drop Classifier Group ID for further information.


8.3.8 CmtsStatus Model

Figure 23 - Controller to Network Information Model – CMTS Status

8.3.8.1 CMTS Status Object

This abstract object roots CM Status and MacDomain status information.

Table 30 - CmtsStatus Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| MacAddress | MacAddress | Key | R/W | N/A | N/A | N/A |


8.3.8.1.1 MACAddress

This attribute is the MAC address assigned to the CM to which the service provider delivers service for the subscriber.

8.3.8.1.2 GlobalFlowId

The unique identifier for a flow. The Service flow is unique per MAC Domain whereas the Global Flow Id is unique within an SDN controller.

8.3.8.2 CMTSList Status Object

This object lists each CMTS, by MAC address, configured by the SDN Controller.

Table 31 - CmtsList Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| CmtsMacAddress | MacAddress | Key | R/W | N/A | N/A | N/A |
| ServiceClassNames | AdminString | No | R/W | N/A | N/A | N/A |
| MespNames | AdminString | No | R/W | N/A | N/A | N/A |

8.3.8.2.1 CmtsMacAddress

This attribute is the MAC address of the CMTS configured by the SDN Controller.

8.3.8.2.2 ServiceClassNames

This attribute lists the Service Class Names configured on this CMTS.

8.3.8.2.3 MespNames

This attribute lists the MESP profile names configured on this DPoE System.

8.3.8.3 CMTSMacDomain Status Object

This object lists all MAC Domains, by MAC address, associated with each CMTS configured by the SDN Controller.

Table 32 - MacDomain Status Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| MacAddress | MacAddress | Key | R/W | N/A | N/A | N/A |

8.3.8.3.1 MacAddress Object

A MAC Address for each MAC Domain.

8.3.8.4 CmStatus Object

This object provides the list of cable modems served by this CCAP and, for each cable modem, identifies the assigned IP address(es), device technology, vendor identification, and capabilities of the device.


Table 33 - CM Status Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| CmMacAddress | MacAddress | Key | R/W | N/A | N/A | N/A |
| IpAddressType | InetAddressType | Yes | R/W | N/A | N/A | N/A |
| IpAddress | InetAddress | Yes | R/W | N/A | N/A | N/A |
| DeviceType | TechVersion | Yes | R/W | 1, 10, 11, 12, 13, 20, 21 | N/A | N/A |
| VendorId | HexBinary | Yes | R/W | N/A | N/A | N/A |
| DeviceCapabilities | HexBinary | Yes | R/W | N/A | N/A | N/A |

8.3.8.4.1 CmMacAddress

This attribute is the MAC address of the cable modem served by the CCAP.

8.3.8.4.2 IpAddressType

This attribute represents the IP address type of the cable modem’s IP address. The value is of type InetAddressType as defined by [RFC 4001].
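For orientation, the RFC 4001 enumeration assigns ipv4(1) and ipv6(2); a small helper can demonstrate the classification (the helper itself is illustrative, not part of the model):

```python
import ipaddress

def inet_address_type(addr: str) -> int:
    """Return the RFC 4001 InetAddressType value for a textual IP address:
    ipv4(1) or ipv6(2)."""
    return 1 if ipaddress.ip_address(addr).version == 4 else 2
```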

8.3.8.4.3 IpAddress

This attribute represents the IP address of the cable modem served by the CCAP.

8.3.8.4.4 DeviceType

This attribute represents the access network type and version of protocol implemented by the cable modem. The value is of type TechVersion defined in Table 17.

8.3.8.4.5 VendorId

This optional attribute represents an identification for the cable modem vendor provided by the service provider.

8.3.9 CMTS DocsQosCfg Model

Refer to section 6.6.6.4.2 DocsQosCfg of CableLabs DOCSIS 3.1 CCAP OSSI specification [CCAP-OSSIv3.1] for descriptions of attributes in the DocsQosCfg object.

The MespTable object was added to the composition to support MESP Profiles.


Figure 24 - CMTS DocsQosCfg Objects

8.3.9.1 MespTable Object

This object provides the list of MESP profile names.

Table 34 - MESP Reference Table Object

| Attribute Name | Type | Required Attribute | Access | Type Constraints | Units | Default |
|---|---|---|---|---|---|---|
| Name | AdminString | Key | R/W | N/A | N/A | N/A |
| Reference | UnsignedShort | Key | R/W | N/A | N/A | N/A |
| BP-CR | UnsignedLong | Yes | R/W | N/A | N/A | N/A |
| BP-CBS | UnsignedLong | Yes | R/W | N/A | N/A | N/A |
| BP-ER | UnsignedLong | Yes | R/W | N/A | N/A | N/A |
| BP-EBS | UnsignedLong | Yes | R/W | N/A | N/A | N/A |
| BP-Coupling Flag | Enum | Yes | R/W | N/A | N/A | N/A |
| BP-CM-CF | Enum | Yes | R/W | N/A | N/A | N/A |
| BP-CM-CIF-Value | EnumBits | Yes | R/W | N/A | N/A | N/A |
| BP-CR-Color Marking | Enum | Yes | R/W | N/A | N/A | N/A |
| BP-CR-CR-Color Marking | EnumBits | Yes | R/W | N/A | N/A | N/A |

8.3.9.2 ServiceClass Object

Annex G.2.3.2 ParamSet Object of the DOCSIS 3.1 CM OSSI specification [CM OSSIv3.1] defines most of the ServiceClass object attributes. Table 20 defines attributes not defined in [CM OSSIv3.1].

8.3.9.3 AsfQosProfile

This object describes a provisioned QoS profile for Aggregate Service Flows on a CCAP. Each object instance defines a template for certain Aggregate QoS Parameter Set values. AsfQosProfile is to an ASF what a Service Class is to a Service Flow. Refer to [CM OSSIv3.1] Section 6.6.6.4.4 for additional information.

8.3.9.4 DocsisParamSet Object

Refer to Annex G.2.3.2 ParamSet Object of the DOCSIS 3.1 CM OSSI specification [CM OSSIv3.1] for descriptions of the DocsisParamSet object and attributes.

8.3.10 Controller-Network Data Model

The VPI architecture Controller-to-Network interface is modeled using several YANG modules. The modules developed here model the VPI Controller-to-Network interface and can be considered extensions to the YANG modules defined to model the CCAP configuration interface in the DOCSIS 3.1 specifications [CCAPv3.1 YANG].

The VPI architecture extends the DOCSIS 3.1 YANG model by adding YANG modules under the /CCAP/DOCSIS hierarchy as shown in sections above and listed below.

The Virtual Provisioning Interfaces Controller-Network API uses the following YANG modules:

• docsis-cm-services-cfg.yang [CM SERVICES CFG YANG]
• docsis-cmts-status.yang [CMTS STATUS YANG]
• cl-docsis-types.yang [DOCS TYPES YANG]
• docsis-service-flow.yang [SF YANG]
• docsis-aggregated-service-flow.yang [AGG SF YANG]
• docsis-classifier.yang [CLASS YANG]
• docsis-l2vpn-cfg.yang [L2VPN CFG YANG]
• docsis-mcast-join-authorization.yang [JOIN AUTH YANG]
• docsis-cm-cpe-mgmt.yang [CM CPE MGMT YANG]
• docsis-snmp-mib.yang [MIB YANG]
• docsis-subscriber-mgmt.yang [SUB MGT YANG]


9 DISAGGREGATED DPOE ARCHITECTURE

The DPoE System must evolve to economically expand its fiber footprint in light of the overarching cable industry goal to drive fiber deeper. The VPI working group initiated an effort to analyze and define how the DPoE System could be disaggregated (i.e., separating the OLT from the DPoE System) in order to push the OLT deeper into the network while, at the same time, architecting VPI to support both DOCSIS and EPON access networks.

Disaggregating the DPoE architecture has the potential to provide the service provider with multiple benefits such as those listed below:

• Apply cloud virtualization flexibility, scaling, and economics to MSO headend.

• Leverage SDN, NFV, and cloud technology to optimize the operations of cable access networks.

• Allow rapid deployment and termination of component functions where and when needed by separating functions of currently dedicated hardware into dynamically-deployable Virtual Network Functions such as virtual CPE (vCPE), virtual CCAP (vCCAP), virtual edge QAM (vEQAM), and virtual routers.

• Automate management of both the virtual and physical infrastructures.

• Improve agility of service provisioning for quicker time to revenue.

• Efficiently allocate the resources used to create and chain VNFs.

• Reduce CAPEX, since service providers do not have to invest in hardware that may not be used all the time and may have to be replaced when new functionality is introduced.

The team’s DPoE disaggregation architecture recommendation follows but is best understood in the context of DPoE as it exists today.

9.1 State of PON and DPoE

DPoE has matured to address expanding Operator requirements for higher data rates, increased functionality, and fiber deep initiatives. DPoE originally specified 1 Gbps data rates, IP high-speed data (IPHSD), and the MEF Ethernet Private Line (EPL) service. Today, the DPoE specification suite specifies symmetric 10 Gbps EPON, IPHSD, and the suite of MEF private and virtual services for Line and LAN topologies. Though DPoE development has stabilized, the EPON industry has not. For example, the vendor community experienced a sudden expansion, with three new suppliers of DPoE 2.0 products, which may indicate increased demand for DPoE. Further, the SDN/NFV revolution both enables and demands that DPoE evolve further to improve service velocity and customer responsiveness. DPoE, though stable, must be modified to remain current and relevant.

9.2 Problem Statement

The first generation of DPoE was entirely chassis-based. The OLT(s) were deployed on a blade alongside other chassis-based network components such as a router and switch(es). Though a single chassis might support multiple EPONs (e.g., 8 per line card), the capacity often may not be fully realized due to EPON distance limitations. To further compound Operator challenges, IPHSD deployment was primarily targeted at residential greenfields that too often were outside the 10-20 km distance to the nearest hub or headend. As a result, EPON extension options emerged to increase reach to 40-80 kilometers. But these solutions carried disadvantages, including additional CAPEX and OPEX for the additional hardware, maintenance, and the staff support needed to install, configure, and manage it. Rather than continuing to add stopgap hardware and absorbing the related costs, it became clear that a long-term FTTx solution is required that meets the following criteria.

• Integrates with Operators’ OSS/BSS (i.e., manageable)

• Supports Existing infrastructure to provide a migration path

• Extensible to support future provisioning and management interfaces (e.g., VPI)

• Leverages, if possible, existing initiatives (i.e., the DCA-defined Remote PHY architecture)


• Scalable

• Standards-based

9.2.1 Industry Drivers

Network architecture, in general, has shifted dramatically in recent years. A significant force behind the change is customer sophistication and the related growth in content consumption per subscriber.

Home networks have become ubiquitous and laden with networked devices. Wi-Fi, smartphones, OTT video, storage, and gaming are driving ever-increasing bandwidth demands. IoT will add yet more to the demand. The Gartner Group predicts that in less than 8 years homes will average more than 500 networked devices. In addition to ever-increasing home subscriber bandwidth consumption, small cell deployments will also require reliable and predictable voice and data transport.

One such enabler, the Distributed CCAP Architecture [DCA-MHAv2], is a maturing project that specifies the L2TPv3 tunneling protocol to carry the MAC layer and higher payloads from the CCAP core device to the node. The DCA layer 2 network, the Converged Interconnect Network (CIN), may also carry management and data traffic between a DPoE System and the Remote OLT. Reuse of the CIN reduces capital expenditures for Operators that deploy DCA in the future.

Finally, because DPoE is already compatible with existing SNMP- and configuration-file-based BSS/OSS systems, a disaggregated DPoE System enables FTTx today by bringing the OLT closer to the home or MDU. Whether a remote DPoE OLT is deployed using extender technologies or simply within the native distance limitations, a migration path is inherently and immediately available. The next section describes one such path.

9.3 Evolution of DPoE

This section proposes a migration path from the chassis-based DPoE System that currently employs DOCSIS provisioning methods to a fully disaggregated set of DPoE functions using native EPON provisioning protocols and methods. However, as noted in the following sections, this TR details only the first disaggregation phase: moving the R-OLT to the node and defining the tunneling protocols.

9.3.1 Current DPoE Architecture

The current integrated DPoE Architecture is detailed in CableLabs DPoE version 2.0 Architecture Specification [DPoE Arch].

9.3.1.1 DPoE Components, Functions, and Interdependencies

The DPoE System is a complex architecture realized as a collection of interfaces, functional components, and technologies. Yet, by design, very few interfaces are explicitly exposed. For example, the vCM downloads and translates the configuration file and converts the Service Flow TLVs into extended OAM (eOAM) for ONU configuration. However, the specifications are silent on OLT configuration of downstream service flows, data rate policing, and shaping. In addition, the architecture often requires that multiple components participate in the configuration of a single function (e.g., the router is configured using IPNE and vCM/ONU provisioning). Hence, the first step in disaggregating DPoE is to better understand the functional interdependencies to help define project scope. The process follows.

1. Create a simple disaggregation model with three components: DPoE System, vRouter, and Remote OLT

2. Compile a comprehensive DPoE function list extracted from the IPNE specification

3. Identify the configuration and operations for each of the three components listed in Step 1

Figure 25 graphically reveals the complexity of DPoE System disaggregation. For example, the same configuration interface may be located on two components, and features configured through a DPoE System function (e.g., TACACS) are applied to another component (e.g., the vRouter). This high-level understanding laid the groundwork for the subsequent disaggregated DPoE System architecture assessments and recommendations captured in the following pages.

Page 87: Virtualization and Network Evolution Virtual Provisioning ...

Virtual Provisioning Interfaces Technical Report VNE-TR-VPI-V01-170424

04/24/17 CableLabs 87

Figure 25 - DPoEv2.0 Interfaces and Reference Points

9.3.2 DPoE Evolution: Next Steps/Beyond the Chassis

The monolithic, self-contained DPoE chassis is diminishing in utility, yet the demand for its functionality grows as EPON deployments increase. VPI dynamic provisioning with an SDN focus is well-timed to facilitate the evolution of the chassis to a distributed NFV solution. Three disaggregated DPoE architectures were evaluated to identify the solution that 1) enables standardized dynamic provisioning for both the CCAP and DPoE while 2) balancing the level of effort and time so that the architecture enables vendor development and Operator deployment. The following sections present each option reviewed, along with a benefit analysis.

9.3.2.1 Disaggregated DPoE Architecture Options

The working group evaluated the three disaggregation proposals described below. Each proposal is accompanied by a brief benefit/drawback summary.

9.3.2.1.1 Option 1: Fully Disaggregated – All Interfaces Defined

Identify all DPoE functions and standardize the interfaces, messaging protocols and content.

This option provides the most granularity with respect to identifying components and defining interdependent interfaces. The option enables a flexible architecture that could be appealing to a number of operators. However, the number of functions in the DPoE System that would have to be isolated and implemented as VNFs creates a vast number of VNF permutations. Figure 26 below is a high-level pictorial view of the vendor-defined DPoE components; it illustrates only a small number of the interfaces requiring standardization.


Figure 26 - Fully Disaggregated DPoE Architecture Option 1

Benefits: This model potentially offers the greatest flexibility to the operator by creating granularity at the micro-function level. It is unlikely any Operator SDN/NFV network design would be constrained by this solution.

Disadvantages: The level of effort to thoroughly decompose the DPoE System into micro-functions and define standard interfaces for each was cost and time prohibitive for inclusion in the technical report.

Operators were surveyed to determine interest in this solution. The consensus follows.

• Though desirable, open interfaces are not feasible

• Vendor engagement would be unsustainable

Decision: Option 1 was deferred to a later effort (see Figure 29 below).

9.3.2.1.2 Option 2: Proprietary Disaggregated Architecture

Allow vendors to disaggregate their systems without changing any of the currently exposed interfaces. The tunneling protocol between the DPoE System and the R-OLT is vendor-proprietary as illustrated in Figure 27.

Figure 27 - Proprietary Disaggregated Architecture


Benefits: Minimum time to market due to little change to the current DPoE implementation and no constraint from specification development.

Time to market gated only by vendor disaggregation implementation and VPI support.

Disadvantages: Operators locked into same-vendor OLT/ONU solution.

Tunnel protocol may not be compatible with Operator’s network.

Tunnel protocol may not traverse multiple hops.

Decision: Option 2 was dropped as a risk to interoperability with Operator networks.

9.3.2.1.3 Option 3: Disaggregated - Defined Tunnel Only

Vendors are allowed to disaggregate their systems without changing any of the currently exposed interfaces. However, vendors must add new, open tunneling interfaces for the R-OLT. The new tunneling protocol and endpoint interfaces defined by the VPI working group are shown in Figure 28.

Figure 28 - Disaggregated Architecture – Defined Tunnel

Benefits: Operators not locked into vendor solution for enabling the R-OLT.

Vendor DPoE investment preserved, though additional costs will be incurred to implement tunneling protocol.

Additional disaggregation (e.g., separating the router from the DPoE System) can be added at a later time as requirements change.

Tunnel configuration may already be standardized by each vendor.

Disadvantages: Less flexibility in planning and deploying DPoE in an SDN/NFV network due to vendor-proprietary DPoE Systems.

Vendors may incur additional cost to add new tunnel protocol support.

The tunnel chosen by the working group may not be compatible with the vendor’s tunneling strategy.

Decision: Option 3, the Disaggregated Architecture-Defined Tunnel, is the best fit and constitutes the first disaggregation phase. This option provides substantial opportunity for vendor implementation and MSO architecture flexibility. It assumes existing DPoE provisioning entities (i.e., vCM and DPoE Systems) and exposed interfaces


(e.g., D, TU) continue to exist in addition to the two new tunnel interfaces. These functions and interfaces collectively support the new VPI defined services and APIs. However, though the tunnel interfaces are defined, the full functionality set described above is not defined or constrained by the VPI architecture. For example, the DPoE System may or may not be implemented in a single chassis-based solution as long as the management plane tunnel from the DPoE System to the R-OLT complies with the VPI specification.

This recommendation also carries an important caveat: the vCMs and the R-OLT may no longer be connected via a single link. The link-local nature of OAM could therefore pose an implementation challenge, since it constrains where in the network the vCMs may be located. Thus, management tunnels may be required if the vCMs are moved such that they are no longer adjacent to the tunnel endpoint or the OLT.

9.3.2.2 DPoE Evolution Summary

DPoE provided a method for operators to provision services delivered over EPON in a way that leverages their DOCSIS provisioning and management infrastructure. While DPoE satisfies service provider objectives for initiating the development of a system-compatible EPON provisioning process, it is only the first step in a migration to a dynamic EPON provisioning system that is independent of DOCSIS-specific provisioning infrastructure.

The migration path ensures current vendor investments are preserved while also articulating a means to achieve native provisioning of EPON. Hence, each phase of disaggregation is reusable in the following phase. The migration path is depicted in Figure 29 below.

Figure 29 - DPoE to Native EPON Provisioning Migration

Integrated DPoE System: The DPoE System and ONUs are provisioned utilizing a combination of DHCP, a minimal DOCSIS configuration file, and VPI-defined YANG data models. The DPoE System remains intact including the embedded OLT.

Disaggregated DPoE System - Tunnel Interface Defined: The OLT is physically separated from the DPoE System Chassis and connected remotely via a high-speed link. The DPoE System and ONUs are provisioned utilizing a combination of DHCP, a minimal DOCSIS configuration file, and VPI-defined YANG data models. Internal and proprietary interfaces remain intact. The connection to the R-OLT is based on a standard tunneling protocol. Tunnel configuration and management will be defined in this document.

Disaggregated DPoE System - All Interfaces Defined: The DPoE System may further disaggregate such that all the internal major functional components are identified and interfaces to each are openly described. This solution enables more granular virtualization of DPoE components. The dynamic service provisioning mechanism used for the first two options may require expansion to support configuring the newly exposed, standardized interfaces. This option is not in scope for this technical report.

Native EPON Provisioning: The provisioning of EPON OLTs and ONUs is no longer dependent on DPoE constructs. This option is not in scope for this technical report.

VPI requires that additional functions be supported by the DPoE System.

• Dynamic Service Flows: The mechanism is currently defined in the specifications; most deployed offerings do not yet support this feature.

• YANG API (aka protocol shim): An API must be implemented to process service activation/deactivation requests. This API converts the YANG-defined service parameters into a format usable by the vCM for provisioning the OLT and ONU(s). The 'shim' effectively preserves the existing DPoE provisioning and management mechanisms.
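A minimal sketch of the shim's translation step, under stated assumptions: the request and output field names below are hypothetical, not the actual VPI YANG data model or vCM interface.

```python
# Translate a hypothetical YANG-style service request into the key/value
# form a vCM-like provisioner could consume. All names are illustrative.
def yang_to_vcm(request: dict) -> dict:
    rate_mbps = request["bandwidth-profile"]["max-rate-mbps"]
    return {
        "onu-id": request["onu-id"],
        "service-flow": {
            "direction": request["direction"],
            "max-sustained-rate-bps": rate_mbps * 1_000_000,
        },
    }

req = {"onu-id": "onu-17", "direction": "downstream",
       "bandwidth-profile": {"max-rate-mbps": 100}}
assert yang_to_vcm(req)["service-flow"]["max-sustained-rate-bps"] == 100_000_000
```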


9.4 Next Step: Disaggregated DPoE with Defined Tunnel Interfaces

The VPI architecture describes the disaggregated DPoE architecture including recommendations for the tunnel interfaces between the DPoE System and the remote OLT (R-OLT). Operators may use this as a starting point in their design of the next generation of distributed access networks. Many Operators already plan to disaggregate and virtualize the DOCSIS network by using Remote PHY/Remote MACPHY technologies and virtualizing the CCAP Core and MAC layer functions. As the DOCSIS networks evolve, operators would like to apply the same principles to the EPON network with the goal of migrating to a disaggregated DPoE architecture.

9.4.1 Disaggregated DPoE Architecture

The DPoE System and the Remote OLT (R-OLT) are the two high-level elements in the first disaggregation phase. The DPoE management plane is the focus, but the data plane is addressed as well. The architecture is defined such that vendors may reuse existing DPoE management plane software, perhaps in a virtualized environment such as a VNF, and communicate to and through the Remote OLT. In addition, the DPoE System VNF will also have an interface to a physical or virtualized router. The router and the Remote OLT will be connected by the Converged Interconnect Network (CIN). This connection may be a Layer 2 network or a Layer 3 network.

9.4.1.1 Components

The main components in the disaggregated DPoE System Architecture are the DPoE System Controller, the router, the Remote OLT, and ONUs. Together the DPoE System Controller, the router, and the OLT perform the functionality of an integrated DPoE System chassis.

• DPoE System Controller The DPoE System Controller is composed of one or more VNFs supporting the following functionality:

• vCMTS Implements a RESTCONF client to expose the underlying DPoE network to the SDN controller, supports the Controller to Network YANG data model described in Section 8.3.10. The vCMTS also implements other CCAP functionality as defined in the DPoE v2.0 specifications [DPoE Arch].

• vCMs Implements the vCM functionality as defined in the DPoE v2.0 specifications [DPoE Arch].

• Router (Virtual or Physical) Implements Layer 3 and Layer 2 traffic forwarding for control and data traffic between the service provider’s network and ONUs, OLTs, and the customer network. The router can be implemented as a physical device or as a virtual network function. The DPoE System VNF manages traffic forwarding configurations on this router.

• Remote OLT (R-OLT) Implements the 1G-EPON and 10G-EPON as defined in [IEEE 802.3]. The R-OLT is deployed outside of the service provider’s headend, typically co-located with a fiber node.

• ONU Implements D-ONU functionality as defined in DPoE v2.0 specifications [DPoE Arch].
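As an illustration of the RESTCONF exposure mentioned for the vCMTS above, the following sketch builds an RFC 8040-style data-resource URI. The host, module, and node names are hypothetical placeholders, not the actual VPI models.

```python
# Build a RESTCONF (RFC 8040) data-resource URI of the general form
# https://{host}/restconf/data/{module}:{node}/{node}...
# All names below are placeholders for illustration.
def restconf_uri(host: str, module: str, *nodes: str) -> str:
    return f"https://{host}/restconf/data/{module}:" + "/".join(nodes)

uri = restconf_uri("vcmts.example.net", "example-dpoe-service",
                   "services", "service=hsd-1")
assert uri == ("https://vcmts.example.net/restconf/data/"
               "example-dpoe-service:services/service=hsd-1")
```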

9.4.1.2 Management Plane

The DPoE System management software/VNF configures the R-OLT and the ONUs via eOAM and vendor-proprietary messages. Hence, the DPoE System must establish a management plane tunnel or VLAN-based connection to the Remote OLT. DPoE System management software will also require a management tunnel for router configuration and control; this tunnel/connection is not in scope.

9.4.1.3 Data Plane

The Remote OLT requires a connection to a router for transport of user data, i.e., data plane traffic. The transport mechanism between the R-OLT and router could be a tunnel or a tagging protocol, depending on the operator's need to secure and/or isolate user data.


9.4.2 Generalized Disaggregated DPoE Architecture

Figure 30 below is a synthesis of four vendor-proposed DPoE disaggregation models. It captures the common components and functionality contained in each proposal. This model segments functionality by one possible set of locations (i.e., Data Center, Headend, or Remote OLT Node). It is possible, if not likely, that individual MSO architectures will not align exactly with this model. But this generalization enables the reader to better understand and compare the vendor proposals. Each location and functionality is briefly described below. The highlighted boxes in Figure 30, VPI YANG and CM Services Configuration YANG, represent the YANG data models that model the Service to Controller interface and Controller to Network interface, respectively, by implementing the Service to Controller and Controller to Network information models described in Section 8.

Figure 30 - Synthesized Solution

9.4.2.1 Data Center

VPI assumes the data center physically hosts the DOCSIS OSS back office, both traditional applications and the VPI-defined Service Applications. The SDN controller may be collocated in the data center. It provides the northbound interface to the service applications and a southbound interface to the vCMTS.

9.4.2.2 Headend

The disaggregated DPoE System components reside in the headend. Both the data and management planes traverse the headend via the DPoE Router. The switched fabric provides Layer 2 connectivity to the remote OLTs. The link between the switched fabric and the R-OLT is secured using MACsec. The network north of the headend is assumed to be secure.

9.4.2.3 R-OLT Node

The node may contain one or more OLTs, and each OLT may support one or more EPONs. The R-OLT node supports at least one of the following traffic separation protocols, depending on the MSO architecture: SIEPON IEEE 1904.2, VXLAN, or simple VLAN tagging. Note that the SIEPON 1904.2 specification is not yet an approved IEEE standard.
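Of the traffic separation options above, plain VLAN tagging is the simplest to illustrate. A minimal sketch of extracting the VLAN ID from a single-tagged Ethernet frame, using the standard 802.1Q layout (TPID 0x8100, 12-bit VID in the tag control field):

```python
import struct

def vlan_id(frame: bytes):
    """Return the VLAN ID of a single-tagged Ethernet frame, else None."""
    # The tag follows the 6-byte destination and 6-byte source MAC addresses.
    tpid, tci = struct.unpack_from("!HH", frame, 12)
    return tci & 0x0FFF if tpid == 0x8100 else None

# 12 bytes of MAC addresses, a tag with PCP=3 and VID=42, then an EtherType.
frame = bytes(12) + struct.pack("!HH", 0x8100, (3 << 13) | 42) + b"\x08\x00"
assert vlan_id(frame) == 42
```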


9.4.3 Disaggregated DPoE Architecture – Defined Tunnel Interfaces - Proposals

The following sections present four vendor proposals that capture the diversity of architectural and technical options supported by VPI. Each proposal conformed to the Open Disaggregated Architecture-Defined Tunnel model and was required to include a tunnel protocol specification where applicable.

9.4.3.1 Option 1

Disaggregated DPoE Architecture Option 1 defines a Virtual CMTS (vCMTS) component within the DPoE System and identifies four interfaces, shown in Figure 31, between the vCMTS and other DPoE System components.

The vCMTS aligns functionality and behavior between the DOCSIS provisioning system and DPoE functionality and segregates functions required for the distributed DPoE architecture. The vCMTS and vCM together are referred to as the DOCSIS Virtualization Layer.

Option 1 also allows for IEEE 802.1-compliant switches providing layer 2 connectivity between the service provider’s edge router and the OLT. These switches are labeled X in Figure 31.

The new interfaces defined for the disaggregated DPoE architecture are listed below:

IF-CRpe: Between vCMTS and the virtual provider edge router (RPE) in the DPoE System

IF-CSw: Between vCMTS and 802.1 switches in the DPoE System

IF-COlt: Between vCMTS and the OLT in the DPoE System

IF-CmOnu: Between the vCM in the DPoE system and the ONU

Figure 31 - Option 1 Disaggregated DPoE Architecture

This option identifies two implementation extremes for the Distributed DPoE Architecture:

• Remote Linecard/Partially Distributed scenario: the architecture as implemented by existing solutions, with the OLT physically remote from the DPoE System chassis. Figure 32 illustrates the disaggregated DPoE components with their locations and connections for a partially distributed architecture. Here, some key components reside within a DPoE System chassis in the service provider's headend.


Figure 32 - Option 1 Partially Distributed Disaggregated DPoE Architecture

• Fully Distributed scenario: the DPoE System is fully disaggregated, with all DPoE System internal components existing as physically separate entities and the DOCSIS Virtualization Layer at least one Layer 3 hop away from the service provider's headend or central office. Figure 33 illustrates the disaggregated DPoE components with their locations and connections for a fully distributed architecture.

Figure 33 - Option 1 Fully Distributed Disaggregated DPoE Architecture

In actual deployments, the disaggregated DPoE implementation will be on a continuum between the partially distributed model and the fully distributed model.

Protocols for transporting Layer 2 control messages across the interfaces defined above depend on the implementation. Option 1 recommends the protocols shown in Table 35.

Table 35 - Protocols Proposed for Disaggregated DPoE Architecture Option 1

Interface | Protocol: Remote Linecard Scenario | Protocol: Fully Distributed Scenario
IF-CRpe | Undefined if internal to chassis, or NETCONF/YANG | NETCONF/YANG over TLS or SSH
IF-CSw | Undefined if internal to chassis, or NETCONF/YANG | NETCONF/YANG over TLS or SSH
IF-COlt | eOAM over SIEPON IEEE 1904.2 | eOAM over SIEPON IEEE 1904.2 over L2TPv3
IF-CmOnu | DPoE-SP-OAMv2 over SIEPON IEEE 1904.2 | DPoE-SP-OAMv2 over SIEPON IEEE 1904.2 over L2TPv3

Interface “D” is the DOCSIS IP network-to-network interface, also referred to as the Network Systems Interface (NSI) [DPoE Arch]. It is defined both as the DPoE interface for the service provider’s Operations Support System (OSS) and as the interface for data plane traffic, i.e., for moving customer traffic into and out of the DPoE network. DPoE disaggregated architecture Option 1 disaggregates interface D by attaching the OSS/provisioning interface to the vCM and vCMTS and attaching the data plane to the operator’s edge router (RPE).

The distributed DPoE model follows the ‘integrated’ DPoE model for the transport of user data.

The interface between the provider’s edge router (RPE) and the 802.1 switches (X) is a Network-to-Network Interface (NNI) or Internal Network-to-Network Interface (INNI) as defined by MEF [DPoE Arch]. This interface is labeled MNi in Figure 31.

The interfaces between the 802.1 switches (X) (if more than one switch is used), and between the 802.1 switch and the OLT, are Layer 2 with VLAN tags if needed. Note that the system should be properly provisioned to accommodate large frames if needed.

9.4.3.2 Option 2

Option 2 assumes the presence of an orchestrator responsible for configuring connected components such as routers, aggregation switches, OLTs, and ONUs, but not the DPoE System and vCMs. Option 2 combines the Controller functionality with the DPoE System.

Due to potential problems with system dimensioning, the vCMs are separated from the DPoE System, resulting in the need to define the interface between the DPoE System and the vCMs. This option recommends locating the vCM within the remote OLT (R-OLT).

The disaggregated DPoE architecture defined by Option 2 is therefore composed of four functional blocks:

• SDN Controller

• Router: Standard routing of Layer 3 traffic between components

• DPoE System

• R-OLTs and ONUs

Figure 34 - Disaggregated DPoE Architecture Option 2


Figure 34 identifies the control interfaces listed and described below.

Label | Endpoints | Function(s)
A' | DOCSIS OSS – Translator/vCMTS | Configuration and management requests & responses
B' | DPoE System (vCMTS) – vCM | Configure & retrieve status from vCMs & R-OLTs
C1' | DPoE System (vCMTS) – R-OLT | Configure & retrieve status from vCMs & R-OLTs; user traffic, including IP HSD, L2 HSD, and MEF services
C2' | vCMs – R-OLTs | Configure & retrieve status from vCMs & R-OLTs
C3' | vCMs – ONUs | Configure & retrieve status from vCMs & R-OLTs

The VPI architecture allows for, and assumes, multiple OLTs and many ONUs (proxied by vCMs) within the scope of each DPoE System, with many OLTs exchanging traffic with components on the Internet via one or more routers. These system characteristics lead to the following assumptions:

• Many management connections need to be established with OLTs and ONUs (vCMs)

• Many data connections need to be established between router(s) and OLTs

Although tunnel protocols have been considered for the management and data connections, this option recommends using VLANs for connections between router(s) and R-OLTs. This recommendation is justified by the following design considerations:

• Requiring the R-OLT to implement a tunneling protocol will increase its complexity and cost due to the need to parse tunnel protocol header information at line rate.

• R-OLTs use Layer 2 forwarding internally, so using a Layer 2 VLAN connection between the router and the R-OLT eliminates the need for the R-OLT to translate between Layer 2 and Layer 3, which simplifies R-OLT implementation.

• Because tunnels are point-to-point connections, a separate tunnel would need to be established between the R-OLT and the router for each service type. This potentially results in the need for an R-OLT to establish and maintain many tunnels simultaneously, which increases R-OLT complexity and could necessitate a higher-performance implementation.

• Tunneling protocols do not handle multicast traffic well, so replication of multicast traffic destined for R-OLTs would need to be conducted by the router if tunnels are used between the router and R-OLTs. This can lead to congestion and forwarding delays at the router.

• MPLS tunnels are required for MEF services when forwarded to the Internet by the Router [DPoE Arch]. If tunnels are also used between the router and R-OLTs, the router is therefore required to translate between tunnel protocols, which adds to router processing load.

• Link capacities between the router and the R-OLT through intervening switches will likely be negatively impacted by tunnel protocol overhead.

• Securing links between aggregation switches and remote devices (R-OLTs, ONUs/vCMs) can be accomplished using Layer 2 security such as IEEE 802.1AE aka MACsec, rather than Layer 3 encryption which would have to be implemented for each tunnel. Layer 3 encryption for each tunnel adds complexity for the endpoint routers, R-OLTs, vCM, and VNFs.
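The tunnel-overhead consideration above can be quantified with standard header sizes. A sketch comparing a single 802.1Q tag against a typical VXLAN-over-IPv4 encapsulation (outer Ethernet + IPv4 + UDP + VXLAN headers):

```python
# Standard per-frame encapsulation overhead in bytes.
OVERHEAD = {
    "802.1Q tag": 4,
    "VXLAN (IPv4)": 14 + 20 + 8 + 8,  # outer Ethernet + IPv4 + UDP + VXLAN
}

def goodput_fraction(frame_bytes: int, encap: str) -> float:
    """Fraction of link bytes carrying the original frame."""
    return frame_bytes / (frame_bytes + OVERHEAD[encap])

assert OVERHEAD["VXLAN (IPv4)"] == 50
# For a 1518-byte frame, VXLAN costs roughly 3% more link capacity than a tag.
assert goodput_fraction(1518, "802.1Q tag") > goodput_fraction(1518, "VXLAN (IPv4)")
```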

The management interfaces C1', C2', and C3' can be any kind of point-to-point connections, or they can also be VLANs. Because these are typically low-speed control connections, they can be implemented with sophisticated tunnels. Note, however, that implementing these connections with feature-rich Layer 2 tunnels (e.g., L2TP) or with Layer 3 tunnels can add significant complexity to the vCM or vCMTS.

The control connections between the vCM and R-OLT (C2') and between the vCM and ONU (C3') both need to be terminated on the R-OLT, so it is not recommended that the implementation provide direct OAM access to the ONU unless the MAC is involved.


9.4.3.3 Option 3

This option references the DPoE v2.0 architecture [DPoE Arch], which defines interface D as the DOCSIS IP network-to-network interface (NNI), also referred to as the Network Systems Interface (NSI) in DOCSIS specifications. The DPoE Architecture specification also defines interface Tu between the R-OLT and the D-ONU. This option defines a new interface, referenced as Tu', representing the Converged Interconnect Network (CIN) between the DPoE System and the R-OLT. Interface Tu' includes point-to-point optical links and the switching fabric in a service area, such as the area serviced by a cable operator's headend.

This option identifies the location of DPoE components and functions for disaggregated implementation as follows. These components are illustrated in Figure 35:

• Remote OLT (R-OLT)

○ Located off an optical fiber-based CIN

• DPoE System Controller

○ Located in the operator’s network where it can access the CIN and any routing infrastructure. Because it is not in the data path, it can be located anywhere, as long as latency and connectivity to the mapped R-OLTs are maintained.

○ If R-OLT management and bootstrap traffic is transported via management VLAN, the VLAN must extend to the DPoE Controller.

• Router (RP or RPE)

○ Located anywhere needed in the operator’s network, likely in a headend facility

• SDN Controller

○ Located anywhere needed in the operator’s network.

○ Requires connectivity to all switch fabric in the operator’s headend as well as to all R-OLTs and DPoE Controllers.

Option 3 characterizes the CIN as not utilizing active routing protocols but being capable of forwarding Layer 3 traffic. The CIN is implemented as a 10 Gbps or 40 Gbps optical network using Ethernet framing and carries both management (control plane) traffic and subscriber (data plane) traffic to and from the R-OLT. Switch fabric in the service provider’s central facility, such as the cable operator’s headend, terminates the point-to-point optical connections. The system router and DPoE System Controller are located off the switch fabric. Refer to Figure 35.

The entities RP and RPE are the router interfaces to and from the Operator OSS/BSS networks. These interfaces may be shared between the RPE and RP for the data plane in the diagram, or they may be separate interfaces. The RP or RPE can be an actual physical router interface or a virtual router instance on a host.

The logical link D' is shown in the diagrams as a separate tunneled infrastructure. This link can be realized as VXLAN-tunneled traffic or as single-tagged or double-tagged VLAN traffic, based on Operator preference. The R-OLT can support single or multiple NSI connections to the 802.1 switch or switches. Interface Tu' carries all control plane traffic and interface D' carries all data plane traffic. Interface D in this proposal also carries traffic for interface ME for MEF services. Interface Tu' can be realized over a VLAN (L2) based network or over a VXLAN (L3) network, based on Operator preference.
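When VXLAN is chosen for these interfaces, separating the two planes reduces to assigning each a distinct VXLAN Network Identifier (VNI). A sketch of the 8-byte VXLAN header per RFC 7348; the VNI values are arbitrary examples, not operator assignments:

```python
import struct

TU_PRIME_VNI, D_PRIME_VNI = 100, 200  # example VNIs for Tu' and D'

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags, reserved, 24-bit VNI, reserved."""
    assert 0 <= vni < 2 ** 24
    # First byte 0x08 sets the 'I' flag (VNI valid); reserved fields are zero.
    return struct.pack("!I", 0x08 << 24) + struct.pack("!I", vni << 8)

hdr = vxlan_header(TU_PRIME_VNI)
assert len(hdr) == 8
assert int.from_bytes(hdr[4:7], "big") == TU_PRIME_VNI
```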

In this option, the DPoE Controller entity can be located anywhere within the Operator’s network, subject to some restrictions. When interface Tu' is configured as a VLAN, the VLAN must be reachable by the DPoE Controller. The DPoE Controller contains the DPoE functions relating to firmware management of R-OLTs and ONUs; provides the interfaces for SNMP, logging, notifications, and IPDR reporting; and maintains an interface to the Operator’s SDN controllers for managing L2VPN and MEF related services in the Operator’s network. The DPoE Controller also includes the vCMs that are instantiated for the set of R-OLTs it manages.


Figure 35 - Disaggregated DPoE Architecture Option 3

Option 3 defines the following tunnels for the disaggregated DPoE architecture, as shown in Figure 35:

• Management traffic for the R-OLT: via interface Tu'. The traffic on this interface consists of the initial bootstrapping or startup of the R-OLT, including management VLAN (if needed), IP address, and authentication and authorization traffic as required by the operator. Software upgrades of both the R-OLT and ONUs are managed over this interface. VLAN tags are a possible implementation for management traffic over this interface.

• Subscriber data traffic: Option 3 recommends the use of VXLAN between the R-OLT and the router or vRouter, but allows that VLAN is a possible solution as well.

9.4.3.4 Option 4

The disaggregated DPoE architecture defined by Option 4 is composed of the following components:

• White box switches that are used as spine and leaf fabric

• White box servers that host virtual machines (VM) which run virtual network functions (VNF)

• One or more SDN controller(s) that serve as the operating system for access networks.

• A Network Functions Virtualization (NFV) orchestrator

This architecture assumes the vCMTS, vCM, and other virtualized functions are deployed in virtual machines running on the white box servers in the service provider’s central location, such as a cable operator’s headend. The NFV orchestrator and SDN controller(s) may be open source. If the service provider implements multiple access network technologies, such as PON, HFC, and/or wireless, a separate SDN controller can be implemented for each access network technology. When multiple SDN controllers are implemented, they are orchestrated using an SDN “super controller” or orchestration function. The NFV Orchestrator manages the lifecycle of virtual functions such as the vRouter, vCMTS, and vCM.
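What "manages the lifecycle" entails can be sketched as a small state machine over instantiate/start/stop/terminate operations. This is a common simplification for illustration, not an ETSI NFV MANO implementation:

```python
# Allowed lifecycle transitions for a VNF, as a simple state machine.
TRANSITIONS = {
    "null": {"instantiate": "instantiated"},
    "instantiated": {"start": "running", "terminate": "null"},
    "running": {"stop": "instantiated", "scale": "running"},
}

class Orchestrator:
    """Toy NFV orchestrator tracking the state of each managed VNF."""

    def __init__(self):
        self.vnfs = {}

    def request(self, vnf: str, op: str) -> str:
        state = self.vnfs.get(vnf, "null")
        new_state = TRANSITIONS[state].get(op)
        if new_state is None:
            raise ValueError(f"{op!r} not allowed for {vnf} in state {state!r}")
        self.vnfs[vnf] = new_state
        return new_state

orch = Orchestrator()
for vnf in ("vRouter", "vCMTS", "vCM"):
    orch.request(vnf, "instantiate")
    orch.request(vnf, "start")
assert orch.vnfs["vCMTS"] == "running"
```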

Disaggregated DPoE architecture Option 4 obviates the need to send traffic to a centralized VSI: because the cloud is implemented in the headend (edge), only the DPoE fully distributed forwarding mode is used [DPoE Arch].

CableLabs DPoE Architecture specification defines Fully Centralized DPoE forwarding mode and Fully Distributed DPoE forwarding mode [DPoE Arch]. Option 4 recommends following the fully distributed model, because for VPI no need has been identified for forwarding traffic to a centralized Virtual Switching Instance (VSI).


Option 4 allows for the OLT to either be deployed in the service provider’s central facility, such as the cable operator’s headend, or in a remote location such as in a node in the HFC plant.

OLT in the Headend

If the OLT is deployed in the headend, Layer 2 switching can be used for data plane traffic forwarding. For configuration and management functions, IEEE 1904.2 is used to carry eOAM frames between the OLT and the vCM. Dynamic service provisioning can be done using the Dynamic D-ONU Configuration Update mechanism [DPoE MULPI]. Option 4 recommends SNMP or NETCONF for management of an OLT deployed in the service provider's headend.
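The eOAM frames referenced above ride on the standard Ethernet OAM framing of IEEE 802.3 Clause 57: the Slow Protocols multicast address, EtherType 0x8809, OAM subtype 0x03, and the Organization Specific OAMPDU code (0xFE) used for vendor/DPoE extensions. A sketch of that framing; the source MAC, OUI, and payload bytes are placeholders:

```python
import struct

SLOW_PROTO_DA = bytes.fromhex("0180c2000002")  # Slow Protocols multicast DA
ETHERTYPE_SLOW = 0x8809
OAM_SUBTYPE, ORG_SPECIFIC_CODE = 0x03, 0xFE

def eoam_frame(src_mac: bytes, oui: bytes, payload: bytes) -> bytes:
    """Assemble an Organization Specific OAMPDU inside an Ethernet frame."""
    header = SLOW_PROTO_DA + src_mac + struct.pack("!H", ETHERTYPE_SLOW)
    flags = b"\x00\x00"  # 2-byte OAM flags field, zeroed for illustration
    return (header + bytes([OAM_SUBTYPE]) + flags
            + bytes([ORG_SPECIFIC_CODE]) + oui + payload)

# Placeholder source MAC, OUI, and payload.
frame = eoam_frame(bytes(6), bytes.fromhex("001000"), b"\x01")
assert frame[12:14] == b"\x88\x09" and frame[14] == 0x03 and frame[17] == 0xFE
```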

Figure 36 - Disaggregated DPoE Architecture Option 4 - OLT in the Headend

Remote OLT

When the OLT is remote, such as if it is deployed in the service provider’s node, separate VxLAN tunnels are recommended for control plane traffic and data plane traffic.

Figure 37 - Disaggregated DPoE Architecture Option 4 - Remote OLT

9.4.3.5 Proposals Summarized

Table 36 summarizes and contrasts the four vendor proposals with respect to traffic separation, data/management plane, and logical/physical location.


Table 36 - Summary of Disaggregated DPoE Architecture Options

Option | vCMTS–OLT Management Connection | vCM–ONU Management Connection | Router–OLT Data Connection | Virtual Component Location | Physical Component Location
Option 1 (see Figure 32) | IEEE 1904.2 | IEEE 1904.2 | VLAN | vCMTS: Headend; vCM: Headend | R-OLT: Node
Option 2 (see Figure 34) | VLAN1 (endpoint in R-OLT) | VLAN1 (endpoint in R-OLT) | VLAN2 | vCMTS: Headend; vCM: Headend/Node | R-OLT: Node
Option 3 (see Figure 35) | VXLAN or VLAN (DPoE System Controller equivalent to vCMTS) | VXLAN or VLAN (tunnel terminated at R-OLT) | Recommends VXLAN; double-tagged VLAN acceptable if CIN is L2 | DPoE System Controller: Headend; vCM: Headend | R-OLT: Node
Option 4 – OLT in Headend (see Figure 36) | IEEE 1904.2 | IEEE 1904.2 | VXLAN | vCMTS: Headend; vCM: Headend | R-OLT: Headend
Option 4 – R-OLT in Node (see Figure 37) | VXLAN | VXLAN | VXLAN | vCMTS: Headend; vCM: Headend | R-OLT: Node

The table demonstrates a subset of the possible variety and permutations, but trends are identifiable.

• Data plane traffic is separated by VLAN tags rather than Layer 3 tunneling.

• Management plane traffic is separated by VXLAN tunnels.

• The vCMTS is emerging as a common VNF.

• IEEE 1904.2 is the preferred mechanism to transport extended OAM.

The disaggregated DPoE options above represent different variations on the virtualization architecture for EPON. As MSOs solidify their SDN/NFV approach and investigate virtualization in the access network for DOCSIS and DPoE, they have a set of viable options from which they may choose, or which they may merge. Essentially, this gives the MSO freedom to incorporate the best of the various options as fits their network. The current analysis also provides ample opportunity for vendors to differentiate through innovation.

The analysis results are intended to provide a migration path that begins with the chassis-based DPoE System and ends with a disaggregation solution that satisfies an operator's architectural and deployment requirements. The options presented are but four solutions; each reveals subtle but significant differences. They provide a sample of the flexibility enabled by SDN, NFV, and VPI.

It should also be noted that this report aligns the DPoE access network with the Distributed CCAP Architecture (DCA) of DOCSIS. In addition, this analysis concludes that MSOs could benefit from leveraging DCA infrastructure, if not its tunneling protocols (e.g., L2TPv3).

Finally, it must be noted that although this technical report contains a high-level decomposition of DPoE and describes the high-level architecture and connections of the components, a much deeper analysis is needed to fill the gaps and understand the details of a fully disaggregated DPoE solution.

9.4.4 System Operation (A Day in the Life of a Packet)

The disaggregated architecture was analyzed at a high level with an "A Day in the Life of a Packet" exercise. This included verifying that the underlying structural components were in place and configured to successfully forward a packet upstream and downstream. See Figure 38 below. The diagrams assume time elapses from top to bottom.

The following Figure 38 illustrates the infrastructure over which the data packets will flow.

Figure 38 - Infrastructure Over Which the Data Packets Will Flow

The following Figure 39 illustrates a day in the life of a packet on an IPHSD service flow.

Figure 39 - A Day in the Life of a Packet on an IPHSD service flow

Finally, Figure 40 illustrates a day in the life of a packet on an L2VPN service flow.

Figure 40 - A Day in the Life of a Packet on an L2VPN Service Flow

10 CONCLUSIONS

10.1 Security Threat Analysis

There are three main VPI management interfaces: Service-to-Controller, Controller-to-Network, and DPoE System-to-OLT (see Section 5). If any of the traffic on these interfaces is sent over untrusted network segments, it could be vulnerable to attack. Attacks include spoofing of endpoints, traffic snooping, and modification of data in transit. These attacks could result in unauthorized access to or control of devices, compromise of sensitive data, and malicious data tampering.

Suggested security controls to secure these interfaces include mutual authentication, encryption, and message authentication. TLS can be used to secure the RESTful management protocols. IPSec can be used to secure tunneling protocols. Certificate credentials can provide strong mutual authentication and reduce key management complexity. Certificates issued from the cable industry PKI managed by CableLabs are recommended.
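As a sketch of what mutual authentication on these interfaces could look like in software, the Python fragment below builds a client-side TLS context that both verifies the server and presents a client certificate. The file-path parameters are hypothetical placeholders; in a real deployment the credentials would be issued from the cable industry PKI noted above.

```python
import ssl

def make_mutual_tls_context(ca_file=None, cert_file=None, key_file=None):
    """Client-side TLS context configured for mutual authentication.

    All file paths are hypothetical placeholders, not names from this report.
    """
    # PROTOCOL_TLS_CLIENT enables hostname checking and server
    # certificate verification by default.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)  # trust anchor for the peer
    if cert_file:
        # Presenting our own certificate makes the authentication mutual.
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx
```

The same context would be handed to the HTTP library carrying the RESTCONF traffic, so the transport security is independent of the management protocol itself.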

Even if the management interfaces described above are secured, devices can be vulnerable to attack resulting in malicious software installation. This is especially true for devices located in an untrusted environment. Attacks could exploit software lacking appropriate security controls, e.g., input checking and authentication. Devices would be subject to the same threats as the main VPI management interfaces including man-in-the-middle attacks.

To help prevent attacks on devices, software development should include good security design/support. Devices should be hardened by removing any unnecessary interfaces and processes. Any that remain should require strong authentication and protect exchanged data. Software updates should also be secured by authenticating downloads and verifying software loaded during the bootup process.

Other vulnerabilities also exist within the disaggregated DPoE architecture (see Section 9). Data plane traffic is only encrypted between the ONU and OLT. If the DPoE System-to-OLT interface sends data plane traffic over an untrusted network segment, this traffic could be vulnerable to attack. Also, a man-in-the-middle attack could occur at the OLT, where subscriber data traffic could be intercepted.

Two possible security controls for mitigating Disaggregated DPoE data plane attacks are:

• Security-harden the OLT and secure the data plane tunnel between the OLT and the DPoE System using IPSec. If a network access control architecture similar to DOCSIS Remote PHY is used, 802.1X with MACsec can be used.

• Extend key management and traffic encryption into the DPoE System and add mutual authentication between the OLT and DPoE System. This is also similar to the DOCSIS Remote PHY architecture.

This Security Analysis indicates the need for solutions which can be addressed in future projects.

10.2 Challenges and Gaps

The Virtual Provisioning Interfaces approach described in this report is a vision for the future of service provider network provisioning, with potential benefits of increased flexibility, efficiency, and automation. The vision is based partly on existing technology and partly on widely held assumptions about the direction of technology, such as virtualization. The following are some of the challenges with VPI identified by the working group.

10.2.1 Operational

Operational challenges are those challenges related to deploying the new technology, integrating it with existing technology, and ensuring its proper function on a daily basis.

• Risk of new technology: Technology such as the disaggregation and virtualization of network functions is in its infancy, not yet established, and scarcely tested outside of data centers.

• Evolution of provisioning systems: Cable operator provisioning systems are each unique, complex, and highly co-dependent on other applications and systems such as billing and management systems. Modifications must be made carefully to ensure they do not adversely impact any aspect of existing operational systems. Virtualization of the provisioning system is a major architectural shift from existing systems, so it must be introduced in a way that does not interrupt existing systems or introduce any type of error or malfunction. Therefore, the planning, execution, and validation of introducing VPI into an existing OSS will be a major challenge for service providers.

10.2.2 Dependencies

Dependency challenges are those challenges related to obtaining a new technology for use with the solution.

• 1904.2 maturity/timing

IEEE 1904.2 is a developing standard that will describe a management channel for customer-premises equipment (CPE) connected to Ethernet-based subscriber access networks. The management channel will have multi-hop capabilities to allow management of various CPE devices located behind an Optical Network Unit (ONU), a Coaxial Network Unit (CNU), a Residential Gateway (RGW), etc. The standard will describe the message format, as well as processing operations and forwarding rules at the intermediate nodes.

The use of 1904.2 for a disaggregated EPON solution will depend on the completion of the development and publication of the standard by the IEEE. If the standard is not ready in time, solutions will use other mechanisms to get the configuration information across.

• MSO review and alignment with SDN/NFV architecture: The VPI approach will need review and alignment as each MSO defines and develops its own SDN/NFV architecture.

10.2.3 Technical

Technical challenges are those challenges related to developing appropriate technical solutions.

• Endpoint and link security for disaggregated DPoE

Similar to the prior solutions (CMTS with CMs, DPoE System with ONUs, RPHYs with CCAP), remote devices need to be secured via strong mutual authentication mechanisms, and the links between hub locations and R-OLTs need to be encrypted. This document assumes such encryption and authentication mechanisms exist; however, they still need to be clarified and standardized.

• Multicast effects on bandwidth for disaggregated DPoE

The bandwidth consumption impacts of multicast video distribution are mentioned during the comparison of "connection options"; however, an analysis of how multicast video distribution should be done, and details of its impact on each device (i.e., routers, aggregation switches, R-OLTs, and controlling blocks), are not discussed.

• Disaggregated DPoE gaps: This TR has not focused on the interfaces between the vCMTS and vCMs, or similarly between the vCMs and the R-OLT and between the vCMTS and the R-OLT. It has mainly focused on the connections, for the management and data paths, between the virtualized functions in the headend and the R-OLT.

• Network abstraction: SDN/NFV architectures need to define a clear view of what the orchestrator is expected to do, along with the SDN controller.

• Dynamic provisioning in DPoE

Dynamic changes to service provisioning are a gap in the current DPoE standards and implementations. The current model is reboot-and-reconfigure with the new services, which is not effective for delivering services without downtime. This model also does not work for PacketCable voice service deployments.

Page 105: Virtualization and Network Evolution Virtual Provisioning ...

Virtual Provisioning Interfaces Technical Report VNE-TR-VPI-V01-170424

04/24/17 CableLabs 105

10.3 Next Steps

There are various potential next steps following the research and analysis from this project. All of them are possible future project efforts.

10.3.1 VPI Requirements Imposed on Other Specifications

• CCAP/CMTS required to implement a RESTCONF interface

• DPoE System required to implement a RESTCONF interface

• CCAP and DPoE System need to format data (JSON or XML) for RESTCONF responses

• CCAP/CMTS required to support the (southbound) YANG model to configure services

• DPoE System required to support the (southbound) YANG model to configure services

• CCAP and DPoE System Controller RESTCONF interfaces proxy for the CMs/vCMs for configuration and status monitoring

• DPoE System required to redefine and implement dynamic service provisioning

• DPoE ONU required to implement dynamic service provisioning

• Consider dropping config file requirements for DPoE

• Update DPoE & DOCSIS specifications to be consistent with VPI TR

10.3.2 Virtualized Network Architecture

• SDN controller: The MSO network architect needs to clearly define all roles, responsibilities, and tasks performed by the SDN controller.

• Orchestration: The role of an orchestrator, and the methods by which it instantiates VNFs, manages their lifecycle, and ultimately implements the service, need to be well defined and developed.

• Define vCCAP functionality and architecture; this can include DOCSIS, DPoE, and video.

10.3.3 Disaggregated EPON Architecture

• Define interoperable OLT-vCMTS interfaces (e.g., eOAM)

• Define DPoE System Controller/vCMTS functionality.

10.3.4 Security Architecture

• Define specific Security controls for the VPI architecture

10.4 Summary

Dynamic Provisioning

The Virtual Provisioning Interfaces architecture has brought in a new way of provisioning, managing, and operating the access network for cable operators. The VPI architecture enables the operator to transition to and introduce software-defined networking (SDN) principles into their network. It has created a method for provisioning services dynamically on the access network. The VPI architecture has defined standardized interfaces from a northbound application to the SDN controller, and southbound from the controller to each access network element, such as the CMTS and DPoE System. The DOCSIS CMTS can provision many services dynamically on the CM. On the DPoE System, a gap exists around the dynamic change and provisioning of services.

Network Abstraction

This abstraction of the underlying network simplifies service application development by the operator and removes the silos in service provisioning for different access networks. The operator can now develop applications and deploy them across multiple access networks, because the application always talks to an SDN controller. The SDN controller abstracts all of the underlying network complexity from the higher-level applications. The SDN controller takes incoming requests from the application, determines the access network type on which the service needs to be provisioned, and provisions the underlying devices with the appropriate parameters.
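The dispatch step described above can be sketched as a simple lookup from access network type to a southbound provisioning driver. All names below are hypothetical; a real controller would dispatch to its southbound plugins rather than plain functions.

```python
# Hypothetical per-access-network drivers; in a real controller these would
# issue RESTCONF requests to the CMTS or DPoE System.
def provision_docsis(subscriber_id: str, service: str) -> str:
    return f"CMTS configured for {subscriber_id}: {service}"

def provision_dpoe(subscriber_id: str, service: str) -> str:
    return f"DPoE System configured for {subscriber_id}: {service}"

# One driver per access network type; the northbound application never
# sees this table, only the generic provisioning request.
DRIVERS = {
    "docsis": provision_docsis,
    "epon": provision_dpoe,
}

def provision(access_type: str, subscriber_id: str, service: str) -> str:
    """Route a service request to the driver for its access network."""
    try:
        driver = DRIVERS[access_type]
    except KeyError:
        raise ValueError(f"unknown access network type: {access_type}")
    return driver(subscriber_id, service)
```

The point of the sketch is that the application-facing call signature stays the same regardless of which access network ultimately carries the service.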

Interface protocols

The VPI working group has defined RESTCONF as the interface of choice between applications and the SDN controller. RESTCONF defines standard mechanisms to allow Web applications to access the configuration data, state data, data-model-specific Remote Procedure Call (RPC) operations, and event notifications within a networking device, in a modular and extensible manner. RESTCONF uses HTTP methods to provide CRUD operations on a conceptual datastore containing YANG-defined data, which is compatible with a server that implements NETCONF datastores.
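As an illustration of the CRUD-to-HTTP mapping, the sketch below renders RESTCONF request lines. The mapping follows RFC 8040; the /restconf/config base path is an OpenDaylight-style convention and is an assumption, not something mandated by this report.

```python
# RESTCONF maps CRUD operations onto plain HTTP methods (RFC 8040).
CRUD_TO_HTTP = {
    "create": "POST",
    "read": "GET",
    "update": "PUT",
    "delete": "DELETE",
}

# Assumed OpenDaylight-style base path for the configuration datastore.
BASE = "/restconf/config"

def request_line(op: str, resource: str) -> str:
    """Render the HTTP request line for a CRUD operation on a YANG resource."""
    method = CRUD_TO_HTTP[op]
    return f"{method} {BASE}/{resource} HTTP/1.1"

# e.g. reading the whole VPI configuration subtree:
#   request_line("read", "vpi:vpi")
```

Because the operations are ordinary HTTP, any web application stack can act as a VPI client without a NETCONF session layer.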

Information and Data models

The VPI working group has defined information models and data models for both the northbound and southbound interfaces. The northbound interface is a simplified model that allows applications to request services to be instantiated on the network. The northbound interface is implemented by the SDN controller acting as a RESTCONF Server, with the application being the client. The southbound interface is a more complex, device-specific data model that includes all the detailed parameters needed to configure the access network appropriately. The southbound interface is implemented by the CMTS and DPoE System, with each of them acting as a RESTCONF Server and the SDN controller being the client.

Use cases

The current data models mainly support the IP HSD and L2VPN use cases. Both the northbound and southbound data models provide baseline functionality that can be extended to cover other applications such as gaming or IP video. The southbound data models can be thought of as an extension to, and fit within, the D3.1 YANG model hierarchy, which, when implemented by the CMTS/DPoE System, can be presented as a single RESTCONF Server implementation.

Disaggregated EPON Architecture

The VPI working group also investigated disaggregated DPoE architectures. EPON architectures will evolve from integrated DPoE Systems, to disaggregated DPoE Systems where the R-OLT is separated from the management and control plane functions, and finally to native EPON solutions with the control and management plane functions abstracted by an SDN controller. The work done by the VPI working group focuses on the first step, disaggregated DPoE architectures, by defining a generalized disaggregated architecture and then defining standardized protocols to connect the different components. The group has defined VLAN and VXLAN as the data plane connection protocols between the R-OLT and the router at the headend. VLAN, VXLAN, or IEEE 1904.2 could be used as the connection protocol between the R-OLT and the virtualized DPoE System/DPoE Controller/vCMTS VNFs at the headend.

Appendix I VPI RESTCONF ENDPOINTS

This appendix lists the supported VPI RESTCONF operations for all endpoints enabled by the Service-to-Controller interface and by the Controller-to-Network interface.

I.1 Service-to-Controller VPI Endpoints

The Service-to-Controller VPI endpoints listed below are derived from [VPI YANG] and generated by OpenDaylight yangtools. When OpenDaylight is running locally, its tool for displaying all RESTCONF endpoints for all loaded YANG modules is available at http://localhost:8181/apidoc/explorer/index.html.

POST /config/

GET /config/vpi:vpi/

PUT /config/vpi:vpi/

DELETE /config/vpi:vpi/

POST /config/vpi:vpi/

GET /config/vpi:vpi/subscriber/{subscriber-id}/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/

POST /config/vpi:vpi/subscriber/{subscriber-id}/

GET /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/

POST /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/

GET /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/ipv6/{ipv6-address}/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/ipv6/{ipv6-address}/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/ipv6/{ipv6-address}/

GET /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/cpe-status/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/cpe-status/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/cpe-status/

GET /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/cpe-flow/{flow-id}/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/cpe-flow/{flow-id}/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/cpe/{cpe-mac-address}/cpe-flow/{flow-id}/

GET /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/

POST /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/

GET /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/

POST /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/

GET /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/service-status/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/service-status/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/service-status/

GET /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/flow/{flow-ref}/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/flow/{flow-ref}/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/flow/{flow-ref}/

POST /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/flow/{flow-ref}/

GET /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/flow/{flow-ref}/flow-status/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/flow/{flow-ref}/flow-status/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/flow/{flow-ref}/flow-status/

GET /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/flow/{flow-ref}/classifiers/{classifier-id}/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/flow/{flow-ref}/classifiers/{classifier-id}/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/flow/{flow-ref}/classifiers/{classifier-id}/

GET /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/

POST /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/

GET /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/encapsulationParameters/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/encapsulationParameters/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/encapsulationParameters/

GET /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/enpoints/{endpoint-name}/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/enpoints/{endpoint-name}/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/enpoints/{endpoint-name}/

POST /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/enpoints/{endpoint-name}/

GET /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/enpoints/{endpoint-name}/endpointParameters/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/enpoints/{endpoint-name}/endpointParameters/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/enpoints/{endpoint-name}/endpointParameters/

POST /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/enpoints/{endpoint-name}/endpointParameters/

GET /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/enpoints/{endpoint-name}/endpointParameters/ipv6/{ipv6-address}/

PUT /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/enpoints/{endpoint-name}/endpointParameters/ipv6/{ipv6-address}/

DELETE /config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/service/{service-id}/tunnel-parameters/{vpn-id}/enpoints/{endpoint-name}/endpointParameters/ipv6/{ipv6-address}/

GET /operational/vpi:vpi/
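A client consuming the templated endpoints above must substitute concrete values for the {subscriber-id}-style path parameters. The helper below is a hypothetical illustration of that substitution, not part of the report.

```python
def expand_endpoint(template: str, params: dict) -> str:
    """Fill {subscriber-id}-style placeholders in an endpoint template.

    Raises if any placeholder is left unfilled, so malformed requests
    fail before they reach the controller.
    """
    out = template
    for key, value in params.items():
        out = out.replace("{" + key + "}", value)
    if "{" in out:
        raise ValueError(f"unfilled parameters remain in {out}")
    return out

# Hypothetical parameter values for one of the endpoints listed above:
url = expand_endpoint(
    "/config/vpi:vpi/subscriber/{subscriber-id}/product/{product-id}/",
    {"subscriber-id": "sub-42", "product-id": "gold-hsd"},
)
```

The resulting path would then be issued against the controller with the appropriate HTTP method (GET, PUT, POST, or DELETE) from the listing.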

I.2 Controller-to-Network VPI Endpoints

The Controller-to-Network VPI endpoints listed below are derived from [CM SERVICES CFG YANG] and [CMTS STATUS YANG].

[CM SERVICES CFG YANG]

Configuration Endpoints

POST /config/

GET /config/docsis-cm-services-cfg:cm-services-cfg/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/

POST /config/docsis-cm-services-cfg:cm-services-cfg/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/ip-tos-overwrite/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/ip-tos-overwrite/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/ip-tos-overwrite/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/timeout-for-qos-parameters/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/timeout-for-qos-parameters/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/timeout-for-qos-parameters/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/cm-attribute-masks/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/cm-attribute-masks/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/cm-attribute-masks/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/application-id/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/application-id/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/application-id/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/aqm-parameters/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/aqm-parameters/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/aqm-parameters/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/mesp-parameters/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/mesp-parameters/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-parameters/mesp-parameters/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-classifier-cfg/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-classifier-cfg/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-classifier-cfg/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-classifier-cfg/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-classifier-cfg/classifier-cfg/{classifier-id}/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-classifier-cfg/classifier-cfg/{classifier-id}/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-classifier-cfg/classifier-cfg/{classifier-id}/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/l2vpn-cfg/{vpn-id}/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/l2vpn-cfg/{vpn-id}/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/l2vpn-cfg/{vpn-id}/


POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/l2vpn-cfg/{vpn-id}/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/l2vpn-cfg/{vpn-id}/nsi-encapsulation-param/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/l2vpn-cfg/{vpn-id}/nsi-encapsulation-param/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/l2vpn-cfg/{vpn-id}/nsi-encapsulation-param/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/l2vpn-cfg/{vpn-id}/tpid-tanslation-param/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/l2vpn-cfg/{vpn-id}/tpid-tanslation-param/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-l2vpn-cfg/l2vpn-parameters/l2vpn-cfg/{vpn-id}/tpid-tanslation-param/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/mcast-join-authorization/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/mcast-join-authorization/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/mcast-join-authorization/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/mcast-join-authorization/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/mcast-join-authorization/mcast-join-authorization-rules/{rule-priority}/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/mcast-join-authorization/mcast-join-authorization-rules/{rule-priority}/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/mcast-join-authorization/mcast-join-authorization-rules/{rule-priority}/


GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/static-multicast/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/static-multicast/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/static-multicast/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/static-multicast/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/static-multicast/static-mcast/{static-mcast-group-addr}/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/static-multicast/static-mcast/{static-mcast-group-addr}/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-multicast-cfg/static-multicast/static-mcast/{static-mcast-group-addr}/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/security-parameters/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/security-parameters/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/security-parameters/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/security-parameters/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/security-parameters/src-addr-verification/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/security-parameters/src-addr-verification/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/security-parameters/src-addr-verification/

POST /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/security-parameters/src-addr-verification/


GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/security-parameters/src-addr-verification/sav-addr-prefix/{sav-addr-prefix-val}/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/security-parameters/src-addr-verification/sav-addr-prefix/{sav-addr-prefix-val}/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/security-parameters/src-addr-verification/sav-addr-prefix/{sav-addr-prefix-val}/

GET /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/cm-attribute-masks/

PUT /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/cm-attribute-masks/

DELETE /config/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-cpe-mgmt-cfg/cm-attribute-masks/

Status Endpoints

GET /operational/docsis-cm-services-cfg:cm-services-cfg/

GET /operational/docsis-cm-services-cfg:cm-services-cfg/cm-entry/{cm-mac-address}/flow-entry/{global-flow-id}/cm-qos-cfg/service-flow/sf-status/

Remote Procedure Calls (RPCs)

POST /operations/docsis-cm-services-cfg:reboot-cm

POST /operations/docsis-cm-services-cfg:list-all-configured-CMs

POST /operations/docsis-cm-services-cfg:upgrade-cm-software-image

[CMTS STATUS YANG]

Status Endpoints

GET /operational/docsis-cmts-status:cmts-status/

GET /operational/docsis-cmts-status:cmts-status/cmts-mac-domain/{cmts-mac-address}/

GET /operational/docsis-cmts-status:cmts-status/cmts-mac-domain/{cmts-mac-address}/service-class/{scn}/

GET /operational/docsis-cmts-status:cmts-status/cmts-mac-domain/{cmts-mac-address}/cm/{cm-mac-address}/

GET /operational/docsis-cmts-status:cmts-status/cmts-mac-domain/{cmts-mac-address}/cm/{cm-mac-address}/ipv6/{ipv6-address}/

Remote Procedure Calls (RPCs)

POST /operations/docsis-cmts-status:list-active-CMs

POST /operations/docsis-cmts-status:list-active-SCNs
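The RESTCONF-style endpoints listed above are straightforward to drive programmatically. The Python sketch below expands the {cm-mac-address}/{global-flow-id} templates into concrete URLs; the controller hostname is a hypothetical example (the report does not define one), and the commented client calls assume a library such as `requests`.

```python
# Sketch: building RESTCONF URLs for the docsis-cm-services-cfg endpoints above.
# The controller host is an illustrative assumption, not part of this report.

BASE = "https://vpi-controller.example.net/restconf"

def cm_flow_url(cm_mac: str, flow_id: int, suffix: str) -> str:
    """URL for a config resource under one cable modem's flow entry."""
    return (f"{BASE}/config/docsis-cm-services-cfg:cm-services-cfg"
            f"/cm-entry/{cm_mac}/flow-entry/{flow_id}/{suffix}")

def rpc_url(module: str, rpc: str) -> str:
    """URL for invoking an RPC, e.g. docsis-cm-services-cfg:reboot-cm."""
    return f"{BASE}/operations/{module}:{rpc}"

aqm = cm_flow_url("00:10:12:34:56:78", 1001,
                  "cm-qos-cfg/service-flow/sf-parameters/aqm-parameters/")
# A client such as `requests` would then issue, for example:
#   requests.put(aqm, json=body, auth=(user, pw))   # replace AQM parameters
#   requests.post(rpc_url("docsis-cm-services-cfg", "reboot-cm"),
#                 json={"input": {"cm-mac-address": "00:10:12:34:56:78"}})
```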


Appendix II Transport Between R-OLT and DPoE System VNF(s)

II.1 Disaggregated EPON Transport Technology Evaluation Criteria

The disaggregated EPON architecture defined in this technical report requires transport of traffic between the DPoE System and the OLTs it supports, which may be multiple hops away on the network. The transport technology is key to enabling disaggregation of the EPON components. The optimum transport technology possesses each of the characteristics listed below, ensuring the reliability of service delivered over the disaggregated EPON access network. Technologies researched for this technical report were evaluated against these criteria, and the recommended protocols exceed the others in these categories. Although not all of the transport technologies evaluated (see Section 9) are technically tunnel protocols, the term is useful in depicting the role of the transport between the DPoE System and the OLTs it supports.

II.1.1 Provisioning

A tunnel will be created for each flow, so the ease with which the protocol allows a tunnel to be created, configured, and destroyed is an important consideration. Ease of tunnel provisioning is based on the tunnel protocol’s ability to enable the capabilities listed below:

Manual provisioning – Does the tunnel protocol allow or require configuration via a manual interface, such as a command line interface?

Dynamic provisioning – Does the protocol provide or enable frequent reconfiguration?

Autodiscovery of tunnel endpoints – Does the protocol provide a means to discover endpoints without operator intervention?

Multiple tunnel instances – How well does the protocol allow and facilitate creation and maintenance of multiple simultaneous tunnels?

Allocatable bandwidth per tunnel – How well does the protocol allow the operator to configure and change upper and lower data throughput limits?

Objects defined for real-time statistics reporting – Does an information model and/or a data model exist that supports fault management, configuration management, accounting management, performance management, security management, or other network management functions for the protocol? What data models have been implemented for the protocol, e.g., a MIB or YANG model?

Predefined models – Have specific tunnel configurations been defined for the protocol?
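One way to make these provisioning criteria concrete is to record them as a per-protocol checklist and compute a coverage score. The Python sketch below is purely illustrative; the criterion labels are paraphrased from the list above, and the sample values are placeholders, not the report’s evaluation results.

```python
# Hypothetical scoring aid for the provisioning criteria above. Sample
# checklist values are placeholders, not the report's findings.

CRITERIA = (
    "manual provisioning", "dynamic provisioning", "endpoint autodiscovery",
    "multiple tunnel instances", "allocatable bandwidth per tunnel",
    "real-time statistics objects", "predefined models",
)

def coverage(checklist: dict) -> float:
    """Fraction of the provisioning criteria a protocol satisfies."""
    return sum(bool(checklist.get(c)) for c in CRITERIA) / len(CRITERIA)

sample = dict.fromkeys(CRITERIA, True)
sample["predefined models"] = False   # placeholder value
```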

II.1.2 Operations

Operations refers to the support the tunnel protocol provides for monitoring and maintaining its function once it is provisioned and operational.

Fault healing – Does the protocol include an inherent method for detecting, isolating, and correcting tunnel malfunctions?

Redundancy – Does the protocol include a means to provide alternative data forwarding paths so that traffic can still be forwarded if the current path fails?

Resiliency – Does the protocol provide a means to recover from a fault or other malfunction and re-initiate proper traffic forwarding and reporting?

Availability and Reliability – How well does the protocol withstand potential faults and other impacts and continue operating?


Scalability – Does the protocol easily allow the number of endpoints to increase, or easily accommodate large numbers of tunnels?

Throughput and Latency – The tunnel will need to support at least the minimum throughput, and impose no more than the maximum allowed latency, required by the customer’s SLA. The ideal tunnel protocol adds no latency and is capable of line-rate throughput.

Throughput – Does the protocol forward traffic fast enough to match OLT ingress and egress traffic rates?

Bidirectional – The tunnel protocol used for the VPI disaggregated EPON architecture is required to be capable of bidirectional packet forwarding.

Latency – What features of the protocol contribute to network latency? What processing is required to implement the protocol, e.g., header inspection, encapsulation?

II.1.3 Network Layer and Data Link Layer Protocol Encapsulation

The tunnel protocol is required to support global standard OSI Network Layer and OSI Data Link Layer communications protocols to ensure compatibility with prevalent enterprise and backbone networks.

IEEE 802.3 Ethernet – Is the tunnel protocol capable of passing Ethernet traffic? Does it assume or require only the Ethernet Data Link protocol?

IPv4, IPv6, and dual stack Network Layer Encapsulation – Is the tunnel protocol capable of encapsulating IPv4, IPv6, and dual stack datagrams?

Configurable Maximum Transmission Unit (MTU) – Does the tunnel protocol allow the MTU to be configured? Do the MTU options allowed by the protocol impose constraints on traffic?

Data Link Layer Encapsulation – Is the tunnel protocol capable of encapsulating Data Link Layer frames?

Transport Layer Encapsulation – Is the tunnel protocol capable of encapsulating Transport Layer datagrams, in particular Transmission Control Protocol and User Datagram Protocol? What other Transport Layer protocols is it capable of encapsulating?
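MTU configuration interacts directly with tunnel overhead: every byte of outer and tunnel header reduces the inner datagram size that fits within a fixed link MTU. A small Python sketch, using the per-protocol header sizes quoted later in this appendix as nominal, illustrative figures:

```python
# Inner MTU available through each tunnel, for a fixed link MTU. The overhead
# values below are the header sizes quoted in Section II.2 (outer Ethernet
# excluded where applicable) and should be treated as nominal figures.

OVERHEAD_BYTES = {
    "vlan": 4,      # single 802.1Q tag
    "q-in-q": 8,    # stacked S-tag + C-tag
    "gre": 24,      # outer IP + typical GRE header
    "l2tpv3": 32,
    "vxlan": 50,    # outer MAC + IP + UDP + VxLAN header
}

def inner_mtu(protocol: str, link_mtu: int = 1500) -> int:
    """Largest inner frame/datagram that fits without fragmentation."""
    return link_mtu - OVERHEAD_BYTES[protocol]

budget = {p: inner_mtu(p) for p in OVERHEAD_BYTES}
```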

II.1.4 Quality of Service Differentiation

The tunnel protocol is required to be capable of allowing different types of datagrams, based on defined classifiers or other markings, to be treated differently depending upon operator-defined policy. This includes differences in treatment for forwarding or discarding datagrams or frames. The following considerations are applied to evaluating tunnel protocol options:

Differentiation between Control Plane and Data Plane traffic – How does the tunnel provide for distinguishing control traffic from user data traffic?

Differentiation between Data Types – How does the tunnel provide for distinguishing traffic associated with different services, such as L2VPN traffic and IPHSD traffic?

Separation of traffic – Does the tunnel protocol allow traffic within the tunnel to be separated based on datagram or frame markings?

Differential service for L2VPN – Does the tunnel protocol provide for differentiated service of L2VPN traffic?


II.1.5 Security

Security features mitigate the risk of malicious or inadvertent interception, manipulation, or destruction of user data or control traffic. Security plays an important role in ensuring customer SLAs are met by helping to ensure that device configuration is completed correctly and that the customer’s traffic is delivered only to its intended recipients and remains private.

Encryption – What encryption capability is defined for the tunnel protocol? How secure is the encryption? If the protocol does not support encryption natively, what encryption options are available for it?

Mutual Authentication – What authentication capability is defined for the tunnel protocol? If it does not support authentication natively, what authentication options are available for it?

II.1.6 Distribution Limitations

The tunnel protocol will need to accept the various distribution methods and protocols used between the DPoE System and the OLTs. Evaluation of the tunnel protocol needs to consider whether it has limitations on the type of traffic it is capable of forwarding. The following are important communication protocols and message types the VPI tunnel protocol is required to carry; only tunnel protocols that have no limitations for carrying these protocols are considered for VPI:

• IPv4 and IPv6

• Layer 2 VPN

• Internet Group Management Protocol (IGMP)

• CableLabs-Extended EPON Operations Administration and Maintenance (eOAM) Messages

II.1.7 Deployment Readiness

Viable tunnel protocol options are those that are well defined and sufficiently mature to be deployed in a commercial network. The protocol used for VPI will ideally already be successfully deployed in large commercial enterprise networks and have an active community of implementers providing solutions and support.

II.2 Transport Protocols Evaluated

As stated above, in a disaggregated DPoE system, the OLT is detached from the DPoE System and connected remotely via a high-speed link. This link forms a point-to-point architecture between the DPoE System, typically located at a headend, and one of many OLTs located closer to the network edge, typically at a node. The physical link connecting them is fiber. Therefore, bidirectional traffic that was once contained inside a DPoE chassis must now be transported over fiber between the DPoE System and the OLTs.

Transport in this context has many protocol options, including both tunnel technologies and routing/switching protocols. There is a natural bias toward tunnel technologies because they provide a private, secure path by encapsulating packets inside a transport protocol. This isolation has benefits in the areas of security, operations, and management compared to non-tunneled technologies.

Finally, protocol simplicity is usually preferable to protocol complexity, unless that complexity significantly facilitates initial administration or reduces operational issues. MSOs have multiple architectures that demand a choice of solutions, which is why both L2 and L3 solutions are presented. In a direct point-to-point topology between the DPoE chassis and the OLT, an L2 protocol fits best because of the superior latency and throughput characteristics of switching frames instead of routing packets. Conversely, a multi-hop topology could benefit from an L2 overlay, an L2.5 protocol (MPLS), or even an existing L3 protocol to avoid introducing a new protocol to the network.

Tunnel technologies have some common characteristics:

Encapsulation – An additional L2 (MAC) or L3 (IP) header and a tunnel header are added to the original data unit during transport. The original frame or packet (called the inner) is “hidden” in that routing/switching devices do not act on this information. The added L2 or L3 header, called the “outer” header, is used only for transport across the link and is then removed along with the tunnel header.

L3 Tunnel – The original packet’s IP header is “encapsulated” by placing another IP header (and tunnel header) in front of it. Again, the routing devices do not examine this inner IP header during transport. The second, or “outer,” header is used to move packets from the egress of one domain to the ingress of another. Examples include the GRE, SoftGRE, and VxLAN protocols. Note: This is often referred to as “IP in IP,” which is not strictly correct, as IP in IP has no associated tunnel header; still, it is useful to think of these tunnels as IP-in-IP technologies.

L2 Tunnel – The original frame’s MAC header is “encapsulated” by placing another MAC header (and tunnel header) in front of it. As with L3 tunnels, this inner MAC header is not visible to the switching devices during transport. The second, or “outer,” header is used to move frames from the egress of one domain to the ingress of another. Examples include the PPTP and L2TPv3 protocols. Note: This is referred to as “MAC in MAC,” which also is not strictly correct, as MAC in MAC has no associated tunnel header. However, it is useful to think of these tunnels as MAC-in-MAC protocols to easily distinguish them from IP-in-IP tunnel technologies.

Tunnel header – Each tunnel protocol has a header specified by a standards organization such as the IEEE or IETF. These headers allow additional features such as authentication, link status, magic cookies, etc.

Point-to-point – Tunnel architecture is inherently point-to-point. This is expected, but it is still listed because it perfectly fits the application of moving the OLT away from the DPoE chassis.
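The inner/outer relationship described above can be sketched in a few lines of Python: transit devices forward on the outer header only, and the far tunnel endpoint strips the outer and tunnel headers to recover the untouched inner frame. This is a generic illustration of the encapsulation idea, not any specific protocol.

```python
# Generic tunnel encapsulation: outer header + tunnel header + inner frame.
# Transit devices act only on `outer`; the inner frame is opaque payload.

def encapsulate(outer: bytes, tunnel_hdr: bytes, inner_frame: bytes) -> bytes:
    """Wrap the original ("inner") frame for transport across the tunnel."""
    return outer + tunnel_hdr + inner_frame

def decapsulate(pkt: bytes, outer_len: int, tunnel_len: int) -> bytes:
    """At the far tunnel endpoint, strip the outer and tunnel headers."""
    return pkt[outer_len + tunnel_len:]

inner = b"\xaa" * 64                      # arbitrary example frame
pkt = encapsulate(b"\x00" * 20, b"\x11" * 4, inner)
```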

Non-tunnel technologies such as MPLS, IPsec, and Segment Routing can also transport data, but without using encapsulation. These technologies use header insertion, which has a subtle but important difference: the original L2 and/or L3 headers are both seen and used during transport.

Eight (8) different transport technologies were evaluated; their most salient features are summarized in the following subsections.

II.2.1 VLAN or Q-in-Q (802.1ad)

VLANs, a Layer-2 shim often called a VLAN tag or VLAN ID, have been used for decades to segregate different data plane frames as well as control and management frames. VLANs provide 12 bits for packet flow identification (0 through 4,095), although a few VLAN numbers are reserved for specific purposes. As a header shim, VLANs are considered header insertion: they mark traffic flows but do not encapsulate headers or payload data.

VLANs are a simple method of tagging frames and can be used in both switching and routing topologies. The natural limit of just over 4,000 VLAN numbers can be a constraint if there are more traffic flows to mark than that. For that case, the IEEE devised a protocol that stacks an outer VLAN tag in front of an inner VLAN tag, called 802.1ad or, more commonly, Q-in-Q. Q-in-Q can expand beyond the limit of roughly 4,000 VLAN IDs, but it is more frequently used to move already-tagged traffic, say from an enterprise, across a service provider’s domain while preserving the original VLAN ID. For example, traffic from an enterprise site tagged with VLAN ID 173 can have another VLAN tag added at the ingress to the service provider’s network. The SP network uses the added (outer) VLAN tag to guide the traffic through its network and removes the outer VLAN tag at the egress of its network. The original (inner) VLAN tag is intact when delivered to the far-end enterprise site. The MSO application for Q-in-Q is providing Internet connectivity between multiple enterprise sites; the other application is expanding beyond the limit of roughly 4,000 VLAN IDs if that scale is needed.


Figure 41 - Q-in-Q Topology Example

Figure 42 - VLAN and Q-in-Q header comparison
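The tag layout compared in Figure 42 can be reproduced in a few lines of Python: each tag is 16 bits of TPID followed by 16 bits of TCI (3-bit PCP, 1-bit DEI, 12-bit VID). The TPID values 0x8100 (802.1Q customer tag) and 0x88A8 (802.1ad service tag) are standard; the VID values below are arbitrary examples.

```python
import struct

TPID_DOT1Q = 0x8100   # customer (inner) tag, IEEE 802.1Q
TPID_QINQ = 0x88A8    # service (outer) tag, IEEE 802.1ad

def vlan_tag(tpid: int, pcp: int, dei: int, vid: int) -> bytes:
    """Pack one 4-byte tag: TPID, then TCI = 3-bit PCP | 1-bit DEI | 12-bit VID."""
    if not 0 <= vid <= 0x0FFF:
        raise ValueError("VID is only 12 bits (0-4095)")
    return struct.pack("!HH", tpid, (pcp << 13) | (dei << 12) | vid)

# Q-in-Q: provider S-tag stacked in front of the customer's original tag 173.
stacked = (vlan_tag(TPID_QINQ, pcp=5, dei=0, vid=2001)
           + vlan_tag(TPID_DOT1Q, pcp=0, dei=0, vid=173))
```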

VLANs were an afterthought to the original Ethernet design, hence the shim added to the Ethernet header. Both IPv4 and IPv6 packets are compatible with VLANs because packet types are considered “payload” to an Ethernet frame. Similarly, there is no consideration for support of eOAM, multicast, FCAPS, or resiliency, because all of those features are also considered “payload” to the Ethernet frame. In addition to the DSCP bits in the IP header, there are 3 priority bits in the VLAN tag that can be used as a primitive QoS mechanism. Message integrity at the frame level is accomplished via the FCS trailer of the Ethernet frame. There is no support for encryption at Layer 2, but again, the payload and some of the higher-level headers can be encrypted using IPsec.

VLAN header size: 4 bytes
Q-in-Q header size: 4 additional bytes

II.2.2 L2TPv3

The earlier version 2, released as RFC 2661 in 1999, gave way to version 3 in 2005 (RFC 3931). L2TPv3 added security, improved encapsulation, and the ability to carry packets over many L2 protocols such as Frame Relay, Ethernet, ATM, and HDLC, each of which is defined as a “pseudowire type.” L2TPv3 supports Layer 2 tunneling over IP for any payload, including eOAM, STP, LLDP, and other L2 protocols; this is an advantage over the GRE protocol.


Figure 43 - L2TPv3 Example

L2TPv3 does not mandate the use of a UDP header, unlike VxLAN and some other protocols. L2TPv3 adds 8 bytes of Session ID and pseudowire control information, plus an IP header. A Magic Cookie is optional and adds another 4 or 8 bytes, but it allows misdirected packets with a wrong Session ID to be detected and discarded. Magic Cookies also protect against packet-insertion attacks.

Figure 44 - L2TPv3 Frame Format
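The data-plane header described above is easy to visualize in code. The sketch below packs an L2TPv3-over-IP data header as a 32-bit Session ID followed by an optional 4- or 8-byte cookie; the Session ID and cookie values are arbitrary examples.

```python
import struct

def l2tpv3_data_header(session_id: int, cookie: bytes = b"") -> bytes:
    """L2TPv3-over-IP data header: 32-bit Session ID plus optional cookie."""
    if len(cookie) not in (0, 4, 8):
        raise ValueError("cookie must be absent, 4, or 8 bytes")
    return struct.pack("!I", session_id) + cookie

# A receiver that knows the expected cookie can detect misdirected packets:
# a mismatched cookie (or Session ID) marks the packet for discard.
hdr = l2tpv3_data_header(0x2A0001, cookie=b"\xde\xad\xbe\xef")
```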

There are two modes of L2TPv3: Ethernet and VLAN. Ethernet mode places packets of any VLAN ID into the same tunnel. VLAN mode has a tunnel for each VLAN; only packets with the same VLAN ID are carried in one tunnel.

L2TPv3 does not directly support FCAPS, but as with other tunnel protocols, vendor implementations can add some of these features. While designed for an IPv4 world, L2TPv3 can be configured to use IPv6 endpoints to form an IPv6 tunnel. The management plane allows granular bandwidth allocation per tunnel as well as QoS settings in the DSCP field of the IP header. RFC 4045 added multicast support.

Static L2TPv3 is set up manually. Optionally, dynamic L2TPv3 is established automatically through the exchange of control messages. Multiple L2TPv3 sessions can exist between a pair of pseudowire endpoints, and a single control channel can maintain them. Pseudowires can be used to separate control plane traffic from data plane traffic; for example, one pseudowire could be dedicated to control plane traffic and one or more pseudowires dedicated to data plane traffic. Encryption is not a native feature, but many examples of using IPsec are available.

The CableLabs Remote PHY specification requires implementation of L2TPv3 as the tunneling solution between the CCAP Core and the Remote PHY Device, so this technology has the advantage of being well understood for the kind of implementation VPI is intended for, with deployment models already defined. Where the CCAP Core and Remote PHY Device use an L2TPv3 tunnel to implement the Generic Control Plane [Remote PHY], VPI can replace this with another session, e.g., IPHSD user data or control for the OLT.

L2TPv3 header size: 32 bytes


II.2.3 GRE / SoftGRE

GRE (Generic Routing Encapsulation) is a tunneling protocol that encapsulates packets over IP networks and has been defined by RFC 2784 since 2000. MSOs use GRE today, especially for isolating community Wi-Fi from subscriber LANs. GRE creates point-to-point connections between two networks, contiguous or discontiguous.

Figure 45 - GRE Tunnel Configuration Example

The GRE protocol adds both a GRE header and an IP header to the existing packet entering the tunnel interface. The GRE header is typically 4 bytes but has options that can inflate it to 16 bytes.

Figure 46 - GRE Header
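The base header shown in Figure 46 is only 4 bytes. The Python sketch below packs a minimal RFC 2784 GRE header, with the optional checksum growing it to 8 bytes; further key/sequence options (RFC 2890) grow it beyond that.

```python
import struct

GRE_PROTO_IPV4 = 0x0800   # EtherType of the encapsulated payload

def gre_header(proto: int, with_checksum: bool = False) -> bytes:
    """Minimal RFC 2784 GRE header: flags/version word + protocol type."""
    flags = 0x8000 if with_checksum else 0x0000   # C bit (checksum present)
    hdr = struct.pack("!HH", flags, proto)
    if with_checksum:
        hdr += struct.pack("!HH", 0, 0)           # checksum (computed later) + reserved
    return hdr
```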

GRE endpoints are hosted by a Layer-3 device such as a router or L3 switch, which uses the interface of the host device to allocate tunnel bandwidth and keep throughput statistics. eOAM (802.1ag & 802.3ah) messages are NOT SUPPORTED on GRE interfaces, but vendor-specific methods to validate tunnel uptime, interface alarming, and statistics are usually available. Vendor implementations can add restrictions, such as limiting the number of GRE tunnels to a range of 20 to 500 per interface; therefore, check your vendor’s OS for specific support.

The GRE protocol does not address FCAPS, but it supports both IPv4 and IPv6 (IPv6 as of RFC 7676) and can carry any payload type. QoS markings can either be inherited from the inner packet header or mapped to a new value. Special support for tunneling by VLAN is available. Encryption is not a GRE feature, but IPsec can be used to provide security, including encryption.

Redundancy, resiliency, and reliability are not specifically addressed by the protocol, but a vendor OS can allow a primary and a secondary tunnel; and again, this protocol has been used by MSOs for a long time, so its limits should be known.

Because GRE tunnels are stateless, the endpoint of the tunnel contains no information about the state or availability of the remote tunnel endpoint. SoftGRE has no control plane to manage; once one side is configured, the other side will set up the tunnel interface without manual effort. This is a big advantage of SoftGRE. The other big advantage is that GRE supports multicast messages, whereas other transport methods (like IPsec in tunnel mode) do not.

Throughput is high, as GRE inherits only the delay of the Ethernet interface on the L3 device supporting the tunnel. Gigabit interfaces add around 10 microseconds of delay, but check your vendor’s OS for specifics. Lastly, GRE does not support nested tunnels, but it does support multiple tunnels per interface.

GRE header size: Typically 24 bytes, maximum with options is 36 bytes

II.2.4 VxLAN

VxLAN, as described by RFC 7348 (Virtual eXtensible Local Area Network), addresses the need for overlay networks within virtualized data centers accommodating multiple tenants. The scheme and the related protocols can be used in networks for cloud service providers and enterprise data centers.


VxLAN is a Layer-2 overlay scheme over a Layer-3 network and is a newer protocol (August 2014), supported by major vendors even though it is technically not a standard. Supporting tenants natively fits very well with the DPoE application, and a major strength is the ability to have up to 16 million different identifiers in the VxLAN header for massive scaling.

Figure 47 - VxLAN Example

Unlike GRE, VxLAN encapsulates the entire original frame (including the MAC addresses) and adds new UDP, IP, and Ethernet MAC headers, referred to as “MAC-in-UDP” encapsulation.

Figure 48 - VxLAN Header
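The 8-byte header in Figure 48 carries little more than the 24-bit VNI. A sketch of packing it per RFC 7348, with reserved bits zeroed and the I flag set to mark a valid VNI:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """RFC 7348 VxLAN header: I flag set, 24-bit VNI, reserved bits zero."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is only 24 bits (~16 million segments)")
    # First word: flags byte 0x08 (I bit) + 24 reserved bits.
    # Second word: 24-bit VNI in the high bits + 8 reserved bits.
    return struct.pack("!II", 0x08 << 24, vni << 8)
```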

Each VxLAN Tunnel Endpoint (VTEP) maps the tenant’s end devices to VxLAN segments and performs VxLAN encapsulation and decapsulation. The VTEP also discovers the remote end of the VxLAN segment.

Figure 49 - VTEP Example

Each VxLAN segment is identified by a VNI (VxLAN Network Identifier). A VNI can support unicast, broadcast, and multicast traffic, and can be mapped to an IGMP multicast group. Bandwidth is easy to allocate and there is support for IPv4/IPv6 as well as QoS mechanisms. Any payload type is accepted, and encryption is again added by using IPsec along with VxLAN. OAM is supported, but can vary by the OS used on the L3 device supporting the VTEP. Redundancy is built in by using ECMP.

VxLAN header size: 50 bytes
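The 50-byte figure is the total encapsulation overhead: outer Ethernet (14) + IPv4 (20) + UDP (8) + the 8-byte VxLAN header itself. A minimal sketch of packing the VxLAN header, illustrating the 24-bit VNI that yields roughly 16 million identifiers:

```python
import struct

VXLAN_OVERHEAD = 14 + 20 + 8 + 8  # outer Ethernet + IPv4 + UDP + VxLAN = 50

def vxlan_header(vni):
    """Pack the 8-byte VxLAN header: flags byte with the I bit set
    (VNI valid), 24 reserved bits, the 24-bit VNI, 8 reserved bits."""
    if not 0 <= vni < 2**24:      # up to ~16 million segment identifiers
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5000)          # 8 bytes; the VNI occupies bytes 4-6
```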

II.2.5 NVGRE

NVGRE (Network Virtualization using Generic Routing Encapsulation) is a network virtualization method that uses encapsulation and tunneling to create large numbers of virtual LANs (VLANs) for subnets that can extend across dispersed data centers at layer 2 and layer 3. The purpose is to enable multi-tenant and load-balanced networks that can be shared across on-premises and cloud environments. NVGRE, specified as RFC 7637 as of Sept. 2015, is a new tunnel protocol. The NVGRE proposal uses GRE to create an isolated virtual Layer 2 network that may be confined to a single physical Layer 2 network or extend across subnet boundaries.

This tunnel protocol competes with VxLAN in that it also uses 24 bits to provide up to 16 million unique tenant IDs (TNIs). Unlike VxLAN, NVGRE does not require an additional UDP header and is compatible with many devices that support GRE. Microsoft, Intel, HP, and Dell proposed NVGRE, whereas Cisco, VMware, Citrix, Red Hat, Arista, and Broadcom proposed VxLAN. There are many similarities:

• Both use encapsulation strategies to create a larger number of VLANs for subnets that can extend across dispersed data centers at Layers 2 and 3.

• Both standards aim to enable load-balanced, multi-tenant networks that can be shared across cloud and on-premises environments.

• Both make use of a 24-bit identifier

• Both use tunnels to carry packets across the network and across subnet boundaries

• Both use Multicast to connect application components

By comparing the headers, we see the subtle differences between VxLAN and NVGRE:

Figure 50 - VxLAN and NVGRE Header Example


Each NVGRE Segment is identified by a TNI (Tenant Network Identifier). A TNI can support unicast, broadcast, and multicast traffic, and a TNI can be mapped to an IGMP multicast group. Bandwidth is easy to allocate, and there is support for IPv4/IPv6 as well as QoS mechanisms. Any payload type is accepted, and encryption is again added by adding an IPsec header. NVGRE supports OAM, but support can vary by the OS used by the vendor. Redundancy and load balancing are built in by using ECMP. FCAPS, bandwidth allocation, QoS, and v4/v6 are functions of the L3 device, not the NVGRE tunnel specification. Lastly, scalability was the main intent of this protocol, much like VxLAN.

NVGRE shim size: 46 bytes
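For comparison with VxLAN, the NVGRE shim reuses the GRE header with the Key bit set: protocol type 0x6558 (Transparent Ethernet Bridging) and a 32-bit key field split into the 24-bit VSID/TNI plus an 8-bit FlowID. A minimal sketch of packing that 8-byte header:

```python
import struct

def nvgre_header(vsid, flow_id=0):
    """Pack the 8-byte NVGRE header (RFC 7637): GRE flags with the Key
    bit set, protocol type 0x6558, then 24-bit VSID + 8-bit FlowID."""
    if not 0 <= vsid < 2**24:     # same 16-million-ID space as VxLAN
        raise ValueError("VSID must fit in 24 bits")
    flags = 0x2000                # K (Key present) bit set, version 0
    return struct.pack("!HHI", flags, 0x6558, (vsid << 8) | flow_id)

hdr = nvgre_header(100, flow_id=7)   # 8 bytes
```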

II.2.6 MPLS

Multi-Protocol Label Switching is an OSI 'layer 2.5' network protocol that uses a short label to transport data between nodes instead of a long IP address. Label switching is faster than routing because complex route-table lookups are avoided. Service Providers use this protocol to move traffic across their network backbones and usually terminate it just before delivering data packets to an enterprise.

As a data transport mechanism, the disaggregated DPoE system can use MPLS by assigning different labels, for example, one to each flow with a specific QoS marking. Again, MPLS is a packet forwarding technology that uses labels to make data forwarding decisions. The 4-byte label is a 'shim' inserted between the Data Link Layer and the IP layer by the ingress router; the MPLS terminating router removes the label.

Figure 51 - MPLS Label Format

Many of the MPLS advantages are designed to, and only applicable for, large routed networks - especially Service Provider backbones. These advantages include providing VPNs, Traffic Engineering, QoS and the ability to transport packets over almost any L2 network (Frame Relay, Ethernet, ATM, etc.). While few of these features apply to the DPoE application, it is still a viable transport protocol and should be considered.

MPLS inherits the bandwidth allocation from the L3 device interface configuration. It supports eOAM, FCAPS, and QoS (per flow). As a shim, the IP transport protocol (v4/v6) does not matter; any payload type is acceptable, and resiliency is built in provided there is a mesh of routers in the transport area. MPLS scales very well and has excellent throughput with very low latency and jitter. It is highly available because paths are predetermined (before traffic begins to flow), and it can heal broken links, again assuming there are other routes and routers to choose from in the transport area. VLANs can be tricky to sustain, but there are many examples to follow. If no more than two routers exist between the DPoE chassis and the OLT location, MPLS offers few advantages and may be a heavyweight solution for a point-to-point topology. While MPLS has always supported Multicast, RFC 6513 added Multicast over VPN links for BGP/MPLS systems.

MPLS shim (label) size: 4 bytes, but multiple labels can be added
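The 4-byte shim described above packs a 20-bit label, a 3-bit Traffic Class field, a bottom-of-stack bit, and an 8-bit TTL. A minimal sketch of packing and unpacking it:

```python
import struct

def pack_label(label, tc=0, s=1, ttl=64):
    """Pack a 4-byte MPLS shim: 20-bit label, 3-bit Traffic Class,
    1-bit bottom-of-stack flag, 8-bit TTL."""
    word = (label << 12) | (tc << 9) | (s << 8) | ttl
    return struct.pack("!I", word)

def unpack_label(shim):
    """Recover (label, tc, s, ttl) from a 4-byte shim."""
    (word,) = struct.unpack("!I", shim)
    return word >> 12, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF
```

For a label stack, multiple shims are simply concatenated, with the bottom-of-stack bit set only on the last one.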

II.2.7 Segment Routing

Segment routing is a new forwarding paradigm that provides source routing, meaning the source can define the path that the packet will take. Among Service Providers, segment routing is gaining popularity as a way to simplify MPLS networks, and it has the benefit of interfacing well with software-defined networks. It does this without keeping state in the core of the network. One of the main applications for SR is to enable an application controller that can steer traffic over different paths, depending on differing requirements and the current state of the network. This is an example of software-defined networking (SDN). It is then possible to program the network to send voice over a lower-latency path and bulk data over a higher-latency path.


Figure 52 - Segment Routing Example

The Segment Routing header can contain one or more segment and policy lists.

Figure 53 - Segment Routing Header Example

Like MPLS, Segment Routing has many advantages, but most of them are applicable only in a dense routing environment such as a Service Provider network. The advantages include simple configuration, typically just a few lines; less state stored in the network, which means less information has to be communicated to other network devices; and complex state information maintained within a PCE (Path Computation Element), which can be a logically centralized SDN Controller. Segment routing provides fast failover to alternate paths, in about 50 milliseconds.

Like MPLS, provisioning of bandwidth, QoS, v4/v6 support is dependent on the L3 device and the OS it is running. FCAPS is not directly supported. Any payload is allowed and encryption is not a feature of Segment Routing, but IPsec can be easily used. Like any IPv6 protocol, Multicast messages are supported and highly used. eOAM is supported, redundancy is inherent and resiliency and reliability are high. As a Service Provider protocol, Segment Routing scales very well.

Segment Routing header size: 8 bytes + (16 bytes * number of segments)
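This size formula follows from the IPv6 Segment Routing Header: 8 fixed bytes plus one 16-byte IPv6 address per entry in the segment list. For example:

```python
def srh_size(num_segments):
    """IPv6 Segment Routing Header size in bytes:
    8 fixed bytes + one 16-byte IPv6 address per segment."""
    return 8 + 16 * num_segments

print(srh_size(1))  # 24
print(srh_size(3))  # 56
```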

II.2.8 IPsec

IP Security (IPsec) began as an IETF standard with RFC 1827 but has had many updates through the years. IPsec is a set of protocols that provide authentication, encapsulation, encryption (data integrity and confidentiality), and replay protection. IPsec adds security at the Network Layer instead of the application layer, which frees developers to focus on applications.


Figure 54 - IPsec Example

IPsec provides two choices of security service: Authentication Header (AH), which essentially allows authentication of the sender of data, and Encapsulating Security Payload (ESP), which supports both authentication of the sender and encryption of data as well. The specific information associated with each of these services is inserted into the packet in a header that follows the IP packet header.

There are two modes of IPsec, transport and tunnel; both modes apply to both the AH and ESP headers.

Transport mode inserts the AH or ESP header between the original IP and L4 headers.

Tunnel mode adds the AH or ESP header and an ‘outer’ IP header before the original IP header. It then encrypts from the original IP header through the rest of the original packet.
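The difference between the two modes can be summarized by the resulting header order; the simplified sketch below illustrates ESP (AH follows the same placement rules but adds no trailer):

```python
# Simplified header order for the two IPsec ESP modes.
transport_mode = ["IP", "ESP", "TCP/UDP", "payload", "ESP trailer", "ESP auth"]
tunnel_mode = ["outer IP", "ESP", "original IP", "TCP/UDP", "payload",
               "ESP trailer", "ESP auth"]

# Transport mode protects only the L4 segment and payload; tunnel mode
# encrypts the entire original IP packet behind a new outer IP header.
assert "original IP" not in transport_mode
assert "original IP" in tunnel_mode
```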

Illustrations of frame formats will help to understand these modes:

Figure 55 - AH Header Example


Figure 56 - ESP Header Example

As a network security protocol, there is no native support for bandwidth allocation, FCAPS, QoS, v4/v6 usage, or payload types; those are left to the L3 device to provide and configure. IPsec Tunnel mode can be attractive because it both tunnels (adds an additional IP header) and encrypts, adds authentication of the sender, and has anti-spoofing and man-in-the-middle protections. IPsec does not support OAM messages. IPsec does NOT support multicast messages in tunnel mode, which is a major deficit. While very scalable, IPsec does not have any special support for redundancy, resiliency, or availability.

IPsec Authentication Header size: 12 bytes + authentication data (variable)

IPsec Encapsulating Security Payload header size: 50 bytes + pad

II.3 Transport Technologies Analysis

Analysis of the strengths and weaknesses of the eight technologies evaluated against the criteria suggests that the following four may not be optimally suited for the purpose of carrying control plane traffic and data plane traffic on the interconnect network between the DPoE System and the OLT, which is expected to be within the service provider’s administrative domain, likely switched rather than routed, and implemented as a flat network:

• IPsec: While it has value as a “helper protocol” in providing encryption to add security for the other evaluated transport technologies, it is not generally considered to be a standalone tunneling protocol.

• MPLS: This is intended as a backbone protocol, rather than as a simple interconnect protocol, and has features not applicable to the DPoE System - OLT interconnect, making it unnecessarily ‘overweight’ for the VPI application.

• Segment Routing: Segment routing is designed to enable path optimization through a more complex, multiple-hop routed network. Therefore the protocol is not a fit for the simpler, flat interconnect network between the DPoE System and OLT.

• NVGRE: This network virtualization method shows promise as a possible alternative to VxLAN, but it was only published as an RFC in 2015, so it is less widely deployed and not yet well proven in production environments.

Eliminating the four protocols above leaves the following four technologies for the purpose of providing transport between vCM and R-OLT in the Network Layer:

• VLAN / Q in Q

• GRE/SoftGRE

• VxLAN

• L2TPv3

Further study and analysis are needed before this disaggregated architecture/design is completed.


Appendix III Proof of Concept (Informative)

III.1 Summary

CableLabs has developed a VPI demo that shows a standardized software interface using an OpenDaylight SDN Controller to provision IP HSD services on a DOCSIS® Network. Standard interfaces allow automation, dynamic service provisioning, and virtualization.

III.2 The Problem

Interfaces to cable operators’ service provisioning systems are a complex mix of vendor-proprietary systems and home-grown special-purpose applications and utilities. Efforts to configure and manage services in a dynamic fashion, across multiple access networks, without reboot of customer devices, are impeded by the current static (configuration-file-based) and siloed provisioning systems. Implementing new services is time and resource intensive, and operators are prioritizing the need for faster deployment of new services. MSOs would like an environment where they can programmatically control the services deployed on their networks without service interruption.

Additionally, service models developed by the SDN/NFV industry focus on enterprise networks and data center networking rather than residential and commercial subscriber services on access networks.

III.3 The Solution

The specific access network elements can be abstracted away from the service applications by an SDN Controller.

Figure 57 - Layered Service Provisioning Architecture with Standardized Interfaces

Standardized RESTful interfaces are defined between the service provider’s applications and the SDN Controller, and between the SDN Controller and access network equipment, be it DOCSIS or DPoE. These interfaces provide a uniform, layered communication path between service applications systems and the access network.

The cable services data model created by the VPI working group abstracts the detailed functions of the underlying DOCSIS and DPoE system components and allows the service applications to be written once and used across the networks, including future access networks such as wireless. Dynamic changes to services can now occur instantaneously via software control and be automated. Well-defined interfaces and data models enable virtualization of service enablement, where the functions are performed by generic computing platforms.

The VPI solution provides an abstraction between service applications, the SDN controller, and access network equipment. It defines common YANG models that describe the services and the service configuration functions to be implemented across the interfaces.


III.4 The Demonstration

Figure 58 - Demo Components communicate across VPI Defined Interfaces

The demonstration emulates a service provider’s IPHSD application with a web interface exposing different sets of services (basic IPHSD service, a gaming service, and a video service), which need different levels of QoS from the DOCSIS network.

The application uses the RESTCONF interface exposed by a VPI-specific plug-in to the OpenDaylight open source SDN platform, passing subscriber information and service parameters. OpenDaylight in turn uses the RESTCONF interface exposed by a DOCSIS CCAP emulator to configure service flows implementing the requested service, with classifiers satisfying the customer’s SLA.
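As a hedged illustration of the application-to-controller leg of this flow, the sketch below builds a RESTCONF-style request body. The endpoint path, module name, and leaf names are hypothetical stand-ins; the actual models are defined by the VPI specifications, not this example:

```python
import json

# Hypothetical controller mount point and YANG module/leaf names.
BASE = "http://odl.example.com:8181/restconf/config"
path = BASE + "/vpi-service:services/service/subscriber-42"

body = json.dumps({
    "service": {
        "id": "subscriber-42",
        "service-class": "gaming",        # tier requiring low-latency QoS
        "downstream-max-rate-bps": 300000000,
        "upstream-max-rate-bps": 50000000,
    }
})
# An application would PUT this body to `path` with
# Content-Type: application/yang-data+json; OpenDaylight would then drive
# the equivalent RESTCONF call toward the CCAP emulator.
```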

III.5 Demo Screenshots

Figure 59 - WebPortal of Application to provision services


Figure 60 - OpenDaylight Controller: Status of current services

Figure 61 - RESTCONF Transactions: between Application and SDN Controller


Figure 62 - WebPortal: Showing Customer/Service status

III.6 Expected Benefits

VPI delivers a programmable and dynamic method for provisioning services on DOCSIS and EPON access networks. This allows cable operators to reduce time to deployment, simplifies operations related to service provisioning, and paves the way for automation and customer self-provisioning.

The PoC could be used by operators as a reference to start implementing a dynamic and programmable service configuration model. The PoC can also be used by CCAP vendors as a reference to support a RESTCONF interface.


Appendix IV DPoE Overview

The DPoE System provides the components and services required to implement an EPON access network analogous to the CCAP controlling the DOCSIS HFC network. In the integrated system today, the provisioning-related components follow.

• Router - IP routing and L2VPN PE support

• OLT - Manages the Optical Distribution Network and attached ONUs. Management includes ONU registration, data transmission access (i.e., Gates), and priority scheduling of upstream and downstream data flows (i.e., priority-based service flow scheduling).

• Virtual Cable Modems (see ONU section for details) - Provide the management interface to DOCSIS OSS/BSS on behalf of ONUs. Downloaded configuration file contents are used to provision services on OLTs and ONUs.

The ONU, in conjunction with the vCM, is analogous to the cable modem in a DOCSIS network. The ONU classifies packets, forwards packets onto service flows, and provides the subscriber access to the network. The vCM provides the management interface to OSS/BSS on behalf of the ONU and it uses DPoE OAM to manage the remote ONU. Explicitly, the vCM provides the following functions for the ONU.

• One vCM is instantiated for each ONU that registers with the OLT, and it:

• Provides a proxy IP service on behalf of the layer-2 ONU

• Instantiates a provisioning and management interface for an ONU over Ethernet OAM (DPoE OAM, specifically)

• Configures the ONU with classifiers and forwarding rules

• Configures the OLT to support/enforce provisioned services on the ONU/vCM

• Performs DHCP and TFTP to download configuration file

• Implements management interface to the BSS/OSS

• Maps L2VPN services from LAN to WAN
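The registration sequence in the list above can be sketched in code form; the class and method names here are illustrative only, not taken from the DPoE specifications:

```python
class VirtualCableModem:
    """Illustrative model of the vCM lifecycle: one instance per ONU."""

    def __init__(self, onu_id):
        self.onu_id = onu_id
        self.config = None

    def register(self):
        """Run the provisioning sequence when the ONU registers."""
        return [self.dhcp(), self.tftp_download(), self.provision()]

    def dhcp(self):
        # Proxy IP service performed on behalf of the layer-2 ONU.
        return "dhcp-ack"

    def tftp_download(self):
        # Fetch the configuration file named in the DHCP exchange.
        self.config = {"classifiers": [], "forwarding-rules": []}
        return "config-downloaded"

    def provision(self):
        # Push classifiers/forwarding rules to the ONU via DPoE OAM and
        # configure the OLT to enforce the provisioned services.
        return "onu-provisioned" if self.config else "error"

vcm = VirtualCableModem("onu-1")
```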

Figure 63 - DPoE Architecture

The DPoE System currently supports dynamic service provisioning, but the procedures and provisioning method remain configuration-file dependent: the original configuration file must be updated and downloaded to the vCM, differences identified, then eOAM created and sent to update the ONU. The operation is initiated and managed via SNMP. The VPI architecture’s goal is to provide an alternative provisioning model that is consistent for both the CCAP and the DPoE System.

Vendors may not have implemented support for dynamic service changes in ONUs. This limitation is solely hardware dependent and applies equally to the original DPoE and VPI-defined services.

DPoE 2.0 specifications [DPoE Arch] describe two IP forwarding modes for MEF services. These modes are listed and described below:

• Fully Centralized - In the fully centralized model, the MAC learning and forwarding occurs within a "centralized" bridge or Virtual Switching Instance (VSI). Thus, per operator provisioning, each DPoE System would establish a spoke pseudowire or VPWS per participating SI to the centralized VSI. The centralized VSI would forward the frames based on standard [802.1d] bridging (unqualified learning via [802.1d], or qualified learning via [802.1d] + [802.1Q]) behavior to the appropriate pseudowire spoke, VPWS, or local attachment circuit.

• Fully Distributed - In the fully distributed model, each DPoE System with a participating SI in an E-LAN or E-TREE service instance would contain and operate its own local VSI. The DPoE System would forward frames within that VSI based on standard [802.1d] bridging (unqualified learning via [802.1d], or qualified learning via [802.1d] + [802.1Q]) behavior to the appropriate pseudowire spoke, VPWS, or local attachment circuit.


Appendix V Acknowledgements

On behalf of our industry, we thank the following companies whose VPI working group participants contributed directly to the writing of this technical report: CableLabs, Arris, Netcracker, Nokia, Tibit, Aricent, Huawei, Vecima, and Cisco.

We also thank the following individuals for their participation in the Virtualized Provisioning Interfaces working group and their contributions to the development of ideas reflected in this TR.

Contributor (Company Affiliation):

Tom Lybarger - Aricent
Dan Torbet - Arris
Jeff Dement - Arris
Janet Bean - Arris
Sebnem ZorluOzer - Arris
Karthik Sundaresan - CableLabs
Steve Burroughs - CableLabs
Kevin Luehrs - CableLabs
Aseem Choudhary - CableLabs
Mark Szczesniak - Casa Systems
Maike Geng - Casa Systems
Richard Zhou - Charter
Alon Bernstein - Cisco
Nagesh Nandiraju - Comcast
Dave Hood - Huawei
Hesham ElBakoury - Huawei
Evan Sun - Huawei
Phil Oakley - Liberty Global
Arkin Aydin - Nokia
Michael Kloberdans - Netcracker
Anh Tuan Le - Netcracker
Gilad Aloni - Oliver Solutions
George Hart - Rogers
Michael Peters - Sumitomo Electric
Niem Dang - SCTE
Andrew Chagnon - Tibit
Kirk Erichsen - Time Warner Cable
Kevin Noll - Time Warner Cable, Tibit
Douglas Johnson - Vecima