Scalable Cloud-native Transport SDN
Controller Using GNPy and Machine Learning
Techniques for QoT Estimation
Carlos Manso¹, Ricard Vilalta¹, Raul Muñoz¹, Ramon Casellas¹, Ricardo Martínez¹
¹Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), Castelldefels (Spain)
carlos.manso@cttc.es
Abstract: This demo shows a cloud-native SDN controller that can estimate end-to-end
QoT using both analytical (GNPy) and machine learning algorithms on WDM systems.
This transport SDN controller is able to scale horizontally using custom metrics. © 2021
The Author(s)
1. Introduction
In optical networks, it is key to ensure a sufficient Quality of Transmission (QoT) before provisioning a service, so that the optical signal does not degrade below a given threshold during transmission. The selection of the path between the endpoints of the service is critical to maintain performance for the duration of the connection. Given all the physical impairments of the optical fibers and transmission equipment, it is hard to predict what the Bit-Error Rate (BER) of the connection will be. Traditionally, there has been a choice between precise and approximate methods. Precise methods, such as the Split-Step Fourier (SSF) method or analytical models, can model physical impairments with high accuracy, but their complex mathematical algorithms and formulas make their computational cost very high. On the other hand, approximate methods, such as the Gaussian Noise (GN) model, although not as precise, provide good approximations at a fraction of the computational cost required by analytical models [1].
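As an illustration of the trade-off described above, the figure of merit typically estimated from the GN model is a generalized SNR that combines the amplifier (ASE) noise with the nonlinear-interference (NLI) noise power predicted by the model. The sketch below shows only this final combining step; the function name and power values are illustrative, not GNPy's implementation:

```python
import math

def db2lin(db):
    """Convert a dB (or dBm) value to linear units."""
    return 10 ** (db / 10)

def lin2db(lin):
    """Convert a linear ratio to dB."""
    return 10 * math.log10(lin)

def gsnr_db(p_ch_dbm, p_ase_dbm, p_nli_dbm):
    """Generalized SNR: channel power over ASE plus GN-model NLI noise.

    Noise powers add in linear units, so all dBm inputs are converted
    before combining (values here are hypothetical examples).
    """
    p_ch = db2lin(p_ch_dbm)
    p_ase = db2lin(p_ase_dbm)
    p_nli = db2lin(p_nli_dbm)
    return lin2db(p_ch / (p_ase + p_nli))

# 0 dBm channel, -20 dBm ASE noise, -25 dBm NLI noise
print(round(gsnr_db(0.0, -20.0, -25.0), 2))  # → 18.81
```

The estimated GSNR can then be mapped to a BER threshold for a given modulation format to accept or reject a candidate path.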
GNPy (Gaussian Noise model in Python) is an open-source software library for route planning and optimization of optical networks based on the GN model [2]. It is developed by the Telecom Infra Project (TIP), an engineering-focused, operator-driven initiative that collaborates with suppliers, developers and startups to ease the disaggregation of traditional optical networks. To achieve this, it is critical to accurately plan and predict the performance of disaggregated optical networks by simulating the optical parameters, which is what GNPy is mainly developed for.
Nonetheless, in the last few years, Machine Learning (ML) techniques have also been studied for QoT prediction. Their computational cost (considering only the inference phase) is similar to that of the approximate methods, while removing the need to model components whose models are inaccurate, capturing impairments that are too slow to compute analytically, and avoiding the dependence of the models on certain parameters [3]. The main problem with ML techniques is that they need previously obtained data to train the ML model, which is sometimes difficult to obtain.
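A minimal sketch of such a data-driven QoT estimator (not the model used in this demo): a k-nearest-neighbour classifier that labels candidate lightpaths as meeting a BER threshold or not, from features such as path length, span count and launch power. All feature values and labels below are synthetic:

```python
def knn_predict(train, query, k=3):
    """k-NN majority vote. train: list of (features, label); query: feature tuple."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda sample: dist(sample[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Synthetic training set: (length_km/1000, n_spans/10, launch_power_dbm) -> QoT OK?
samples = [
    ((0.1, 0.2, 0.0), True),  ((0.3, 0.4, 0.0), True),
    ((0.9, 1.2, 0.0), False), ((1.2, 1.5, 0.0), False),
    ((0.5, 0.6, 0.0), True),  ((1.0, 1.3, 0.0), False),
]

print(knn_predict(samples, (0.2, 0.3, 0.0)))  # short path → True
print(knn_predict(samples, (1.1, 1.4, 0.0)))  # long path → False
```

In practice such a model would be trained on monitored data from deployed lightpaths, which is precisely the data that is sometimes difficult to obtain.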
ML techniques and complex path computation models have typically been deployed on dedicated hardware and software, separate from the transport SDN controller, in order to ensure enough resources for these operations. Novel SDN controller architectures based on microservices allow the introduction of these features inside the SDN controller itself. For this, the infrastructure running the transport SDN controller should allow scalable resource allocation based on horizontal scaling of the computation resources.
In this demonstration, we present an extended cloud-native Software-Defined Networking (SDN) controller known as uABNO [4], based on microservices. This demo features a new path computation module able to use both the GN model, via GNPy, and an ML model for QoT prediction. These two models are also built as microservices that communicate with the path computation microservice but, unlike the other microservices of the SDN controller, they are automatically scaled according to different user-defined metrics to avoid excessive queuing of requests. They are deployed using Docker; the monitoring of the metrics is performed by Prometheus, while the orchestration and auto-scaling are carried out by Kubernetes.
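Metric-driven auto-scaling of this kind could, for example, be expressed as a Kubernetes HorizontalPodAutoscaler acting on a custom metric exported through Prometheus. The manifest below is an illustrative sketch; the deployment name and metric name are assumptions, not the demo's actual configuration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: qot-gnpy-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: qot-gnpy              # hypothetical QoT estimation microservice
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: pending_qot_requests   # custom metric exposed via Prometheus
      target:
        type: AverageValue
        averageValue: "5"            # scale out above 5 queued requests per pod
```

The HPA periodically reads the custom metric through the metrics server and adds or removes replicas of the QoT microservice to keep the request queue per pod below the target.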
2. Overview
In Fig. 1 (left), the SDN controller architecture is shown. It contains a cloud controller (based on Kubernetes), a monitoring system and time-series database (based on Prometheus), a metrics server that exports custom metrics from the monitoring system to the cloud controller, and a Horizontal Pod Autoscaler (HPA). It also depicts the different types of microservices: