A. Liotta, “Farewell to deterministic networks”, 19th Symposium on Communications and Vehicular Technology in the Benelux, Eindhoven, the Netherlands, 16 November 2012 (IEEE).

This paper complements the plenary keynote given by the author at IEEE SCVT. Audio slidecast: http://bit.ly/SCVT_Liotta

Farewell to deterministic networks
(Invited paper)

Antonio Liotta
Eindhoven University of Technology
Eindhoven, the Netherlands
a.liotta@tue.nl

Abstract—As communication networks become increasingly complex and dynamic, the three functions known as monitoring, control and management are proving ineffective. It is increasingly difficult to operate large networks, perform diagnostics, prevent cascading failures, or deliver dependable services. I argue that this is because, although the Internet serves more terminals than there are neurons in the brain, we still handle our networks via deterministic protocols. We still try to capture the complex tangle of interconnections by creating deterministic models of the network, its traffic and its control system. By the year 2020 the number of interconnected ‘things’ will have grown by a factor of a thousand, and networks will be programmed to ‘learn’ how to detect new communication patterns and self-regulate, rather than acting deterministically. I introduce the anatomy of a smart network, discussing what more could be achieved with it.

Keywords—Smart communication networks; Internet of Things; future internet

I. HOW HAVE WE COME TO BUILD DETERMINISTIC NETWORKS?

A. Early deterministic networks

The foundations of modern communications were laid well before the Internet made its appearance. In fact, we have not challenged the basic principles of communications for almost two centuries. William Cooke and Charles Wheatstone invented the very first ‘deterministic’ network, the telegraph, in 1839. They demonstrated how to move bits of information by means of a pre-determined set of rules. Today’s networks are still rule-based.
The protocols that control access to shared channels are deterministic; the same applies to the switching, routing and control of data frames, packets and flows.

B. Deterministic switching to scale up the network

The deployment of telephony required the introduction of a new entity, the switch, which was instrumental to the realization of the current billion-node network. Switching allows individual lines to be shared among multiple end-systems and, in turn, allows the network to scale up. At first, switches were extremely intelligent, as their function was performed by human operators (Fig. 1). Yet it soon became clear that, for such a repetitive task, the use of humans was not scalable. Intelligence was traded for automation; networks could grow further but, at the same time, required a more structured engineering approach.

Figure 1. The human switch, circa 1940.

C. The perfectly engineered network

Let us fast-forward to the end of the last century. Communications had evolved considerably, into what we can regard as a perfectly engineered Telecommunications Management Network (TMN) [1]. It is well worth looking into the TMN, as it introduced a number of architectural principles and management strategies that we still use today:

• Layering. Allows a clear separation of responsibilities among a set of well-specified management layers, concerning: elements, networks, services and business-oriented applications (Fig. 2).

• Abstraction. Going from the bottom towards the top, the management functions become increasingly abstract, which is instrumental to handling the complexity of applications, services and networks.

• Insulation. Information flows only via standardized interfaces and only between adjacent layers.

• Deterministic programming. To each possible condition or event corresponds a specific action.

Thanks to its solid engineering approach, the TMN made it possible to handle the complexity of the Telecom systems of the 1990s.
Furthermore, it was possible to dimension the systems accurately for highly specialized services and usage patterns.
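The deterministic-programming principle above can be illustrated as a fixed event-to-action rule table: every foreseen condition maps to exactly one predefined response, and anything outside the table has no defined behavior. The following is a minimal sketch only; the event and action names are hypothetical, not drawn from any TMN specification.

```python
# Sketch of TMN-style deterministic programming: each possible
# event corresponds to one specific, predefined action.
# Event/action names are hypothetical, for illustration only.

RULES = {
    "link_down": "reroute_traffic",
    "high_utilization": "raise_alarm",
    "node_unreachable": "run_diagnostics",
}


def handle(event: str) -> str:
    """Return the predefined action for a known event.

    An unforeseen event has no rule, so the system cannot react:
    the limitation the paper attributes to deterministic networks.
    """
    if event not in RULES:
        raise KeyError(f"no rule for event: {event}")
    return RULES[event]


print(handle("link_down"))  # -> reroute_traffic
```

A smart network, by contrast, would be expected to learn new event patterns rather than fail on any event missing from such a table.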