International Conference on Management, Science, Technology, Engineering, Pharmacy
and Humanities (ICM STEP) – 2021 | ISSN: 2349-6002
151527 © June 2021 | IJIRT | Volume 8 Issue 1 | www.ijirt.org
Software-Defined Networking: Self-Healing Topology
Discovery Protocol for Software-Defined Networks
T. Vamshi Mohana¹, Dr. Baddam Indira²
¹Research Scholar, Career Point University
²Research Supervisor, Career Point University
Abstract - Plug-and-play information technology (IT) infrastructure has expanded very rapidly in recent years. With the advent of cloud computing, many ecosystems and business paradigms are undergoing change and may be able to eliminate much of their IT infrastructure maintenance. Real-time performance and high-availability requirements have induced telecom networks to adopt two new concepts from the cloud model: software-defined networking (SDN) and network function virtualization (NFV). NFV introduces and deploys new network functions in an open and standardized IT environment, while SDN aims to transform the way networks function. SDN and NFV are complementary technologies that do not depend on each other; combined, however, they have the potential to mitigate the challenges of legacy networks. In this paper, our aim is to describe the benefits of using SDN in a multitude of environments, such as data centers, data center networks, and Network as a Service offerings. We also present the various challenges facing SDN, from scalability to reliability and security concerns, and discuss existing solutions to these challenges.
Index Terms - Software-Defined Networking, OpenFlow, Data Centers, Network as a Service, Network Function Virtualization.
1. INTRODUCTION
Today’s Internet applications require the underlying networks to be fast, to carry large volumes of traffic, and to support a number of distinct, dynamic applications and services. Adoption of the concepts of “interconnected data centers” and “server virtualization” has increased network demand tremendously. In addition to hosting various proprietary network hardware, distributed protocols, and software components, legacy networks are inundated with switching devices that decide on the route taken by each packet individually; moreover, the data path and the decision-making process for switching or routing are collocated on the same device. This situation is illustrated in Fig. 1. The decision-making capability, or network intelligence, is distributed across the various network hardware components. This makes the introduction of any new network device or service a tedious job, because it requires reconfiguration of each of the numerous network nodes.
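The contrast described above can be sketched in a few lines of code. The following is an illustrative model only, not a real protocol implementation: the class and method names (`LegacySwitch`, `SdnSwitch`, `Controller`, `push_route`) are hypothetical, chosen to show how a legacy device makes its own forwarding decision per packet, while an SDN device merely matches flow entries installed by a central controller.

```python
class LegacySwitch:
    """Control and data plane collocated: each device runs its own logic."""
    def __init__(self, routing_table):
        self.routing_table = routing_table  # device-local decision state

    def forward(self, dst):
        # The routing decision is made independently on this device.
        return self.routing_table.get(dst, "drop")


class SdnSwitch:
    """Data plane only: forwards by matching entries pushed from outside."""
    def __init__(self):
        self.flow_table = {}

    def install_flow(self, match, action):
        self.flow_table[match] = action

    def forward(self, dst):
        # Unknown traffic is punted to the controller (OpenFlow-style).
        return self.flow_table.get(dst, "send-to-controller")


class Controller:
    """Centralized intelligence: one place to compute and push decisions."""
    def __init__(self, switches):
        self.switches = switches

    def push_route(self, dst, action):
        for sw in self.switches:  # one update reaches every device
            sw.install_flow(dst, action)


switches = [SdnSwitch() for _ in range(3)]
ctrl = Controller(switches)
ctrl.push_route("10.0.0.5", "port-2")
print([sw.forward("10.0.0.5") for sw in switches])  # consistent on all devices
```

In the legacy model, changing network behavior means touching every device's local state; in the SDN model, the same change is a single controller operation, which is the architectural shift this paper examines.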
Legacy networks have become difficult to automate [1, 2]. Networks today depend on IP addresses to identify and locate servers and applications. This approach works well for static networks, where each physical device is recognizable by an IP address, but it is extremely laborious for large virtual networks. Managing such complex environments with traditional networking is time-consuming and expensive, especially in the case of virtual machine (VM) migration and network reconfiguration. To simplify the task of managing large virtualized networks, administrators must resolve the physical-infrastructure concerns that increase management complexity. In addition, most modern-day vendors use control-plane software to optimize data flow to achieve high performance and competitive advantage [2]. This switch-based control-plane paradigm gives network administrators very little opportunity to increase data-flow efficiency across the network as a whole. The rigid structure of legacy networks prohibits the programmability needed to meet the variety of client requirements, sometimes forcing vendors into deploying complex and fragile programmable management systems. In addition, vast teams of network administrators are employed to make thousands of manual changes to network components [2, 3].
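The VM-migration pain point above can be made concrete with a minimal sketch. This is a hypothetical model, not any vendor's API: the function names and data structures are invented for illustration. It shows that with device-local state, moving one VM requires one manual change per switch, whereas a central controller reduces the administrative action to a single update of its global view.

```python
def migrate_vm_manually(switch_configs, vm_ip, new_port):
    """Legacy model: an administrator edits each device's local config."""
    changes = 0
    for config in switch_configs:   # one manual change per network node
        config[vm_ip] = new_port
        changes += 1
    return changes                  # administrative effort grows with network size


def migrate_vm_via_controller(controller_state, vm_ip, new_port):
    """SDN model: one update to the central view; the controller derives
    and pushes the per-device flow entries automatically."""
    controller_state["locations"][vm_ip] = new_port
    return 1                        # constant administrative effort


switch_configs = [{} for _ in range(100)]
print(migrate_vm_manually(switch_configs, "10.0.0.7", "port-4"))          # 100
controller_state = {"locations": {}}
print(migrate_vm_via_controller(controller_state, "10.0.0.7", "port-4"))  # 1
```

The point of the sketch is the scaling behavior: manual reconfiguration is O(n) in the number of devices, which is why large virtualized networks demand the centralized, programmable control plane that SDN provides.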