- The Evolution of Network Architecture
- NFV Architectural Framework
- Benefits of NFV
- NFV Market Drivers
- Summary
- References
- Review Questions
NFV Architectural Framework
The architecture that defines traditional network devices is fairly basic, because both the hardware and software are customized and tightly integrated. In contrast, NFV allows software developed by the vendors to run on generic shared hardware, creating multiple touch points for management.
The NFV architectural framework is developed to ensure that these touch points are standardized and compatible between the implementations of different vendors. This section provides a comprehensive discussion on the framework and the rationale behind its blocks. Understanding the framework enables readers to envision the flexibility and freedom of choice that NFV has to offer.
Need for a Framework
In NFV jargon, the virtual implementation of a network function is referred to as a virtualized network function (VNF). A VNF is meant to perform a certain network function (e.g., router, switch, firewall, load balancer), and a combination of these VNFs may be required to implement the complete network segment that is being virtualized.
Different vendors may offer these VNFs, and the service providers can choose a combination of vendors and functions that best suit their needs. This freedom of choice creates the need for a standardized method of communication between the VNFs as well as a way to manage them in the virtual environment. The management of NFV needs to take into account the following considerations:
multivendor implementations of VNFs
managing the life cycles and interactions of these functions
managing the hardware resource allocations
monitoring the utilization
configuration of the VNFs
interconnection of the virtualized functions to implement the service
interaction with the billing and operational support systems
To implement these management roles and keep the system open and non-proprietary, a framework must be defined for standardization. This standard framework should ensure that a deployed VNF is not tied to specific hardware and does not need to be specially tailored for any environment. It should offer vendors a reference architecture that they can follow for consistency and uniformity in the deployment methodologies of any VNF they implement. Additionally, it needs to ensure that the management of these VNFs and of the hardware they run upon does not depend on any vendor. There should be no special tweaking required to implement the network functions in this heterogeneous ecosystem. Essentially, this framework must provide the architectural foundation that allows the VNFs, hardware, and management systems to work seamlessly within well-defined boundaries.
ETSI Framework for NFV
NFV was first introduced at the SDN OpenFlow World Congress in 2012 by a consortium of key service providers. They referenced the major challenges faced by network operators, especially their dependency on introducing new hardware for enabling innovative services to their customers. The group highlighted the challenges associated with the following concepts:
design changes around the new equipment
deployment cost and physical constraints
need for expertise to manage and operate the new proprietary hardware and software
dealing with hardware complexity in the new proprietary equipment
the short lifecycle that makes this equipment become obsolete rapidly
restarting the cycle before the returns from the capital expenses and investments are fully realized
The group proposed NFV as a way to tackle these challenges and improve efficiency by “leveraging standard IT virtualization technology to consolidate many network equipment types onto industry standard high volume servers, switches and storage, which could be located in Datacentres, Network Nodes and in the end user premises.”[3]
To realize this goal and define a set of specifications that would make it possible to move from the traditional vendor- and network-centric approach to an NFV-based network, seven of these leading telecom operators formed an Industry Specification Group (ISG) under an independent standardization organization called the European Telecommunications Standards Institute (ETSI).[1]
This group formally started its work in early 2013, aiming to define requirements and an architectural framework that could support the virtualized implementation of network functions performed by custom hardware devices from vendors.
This group used three key criteria for coming up with the recommendations:
Decoupling: complete separation of hardware and software
Flexibility: automated and scalable deployment of the network functions
Dynamic operations: control of the operational parameters of the network functions through granular control and monitoring of the state of the network
Based on these criteria, a high-level architectural framework was established, defining distinct areas of focus as shown in Figure 1-3.

Figure 1-3 High-Level ETSI NFV Framework
This architectural framework forms the basis of the standardization and development work and is commonly referred to as the ETSI NFV framework. At a high level, the framework encompasses management of VNFs, relationships and interdependencies, data flow between VNFs, and resource allocation. ETSI ISG categorized these roles into three high-level blocks, namely the infrastructure block, virtualized functions block, and management block. In ETSI’s definition, the formal names of these blocks are defined as:
Network Functions Virtualization Infrastructure (NFVI) block: This block forms the foundation of the overall architecture. The hardware to host the virtual machines, the software to make virtualization possible, and the virtualized resources are grouped into this block.
Virtualized Network Function (VNF) block: The VNF block uses the virtual machines offered by NFVI and builds on top of them by adding the software implementing the virtualized network functions.
Management and Orchestration (MANO) block: MANO is defined as a separate block in the architecture that interacts with both the NFVI and VNF blocks. The framework delegates to the MANO layer the management of all the resources in the infrastructure layer; in addition, this layer creates and deletes resources and manages their allocation to the VNFs.
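The division of responsibility among these three blocks can be sketched as a small data model. All class and method names below are purely illustrative; the ETSI framework defines roles and interfaces, not programming APIs:

```python
# Illustrative sketch of the three high-level ETSI NFV blocks.
# Class and method names are hypothetical, not part of the ETSI spec.

class NFVI:
    """Infrastructure block: generic hardware plus the virtualization
    layer that carves virtual resources out of it."""
    def __init__(self, cpu_cores, memory_gb, storage_gb):
        self.free = {"cpu": cpu_cores, "memory": memory_gb, "storage": storage_gb}

    def allocate_vm(self, cpu, memory, storage):
        """Carve a virtual machine out of the shared hardware pool."""
        need = {"cpu": cpu, "memory": memory, "storage": storage}
        if any(self.free[k] < v for k, v in need.items()):
            return None  # not enough free capacity
        for k, v in need.items():
            self.free[k] -= v
        return dict(need)  # stands in for a VM handle

class VNF:
    """Virtualized functions block: network function software
    running on a VM offered by the NFVI."""
    def __init__(self, function, vm):
        self.function, self.vm = function, vm

class MANO:
    """Management block: interacts with both the NFVI and VNF blocks
    to create resources and allocate them to VNFs."""
    def __init__(self, nfvi):
        self.nfvi, self.vnfs = nfvi, []

    def deploy(self, function, cpu, memory, storage):
        vm = self.nfvi.allocate_vm(cpu, memory, storage)
        if vm is None:
            raise RuntimeError(f"insufficient NFVI resources for {function}")
        vnf = VNF(function, vm)
        self.vnfs.append(vnf)
        return vnf
```

In this sketch, deploying a firewall VNF amounts to the management block asking the infrastructure block for a suitably sized VM, for example `MANO(NFVI(16, 64, 500)).deploy("firewall", 2, 4, 20)`.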
Understanding the ETSI Framework
The ETSI framework and the thought process behind its high-level blocks can be better understood by examining the building process that led to this framework. Let's begin with the fundamental concept of NFV: virtualizing the function of a network device. This is achieved through VNFs.
To implement the network service, VNFs may be deployed either as standalone entities or as a combination of multiple VNFs. The protocols associated with the function that is being virtualized within a VNF do not need to be aware of the virtualized implementation. As shown in Figure 1-4, the VNFs implementing the firewall service (FW), NAT device (NAT), and routing (RTR) communicate with each other without the knowledge that they are not physically connected or running on dedicated physical devices.

Figure 1-4 Network Functions Working Together as VNFs
Since there isn’t dedicated or custom hardware designed to run these VNFs, a general-purpose hardware device with generic hardware resources such as a processor (CPU), storage, memory, and network interfaces can be used to run them. This is made possible by using commercial off-the-shelf (COTS) hardware. It doesn’t need to be a single COTS device; it can be an integrated hardware solution providing any combination of the required hardware resources to run the VNFs. Virtualization technologies can be used to share the hardware among multiple VNFs. These technologies, such as hypervisor-based virtualization and container-based virtualization, have been used in data centers for some time and have become fairly mature. These details are covered in Chapter 2, “Virtualization Concepts.”
Virtualization of hardware offers an infrastructure for the VNF to run upon. This NFV infrastructure (NFVI) can use COTS hardware as a common pool of resources and carve out subsets of these resources, creating “virtualized” compute, storage, and network pools that can be allocated as required by the VNFs, as shown in Figure 1-5.

Figure 1-5 Virtual Computing, Storage, and Networking Resources Provided to VNF
The vendor for the VNF recommends a minimum requirement for the resources that its implementation should have available, but the vendor can’t control or optimize these hardware parameters. For instance, the vendor can make a recommendation on the CPU cores necessary to execute the code or the storage space and memory the VNF will need, but vendors no longer get a free hand to design the hardware around their specific requirements. The virtualization layer, using the physical hardware, caters to the VNF’s resource request. The VNF doesn’t have any visibility into this process, nor is it aware of the existence of other VNFs that may be sharing the physical hardware with it.
In this virtualized network architecture, there are now multiple resources to manage and operate at various levels. In comparison, today’s network architecture management is vendor specific, with limited knobs and data points offered by vendors. Any new requirements or enhancements in management capabilities are possible only with vendor support. With NFV it is possible to manage the entities at a more granular and individual level. The NFV architecture, therefore, wouldn’t be complete without defining the methodologies to manage, automate, coordinate, and interconnect these layers and functional blocks in an agile, scalable, and automated way.
This requirement leads us to add another functional block to the framework that communicates with and manages both the VNF and NFVI blocks, as shown in Figure 1-6. This block manages the deployment and interconnections of the VNFs on the COTS hardware and allocates the hardware resources to these VNFs.

Figure 1-6 Management and Orchestration Block for NFV
Since the MANO block is meant to have full visibility of the entities and is responsible for managing them, it is fully aware of their utilization, operational state, and usage statistics. That makes MANO the most suitable interface for the operational and billing systems to gather utilization data.
This completes the step-by-step understanding of the three high-level blocks—NFVI, VNF, and MANO—and captures the reasoning behind defining and positioning these blocks in the ETSI framework.
A Closer Look at ETSI’s NFV Framework
The previous section provides a high-level view of the ETSI NFV architectural framework and its basic building blocks. The framework defined by ETSI goes deeper into each of these blocks and defines individual functional blocks, each with a distinct role and responsibility. The high-level blocks, therefore, comprise multiple functional blocks. For instance, the management block (MANO) is defined as a combination of three functional blocks: the Virtualized Infrastructure Manager (VIM), the Virtualized Network Function Manager (VNFM), and the NFV Orchestrator (NFVO).
The architecture also defines reference points for the functional blocks to interact, communicate and work with each other. Figure 1-7 shows the detailed view of the framework as defined by ETSI.

Figure 1-7 Low Level View of the ETSI NFV Framework
This section takes a deeper look into this framework and reviews the suggested functions, the interworking of each of these functional blocks, and their interlinking through the reference points.
For convenience of understanding, these functional blocks are grouped into layers, where each layer deals with a particular aspect of NFV implementation.
Infrastructure Layer
The VNFs rely on the availability of virtual hardware, emulated by software resources running on physical hardware. In the ETSI NFV framework, this is made possible by the infrastructure block (NFVI). This infrastructure block comprises physical hardware resources, the virtualization layer, and the virtual resources, as shown in Figure 1-8.

Figure 1-8 Infrastructure Layer of ETSI NFV Framework
The ETSI framework splits the hardware resources into three main categories: computing, storage, and network. The computing hardware includes both the CPU and memory, which may be pooled between hosts using cluster-computing techniques. Storage can be locally attached or distributed, with devices such as network-attached storage (NAS) or devices connected using SAN technologies. Networking hardware comprises pools of network interface cards and ports that can be used by the VNFs. None of this hardware is purpose-built for any particular network function; all items are instead generic COTS hardware devices. These functional blocks can span and scale across multiple devices and interconnected locations and are not confined to a single physical host, location, or point of presence (POP).
It must be mentioned that the networking hardware within a physical location interconnecting the storage and compute devices, or interconnecting multiple locations (such as switches, routers, optical transponders, and wireless communication equipment), is also considered part of NFVI. However, these network devices are not part of the pool that is allocated as virtual resources to VNFs.
The virtualization layer is another functional block within NFVI. It interacts directly with the pool of hardware devices, making them available to VNFs as virtual machines. A virtual machine offers virtualized computing, storage, and networking resources to any software it hosts (a VNF in this case) and presents these resources to the VNF as if they were dedicated physical hardware devices.
In summary, it is the virtualization layer that decouples the network function software (i.e., the VNF) from the hardware, while providing isolation from other VNFs and acting as the interface to the physical hardware.
To manage NFVI, ETSI defines a management functional block called the Virtualized Infrastructure Manager (VIM). The VIM is part of the MANO (Management and Orchestration) block, and the framework delegates to it the responsibility for managing the computing, storage, and networking hardware, the software implementing the virtualization layer, and the virtualized hardware. Because the VIM directly manages the hardware resources, it has a full inventory of them and visibility into their operational attributes (such as power management, health status, and availability), as well as the capacity to monitor their performance attributes (such as utilization statistics).
The VIM also manages the virtualization layer and controls and influences how the virtualization layer uses the hardware. The VIM is therefore responsible for the control of NFVI resources; it works with other management functional blocks to determine their requirements and then manages the infrastructure resources to fulfill them. The VIM’s management scope may be within a single NFVI-POP or spread across the entire domain spanned by the infrastructure.
An instance of VIM may not be restricted to a single NFVI layer. It is possible that a single VIM implementation controls multiple NFVI blocks. Conversely, the framework also allows for the possibility that multiple VIMs can function in parallel and control several separate hardware devices. These VIMs can be in a single location or different physical locations.
Virtualized Network Functions (VNF) Layer
The VNF layer is where the virtualization of network functions is implemented. It comprises the VNF block and the functional block that manages it, called the VNF Manager (VNFM). The VNF block is defined as a combination of the VNF and Element Management (EM) blocks, as shown in Figure 1-9.

Figure 1-9 Virtualized Network Function Layer in ETSI NFV Framework
A virtualized implementation of a network function needs to be developed so it can run on any hardware that has sufficient computing, storage, and network interfaces. However, the details of the virtualized environment are transparent to the VNF, which is expected to be unaware that the generic hardware it is running on is actually a virtual machine. The behavior and external interface of the VNF are expected to be identical to those of the physical implementation of the network function and device being virtualized.
The network service being virtualized may be implemented through a single VNF, or it may require multiple VNFs. When a group of VNFs collectively implements the network service, it is possible that some of the functions have dependencies on others, in which case the VNFs need to process the data in a specific sequence. When a group of VNFs doesn’t have any interdependency, that group is referred to as a VNF set. An example is a mobile virtual Evolved Packet Core (vEPC), where the Mobility Management Entity (MME) is responsible for authenticating the user and choosing the Serving Gateway (SGW). The SGW runs independently of the MME’s function and forwards user data packets. These VNFs work collectively to offer part of the functionality of vEPC but implement their functions independently.
If, however, the network service requires VNFs to process the data in a specific sequence, then the connectivity between the VNFs needs to be defined and deployed to ensure it. This is referred to as VNF-Forwarding-Graph (VNF-FG) or service chaining. In the previous example of vEPC, if you added another VNF that provides Packet Data Network Gateway (PGW) functionality, that PGW VNF should only process the data after the SGW. As shown in Figure 1-10, this interconnection between SGW, MME, and PGW in this specific order for packet flow makes a VNF-FG. This idea of service chaining is important in the NFV world and requires a more detailed discussion. This topic is covered in depth in Chapter 6, “Stitching It All Together.”

Figure 1-10 Virtual Evolved Packet Core (vEPC) using VNF-FG
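The difference between a VNF set and a VNF-FG can be sketched as follows. The element names (MME, SGW, PGW) come from the vEPC example above; the chain representation itself is only an illustration:

```python
# A VNF set has no ordering constraint; a VNF forwarding graph
# (VNF-FG) fixes the sequence in which VNFs process traffic.

vnf_set = {"MME", "SGW"}   # independent functions: no forced ordering

vnf_fg = ["SGW", "PGW"]    # service chain: SGW must precede PGW

def forward(packet, chain):
    """Pass a packet through each VNF in the chain, in order."""
    for vnf in chain:
        packet = f"{packet}->{vnf}"  # stand-in for the VNF's processing
    return packet
```

Calling `forward("pkt", vnf_fg)` traces the packet as `pkt->SGW->PGW`, reflecting the fixed processing order that distinguishes a forwarding graph from a set.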
In the ETSI framework, it is the VNFM’s responsibility to bring up the VNF and manage the scaling of its resources. When the VNFM must instantiate a new VNF or add or modify the resources available to a VNF (for example, more CPU or memory), it communicates that requirement to the VIM. In turn, the VIM requests that the virtualization layer modify the resources allocated to the VM that is hosting the VNF. Since the VIM has visibility into the inventory, it can also determine whether the current hardware can cater to these additional needs. Figure 1-11 shows this flow of events.

Figure 1-11 VNFM Scaling Up VNF Resources
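The scale-up flow just described can be sketched as follows. The class and method names are hypothetical; the framework defines the interactions between the VNFM and VIM, not a concrete API:

```python
class VIM:
    """Holds the inventory of free NFVI resources."""
    def __init__(self, free_cpu, free_mem):
        self.free_cpu, self.free_mem = free_cpu, free_mem

    def resize_vm(self, vm, extra_cpu, extra_mem):
        # The VIM checks its inventory before asking the virtualization
        # layer to grow the VM that hosts the VNF.
        if extra_cpu > self.free_cpu or extra_mem > self.free_mem:
            return False
        self.free_cpu -= extra_cpu
        self.free_mem -= extra_mem
        vm["cpu"] += extra_cpu
        vm["mem"] += extra_mem
        return True

class VNFM:
    """Scales a VNF's resources by requesting VM changes via the VIM."""
    def __init__(self, vim):
        self.vim = vim

    def scale_up(self, vnf_vm, extra_cpu, extra_mem):
        if not self.vim.resize_vm(vnf_vm, extra_cpu, extra_mem):
            raise RuntimeError("VIM reports insufficient NFVI capacity")
        return vnf_vm
```

Note that the VNFM never touches the hardware inventory directly: it states the requirement, and the VIM either satisfies it or rejects it based on its own view of the NFVI.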
The VNFM also has responsibility for the FCAPS (fault, configuration, accounting, performance, and security) management of the VNFs. It manages this either directly, by communicating with the VNFs, or through the Element Management (EM) functional block.
Element Management is another functional block defined in the ETSI framework and is meant to assist in implementing the management functions of one or more VNFs. The management scope of the EM is analogous to that of the traditional element management system (EMS), which serves as a layer of interaction between the network management system and the devices performing network functions. The EM interacts with the VNFs using proprietary methods while employing open standards to communicate with the VNFM. This provides the VNFM with a proxy for the operations and management of the VNFs, as shown in Figure 1-12. FCAPS is still managed by the VNFM, but it can take support from the EM to interact with the VNF for this aspect of management.

Figure 1-12 VNFM Managing VNF Directly or through EM
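The EM's proxy role can be sketched as follows, with the EM translating an open, standards-based interface into a vendor-proprietary one. All names here are hypothetical:

```python
class VNF:
    """A VNF exposing only a vendor-proprietary management call."""
    def vendor_set(self, key, value):
        setattr(self, key, value)

class ElementManager:
    """EM: translates open, standards-based requests from the VNFM
    into the VNF's proprietary interface."""
    def __init__(self, vnf):
        self.vnf = vnf

    def configure(self, key, value):  # open interface toward the VNFM
        self.vnf.vendor_set(key, value)

class VNFM:
    """Manages FCAPS through the EM when the VNF has no open interface."""
    def configure_vnf(self, em, key, value):
        em.configure(key, value)
```

The VNFM speaks only the open `configure` interface; the EM absorbs whatever proprietary mechanics the VNF vendor requires.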
The framework doesn’t restrict the implementation to a single VNFM managing all the VNFs. It is possible that the vendor that owns a VNF requires its own VNFM to manage it. Therefore, there can be NFV deployments where a single VNFM manages multiple VNFs or where multiple VNFMs manage separate VNFs, as shown in Figures 1-13 and 1-14.

Figure 1-13 Single VNFM Managing Multiple VNFs

Figure 1-14 Multiple VNFMs Managing Separate VNFs
Operational and Orchestration Layer
When moving from physical to virtual devices, network operators do not want to revamp the management tools and applications that may be deployed for operational and business support systems (OSS/BSS). The framework doesn’t require a change in these tools as part of the transformation to NFV. It allows them to continue to manage the operational and business aspects of the network and to work with the devices even though the devices are replaced by VNFs. While this is in line with what is desired, using the existing systems has its drawbacks: they don’t fully reap the benefits of NFV, and they are not designed to communicate with NFV’s management functional blocks, the VNFM and VIM. One path providers can take is to enhance and evolve the existing tools and systems to use the NFV management functional blocks and exploit the NFV benefits (such as elasticity and agility). That’s a viable approach for some, but it is not a feasible option for others, because these systems are traditionally built in-house or are proprietary implementations that do not lend themselves to managing an open platform like NFV.
The solution that the ETSI framework offers is another functional block, the NFV Orchestrator (NFVO). It extends the current OSS/BSS and manages the operational aspects and deployment of the NFVI and VNFs. Figure 1-15 shows the two components of the orchestration layer in the framework.

Figure 1-15 Operational and Orchestration Layer of ETSI NFV Framework
The role of the NFVO is not obvious up front; it may seem like an extra block buffering between the current operating tools and the VIM and VNFM. The NFVO, however, has a critical role in the framework: it oversees the end-to-end service deployment, parses the bigger picture of service virtualization, and communicates the needed pieces of information to the VIM and VNFM for implementing that service.
The NFVO also works with the VIM(s) and has a full view of the resources they are managing. As indicated previously, there can be multiple VIMs, and each of them has visibility only into the NFVI resources it is managing. Since the NFVO has the collective information from these VIMs, it can coordinate resource allocation through them.
Similarly, the VNFM independently manages the VNFs and doesn’t have visibility into the connection of services between the VNFs or how the VNFs combine to form the end-to-end service path. This knowledge resides in the NFVO, and it’s the role of the NFVO to work through the VNFM to create the end-to-end service between the VNFs. It is therefore the NFVO that has visibility into the network topology formed by the VNFs for a service instance.
Despite not being a part of the NFV transformation, the existing OSS/BSS do bring value to management and therefore have a place in the framework. The framework defines the reference points between the existing OSS/BSS and NFVO and defines NFVO as an extension of the OSS/BSS to manage the NFV deployment without attempting to replace any of the roles of OSS/BSS in today’s networks.
NFV Reference Points
The ETSI framework defines reference points to identify the communication that must occur between the functional blocks. Identifying and defining these is important to ensure that the flow of information is consistent across vendor implementations of the functional blocks. It also helps establish an open and common way to exchange information between the functional blocks. Figure 1-16 shows the reference points defined by the ETSI NFV framework.

Figure 1-16 ETSI NFV Framework Reference Points
The list that follows describes these reference points in more detail.
Os-Ma-nfvo: This was originally labeled Os-Ma and is meant to define the communication between OSS/BSS and NFVO. This is the only reference point between OSS/BSS and the management block of NFV (MANO).
Ve-Vnfm-vnf: This reference point defines the communication between VNFM and VNF. It is used by VNFM for VNF lifecycle management and to exchange configuration and state information with the VNF.
Ve-Vnfm-em: This was originally defined together with Ve-Vnfm-vnf (jointly labeled Ve-Vnfm) but is now defined separately for communication between the VNFM and EM functional blocks. It supports VNF lifecycle management, fault and configuration management, and other functions, and it is only used if the EM is aware of virtualization.
Nf-Vi: This reference point defines the information exchange between VIM and the functional blocks in NFVI. VIM uses it to allocate, manage, and control the NFVI resources.
Or-Vnfm: Communication between NFVO and VNFM happens through this reference point, such as VNF instantiation and other VNF lifecycle-related information flow.
Or-Vi: The NFV orchestrator (NFVO) is defined to have a direct way of communicating with VIM to influence the management of the infrastructure resources, such as resource reservation for VMs or VNF software addition.
Vi-Vnfm: This reference point is meant to define the standards for information exchange between VIM and VNFM, such as resource update request for VM running a VNF.
Vn-Nf: This is the only reference point that doesn’t have a management functional block as one of its boundaries. This reference point is meant to communicate performance and portability needs of the VNF to the infrastructure block.
Table 1-1 summarizes these reference point definitions:
Table 1-1 ETSI NFV Framework Reference Points

Reference Point | Boundaries | Use Defined in the Framework
Os-Ma-nfvo | OSS/BSS <-> NFVO | Communication between OSS/BSS and the NFV management block (MANO)
Ve-Vnfm-vnf | VNFM <-> VNF | VNF lifecycle management and exchange of configuration and state information
Ve-Vnfm-em | VNFM <-> EM | VNF lifecycle, fault, and configuration management when the EM is virtualization-aware
Nf-Vi | NFVI <-> VIM | Allocation, management, and control of NFVI resources
Or-Vnfm | NFVO <-> VNFM | VNF instantiation and other VNF lifecycle-related information flow
Or-Vi | NFVO <-> VIM | Infrastructure resource management, such as resource reservation for VMs or VNF software addition
Vi-Vnfm | VIM <-> VNFM | Resource exchange, such as a resource update request for a VM running a VNF
Vn-Nf | NFVI <-> VNF | Communication of performance and portability needs of the VNF to the infrastructure block
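The boundaries listed in Table 1-1 can also be captured in a small lookup structure; the helper function below is purely illustrative:

```python
# Reference points and the functional blocks they connect, taken from
# Table 1-1. The helper looks up the reference point between two blocks.

REFERENCE_POINTS = {
    "Os-Ma-nfvo": ("OSS/BSS", "NFVO"),
    "Ve-Vnfm-vnf": ("VNFM", "VNF"),
    "Ve-Vnfm-em": ("VNFM", "EM"),
    "Nf-Vi": ("NFVI", "VIM"),
    "Or-Vnfm": ("NFVO", "VNFM"),
    "Or-Vi": ("NFVO", "VIM"),
    "Vi-Vnfm": ("VIM", "VNFM"),
    "Vn-Nf": ("NFVI", "VNF"),
}

def reference_point(block_a, block_b):
    """Return the reference point connecting two functional blocks, if any."""
    for name, ends in REFERENCE_POINTS.items():
        if {block_a, block_b} == set(ends):
            return name
    return None
```

For example, `reference_point("VNFM", "VIM")` returns `"Vi-Vnfm"`, while a pair with no defined reference point, such as OSS/BSS and VNF, returns `None`.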
Putting It All Together
Let’s see how this model works end to end, taking the example of a simple network service and examining how the functional blocks defined in the ETSI framework collectively interact to implement the service. Figure 1-17 shows a simplified version of the steps involved.

Figure 1-17 End-to-End Flow in the ETSI NFV Framework
The following steps depict this process:
Step 1. The full view of the end-to-end topology is visible to the NFVO.
Step 2. The NFVO instantiates the required VNFs and communicates this to the VNFM.
Step 3. The VNFM determines the number of VMs needed, as well as the resources each of them will require, and responds to the NFVO with these requirements so the VNF creation can be fulfilled.
Step 4. Because the NFVO has information about the hardware resources, it validates whether enough resources are available for the VMs to be created. The NFVO then needs to initiate a request to have these VMs created.
Step 5. The NFVO sends a request to the VIM to create the VMs and allocate the necessary resources to them.
Step 6. VIM asks the virtualization layer to create these VMs.
Step 7. Once the VMs are successfully created, VIM acknowledges this back to NFVO.
Step 8. NFVO notifies VNFM that the VMs it needs are available to bring up the VNFs.
Step 9. The VNFM now configures the VNFs with any VNF-specific parameters.
Step 10. Upon successful configuration of the VNFs, VNFM communicates to NFVO that the VNFs are ready, configured, and available to use.
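The ten steps above can be condensed into a short sketch of the message flow. All class and method names are hypothetical; the framework specifies the interactions, not concrete APIs:

```python
# Condensed sketch of the end-to-end flow in Figure 1-17.

class VIM:
    def __init__(self, free_cpu):
        self.free_cpu = free_cpu

    def create_vms(self, requirements):              # steps 5-7
        vms = []
        for cpu in requirements:
            self.free_cpu -= cpu
            vms.append({"cpu": cpu})
        return vms                                   # acknowledgement to NFVO

class VNFM:
    def vm_requirements(self, vnf_specs):            # step 3
        return [spec["cpu"] for spec in vnf_specs]

    def deploy_and_configure(self, vnf_specs, vms):  # steps 8-10
        return [{"name": s["name"], "vm": vm, "configured": True}
                for s, vm in zip(vnf_specs, vms)]

class NFVO:
    def __init__(self, vnfm, vim):
        self.vnfm, self.vim = vnfm, vim

    def instantiate_service(self, vnf_specs):        # step 1: NFVO owns topology
        reqs = self.vnfm.vm_requirements(vnf_specs)            # steps 2-3
        if sum(reqs) > self.vim.free_cpu:                      # step 4
            raise RuntimeError("insufficient NFVI resources")
        vms = self.vim.create_vms(reqs)                        # steps 5-7
        return self.vnfm.deploy_and_configure(vnf_specs, vms)  # steps 8-10
```

Here the NFVO drives the flow end to end, delegating VM creation to the VIM and VNF bring-up and configuration to the VNFM, mirroring the step-by-step description above.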
Figure 1-17 and the accompanying list depict a simplified flow as an example to help in understanding the framework. They intentionally do not go into the many additional details and possible variations associated with this process. Though these are not covered in this book, readers can refer to the ETSI document (Section 5 in [2]) for additional details and scenarios.
NFV Framework Summary
The goal of defining the framework and more specifically the individual functional blocks and the reference points is to eliminate (or more realistically, minimize) interoperability challenges and standardize the implementation. The purpose and scope of each of these blocks is well defined in the framework. Similarly, the interdependencies and communications paths are defined through the reference-points and are meant to be open and standard methods.
Vendors can independently develop these functions and deploy them to work smoothly with functional blocks developed by other vendors. As long as these implementations adhere to the scope and roles defined by the framework and communicate with the other blocks using open methods at the reference points, the network can have a heterogeneous deployment of NFV. This means that service providers have complete flexibility to choose between vendors for different functional blocks. This is in contrast to the way networks have traditionally been deployed, where service providers were tied to a vendor’s hardware (and its limitations) and software (and the challenges of adapting to it for all operational needs), and had to deal with the interoperability concerns of mixed-vendor networks. NFV offers service providers the ability to overcome this limitation and deploy a scalable and agile network using hardware and NFV functional blocks from any combination of vendors.
This doesn’t magically eliminate the higher-level protocol interoperability issues that may arise between VNFs implemented by different vendors. For example, a BGP implementation by the vendor of one VNF may have issues when peering with another VNF developed by a different vendor. For these types of interoperability issues, a standardization process already exists and will continue to play a role. Also, NFV doesn’t mandate that vendors offer an open and standard way to manage the configuration and monitoring of the VNFs; the EM in the NFV framework compensates for that. But in an implementation closer to the ideal, the operations support system should be able to work with the VNFs using standard methods. This is happening through a parallel technology shift towards software-defined networking (SDN). Though NFV and SDN are not interdependent, they complement each other’s benefits and advantages. This book focuses on NFV, but the picture is not complete without some discussion of SDN and how these two technologies complement each other.
Though the NFV framework is well established, the standardization of NFV building blocks is an ongoing effort.