The Case for Express Virtio (XVIO) – Part 2

  • November 08, 2016
  • by Corigine

In part 1, my colleague Sujal Das reviewed the operational challenges that data center operators spoke about at the OpenStack Summit last month. The resounding common theme was poor server infrastructure efficiency. Sujal examined the underlying reasons for these challenges, starting with the wide variety of VM workload requirements, including networking options and CPU requirements. Because none of the current server-based networking solutions makes it possible to deploy and manage a homogeneous server environment, data center operators must run a heterogeneous environment with high CAPEX and OPEX. Even then, the infrastructure still does not deliver the business agility needed to quickly deploy and expand new revenue streams.

In this part we will do two things:

  • Revisit the impact the Data Plane Development Kit (DPDK) has had on server-based networking, VM networking performance and resource utilization, and
  • Review the solution to the operational challenges discussed in part 1.

DPDK: Robbing Peter to Pay Paul

OVS or vRouter datapaths implemented in user space with DPDK have provided some relief for VM profiles that require more I/O bandwidth, and DPDK proponents tout it as a way to solve performance bottlenecks in the world of NFV. However, applications running in VMs (virtual network functions, or VNFs, for example) must share the server's CPU cores with the cores allocated to run the DPDK-based OVS or vRouter datapath, so one quickly runs into a "robbing Peter to pay Paul" scenario. To service one kind of VM profile, an operator may allocate eight cores to DPDK OVS or vRouter; for another profile, four cores; for yet another, 12 cores. The VM profile that needs 12 cores for DPDK OVS or vRouter most likely also needs the largest number of CPU cores to run its own workload. Distributing cores efficiently then becomes a challenge, and the problem is exacerbated when a mix of VM profiles shares the same server, some requiring more CPU cores than others and some requiring less bandwidth than others.
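
This trade-off shows up directly in host configuration. The sketch below uses standard OVS-DPDK and OpenStack Nova settings; the core counts, mask value and host size are hypothetical, chosen only to illustrate the point:

    # Enable the DPDK datapath in Open vSwitch
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

    # Pin OVS PMD (poll mode driver) threads to cores 2-5 (mask 0x3C).
    # These cores now busy-poll the NIC at 100% and are lost to VMs.
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3C

    # nova.conf on the same (hypothetical) 16-core host: keep guest
    # vCPUs off the cores reserved for the datapath and the host OS
    # [DEFAULT]
    # vcpu_pin_set = 6-15

Every core added to pmd-cpu-mask to serve a higher-bandwidth VM profile must be removed from vcpu_pin_set, which is exactly the Peter-to-Paul trade described above.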

Express Virtio (XVIO) to the rescue

An industry first, Netronome's Express Virtio (XVIO) technology eliminates the significant operational, performance and server efficiency challenges highlighted above. XVIO brings SR-IOV-class performance to the standard Virtio drivers available in most guest operating systems, while maintaining full flexibility in terms of VM mobility and the full gamut of network services provided by OVS and vRouter. VMs managed using OpenStack thus experience SR-IOV-like networking performance while enjoying complete hardware independence and seamless customer VM onboarding.
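
To see why staying on the standard Virtio path preserves mobility, compare the two guest interface definitions below. This is a minimal libvirt sketch; the bridge name and PCI address are illustrative. The Virtio device is an ordinary paravirtual NIC that the hypervisor can detach and re-attach during live migration, while the SR-IOV variant passes a physical virtual function (VF) into the guest and ties the VM to one host:

    <!-- Standard Virtio interface: migratable, unchanged under XVIO -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>

    <!-- SR-IOV VF passthrough: fast, but pinned to this host's PCI device -->
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x1'/>
      </source>
    </interface>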

The following figures illustrate the operational efficiencies that XVIO brings for SDN-based data center infrastructure deployments.

Figure 1 shows how XVIO fares against VM data delivery mechanisms such as DPDK and SR-IOV when the OVS or vRouter datapath is implemented in the Netronome Agilio SmartNIC and server networking platform. The figure maps flexibility, in terms of customer VM onboarding and live VM migration, against the performance delivered to VMs.

Figure 1: Highly Flexible XVIO - rapid customer VM onboarding, live VM migration

Figure 2 makes the same comparison, this time mapping rich SDN-based features, such as policy rules with ACLs or security groups, flow-based analytics, and load balancing, against the performance delivered to VMs.

Figure 2: Rich Networking Services with XVIO - policy rules with ACLs, flow-based analytics, and load balancing

Figure 3 completes the picture, mapping server efficiency, measured by the CPU cores freed up for applications and VMs, against the performance delivered to VMs.

Figure 3: High Server Efficiency with XVIO - freeing up CPU cores for applications and VMs

The advanced XVIO technology builds upon industry-standard and open source technologies such as SR-IOV, Virtio and DPDK, all supported by OpenStack. The XVIO software components are transparent and integrate easily with open source and commercial server networking software such as OVS, the Linux firewall and Contrail vRouter. VMs and their applications do not require any changes, and all popular guest operating systems with standard Virtio drivers are supported.
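
Because XVIO presents the standard Virtio device model, nothing inside the guest changes; a typical Linux VM simply reports the stock virtio_net driver for its interface. The output below is illustrative:

    # Inside the guest: the NIC is a stock Virtio device, no vendor driver
    $ ethtool -i eth0
    driver: virtio_net
    version: 1.0.0
    bus-info: 0000:00:03.0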

XVIO implemented in the Netronome Agilio server networking platform reduces operational complexity. For the cloud service provider, the benefit of utilizing OpenStack cloud orchestration is a consistent and homogeneous infrastructure where VMs can be placed and moved to optimize utilization of the data center while maintaining high performance. This is depicted in Figure 4.

Figure 4: With XVIO, servers are configured the same way; VMs with different profiles can be migrated and placed most efficiently

Summary

Private and public cloud deployments use SDN and cloud orchestration based on OpenStack or operator-developed centralized SDN controllers, and they leverage networking and security services delivered by OVS and Contrail vRouter running in servers. To scale virtualized server-based network performance in cloud deployments, the industry has employed a number of acceleration mechanisms. DPDK requires changes to applications and VMs, consumes CPU cores that would otherwise run workloads (adversely affecting server efficiency), and cannot use key Linux kernel-based networking services. SR-IOV, a PCI-SIG technology, limits VM mobility and the availability of the networking and security services VMs need; XVIO delivers the same level of performance without those limits. The Netronome Agilio server networking platforms with XVIO technology deliver a simple deployment model that removes these barriers and makes the adoption of networking accelerators such as SmartNICs economical and practical, significantly reducing CAPEX and OPEX. In short, XVIO with the Netronome Agilio server networking platform makes OpenStack, and cloud networking in general, faster and more economical.
