Why Asking "Is Hardware Dead?" is a Dead End
A white paper by Håkon Dahle, CTO, Pexip
December 9, 2013

Proponents of software-based enterprise applications sometimes claim that “hardware is dead”, meaning that certain enterprise applications no longer need custom hardware in order to deliver the required performance.

In the context of multipoint video conferencing and collaboration, Pexip’s multipoint video conferencing server software is now able to leverage Intel-based servers to deliver performance surpassing the market-leading custom hardware architectures. However, there are other important aspects to consider as well, such as reliability, density, manageability and cost. Therefore we believe that simply discussing whether hardware is dead is a dead end.

The right discussion to have is “Have you selected the right hardware platform for your application?” This article will show that as far as multiparty video conferencing and collaboration is concerned, the time for custom hardware is over, and that the appropriate platform in terms of cost, reliability, density and performance is the combination of smart software, standard servers and virtualization.

Introduction

With standard off-the-shelf servers becoming ever more capable, we have seen a number of enterprise applications transition from custom hardware designs to running on standard servers:

  • Enterprise telephony – VoIP private branch exchanges
  • Audio conferencing – for hundreds of telephony participants
  • Media servers – general purpose media processing devices
  • Network edge devices – firewalls and session border controllers
  • Routers – software defined networking

With recent developments in server CPU architectures, it is now entirely possible to run even a media-processing-intensive application such as multiparty video conferencing on virtualized servers. In the past, these applications (also known as MCUs – Multipoint Conferencing Units) clearly required custom hardware based on high-performance Digital Signal Processors (DSPs), since the need for voice and video processing was so extreme:

  • Decoding multiple high definition (HD) video streams in different formats such as H.263 and H.264
  • Scaling and filtering of HD video streams
  • Compositing images
  • Rate-matching
  • Encoding multiple HD video streams
  • For audio: decoding, noise reduction, voice detection, gain control, mixing, encoding

The capacity of these MCUs is often measured in terms of “ports”, where the number of ports corresponds to the number of simultaneous participants that may be connected to one or more conferences. Each participant connects using high definition video (720p30) and wideband audio, and the audio and video streams from and to each participant must be fully decoded and encoded, as well as decrypted and encrypted.
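
To make the per-port workload concrete, the sketch below mirrors the pipeline listed above: every incoming stream is decrypted, decoded and normalized, and a tailored mosaic is composited, re-encoded and re-encrypted for every receiver. It is a minimal, runnable illustration in Python with each media operation reduced to a string-manipulating stub; all names are our own invention, not an actual MCU API.

    # Minimal sketch of the per-port MCU pipeline. Every media operation is
    # a stub; in a real MCU each step is a heavy DSP or CPU routine.
    from dataclasses import dataclass

    @dataclass
    class Participant:
        name: str
        codec: str        # e.g. "H.264" or "H.263"
        resolution: str   # e.g. "1280x720"

    def decrypt(packet):               # stub: SRTP decryption
        return packet.removeprefix("srtp:")

    def decode(stream, codec):         # stub: full video decode
        return f"raw[{codec}:{stream}]"

    def scale(frame, resolution):      # stub: scaling and filtering
        return f"{frame}@{resolution}"

    def composite(frames):             # stub: build the conference mosaic
        return "|".join(frames)

    def encode(mosaic, codec):         # stub: full video encode
        return f"{codec}({mosaic})"

    def encrypt(bitstream):            # stub: SRTP encryption
        return "srtp:" + bitstream

    participants = [Participant("alice", "H.264", "1280x720"),
                    Participant("bob",   "H.263", "1280x720")]

    # Every incoming stream is decrypted, decoded and normalized; then a
    # tailored mosaic is re-encoded and encrypted for every receiver, so the
    # MCU performs one full decode and one full encode per port.
    raw = [scale(decode(decrypt(f"srtp:{p.name}"), p.codec), p.resolution)
           for p in participants]
    for p in participants:
        print(p.name, "receives", encrypt(encode(composite(raw), p.codec)))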

Industry standard servers are powerful enough

Only five years ago, off-the-shelf servers could process no more than half a dozen HD ports in real time. Today, a dual-socket server using Intel’s E5-2600v2 “Ivy Bridge” CPUs can provide 50 HD ports per rack unit, far surpassing the capacity of market-leading custom hardware MCUs. Using off-the-shelf blade servers, one can install 1000 ports in a mere 10 rack units of space, 5 to 10 times the port density of market-leading hardware MCUs. Clearly, industry standard servers are powerful enough.
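
The arithmetic behind these figures is simple enough to spell out. The sketch below uses only the numbers quoted in this paragraph; the implied hardware MCU density simply follows from the stated 5-to-10-times ratio.

    # Port-density arithmetic using only the figures quoted above.
    ru_server_ports = 50              # dual-socket E5-2600v2 server, per rack unit
    blade_ports, blade_ru = 1000, 10  # blade deployment from the text
    blade_density = blade_ports / blade_ru
    print(f"1U rack server : {ru_server_ports} HD ports per rack unit")
    print(f"Blade chassis  : {blade_density:.0f} HD ports per rack unit")
    # "5 to 10 times the port density" implies the hardware MCUs deliver:
    print(f"Hardware MCUs  : {blade_density / 10:.0f}-{blade_density / 5:.0f} "
          f"HD ports per rack unit")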

However, we feel that simply claiming that “hardware is dead” is an extreme simplification.

The right discussion is: Have you selected the right hardware platform for your application? To answer that question, the important considerations are:

  1. Cost
  2. Reliability
  3. Density
  4. Ability to scale up the deployment

We will now look at each of these in some detail before concluding.

Cost

In terms of cost, there is little doubt that for a given unit of compute, the purchase cost of a server will be lower than the purchase cost of a hardware-based MCU. This is due to economies of scale: custom hardware MCUs are manufactured in volumes of only a few thousand per year, whereas servers are manufactured in volumes of close to ten million units per year. As Pexip has shown previously, a US$ 5000 server can host the Pexip Infinity application and provide 30 fully transcoded high definition video ports. Market-leading custom hardware vendors will charge five to ten times as much for the hardware portion alone of a similarly capable MCU.
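
A rough cost-per-port comparison follows directly from these figures. The sketch below uses the server price and port count quoted above, together with the stated five-to-ten-times hardware premium.

    # Acquisition cost per fully transcoded HD port, from the figures above.
    server_cost, server_ports = 5000, 30   # USD; Pexip Infinity on a standard server
    per_port = server_cost / server_ports
    print(f"Software on a standard server: ${per_port:,.0f} per port")
    for premium in (5, 10):                # hardware vendors charge 5-10x as much
        print(f"Custom hardware ({premium}x premium): "
              f"${premium * per_port:,.0f} per port")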

Cost of ownership is another important aspect to consider. While traditional hardware based MCUs are delivered with their own management, deployment and diagnostic systems, software based conferencing can leverage existing IT tools such as those delivered by VMware and Microsoft. Allowing the IT staff to use familiar tools reduces the need for training and reduces the need for hiring extra staff to manage the conferencing software.

Reducing bandwidth usage on expensive MPLS WAN circuits is not a benefit of software-based conferencing as such, but of the distributed architecture of the Pexip Infinity conferencing system. A typical deployment places Pexip conferencing virtual machines in multiple geographies, allowing local endpoints to connect to a local conferencing node, with the conferencing nodes then connected together. Between the conferencing nodes, only the current speakers are forwarded in high definition, while the other participants are forwarded as low-resolution live thumbnail views requiring a fraction of the original bandwidth. Hence, global conferences can save substantial amounts of WAN bandwidth.
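
As a back-of-the-envelope illustration of the saving, consider a single remote site in such a deployment. The bitrates and participant counts below are assumptions made for the example, not Pexip specifications; only the forwarding behavior (full resolution for current speakers, thumbnails for everyone else) comes from the description above.

    # Illustrative WAN-bandwidth estimate for one remote site. Bitrates and
    # counts are assumptions for the example, not Pexip specifications.
    participants = 20        # endpoints connected to the local conferencing node
    hd_kbps = 1500           # assumed 720p30 stream
    thumb_kbps = 150         # assumed low-resolution live thumbnail
    speakers = 2             # only current speakers cross the WAN in full HD

    centralized = participants * hd_kbps   # every stream crosses the WAN
    distributed = speakers * hd_kbps + (participants - speakers) * thumb_kbps

    print(f"Centralized MCU : {centralized / 1000:.1f} Mbps across the WAN")
    print(f"Distributed     : {distributed / 1000:.1f} Mbps across the WAN")
    print(f"Saving          : {100 * (1 - distributed / centralized):.0f}%")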

Reliability

There are multiple aspects of reliability at every level of the application stack. Here we will focus on the reliability of the platform on which the video conferencing application runs: what happens if there is a minor or major hardware failure, whether the platform is a standard server or custom hardware?

With custom hardware, the only answer has been to have a second identical piece of hardware in hot standby. If one fails, the second one will be used. In most cases this would be a highly manual task. The disadvantages of this solution are clear:

  1. Very high cost – a customer would have to install a complete hardware MCU as a dedicated standby
  2. Significant downtime, owing to the largely manual process of reconnecting all the participants to the standby MCU

With software running on virtualized servers, a much higher level of reliability and availability can be achieved.

  1. Server alarms. With technologies such as Microsoft Hyper-V Live Migration and VMware vMotion, the IT staff can move the entire conferencing application from one physical host (server) to another while a conference is running. This is useful when a server alarm requires attention, such as a fan failure. The result is that a server can be taken out of service in a controlled fashion, without application downtime.
  2. Physical server failures. VMware High Availability detects physical server failures and automatically restarts the virtual machine on a different server in the resource pool, without any human intervention. The result is increased uptime. (A conceptual sketch of the underlying heartbeat mechanism follows below.)
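
The heartbeat mechanism behind this, described in the VMware High Availability datasheet quoted in the references, can be sketched conceptually in a few lines: each host maintains a heartbeat with its peers, and a host whose heartbeat stops has its virtual machines restarted elsewhere in the pool. This is our own illustration of the idea, not VMware's implementation; the host names, timeout and data structures are invented for the example.

    # Conceptual sketch of heartbeat-based failover, in the spirit of the
    # VMware HA mechanism quoted in the references. Names, the timeout and
    # the data model are illustrative; this is not VMware's actual code.
    import time

    HEARTBEAT_TIMEOUT = 15.0  # seconds of silence before a host is declared failed

    pool = {
        "host-a": {"last_heartbeat": time.time(), "vms": ["pexip-node-1"]},
        "host-b": {"last_heartbeat": time.time(), "vms": ["pexip-node-2"]},
    }

    def check_pool(pool, now):
        for host, state in pool.items():
            if now - state["last_heartbeat"] > HEARTBEAT_TIMEOUT:
                # Find healthy hosts still sending heartbeats.
                survivors = [h for h, s in pool.items()
                             if h != host
                             and now - s["last_heartbeat"] <= HEARTBEAT_TIMEOUT]
                if not survivors:
                    continue          # nowhere to fail over to
                for vm in state["vms"]:   # restart affected VMs elsewhere
                    pool[survivors[0]]["vms"].append(vm)
                    print(f"{vm} restarted on {survivors[0]} "
                          f"after {host} lost its heartbeat")
                state["vms"] = []

    pool["host-a"]["last_heartbeat"] -= 60  # simulate host-a going silent
    check_pool(pool, time.time())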

Unrelated to virtualization are the reliability benefits of the highly tested industry standard server architectures. These servers can be ordered with very low-cost options that dramatically improve hardware reliability:

  1. Error-correcting (ECC) memory, which in fact comes standard, not as an option, on most servers
  2. Redundant disks, e.g. configured in various RAID modes
  3. Redundant network interfaces, which can in turn be connected to redundant network switches

Custom hardware MCUs can be delivered with redundant power supplies at a very high cost; redundant disks and network interfaces, however, are not available at all. For pure hardware reliability there is no doubt: industry standard servers are far ahead of the existing custom hardware MCU platforms.

Density

Figure 1: Port density for standard servers is typically double that of custom hardware.

As we showed in the introduction, the port density of a purely software based solution is on the order of 50 to 100 fully transcoded 720p30 ports per rack unit. This surpasses the hardware architectures provided by the current market leaders, where density ranges from 20 to 40 ports per rack unit. More recent custom hardware architectures provide slightly more than 100 HD ports per rack unit, which is to be expected, since a custom hardware design should deliver higher density than an off-the-shelf server. However, is this marginal advantage in density worth the price? A software based solution provides lower acquisition cost, lower cost of ownership and more deployment flexibility; it fits into a standard data center, and it leverages both the reliability benefits of virtualization and the reliability features of standard server hardware architectures.

Scaling up the deployment

As an enterprise adopts video conferencing successfully, there will be a need to continue to scale up the deployment. For medium to large companies, this will involve deploying conferencing resources in more than one data center. With custom hardware this is extremely inconvenient, especially if it involves shipping costly hardware systems internationally. Using a virtualized server approach, the IT staff can deploy conferencing resources in any existing location or data center, as long as server resources are available.

Conclusion

In considering video conferencing infrastructure, the question should not be whether hardware is dead or not. The real question is: Does the video conferencing software run on the appropriate platform?  Does the platform provide the necessary reliability for a mission critical business application?

As video conferencing becomes widely adopted and starts to serve as a ubiquitous way of communicating within and between enterprises, the need for high reliability and reduced cost becomes urgent. Furthermore, as enterprise IT organizations virtualize and streamline their data centers, there is an expectation that conferencing and collaboration should be just another server workload, exactly as other enterprise applications already are. Custom hardware cannot deliver on these requirements: it is costly to deploy while lacking basic reliability and redundancy, it cannot be deployed, managed and diagnosed using standard IT tools, and it requires shipping hardware across the globe as the enterprise adopts video conferencing at scale. Hence, the ideal platform for video conferencing infrastructure is the combination of standard servers and virtualization.

References

http://www.idc.com/getdoc.jsp?containerId=prUS23974913 “According to the International Data Corporation (IDC) Worldwide Quarterly Server Tracker, factory revenue in the worldwide server market increased 3.1% year over year to $14.6 billion in the fourth quarter of 2012 (4Q12). This was the first quarterly increase of factory revenue in five quarters. Worldwide server shipments decreased 3.9% to 2.1 million units in 4Q12 when compared with the same period in 2011. For the full year 2012, worldwide server revenue decreased 1.9% to $51.3 billion when compared to 2011, while worldwide unit shipments decreased 1.5% year over year to 8.1 million units.”

http://www.vmware.com/files/pdf/VMware-High-Availability-DS-EN.pdf “VMware HA continuously monitors all virtualized servers in a resource pool and detects physical server and operating system failures. To monitor physical servers, an agent on each server maintains a heartbeat with the other servers in the resource pool such that a loss of heartbeat automatically initiates the restart of all affected virtual machines on other servers in the resource pool.”