On Tuesday, Austin-based startup Calxeda launched its EnergyCore ARM system-on-chip (SoC) for cloud servers. At first glance, Calxeda’s SoC looks like something you’d find inside a smartphone, but the product is essentially a complete server on a chip, minus the mass storage and memory. The company puts four of these EnergyCore SoCs onto a single daughterboard, called an EnergyCard, a reference design that also hosts four DIMM slots and four SATA ports. A systems integrator would plug multiple daughterboards into a single mainboard to build a rack-mountable unit, and those units could then be linked via Ethernet to scale out into a single system that’s home to some 4096 EnergyCore processors (or a little over 1,000 four-processor EnergyCards).
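The scale-out arithmetic can be sanity-checked with a quick back-of-the-envelope calculation; the figures are Calxeda's stated numbers, and the breakdown below is just illustration:

```python
# Back-of-the-envelope math for a maximally scaled-out Calxeda system,
# using the figures quoted above.
SOCS_PER_CARD = 4        # EnergyCore SoCs per EnergyCard
CORES_PER_SOC = 4        # each SoC is quad-core
MAX_PROCESSORS = 4096    # stated scale-out ceiling

cards = MAX_PROCESSORS // SOCS_PER_CARD   # 1024 EnergyCards ("a little over 1,000")
cores = MAX_PROCESSORS * CORES_PER_SOC    # 16,384 ARM cores in the full system
print(cards, cores)  # 1024 16384
```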
The current EnergyCore design doesn’t support classic, hypervisor-based virtualization; instead, it supports Ubuntu’s lightweight, container-based LXC virtualization scheme for system management. The reason that you won’t see a hypervisor running on Calxeda hardware anytime soon is that Calxeda’s whole approach to server efficiency is the exact opposite of what one typically sees in a virtualized cloud server. The classic virtualization model squeezes higher utilization and power efficiency out of a group of high-powered server processors—typically from Intel or AMD—by running multiple OS instances on each processor. In this way, a typical 2U virtualized server might use two Xeon processors and a large pool of RAM to run, say, 20 virtual OS instances.
With a Calxeda system, in contrast, you would run 20 OS instances in 2U of rack space by physically filling that rack space with five EnergyCards, which, at four EnergyCore chips per card and one OS instance per chip, would give you 20 servers. This high-density, one-OS-per-chip approach is often called “physicalization,” and Calxeda’s bet is that it represents a cheaper, lower-power way to run those 20 servers than what a Xeon-based virtualized system could offer. And for certain types of cloud workloads, this bet will no doubt pay off when you consider that a single EnergyCard gives you four quad-core servers in just 20 watts of power (an average of 5W per server and 1.25W per core). Contrast this with a single quad-core Intel Xeon E3, which can draw anywhere from 45W to 95W depending on the model.
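The power math in the paragraph above works out as follows; this is a simple sketch using only the numbers quoted here, and note that the Xeon figure is a rated TDP range, not a measured draw:

```python
# Power-density arithmetic for the 2U comparison above.
CARD_POWER_W = 20      # one EnergyCard: four EnergyCore SoCs
SOCS_PER_CARD = 4
CORES_PER_SOC = 4
CARDS_PER_2U = 5       # five EnergyCards fill 2U in this scenario

watts_per_server = CARD_POWER_W / SOCS_PER_CARD        # 5.0 W per one-chip server
watts_per_core = watts_per_server / CORES_PER_SOC      # 1.25 W per core

servers_per_2u = CARDS_PER_2U * SOCS_PER_CARD          # 20 one-OS-per-chip servers
power_per_2u = CARDS_PER_2U * CARD_POWER_W             # 100 W for all 20 servers

# For contrast: a single quad-core Xeon E3 alone is rated at 45-95 W (TDP).
print(watts_per_server, watts_per_core, servers_per_2u, power_per_2u)
# 5.0 1.25 20 100
```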
The new EnergyCore chips will be sampling at the end of this year, and are scheduled to ship in volume in the second half of next year.
The custom EnergyCore SoC that lies at the heart of Calxeda’s approach to power efficiency is built around four ARM Cortex-A9 cores that can run at 1.1 to 1.4GHz. The four cores share a 4MB L2 cache, a set of memory controllers, and basic I/O blocks (10Gb and 1Gb Ethernet channels, PCIe lanes, and SATA ports).
The EnergyCore Fabric Switch that sits in between the Ethernet blocks and the ARM cores is the key to Calxeda’s ability to scale out a single system to as many as 4096 processors using any network topology that the system integrator or customer chooses. This switch presents two virtual Ethernet ports to the OS, so that the combination of switch, Ethernet channels, and Calxeda’s proprietary daughtercard interface (the latter carries Ethernet traffic to connected nodes) is transparent to the software side of the system while providing plenty of bandwidth for inter-node transport.
The crown jewel in Calxeda’s approach is the block labeled EnergyCore Management Engine. This block is actually another processor core that runs specialized monitoring and management software and is tasked with dynamically optimizing the power usage of the rest of the chip. The management engine can turn the SoC’s separate power domains on and off in response to real-time usage, so that the parts of the chip that are idle at any given moment cease drawing power.
The management engine is also what presents the virtualized Ethernet to the OS, so it works in conjunction with the fabric switch to do routing and power optimization. There are also OEM hooks into the proprietary software that runs on the engine, so that OEMs can roll their own management offerings as a value add.
ARM vs. x86 and Calxeda vs. SeaMicro
It’s helpful to contrast Calxeda’s approach with that of its main x86-based competitor, SeaMicro. SeaMicro makes a complete, high-density server product based on Intel’s low-power Atom chips that is built on many of the principles described above. Aside from the choice of Atom over ARM, the main place that SeaMicro’s credit-card-sized dual-Atom server nodes differ from Calxeda’s EnergyCards is in how the two handle disk and networking I/O.
As described above, the Calxeda system virtualizes Ethernet traffic so that the EnergyCards don’t need physical Ethernet ports or cables in order to do networking. They do, however, need physical SATA cables for mass storage, so in a dense design you’ll have to thread SATA cables from each EnergyCard to its hard drives. SeaMicro, in contrast, virtualizes both Ethernet and SATA interfaces, so that the custom fabric switch on each SeaMicro node carries both networking and storage traffic off of the card. By putting all the SATA drives in a separate physical unit and connecting it to the SeaMicro nodes via this virtual interface, SeaMicro systems save on power and cooling vs. Calxeda (again, the latter has physical SATA ports on each card for connecting physical drives). So that’s one advantage that SeaMicro has.
One disadvantage that SeaMicro has is that it has to use off-the-shelf Atom chips. Because SeaMicro can’t design its own custom SoC blocks and integrate them with Atom cores on the same die, the company uses a separate physical ASIC that resides on each SeaMicro card to do the storage and networking virtualization. This ASIC is the analog to the on-die fabric switch in Calxeda’s SoC.
Note that SeaMicro’s current server product is Atom-based, but the company has made clear that it won’t necessarily restrict itself to Atom in the future. So Calxeda had better be on the lookout for some ARM-based competition from SeaMicro in the high-density cloud server arena.