ARM CPUs are everywhere. They are in almost all mobile phones, in most home internet routers and access points, in TVs, cable receivers and probably even in your dishwasher. So it's safe to say that ARM CPUs are already very popular. One place, however, where we have not seen much adoption of ARM chips is the datacenter. But that is rapidly changing, and here is why.

Reason one: Edge computing

In recent years we have seen a massive move towards the cloud. Startups usually run all their workloads in the cloud from the get-go, and more and more existing companies are moving their software into the cloud as well. At the same time, we see a trend towards IoT and so-called edge computing. In a lot of cases, it makes no sense to have your IoT devices send all their data to the cloud. Instead, it makes more sense to process the data closer to the devices themselves. This is called edge computing, and it is done to reduce bandwidth usage, increase security and improve the speed at which IoT data can be acted upon.
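As a rough illustration of that bandwidth trade-off, here is a minimal sketch in plain Python. The sensor values, field names and anomaly threshold are made up for the example; the point is only that an edge gateway can forward one small summary instead of every raw sample, and can react to an anomaly locally without a round trip to the cloud.

```python
import statistics

# Hypothetical raw readings collected at the edge, e.g. one temperature
# sample per second from a local sensor. Values are illustrative only.
raw_readings = [21.3, 21.4, 21.2, 35.9, 21.5, 21.3]

def summarize_at_edge(readings, anomaly_threshold=30.0):
    """Reduce a burst of raw samples to a small summary plus any anomalies."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
        "anomalies": [r for r in readings if r > anomaly_threshold],
    }

# Instead of shipping every sample upstream, the gateway sends one compact
# payload -- far less bandwidth, and the anomaly is already flagged locally.
payload_for_cloud = summarize_at_edge(raw_readings)
print(payload_for_cloud)
```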

Another example of edge computing can be found in the upcoming 5G mobile networks. The bandwidth between the cell tower and the mobile device is so high that it starts to make sense to cache and process data locally in the cell towers, or at least close to a group of cell towers.

Since edge computing will, by definition, not happen in large commercial datacenters or other places where plenty of power and cooling is available, edge computing hardware needs to be as energy efficient as possible. ARM CPUs are by design a lot more energy efficient than Intel x64 CPUs, which makes them a logical choice for edge computing. For this reason, VMware decided to make vSphere run on ARM as well. So in the very near future you'll be able to manage your ARM-based edge compute in the same way you currently manage your datacenter compute.

Reason two: Accelerators

Another trend in the IT industry is the use of accelerators. Moore's law seems to have died as far as CPUs go. To satisfy the increasing demand for raw computing power, specific workloads are moved to accelerators. One example of such a workload is machine learning. Most machine learning workloads currently run on GPUs or FPGAs, not CPUs. In both the GPU and FPGA markets, there are vendors selling solutions specifically engineered for machine learning.

But when the real work is done by the accelerator, there is little use for an extremely powerful CPU. Machines built around accelerators, such as Nvidia's DGX line, pack in as many GPU cores as possible, while the CPU's main job is simply to feed them. This makes sense because you want as much of the power and money as possible going to the accelerators themselves, not to a CPU that is just moving data in and out of them. So I believe the use of ARM chips will become a lot more common as accelerator usage increases.
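As a minimal sketch of that division of labour (using PyTorch purely as an illustration, with a dummy model and random data), the host CPU does little more than prepare batches and copy them over, while the matrix math runs on the accelerator when one is available:

```python
import torch
import torch.nn as nn

# Run on the accelerator when present; the CPU's role is mostly data movement.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    # A dummy batch is assembled on the host CPU...
    inputs = torch.randn(64, 512)
    labels = torch.randint(0, 10, (64,))
    # ...then copied to the accelerator, where the heavy lifting happens.
    inputs, labels = inputs.to(device), labels.to(device)

    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
```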

Reason three: Density

As stated in the previous point, Moore's law no longer seems to apply to CPUs, but the demand for CPU cycles is growing faster than ever. If datacenters want to keep up with demand without making huge investments in real estate, they'll simply have to fit more CPUs into the same buildings. Since ARM CPUs use less power than comparable Intel CPUs, and therefore require less cooling, you'll be able to cram more of them into the same buildings.
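A quick back-of-envelope calculation shows the shape of that argument. The numbers below are deliberately made up (a fixed per-rack power budget and two hypothetical per-server power draws), not measurements of any real hardware:

```python
# Illustrative numbers only: one rack's power budget and two hypothetical
# server classes that differ in power draw per node.
rack_power_budget_w = 15_000
x86_server_w = 500   # assumed draw of an x86 node
arm_server_w = 350   # assumed draw of a comparable ARM node

print(rack_power_budget_w // x86_server_w)  # 30 x86 nodes fit in the rack
print(rack_power_budget_w // arm_server_w)  # 42 ARM nodes fit in the same rack
```

The same power and cooling envelope simply holds more nodes when each node draws less.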

That is one of the reasons many high-performance computing clusters are now being built with ARM chips, and I believe regular datacenters will follow this trend in the near future.

Christiaan Roeleveld, Virtualization Consultant
