With the digitalization of everything, terabytes of data are generated every day. This data is a valuable raw material: once transformed into meaningful information, it helps us understand what is going on and make better decisions. However, exploiting data requires a cost-effective computing infrastructure. This is where UPMEM can help.

Time to switch gears

Over the last decade, many new datacenters have been built to meet demand, together with advanced software frameworks capable of distributing computations across hundreds of servers. At the same time, significant effort has gone into minimizing server unit cost, e.g. the micro-server initiative. It was the right thing to do, even if new problems are showing up in terms of network infrastructure and shared storage management.

However, we have entered the data-driven age, and it is impacting server-level efficiency: given the volume of data involved, applications have been decomposed into batches of unit tasks distributed across clusters of servers. Unfortunately, running basic operations over large amounts of data is a poor fit for CPUs, because each byte is typically touched only once, leaving the cache hierarchy with no locality to exploit.
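
To see why caches do not help here, consider a minimal sketch in plain C (illustrative only, not tied to any UPMEM API): a simple scan reads every byte exactly once, so there is no reuse for the cache hierarchy to capture, and the loop runs at DRAM speed no matter how fast the core is.

```c
#include <stddef.h>
#include <stdint.h>

/* Streaming scan: every byte is read exactly once, so caches provide
 * no reuse and throughput is set by memory, not by the CPU core. */
uint64_t count_matches(const uint8_t *data, size_t n, uint8_t needle)
{
    uint64_t count = 0;
    for (size_t i = 0; i < n; i++) {
        if (data[i] == needle)  /* trivial compute per byte...        */
            count++;            /* ...the real cost is memory traffic */
    }
    return count;
}
```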

One solution could be to wait for the next-generation CPU, but that won't be enough:

  1. CPU frequency won't increase much more, unless we add water cooling
  2. There are not enough pins on chip packages to easily add extra memory channels
  3. Adding more cores per CPU would require more memory channels…

There are millions of servers in production, but data processing demand is still growing fast… Is it realistic to build ten times more datacenters? Obviously not. The next 10x productivity gain must come from within the server, and it must address the Memory Wall.

Processing In Memory (PIM)

Building a data-driven architecture requires combining memory (i.e. data) and processing as closely as possible. Given the gigabytes to process, the computation should be distributed across the data. In other words, a large number of co-processing units should be spread among the data containers to enable a massively scalable architecture.

An ambitious PIM solution should be developer-friendly: open, versatile, and surrounded by a rich ecosystem and toolset enabling development, debugging, profiling and massive parallelization. At the server level, a PIM solution should integrate smoothly with the other components, to facilitate adoption and deployment. DDR is the best vehicle because it is both standard and the fastest interface available.

While memory manufacturers are working hard to minimize the price per bit, processor vendors are focused on gaining MIPS under a given set of memory constraints. PIM sits right in the middle of these two industries, and we are very proud to be in a position to fill the gap.

What is PIM good at?

The PIM model consists of a battalion of co-processors coordinated by the main CPU. Given PIM's unbeatable memory bandwidth and latency, offloading data-intensive operations immediately brings tangible performance gains. On the flip side, there is no added value in offloading tasks that the main CPU already does well.
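
To make the coordination pattern concrete, here is a minimal host-side sketch. The pim_* names are hypothetical placeholders invented for this illustration, not an actual UPMEM or standard API: the CPU partitions the data across the in-memory co-processors, launches the same task everywhere, and only merges the partial results.

```c
#include <stddef.h>
#include <stdint.h>
#include "pim.h"  /* hypothetical PIM SDK header, for illustration only */

/* Host-side offload pattern: the main CPU orchestrates, while the
 * co-processors do the data-intensive work next to the data. */
void offloaded_scan(const uint8_t *data, size_t n, uint64_t *result)
{
    pim_set_t set = pim_alloc_all();    /* hypothetical: claim all units  */
    size_t chunk = n / pim_count(set);  /* static partition of the input  */

    pim_scatter(set, data, chunk);      /* one chunk per co-processor     */
    pim_launch(set, "scan_kernel");     /* same task runs on every unit   */
    pim_gather_reduce(set, result);     /* CPU merges the partial results */

    pim_free(set);
}
```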

Memory/processor proximity brings performance gains in two cases:

  1. When the next data to fetch is unpredictable, computing speed is bounded by memory latency (e.g. pointer chasing in graph structures). The lower the latency, the better (see the sketch after this list).
  2. When every single byte needs to be processed (e.g. compression), throughput is bounded by bandwidth. The higher the bandwidth, the better.
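
The latency-bound case can be sketched with a pointer chase in plain C (the bandwidth-bound case looks like the scan shown earlier). Each load depends on the result of the previous one, so hardware prefetchers cannot hide the DRAM round trip:

```c
#include <stdint.h>

/* Latency-bound: the address of each load is only known once the
 * previous load completes, so the walk pays one full memory round
 * trip per node and prefetching cannot help. */
struct node {
    struct node *next;
    uint64_t     value;
};

uint64_t chase(const struct node *head)
{
    uint64_t sum = 0;
    for (const struct node *n = head; n != NULL; n = n->next)
        sum += n->value;  /* next address unknown until this load returns */
    return sum;
}
```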

Secondly, a PIM solution is truly scalable: thousands of cores can be added per server, and each one brings its own slice of memory bandwidth, so aggregate bandwidth grows with the number of units. That is why many algorithms can be accelerated by more than 10x with such a solution.
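
A back-of-the-envelope calculation makes the claim concrete. The figures below are illustrative assumptions, not UPMEM specifications: for a bandwidth-bound task, the best-case speedup is roughly the ratio of aggregate bandwidths,

```latex
\text{speedup} \approx \frac{N \cdot b_{\mathrm{PIM}}}{c \cdot B_{\mathrm{chan}}}
             \approx \frac{2000 \times 1\ \mathrm{GB/s}}{4 \times 20\ \mathrm{GB/s}} = 25
```

where N is the number of PIM units, b_PIM their local bandwidth, c the number of CPU memory channels, and B_chan the bandwidth of each channel.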

In the datacenter, the TCO (Total Cost of Ownership) metric is preferred over pure investment cost because operational costs are dominant. The energy consumed per byte of data processed is a key long-term success factor, and this is one more strength of the PIM model: data movement is minimal because data is processed in place.

The productivity gains of Processing In Memory benefit the whole value chain, starting with the datacenter owners, our customers.