UPMEM announced a Processing-in-Memory (PIM) acceleration solution that allows big data and AI applications to run 20 times faster while using 10 times less energy. Instead of moving massive amounts of data to CPUs, UPMEM's silicon-based technology puts CPUs right in the middle of the data, saving time and improving efficiency. By allowing compute to take place directly in the memory chips where the data already resides, data-intensive applications can be substantially accelerated. UPMEM reduces data movement while leveraging existing server architectures and memory technologies.


UPMEM’s presentation of its disruptive PIM acceleration solution at the Hot Chips conference attracted attention from microprocessor architects, analysts, and IT professionals.

Here are some of the articles covering UPMEM’s much-awaited true Processing-in-Memory innovation:

UPMEM Puts CPUs Inside Memory to Allow Apps to Run 20 Times Faster, on insideHPC.

Hot Chips 31 Analysis: In-Memory Processing by UPMEM, by Dr. Ian Cutress on AnandTech.

UPMEM Puts CPUs Inside Memory to Allow Applications to Run 20 Times Faster, by David Marshall on vmblog.com.

UPMEM Puts CPUs Inside Memory to Allow Apps to Run 20 Times Faster, on Business Wire.

Upmem: DDR4-Speicherriegel mit integrierten Prozessoren (Upmem: DDR4 memory modules with integrated processors), by Florian Müssig on heise online.

Upmem: DDR4 memory modules with integrated processors, on en24 news.

关于内存内计算，这家公司有新想法 (This company has new ideas about in-memory computing), on 半导体行业观察 (Semiconductor Industry Observation).




Contact us for more insights!

Contact: contact@localhost