MIT researchers have presented “Jenga”, a cache-management scheme that not only increases computing speed by 20 to 30%, but also reduces energy consumption by 30 to 85%!
Current cache memory is organized into several levels (L1, L2, L3, L4), from the smallest and fastest to the largest and slowest. This hierarchy prioritizes data access: the most important data, which requires constant and fast access, lives in L1 or L2, while less critical data lives in L3 and L4.
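A toy average-memory-access-time model makes this smaller-but-faster trade-off concrete. The latencies and hit rates below are illustrative assumptions, not measured values from any real chip:

```python
# Toy cache-hierarchy model: each level is tried in order; on a miss,
# the next (bigger but slower) level is consulted.
# Latencies (cycles) and hit rates are ILLUSTRATIVE assumptions.
LEVELS = [
    ("L1", 4,   0.90),
    ("L2", 12,  0.95),
    ("L3", 40,  0.97),
    ("L4", 120, 0.99),
]
DRAM_LATENCY = 300  # cost of falling through every cache level

def average_access_time(levels, dram_latency):
    """Expected cycles per access under the simple model above."""
    time, miss_prob = 0.0, 1.0
    for _name, latency, hit_rate in levels:
        time += miss_prob * latency     # we pay this level's latency
        miss_prob *= (1.0 - hit_rate)   # chance we still have to go deeper
    return time + miss_prob * dram_latency

print(average_access_time(LEVELS, DRAM_LATENCY))
```

With these numbers the expected access time stays close to the L1 latency, which is exactly why keeping hot data in the upper levels matters so much.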
According to the researchers, this fixed data placement is too rigid, and applications are not always able to exploit the hierarchy efficiently.
The idea behind the optimization is to build virtual cache hierarchies out of L3 and L4: an algorithm handles the resource allocation, and the allocation is recalculated every 100 milliseconds.
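One common way to implement such a periodic reallocation is a greedy marginal-gain loop: at each 100 ms epoch, give the next cache bank to whichever application would avoid the most misses with it. This is a hypothetical sketch of that general idea, not Jenga's actual algorithm; all names and curves are made up:

```python
# Hypothetical sketch of per-epoch cache-bank allocation.
# miss_reduction[app] is a curve: curve[k] = misses avoided if the
# app owns k banks. Curves here are ASSUMED inputs for illustration.

def marginal_gain(curve, k):
    """Extra misses avoided by giving one more bank to an app holding k."""
    if k + 1 >= len(curve):
        return 0
    return curve[k + 1] - curve[k]

def allocate_banks(miss_reduction, total_banks):
    """Greedily hand out banks to the app with the largest marginal gain.
    Rerun once per epoch (e.g. every 100 ms) with fresh curves."""
    alloc = {app: 0 for app in miss_reduction}
    for _ in range(total_banks):
        best = max(miss_reduction,
                   key=lambda a: marginal_gain(miss_reduction[a], alloc[a]))
        if marginal_gain(miss_reduction[best], alloc[best]) <= 0:
            break  # no app benefits from another bank
        alloc[best] += 1
    return alloc

# Example: app "A" benefits steeply from cache, app "B" barely does.
curves = {"A": [0, 50, 80, 90], "B": [0, 10, 15, 18]}
print(allocate_banks(curves, 3))
```

Because the loop always follows the steepest part of each curve, a cache-hungry application naturally absorbs the banks while a cache-insensitive one gets few or none.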
Using a prototype chip with 36 cores, a 512 KB L3 SRAM cache, and a 4 × 256 MB L4 DRAM cache, they were able to integrate Jenga as a permanent process run by the operating system. They then tested about twenty applications, and processor performance improved in most cases, because the number of memory accesses is greatly reduced.
We can bet that Jenga will be adopted in future generations of processors.
Source: MIT.