RAM Coffers – I built conditional memory for LLMs 27 days before DeepSeek's Engram

github.com

2 points by AutoJanitor 20 days ago · 1 comment

AutoJanitor (OP) 20 days ago

I'm an independent researcher running LLMs on an IBM POWER8 server (320GB RAM, vintage 2015 hardware).

On December 16, 2025, I developed "RAM Coffers" - a NUMA-distributed conditional memory system that selectively houses model weights in RAM banks, with resonance-based routing for O(1) knowledge retrieval.
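
To make the idea concrete, here is a minimal sketch of conditional O(1) retrieval from weight shards pinned to per-node memory banks. Everything in it - the `CofferRouter` name, the shard sizes, the simulated banks, and the plain modular hash standing in for the "resonance-based" routing - is an illustrative assumption, not the actual RAM Coffers code:

```python
import numpy as np

class CofferRouter:
    """Toy sketch: weight shards held in per-NUMA-node "banks",
    addressed by a constant-time key lookup instead of a scan."""

    def __init__(self, num_banks=4, shard_dim=64, shards_per_bank=256, seed=0):
        rng = np.random.default_rng(seed)
        # Each bank stands in for one NUMA node's RAM region;
        # a real system would pin these allocations to a node.
        self.banks = [
            rng.standard_normal((shards_per_bank, shard_dim)).astype(np.float32)
            for _ in range(num_banks)
        ]
        self.num_banks = num_banks
        self.shards_per_bank = shards_per_bank

    def route(self, key: int) -> tuple[int, int]:
        # O(1): key -> (bank, slot). A real router might hash an
        # activation signature; an integer key is enough here.
        bank = key % self.num_banks
        slot = (key // self.num_banks) % self.shards_per_bank
        return bank, slot

    def lookup(self, key: int) -> np.ndarray:
        # Fetch exactly one shard; no other bank is touched.
        bank, slot = self.route(key)
        return self.banks[bank][slot]

router = CofferRouter()
vec = router.lookup(12345)
print(vec.shape)  # (64,)
```

The point of the sketch is the access pattern: retrieval cost is independent of total resident weight, so cold knowledge can sit in slow, plentiful RAM while the hot compute path stays small.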
                                                                                                                                                                             
On January 12, 2026, DeepSeek published their "Engram" paper (arXiv:2601.07372), describing the same core concept: separating static knowledge from dynamic computation via O(1) lookup.
                                                                                                                                                                             
Same idea. 27 days apart. No connection.
                                                                                                                                                                             
DOI: 10.6084/m9.figshare.31093429
GitHub: https://github.com/Scottcjn/ram-coffers
DeepSeek paper: https://arxiv.org/abs/2601.07372
                                                                                                                                                                             
Running on "obsolete" POWER8 hardware, I hit 147 tokens/sec on TinyLlama - 8.8x faster than stock llama.cpp.
                                                                                                                                                                             
  Sometimes the garage beats the lab.
