Wall Street didn’t like what Google just revealed
Airfind news item
By Faizan Farooque
Published on March 29, 2026.
Google has revealed a new family of compression algorithms, TurboQuant, PolarQuant, and Quantized Johnson-Lindenstrauss, designed to reduce the memory required to run large language models and vector search systems. In testing, the algorithms cut key-value (KV) cache memory requirements by at least six times while preserving accuracy. The announcement triggered a drop in AI infrastructure stocks, including Micron Technology (MU), Western Digital, Seagate Technology (STX), and SanDisk. Investors read the result as a sign that if future AI models require less memory per workload, part of the AI boom's value could shift away from hardware suppliers.
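To see why quantization translates into such large memory savings, consider a minimal sketch of per-channel uniform quantization of a KV cache. This is an illustrative toy example only, not Google's TurboQuant, PolarQuant, or QJL algorithms; the shapes, bit width, and function names are assumptions chosen for clarity. Storing two 4-bit codes per byte instead of 32-bit floats shrinks the cache 8x, at the cost of a bounded rounding error per entry.

```python
import numpy as np

def quantize_kv(cache: np.ndarray, bits: int = 4):
    """Quantize each channel of a (tokens, channels) fp32 cache to `bits`-bit codes.

    Illustrative sketch only -- not Google's actual algorithms.
    """
    lo = cache.min(axis=0, keepdims=True)
    hi = cache.max(axis=0, keepdims=True)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    scale[scale == 0] = 1.0  # avoid divide-by-zero on constant channels
    codes = np.round((cache - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize_kv(codes, scale, lo):
    """Reconstruct an approximate fp32 cache from the codes."""
    return codes.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
kv = rng.standard_normal((1024, 128)).astype(np.float32)  # toy KV cache

codes, scale, lo = quantize_kv(kv)
recon = dequantize_kv(codes, scale, lo)

fp32_bytes = kv.nbytes            # 1024 * 128 * 4 bytes
packed_bytes = codes.size // 2    # two 4-bit codes fit in one byte if packed
print(fp32_bytes / packed_bytes)  # 8.0x smaller
# rounding error per entry is at most half a quantization step
print(np.abs(kv - recon).max() <= scale.max())
```

The per-channel scale and offset add only a small constant overhead per channel, which is why the net compression stays close to the raw bit-width ratio.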