How the currency of the AI economy actually works
By Rebecca Falconer
Published on May 8, 2026.
AI companies competing for dominance face a common constraint: scarce "compute" capacity threatens to slow their rapid rise.

Compute capacity is the hardware processing power, networking and storage needed to process vast amounts of data and to train or serve AI models. Demand for these resources has surged, leaving many companies with limited access to them.

AI labs now operate at a scale where compute procurement increasingly resembles industrial infrastructure, as they buy the hardware, energy and processing time required to train, run and scale AI models. Some companies have run up against compute limits severe enough to degrade their customer experience.

The cost of AI production also spans high-speed networking, storage, power-delivery infrastructure, cloud-platform access, chip-equipment makers, and the lasers used in chipmaking.

Semiconductor supply remains a bottleneck, with Taiwan Semiconductor Manufacturing Co. (TSMC) holding a near-monopoly on advanced chip fabrication.

Companies like Anthropic and OpenAI are signing partnership deals to lock in key components such as reserved GPU capacity, networking bandwidth and storage.

Companies running heavy AI workloads also increasingly rely on data center providers that rent them space for their own hardware.