Ollama adopts MLX for faster AI performance on Apple silicon
Airfind news item
By Marcus Mendes
Published on March 31, 2026.
Ollama, an app for Mac, Linux, and Windows that lets users run AI models locally on their computers, has updated the preview version of its app to incorporate MLX, Apple's machine learning framework, allowing local AI models to run faster on Apple silicon Macs. The update speeds up personal assistants like OpenClaw and coding agents like Claude Code, OpenCode, and Codex. However, Ollama recommends a Mac with more than 32GB of unified memory, which many users' machines may not have.