Zero. That’s how many official Nvidia drivers Apple has approved for their Arm-based Macs since the M1 chip launched. Until now.
In April 2026, Apple did something that seemed about as likely as Tim Cook showing up to WWDC in a leather jacket: they approved a driver that lets Nvidia eGPUs work with Arm Macs. Before you start thinking Apple and Nvidia kissed and made up after years of cold shoulders, here’s the twist—the driver doesn’t come from Nvidia at all.
The Tiny Corp Surprise
The driver that just got Apple’s blessing was developed by Tiny Corp, a small company that apparently decided to do what neither Apple nor Nvidia would: build a bridge between the two tech giants’ hardware. This approval also extends to AMD eGPUs, making it a broader win for anyone who’s been frustrated by the limited external GPU options on Apple Silicon Macs.
For those keeping score at home, this is a pretty big deal. Since Apple switched from Intel chips to their own Arm-based processors, Mac users who wanted to boost their graphics performance with an external GPU have been stuck with extremely limited options. The relationship between Apple and Nvidia has been frosty for years, and that chill extended right into the M1, M2, and M3 era.
What This Means for AI Agents
Here’s where things get interesting for the AI agent space. Many AI models, especially the ones you might want to run locally on your machine, benefit enormously from GPU acceleration. If you’re running AI agents that process images, generate content, or handle complex reasoning tasks, having access to Nvidia’s CUDA ecosystem opens up possibilities that simply weren’t there before.
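In practice, frameworks pick the fastest available backend at runtime. Here is a minimal, hedged sketch of that selection logic, assuming PyTorch (the article doesn’t name a specific framework); it falls back to the CPU if PyTorch isn’t installed or no accelerator is found:

```python
def pick_device():
    """Prefer an Nvidia eGPU (CUDA), then Apple's built-in GPU (MPS), then CPU.

    Assumes PyTorch is installed; degrades gracefully to "cpu" if it isn't.
    """
    try:
        import torch
        if torch.cuda.is_available():       # an Nvidia eGPU would show up here
            return "cuda"
        if torch.backends.mps.is_available():  # Apple Silicon's native GPU backend
            return "mps"
    except ImportError:
        pass
    return "cpu"

print(pick_device())
```

The point of the fallback chain is that code written this way keeps working whether or not the eGPU (or PyTorch itself) is present.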
Tiny Corp claims the installation process is now so simple that “a Qwen could do it,” and “then it can run that Qwen.” That cheeky nod to the Qwen AI model is the kind of recursive humor that makes tech nerds smile, but it also hints at the practical pitch: install the driver, plug in your eGPU, and suddenly your Mac can run AI models that previously required a dedicated Linux box or Windows machine.

The Thunderbolt Bottleneck
Before anyone gets too excited, there’s a catch. External GPUs connected via Thunderbolt ports face inherent bandwidth limitations. You’re not getting the full power of that Nvidia card—the Thunderbolt connection acts as a bottleneck, restricting data transfer speeds compared to a GPU installed directly on a motherboard’s PCIe slot.
Think of it like trying to fill a swimming pool through a garden hose. Sure, it’ll work, but you’re not using the full capacity of your water source. For AI agent applications, this means you’ll see performance improvements, but not the dramatic leap you’d get from a native installation.
Why This Matters Now
The timing of this approval is fascinating. We’re in the middle of an AI boom where more people want to run models locally rather than relying solely on cloud services. Privacy concerns, latency issues, and the desire for offline capability are driving interest in local AI deployment.
Mac users have felt left out of this party. The Apple Silicon chips are impressive for many tasks, but they can’t match the raw AI training and inference performance of high-end Nvidia GPUs. This driver approval doesn’t solve everything, but it gives Mac users a new option they simply didn’t have before.
The fact that a small company like Tiny Corp made this happen—rather than Apple or Nvidia themselves—tells you something about the current state of the tech industry. Sometimes the most practical solutions come from unexpected places, built by people who are tired of waiting for the big players to work out their differences.
For AI agent enthusiasts running Macs, this is your chance to experiment with models and workflows that were previously off-limits. Just don’t expect miracles from that Thunderbolt connection.