OpenAI is breaking up with its GPU dependency. The ChatGPT creator just signed a massive deal with Broadcom to develop custom AI chips, and Nvidia probably isn't thrilled about it.
This isn't some small-time partnership either. We're talking about an agreement reportedly worth around $10 billion, one that could reshape the entire AI hardware landscape. OpenAI wants to design its own "XPUs" – because apparently calling them GPUs would be too mainstream – accelerators tailored specifically for its AI workloads.
The plan is ambitious, bordering on audacious. OpenAI aims to deploy 10 gigawatts of custom AI accelerator capacity globally. Forget "small city" – that's enough power for millions of homes, and all of it will be crunching through AI models.
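To put that in perspective, here's a back-of-envelope sketch in Python. The per-home and per-rack power figures are illustrative assumptions (roughly 1.2 kW average draw per US home, roughly 120 kW per dense AI rack), not numbers from the announcement.

```python
# Back-of-envelope scale of a 10 GW AI accelerator buildout.
# The per-home and per-rack power figures are illustrative
# assumptions, not numbers from the OpenAI/Broadcom announcement.

TOTAL_CAPACITY_W = 10e9   # 10 gigawatts, per the announcement
AVG_US_HOME_W = 1.2e3     # ~1.2 kW average draw per US home (assumption)
AI_RACK_W = 120e3         # ~120 kW per dense AI rack (assumption)

homes_equivalent = TOTAL_CAPACITY_W / AVG_US_HOME_W
rack_count = TOTAL_CAPACITY_W / AI_RACK_W

print(f"Equivalent average US homes: {homes_equivalent:,.0f}")  # ~8,300,000
print(f"Implied AI racks at 120 kW each: {rack_count:,.0f}")    # ~83,000
```

Even if the per-rack number is off by a factor of two, the order of magnitude – tens of thousands of racks – is exactly why the networking piece of this deal matters.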
Rack deployments are slated to begin in the second half of 2026, with the full buildout completed by the end of 2029. Broadcom brings more than just manufacturing muscle to the table. They're providing the high-speed Ethernet networking that'll connect these new AI racks.
It's a unified approach – custom chips talking to custom networks, all designed to make AI training and inference faster and more efficient. The market certainly likes what it sees. Broadcom's stock jumped around 9% after the announcement, and OpenAI becomes its fourth major custom chip client, joining a roster that reportedly includes Google, Meta, and ByteDance.
Meanwhile, Nvidia's stranglehold on AI hardware just got its biggest challenge yet. OpenAI's motivation is crystal clear: control and efficiency. When you're designing the next GPT model, having hardware built specifically for your needs beats buying off-the-shelf components.
They can embed insights from frontier model development directly into silicon. But let's talk numbers for a second. OpenAI reports roughly $15 billion in revenue while its total infrastructure commitments approach $1 trillion. The company also recently closed a record-breaking $40 billion funding round that valued it at $300 billion.
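To make that mismatch concrete, here's a quick sketch using the figures above; the eight-year spending schedule is an assumption for illustration, not a disclosed plan.

```python
# Ratio check using the figures cited above.
revenue = 15e9        # ~$15B annual revenue (reported)
commitments = 1e12    # ~$1T in total commitments (reported)

print(f"Commitments are ~{commitments / revenue:.0f}x revenue")  # ~67x

# Spread evenly over eight years (an illustrative assumption,
# not a disclosed schedule), the implied annual spend dwarfs revenue:
annual_spend = commitments / 8
print(f"Implied annual spend: ${annual_spend / 1e9:.0f}B "
      f"vs ${revenue / 1e9:.0f}B revenue")  # ~$125B vs $15B
```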
That's either visionary investing or we're watching the AI bubble inflate in real time. The company remains unprofitable despite serving 800 million weekly active users. And the infrastructure demands are staggering: AI data centers are already straining electricity providers, and the energy appetite of a buildout this size raises real environmental concerns.
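For a sense of that energy appetite, here's one more back-of-envelope calculation; the utilization figure is an assumption, since real fleets don't run flat out.

```python
# Annual energy implied by 10 GW of accelerator capacity.
capacity_gw = 10
hours_per_year = 8760
utilization = 0.8  # assumed average utilization, not a disclosed figure

annual_twh = capacity_gw * hours_per_year * utilization / 1000
print(f"~{annual_twh:.0f} TWh per year")  # ~70 TWh
```

Seventy terawatt-hours a year is roughly the annual electricity consumption of a mid-sized European country, which is why grid stress keeps coming up in this story.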
And the timeline is tighter than it looks: the two companies had reportedly been collaborating on chip design for well over a year before the announcement, which is how deployments can start as soon as late 2026. OpenAI isn't putting all its eggs in one basket though – it has also lined up separate multi-gigawatt compute deals with both Nvidia and AMD.
Smart move, considering they're fundamentally declaring war on the GPU king.

