
OpenAI CEO Sam Altman speaks during a talk session with SoftBank Group CEO Masayoshi Son at an event titled “Transforming Business through AI” in Tokyo, on Feb. 3, 2025.
In November, following Nvidia’s latest earnings beat, CEO Jensen Huang boasted to investors about his company’s position in artificial intelligence and said about the hottest startup in the space, “Everything that OpenAI does runs on Nvidia today.”
While it’s true that Nvidia maintains a dominant position in AI chips and is now the most valuable company in the world, competition is emerging, and OpenAI is doing everything it can to diversify as it pursues a historically aggressive expansion plan.
On Wednesday, OpenAI announced a $10 billion deal with chipmaker Cerebras, a newer player in the space that’s angling for the public market. It was the latest in a string of deals between OpenAI and the companies making the processors needed to build large language models and run increasingly sophisticated workloads.
Last year, OpenAI committed more than $1.4 trillion to infrastructure deals with companies including Nvidia, Advanced Micro Devices and Broadcom, en route to commanding a $500 billion private market valuation.
As OpenAI races to meet anticipated demand for its AI technology, it has signaled to the market that it wants as much processing power as it can find. Here are the major chip deals that OpenAI has signed as of January, and potential partners to keep an eye on in the future.
Nvidia
Nvidia founder and CEO Jensen Huang speaks during Nvidia Live at CES 2026 ahead of the annual Consumer Electronics Show in Las Vegas, Jan. 5, 2026.
Since its early days building out large language models, long before the launch of ChatGPT and the start of the generative AI boom, OpenAI has relied on Nvidia’s graphics processing units.
In 2025, that relationship went to another level. Following an investment in OpenAI in late 2024, Nvidia announced in September that it would commit $100 billion to support OpenAI as it builds and deploys at least 10 gigawatts of Nvidia systems.
A gigawatt is a measure of power, and 10 gigawatts is roughly equivalent to the annual power consumption of 8 million U.S. households, according to a CNBC analysis of data from the Energy Information Administration. Huang said in September that 10 gigawatts will equate to between 4 million and 5 million GPUs.
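Those two figures can be roughly sanity-checked. The sketch below assumes the EIA’s average U.S. household consumption of about 10,800 kilowatt-hours per year; the function names and the 4.5 million GPU midpoint are illustrative, not from the companies.

```python
# Back-of-the-envelope check of the figures above. Assumed input:
# ~10,800 kWh average annual U.S. household consumption (EIA estimate).

GIGAWATT_WATTS = 1e9
HOURS_PER_YEAR = 8_760

def households_powered(gigawatts: float, kwh_per_household: float = 10_800) -> float:
    """Households whose annual usage matches this capacity running year-round."""
    annual_kwh = gigawatts * GIGAWATT_WATTS * HOURS_PER_YEAR / 1_000  # Wh -> kWh
    return annual_kwh / kwh_per_household

def kw_per_gpu(gigawatts: float, gpus: float) -> float:
    """Implied power budget per GPU, including cooling and networking overhead."""
    return gigawatts * 1e6 / gpus  # GW -> kW, spread across the fleet

print(f"{households_powered(10) / 1e6:.1f} million households")  # ~8.1 million
print(f"{kw_per_gpu(10, 4.5e6):.1f} kW per GPU")                 # ~2.2 kW
```

At the midpoint of Huang’s 4 million to 5 million GPU estimate, 10 gigawatts works out to a bit over 2 kilowatts per GPU, a figure that covers the whole data center footprint, not just the chip itself.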
“This is a giant project,” Huang told CNBC at the time.
OpenAI and Nvidia said the first phase of the project is expected to come online in the second half of this year on Nvidia’s Vera Rubin platform. However, during Nvidia’s quarterly earnings call in November, the company said there is “no assurance” that its agreement with OpenAI will progress beyond an announcement and to an official contract stage.
Nvidia’s first investment of $10 billion will be deployed when the first gigawatt is completed, and investments will be made at then-current valuations, as CNBC previously reported.
AMD
Lisa Su, chair and chief executive officer of Advanced Micro Devices Inc., displays an AMD Instinct MI455X GPU during the 2026 CES event in Las Vegas, Jan. 5, 2026.
In October, OpenAI announced plans to deploy six gigawatts of AMD’s GPUs across multiple years and multiple generations of hardware.
As part of the deal, AMD has issued OpenAI a warrant for up to 160 million shares of AMD common stock, which could amount to a roughly 10% stake in the company. The warrant includes vesting milestones tied to both deployment volume and AMD’s share price.
The companies said they plan to roll out the first gigawatt of chips in the second half of 2026, and added that the deal is worth billions of dollars, without disclosing a specific amount.
“You need partnerships like this that really bring the ecosystem together to ensure that, you know, we can really get the best technologies, you know, out there,” AMD CEO Lisa Su told CNBC at the time of the announcement.
Altman planted the seeds for the deal in June, when he appeared on stage alongside Su at an AMD launch event in San Jose, California. He said OpenAI planned to use AMD’s latest chips.
Broadcom
Broadcom CEO Hock Tan.
Later that month, OpenAI and Broadcom publicly unveiled a collaboration that had been in the works for well over a year.
Broadcom calls its custom AI chips XPUs, and has thus far relied on a few customers. But its pipeline of potential deals has sparked so much enthusiasm on Wall Street that Broadcom is now valued at over $1.6 trillion.
OpenAI said it’s designing its own AI chips and systems that will be developed and distributed by Broadcom. The companies have agreed to deploy 10 gigawatts of these custom AI accelerators.
In the October release, the companies said Broadcom will aim to begin deploying racks of the AI accelerator and network systems by the second half of this year, with a goal to complete the project by the end of 2029.
But Broadcom CEO Hock Tan told investors during the company’s quarterly earnings call in December that he doesn’t expect much revenue from the OpenAI partnership in 2026.
“We appreciate the fact that it is a multiyear journey that will run through 2029,” Tan said. “I call it an agreement, an alignment of where we’re headed.”
OpenAI and Broadcom did not disclose the financial terms of the deal.
Cerebras
Andrew Feldman, co-founder and CEO of Cerebras Systems, speaks at the Collision conference in Toronto, June 20, 2024.
OpenAI on Wednesday announced an agreement to deploy 750 megawatts of Cerebras’ AI chips that will come online across multiple tranches through 2028.
Cerebras builds large wafer-scale chips that can deliver responses up to 15 times faster than GPU-based systems, according to a release. The company is much smaller than Nvidia, AMD and Broadcom.
OpenAI’s deal with Cerebras is worth more than $10 billion, and it could be a boon for the chipmaker as it weighs a potential debut on the public markets.
“We are delighted to partner with OpenAI, bringing the world’s leading AI models to the world’s fastest AI processor,” Cerebras CEO Andrew Feldman said in a statement.
Cerebras sorely needs marquee customers. In October, the company withdrew plans for an IPO, days after announcing that it raised over $1 billion in a fundraising round. It had filed for a public offering a year earlier, but its prospectus revealed a heavy reliance on a single customer in the United Arab Emirates, Microsoft-backed G42, which is also a Cerebras investor.
Potential partners
Sam Altman, OpenAI CEO, speaks during a media tour of the Stargate data center in Abilene, Texas, on Sept. 23, 2025. Stargate is a collaboration of OpenAI, Oracle and SoftBank, with promotional support from President Donald Trump, to build data centers and other infrastructure for artificial intelligence throughout the U.S.
Where does that leave Amazon, Google and Intel, which all have their own AI chip plays?
In November, OpenAI signed a $38 billion cloud deal with Amazon Web Services. OpenAI will run workloads through existing AWS data centers, but the cloud provider also plans to build out additional infrastructure for the startup as part of the agreement.
Amazon is also in talks to potentially invest more than $10 billion in OpenAI, as CNBC previously reported.
OpenAI could decide to use Amazon’s AI chips as part of these investment discussions, but nothing official has been determined, according to a person with knowledge of the matter who asked not to be named because the discussions are confidential.
AWS announced its Inferentia chips in 2018, and the latest generation of its Trainium chips in late 2025.
Google Cloud also supplies OpenAI with computing capacity thanks to a deal that was quietly finalized last year. But OpenAI said in June that it has no plans to use Google’s in-house chips called tensor processing units, which Broadcom also helps produce.
Intel has been the biggest AI laggard among traditional chipmakers, which explains why the company recently took massive investments from the U.S. government and Nvidia. Reuters reported in 2024, citing people with knowledge of the discussions, that Intel had a chance to invest in OpenAI years earlier and potentially make hardware for the then-fledgling startup, offering it a way to avoid reliance on Nvidia.
Intel decided against the deal, according to Reuters.
In October, Intel announced a new data center GPU, codenamed Crescent Island, that it says is “designed to meet the growing demands of AI inference workloads and will offer high memory capacity and energy-efficient performance.” The company said “customer sampling” is expected in the second half of 2026.
Wall Street will hear updates on Intel’s latest AI efforts when the company kicks off tech earnings season next week.
— CNBC’s Kif Leswing, MacKenzie Sigalos and Jordan Novet contributed to this report.