OpenAI Partners with Broadcom to Deploy 10GW of Custom AI Chips, Broadcom Stock Soars!

On October 13 (local time), AI technology giant OpenAI and semiconductor design leader Broadcom announced a collaboration to jointly develop and deploy 10 gigawatts (GW) of custom AI accelerators for data centers. OpenAI will design these accelerators and systems, while Broadcom will assist in development and deployment.

OpenAI stated that by designing its own AI chips and systems, it can embed the experience gained from developing cutting-edge models and products directly into the hardware, unlocking higher levels of functionality and intelligence. The racks, scaled entirely with Broadcom’s Ethernet and other connectivity solutions, will be deployed across OpenAI’s facilities and partner data centers to meet the growing global demand for AI.

OpenAI and Broadcom have reached a long-term agreement for the joint development and supply of AI accelerators. The two companies have signed a term sheet to deploy racks that include AI accelerators and Broadcom networking solutions.

OpenAI co-founder and CEO Sam Altman said, “Partnering with Broadcom is a key step in building the infrastructure needed to unlock the potential of AI and deliver real benefits to individuals and businesses. Developing our own accelerators will further enhance our broader partner ecosystem, enabling us to build the necessary capabilities to advance AI for the benefit of humanity.”

Broadcom President and CEO Hock Tan said, “Our collaboration with OpenAI marks a pivotal moment in exploring artificial general intelligence. Since the launch of ChatGPT, OpenAI has been at the forefront of the AI revolution. We are thrilled to jointly develop and deploy 10GW of next-generation accelerators and networking systems that will pave the way for the future of artificial intelligence.”

OpenAI co-founder and President Greg Brockman added, “Our partnership with Broadcom will drive breakthroughs in artificial intelligence and bring the full potential of this technology closer to reality. By building our own chips, we can embed the lessons we’ve learned from developing advanced models and products directly into hardware, enabling greater functionality and intelligence.”

Charlie Kawwas, President of Broadcom’s Semiconductor Solutions Group, stated, “Our collaboration with OpenAI will continue to set new industry standards for the design and deployment of open, scalable, and energy-efficient AI clusters. The integration of custom accelerators with standards-based Ethernet for both vertical and horizontal scaling offers cost- and performance-optimized next-generation AI infrastructure. These racks incorporate Broadcom’s end-to-end portfolio of Ethernet, PCIe, and optical interconnect solutions, further strengthening our leadership in AI infrastructure products.”

Following the announcement, Broadcom’s stock price surged more than 10% in pre-market trading.

Notably, although OpenAI and Broadcom did not disclose specific transaction amounts in the announced agreement, industry sources have long speculated that OpenAI was the “new cloud customer” behind the $10 billion order Broadcom mentioned in its previous earnings report.

According to estimates by OpenAI executives, at current prices, deploying 1GW of AI computing power costs around $50 billion. This means the 10GW project could involve a total investment of up to $500 billion. Industry data show that servers account for over 60% of data center costs, and that AI chips make up more than 70% of AI server costs. On that basis, the AI chips alone in this $500 billion project could be worth more than $200 billion.
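For illustration, here is a minimal back-of-envelope sketch of that arithmetic in Python; the per-GW cost and the server and chip cost shares are the approximate figures cited above, not confirmed financial terms.

```python
# Rough estimate using the approximate figures cited in this article.
GW_DEPLOYED = 10                  # capacity covered by the OpenAI-Broadcom agreement
COST_PER_GW_USD = 50e9            # ~$50 billion per GW (OpenAI executives' estimate)
SERVER_SHARE = 0.60               # servers as a share of data center cost (>60%)
CHIP_SHARE_OF_SERVERS = 0.70      # AI chips as a share of AI server cost (>70%)

total_investment = GW_DEPLOYED * COST_PER_GW_USD
chip_value = total_investment * SERVER_SHARE * CHIP_SHARE_OF_SERVERS

print(f"Total project investment: ~${total_investment / 1e9:.0f} billion")  # ~$500 billion
print(f"Implied AI chip value:    ~${chip_value / 1e9:.0f} billion")        # ~$210 billion
```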

However, since OpenAI has opted to design its own AI chips—with Broadcom providing backend design services and TSMC handling manufacturing—the total cost is expected to be lower than directly purchasing AI chips from NVIDIA or AMD. This cost advantage is one of the key reasons behind OpenAI’s decision to develop its own chips.

According to CNBC, the partnership between OpenAI and Broadcom did not come together overnight—the two companies have been secretly working together for 18 months. These customized systems cover networking, storage, and computing capabilities and are specifically designed for OpenAI’s workloads.

Furthermore, leaked information suggests that OpenAI’s in-house AI chips are optimized for the inference stage and connected through Broadcom’s Ethernet stack. The companies plan to begin deploying these racks containing OpenAI’s custom AI chips in late 2026. OpenAI President Greg Brockman also revealed that the company even plans to use its own models to accelerate chip design and improve efficiency.

In fact, this 10GW custom AI chip collaboration with Broadcom is part of OpenAI’s massive long-term commitment to computing power for its future development. Sam Altman has hinted that this 10GW initiative is only the beginning. OpenAI’s current computing capacity is slightly above 2GW—enough to scale ChatGPT to its current level, launch the video generation service Sora, and support extensive AI research.

Recently, OpenAI has announced a series of large-scale cloud and chip supply deals, including a $300 billion cloud services agreement with Oracle, a $22 billion cloud deal with CoreWeave, a $100 billion AI chip purchase agreement (10GW data center capacity) with NVIDIA, and a 6GW AI chip procurement deal with AMD. If all these partnerships materialize, OpenAI’s total computing power could exceed 30GW, significantly accelerating the advancement of its AI technology.
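As a rough tally of how those figures add up, the sketch below sums only the capacities this article quotes in gigawatts; the Oracle and CoreWeave agreements are quoted in dollars rather than gigawatts, so they are left out of the sum.

```python
# Capacities explicitly quoted in gigawatts in this article.
capacities_gw = {
    "Existing OpenAI capacity (approx.)": 2,
    "Broadcom custom accelerators": 10,
    "NVIDIA agreement": 10,
    "AMD agreement": 6,
}
subtotal = sum(capacities_gw.values())
print(f"Subtotal: {subtotal} GW")  # 28 GW before counting the Oracle and CoreWeave cloud deals
```

With the capacity behind the Oracle and CoreWeave cloud agreements added on top, the total plausibly passes the 30GW figure cited above.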

In summary, this collaboration marks another major milestone for OpenAI in its ambitious AI data center expansion plans. For Broadcom, the deal reinforces its central role in the custom AI accelerator market and further solidifies its position in the AI infrastructure supply chain by providing Ethernet-based solutions for both vertical and horizontal scaling.