At Computex 2025 in Taiwan, Nvidia CEO Jensen Huang introduced NVLink Fusion, a significant update to Nvidia's interconnect technology that enables non-Nvidia central processing units (CPUs) and application-specific integrated circuits (ASICs) to be integrated with Nvidia's GPUs. Previously, NVLink was restricted to Nvidia's own chips.
The shift reflects Nvidia’s intention to enable more customizable AI infrastructures and move beyond just producing proprietary chips. “NVLink Fusion is so that you can build semi-custom AI infrastructure, not just semi-custom chips,” Huang emphasized during his keynote.
NVLink Fusion Gains Key Partners, Strengthens Nvidia’s Role in Hybrid AI System Development
Nvidia revealed that several chipmaking and design partners, including MediaTek, Marvell, Alchip, Astera Labs, Synopsys, and Cadence, will support NVLink Fusion. The expansion means that companies such as Fujitsu and Qualcomm Technologies can now pair their own CPUs with Nvidia GPUs in data centers. According to analyst Ray Wang, the initiative signals Nvidia's push into data centers traditionally built around custom ASICs, aiming to capture more market share by appealing to clients that are not solely reliant on Nvidia-based systems.

While Nvidia maintains dominance in the general-purpose AI GPU space, competitors like Google, Microsoft, and Amazon are investing heavily in developing their own custom processors. NVLink Fusion positions Nvidia to remain integral in AI system architecture, even in mixed-hardware environments.
Wang noted that the move allows Nvidia to consolidate its position as the "center of next-generation AI factories," while equity analyst Rolf Bulk warned that giving customers the flexibility to use non-Nvidia CPUs could reduce demand for Nvidia's own processors. Even so, that flexibility could strengthen Nvidia's value proposition against emerging competing architectures.
Nvidia Expands Global Reach with New AI Systems, Cloud Platform, and Taiwan Investment
Beyond NVLink Fusion, Huang announced advancements in Nvidia's next-generation Grace Blackwell systems. The upcoming GB300, expected in Q3 2025, is slated to deliver significantly higher system performance for AI workloads. Nvidia also introduced DGX Cloud Lepton, a platform that gives developers a global compute marketplace with access to GPU resources across cloud providers. The move targets a key industry challenge: securing scalable, high-performance GPU access for AI developers worldwide.
Nvidia’s dedication to strengthening its footprint in Asia was highlighted by its announcement to open a new office in Taiwan and partner with Foxconn to build an AI supercomputer. This collaboration aims to enhance Taiwan’s AI infrastructure and foster innovation among key players such as TSMC.
Huang emphasized Taiwan's critical position in the global technology supply chain and reiterated Nvidia's commitment to driving progress in AI and robotics. Together, these strategic moves reflect Nvidia's broader effort to reinforce its position as a central force in the future of AI development.