NVIDIA GTC 2025 Reveals Storage Supplier Lineup Showcasing a Coordinated Effort

Storage players ride the Nvidia bus at GTC 2025

Twelve storage suppliers participated in a precisely coordinated showcase at Nvidia GTC 2025, aligning with Jensen Huang’s efforts to drive sales of GPUs, NICs, switches, and software to businesses building AI stacks amid expectations of an agentic AI surge.

The lineup included Cloudian, Cohesity, DDN, Dell, HPE, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data, and WEKA.

These suppliers linked their announcements to Nvidia’s unveiling of its Blackwell Ultra GPUs and AI Data Platform, a reference design that integrates Nvidia GPUs, BlueField NICs, Spectrum-X switches, and AI software with storage systems managing block, file, and object data. That data is transmitted over the NICs and switches to the GPUs, where Nvidia’s AI software processes it.

Cloudian, an object storage vendor, introduced its HyperStore object storage platform with GPUDirect for objects, offering data lake capacity alongside HPC-class data access performance.

Cloudian asserts superiority over HPC file products for AI training and inferencing, at one-third the cost of non-GPUDirect systems. It announced Nvidia-based reference architectures (RA) on Lenovo and Supermicro server platforms, with all-flash HyperStore supporting GPUDirect and RDMA networking. The Lenovo/Nvidia RA delivers 20 GBps per node with linear scaling, supporting large language model (LLM) training, inference operations, and checkpointing functions.

The Lenovo RA document, detailing ThinkSystem SR635 and SR655 V3 server options, is accessible here, along with the Supermicro RA document featuring the Hyper A+ Storage Server.

Cohesity enhanced its Gaia GenAI search assistant, positioning it as “one of the industry’s first AI search capabilities for backup data stored on-premises.” Sanjay Poonen, Cohesity’s president and CEO, emphasized: “By deploying Cohesity Gaia on-premises, customers can harness powerful data intelligence directly within their environment and not worry about any of that data leaving their infrastructure.”

The enhanced Gaia utilizes Nvidia GPUs, NIM microservices, and NeMo Retriever, enabling searches across petabyte-scale datasets. It supports multilingual indexing and querying, accompanied by reference architectures and pre-packaged on-premises LLMs.

Cohesity and HPE will validate and deploy Gaia on HPE Private Cloud AI, co-developed with Nvidia. Cisco and Nutanix will also integrate Gaia into their full-stack systems.

DDN is integrating Nvidia’s AI Data Platform reference design with its EXAScaler and Infinia 2.0 storage products within its AI Data Intelligence Platform, and officially supporting Nvidia Blackwell-based systems, including DGX and HGX configurations.

This platform, interfaced with Nvidia GPU server hardware and software, incorporates BlueField-3 DPUs, Spectrum-X network switches, access to NIM and NeMo Retriever microservices, and reference architectures.

DDN’s AI400X2 and AI400X2 QLC storage arrays have received Nvidia-certified Storage status, validated with DGX SuperPOD and GB200 NVL72 systems, and optimized for Nvidia’s Spectrum-X networking. They deliver sub-millisecond latency and 1 TBps bandwidth.


Testing demonstrated that a single DDN AI400X2-Turbo achieved 10x the usual 1 GBps/GPU read-write requirement when paired with an Nvidia DGX B200. Multiple AI400X2-Turbo appliances achieved up to 96 percent network utilization per DGX B200, saturating nearly 100 GBps (800 Gbps) of bandwidth in both read and write operations. Benchmark details can be reviewed here.
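As a sanity check on these figures, the unit conversion and per-GPU arithmetic can be sketched as follows. The 800 Gbps link speed, 96 percent utilization, and 1 GBps/GPU baseline come from the benchmark claims above; the eight-GPU count is the DGX B200's published configuration.

```python
# Sanity-check the DDN / DGX B200 bandwidth figures quoted above.
# Assumption (not in the article text): a DGX B200 contains 8 GPUs,
# per Nvidia's published system configuration.

def gbps_to_gbyteps(gbps: float) -> float:
    """Convert gigabits per second to gigabytes per second (8 bits/byte)."""
    return gbps / 8

link_gbyteps = gbps_to_gbyteps(800)       # 800 Gbps line rate -> 100 GBps
utilized_gbyteps = link_gbyteps * 0.96    # at 96% network utilization
per_gpu_gbyteps = utilized_gbyteps / 8    # spread across 8 GPUs

print(link_gbyteps)     # 100.0 GBps, matching the "nearly 100 GBps" figure
print(utilized_gbyteps) # 96.0 GBps delivered
print(per_gpu_gbyteps)  # 12.0 GBps/GPU, well above the 1 GBps/GPU baseline
```

The conversion confirms that "100 GBps" and "800 Gbps" describe the same link speed, and that the utilized bandwidth comfortably exceeds the stated 1 GBps-per-GPU requirement.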

DDN also introduced DDN Inferno, described as “a game-changing inference acceleration appliance” integrating DDN’s Infinia storage with Nvidia’s Spectrum-X AI-optimized networking. Initial tests indicate that Inferno outperforms AWS S3-based inference stacks by 12x and can “provide 99 percent GPU utilization,” though workload specifics remain undisclosed.

Omar Orqueda, SVP, Infinia Engineering at DDN, stated: “Inferno delivers the industry’s most advanced inference acceleration, making instant AI a reality while slashing costs at enterprise scale.”

Additionally, DDN is integrating its EXAScaler Lustre parallel file system storage with Infinia object storage into a hybrid xFusionAI offering. Infinia currently supports S3, with future enhancements planned for block, file, and SQL protocols. However, details on workload distribution across EXAScaler and Infinia systems are unavailable.

Supermicro reports that DDN’s xFusionAI accelerates AI data center workflows by 15x, optimizing multimodal AI performance, from high-speed RAG pipelines to autonomous decision-making systems. According to DDN, this approach ensures “seamless AI scaling across environments, including on-premises, cloud, and air-gapped systems.”

Sven Oehme, DDN CTO, remarked: “xFusionAI is the convergence of AI’s past, present, and future. It brings together the raw performance of ExaScaler with the intelligent scalability of Infinia, delivering a true ‘best of both worlds’ platform that revolutionizes AI infrastructure.”

DDN plans to introduce fully validated Nvidia-Certified Storage reference architectures soon.

Dell is expanding its Dell AI Factory with Nvidia, integrating Dell storage with Nvidia GPUs, networking, and AI software using Nvidia’s AI Data Platform reference design.

Michael Dell, CEO, stated: “We are celebrating the one-year anniversary of the Dell AI Factory with Nvidia by doubling down on our mission to simplify AI for the enterprise … We are breaking down barriers to AI adoption, speeding up deployments, and helping enterprises integrate AI into their operations.”

Dell’s PowerScale scale-out file system has been validated under Nvidia’s Cloud Partner Program and certified as Nvidia Certified Storage for enterprise AI deployments. Recent PowerScale updates deliver 220 percent faster data ingestion and 99 percent faster data retrieval than previous iterations.

New enhancements include an open-source RAG Connector for LangChain and Nvidia NIM microservices, plus integration of the Nvidia RAPIDS Accelerator for Apache Spark with Dell Data Lakehouse software. Dell also supports Nvidia Dynamo, which optimizes GPU memory usage by offloading key-value cache data to PowerScale or other external storage.
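Dynamo's actual API is not shown in the article, but the key-value cache offload idea it describes can be illustrated with a minimal two-tier cache: recently used entries stay in a fast in-memory tier (standing in for GPU or host memory), while evicted entries spill to slower external storage (a local directory here, standing in for a network file system such as PowerScale) and are pulled back on a hit. All class and method names below are illustrative, not Dynamo's interface.

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class TieredKVCache:
    """Illustrative sketch of KV-cache offload: an LRU in-memory tier
    that spills evicted entries to slower external storage."""

    def __init__(self, capacity: int, spill_dir: str):
        self.capacity = capacity
        self.hot = OrderedDict()    # fast tier (simulates GPU/host memory)
        self.spill_dir = spill_dir  # slow tier (simulates external storage)

    def _spill_path(self, key: str) -> str:
        return os.path.join(self.spill_dir, f"{key}.kv")

    def put(self, key: str, value) -> None:
        self.hot[key] = value
        self.hot.move_to_end(key)
        if len(self.hot) > self.capacity:
            # Evict the least-recently-used entry and offload it.
            old_key, old_val = self.hot.popitem(last=False)
            with open(self._spill_path(old_key), "wb") as f:
                pickle.dump(old_val, f)

    def get(self, key: str):
        if key in self.hot:
            self.hot.move_to_end(key)
            return self.hot[key]
        path = self._spill_path(key)
        if os.path.exists(path):
            # Miss in memory: recall from external storage and promote.
            with open(path, "rb") as f:
                value = pickle.load(f)
            self.put(key, value)
            return value
        return None

# Usage: a two-slot hot tier; the third insert pushes the oldest to disk.
with tempfile.TemporaryDirectory() as d:
    cache = TieredKVCache(capacity=2, spill_dir=d)
    cache.put("prompt-a", [0.1, 0.2])
    cache.put("prompt-b", [0.3, 0.4])
    cache.put("prompt-c", [0.5, 0.6])           # evicts "prompt-a" to disk
    assert "prompt-a" not in cache.hot
    assert cache.get("prompt-a") == [0.1, 0.2]  # recalled from spill tier
```

The design choice mirrored here is the one the Dynamo description implies: trading a slower recall path for the ability to keep far more cached context than fast memory alone can hold.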

Dell is introducing professional services for AI data strategy and cleansing to optimize Nvidia AI Data Platform features, ensuring systematic data discovery, integration, automation, and quality improvements. A blog details how customers can improve RAG data ingestion using PowerScale’s RAG connector.

HPE’s storage business introduced a “unified data layer” to support the “agentic AI era,” combining structured and unstructured data to accelerate AI data lifecycles across its high-performance data fabric and enterprise storage infrastructure.

Jim O’Dorisio, SVP and GM of HPE storage, explained: “As we integrate our Alletra storage MP [platform], our private cloud AI assets, and our cross multi-cloud environments, we create a truly unified data layer that moves AI closer to the data, and that’s absolutely critical.”

HPE announced new Alletra software features, enhanced ransomware protection with Zerto, and additional Azure support. The MP X10000 platform gains automated inline metadata tagging to accelerate AI data ingestion. Future updates will introduce GPUDirect for object support in collaboration with Nvidia.

HPE also launched AI Mod POD, a high-density, performance-optimized AI datacenter in a container, supporting up to 1.5 MW per module for rapid deployment.

Other vendors, including Hitachi Vantara, NetApp, VAST Data, and WEKA, also made announcements tied to Nvidia’s GTC event, all aligning their AI storage innovations with Nvidia’s Blackwell GPUs, AI Data Platform, and AI Enterprise software.