Nvidia announces next-gen AI and GPU advancements in San Jose
At the 2025 GPU Technology Conference (GTC) in San Jose, Nvidia CEO Jensen Huang outlined advances in storage and memory technologies intended to support future artificial intelligence (AI) applications. The event highlighted developments in GPU technology, including new models and initiatives aimed at faster data access.

Huang detailed the latest additions to the Blackwell GPU series and previewed future generations, the Vera Rubin and Richard Feynman systems, slated for 2026 and beyond. The Rubin Ultra NVL576 system promises higher data transfer speeds and greater memory capacity, along with physically larger component designs.

A key focus of the conference was the introduction of AI-driven data platforms aimed at improving access to data held in digital storage. Vendors including DDN, Dell Technologies, and IBM will build on Nvidia's AI query technology to generate near-real-time insights from vast amounts of stored data.

Nvidia is also working on a new storage architecture designed to enhance GPU computing. It will use NVMe technology to increase efficiency and reduce latency by letting GPUs request and process data directly from storage rather than routing transfers through CPUs (a minimal sketch of this kind of direct GPU-to-storage path appears at the end of this article).

Several storage and memory companies made announcements of their own at GTC. Micron introduced SOCAMM (Small Outline Compression Attached Memory Module), a modular memory technology that boosts data bandwidth. Phison unveiled its aiDAPTIV+ platform for more affordable AI deployments. Vast Data added new features to its platform for better AI insights, and VDURA launched an all-flash appliance tailored to AI workloads.

Overall, the advances discussed at GTC should help meet the growing demands of AI workloads, supporting faster and more efficient data handling in the future.
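Nvidia has not publicly detailed how the upcoming storage architecture will expose direct GPU-to-NVMe access, but the general idea can be illustrated with tooling Nvidia already ships: GPUDirect Storage, accessed here through the KvikIO Python bindings. This is a minimal sketch under that assumption, not the new architecture itself; the file name and buffer size are hypothetical placeholders.

```python
# Minimal sketch of a direct NVMe-to-GPU read using Nvidia's existing
# GPUDirect Storage path via the KvikIO Python bindings. Used here only
# as an analogue for the idea described at GTC: data lands in GPU memory
# without being staged through a CPU bounce buffer.
# "weights.bin" and NUM_FLOATS are hypothetical placeholders.
import cupy
import kvikio

NUM_FLOATS = 1_000_000                      # hypothetical payload size

# Destination buffer lives in GPU memory, allocated up front.
gpu_buf = cupy.empty(NUM_FLOATS, dtype=cupy.float32)

# Open the file through KvikIO; with GPUDirect Storage enabled, the read
# below is a DMA transfer from the NVMe device straight into gpu_buf.
f = kvikio.CuFile("weights.bin", "r")
try:
    nbytes = f.read(gpu_buf)                # blocking read into GPU memory
    print(f"read {nbytes} bytes directly into GPU memory")
finally:
    f.close()
```

When GPUDirect Storage is available, the read above bypasses host memory entirely; otherwise KvikIO falls back to a conventional POSIX read, which is precisely the CPU-mediated path the architecture discussed at GTC is meant to avoid.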