XR And AI Dominate The Future Of Graphics At Siggraph 2025
Siggraph 2025 in Vancouver showcased how artificial intelligence and extended reality (XR) are converging to reshape the graphics industry, with major announcements from Nvidia, Meta, and Arm demonstrating the future of neural rendering and immersive computing. The conference highlighted a fundamental shift toward AI-accelerated graphics processing, with new hardware and software solutions designed to make high-fidelity rendering more accessible and efficient across platforms.
What you should know: Nvidia unveiled new Blackwell-powered RTX PRO servers and workstation GPUs specifically designed for professional AI and rendering workloads.
- The RTX PRO 6000 servers deliver a 4x improvement in real-time rendering frame rates and 6x higher LLM inference throughput compared with previous-generation L40S GPUs.
- Two new workstation GPUs round out the Blackwell lineup: the RTX PRO 4000 ($1,500) with 24GB VRAM and 770 AI TOPS at 70 watts, and the RTX PRO 2000 ($700) with 16GB VRAM and 545 AI TOPS.
- Partners including Cisco, Dell Technologies, HPE, Lenovo and Supermicro will offer these systems.
Physical AI advances: Nvidia introduced new Omniverse libraries and Cosmos Physical AI models to accelerate robotics training with physically accurate simulations.
- Isaac Sim combines Omniverse NuRec libraries with Gaussian splats to quickly generate 3-D simulations that mimic real-world environments (see the splat sketch below).
- Cosmos Transfer1 creates photorealistic, controllable synthetic data from multiple video sources for training purposes.
- Cosmos Predict2 serves as an image-to-future-world-state model designed to predict movement and actions in simulations.
- Cosmos Reason 7B functions as a state-of-the-art reasoning vision-language model for on-device AI applications.
In plain English: Nvidia’s new AI tools help train robots by creating incredibly realistic virtual worlds. Think of it like building a hyper-detailed video game environment where robots can practice tasks thousands of times without breaking real equipment or wasting materials.
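For readers curious what the Gaussian splats behind those NuRec reconstructions actually are, here is a toy illustration: each splat is a soft, colored blob, and a pixel's color is the front-to-back blend of the splats covering it. The data, function names, and CPU loop below are assumptions for clarity, not Nvidia's Isaac Sim or NuRec code.

```python
# Toy sketch of Gaussian-splat compositing at a single pixel.
# All values and names are illustrative assumptions.
import numpy as np

def splat_alpha(pixel, mean, cov, opacity):
    """Opacity contribution of one projected 2D Gaussian at a pixel."""
    d = pixel - mean
    falloff = np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
    return opacity * falloff

def composite_pixel(pixel, splats):
    """Alpha-blend depth-sorted splats (nearest first) at a single pixel."""
    color = np.zeros(3)
    transmittance = 1.0
    for mean, cov, opacity, rgb in splats:
        alpha = splat_alpha(pixel, mean, cov, opacity)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:  # pixel is effectively opaque; stop early
            break
    return color

# Two toy splats: (2D mean, 2x2 covariance, opacity, RGB color), sorted near-to-far.
splats = [
    (np.array([5.0, 5.0]), 2.0 * np.eye(2), 0.8, np.array([1.0, 0.2, 0.2])),
    (np.array([6.0, 5.0]), 4.0 * np.eye(2), 0.6, np.array([0.2, 0.2, 1.0])),
]
print(composite_pixel(np.array([5.5, 5.0]), splats))
```

Real reconstructions fit millions of such Gaussians to camera footage and rasterize them on the GPU, which is how captured real-world spaces can become interactive training environments for robots.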
Meta’s prototype breakthroughs: Reality Labs showcased two experimental headsets pushing the boundaries of VR display technology.
- Tiramisu delivers hyper-realistic VR with resolution more than triple that of Quest 3 and 14 times the brightness, though currently limited to a 33 x 33 degree field of view.
- Boba 3 offers an unprecedented field of view of 180 degrees horizontal and 120 degrees vertical, covering roughly 90% of the human visual range, compared to Quest 3's 110-degree FoV.
- The VR prototype version of Boba 3 weighs just 660g, lighter than both the standard Boba 3 (840g) and Quest 3 (698g).
Mobile graphics evolution: Arm announced neural rendering capabilities coming to its next-generation GPUs in 2026, bringing desktop-class AI features to mobile devices.
- Neural Super Sampling can cut GPU workload by up to 50% by rendering each frame at a lower resolution and using AI to upscale it to native resolution (a rough sketch follows below).
- Neural Frame Rate Upscaling generates additional in-between frames from rendered ones, saving power while sustaining high frame rates.
- These capabilities will be built into the hardware, with ML extensions for the Vulkan graphics API and a Neural Graphics Development Kit for developers.
In plain English: Arm’s new technology is like having a smart assistant that can take a rough sketch and turn it into a detailed painting. Instead of your phone’s graphics chip working overtime to create every pixel, AI fills in the gaps to make games and apps look better while using less battery power.
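To make the division of labor concrete, here is a minimal sketch of the render-at-low-resolution-then-upscale pattern that neural super sampling builds on, written in PyTorch. The tiny network, its sizes, and all names are illustrative assumptions, not Arm's implementation.

```python
# Minimal sketch of the neural super sampling idea: shade at half resolution,
# then let a small learned network reconstruct the native-resolution frame.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Learned 2x upscaler: a few convolutions followed by a pixel shuffle."""
    def __init__(self, features: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, 3 * 4, kernel_size=3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),  # rearrange channels into a 2x larger image
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        return self.net(low_res)

# The GPU shades only a quarter of the pixels (half resolution per axis);
# the network fills in the rest, which is where the workload savings come from.
low_res_frame = torch.rand(1, 3, 540, 960)   # e.g. a 960x540 render target
upscaled = TinyUpscaler()(low_res_frame)     # -> shape (1, 3, 1080, 1920)
print(upscaled.shape)
```

Production-grade upscalers typically also consume motion vectors, depth, and the previous frame to keep the output temporally stable, but the trade is the same: rasterize fewer pixels, infer the rest.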
Industry standardization: The Khronos Group partnered with major organizations to integrate geospatial Gaussian splats into the glTF 3-D asset format standard.
- Collaborators include Open Geospatial Consortium, Niantic Spatial, Cesium and Esri.
- This integration should broaden glTF applications and enable faster, more cost-effective 3-D asset creation through AI-accelerated Gaussian splatting techniques.
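As a rough illustration of why glTF is a natural home for splats, the sketch below packs a couple of splats into glTF-style buffers and accessors. The extension name and attribute layout are invented placeholders; the actual specification is being defined by Khronos and its partners, not by this snippet.

```python
# Illustrative sketch only: what per-splat data might look like when packed
# into glTF-style buffers and accessors. Extension name and layout are hypothetical.
import json
import struct

# Each splat: position, rotation quaternion, per-axis scale, opacity, RGB color.
splats = [
    ((0.0, 1.2, -3.0), (0.0, 0.0, 0.0, 1.0), (0.05, 0.05, 0.02), 0.9, (0.8, 0.3, 0.2)),
    ((0.4, 1.1, -2.8), (0.0, 0.0, 0.0, 1.0), (0.03, 0.06, 0.03), 0.7, (0.2, 0.6, 0.9)),
]

# Pack one attribute (positions) into a flat little-endian float buffer, glTF-style.
positions = b"".join(struct.pack("<3f", *pos) for pos, _, _, _, _ in splats)

gltf = {
    "asset": {"version": "2.0"},
    "extensionsUsed": ["EXT_hypothetical_gaussian_splats"],  # placeholder name
    "buffers": [{"byteLength": len(positions)}],
    "bufferViews": [{"buffer": 0, "byteOffset": 0, "byteLength": len(positions)}],
    "accessors": [
        {"bufferView": 0, "componentType": 5126, "count": len(splats), "type": "VEC3"}
    ],
}
print(json.dumps(gltf, indent=2))
```

The remaining per-splat attributes (rotation, scale, opacity, color) would be packed the same way, which is what makes the existing glTF buffer/accessor machinery a good fit for splat assets.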
The big picture: Siggraph 2025 demonstrated that neural graphics has become the dominant paradigm across all computing platforms, from cloud servers to mobile devices.
- Adobe Research alone shared more than 25 published papers, with most incorporating AI in some capacity.
- The convergence of XR and AI technologies is creating new possibilities for everything from professional rendering to consumer entertainment and robotics training.
What they’re saying: Industry leaders emphasized the transformative nature of these developments for graphics computing.
- Nvidia positioned its GPUs as fundamental building blocks combining 3-D rendering with AI capabilities.
- Meta’s demonstrations showed “what’s technically possible and where VR could go in the future in terms of image quality and brightness.”