
Nvidia Editors Day 2025: All the Details and Hands-On with RTX 5000

Christopher Coke
Category: Hardware

At CES this year, Joseph and I were able to attend Nvidia’s Editors Day, along with a number of other journalists, to learn all of the exciting details surrounding the RTX 5000 series. Much has been said following CEO Jensen Huang’s keynote address, but after going hands-on with the new tech and seeing advancements like Reflex 2 and DLSS 4 in action, we walked away excited and eager to see more.

With each new generation, Nvidia holds an Editors Day conference, giving selected journalists the opportunity to take a deep dive into the new technology. This year, I made the flight down to Las Vegas for CES 2025 so I could attend alongside Joseph. The event consisted of multiple presentations diving into the architecture, DLSS 4, Multi Frame Generation, and the growing role of AI in PCs. A number of interesting things were discussed, but the most impactful was trying the RTX 5090 for ourselves and seeing the real-world impact of DLSS 4 on FPS and, even more so, on image quality.

RTX 5000 Blackwell - Designed for AI, RTX Neural Shaders

RTX Blackwell will consist of the RTX 5070, RTX 5070 Ti, RTX 5080, and RTX 5090. These will all be released this year, and all but the RTX 5070 Ti are planned to have Founders Edition models retailing at MSRP. For the first time in what seems like ages, the RTX 5090 feels like a worthwhile step up with nearly double the shader, AI, and RT TOPS and FLOPS, as well as twice the memory with a whopping 32GB of GDDR7. It will also retail for $1,999, while the RTX 5080 will debut at $999, the RTX 5070 Ti at $749, and the RTX 5070 at $549.

The overarching theme, like the keynote, was that this generation was designed with AI in mind. While the “fake frames” meme has been running rampant, I’m here to tell you that there is real potential here and that Nvidia has banked heavily on its success. RTX Neural Shaders, DLSS 4, and local AI processing via a refined core system promise both to deliver much more realistic visuals and, at least in some cases, to lower the performance impact of doing so.

RTX Neural Shaders are especially interesting. Blackwell brings with it a reworked streaming multiprocessor (SM) that now incorporates the tensor cores in the rendering process. When games are designed to leverage Neural Shaders, the SM will use these cores to dramatically improve the realism of textures, materials, and lighting. Because these details are being cached, generated, or enhanced through AI processing, it’s possible for in-game performance to actually improve when utilizing these features.

In a live demonstration in the setting above, the RTX Neural Shading system was used to enable path-traced lighting, delivering realistic light and shadows from the fire in the corner. With the current generation of GPUs, the graphics processor would need to individually trace every path, significantly dropping FPS. With Blackwell, the GPU is able to trace one or two paths, store that data in its AI cache system, and then use those paths to infer where else the light should spill and where shadows should fall. Turning it on produced a genuine, immediately noticeable visual improvement, and because the GPU didn’t have to trace everything in real time, FPS actually went up with the feature enabled, the inverse of the relationship that has always existed between image quality and FPS.
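
To make the idea concrete, here is a minimal, purely illustrative Python sketch of that general approach rather than Nvidia’s actual implementation: trace only a handful of expensive paths, cache the results, and reuse the cached values to fill in nearby pixels instead of tracing every path individually. The trace_path function and the 8x8 tile size are hypothetical stand-ins.

```python
import random

def trace_path(x, y):
    """Hypothetical stand-in for an expensive path trace of one pixel.
    In a real renderer this would bounce a ray through the scene."""
    return 0.5 + 0.5 * random.random()  # fake radiance value

def shade_frame(width, height, tile=8):
    """Trace one path per tile, cache it, and reuse the cached value
    for every other pixel in that tile (the 'infer, don't trace' idea)."""
    cache = {}
    image = [[0.0] * width for _ in range(height)]
    traced = 0
    for y in range(height):
        for x in range(width):
            key = (x // tile, y // tile)
            if key not in cache:            # only one real trace per tile
                cache[key] = trace_path(x, y)
                traced += 1
            image[y][x] = cache[key]        # everything else is inferred
    print(f"Traced {traced} paths instead of {width * height}")
    return image

shade_frame(640, 360)  # 3,600 traces instead of 230,400
```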

RTX Neural Texture Compression and RTX Neural Materials were even more impressive. These two systems, should game developers integrate them in a similar fashion to DLSS, stand to dramatically improve visual fidelity. Neural Texture Compression gathers, compresses, and stores texture data, recalling it more efficiently than traditional texture loading, saving VRAM and giving developers even more headroom for richer, more detailed assets.
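
As a rough back-of-the-envelope illustration of why this matters for VRAM, here is a hypothetical comparison; the map count and compression ratio are assumptions for illustration, not Nvidia figures.

```python
# Back-of-the-envelope VRAM math for texture compression (illustrative only).
# Assumptions (not Nvidia figures): a 4K material with 5 texture maps
# (albedo, normal, roughness, metalness, AO) stored as BC7 (1 byte/texel),
# and a hypothetical neural representation that is 4x smaller again.

TEXEL_BYTES_BC7 = 1            # BC7 block compression: 8 bits per texel
RESOLUTION = 4096 * 4096       # one 4K texture map
MAPS_PER_MATERIAL = 5
NEURAL_RATIO = 4               # hypothetical additional compression factor

bc7_mb = RESOLUTION * TEXEL_BYTES_BC7 * MAPS_PER_MATERIAL / (1024 ** 2)
neural_mb = bc7_mb / NEURAL_RATIO

print(f"BC7 material:    {bc7_mb:.0f} MB")     # 80 MB
print(f"Neural material: {neural_mb:.0f} MB")  # 20 MB
print(f"100 materials saves {100 * (bc7_mb - neural_mb) / 1024:.1f} GB of VRAM")
```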

RTX Neural Materials, on the other hand, takes shader code used to render material surfaces in games and compresses it, allowing the GPU to process shaders multiple times faster than traditional rendering. As Jensen demoed in his address, and we were shown again at Editors Day, this allows shaders to be much, much more complex and detailed. 
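
As I understand it, the core idea is that a small neural network learns to approximate a complex, layered material function, so the GPU evaluates one compact network instead of a long chain of shader code. Here is a minimal numpy sketch of that concept, with random weights standing in for a trained network; it is not Nvidia’s actual implementation.

```python
import numpy as np

# Minimal sketch of the neural-material idea: a tiny MLP maps shading
# inputs (view direction, light direction, UVs) to an RGB response,
# standing in for a long, expensive hand-written shader. The weights
# here are random; a real system would train them to match the material.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # hidden layer
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)    # RGB output

def neural_material(view_dir, light_dir, uv):
    x = np.concatenate([view_dir, light_dir, uv])  # 3 + 3 + 2 = 8 inputs
    h = np.maximum(x @ W1 + b1, 0.0)               # ReLU hidden layer
    rgb = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid -> [0, 1] color
    return rgb

# One shading point: a pretend camera direction, light direction, and UV.
print(neural_material(np.array([0.0, 0.0, 1.0]),
                      np.array([0.3, 0.5, 0.8]),
                      np.array([0.25, 0.75])))
```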

These technologies are being leveraged in a number of compelling ways that could have a direct impact on the quality of gaming visuals over time. Skin, for example, can be made much more realistic. The advanced integration of AI also allows developers to leverage lifelike generative AI faces. While I’m no developer or 3D artist, it appears that this system uses a face rendered with the game engine, runs it through an AI model trained on a large dataset of faces in different lighting and with different expressions, and enhances the realism of the faces appearing in the game.

Later in the demonstrations we attended, we were also shown examples of local AI processing enhancing generative and assistive AI in games. One example showed AI-driven conversation trees and NPCs that were able to understand spoken words and carry out in-game actions, like changing the color of your spaceship. Another game, InZOI, which is set to release in March, applies AI in the form of a small language model (SLM) to make its life sim more lifelike and dynamic. NPCs (ZOIs) have thoughts and motivations, driven by AI, and act based on them, their character traits, and their interactions with other ZOIs. Another example, Mir5, previewed an AI-driven boss encounter where the enemy reacts to your strategy, so you can’t run the same routine twice: the boss learns and adapts based on your attempts.

Another interesting part of the presentation was the introduction of AI Agents. These personalized AIs take on a visual form on your PC and can be designed to specialize in different tasks and to access and process different kinds of information. Nvidia demonstrated a new graphical form of programming these agents that seems much more user-friendly for those of us who don’t know how to code.

We were treated to a demo from Streamlabs showcasing its new Intelligent Streaming Assistant agent. It’s still in development, but the demo highlighted its performance as a livestream producer, actively monitoring gameplay in Fortnite and adding effects like a heartbeat sound effect to heighten the tension of a stand-off. It was also able to accept commands to control Streamlabs Desktop, comment on in-game action, and converse as a kind of co-host.

Another demo showed us PUBG Ally, from Krafton. Ally gives solo players an AI bot that can follow directions, offer suggestions, and fight alongside you much like a human teammate would. It wasn’t without its hiccups, but the promise is there. How this will actually be rolled out into the game is another question we don’t have answers to yet.

One area where all of this needs to improve is the quality of the AI voices. To a one, they sounded robotic and unnatural compared to competing systems like Google Gemini or Microsoft’s Azure voices. This will surely come with time, but it did highlight that these systems are still in the development phase.

The other major highlight of the day was DLSS 4, Multi Frame Generation, and Reflex 2.0.

DLSS 4, Multi Frame Generation, Reflex 2.0

This portion of the show was particularly exciting. DLSS 4 is an evolution of the current technology and brings with it major advancements to make it more compelling and performant than ever before. 

DLSS 4 remains an intelligent upscaling solution at its core, but in the demos we saw (Cyberpunk 2077 and Alan Wake 2), Nvidia is providing real value even if you don’t necessarily need the performance uplift. It accomplishes this with upgraded AI models: a shift to higher-performance Transformer models versus the Convolutional Neural Network (CNN) design of previous DLSS iterations, plus DLSS Ray Reconstruction, DLSS Super Resolution, DLAA, and, of course, DLSS Multi Frame Generation.

DLSS Multi Frame Generation has gotten a lot of attention since its announcement and for good reason. Rather than generate a single AI frame, like Nvidia’s current frame-gen technology, the system is able to generate up to three based on a single source frame. To ensure these frames are delivered smoothly and blend into the game naturally, the RTX 5000 series features hardware-based flip metering, intelligently pacing each frame. 
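
The arithmetic behind it is simple: if the GPU renders one frame and generates up to three more, the presented frame rate is roughly four times the render rate, and flip metering’s job is to hand those frames to the display at even intervals. A quick illustration with hypothetical numbers:

```python
# Illustrative Multi Frame Generation math (hypothetical numbers).
# One rendered frame plus up to three generated frames = 4x presented rate,
# and flip metering paces those frames at even intervals.

rendered_fps = 60           # frames the GPU actually renders per second
generated_per_rendered = 3  # MFG 4x mode: 3 AI frames per rendered frame

presented_fps = rendered_fps * (1 + generated_per_rendered)
frame_interval_ms = 1000 / presented_fps

print(f"Rendered:  {rendered_fps} FPS")
print(f"Presented: {presented_fps} FPS")                                    # 240 FPS
print(f"Flip metering target: one frame every {frame_interval_ms:.2f} ms")  # ~4.17 ms
```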

The question we, and many of our readers, had is how DLSS 4 can deliver three out of every four frames with AI alone, and whether doing so would hurt the experience with ghosting or added latency. The answer comes from the new Transformer-based AI models, a move away from hardware-based optical flow, and the incorporation of AI across DLSS Ray Reconstruction, DLSS Super Resolution, DLAA, and Reflex 2.0.

The importance of the Transformer model really can’t be overstated. It’s a major shift in how DLSS interprets each scene and allows the model to assess the relative importance of what it sees within it. The Transformer model can also accept twice the input parameters and offers four times the compute of the prior CNN model. It’s better, much more capable, and exceptionally fast, while also delivering clearer results with less ghosting and fewer visual artifacts.


This system works alongside DLSS Ray Reconstruction and DLSS Super Resolution, and together they make a big difference in image quality. Ray Reconstruction is especially noticeable on fine lines, like telephone wires or the chain-link fence pictured above. Super Resolution, through the new Transformer model, upscales content much more cleanly than DLSS 3.5.

Nvidia let me play around in Cyberpunk 2077 and freely explore to check out the image quality improvements DLSS 4 brings. A second PC running DLSS 3.5 was set up right next to me, and an Nvidia employee mirrored my movements so I always had a direct comparison, no matter where I went. DLSS 4 not only runs much faster, but the picture quality improvement is like a full resolution step upward. It is inarguably clearer and better looking, and you don’t have to pixel peep to notice it.

To address concerns about latency, DLSS 4 will utilize the second generation of Nvidia’s Reflex technology. It works much like the first, but adds frame warping and inpainting (generative AI) to blend frames together during movement, keeping latency numbers roughly consistent with Reflex 1 paired with single-frame generation. The tech, as presented, shows that even with full 1:3 Multi Frame Generation enabled, input latency is the same if not lower. We tested this ourselves in The Finals. An ideal case, to be sure, but it worked.

Two other important points came out of this section of the conference. First, ideally, you want to align your MFG frame rate with the refresh rate of your monitor. Massively overshooting what your monitor can display can lead to tearing and screen artifacts without providing any perceptible benefit in games.
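
As a rough rule of thumb, you could think of it like picking the highest frame-generation mode whose output stays at or below your monitor’s refresh rate. A hypothetical sketch:

```python
def pick_mfg_mode(rendered_fps, refresh_hz, modes=(4, 3, 2, 1)):
    """Illustrative only: choose the highest frame-generation multiplier
    whose presented frame rate doesn't overshoot the monitor's refresh rate."""
    for multiplier in modes:
        if rendered_fps * multiplier <= refresh_hz:
            return multiplier
    return 1  # even the base frame rate overshoots; nothing to gain here

# e.g. 60 FPS rendered on a 240 Hz monitor -> 4x; on a 144 Hz monitor -> 2x
print(pick_mfg_mode(60, 240))  # 4
print(pick_mfg_mode(60, 144))  # 2
```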

Second, and what I think is particularly cool, the Nvidia App will be adding DLSS overrides to its control panel. If you’re playing a game that supports DLSS Frame Generation but hasn’t updated to support MFG yet, you can manually enable this feature with a simple toggle. It also lets you use the latest Transformer model with DLSS Super Resolution and DLSS Ray Reconstruction for the best possible visuals without waiting on the developers to ship an update. Finally, if you’re playing a game that doesn’t provide performance modes for DLSS, the Nvidia App will allow you to manually turn on DLAA or Ultra Performance mode.

Wrapping Up

Ultimately, everything we saw will need to be tested and borne out over time. But as someone who was skeptical of how AI would be applied to graphics cards, I see a lot of promise in how it is being implemented. If it pans out as Nvidia shared with us and the many other journalists in attendance, AI truly could drive the future of graphics for PC gaming, enhancing image quality and performance at the same time.

Stay tuned as we continue to cover this generation and for our final review.



Christopher Coke

Chris cut his teeth on MMOs in the late 90s with text-based MUDs. He’s written about video games for many different sites but has made MMORPG his home since 2013. Today, he acts as Hardware and Technology Editor, lead tech reviewer, and continues to love and write about games every chance he gets. Follow him on Twitter: @GameByNight