While many AI-based codecs are still making their first appearance in white papers, often with tortured playback requirements and no working decoder, the Deep Render codec is already encoding in FFmpeg, playing in VLC, and running on the billions of NPU-enabled devices now in the market.
Let’s take a step back. I’ve been following the development of the Deep Render codec for some time, through interviews and product demonstrations. Along the way, the company has made aggressive claims about performance and quality, including encoding 1080p30 video at 22 fps and decoding it at 69 fps on an Apple M4 Mac Mini, and a 45 percent BD-Rate improvement over SVT-AV1. Recently, I had the chance to independently validate those claims.
Deep Render provided a complete evaluation environment, including a version of FFmpeg with the integrated Deep Render encoder and decoder, a VLC build supporting playback, and encoded outputs from their codec alongside matching files from SVT-AV1, x265, and VVenC. My role was to test performance, confirm the validity of the methodology, and verify output quality across all codecs. The evaluation focused on the real-time, low-latency use case for every codec tested.
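The report documents the exact command strings used; purely as an illustration of the kind of metric check involved, here is a minimal Python sketch that scores an encode against its source using FFmpeg's libvmaf filter. The filenames are placeholders, the clips are assumed to already match in resolution and frame rate, and the JSON log layout can vary slightly across libvmaf versions.

```python
import json
import subprocess

def vmaf_mean(distorted: str, reference: str, log_path: str = "vmaf.json") -> float:
    """Score an encoded clip against its source with FFmpeg's libvmaf filter."""
    subprocess.run(
        ["ffmpeg", "-y",
         "-i", distorted,    # encoded clip (first libvmaf input)
         "-i", reference,    # pristine source (second libvmaf input)
         "-lavfi", f"libvmaf=log_fmt=json:log_path={log_path}",
         "-f", "null", "-"],
        check=True, capture_output=True)
    with open(log_path) as f:
        # Pooled mean VMAF across all frames.
        return json.load(f)["pooled_metrics"]["vmaf"]["mean"]

# Example with placeholder filenames:
# print(vmaf_mean("deep_render_3mbps.mp4", "source.y4m"))
```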
The results are documented in a downloadable technical report, which walks through the testing process, command strings, metric validation, and subjective comparisons.
Download the report here: Deep-Render-May-2025-Report-1.pdf
Here’s the TL;DR:
Deep Render delivers in low-latency, real-time configurations. Subjective testing performed by Vittorio Baroncini of VABTech UK, combined with my own inspection and file verification, showed Deep Render achieving the claimed 45 percent BD-Rate savings over SVT-AV1. My VMAF-based testing showed a consistent advantage over both x265 and SVT-AV1 in this same use case, with Deep Render trailing VVenC by about 14 percent.
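For context, BD-Rate summarizes the average bitrate difference at equal quality across a range of rate points; a negative value means the test codec needs that much less bitrate than the reference. The sketch below is not the method from the report, just a common Bjøntegaard-delta calculation using monotone cubic (PCHIP) interpolation of log-bitrate against the quality score; the rate and VMAF points shown are placeholders.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def bd_rate(rates_ref, scores_ref, rates_test, scores_test):
    """Bjontegaard delta rate: average percent bitrate change at equal quality."""
    # Sort points by quality so interpolation runs over an increasing axis.
    ref_order = np.argsort(scores_ref)
    test_order = np.argsort(scores_test)
    # Interpolate log-bitrate as a function of quality score (monotone cubic).
    p_ref = PchipInterpolator(np.asarray(scores_ref)[ref_order],
                              np.log(np.asarray(rates_ref, float))[ref_order])
    p_test = PchipInterpolator(np.asarray(scores_test)[test_order],
                               np.log(np.asarray(rates_test, float))[test_order])
    # Integrate only over the overlapping quality interval.
    lo = max(min(scores_ref), min(scores_test))
    hi = min(max(scores_ref), max(scores_test))
    avg_diff = (p_test.integrate(lo, hi) - p_ref.integrate(lo, hi)) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0

# Placeholder rate (kbps) / VMAF points for a reference and a test codec.
ref = ([1000, 2000, 4000, 8000], [70.0, 80.0, 88.0, 94.0])
test = ([1000, 2000, 4000, 8000], [78.0, 86.0, 92.0, 96.0])
print(f"BD-Rate: {bd_rate(*ref, *test):.1f}%")
```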
Just as notable is how deeply integrated the codec already is. Deep Render works directly in FFmpeg and VLC and runs efficiently on a $600 Mac Mini with the M4 chip and its integrated NPU. For any codec, playability equals deployability, and most AI codecs require GPUs that won’t be available on mobile devices or Smart TVs for a decade. In contrast, the Deep Render codec requires only an NPU, like those that have been shipping in iPhones since 2017.
As tested, Deep Render is not a general-purpose solution; the codec is currently tuned only for real-time, low-latency use cases. It still needs to evolve for VOD workflows, scalable live encoding, and broader deployment. But based on these results, the company has made significant progress and, more importantly, delivered on its early claims.

After reading the report, you might want to check out the comparison app mentioned in the write-up, which lets you view all clips from the subjective test side by side. This includes Deep Render and SVT-AV1 at matched bitrates. It is a helpful reference, especially if you want to verify how the encodes were configured and how the quality differences actually look.
I will publish a short video walkthrough showing the encode and decode process and the app in action later this week.
Again, you can download the report here: Deep-Render-May-2025-Report-1.pdf