FFmpeg to the Rescue: Decoding Files into RAM for Decode Testing

This article details how to use FFmpeg to benchmark decoder speed. Much of the information was derived from the FFmpeg Wiki. The article has been updated to show results when the source file is played from a RAM disk, and to add definitions of the benchmark terms, as reflected below.

I just completed a consulting project that involved analyzing decoder performance of a new codec as compared to x264 and x265. One way to do this is to decode a file to YUV and time the operation. That might look like this:

ffmpeg -i x264.mp4 -pix_fmt yuv420p -vsync 0 x264.y4m

That command yielded a decode speed of 582 fps; pretty snappy. But then you start to wonder: how much of that decode was gated by the fact that the system stored a huge YUV file to disk? Quite a bit, it turns out.

To avoid this issue and test decode only, you can use the null muxer, which decodes the file but discards the output rather than writing it to storage:

ffmpeg -i x264.mp4 -f null -

We’ve almost doubled the decode speed, to 1036 fps, by removing the storage step. Since the player doesn’t store the video during normal viewing, this also comes closer to testing real-world playback performance. If we add the -benchmark flag, we get additional CPU and memory usage data.

ffmpeg -i x264.mp4 -benchmark -f null - 

Definitions of the terms in the benchmark output (utime, stime, rtime, and maxrss) did not come easy.

According to this post, which Andrei Ka directed me to, utime equals “user time used by the current process” (that is, CPU time the process spent executing in user space), to which Andrei added “non-cpu related time (userspace).” I’m sure he’s correct, but it made no sense to me because the entire process took under 2 seconds, start to finish. So, I’m just going to ignore utime.

The same post defines stime as “system time used by the current process,” which Andrei described as the “time the os (kernel) spent just managing your process (system calls, run strace on your ffmpeg cmdline, you’ll see the gory mess).” This makes more sense but still isn’t that useful a metric, since it seems independent of actual playback speed. Nonetheless, thanks for pointing me to the post and for your comments, Andrei!

The post doesn’t define rtime, though in a perfect world this would be runtime, since the 1.75 result appears to map to actual wall-clock runtime (a 30-second file playing at 17.3x should play in 1.73 seconds, which is really close to 1.75). Of course, since the primary objective of the exercise is frames per second, perhaps all of these performance metrics are secondary in importance.

Maxrss is maximum resident set size, the peak amount of memory the process used, which definitely is a useful data point.
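If you run -benchmark across many files, pulling these numbers out of FFmpeg’s stderr by hand gets tedious. Here’s a minimal Python sketch that parses the bench: lines, assuming the typical “bench: utime=…s stime=…s rtime=…s” and “bench: maxrss=…kB” layout (the exact format can vary by FFmpeg version, so treat this as a starting point, not a guaranteed parser):

```python
import re

# Sample stderr from an ffmpeg -benchmark run (values are illustrative);
# the exact field layout varies by FFmpeg version.
sample = """\
bench: utime=6.074s stime=0.144s rtime=1.750s
bench: maxrss=288148kB
"""

def parse_benchmark(stderr_text):
    """Pull utime/stime/rtime (seconds) and maxrss (kB) from -benchmark output."""
    stats = {}
    for key, val, unit in re.findall(r"(\w+)=([\d.]+)(s|kB)", stderr_text):
        stats[key] = float(val)
    return stats

stats = parse_benchmark(sample)
print(stats)  # {'utime': 6.074, 'stime': 0.144, 'rtime': 1.75, 'maxrss': 288148.0}
```

In practice you’d feed this the stderr captured from a subprocess call to ffmpeg rather than a hard-coded string.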

Just for fun, I ran this command to test HEVC decode (all tests on an HP ZBook notebook).

ffmpeg -i HEVC.mp4 -benchmark -f null - 

Doing the math, playing a 30-second file at 9.71x should take 3.089 seconds, which is very close to the rtime of 3.113. So, I’ll call it runtime.
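The arithmetic behind both runtime checks is simple enough to express directly (a quick sketch using the numbers reported above):

```python
# If rtime is wall-clock runtime, then clip duration divided by the
# reported speed multiple should land close to the rtime value.
def expected_runtime(duration_s, speed_multiple):
    return duration_s / speed_multiple

# H.264 run: 30-second clip at 17.3x, vs reported rtime of 1.75s
print(round(expected_runtime(30, 17.3), 2))  # 1.73
# HEVC run: 30-second clip at 9.71x, vs reported rtime of 3.113s
print(round(expected_runtime(30, 9.71), 2))  # 3.09
```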

Playing the File from a RAM Disk

In a LinkedIn comment, Miguel Ángel Chacón Espín suggested playing the file from a RAM disk to eliminate the performance impact of retrieving it from storage. At his suggestion, I Googled “create ramdisk Windows 10” and found an article entitled How to Set Up and Use a RAM Disk in Windows, which recommended a program called ImDisk, which I downloaded and installed.

Then I copied the video files to the RAM disk and used the same commands shown above to play the H.264 and HEVC files. Here are the H.264 results: a boost from 1036 fps to 1054 fps. Not a lot, but the HP ZBook has an SSD, and I’m sure the delta would be much more significant with a traditional hard disk.

Here are the HEVC results: a boost from 582 fps to 612 fps.

From now on, my best practice for playback testing includes the RAM disk. It’s simple enough to create and simulates the actual playback experience more accurately than playing from disk. Thanks, Miguel.

Again, here’s the FFmpeg Wiki that covers much of this material.


These FFmpeg to the Rescue articles will appear in future editions of my book Learn to Produce Video with FFmpeg: In Thirty Minutes or Less, now in its 2018 edition. The book helps beginning and intermediate FFmpeg users produce high-quality, bandwidth-efficient files and encoding ladders as efficiently as possible. For those who prefer learning via video, check out this course.

About Jan Ozer

I help companies train new technical hires in streaming media-related positions; I also help companies optimize their codec selections and encoding stacks and evaluate new encoders and codecs. I am a contributing editor to Streaming Media Magazine, writing about codecs and encoding tools. I have written multiple authoritative books on video encoding, including Video Encoding by the Numbers: Eliminate the Guesswork from Your Streaming Video (https://amzn.to/3kV6R1j) and Learn to Produce Video with FFmpeg: In Thirty Minutes or Less (https://amzn.to/3ZJih7e). I have multiple courses relating to streaming media production, all available at https://bit.ly/slc_courses. I currently work at NETINT (www.netint.com) as a Senior Director of Marketing.
