News Flash for Compressionists: Garbage in Means Garbage Out

First, video compression is a garbage in/garbage out medium, so output quality improves with input quality. Second, codecs such as H.264 are lossy, which means the lower the data rate, the lower the quality. Finally, intermediate formats such as ProRes preserve much higher quality than codecs such as H.264, albeit at much higher data rates.

Not so shocking, eh? In truth, these realities are givens even to most compression newbies. To a degree, however, the exigencies of producing H.264-encoded intermediate files for uploading to user-generated content sites such as YouTube or to cloud encoding services have lulled us into thinking that there is no meaningful quality difference between files produced from ProRes source and files produced from H.264-encoded uploads. A recent experience disabused me of this notion and cost me many hours of rework as a result.

Briefly, I was testing “Encoder A,” which at the time couldn’t accept ProRes input files, the format in which many of my test clips are stored. No problem, I thought; I’d just encode my ProRes files to high-bitrate H.264 and encode from those. These were 720p files, and I rendered them at 30Mbps because Adobe Media Encoder topped out at that value.
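For readers who want to reproduce this kind of mezzanine workflow, here's a minimal sketch of how I'd build the transcode command. The function, filenames, and flag choices are illustrative, not the exact settings I used; it assumes an ffmpeg build with libx264 and simply constructs the argument list rather than running it:

```python
def h264_intermediate_cmd(src, dst, bitrate_mbps=30):
    """Build an ffmpeg argument list for a high-bitrate H.264 mezzanine.

    Assumes ffmpeg with libx264; maxrate/bufsize keep peaks near the
    target so the intermediate behaves like a constant-bitrate source.
    """
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-b:v", f"{bitrate_mbps}M",       # target video bitrate
        "-maxrate", f"{bitrate_mbps}M",   # cap bitrate peaks
        "-bufsize", f"{2 * bitrate_mbps}M",
        "-preset", "slow",                # favor quality over encode speed
        "-c:a", "copy",                   # pass audio through untouched
        dst,
    ]

# e.g. the 50Mbps mezzanine from the second round of tests:
cmd = h264_intermediate_cmd("source.mov", "mezzanine.mp4", bitrate_mbps=50)
```

Raising `bitrate_mbps` is the whole game here: the closer the intermediate's data rate gets to the source's, the less generational loss it introduces.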

I then rendered files from the H.264 source in Encoder A and compared them to files produced by Encoders B, C, and D from the ProRes source. “Hmmm,” I thought, “I know high-bitrate H.264 is good, but is comparing the output of Encoders B, C, and D from ProRes source to Encoder A’s output from H.264 source really an apples-to-apples comparison?” That was easy enough to check, of course; I just encoded the H.264 source in Encoders B, C, and D and compared those files.

Once done, the next question was whether there was a qualitative difference between the files produced from H.264 and ProRes source by Encoders B, C, and D. The answer was yes. The difference was so significant that I re-encoded my ProRes test files into H.264 format at 50Mbps, this time using Sorenson Squeeze. Now, for Encoders B, C, and D, I had three output files: one from a ProRes source, one from a 50Mbps H.264 source, and the third from a 30Mbps H.264 source.

Actually, I had six output files, because my tests involved two scenarios: one at 720p at 800Kbps and the other at 640×360 at 240Kbps. Both are extreme tests, with the 640×360 test simulating the most aggressive clip in a group of files encoded for adaptive streaming. The quality difference was more noticeable in the 640×360 output file than in the 720p file, mostly in areas of fine detail, which the file produced from the ProRes source preserved better than the files produced from either H.264 source.
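Eyeballing fine detail is subjective, so comparisons like these are usually backed by a full-reference metric such as PSNR computed between the source and each output. As a hedged sketch of the arithmetic only, here's PSNR over flat pixel arrays; a real workflow would decode actual frames with a tool like ffmpeg first, which this example deliberately skips:

```python
import math

def psnr(ref_pixels, test_pixels, max_val=255):
    """Peak signal-to-noise ratio between two same-sized pixel sequences.

    Higher is better; identical frames return infinity. max_val is the
    peak sample value (255 for 8-bit video).
    """
    if len(ref_pixels) != len(test_pixels):
        raise ValueError("frames must have the same number of samples")
    # Mean squared error over corresponding samples
    mse = sum((r - t) ** 2 for r, t in zip(ref_pixels, test_pixels)) / len(ref_pixels)
    if mse == 0:
        return float("inf")
    return 10 * math.log10((max_val ** 2) / mse)
```

Running the same metric against the outputs from the ProRes source and each H.264 source makes "showed more detail" a number you can put in a table rather than a judgment call.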

To supplement these tests, I converted a 1080p test file to 50Mbps and 20Mbps H.264 iterations and uploaded them to YouTube. Then I downloaded and compared the quality of the files produced by YouTube from these sources. In the 1080p files that YouTube encoded at 5.8Mbps, there was no noticeable difference. In the 640×360 files YouTube produced at 636Kbps, the file produced from the 50Mbps source showed more detail.

To be fair, the difference would be unnoticeable to the casual user. Within the context of my consulting project, however, the developers of Encoder A weren’t casual, so I had to redo all my tests with the higher-quality source, and yet again when ProRes compatibility was enabled. In a more general sense, it made me rethink the value proposition of using 10Mbps-20Mbps H.264 as an intermediate format to save uploading time. As hard as we work to preserve quality throughout the production pipeline, this no longer looked like the most appropriate trade-off.

About Jan Ozer

I help companies train new technical hires in streaming media-related positions; I also help companies optimize their codec selections and encoding stacks and evaluate new encoders and codecs. I am a contributing editor to Streaming Media Magazine, writing about codecs and encoding tools. I have written multiple authoritative books on video encoding, including Video Encoding by the Numbers: Eliminate the Guesswork from your Streaming Video (https://amzn.to/3kV6R1j) and Learn to Produce Video with FFmpeg: In Thirty Minutes or Less (https://amzn.to/3ZJih7e). I have multiple courses relating to streaming media production, all available at https://bit.ly/slc_courses. I currently work at www.netint.com as a Senior Director of Marketing.
