Five Signs Your Encoding Ladder May Be Obsolete

March 3rd, 2016 | Articles

Your encoding ladder is the set of encoding parameters that you use to create the various files that you deliver adaptively to your web viewers. These encoding parameters can live in your on-premises encoder, in your cloud encoder, or in your online video platform (OVP).

When optimally configured, your encoding ladder lets you efficiently deliver great-quality video to viewers on all devices and over all connection speeds. When encoding ladders are suboptimal, data rates can be too high, wasting bandwidth and lowering the quality of experience (QoE) of those watching on lower-bitrate connections. If rates are too low, file quality can be subpar, reducing the QoE of all viewers. Other improper configurations can cause compatibility, playability, or other quality issues.

How optimal is your encoding ladder? Well, here are five signs that you might have a problem.

1.  You use the same encoding ladder for all content types (and you distribute multiple disparate types of videos).  You know that different types of videos encode more or less efficiently depending upon content. Talking heads encode efficiently, soccer games much less so.

Many organizations distribute distinctly different types of content. As examples, broadcasters stream talk shows, sports, and sitcoms, movie distributors stream animated movies and action thrillers, and enterprises stream screencam and PowerPoint videos. If these organizations use the same encoding ladder for all classes of content, it almost certainly means that the ladder is ill suited for one or more types. 
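One way to think about category-specific encoding is as a lookup from content type to its own ladder. The sketch below is purely illustrative; the categories, resolutions, and bitrates are assumptions for demonstration, not recommendations from this article.

```python
# Sketch: category-specific encoding ladders instead of one ladder for
# all content. All rungs below are illustrative assumptions.

LADDERS = {
    # Low-motion content compresses efficiently, so rungs can be leaner.
    "screencam": [(1280, 720, 800), (854, 480, 500), (640, 360, 300)],
    "talking_head": [(1920, 1080, 2500), (1280, 720, 1500), (640, 360, 600)],
    # High-motion sports needs far more bits per rung.
    "sports": [(1920, 1080, 6000), (1280, 720, 3500), (640, 360, 1200)],
}

def ladder_for(category: str):
    """Return the (width, height, kbps) rungs for a content category,
    falling back to a hypothetical default for unknown categories."""
    return LADDERS.get(category, LADDERS["talking_head"])
```

Even three or four categories like these capture much of the per-title benefit without Netflix-scale complexity: each new video only needs to be classified, not individually analyzed.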

[Figure: SQMScores.png]

SSIMplus scores show how different types of videos encode more or less efficiently. The screencam achieves 90%+ quality at below 1000 kbps, Big Buck Bunny reaches 90% at 1500 kbps, while Tears of Steel needs 3000 kbps to reach the same threshold.

Netflix recently transitioned to per-title optimization, using customized encoding parameters for each title. While this is impractical for most other organizations, category-specific encoding can deliver many of the same benefits with much less complexity.

2. You didn’t use objective quality metrics like Peak Signal-to-Noise Ratio (PSNR, which Netflix relies on), or other, more modern metrics like VQM or SSIMplus to formulate your ladder.  The essential purpose of your encoding ladder is to deliver excellent-quality video. While objective quality metrics aren’t perfect, they provide a very useful measure of the actual quality of your video files.

[Figure: ladder.png]

Objective quality benchmarks help identify when streams are redundant. For example, the 6500 kbps and 7500 kbps streams add significant bandwidth (and cost), but very little quality as measured by the SSIMWave Quality of Experience Monitor (SQM). This analysis makes it easy to eliminate the two higher quality streams from the encoding ladder.
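The pruning logic described above can be sketched as a simple rule: keep a rung only if it adds a meaningful quality gain over the last rung kept. The scores below are hypothetical SQM-style values on a 0-100 scale, and the one-point threshold is an illustrative assumption, not a published cutoff.

```python
# Sketch: prune ladder rungs whose objective-quality gain doesn't
# justify the extra bandwidth. Scores and threshold are hypothetical.

def prune_ladder(rungs, min_quality_gain=1.0):
    """rungs: list of (kbps, quality) sorted by ascending bitrate.
    Keep a rung only if it improves quality by at least
    min_quality_gain over the last rung kept."""
    kept = [rungs[0]]
    for kbps, quality in rungs[1:]:
        if quality - kept[-1][1] >= min_quality_gain:
            kept.append((kbps, quality))
    return kept

ladder = [(1500, 78.0), (3000, 86.0), (4500, 90.5),
          (6500, 91.0), (7500, 91.2)]  # hypothetical scores
print(prune_ladder(ladder))  # the 6500 and 7500 rungs are dropped
```

With these hypothetical numbers, the top two rungs each add under a point of quality for thousands of extra kilobits per second, so the function drops them, mirroring the analysis in the figure.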

3.  You haven’t changed your encoding ladder since 2012.  Much has changed since 2012, like higher throughput, larger resolution screens (particularly on mobile devices), and increasing codec efficiencies. If you haven’t changed your ladder since 2012, it might be worth a look.

[Figure: oldladder.png]

Old ladders don’t reflect new realities, and using TN2224 without modification can get quite pricey.

4. You’re using TN2224, or recommendations from your cloud provider or OVP, without modification. As Netflix clearly established, there is no one-size-fits-all encoding ladder. Did your OVP or cloud provider actually test your typical source footage when creating their presets? If not, isn’t this worth a second look?

5. You’ve never had your encoding ladder reviewed by a third-party consultant. I make mistakes, you make mistakes, we all make mistakes. If your encoding ladder hasn’t been reviewed by a third-party, these mistakes could be costing you quality, excess bandwidth costs, or both. 

What’s the Answer?

An encoding ladder audit from the Streaming Learning Center (SLC). Though project details vary from engagement to engagement, here’s how it generally works.

What You Provide:

–  Mezzanine files for multiple source clips.
–  Presets from your encoder.
–  Clips encoded with those presets.
–  If possible, access to your encoder. For previous projects, we’ve worked with encoding.com, Elemental Cloud, Elemental Server, Sorenson Squeeze, Telestream Vantage, and several other encoders.

What We Do:

–  We review the presets for encoding efficiency and device compatibility.
–  Compute the PSNR, VQM, and SQM values on all encoded files, and evaluate their relative quality.
–  Produce a series of test encodes designed to identify the relative efficiency of existing presets, and if necessary, identify alternative configurations.

What You Get:

–  A report detailing test results and other findings.
–  Specific recommendations for modifying your encoding ladder (or creating alternative encoding ladders for different content types).
–  Updated presets or specific recommendations for updating your presets.

Comments

#1 Kevin Moore said this on 03/06/2016 at 12:41 pm

I've been doing something similar to this since about a year before Netflix made their announcement regarding how they dramatically changed their encoding methods. More information can be found on my blog.

Every single file I encode is different and this includes episodes in a TV series and sequels to movies. Also, make sure to perform the same testing for each rendition on your encoding ladder.

The high level overview of my procedure is as follows.

1) Find a CRF value that works for you. I use CRF 21 and encode using the veryfast preset and the baseline profile.

2) Look at the bit per pixel density using MediaInfo of the file you just encoded.

3) Encode your output file to that bit per pixel density. I perform a two pass encode using the medium preset and the high444 profile due to how that profile performs chroma subsampling. When the first pass is finished, FFmpeg will announce that the average CRF value is around CRF 19.54 for most content. The two pass bitrate based file will be very close to the same size as the CRF file.

I used to use VQM for output comparison, however I was never satisfied with the time it took to calculate a better bitrate. Using CRF for bit per pixel estimation gets me close to the VQM bitrate in a fraction of the time.
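The bits-per-pixel workflow described in the comment above reduces to simple arithmetic: measure the bit density of a CRF-encoded probe, then invert it to get a target bitrate for a two-pass encode at another rendition's resolution. The probe numbers below are illustrative, not from a real encode.

```python
# Sketch of the bits-per-pixel workflow: derive a target bitrate for a
# two-pass encode from the density of a CRF probe. Illustrative numbers.

def bits_per_pixel(kbps, width, height, fps):
    """MediaInfo-style bits/(pixel*frame) for a given bitrate."""
    return kbps * 1000 / (width * height * fps)

def target_bitrate(bpp, width, height, fps):
    """Invert bits_per_pixel: kbps needed to hold bpp at a resolution."""
    return bpp * width * height * fps / 1000

# Suppose the CRF probe came out at 3000 kbps for 1920x1080 @ 30 fps:
bpp = bits_per_pixel(3000, 1920, 1080, 30)
# Apply the same bit density to the 1280x720 rung:
print(round(target_bitrate(bpp, 1280, 720, 30)))  # 1333
```

Holding bits per pixel constant across resolutions is a rough heuristic rather than a perceptual guarantee, which is why the commenter validates the result against a CRF encode rather than trusting the number blindly.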

#2 Jan said this on 03/06/2016 at 01:02 pm, in reply to #1 Kevin:

Thanks for your note. I actually saw your blog post when I was researching my Streaming Media article on Netflix and have included a link to it in an upcoming book.

Great work.

Jan