
The best adaptive streaming technology you probably don’t know about

 

I taught a class on H.264 production at Streaming Media West today. Two related trends I discussed were HTTP delivery of streaming media and adaptive bitrate streaming, in which multiple files are encoded for distribution to remote viewers, with the stream varied by playback device and viewer bandwidth and adapted to changing conditions like effective throughput and CPU utilization. I think that over the next 12 to 18 months, HTTP-based adaptive bitrate streaming will become the norm rather than the exception.

Why? Because it provides a higher-quality, more consistent viewing experience for all viewers, irrespective of connection or device type, from cell phone to set-top box to a dual-processor, quad-core computer on a high-bandwidth FiOS connection. There's lots of competition in the marketplace, however, with Microsoft, Move Networks, Akamai, and Apple currently in and Adobe coming.

All these technologies share a couple of related characteristics that are huge implementation negatives. First, you have to decide which streams to create for each target bitrate; Major League Baseball, for example, uses up to eleven streams, while MTV uses up to seven. This means lots of rack-mounted encoders for live events and lots of file administration issues for the content producer.

Second, because these files are cut up into three- to ten-second chunks, you have many thousands of files to track for each hour of video, and hundreds of thousands across a library of any size. This is a huge nightmare for the content delivery networks that have to deliver these files, and very inefficient from a caching perspective, which minimizes the benefit of using HTTP in the first place.
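To put rough numbers on the chunk problem, here's a quick back-of-the-envelope sketch in Python. The stream count and chunk duration are illustrative assumptions on my part, not figures from any particular deployment.

# Rough estimate of how many chunk files adaptive HTTP streaming generates.
# Stream count and chunk duration below are illustrative assumptions.

def chunk_count(hours_of_video, num_streams, chunk_seconds):
    """Number of chunk files produced for a library of this size."""
    chunks_per_hour_per_stream = 3600 / chunk_seconds
    return int(hours_of_video * num_streams * chunks_per_hour_per_stream)

print(chunk_count(1, 11, 4))      # 9,900 files for a single hour at 11 streams
print(chunk_count(100, 11, 4))    # 990,000 files for a modest 100-hour library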

Why? Because caching devices cache files that are highly popular as measured by customer demand. If you have hundreds of thousands of files floating around, each chunk will be far less popular than it would be with a technology that represents the same data in hundreds of files. And that's exactly what H.264 Scalable Video Coding (SVC) does.
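Here's a simple illustration of the caching math. The request totals are made-up numbers, chosen only to show how spreading the same viewer demand over more objects dilutes each object's popularity.

# Same total demand, spread over different numbers of files.
# All numbers are illustrative assumptions.
requests_per_hour = 100000

for num_files in (200, 200000):
    per_file = requests_per_hour / num_files
    print(f"{num_files} files -> {per_file:.1f} requests per file")

# 200 files    -> 500.0 requests per file (likely to stay in cache)
# 200000 files -> 0.5 requests per file (likely evicted before it's reused)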

Briefly, H.264 SVC is an extension of the H.264 standard. Like Microsoft's multiple bitrate (MBR) technology, SVC produces multiple bitrate streams, but it stores them within a single file. Unlike MBR technology, SVC carries a data rate premium of only about 15-20% above the highest-quality stream. In other words, SVC can serve multiple bitrates from a single file that's only 15-20% larger than the highest-quality stream, which is obviously much more efficient than eleven discrete files at multiple data rates.
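To see how that math works out, here's a quick sketch comparing total stored bitrate. The encoding ladder and the 20% SVC premium are illustrative assumptions, not published figures from any vendor.

# Hypothetical comparison of total stored bitrate: discrete MBR renditions
# versus a single SVC file. The ladder below is an illustrative assumption,
# not an actual MLB or MTV encoding ladder.

mbr_ladder_kbps = [200, 350, 500, 700, 900, 1200, 1500, 1800, 2200, 2700, 3200]

mbr_total = sum(mbr_ladder_kbps)            # every rendition stored separately
svc_total = max(mbr_ladder_kbps) * 1.20     # one file, ~20% over the top rate

print(f"MBR total: {mbr_total} kbps")       # 15250 kbps
print(f"SVC total: {svc_total:.0f} kbps")   # 3840 kbps
print(f"SVC is {mbr_total / svc_total:.1f}x smaller")   # about 4.0x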

Since this single file represents all data rates, CDNs have to administer the delivery of just one file through their HTTP footprint, which is easier and cheaper than eleven or seven. From a caching perspective, there are far fewer discrete chunks, each of which also represents all file bandwidths, so they're inherently more cacheable, which should make streaming less expensive.

Since SVC produces a single video file, a video producer might need only one encoder, rather than the three or four required to produce multiple streams in a live environment. Finally, you can produce an SVC file with thirty or forty different quality levels, which means you can more finely tune the quality delivered to the viewer under changing conditions, improving their experience as well.
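For the curious, here's a minimal sketch of how a player might choose among that many layers. The layer spacing, safety margin, and selection rule are my own illustrative assumptions, not part of the SVC specification; real players also weigh buffer depth, CPU load, and screen size.

# Minimal sketch of client-side layer selection with a fine-grained SVC file.
# Forty hypothetical quality layers from 100 kbps to 4,000 kbps, 100 kbps apart.
layers_kbps = list(range(100, 4100, 100))

def pick_layer(measured_throughput_kbps, safety_margin=0.8):
    """Return the highest layer bitrate that fits within measured throughput."""
    budget = measured_throughput_kbps * safety_margin
    candidates = [rate for rate in layers_kbps if rate <= budget]
    return candidates[-1] if candidates else layers_kbps[0]

print(pick_layer(2500))   # 2000 kbps layer
print(pick_layer(450))    # 300 kbps layer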

The one disadvantage of SVC is that it's not here yet. But it's coming, perhaps sooner than you might think. So if you're considering adaptive bitrate streaming, it needs to be on your radar screen. Here are a few background articles to help you get started.

H.264 Scalable Video Coding – what you need to know

Meet MainConcept, the Codec People, and H.264 SVC

Streaming Gets Smarter: Evaluating the Adaptive Streaming Technologies

 

Thoughts? Comments? Please let me know.

Thanks.


