
Five Streaming Production Mistakes to Avoid

Every spring I speak at Streaming Media East in New York City. Usually, I do a 3-hour, tech-heavy seminar on streaming production or the current state of the codec market, plus a 1-hour discussion on an equally weighty topic. This year I decided to mix it up with a fun, Jerry Springer-like approach: the top five mistakes made by streaming producers, with examples. Though these topics are relevant to anyone who’s ever posted a streaming video, I’m guessing that the crossover between EventDV readers and Streaming Media East attendees is pretty slight, so I’ll share my ideas here.

The first and most common mistake made when producing for streaming is shooting in an interlaced mode. All streaming video is progressive, and if you shoot interlaced, you start with two fields that may not combine into one clean frame (even if you check the deinterlace box before rendering), especially when motion or sharp diagonal lines are involved. This can result in simple jaggies or bizarre artifacts, such as a table edge that looks like twisted wrought iron in a video produced by one of the largest retail chains in the world.

The second mistake: if you do shoot interlaced, remember to deinterlace the video. Streaming producers forget this all the time and end up with horizontal slices, almost like Venetian blinds, in higher-motion sequences. It sounds obvious, but one trailer I saw for the movie Little Miss Sunshine exhibited unmistakable interlacing artifacts. How film source came to be interlaced is beyond me, but there they were.
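To see what a simple deinterlacer actually does, here's a minimal Python sketch. This is an illustration of the idea only, not any particular encoder's algorithm; real tools use motion-adaptive filters. The function name and the toy frame are mine:

```python
# Single-field ("bob"-style) deinterlace sketch. A frame is a list of
# rows of luma values; even rows are one field, odd rows the other.
# We keep the top field and rebuild odd rows by averaging neighbors.

def deinterlace_keep_top_field(frame):
    out = [row[:] for row in frame]
    height = len(frame)
    for y in range(1, height, 2):  # odd rows came from the other field
        above = frame[y - 1]
        below = frame[y + 1] if y + 1 < height else frame[y - 1]
        out[y] = [(a + b) // 2 for a, b in zip(above, below)]
    return out

# Tiny 4x4 "frame" with combing: bright field A, dark field B.
frame = [
    [200, 200, 200, 200],
    [10, 10, 10, 10],
    [200, 200, 200, 200],
    [10, 10, 10, 10],
]
print(deinterlace_keep_top_field(frame))
```

After the pass, the dark rows from the second field are gone, which is exactly why the Venetian-blind combing disappears (along with half the temporal information).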

The third mistake relates to the aspect ratio of the video, which one of the most widely watched news networks in the world can’t seem to get right. A simple check: compare a still photo of the subject taken with a digital camera against a frame grab from your video. If the proportions don’t match, Houston (or, in this case, Atlanta), you have a problem. I’ve worked with a variety of input formats with varying aspect ratios, and one three-part approach always seems to produce the desired result.

  • Use the right preset in your video editor. If your video looks funky in the editor, exporting for streaming isn’t going to improve the situation.
  • Always export using square pixels (or 1:1), even if your video source is 16:9. For widescreen video, export using a 16:9 resolution, such as 640×360 or 480×270. But make sure that the pixel aspect ratio is square.
  • When outputting your video and choosing an aspect ratio that differs from the source video, most encoding tools give you the option to maintain the display aspect ratio, crop or letterbox, or (gasp) distort the aspect ratio. But in most instances, distorting the aspect ratio is exactly what you want to do. Here’s an example. Say you shot in 4:3 DV, which has a pixel resolution of 720×480 and a pixel aspect ratio of 0.9. That means the horizontal pixels get shrunk by about 10% when displayed on a TV. Multiply 720 by 0.9 and you get 648, which is close enough to round to 640. Now, suppose you want to output a 640×480 streaming file from the 4:3 DV source. You don’t want to crop the right and left edges of the video; that will make your subjects look distorted (fat rather than skinny). You don’t want to add black bars at the top and bottom or sides. Rather, you want to scale the 720×480 video into the 640×480 box, in essence “distorting” the video. Sounds awful, but it’s the right answer.
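The DV arithmetic above can be sketched in a few lines of Python. The 720×480 resolution and the 0.9 pixel aspect ratio come from the text; the function name and even-rounding convention are mine:

```python
# Compute the square-pixel display width for non-square source pixels.
# DV example: 720 storage pixels at a 0.9 pixel aspect ratio.

def display_width(storage_width, pixel_aspect_ratio):
    """Storage width scaled by PAR, rounded to the nearest even value
    (codecs generally want even dimensions)."""
    exact = storage_width * pixel_aspect_ratio
    return round(exact / 2) * 2

print(display_width(720, 0.9))  # 648, conventionally rounded down to 640
```

The same function tells you the square-pixel width for any storage width and PAR your camera or editor reports, which is handy when you set up your export preset.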

The next common mistake is to oversize or undersize your video. I’ve been tracking the resolutions and data rates used by high-profile broadcast and corporate websites for the last 2 years. In November 2008, broadcast sites were distributing 468×324 video at an average combined (video + audio) bitrate of more than 500Kbps. When ESPN recently revamped its site, it used 576×324 video at a combined data rate of 772Kbps. Heck, even YouTube recently launched a high-resolution mode that distributes video at 480×360 at a combined data rate of 730Kbps.

These stats are significant in two very distinct ways. First, they suggest that the average home viewer of streaming media can successfully retrieve and play video at these specs. Unless bandwidth costs are a significant concern, there’s no reason to distribute a smaller stream. Second, folks who watch your videos consider the video they see on entertainment sites the norm. If you’re much smaller, say, in the 320×240 range that used to be considered relatively large, your video looks substandard. That said, 7 of the 16 corporate sites that I analyzed still distribute at 320×240 or smaller.
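A handy way to compare stream configurations like these is bits per pixel: the combined bitrate divided by the pixels delivered per second. The resolutions and data rates below are from the figures above; the 30 fps frame rate is my assumption:

```python
# Bits per pixel: a rough quality-budget comparison across stream configs.
# bitrate_kbps is the combined audio+video rate in kilobits per second.

def bits_per_pixel(width, height, bitrate_kbps, fps=30.0):
    return (bitrate_kbps * 1000) / (width * height * fps)

configs = [
    ("Broadcast average, Nov 2008", 468, 324, 500),
    ("ESPN", 576, 324, 772),
    ("YouTube high-res", 480, 360, 730),
]
for name, w, h, kbps in configs:
    print(f"{name}: {bits_per_pixel(w, h, kbps):.3f} bits/pixel")
```

If your own stream's bits-per-pixel figure is far below these, you're likely starving the encoder; far above, and you may be wasting bandwidth you could spend on a larger frame.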

The last error relates to audio. Most producers think that stereo is “better” than mono, and most corporate sites distribute in stereo. However, if the predominant component of the audio is speech, it was almost certainly captured as mono. Producing this in stereo simply duplicates the mono signal into two channels, doubling the amount of audio data your encoding tool must squeeze into the same target bitrate.

Even if you add stereo music as background to your audio, it’s unlikely that the auditory cues (say, piano on the left, guitar on the right) exist or are perceptible to the listener. Producing in mono will either improve the quality of your audio or let you reduce the audio data rate and allocate more bandwidth to video, improving the overall quality of the stream.
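Here's the bandwidth arithmetic behind that trade-off, sketched in Python. The 500Kbps total and the 64/32Kbps audio rates are illustrative assumptions on my part, not figures from the article:

```python
# Dropping stereo to mono at the same perceived speech quality frees
# bits for video. All bitrates in Kbps; the numbers are illustrative.

def reallocate(total_kbps, stereo_audio_kbps, mono_audio_kbps):
    """Return (video_kbps_with_stereo, video_kbps_with_mono, gain)."""
    video_stereo = total_kbps - stereo_audio_kbps
    video_mono = total_kbps - mono_audio_kbps
    return video_stereo, video_mono, video_mono - video_stereo

# A 500Kbps stream with 64Kbps stereo audio vs 32Kbps mono audio.
print(reallocate(500, 64, 32))  # (436, 468, 32)
```

In this hypothetical, going mono hands the video encoder an extra 32Kbps, a meaningful bump at these overall rates.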

About Jan Ozer

I help companies train new technical hires in streaming media-related positions; I also help companies optimize their codec selections and encoding stacks and evaluate new encoders and codecs. I am a contributing editor to Streaming Media Magazine, writing about codecs and encoding tools. I have written multiple authoritative books on video encoding, including Video Encoding by the Numbers: Eliminate the Guesswork from your Streaming Video (https://amzn.to/3kV6R1j) and Learn to Produce Video with FFmpeg: In Thirty Minutes or Less (https://amzn.to/3ZJih7e). I have multiple courses relating to streaming media production, all available at https://bit.ly/slc_courses. I currently work at www.netint.com as a Senior Director in Marketing.
