Five best practices for producing high-quality video.
Anyone who’s ever picked up a camcorder and tried to tap into their inner Spielberg knows that there’s a lot more to creating a high-quality, impactful movie than turning on the camcorder and pressing the red record button. The same is true of producing video for streaming.
Sure, you can attach your $600 consumer camcorder to a tripod, connect it to your computer, and produce streaming video, but will the quality enhance or detract from your content? Will it look and sound as good as the streaming video produced by your competitors, or other videos your viewers watch online? Fortunately, you don’t need Spielberg-like skills to produce high-quality streaming video, but to optimize the quality of your training or classroom video, it’s important to follow certain best practices.
1. CHOOSING A BACKGROUND Look at the image from the McKinsey and Company streamed video (figure 1). What do you notice about the background? Not much, beyond the fact that it’s soft and white and doesn’t contain a lot of detail. If you think about how compression technology (or a codec) works, you’ll realize that this background makes this video exceptionally easy to compress. Codecs must shrink the video so that viewers can retrieve and play it from your website, while retaining as much quality as possible.
Figure 1: Images shot for streaming should have minimal background detail, like this frame from a McKinsey and Company video of Silicon Valley executive and author Judy Estrin.
When a video has lots of detail in the background — like bookshelves, a finely patterned wallpaper, or blowing leaves — the codec can’t tell whether you care about the subject’s face or the extraneous stuff in the background. So it tries to preserve the quality of all the content in the frame, which inevitably degrades the quality of what you care most about.
If you’re building an in-house studio or set specifically for streaming, consider using a simple black background. Resist the impulse (or CEO’s suggestion) to create a fancy curtain with your company name and logo — it will only stress the codec, and likely cause “mosquitoes” and other artifacts around the text that are highly noticeable to your viewers.
When you do have to shoot on location, minimize the amount of detail in the scene, either by removing it from the background or by using camera controls to blur the background. McKinsey appeared to use both techniques, as the background in the Estrin video is object-free except for what appears to be a picture frame or shutter, which is slightly blurred and free of detail.
In addition, when choosing a background for your shoots, make sure that it contrasts with the clothes that your subjects are wearing. Avoid bright lights in the background, which can cause backlighting that darkens the subject’s face.
Many of the same considerations for background detail pertain to your subject’s clothing. Minimize detail by avoiding herringbone, pinstripe, and other fine patterns, and removing excessive bracelets, necklaces, and other decorative jewelry. Color makes a difference, too. Although red worked well in the Estrin shoot, generally blue and gray are safer choices. Also avoid dramatic differences in brightness — like black suits and white shirts — that are tough for cameras and codecs to capture and retain without losing detail in either the black or white regions.
Even if you follow all the rules, it’s impossible to predict with 100 percent certainty what a set will look like after encoding. So, whenever possible, bring a computer and encode a short section of the video to the final encoding parameters before finalizing your set.
2. LIGHTING THE SET There are two realities to appreciate before you decide how to light your set. First is that if lighting is not adequate, you’ll have to boost the brightness (or gain) of the video to compensate, either while shooting (via your camera’s gain control) or during editing. Either way, increased gain manifests as noise, which is not only noticeable to the viewer, but also injects more irrelevant detail into the video, making the codec’s job more difficult and usually degrading quality to boot.
The second reality is that with most consumer and even prosumer camcorders (e.g., those that cost under $10,000), ambient light typically isn’t sufficient unless you’re shooting outdoors. If you’re shooting indoors, you’ll almost certainly have to supplement or replace the ambient lighting with your own to optimize the quality of your streaming video.
If you’re on a budget, shop lights from Lowe’s or Home Depot can get you started, though they obviously lack many of the features of video-specific lighting. Otherwise, there are a number of light kits available from a range of vendors, including Lowel, Arri, Kino Flo, Chimera, and Photogenic, usually starting at close to $1,000.
When purchasing a light kit, keep in mind that “soft” lights are the easiest to compress because they create indistinct lines and shadows, as opposed to the sharp lines and deep shadows produced by hard lights. Virtually all fluorescent lights are soft lights, and you can soften hard lights like tungsten bulbs by using soft boxes, bouncing the lights off umbrellas, or shielding them with diffusion filters. Google “diffusion kit” and you’ll find multiple collections of fabric filters that you can attach to a variety of lights — including those shop lights.
Whatever lights you use, be aware that lights have different color temperatures, and they must be consistent within a set to avoid white balance problems. For example, if you use an incandescent light kit to supplement sunlight from a window, it will be virtually impossible to produce natural looking colors. You can run into similar issues by mixing fluorescent and incandescent lights. Fortunately, most high-quality bulbs list their color temperature, which should help you avoid these issues.
In terms of lighting style, you can use three-point lighting, which produces slight shadows on the face, or flat lighting, with no appreciable shadows. Either way, the most important priority is to provide sufficient lighting for the camcorder to achieve good exposure without injecting gain into the video.
3. CAMERA USAGE AND SELECTION If you’re shooting in a controlled environment, like a classroom or conference room, it’s best to take the camera out of automatic mode and control exposure manually, which fortunately is easier than it sounds. Basically, three settings determine how much light gets to the camcorder’s sensing device: shutter speed, gain, and aperture.
Unless you’re shooting high motion sports, a shutter speed of 1/60th of a second should be fine. Faster speeds capture sharper detail, but require more light, while slower speeds can produce blurry video. Whenever possible, I set gain to zero and control exposure via aperture.
As with a still camera, the aperture controls the amount of light that gets to the camcorder’s CCD. Aperture is measured in f-stops, with lower f-stops (like 2) admitting more light and higher f-stops (like 9) admitting less light. How do you know how much light is enough?
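Because the light admitted through the lens scales roughly with the inverse square of the f-number, you can estimate how much a given aperture change affects exposure. A minimal sketch (the specific f-numbers here are just illustrative):

```python
def relative_light(f_stop_a, f_stop_b):
    """How many times more light f_stop_a admits than f_stop_b.

    Light through the lens is roughly proportional to 1 / f-number squared,
    so opening up from f/9 to f/2 admits (9/2)^2, or about 20x, more light.
    """
    return (f_stop_b / f_stop_a) ** 2

print(relative_light(2, 9))  # 20.25
```

This is why a seemingly small change in f-stop can make the difference between a clean image and one that needs quality-degrading gain.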
Figure 2: Adobe OnLocation’s waveform monitor and zebra stripes indicate that the aperture settings are on target.
There are two relatively objective ways. One way is to use the camera’s zebra stripes. Zebra stripes are configurable indicators that appear on most consumer camcorders at specified levels of light, which is measured on the IRE scale (Institute of Radio Engineers). When lighting a face, you typically want IRE levels in the highlight regions to be about 70 to 80 IRE. If you set your zebra stripes at 80, and the face is properly exposed, you should see zebra stripes on the brightest regions of the face.
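To give a rough sense of where an 80 IRE zebra threshold lands in the digital signal, here is a sketch that maps IRE levels to 8-bit luma code values, assuming studio-swing digital levels (16 = 0 IRE, 235 = 100 IRE); your camera’s actual mapping may differ:

```python
def ire_to_luma8(ire):
    """Map an IRE level (0-100) to an 8-bit luma code value,
    assuming studio-swing digital levels: 16 = 0 IRE, 235 = 100 IRE."""
    return round(16 + (ire / 100.0) * (235 - 16))

print(ire_to_luma8(80))  # 191
```

So an 80 IRE zebra setting flags pixels well below clipping, which is why properly exposed facial highlights should just begin to show stripes.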
The other alternative is to use a waveform monitor to measure the incoming video signal on the IRE scale. Dedicated hardware waveform monitors are expensive; software waveform monitors, such as the one included in Adobe OnLocation, cost far less. Another advantage of OnLocation is that you can preview your video on a computer monitor, which is much larger than your camcorder’s LCD, and check exposure with zebra stripes as well as the waveform monitor.
In the image from Adobe OnLocation (see figure 2 — featuring this author) the waveform monitor is on the right. The monitor tracks data horizontally through the frame, with corresponding IRE values shown vertically from 0 to 100 IRE. My face is the vertical spot of green in the left center of the monitor that’s bisected by the 60 IRE value and peaks at close to 80. That, plus the zebra stripes on my forehead and cheek, which were set to 80 IRE, tell me that exposure is adequate. And don’t worry, I was shooting in HDV to scale down to 800 x 600 streaming resolution, so the final video was well centered and the microphone was cropped out.
Beyond setting exposure manually, you’ll get the best results if you shoot in progressive mode, not interlaced. That’s because all streaming video is delivered in progressive frames, and you can avoid deinterlacing “jaggies” if you shoot in progressive. If you do shoot in interlaced mode, remember to deinterlace during rendering — otherwise, you’ll see slicing artifacts like the ones in Ms. Estrin’s fingers.
And yes, since it wouldn’t be an article about streaming video production if I didn’t say this, you should definitely shoot with your camera stabilized on a tripod and minimize motion where you can. Your codec, and your viewers, will thank you for it.
4. FRAMING THE SHOT The Rule of Thirds is a principle of photographic composition that also applies to shooting video. Imagine the video frame divided into thirds, both horizontally and vertically, like a tic-tac-toe board. If the subject is facing the camera, the top horizontal line should be at eye level and the subject should be in the center of the frame. If the subject is facing the interviewer (as Ms. Estrin is), their eyes should sit at one of the four points where the horizontal and vertical lines intersect, leaving the wide-open space (called look room) in the direction the subject is looking. Watch for this while viewing TV one night and you’ll see the rule is nearly universally followed. Placing points of interest at the intersections or along the lines makes the frame more balanced and lets viewers engage with it more naturally; studies have shown that when viewing images, people’s eyes go to one of the intersection points more readily than to the center of the frame.
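If you ever need to overlay thirds guides on a preview or position a subject programmatically, the intersection points are simple to compute. A minimal sketch for an arbitrary frame size (the 1920 x 1080 example is just illustrative):

```python
def thirds_intersections(width, height):
    """Return the four rule-of-thirds intersection points as (x, y) pixels,
    ordered left to right, top to bottom."""
    xs = (width // 3, 2 * width // 3)   # the two vertical gridlines
    ys = (height // 3, 2 * height // 3) # the two horizontal gridlines
    return [(x, y) for y in ys for x in xs]

print(thirds_intersections(1920, 1080))
# [(640, 360), (1280, 360), (640, 720), (1280, 720)]
```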
Remember, too, that terms like close-up, medium shot, and long shot are terms of art that actually mean something. The folks who watch your videos are accustomed to shots being framed in this way, and shots that don’t adhere to these conventions look awkward. For example, you’ll hardly ever see a shot that cuts the subject off at the knees, or midway between the waist and chest. A long shot includes head to toe, a medium shot runs from waist to head, and a medium close-up from chest to head.
5. DON’T FORGET ABOUT AUDIO Don’t skimp on the audio side of the equation. Viewers accept some visual degradation in their streaming media, but not audio-related deficits, since they know that audio can be nearly perfect, even when delivered via streaming.
If you rely on the camcorder’s built-in microphone to capture audio, the audio will be noisy and recognizably lower in quality. Only buy a camcorder that has a connection for an external microphone, as well as manual and automatic volume controls, and purchase an external microphone that matches your shooting requirements.
Overall, you can’t have good streaming video unless you start with good video, which means solid fundamentals of set design, lighting, sound, and camera selection and use. You don’t need to be an expert — if you invest 30 minutes or so in each subject, you’ll achieve a new level of competency and streaming quality.