Video content editing can be done in a production facility or in a standalone studio. The editing suite contains all the equipment and the video and audio tools needed to ingest, edit, play back, and store the finished product. A typical editing suite includes:
- Broadcast display monitor (one or two)
- Studio-quality speakers
- Editing/color timing software (Avid, Final Cut Pro, DaVinci Resolve, etc.)
- Workstation computer
- Control panel/surface
- Video tape playback (optional)
- Broadcast waveform monitor (optional)
FPGAs play a big role in providing much of the I/O and video-processing horsepower, in the form of dongles, boxes (standalone or rack-mounted), or cards that fit into the workstation or a workstation expansion chassis.
Video capture devices are used to ingest content for editing. A device can ingest one or more digital formats (DVI, SDI, HDMI, DisplayPort) or analog formats (AV, S-Video, composite). (see the Video Capture Device page)
Video and image processing may be needed to convert the content into a common format for editing. Functions here might include color space conversion, deinterlacing, frame buffering, gamma correction, and scaling. (see the Converter Boxes page)
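To make one of these processing functions concrete, the sketch below shows a color space conversion: full-range 8-bit RGB to studio-range BT.709 YCbCr, the kind of per-pixel arithmetic an FPGA pipeline implements in fixed point. The coefficients are the standard BT.709 weights; the 16/128 offsets and 219/224 scale factors follow the usual 8-bit studio-range convention. This is an illustrative software model, not a hardware implementation.

```python
def rgb_to_ycbcr_bt709(r, g, b):
    """Convert one 8-bit full-range RGB pixel to studio-range BT.709 YCbCr."""
    # BT.709 luma weights: Kr = 0.2126, Kg = 0.7152, Kb = 0.0722
    y  = 16  + ( 0.2126 * r + 0.7152 * g + 0.0722 * b) * 219 / 255
    cb = 128 + (-0.1146 * r - 0.3854 * g + 0.5    * b) * 224 / 255
    cr = 128 + ( 0.5    * r - 0.4542 * g - 0.0458 * b) * 224 / 255
    # Clamp to the legal 8-bit code range after rounding
    clamp = lambda v: max(0, min(255, int(round(v))))
    return clamp(y), clamp(cb), clamp(cr)

# Full white maps to peak studio luma with neutral chroma
print(rgb_to_ycbcr_bt709(255, 255, 255))  # (235, 128, 128)
```

In an FPGA these multiplies would typically be quantized to fixed-point coefficients and mapped onto DSP blocks, one pixel (or several) per clock.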
Broadcast monitoring provides the ability to analyze the video with tools such as the waveform monitor, vectorscope, RGB parade, YUV component parade, histogram, audio phase meter, and audio level meters. This is necessary to ensure the video meets the broadcast standards for a particular region and is vital during the editing, color correction, and mastering steps.
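Two of the simpler computations behind these scopes can be sketched in a few lines: a luma histogram (the basis of the histogram display) and a peak audio level meter in dBFS. These are simplified illustrations of what the monitoring hardware computes continuously per frame, not a description of any particular product.

```python
import math

def luma_histogram(luma_values, bins=256):
    """Count occurrences of each 8-bit luma code value across a frame."""
    hist = [0] * bins
    for v in luma_values:
        hist[v] += 1
    return hist

def peak_dbfs(samples):
    """Peak level of normalized float audio samples (-1.0..1.0), in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

# A half-scale audio peak sits 6 dB below full scale
print(round(peak_dbfs([0.5, -0.25]), 2))  # -6.02
```

A hardware scope performs the same accumulation in real time, one histogram bin update per pixel clock, which is exactly the kind of parallel, streaming workload FPGAs handle well.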
Video playback is also needed to output the video to the display monitor or the broadcast waveform monitor. Here, I/O functionality for SDI, HDMI, and DisplayPort/Thunderbolt is needed.
Editing workstations typically work with proxies of the high-resolution content and/or with compressed content. There are many intermediate, often proprietary codecs designed for the editing process rather than for final distribution; examples include ProRes, DNxHD, CineForm, AVCHD, XDCAM, and AVC-Intra. These intermediate codecs retain high quality while using far less disk space than an uncompressed file.
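The disk-space argument is easy to quantify. The sketch below compares one hour of uncompressed 1080p25 10-bit 4:2:2 video (20 bits per pixel on average) against an intermediate codec; the 150 Mbit/s figure used here is an illustrative ballpark for an editing codec at that resolution, not a published specification.

```python
def gigabytes_per_hour(bitrate_mbps):
    """Storage for one hour of video at a given bitrate in Mbit/s."""
    return bitrate_mbps * 1e6 * 3600 / 8 / 1e9

# Uncompressed 1080p25, 10-bit 4:2:2: 20 bits/pixel average
uncompressed_mbps = 1920 * 1080 * 20 * 25 / 1e6   # ~1037 Mbit/s
intermediate_mbps = 150                           # illustrative ballpark

print(round(gigabytes_per_hour(uncompressed_mbps), 1))  # 466.6 GB/hour
print(round(gigabytes_per_hour(intermediate_mbps), 1))  # 67.5 GB/hour
```

Roughly a 7x reduction at this assumed bitrate, which is why editing directly from uncompressed masters is rare.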
Videos can also be “wrapped” in a variety of container formats, which include metadata describing which codec was used and how the streams are stored. Examples include AVI, MP4, MOV, WMV, and ASF. FPGAs are used to accelerate the encoding and decoding of files that use these intermediate editing codecs and container formats.
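The container/metadata split is visible in the file layout itself. MP4 and MOV, for example, are built from a sequence of "boxes" (atoms), each starting with a 4-byte big-endian size and a 4-character type; boxes like `ftyp` and `moov` carry the codec and stream-layout metadata while `mdat` holds the media. The sketch below walks the top-level boxes of such a file; it is a minimal illustration that skips the special size-0 and 64-bit-size cases.

```python
import struct

def list_boxes(data):
    """List (type, size) of top-level MP4/MOV boxes in a byte buffer."""
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, btype = struct.unpack_from(">I4s", data, offset)
        if size < 8:  # size 0 (to end of file) or 1 (64-bit size) not handled
            break
        boxes.append((btype.decode("ascii"), size))
        offset += size
    return boxes

# A minimal hand-built 'ftyp' box: 16 bytes, major brand 'isom', minor version 0
sample = struct.pack(">I4s4sI", 16, b"ftyp", b"isom", 0)
print(list_boxes(sample))  # [('ftyp', 16)]
```

A hardware demuxer does the same walk on the incoming byte stream, routing each box's payload to the appropriate decode pipeline.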
Cards, rack-mount or standalone boxes, and USB-based dongles can combine some or even all of these functions, but in every case the FPGA is at the heart of the functionality.