Consumption of video in the broadcast market means playback to display devices. In general, displays can be integrated into other equipment (video monitors, waveform analyzers, scopes, etc.) or they can be standalone display devices such as display monitors, projectors, video walls or multiviewers. Regardless of the device, there are three basic stages: input, processing and output.
The input stage accepts incoming signals, whether the unit is a standalone display or equipment with a built-in display. The nature, number and type of inputs depend upon the intended uses. In the broadcast production plant, inputs are likely to be baseband uncompressed digital video such as SDI, HDMI or DisplayPort, or legacy analog sources coming over component, composite or S-video connections. Some gear may also accept input from a USB port or over a CAT 5 cable connected to an Ethernet-based network.
Once an input source has been selected, video processing may be performed. This could include de-interlacing, frame rate conversion, scaling, color correction and more. There are often redundant video processing blocks in the playback path. For example, a router may have these capabilities built in, but the projector or monitor may also offer the same capabilities.
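To make one of these processing steps concrete, here is a minimal sketch of nearest-neighbor scaling in Python. It assumes a frame represented as a 2-D list of pixel values (a simplification; real scalers work on multi-plane pixel data and typically use polyphase filtering rather than nearest-neighbor). The function name and representation are illustrative, not from any particular product.

```python
def scale_nearest(frame, out_w, out_h):
    """Resize a frame (list of rows of pixel values) to out_w x out_h
    by picking, for each output pixel, the nearest source pixel."""
    in_h = len(frame)
    in_w = len(frame[0])
    return [
        # Map each output coordinate back to a source coordinate.
        [frame[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# Upscaling a 2x2 frame to 4x4 repeats each source pixel in a 2x2 block.
frame = [[1, 2],
         [3, 4]]
print(scale_nearest(frame, 4, 4))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

In an FPGA implementation this same index-mapping idea is realized with line buffers and coefficient tables so that pixels are produced at the output clock rate rather than by random access into a full frame.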
The output stage connects any processed video to the actual display drivers. For a standalone monitor, video wall, projector, multiview display or equipment with an integrated display, this interface is usually either LVDS or its replacement, V-by-One.
An FPGA can provide nearly all of this functionality, allowing display devices to be quickly designed and brought to market – with the ability to support upgrades over the product's life. Below are available IP cores for the I/O and video processing sections.
- Video and image processing (VIP II) functions (4K scalers, mixers, DIL)