Video and Image Processing Suite Details

The Gamma Corrector is used when you need to constrain pixel values to specific ranges based on the characteristics of the display the video is sent to.  Some displays have a nonlinear response to the voltage of a video signal, so pixel values must be remapped to compensate.  The Gamma Corrector uses a look-up table, accessed through an Avalon®-MM interface, to map pixel values to their corrected values.

An example of the Gamma Corrector shows a Y'CbCr input with 8-bit color values ranging from 0 to 255 being passed through the Gamma Corrector, which remaps the values to the range 16 to 240 before sending them to a Clocked Video Output.
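The remapping can be illustrated with a short Python sketch. This models only the look-up-table behavior, assuming a simple power-law gamma; in hardware the table contents are written over the Avalon-MM interface, and the names and parameters used here are illustrative:

```python
def build_gamma_lut(gamma=2.2, out_min=16, out_max=240):
    """Build a 256-entry look-up table mapping full-range 8-bit values
    (0-255) to limited-range values (16-240), with gamma correction."""
    lut = []
    for code in range(256):
        normalized = (code / 255.0) ** (1.0 / gamma)   # gamma-correct
        lut.append(round(out_min + normalized * (out_max - out_min)))
    return lut

def apply_lut(pixels, lut):
    """Remap each pixel through the table, as the core does per sample."""
    return [lut[p] for p in pixels]

lut = build_gamma_lut()
print(apply_lut([0, 255], lut))  # endpoints map to [16, 240]
```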

The 2D finite impulse response (FIR) filter video intellectual property (IP) core processes color planes serially, passing the pixel values through an FIR filter.  The coefficients are loaded through an Avalon Memory Mapped (Avalon-MM) interface, which can be written by a Nios® II processor or by other peripherals in the Qsys design containing the video datapath.

An example block diagram using the 2D FIR filter shows a Clocked Video Input with RGB color planes formatted serially so that they can pass through the FIR filter.  Once filtering is complete, the Color Plane Sequencer reformats the color planes from three planes in serial to three planes in parallel.  With three color planes in parallel, the video frame is ready to be transmitted externally through the Clocked Video Output core.
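As a behavioral sketch of what the filter computes (not the hardware implementation), a 2D FIR filter convolves each color plane with a coefficient kernel; the clamped edge handling used here is one common choice and is assumed for illustration:

```python
def fir_2d(plane, coeffs):
    """Apply a 2D FIR filter (convolution kernel) to one color plane.
    `plane` is a list of rows; edges are handled by clamping coordinates."""
    kh, kw = len(coeffs), len(coeffs[0])
    h, w = len(plane), len(plane[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    py = min(max(y + ky - kh // 2, 0), h - 1)
                    px = min(max(x + kx - kw // 2, 0), w - 1)
                    acc += coeffs[ky][kx] * plane[py][px]
            row.append(acc)
        out.append(row)
    return out

# 3x3 box-blur coefficients, as might be loaded over Avalon-MM
blur = [[1 / 9] * 3 for _ in range(3)]
flat = [[10] * 4 for _ in range(4)]
result = fir_2d(flat, blur)  # a flat plane stays flat under a box blur
```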

The Alpha Blending Mixer and Mixer II cores mix up to 12 and 4 image layers respectively, and are runtime controllable through an Avalon-MM interface.  Accessing the cores from a Nios II processor through the Avalon-MM interface, you can dynamically control the location of each displayed layer and, with the Alpha Blending Mixer only, the order in which the layers are overlaid. The Alpha Blending Mixer's alpha blending feature also supports the display of transparent or semi-transparent pixels.

The Mixer II core includes a built-in test pattern generator to use as a background layer, so none of its four inputs has to be taken up by a separate Test Pattern Generator core. Mixer II also supports 4K video.

An example block diagram shows how the Mixer cores are used: a Clocked Video Input provides the active video feed on input 0, the built-in Test Pattern Generator provides a background layer, and a Frame Reader core reading static graphics, such as a company logo, feeds input 1.  These feeds are mixed together to display a video image with graphics over the test pattern background.
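Conceptually, the layers are composited back to front with per-layer alpha and offsets. The following Python sketch models that behavior for single-plane images; the layer tuple layout is an assumption for illustration, not the core's register interface:

```python
def blend_layers(background, layers):
    """Alpha-blend image layers over a background, back to front.
    Each layer is (pixels, alpha, x_offset, y_offset); alpha in [0, 1]."""
    out = [row[:] for row in background]
    for pixels, alpha, ox, oy in layers:
        for y, row in enumerate(pixels):
            for x, p in enumerate(row):
                if 0 <= oy + y < len(out) and 0 <= ox + x < len(out[0]):
                    dst = out[oy + y][ox + x]
                    # standard alpha blend: src over dst
                    out[oy + y][ox + x] = alpha * p + (1 - alpha) * dst
    return out

bg = [[0] * 4 for _ in range(4)]               # black background layer
logo = ([[100, 100], [100, 100]], 0.5, 1, 1)   # semi-transparent overlay
mixed = blend_layers(bg, [logo])               # overlay pixels become 50.0
```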

Feed Mixer inputs directly from a frame buffer unless you are certain that the input and output frame rates and the offsets of the input layers cannot cause data starvation and the consequent lock-up of the video.

The Chroma Resampler is used to change chroma formats of video data.  Video transmitted in Y'CbCr color space can subsample the Cb and Cr color components in order to save on data bandwidth.  The Chroma Resampler provides the ability to go between 4:4:4, 4:2:2, and 4:2:0 formats.

An example shows a Clocked Video Input with Y'CbCr video in 4:2:2 chroma format being upsampled by the Chroma Resampler to 4:4:4 format.  The upsampled video is then passed to a Color Space Converter, which converts it from Y'CbCr to RGB before it is sent to the Clocked Video Output core.
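The 4:2:2 to 4:4:4 upsampling can be sketched for one video line as follows. Sample-and-hold (nearest-neighbour) replication is assumed here for simplicity; the core also offers filtered resampling modes:

```python
def upsample_422_to_444(y, cb, cr):
    """Upsample 4:2:2 chroma to 4:4:4 by sample-and-hold: each Cb/Cr
    sample is repeated so every luma sample gets its own chroma pair."""
    cb444 = [cb[i // 2] for i in range(len(y))]
    cr444 = [cr[i // 2] for i in range(len(y))]
    return y, cb444, cr444

# one line of 4:2:2 video: 4 luma samples, 2 chroma samples per component
y = [16, 32, 48, 64]
cb = [128, 130]
cr = [120, 122]
y444, cb444, cr444 = upsample_422_to_444(y, cb, cr)
```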

The Clipper core is used when you want to extract fixed areas of a video feed to pass onward.  The Clipper can be configured at compile time or updated through an Avalon-MM interface from a Nios II processor or another peripheral.  The clipping region can be specified either as offsets from the frame edges or as a fixed rectangular area.

An example shows two instances of the Clipper taking 400 x 400 pixel areas from their respective video inputs.  These two clipped video feeds are then mixed together in a Mixer core along with other graphics, with the built-in test pattern generator as a background.  Because the Mixer can adjust the location of its video inputs, you could position the two clipped video feeds side by side, adding frame buffers if necessary.
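The rectangle-based clipping itself is simple to model. This sketch assumes the frame is held as a list of rows, which the streaming hardware does not do, but the arithmetic is the same:

```python
def clip_frame(frame, x0, y0, width, height):
    """Extract a fixed rectangular region from a frame (list of rows),
    as the Clipper core does with a rectangle configuration."""
    return [row[x0:x0 + width] for row in frame[y0:y0 + height]]

# small synthetic frame where pixel value encodes its coordinates
frame = [[x + 10 * y for x in range(8)] for y in range(8)]
clipped = clip_frame(frame, 2, 1, 4, 3)   # 4 x 3 region at (2, 1)
```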

The Clocked Video Input and Output cores are used to capture and transmit video in various formats such as BT.656 and BT.1120.

Clocked Video Input cores convert incoming video data into Avalon Streaming (Avalon-ST) video formatted packet data, removing the incoming horizontal and vertical blanking and retaining only the active picture data.  The core allows you to capture video at one frequency and pass the data on to the rest of your Qsys system, which can run at the same or a different frequency.

An example shows a Clocked Video Input feeding video into a scaler block to upscale from 1280 x 720 to 1920 x 1080, after which the video is sent to a Clocked Video Output core.  If the input and output have the same frame rate, the FIFOs in the Clocked Video Input and Clocked Video Output can be sized to allow the conversion to take place without a frame buffer.

The Color Plane Sequencer is used to rearrange the color plane elements in a video system.  It can convert color planes from serial to parallel transmission (or vice versa), “duplicate” video channels (such as might be required to drive a secondary video monitor sub-system), or “split” video channels (such as may be required to separate an alpha plane from three RGB planes output as four planes from a frame reader).

An example of the Color Plane Sequencer is shown with the 2D FIR filter video IP core which requires video to be input and output with the color planes in series. To transmit video out to the Clocked Video Output in the desired format, the color planes must be converted to parallel by the Color Plane Sequencer.
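The serial-to-parallel rearrangement can be sketched in Python as regrouping a flat stream of color samples into per-pixel tuples; the function names are illustrative:

```python
def serial_to_parallel(stream, num_planes=3):
    """Regroup serially transmitted color planes (R, G, B, R, G, B, ...)
    into parallel per-pixel tuples, one beat per pixel."""
    return [tuple(stream[i:i + num_planes])
            for i in range(0, len(stream), num_planes)]

def parallel_to_serial(pixels):
    """Flatten parallel pixels back into one color plane per beat."""
    return [c for pixel in pixels for c in pixel]

serial = [255, 0, 0, 0, 255, 0]     # two pixels, planes in series
parallel = serial_to_parallel(serial)  # [(255, 0, 0), (0, 255, 0)]
```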

The Color Space Converter cores (CSC and Color Space Converter II) are used when you must convert between the RGB and Y'CbCr color space formats.  Depending on your video input and output format requirements, you may have to convert between different color formats.

An example shows a Chroma Resampler upsampling Y'CbCr video, which is then passed to the Color Space Converter and converted into the RGB color format to be sent to a Clocked Video Output.
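The conversion itself is a matrix operation on each sample. The sketch below uses full-range BT.601 coefficients for Y'CbCr to R'G'B'; the actual core lets you select or customize the coefficient set:

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one full-range Y'CbCr sample to R'G'B' using BT.601
    coefficients, clamping the result to the 8-bit range."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

grey = ycbcr_to_rgb(128, 128, 128)   # neutral chroma gives grey
white = ycbcr_to_rgb(255, 128, 128)  # full luma gives white
```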

The Control Synchronizer is used in conjunction with an Avalon-MM master, such as a Nios II processor or other peripheral, to synchronize runtime configuration changes in one or more video IP blocks with the video data flowing through them.  A configuration change can be applied upstream of a video IP core while frames in the previous format are still passing through it.  To make the transition seamless and avoid on-screen glitches, the Control Synchronizer aligns the configuration switch-over with the arrival of the first frame of new video data at the core.

An example of the Control Synchronizer shows a Nios II processor configuring a Test Pattern Generator to change the frame size from 720p to 1080p.  The Control Synchronizer receives notification from the Nios II processor that the video frame data is about to change, but holds off reconfiguring the Clocked Video Output until the new frames have passed through the Frame Buffer to the Control Synchronizer.  The Control Synchronizer reads the control data packets of each frame to determine whether it carries the new format, and then updates the Clocked Video Output core with the new settings, making the resolution change on the video output seamless.

The Deinterlacer cores (Deinterlacer, Deinterlacer II, and Broadcast Deinterlacer) convert interlaced video frames into progressive scan video frames.  Multiple deinterlacing algorithms are available; the choice depends on the desired quality, the logic area used, and the available external memory bandwidth.

An example shows a Clocked Video Input receiving interlaced frames and passing them through the Deinterlacer, which transacts with an external memory and a Frame Buffer core.  After the video is deinterlaced into progressive scan format, it is sent out through a Clocked Video Output core.
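The simplest algorithm, "bob" deinterlacing, can be sketched as follows: each field line is repeated to produce a full progressive frame (higher-quality modes interpolate between lines or weave the two fields together):

```python
def bob_deinterlace(field):
    """"Bob" deinterlacing: build a progressive frame from a single
    field by repeating each field line to fill the missing lines."""
    frame = []
    for line in field:
        frame.append(line[:])   # the field's own line
        frame.append(line[:])   # repeated to replace the missing line
    return frame

field = [[1, 1], [2, 2]]            # one field: two lines
progressive = bob_deinterlace(field)  # four progressive lines
```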

The Frame Buffer and Frame Buffer II cores buffer progressive and interlaced video fields and support double or triple buffering with a range of options for dropping and repeating frames.  A Frame Buffer is necessary in cases such as deinterlacing video, changing the frame rate, or, in some configurations, mixing video.

An example shows a Clocked Video Input core receiving video at 30 frames per second (fps) that needs to be converted to 60 fps.  The Frame Buffer core buffers multiple frames and supports repeating frames, so the frame rate can be converted to 60 fps before the video is transmitted out through a Clocked Video Output core.
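The 30 to 60 fps conversion amounts to repeating each buffered frame. The frame selection logic can be sketched as follows, with illustrative names; a real frame buffer streams frames rather than holding them all in a list:

```python
def convert_frame_rate(frames, in_fps=30, out_fps=60):
    """Repeat (or drop) buffered frames to convert frame rate;
    30 -> 60 fps emits each input frame twice."""
    out = []
    for i in range(int(len(frames) * out_fps / in_fps)):
        # map each output slot back to the input frame covering it
        out.append(frames[i * in_fps // out_fps])
    return out

doubled = convert_frame_rate(["f0", "f1", "f2"])  # each frame repeated
```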

The Frame Reader core reads video frames stored in external memory and outputs them as an Avalon-ST video stream.  The frames are stored as raw video pixel values only.

An example shows the Frame Reader retrieving company logo graphics to overlay on another video stream, with the layers merged together through a Mixer core.  From there the merged video is sent out to a Clocked Video Output core.  The Mixer can optionally be configured to include an alpha channel; in this case, the Frame Reader could be configured to read three color planes and one alpha plane, which could be “split” out using a Color Plane Sequencer (not shown) before being input to the Mixer.

The Scaler II core is used to scale a video frame up or down in size.  It supports multiple algorithms including nearest neighbor, bilinear, bicubic, and polyphase/Lanczos scaling.  On-chip memory is used for buffering video lines used for scaling, with higher scaling ratios requiring more storage.

An example shows the Scaler II core taking a 720p video frame from a Clocked Video Input, scaling it to 1080p, and sending it to a Clocked Video Output.
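Nearest-neighbour scaling, the simplest of the supported algorithms, can be sketched as an index mapping from output to input coordinates; the bilinear, bicubic, and polyphase modes instead weight several neighbouring taps from the buffered lines:

```python
def scale_nearest(frame, out_w, out_h):
    """Nearest-neighbour scaling: each output pixel copies the input
    pixel whose coordinates it maps back onto."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

frame = [[1, 2], [3, 4]]            # tiny 2 x 2 input frame
scaled = scale_nearest(frame, 4, 4)  # upscaled to 4 x 4
```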

The Switch cores allow users to connect up to twelve input video streams to up to twelve output video streams.  The Switch does not merge or duplicate the video streams; it lets you change the routing from input ports to output ports.  You do not need to connect all output ports; connect only those outputs whose video streams you want to monitor.  The Switch is controlled through an Avalon-MM interface accessible by a Nios II processor or another Avalon-MM master.

An example of the Switch is shown with a Clocked Video Input and a Test Pattern Generator feeding two ports on a Switch.  The second Switch output port is left unconnected, and the Nios II processor controls which of the two feeds is sent to the port connected to the Clocked Video Output for display.
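The routing behavior can be modeled as a map from output ports to input ports, with unrouted outputs left unconnected; this is a functional sketch, not the core's register map:

```python
def route(inputs, routing):
    """Route input streams to output ports. `routing` maps each output
    port to an input port, or to None to leave it unconnected."""
    return {out: (inputs[src] if src is not None else None)
            for out, src in routing.items()}

# two input streams, matching the example above
inputs = {0: "clocked_video_in", 1: "test_pattern"}
# output 0 shows the test pattern; output 1 is left unconnected
routed = route(inputs, {0: 1, 1: None})
```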

The Test Pattern Generator core allows you to generate a number of images to quickly test your video interface.  The core is configurable for many different image sizes, as well as the RGB and Y'CbCr color formats.

You can use a Test Pattern Generator core along with a Clocked Video Output core to quickly verify your system's video interface.  With your desired video specifications in hand, you can complete a design in minutes and quickly validate that the interface can generate an image on an external display.
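A color-bar pattern, one of the classic test images, can be sketched as follows; the eight-bar layout here is illustrative rather than the core's exact pattern:

```python
def color_bars(width, height):
    """Generate a simple RGB color-bar test frame: eight vertical bars,
    white on the left through black on the right."""
    bars = [(255, 255, 255), (255, 255, 0), (0, 255, 255), (0, 255, 0),
            (255, 0, 255), (255, 0, 0), (0, 0, 255), (0, 0, 0)]
    # min() guards the last column against rounding past the last bar
    return [[bars[min(x * 8 // width, 7)] for x in range(width)]
            for y in range(height)]

bars_frame = color_bars(64, 4)  # 64 pixels wide, 4 lines tall
```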

The Avalon-ST Video Monitor is a core that can be inserted in series with your video datapath; it reads Avalon-ST video packet information and provides diagnostic data to the Trace System.  The Video Monitor is inserted wherever you want to probe the video datapath for analysis and statistics.  Combined with the Trace System core and connected externally through a debug port such as JTAG or an Intel FPGA Download Cable, it gives you greater visibility into the video system's behavior.  You can use System Console as the virtual platform to display this information.

An example shows Avalon-ST Video Monitors inserted before and after a Color Plane Sequencer, monitoring the video packet information coming from the Clocked Video Input and from the Color Plane Sequencer.  The Video Monitor does not alter the video data as it passes through the core.  The Video Monitors are connected to the Trace System, which is accessed via JTAG in this case.

The Trace System is used to access the Avalon-ST Video Monitor cores inserted in a design for video diagnostic information.  Multiple Video Monitor cores can connect to one Trace System controller.  The Trace System connects to a host through a debug interface, typically JTAG or an Intel FPGA Download Cable.

An example shows the Trace System used with two Avalon-ST Video Monitor cores inserted before and after a Color Plane Sequencer.  The Video Monitors are connected to the Trace System, which is accessed via JTAG in this case.