Following 3points' comment on the removal of the DNxHD bitrate cap in Blender, and the linked bug tracker discussion regarding DNxHD intermediates:

It brought to mind a couple of old posts I’d forgotten about:

And a bit of a revisit.

Many video cameras, particularly the 'consumer' variety, capture and encode luma across an 8-bit levels range greater than the 'rec709' (and earlier specifications) range of 16 – 235 for luma and 16 – 240 for chroma.

Generally these cameras use the 16 – 255 8-bit levels range, but there are numerous point-and-shoot cameras which encode using the full 8-bit range and don't flag it as such. Neither, however, is strictly to a video 'standard'.

There are a number of cameras, such as the Canon and Nikon video-shooting DSLRs and the Panasonic GH3 (h264 MOV profiles), which capture and encode to the JFIF standard, with luma and chroma over the full 8-bit levels range. The encoder, however, adds the h264 VUI 'fullrange' flag to the header of the h264 stream. This signals the decompressing codec to squeeze the full 8-bit range into the 'rec709' 16 – 235 / 16 – 240 levels range at playback, or on import into an NLE, before any color space conversion to RGB for viewing, color processing, filters and effects.
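The squeeze that the 'fullrange' flag requests can be sketched numerically. This is just the linear remap of full-range luma into the 16 – 235 range; the function name is mine, not anything from an actual decoder:

```python
# Sketch of the levels squeeze a decoder applies when the h264
# 'fullrange' flag is set: full-range 0-255 luma is compressed
# into the 'rec709' 16-235 range before any RGB conversion.

def squeeze_full_to_limited(y_full: int) -> int:
    """Map full-range luma (0-255) into limited range (16-235)."""
    return round(16 + y_full * 219 / 255)

for y in (0, 128, 255):
    print(y, "->", squeeze_full_to_limited(y))  # 0->16, 128->126, 255->235
```

With the flag honoured, nothing is clipped: full-range black lands on legal black and full-range white on legal white.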

So what happens to these 16 – 255 or unflagged full-range camera video files, and why does it matter? You may decide it doesn't, but it's something to be aware of nonetheless.

To simulate the 'full range' levels found in numerous camera video files I created this image. It contains text at RGB values 16 & 255, a gradient from 8-bit level 235 up to 255 (white), our highlight roll-off to hard clip at 255, and a gradient from 8-bit level 16 down to 0 (black), detail in the shadows.
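The two ramps are simple enough to reconstruct in a few lines. This is a hypothetical sketch of the test values, not the actual image generation used:

```python
# Hypothetical reconstruction of the two test ramps: a highlight
# ramp from 8-bit level 235 up to 255, and a shadow ramp from 16
# down to 0, i.e. the detail that sits outside the legal range.
width = 256

def ramp(start: int, end: int, n: int) -> list[int]:
    """Linear ramp of n 8-bit levels from start to end inclusive."""
    return [round(start + (end - start) * i / (n - 1)) for i in range(n)]

highlight = ramp(235, 255, width)  # highlight roll-off to hard clip
shadow = ramp(16, 0, width)        # near-black shadow detail
print(highlight[0], highlight[-1], shadow[0], shadow[-1])  # 235 255 16 0
```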


The image was then encoded to h264, specifying that the full 8-bit levels range was to be used in the YCbCr (YCC for short) video file.

The image used in the encoding is RGB, and the 'correct' way to encode RGB to YCbCr is to use only the 16 – 235 levels range in YCC. The relationship between the 8-bit RGB and YCbCr ranges would then be: YCC 16 is black, equivalent to RGB 0; YCC 235 is white, equivalent to RGB 255.
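For gray values (ignoring the matrixing of the color channels) that 'correct' studio-range encode is the standard 8-bit quantisation, which can be sketched as:

```python
# The 'correct' studio-range mapping for gray values:
# RGB 0 -> Y 16 (black), RGB 255 -> Y 235 (white).

def rgb_gray_to_y(v: int) -> int:
    """Quantise an 8-bit RGB gray value into limited-range luma."""
    return round(16 + v * 219 / 255)

for v in (0, 128, 255):
    print(v, "->", rgb_gray_to_y(v))  # 0->16, 128->126, 255->235
```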

But as the purpose is to simulate a native camera file with luma levels outside of 16 – 235, the 'correct' way has not been followed.

Playing this video in a typical media player results in nothing more than black and white horizontal bars, not representative of the source image used to encode. But that gradient shadow and highlight detail is there in the encoding; it's simply not shown, because the typical media player takes only the 16 – 235 luma range when converting to RGB for playback, crushing shadow levels from 16 down to 0 (black) and compressing highlight detail from 235 up to 255 (white), giving us black and white horizontal bars.
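That crush can be shown with the standard limited-range decode plus the 8-bit clip a typical player applies. A minimal sketch, with a function name of my own invention:

```python
# Sketch of an 8-bit media-player style conversion: only Y 16-235
# maps to RGB 0-255; anything outside that range hard-clips, which
# is exactly why the ramps preview as flat black and white bars.

def y_to_rgb_gray(y: int) -> int:
    """Limited-range luma to 8-bit RGB gray, with hard clipping."""
    v = round((y - 16) * 255 / 219)
    return max(0, min(255, v))  # out-of-range luma is clipped here

for y in (0, 8, 16, 235, 245, 255):
    print(y, "->", y_to_rgb_gray(y))
```

Every shadow level from 0 to 16 comes out as RGB 0, and every highlight level from 235 to 255 comes out as RGB 255.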

To confirm the detail is there in the original.mp4 file, here's a waveform of a frame via Avisynth:


Full-range levels aren't really a problem for a decent modern NLE. It will preview the same as a media player, ie: black and white horizontal bars, and ultimately any YCC output for delivery from the NLE should have its luma constrained within the rec709 16 – 235 8-bit range, which these camera files do not. But it will generally be a non-destructive operation, because the color space conversion from YCC to RGB for display and color processing in a modern NLE is done at 32-bit float, not 8-bit like a typical media player or an image sequence output via ffmpeg, for example.

The color space conversion at 32-bit is generally non-destructive, so the 'full range' levels, including those gradient shadows and highlights, are still there, just not visible until some levels manipulation is done in the color correction or 'grading' process, pulling that detail into display as it enters the 16 – 235 levels range. This can be a simple arbitrary levels-mapping 'effect' like a PC to Video levels filter, or done in the 'grading' process via curves or three-way color corrector type tools.
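Why float is non-destructive can be sketched in a couple of lines: out-of-range luma becomes values below 0.0 or above 1.0 rather than being clipped, so a later levels remap can still pull the detail back. The remap below is a hypothetical PC-to-Video style mapping, not any particular NLE's filter:

```python
# Sketch of a 32-bit float pipeline: shadow detail below legal
# black survives the decode as a negative float value, and a
# levels remap recovers it exactly.

def y_to_float(y: int) -> float:
    """Limited-range decode to float, with NO clipping."""
    return (y - 16) / 219

def remap_full_to_legal(f: float) -> float:
    """Hypothetical levels remap: full-range floats back into 0..1."""
    return (f * 219 + 16) / 255

y = 8                        # shadow detail below legal black
f = y_to_float(y)            # about -0.0365, survives in float
recovered = remap_full_to_legal(f)
print(abs(recovered - 8 / 255) < 1e-9)  # True: level 8 is recoverable
```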

Blender, however, relies on ffmpeg to do the decompression of the YCC video AND the color space conversion to RGB in the video import process.

As a result Blender receives 8-bit RGB data from ffmpeg, and as you've probably guessed, the levels between 16 and 0 (the shadow detail) and between 235 and 255 (the highlight detail) are both destructively crushed to black and white respectively, just like the default preview of black and white horizontal bars.
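The destructive part is easy to demonstrate: once the 8-bit clip has run, distinct out-of-range levels collapse to the same value, so no later curve can separate them again. A sketch of that 8-bit path:

```python
# Sketch of the destructive 8-bit import path: after the clipped
# YCC->RGB conversion, different shadow levels are identical, so
# the mapping is no longer invertible and the detail is gone.

def eight_bit_decode(y: int) -> int:
    """Limited-range luma to 8-bit RGB gray with hard clipping."""
    v = round((y - 16) * 255 / 219)
    return max(0, min(255, v))

print(eight_bit_decode(4), eight_bit_decode(12))  # 0 0: two levels, one value
```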

There is no chance of manipulating the levels of the RGB data in Blender by means of curves or a three-way color corrector to pull the 'out of range' levels into play; that detail has gone, lost in the import process.

So the idea that we can import native YCC video files into Blender, edit, and export to some high-bitrate uncapped intermediate codec, at a file size far larger than the original, as a 'lossless' or even 'visually lossless' intermediate step, can be rather misleading, depending on the camera-native source files used.

Here's a high-bitrate DNxHD encode out of Blender from the very same h264 Original.mp4 source file:

If we put these two files into a grading application like DaVinci Resolve, both will preview with black and white horizontal bars, but only one of them will reveal the out-of-range shadow and highlight detail when a levels or curve adjustment is made.

Guess which. :-)

To conclude, a waveform via Avisynth of both the Original.mp4 and Blender's high-bitrate uncapped DNxHD:



Sure, compression can kill detail, but so can poor levels handling and 8-bit color space conversions, and no amount of high-bitrate encoding will bring it back.