I thought I’d try Blender’s VSE (Video Sequence Editor) again to edit and export a simple movie using my camera’s HD 50i and 25p footage (.m2t MPEG-2) as a simple test, before I attempt to use Blender’s node compositor and VSE for editing and combining video and CG together.

The first thing I wanted to assure myself of was the quality of import and sufficient export options. Export seems covered by either x264 or Blender’s frame server to TMPGEnc 2.5 or Xpress; hopefully frame serving to Cinema Craft Encoder SP2 is an option as well.

To compare against Blender’s import of my .m2t files, I also used DGIndex to get a .d2v file, opened the video through a simple .avs file in VDubMod, and used AVISynth to handle the deinterlacing, the resize to 1920×1080 PAR 1:1 from my camera’s HDV 1440×1080 resolution, and the YUV -> RGB conversion, finally exporting to .png image sequences.
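A minimal sketch of the kind of .avs script I mean, assuming DGDecode is loaded; TDeint stands in here for whichever deinterlacer you prefer, and the filename is a placeholder:

```
# Load the demuxed MPEG-2 stream via DGDecode's .d2v index
MPEG2Source("movie.d2v")

# Deinterlace the 50i material (TDeint is just one choice of plugin)
TDeint()

# Resize HDV's anamorphic 1440x1080 to square-pixel 1920x1080
LanczosResize(1920, 1080)

# Convert to RGB without stretching [16-235] to [0-255]
ConvertToRGB(matrix="PC.709")
```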

I then imported the original .m2t files into Blender (it uses FFmpeg) and imported the AVISynth-derived image sequences into Blender’s VSE to compare results. I used Blender’s scopes and histogram tool for analysis, rendered the results out from Blender, and then used the Gimp’s Histogram and Colour Cube Analysis tools as well.

Please feel free to interpret the results and post comments. Click on images for full size versions. Or RMB “Save As”.

Results:

.m2t Direct import into Blender’s VSE

Subdued colours in histogram, sparse vectorscope, clipped blacks

.png Render from Blender in Gimp

FFmpeg appears to be scaling [16-235] to [0-255], resulting in clipped blacks and stretching of the limited tonal range. Number of unique colours = 66274.

.m2t Direct import into Blender via Composite Nodes

Subdued colours, clipped blacks, more info in the vectorscope. Broken histogram in 32-bit float mode.

.png Render from Blender in Gimp

FFmpeg appears to be scaling [16-235] to [0-255] via the nodes as well as the VSE, stretching out the tonal range, but interestingly with over double the VSE import’s unique colours: 179010. Has the 32-bit float node environment helped with this? Even though there are more unique colours, the stretching still produces a similar histogram.

Then, in comparison, images exported from VDubMod via AVISynth as described above. I chose not to scale [16-235] to [0-255] because I don’t want the super blacks and super whites outside the 16-235 range to be clipped and, I think, thrown away unrecoverably, as happens with FFmpeg and Blender’s import methods.
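In AVISynth the knob that controls this is the matrix argument to ConvertToRGB; the snippet below shows the two alternatives (the “PC” variant is what I used):

```
# "Rec709" scales [16-235] to [0-255], clipping anything outside that range
ConvertToRGB(matrix="Rec709")

# "PC.709" maps values straight across, so super blacks and whites survive
ConvertToRGB(matrix="PC.709")
```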

.png export as Image Sequence from AVISynth in Gimp

Healthier histogram, due no doubt to not stretching the tonal range, while also providing over double the VSE import’s unique colours: 162044. Although the histogram is better, the colour count is down from the Blender nodes import.

.png export using ImageWriter from AVISynth in Gimp

Again a healthier histogram; unique colours = 161966, slightly lower than the AVISynth image sequence export, which was a surprise.
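For reference, the ImageWriter route is a one-liner appended to the script above; the output path here is just illustrative:

```
# Write every frame of the clip out as numbered PNGs
ImageWriter("frames/out_", type="png")
```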

A side-by-side comparison of the best results, which were from the AVISynth image sequence, and the best Blender could muster, via composite nodes.

As noted previously, subdued colours and a flatter histogram in the Blender nodes import. Blender’s histogram tool could do with some work; a 0-255 scale would be a good start.

The same, but with Lift and Gain adjusted in Blender to try to get the best overall match with the AVISynth image sequence export.

Using the Lift and Gain tools in Blender to try to get the levels similar to the AVISynth import on the left. Notice the flat base to the clipped-off blacks: are they unrecoverable even after lifting them? Has the info been crushed and lost? Also, what is going on with the combing effect, boosting localised values rather than spanning across the tonal range to achieve a histogram that matches the AVISynth one on the left? Is this due to 8-bit processing after the conversion to RGB? I’m considering comparing this against a ColorMatrix 601->709 in AVISynth; I think Blender’s FFmpeg build is doing a BT.601 import of my HD source, not BT.709.
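If I do run that comparison, the ColorMatrix call would be along these lines, assuming tritical’s ColorMatrix plugin is loaded:

```
# Reinterpret a BT.601-decoded source as BT.709
ColorMatrix(mode="Rec.601->Rec.709")
```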

The same adjusted levels image but with deinterlace activated in Blender.

Finally, the two above renders in Gimp.

Gain & Lift adjusted image.

Adjusted as above with deinterlace active.

Enabling deinterlacing appears to have improved the vectorscope reading over the previous adjusted image. Not sure why, as the images are .pngs, not video sources.

Regarding interlacing/deinterlacing in Blender: below is a render from a direct import of the .m2t video file into Blender, with deinterlace activated (as this is 50i video), and then frame 1 rendered as a .png.

Little or no difference in quality of output.

Conclusion:

Blender’s nodes import is better than the VSE’s. Understandable, I guess, as there is some sort of internal 32-bit float system going on in the nodes; I don’t think that has made it into the VSE yet.

But it would appear that if you want the full unscaled 0-255 range and no clipping of your Y values, then the best way to import video (at least MPEG-2) into Blender is via image sequences generated with AVISynth, where you have far more control over the colour space conversion.

Comparisons:

Kdenlive is also FFmpeg-based, so I tested and compared its .m2t import with Blender’s and AVISynth’s. To get a frame out of Kdenlive, in this case 7.5, I just used the ‘Extract Frame’ option in the clip window’s RMB menu.

Extracted frame in Gimp. Unique colours = 74860

Other comparisons:

AVISynth image sequence exported frame on the left compared to Kdenlive 7.5 frame extract from same .m2t file on the right.

A different pattern to the vectorscope in the Kdenlive image. Blacks nearly clipping; tonal range stretched over 80%.

Blender Nodes import of .m2t file on left compared to Kdenlive 7.5 frame extract from same .m2t file on the right

Similar levels and placement on the RGB parade, but the Blender import on the left is better.

Finally, the .m2t file imported directly into Blender’s VSE on the left, compared to the Kdenlive 7.5 frame extract from the same .m2t file on the right.

Very similar results, perhaps unsurprisingly, as they both use FFmpeg. I gave up with Kdenlive at that point. The scopes are treated as plugins in Kdenlive, which is bizarre, so you have to apply them to each piece of footage you put onto the timeline. There appears to be no decent way of setting black and white points, even if the import came in unscaled, and trying to view the scopes and the clip they relate to at the same time seemed impossible. No recognisable colour correction system either: no three-way, no curves.

**NEW TESTS**

FFmpeg version SVN-r19352-4:0.5+svn20090706-2ubuntu1

Using the CLI: ffmpeg -i movie.m2t -s hd1080 movie_%d.png

Healthier-looking histogram. Unique colours = 75509

Comparison between the FFmpeg CLI conversion on the left and the AVISynth image sequence on the right.

Again a stretched tonal range from the scaling, with clipped blacks as a result, and a sparse vectorscope. But the histogram does look better; would you see this type of change if FFmpeg had moved from BT.601 to BT.709 for HD material? Now if it just wouldn’t scale, then maybe a histogram the same as AVISynth’s? The unique colour count is still down, though.

Finally, for now.

Comparison between the FFmpeg CLI import on the left and Blender’s VSE import on the right.

Very similar results: tonal range stretched, blacks clipping. More colour values from the latest FFmpeg SVN CLI conversion than from whatever version of FFmpeg Blender 2.49a is using, but still not as good as the AVISynth route.

The latest FFmpeg gives 75509 unique colours, whereas AVISynth gives over double that with 162044, no stretching of the tonal range, no clipping of blacks, and a far healthier histogram.

Is FFmpeg using BT.601 to convert to RGB, even for HD source material?
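For reference, the two standards weight luma differently, which is why decoding HD with the wrong matrix shifts the colours:

BT.601: Y' = 0.299 R' + 0.587 G' + 0.114 B'
BT.709: Y' = 0.2126 R' + 0.7152 G' + 0.0722 B'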

It also looks like FFmpeg is scaling values [16-235] to [0-255], which may be correct if we were converting for immediate playback in a PC’s media player, but the import into Blender is for post-processing, a creative process.

When shooting I generally pay close attention to the zebra pattern set at 100%, that is 235 on the Y scale, the maximum legal white for broadcast TV, although many cameras capture all the way up the Y scale to 255, i.e. the super whites. Similarly, 16 is legal black for broadcast, but the Y scale goes down to 0.

FFmpeg appears to be scaling the 16-235 values to 0-255 on the RGB scale, stretching the tonal range in the process. Mapping 16 to 0 and 235 to 255 means that both the black values below 16 and the white values above 235 that the camera captured to tape or hard disk are clipped off and thrown away, or crushed, when FFmpeg converts from Y'CbCr (or YV12, whichever it is) to RGB for import into Blender.
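In AVISynth terms the stretch is roughly the equivalent of this Levels call, i.e. output = (Y - 16) × 255/219, clamped to [0, 255]:

```
# Roughly what the scaled import does to luma: 16 -> 0, 235 -> 255;
# coring=false treats values literally, so anything below 16 or above 235
# ends up clamped to 0 or 255 and is lost
Levels(16, 1.0, 235, 0, 255, coring=false)
```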

Why is there no FFmpeg option not to scale? Whatever information my camera captures, I want it available for me to decide what to do with the levels, rather than having some codec arbitrarily clip the values and throw useful information away.

And FFmpeg does appear to throw it away: the values don’t seem to be recoverable by adjusting levels after import. You just see a flat line appear at the top or bottom of your luma scope and RGB parade showing where it clipped, with no information above or below.

I’m using AVISynth to extract image sequences from my video files to bypass FFmpeg’s importing into Blender. I’d probably try Lagarith or HuffYUV instead of image sequences as an intermediate format, but Lagarith is not supported on Linux, and the HuffYUV implementation for Linux is, I think, part of FFmpeg, which is something I wish to avoid, currently.

Using AVISynth also allows you to choose your method of deinterlacing, if you need to, rather than relying on Blender’s preset button, as well as your choice of resizing method. Other filters for troublesome source material can be included, such as Deshaker or TempGauss for cleaning up DV material, or even Dan Isaacs’ SD2HD .avsi script, all before importing into Blender as an image sequence.
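As a sketch of what that kind of chain might look like for DV material (TempGaussMC_beta2 is Didée’s motion-compensated bob deinterlacer and needs its supporting plugins; the filename and filter picks are illustrative):

```
# A fuller chain for troublesome SD/DV material, all before Blender sees it
AviSource("dv_clip.avi")
TempGaussMC_beta2()           # motion-compensated deinterlace (bobs to 50p)
SelectEven()                  # back to 25p if you don't want double rate
Spline36Resize(1920, 1080)    # your choice of resize kernel for SD -> HD
ConvertToRGB(matrix="PC.601") # DV is BT.601; "PC" variant avoids the stretch
ImageWriter("dv_out_", type="png")
```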

Although I keep mentioning FFmpeg, perhaps more accurate might be to ask what Blender is asking of FFmpeg. If anyone knows how to specify the colour space conversion for FFmpeg to use, please let me know.

**END NEW TESTS**

**UPDATE**

I rendered out a .png from Blender’s VSE import of the .m2t and compared it with the AVISynth image sequence .png, but rather than show scopes and histograms, I’ve tried to give a more tangible illustration of what I think is a definite reduction in quality from Blender’s handling of the .m2t import. I think the loss in quality is due to Blender scaling [16-235] to [0-255], a not-so-good colour space conversion, and clipping.

But before I could compare the two images, I used Blender’s colour correction tool in float mode to adjust the .m2t import until the luma scope, RGB parade and histogram matched the AVISynth image as nearly as I could. I then rendered out a .png of the adjustments and checked it once again against the AVISynth .png export.

AVISynth Image Export on Left v Blender VSE Import of .m2t on Right.

Clouds. 800%

AVISynth_Left v m2t VSE_Right_Sky_800pc

Flowers. 800%

AVISynth_Left v m2t VSE_Right_800pc

Castle. 400%

AVISynth_Left v m2t VSE_Right_400pc

Castle. 1600%

AVISynth_Left v m2t VSE_Right_1600pc

Castle. 1600%. Gimp Histograms and Colour Cube.

AVISynth_Left v m2t VSE_Right_Values_1600pc

There does seem to be a loss of quality from Blender’s VSE import of .m2t files, and maybe other formats too, but none of the above quality loss is due to the choice of output encoder (I haven’t got that far yet), whether it be FFmpeg’s x264, HCEnc, TMPGEnc or Cinema Craft Encoder SP2.

The quality loss so far, I think, is a combination of a BT.601 import of HD material and the stretching of the [16-235] tonal range; it appears to be a damaging way of importing video into Blender. It isn’t a good start, and it will be interesting to see the impact on quality once the output encoder has emphasised some of the mangled imported material.

** END OF UPDATE**

I will be testing some old DV sources in a similar way.

Tests with Blender 2.5 svn including colour management to follow.

Please feel free to comment on the way I’ve tested, good or bad, improvements that could be made, and your own conclusions.