
VSE: DIY wipes

Blender has a couple of basic wipes for transitioning between shots, but with the addition of the new Mask-based strip modifier you can define a transition using any shape you want! Create a black and white gradient image (or high contrast if you prefer) and use that as a mask source (or matte, if you're old like me). Then change the 'blend' type of the B strip (the incoming strip) to Alpha Over and apply a Mask modifier that uses the imported artwork.

You can animate the contrast of the source art (the matte) so that it alters over time, or you could create an actual animation to use instead.
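If you prefer setting this up from the Python console rather than the UI, here is a minimal, hedged sketch of the same idea. It assumes strips named "shot_B" (the incoming B strip) and "wipe_matte" (an image strip holding the gradient art); those names are hypothetical, so substitute your own.

import bpy

# Hypothetical strip names -- point these at your own strips.
seq = bpy.context.scene.sequence_editor
strip_b = seq.sequences_all["shot_B"]
matte = seq.sequences_all["wipe_matte"]

strip_b.blend_type = 'ALPHA_OVER'            # composite B over the A strip

mod = strip_b.modifiers.new(name="Wipe", type='MASK')
mod.input_mask_type = 'STRIP'                # use another strip as the mask
mod.input_mask_strip = matte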

Here is a tutorial describing the workflow.
DIY wipe tutorial

VSE: Add titles, a quick tutorial

Blender's VSE doesn't have a titling tool! I know that may come as a shock; some would suggest it's a core feature for an editing application. But the Blender Foundation would remind you that Blender's VSE is really just a clip sequencing tool, that is, a device that allows you to shuffle rendered media and trim as required. In recent times you have been able to perform simple audio effects and enhance your images with robust color correction tools. You can even stabilise video now! (Ignore the lack of simple keying effects.)

Despite these restrictions, Blender does allow some quite advanced titling effects. Of course this is achieved via the 3D suite of tools. Basically, what I outline in this video is the merging of a 3D scene with video in a master edit scene. I suggest splitting off a new floating window, and I demonstrate how easily you can key and animate a basic text overlay.

Further to this tutorial, I have created an example of adding a drop shadow and outline to the text scene. I have used the compositor in that source scene. I found that the rendered scene played at almost realtime in the VSE master sequence.

Compositor frame grab, showing a drop shadow and outline on one layer.

You can get the .blend file here (on the original BA forum post). http://blenderartists.org/forum/showthread.php?290648-Text-in-the-sequencer&p=2367973&viewfull=1#post2367973

Pipe To RGB – Quick How To

Recently I posted about an AvsPmod macro by developer vdcrim for piping RGB to Imagemagick but didn’t go into detail on how to achieve this, so here goes.

First, why convert to RGB image sequences, and why 16bit? It's a plain and simple workaround for applications that require RGB import or, in the case of Blender, that convert to RGB at import regardless. If the NLE or grading application you choose does its color space conversion from YCbCr to RGB at 32bit float precision, then you probably don't need the 8 to 16bit conversion steps.

But more with regard to Blender: the color space conversion is done by ffmpeg at 8bit integer precision, with cheap interpolation, mapping the typical 16 – 235 luma and 16 – 240 chroma ranges to 0 – 255 RGB. Very few video cameras, particularly 'prosumer' models, use that restricted range; most mpeg2, mpeg4 and h264 videos captured by these cameras are 16 – 255 or even 0 – 255 YCbCr, so highlights above YCC 235 are pushed to white (RGB 255) and any shadows below YCC 16 are crushed down to RGB 0 in the conversion to 8bit RGB, typically giving you a high contrast output with a fair amount of artifacts at edges.
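As a rough illustration of that clipping, here is a small numpy sketch of the luma-only part of the 16 – 235 mapping (ignoring chroma and the matrix); the numbers are examples, not measurements from any particular camera.

import numpy as np

# 8bit "limited range" scaling: 16-235 is stretched to 0-255, so any camera
# luma above 235 or below 16 is clipped before it ever reaches RGB.
luma = np.array([0, 8, 16, 120, 235, 245, 255], dtype=np.float64)
rgb8 = np.clip((luma - 16.0) * 255.0 / 219.0, 0, 255).round().astype(np.uint8)
print(rgb8)  # [  0   0   0 121 255 255 255] -> shadow and highlight detail gone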

If you happen to be using Canon, Nikon or Panasonic GH3 h264 MOVs, then they're all JFIF, i.e. JPEG levels: luma and chroma normalized over the full 0 – 255 range, with the h264 'fullrange' flag set in order to signal that levels should be scaled before the conversion to RGB. ffmpeg respects the flag, so Blender's conversion to RGB is fine with regard to clipping, but the interpolation is still cheap Fast Gaussian rather than bilinear, bicubic or Lanczos, so edges suffer. Maybe the reason is minimal processing or just speed of import and playback, I don't know.

Why 16bpc? Because even a simple 8bit to 16bit conversion appears to perform much better when 'grading' in Blender and other applications like Darktable than simply importing 8bit into a 32bit processing workflow. Even better if we add deblock, denoise and discrete noise/grain processing steps before the conversion to 16bpc.

http://blendervse.wordpress.com/2013/03/12/pipe-to-rgb-avspmod-to-imagemagick/

Process.

First of all, a few prerequisites: install Avisynth, the Imagemagick HDRI MS Windows build (required for this macro), a few Avisynth plugins and the Pipe To RGB macro:

http://blendervse.wordpress.com/2011/09/16/8bit-video-to-16bit-scene-referred-linear-exrs/

http://blendervse.wordpress.com/2013/03/12/pipe-to-rgb-avspmod-to-imagemagick/

I then created a Project Folder with subfolders like this:

ProjectFolder

Within the 16bit folder I added a subfolder called "Scenes".

For this test I created a short video edit in kdenlive (http://www.kdenlive.org/) using Canon h264 remuxed to mp4 with a special build of MP4Box (why? http://blendervse.wordpress.com/2012/04/02/waiving-the-fullrange-flag/). As long as no filters or processing are added to kdenlive's timeline, it's possible to avoid any color space conversion to RGB (which would be at 8bit and not lossless). The edit was then rendered out to a lossless codec, ffv1, in a matroska container.

I added the source.mkv output file from kdenlive to the ‘Source’ Project Folder.

An Avisynth script is required next to process the source.mkv file.

LoadPlugin("c:\Program Files\AviSynth 2.5\plugins\mvtools2.dll")
#LoadPlugin("c:\Program Files\AviSynth 2.5\plugins\ffms2.dll")
LoadPlugin("c:\Program Files\AviSynth 2.5\plugins\fft3dfilter.dll")
LoadPlugin("c:\Program Files\AviSynth 2.5\plugins\removegrain.dll")
LoadPlugin("c:\Program Files\AviSynth 2.5\plugins\mt_masktools-25.dll")
LoadPlugin("c:\Program Files\AviSynth 2.5\plugins\dfttest.dll")
LoadPlugin("c:\Program Files\AviSynth 2.5\plugins\smoothadjust.dll")

source = ffmpegsource2("Source.mkv", threads=1)

return source#Gives us 8bit. Hash the line out for 16bit.

Dither_Convert_8_To_16(source) # 8bit YCbCr to stacked 16bit

Smoothgrad() # smooth gradients/banding at 16bit

Dither_convert_yuv_to_rgb(matrix="601", tv_range=false, cplace="DV", chromak="bicubic", lsb_in=true, output="rgb48y")
Dither_y_gamma_to_linear(tv_range_in=false, tv_range_out=false, curve="709") # linearize the output
Dither_convey_rgb48_on_yv12(SelectEvery(3, 0), SelectEvery(3, 1), SelectEvery(3, 2)) # pack the 16bit RGB planes for piping

Save the script as Source.avs into the ‘Source’ folder.

The script is very simple and does very little processing to give us 16bit RGB. It is possible to add many different processing stages dependent on the source file, such as a denoising pass, but for now, keep it simple. :-)

Just to mention a few of the settings within the ‘Dither…..’ lines of the script:

matrix="" relates to the luma coefficients, ie: the color matrix of the source file.
cplace="" relates to the chroma placement of the source file: DV, MPEG1 or MPEG2.
curve="" relates to the transfer curve applied to the source file: 709 or sRGB.
tv_range= relates to whether the source file luma is within the 16 – 235 8bit range (tv) or not. If unsure, amend the 8bit line to this: return source.histogram(mode="classic")

The "classic" histogram is a luma waveform; if you see any waveform outside of the black 16 – 235 zone of the histogram, your camera or other video source is 'full range', so set tv_range=false. Make sure to use a high contrast shot to test with.

The Dither_y_gamma line linearizes the output. Hash the line out if you prefer to leave the output with gamma applied.

Best to read the Dither.html file included in the plugin download to establish your settings.

The script serves two purposes:

1. With the “return source#….” in place we are able to view the 8bit decompressed frames without any processing overhead in order to shuttle through the video and set bookmarks for ‘Hero’ frames first and then set bookmarks for ‘Scene’ splitting.

2. Hashing out the “#return source#….” line invokes the remainder of the script, the conversion to 16bit per channel RGB.

Stage 1 – Hero Frames

Open the Source.avs script in AvsPmod. You should see the first frame of your video.

Use the forward and back arrow keys on your keyboard to shuttle through the frames and press ctrl+b to set a bookmark, ready to generate 'hero' frames for looks development. 'Hero' frames may be those that best describe a scene with regard to visual appearance, but maybe also the worst cases of shadow and highlight clipping.

Set Bookmark_Heros

Each time you add a bookmark a small black triangle appears below the timeline.

Once you are ready to export your hero frames add a hash to the start of the “return source#….” line, the whole line should turn green.

I've found that at this point, before starting the intensive 16bit conversion, using AvsPmod's 'Release all videos from memory' appears to help memory usage. This is on the RMB menu when in the video frame area of AvsPmod.

Once actioned the video preview will disappear but your bookmarks will remain.

Then choose from the Macros menu “Pipe RGB To Imagemagick”:

HeroFrames

The macro works by automating the splitting of our source file into manageable chunks (due to the memory overhead of processing) and batching the conversion.

Specifying ‘Split by frame step’ allows us to set the number of frames to have in memory at one time, 10 is a good start.

If you choose the 'Split by time step' alternative, then set a time step instead; if splitting by 'Number of intervals', set the number of intervals, ie: how many divisions of the source file.

For ‘Hero’ frame export tick the box for “Include only bookmarks if any”, this will export only the frames at bookmarks.

The Imagemagick processing arguments are:

-limit memory 500MiB -limit map 1GiB -depth 16 -size 1920x1080

Amend the frame size to suit your source files. Be aware that if using native Canon DSLR video, the frame size could well be 1088 lines in height, depending on camera model.

FFmpeg doesn't crop the bottom 8 lines whereas QT or similar codecs do. The kdenlive output in my test was based on Canon h264, but kdenlive via MLT crops the 8 lines off the bottom, hence -size 1920x1080.

Choose an output folder for the 16bit image frames. Following the Project Folder set up above the folder to choose would be ‘Heros’.

Name the file suitably and include the .tif file extension; if you prefer, you can export 16bit .pngs, amongst other image formats that support 16bpc.

Tick the box 'Add the padded frame number as suffix'; this will append sequential file numbering to the filename set earlier, separated with a hyphen. Just as an aside, if you intend to import the image sequences into an NLE such as kdenlive, a hyphen rather than an underscore must be used, as sketched below.
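If a sequence has already been written out with underscores, a quick rename along these lines brings it into the hyphenated form; the folder path here is purely an example.

import pathlib

# Rename Frame_0001.tif style files to Frame-0001.tif for kdenlive's
# image sequence import. Point the path at your own sequence folder.
for tif in pathlib.Path("ProjectFolder/16bit/Heros").glob("*_*.tif"):
    tif.rename(tif.with_name(tif.name.replace("_", "-")))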

Tick 'Show Progress' and click OK. After a minute or so your 16bit hero frames should land in the chosen folder, ready for import into whatever application you intend to 'grade' your video with, whether that be Darktable, Blender etc.

Stage 2 – 16bit Output & Scene Splitting

Once the 'Hero' frames are exported, unhash the 8bit line "return source#Gives us 8bit. Hash the line out for 16bit." so that we prevent the 16bit processing for now. If you added the histogram option, you can drop a hash in after "return source" to stop the luma waveform being drawn.

In AvsPmod go to the menu and choose Video -> Clear All Bookmarks.

We're now looking to export our movie to an image sequence, or preferably sequences, with the option to put each sequence of images into a named 'Scene' subfolder. We will undoubtedly be dealing with hundreds of frames per scene and thousands of frames in total, so for manageability and file manager performance, subdividing really helps.

With the movie loaded in AvsPmod once again create new bookmarks at the points in the movie where you’d like to do a scene split for possibly applying a different grade or ‘look’.

This time, however, your bookmarks need to be in pairs, like an 'in' point and an 'out' point. If you don't want to lose any frames in the 8 to 16bit export, bookmark the first and last frame of each 'Scene'; the result will be two bookmarks next to each other, e.g. frame 235 (last frame of one scene) and frame 236 (first frame of the next), and so on. This should also give you a bookmark on the first frame and the last frame of the movie.

If you want your Scene subfolders to be suitably named (the default will be sequential numbers), use AvsPmod's Bookmark Titling Tool in the menu Video -> Titled Bookmarks -> Title Bookmarks (Manual). A box will appear on the screen showing all the bookmarks ready for naming, but we only need to name the 'in' bookmark of each scene pair.

Title Bookmarks

Take care here: the dialogue is a little flaky. Make sure before you start that you expand it wide enough to view properly, particularly if you're using AvsPmod with Wine on Linux.

When the bookmark titles have been completed we're nearly ready to Pipe To RGB again, but first add that hash back to the start of the 8bit line ("#return source…..") to turn the line green and disable it, so that the 16bit processing steps after it are activated.

RMB in the preview window space and choose "Release all videos from memory"; the preview will become blank.

In the Macro menu choose, “Pipe RGB To Imagemagick” and we need slightly different settings ticked.

FramesToFolders

In particular, untick "Include only bookmarks, if any" (we no longer want just the 'hero' frames) and tick "Include only the range between bookmarks, if any". This option also includes the frames on bookmarks, so if bookmarks have been paired correctly as described previously, all frames should export.

The Imagemagick arguments remain the same, but choose a new folder for exporting the 'Scene' subfolders to; if following the Project Folder setup, this will be the 'Scenes' folder.

Name the file suitably; I find Frame.tif fine for this. Again the sequential frame number with hyphen will be added, to become Frame-001.tif etc.

We keep the boxes ticked for “Add the padded frame number as suffix” and “Show Progress”.

Tick the box for “When using bookmarks save every range to a subdirectory”

When ready, click OK. After a short while the macro's progress window will appear, showing how many batches have been calculated based on the 'step' value you chose, along with an estimated time based on the time taken to process the first batch. Depending on processing power this could take a while; in the meantime we can create the 'Looks' ready to apply to the 16bit frame sequences when the export is complete.

For example: http://blendervse.wordpress.com/2013/03/21/darktable-for-video/

For final encoding we have numerous options such as:

Export graded 8bit images from DT and encode directly via ffmpeg to a suitable video codec, or include minor additional 8bit processing via Avisynth, such as conversion to YCbCr, luma sharpening and adding grain.

Export graded 16bit images from DT and import them into Blender's VSE for adding wipes, transitions and titling, tweaking the grade, compositing and adding sound tracks (including Jack Transport sync with DAWs such as Ardour), then encode out via Blender's ffmpeg interface, or frameserve RGB from Blender to Avisynth for conversion to YCbCr, luma sharpening, adding grain, dubbing audio and the final encode.

Darktable For Video

After hearing Peter Doyle, the colorist and color scientist responsible for the color scoring of many films (including at least the last five Harry Potter movies), describe typical color grading based on adjusting a 1D LUT, like an RGB curve or Lift Gamma Gain, as being a '1D Look Up Table Jockey', I was intrigued by the approach he describes in the FXGuide interviews: processing more akin to photo development and photography-based manipulations than to typical digital video manipulations.

http://www.fxguide.com/quicktakes/peter-doyle-on-complex-feature-film-grading/

http://www.fxguide.com/fxpodcasts/the-art-of-grading-harry-potter-peter-doyle/

I had also seen discussions regarding Adobe Lightroom 4 'support' for digital video formats, aimed at using Lightroom-specific tools for video processing. Yes, Adobe Camera RAW is available in more video-centric products for handling RAW video from cameras such as Blackmagic's Cinema Camera, but I think there are tools within a typical RAW editor that are equally 'valid' for LDR image formats and therefore for typical 8bit video.

Considering the above, I contemplated the limited availability of such tools in the open source world, specifically on Linux, and the future need for a Linux based solution for developing RAW video, or more specifically for handling RAW image sequences from cameras such as the BMCC: playback, setting in/out points, RAW development, creating, storing and applying 'looks' to image sequences, and lossless export to an NLE. I thought I'd briefly investigate Linux RAW editors to see how close or far we are from this, and Darktable appears to offer the most so far, thanks to what appears to be an extensive range of interesting tools and active development.

http://www.darktable.org/

I first tested a straightforward 8bit png frame, extracted from an h264 video, in DT's 32bit linear float, OpenCL enabled processing pipe and found that even a simple curves manipulation gave the same combed-histogram 8bit output as Blender's OpenCL based nodal compositor. No surprise there. Basically, what I wanted to establish was whether the 8 to 16bit conversion discussed in previous posts was beneficial for the Darktable test. Conclusion: yes.
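For anyone unfamiliar with the term, a 'combed' histogram just means gaps between the surviving levels; a quick numpy sketch (a made-up gamma curve, not DT's or Blender's actual processing) shows how grading at 8bit throws levels away:

import numpy as np

# Apply a simple gamma curve to an 8bit ramp and count the surviving levels.
ramp = np.arange(256, dtype=np.float64)
graded = np.round(255.0 * (ramp / 255.0) ** 0.8).astype(np.uint8)
print(np.unique(graded).size)  # fewer than 256 -> gaps ("comb teeth") in the histogram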

So I started with a quick edit of a few Canon MOV files in kdenlive and exported it as a lossless ffv1 video in a matroska container, opened it in AvsPmod, set bookmarks for 'hero' frames and piped 48bit RGB at the bookmarks to Imagemagick, resulting in five 16bit linear tifs as output. I use the term 'hero' frames tongue in cheek, they're far from heroic. :-)

The frames were then imported into Darktable to create a ‘Look’ for each scene or in DT’s terminology a ‘Style’. http://www.darktable.org/usermanual/ch02s02s10.html.php

Frame_Import

Creating a ‘Style’ in DT.

Create_Style

Each style saved with a unique name will then be accessible from Darktable for future use. As you can see I didn’t spend much time or judgement applying the styles for this initial test. :-)

Styles applied to ‘Hero’ frames.

Styles_Created

Back in AvsPmod, I created new named bookmarks at the scene changes within the video and piped 48bit RGB to Imagemagick; this time the bookmarks were used by vdcrim's Pipe RGB To Imagemagick macro to split the frame sequences into named subfolders for each scene. Once the export was complete I imported the 16bit image sequences contained within the folders into Darktable using the recursive search option. I'll be making Darktable related comments regarding the whole process in a later blog post.

Image sequences in DT.

Frame_Seqs_Import

I then used Darktable's 'Grouping' feature: select the first frame, shift-select the last frame of each scene sequence and press ctrl+g to make a group, recreating the 'folder' principle. A 'Folder to Group' import option would really help in this situation. More comments to follow. :-)

Image Sequences ‘Grouped’

Grouped_Frame_Seqs

The result is five groups corresponding to five scenes, ready for each of the previously created 'Styles' to be applied to its scene group. However, applying a style to the contents of a group is not currently an option (version 1.1.4 from the Darktable PPA for Ubuntu).

In order to apply a 'Style' to a group's contents, the group has to be expanded; once again select the first and shift-select the last frame in that group. This is tedious because the thumbnail regeneration process prevents selecting a frame until its thumbnail image is present. More on this in the following Darktable comments post.

Once all frames are selected, the 'Style' can be applied. Notice that we can choose to untick particular operations, such as adding grain or sharpening; both are better applied with motion taken into account, in a tool like Avisynth, as part of the final encode out to the delivery codec. Certainly, adding grain over frame sequences in DT results in a 'screen door' effect with static grain, and sharpening is better suited to luma only. Repeat for each group, being careful not to apply a style more than once, because it's very easy to apply the same named style repeatedly to a frame or frame sequence.

‘Styles’ applied to grouped image sequences.

Apply_Styles_To_Seqs

Exporting image sequences for encoding presents the same problem with 'Groups': simply selecting a group and exporting to an image sequence isn't possible, as only the first frame is written out. Again the group has to be expanded, the first frame selected and the last frame shift-selected (once again waiting for thumbnail regeneration), then exported. More comments on this to come in a later post.

Export options are templated which is very useful and there are numerous image format options. I’ll add a section on encoding via Blender Frameserver and Avisynth in a later post.

Export_Options

Preliminary Conclusion:

Darktable's developers do not suggest that DT is great with image sequences; it's a RAW editor that just happens to have a wealth of functionality, not only for image processing but also database features, tagging, metadata etc, and there are great touches and attention to detail throughout.

More to follow.

Pipe To RGB – Avspmod To Imagemagick

Just a quick update to a previous post regarding 8bit to 16bit conversion. Previously http://blendervse.wordpress.com/2011/09/16/8bit-video-to-16bit-scene-referred-linear-exrs

The main issue with the process is that it's memory intensive, and that avs2yuv works by requiring the whole video in memory before starting processing, rather than taking one frame of video at a time into memory.

Previously it would be necessary to use the Avisynth 'Trim' command to split a video into manageable frame range chunks via numerous avs scripts, and to process each script in a batch, all based on available processing power and RAM.

A while ago the main AvsPmod developer, vdcrim, kindly created a script based on my need for an automated approach, using AvsPmod's Python based macro authoring feature. The Pipe To RGB macro is downloadable here (Right Mouse Button, Save Link As):

https://github.com/vdcrim/AvsP-macros/raw/master/Pipe%20RGB%20to%20ImageMagick.py

And AvsPmod from here:

http://forum.doom9.org/showthread.php?t=153248

So now a video can be loaded into AvsPmod and the whole lot piped to Imagemagick as 16bit RGB via an automated macro which can be set to split video frame ranges by ‘Steps’, ‘Time’ or ‘Intervals’.

Additionally, AvsPmod's bookmarking option can be used to set scene changes to suit the user, and the Pipe To RGB macro will not only batch in 'Steps', 'Time' or 'Intervals' but will then subdivide the output into sequentially named 'Scene_001' folders based on the bookmarks.

There's also an option to add additional Imagemagick processing arguments and to set a destination folder and sequential file name prefix.

PipeToRGB

And why convert to RGB, and at 16bit? Perhaps test using the files in the zip below to see if you think it's worthwhile after editing, before color grading and finishing.

Here's a link to a zip containing 16bit frames from an 8bit Canon DSLR MOV.

http://dl.dropbox.com/u/74780302/Bee_Frames.zip

And a quick illustration of one basic RGB Curves adjustment in Blender's OpenCL nodal compositor, on the original 8bit MOV file and on the 16bit output options as in the Frames.zip above.

MOV

Original 8bit MOV (Above)

8To16

Quick 8 to 16bit Conversion, with no further processing. (Above)

8To16_SmoothGrad

Quick 8 to 16bit Conversion with addition of Smoothing Gradients at 16bit with no further processing. (Above)

MCTD_Enhancements

Using the MCTDmod Avisynth script for motion compensated temporal processing at 16bit, including denoising, stabilizing shimmer in flat areas, enhancement with GradFun2DBmod to reduce banding and blockiness, and adaptive luma sharpening using LSFMod. MCTDmod is processor intensive; there is an option to use the GPU for some of the script components.

**EDIT**

One option I felt was missing was the ability to extract 'Hero' frames at 16bit. Well, vdcrim has kindly updated the Pipe To RGB macro today to include a feature that pipes just the frames on bookmarks to Imagemagick, again as 16bit RGB, using the process from the 8 to 16bit EXR post. Since the 8 to 16bit conversion of a whole movie can take some time :-) depending on the processing involved, we can first set bookmarks for 'hero' frames for each scene (those most representative of each scene), pipe those single frames to Imagemagick and use the 16bit output in Blender to start 'look' development for the grading process whilst we wait for the full conversion of the whole movie to 16bit image sequences to take place.

Lost in linear space

Lostinspace1

If you work with Blender but have used other video editing software before, especially if you used an effect like Blender's RGB curves, you may have noticed that the curves in the compositor (and indeed the whole color management) seem to work a bit differently. That's because Blender works in a scene-referred linear color space.
Huh, if this sounds all G(r)eek to you, you can start here to get a little insight:
Enlightenment 1
Enlightenment 2
Enlightenment 3

While the linear workflow has many advantages, especially when working with CGI content, in some situations when you work with videos or images you may want the good old gamma-corrected control back. For example, many grading tasks with the curves require exact fine-tuning of dark tones, and this gets very difficult when working in linear space!

There are two fast ways to work around the “problem”:
1. Change the input color space of your input node to linear and set the render color space in the Scene Properties to Raw.
2. (The better one) Before the node(s) you want to use in gamma-corrected space, add a gamma node (we call it g1) with the value 1/2.2, and after your correction nodes add another gamma node (g2) with the value 2.2. The first gamma node re-encodes the linear values into a gamma-corrected space, so the nodes in between effectively work on gamma-corrected data, and the second gamma node converts the result back to linear. To put it simply ;-)

For you, the second way means that you can decide which nodes in your composition work in linear space and which do not. So you get the advantages of Blender's cool color management without suffering from its drawbacks.

lostinspace2

What I did in the screenshot: I added a value node which is directly connected to g2, and on its connection to g1 there is an inversion (1/value), so we simply put a gamma value into the Value node and control what assumed gamma-corrected space the Curves effect works in. This gives us even more control.
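For reference, here is a minimal, hedged sketch of that node setup built from the Python console; it assumes you run it in the scene whose compositor you are working in, and the values and hook-up points are only examples, not the exact setup from the screenshot.

import bpy

scene = bpy.context.scene
scene.use_nodes = True
nodes, links = scene.node_tree.nodes, scene.node_tree.links

g1 = nodes.new('CompositorNodeGamma')         # encodes linear data to gamma space
curves = nodes.new('CompositorNodeCurveRGB')  # works on gamma-corrected values
g2 = nodes.new('CompositorNodeGamma')         # converts back to linear afterwards

# Drive both gamma nodes from one Value node, as in the screenshot:
# g2 gets the gamma value directly, g1 gets its inverse (1/value).
gamma_val = nodes.new('CompositorNodeValue')
gamma_val.outputs[0].default_value = 2.2
invert = nodes.new('CompositorNodeMath')
invert.operation = 'DIVIDE'
invert.inputs[0].default_value = 1.0

links.new(gamma_val.outputs[0], invert.inputs[1])   # 1 / value
links.new(invert.outputs[0], g1.inputs['Gamma'])
links.new(gamma_val.outputs[0], g2.inputs['Gamma'])
links.new(g1.outputs['Image'], curves.inputs['Image'])
links.new(curves.outputs['Image'], g2.inputs['Image'])
# Hook your source into g1.inputs['Image'] and take g2.outputs['Image'] on into
# the rest of your composite.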

Now have fun with that!
Björn

VSE: Get Audacity, add meters!

Ok so I don’t use Linux or run Jack in Blender (for syncing a DAW), and Blender doesn’t have the most basic sound feature (unless you count waveforms). Blender doesn’t show audio metering! Which is odd given how great the vision metering is!

Recently I knocked up an example of sound editing in the VSE; you can see it here:

But I wondered how I could mix a variety of sounds together so that they all balance out against each other while not exceeding the topmost limit for digital audio, i.e. 0dB.
Well, I have a portable version of Audacity from here http://download.cnet.com/Audacity-Portable/3000-2170_4-10608458.html and I wondered if I could just pass my Blender mix through it. It turns out that you can; you just need to check that your levels match between the two apps.

Play 0db media from Blender and line up the input value in Audacity to peak.

Blender_Audio_meter01

In this example I have Audacity running in the background with the mic button active for loop through. In the preferences I turned off overdub and loop through, but checked that the input was from Stereo Mix, not Mic. Then I dragged the meter bar out of the main window and resized it down the side. I shrank Blender up a bit; I guess I could also overlap the Audacity output meter, as it is inactive and we only want to see the input meter here. Once your mix is done, export it as a Blender Mixdown (see the tutorial above) and import it into Audacity for a bit of compression, that is, lifting the silent parts and squashing the loud parts automatically. This makes podcast conversation easier to listen to ;)
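As a rough picture of what that compression step does to the levels (a made-up static compressor, not Audacity's actual effect), here's a small sketch:

import numpy as np

# Levels above the threshold are squashed by the ratio, then makeup gain lifts
# everything, so quiet passages come up and loud peaks are tamed.
def compress_db(level_db, threshold=-18.0, ratio=3.0, makeup=6.0):
    over = np.maximum(level_db - threshold, 0.0)
    return level_db - over * (1.0 - 1.0 / ratio) + makeup

print(compress_db(np.array([-40.0, -20.0, -6.0, 0.0])))  # roughly [-34, -14, -8, -6]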

Is it just video compression that kills detail?

Following 3points' comment on the removal of the DNxHD bitrate cap in Blender, and the discussion on the linked bug tracker entry with regard to DNxHD intermediates:

http://projects.blender.org/tracker/?func=detail&atid=498&aid=33499&group_id=9

It brought to mind a couple of old posts I’d forgotten about:

http://blendervse.wordpress.com/2010/07/11/simpleslugupscale/

http://blendervse.wordpress.com/2010/03/28/full-range-video-into-blender-hdv-pal/

And a bit of a revisit.

Many video cameras, particularly the 'consumer' variety, capture and encode luma across an 8bit levels range greater than the rec709 (and earlier specifications) range of 16 – 235 for luma and 16 – 240 for chroma.

Generally these cameras use the 16 – 255 8bit levels range, but there are numerous point and shoots which encode using the full 8bit range and don't flag it as such. Neither, however, is strictly to a video 'standard'.

There are a number of cameras, such as the Canon and Nikon video shooting DSLRs and the Panasonic GH3 (h264 MOV profiles), which capture and encode to the JFIF standard, where luma and chroma cover the full 8bit levels range. The encoder, however, adds the h264 VUI 'fullrange' flag to the header of the h264 stream to signal to the decompressing codec that the full 8bit range should be squeezed into the rec709 16 – 235, 16 – 240 levels range at playback, or at import into an NLE, before any color space conversion to RGB for viewing, color processing, filters and effects.

So what happens to these 16 – 255 or unflagged full range camera video files, and why does it matter? You may decide it doesn't, but it's something to be aware of nonetheless.

To simulate the 'full range' levels found in numerous camera video files I created this image, which contains text with RGB values 16 and 255, a gradient from 8bit level 235 up to 255 [White] (our highlight roll off to the hard clip at 255), and a gradient from 8bit level 16 down to 0 [Black] (detail in the shadows).

Original
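If you'd like to knock up a similar test frame yourself, here's a rough numpy/PIL sketch (not the exact image used above; the text elements are omitted and the layout is arbitrary):

import numpy as np
from PIL import Image

h, w = 1080, 1920
img = np.full((h, w), 128, dtype=np.uint8)  # mid grey background

# Highlight roll off: a gradient from level 235 up to the hard clip at 255.
img[h // 4 : h // 2, :] = np.linspace(235, 255, w, dtype=np.uint8)[None, :]
# Shadow detail: a gradient from level 16 down to 0.
img[h // 2 : 3 * h // 4, :] = np.linspace(16, 0, w, dtype=np.uint8)[None, :]

Image.fromarray(np.dstack([img, img, img])).save("full_range_test.png")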

The image was then encoded to h264 specifying that the full 8bit levels range was to be used in the YCbCr (YCC for short) video file.

http://dl.dropbox.com/u/74780302/Original.mp4

The image used in the encoding is RGB, and the 'correct' way to encode RGB to YCbCr is to use only the 16 – 235 levels range in YCC. So the relationship between the 8bit RGB and YCbCr levels ranges would be: 16 YCC is black, equivalent to RGB 0; 235 YCC is white, equivalent to RGB 255.

But as the purpose is to simulate a native camera file with luma levels outside of 16 – 235 the ‘correct’ way has not been followed.

Playing this video in a typical media player will result in nothing more than black and white horizontal bars, not representative of the source image used in the encode. But that gradient shadow and highlight detail is there in the encoding; it's simply not shown, because the typical media player takes only the 16 – 235 luma range when converting to RGB for playback and thus crushes shadow levels from 16 down to 0 [Black] and compresses highlight detail from 235 up to 255 [White], giving us the black and white horizontal bars.

To confirm the detail is there in the original.mp4 file here’s a waveform of a frame via Avisynth:

Original_mp4_wv

Full range levels aren't really a problem for a decent modern NLE. It will preview the same as a media player, ie: black and white horizontal bars, and ultimately any YCC output for delivery from the NLE should have its luma constrained within the rec709 16 – 235 8bit range, which these camera files do not. But handling them will generally be a non destructive operation, because the color space conversion from YCC to RGB for display and color processing in a modern NLE is done at 32bit float, not at 8bit like a typical media player or an image sequence output via ffmpeg, for example.

The color space conversion at 32bit is generally non destructive, so the 'full range' levels, including those gradient shadows and highlights, are still there; they're just not visible until some levels manipulation is done in the color correction or 'grading' process, pulling that detail into display as it enters the 16 – 235 levels range. This can be a simple arbitrary levels mapping 'effect' like a PC to Video levels filter, or part of the 'grading' process via curves or three way color corrector type tools.
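A small numpy sketch of why the float conversion is recoverable: out of range luma just becomes 'illegal' float values instead of being clipped (again this is only the luma part of the conversion, for illustration).

import numpy as np

# At 32bit float the limited range conversion keeps the out of range values:
# luma 0 lands around -0.073 and luma 255 around 1.091, so a later levels or
# curves adjustment can still pull that detail back on screen.
luma = np.array([0, 16, 235, 255], dtype=np.float32)
rgb_float = (luma - 16.0) / 219.0
print(rgb_float)  # roughly [-0.073  0.  1.  1.091]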

Blender, however, relies on ffmpeg to decompress the YCC video AND to do the color space conversion to RGB during the video import process.

As a result Blender receives 8bit RGB data from ffmpeg, and as you've probably guessed, the levels between 16 and 0 (the shadow detail) and the levels between 235 and 255 (the highlight detail) are both destructively crushed to black and white respectively, just like the default preview of black and white horizontal bars.

There is no chance of manipulating the levels of the RGB data in Blender by means of curves or a three way color corrector to pull the 'out of range' levels into play; that detail has gone, lost in the import process.

So the idea that we can import native YCC video files into Blender, edit, and export to some high bitrate, uncapped intermediate codec, at a file size greatly larger than the original, as a 'lossless' or even 'visually lossless' intermediate step, can be rather misleading, depending on the camera native source files used.

Here’s a high bitrate DNxHD encode out of Blender from the very same h264 Original.mp4 source file:

http://dl.dropbox.com/u/74780302/DNxHD.mov

If we put these two files into a grading application like DaVinci Resolve, both will preview with black and white horizontal bars, but only one of the files will reveal the out of range shadow and highlight detail when a levels or curve adjustment is made on it.

Guess which. :-)

To conclude, here are waveforms via Avisynth of both the Original.mp4 and Blender's high bitrate, uncapped DNxHD.mov:

Original_mp4_wv

DNxHD_wv

Sure, compression can kill detail, but so can poor levels handling and 8bit color space conversions, and no amount of high bitrate encoding will bring it back.

Have you ever wanted to edit some video in the VSE and send some of it to the node compositor? I have, and I thought you could make a script to do it; someone else thought that too, and now you can.

Find the script in this BA thread (version 3 is best): http://blenderartists.org/forum/showthread.php?221567-Edit-strip-with-compositor

At the moment it is not an addon; grab a recent version of the script and place it in the Blender addons folder, then open User Prefs and activate it. Remember that the strip you send must have a short name too, like "shot-1" for example. You run the script, select the strip (or strips if it is a key effect) to send, and press the Send to Composite button in the strip properties panel. You can even send stacked strips or a series of (end to end) strips! And a new feature lets you set up the shots with a predefined Node Group!

The script will generate a new scene and open a compositor with the right media linked, using the correct start frames too! Back in the VSE scene your edit will get a new scene strip above the old media strip. Check the tutorial for the right setup so that you can scrub the composite in the VSE (very slowly).

Here is the version that I used for the tutorial: http://blenderartists.org/forum/showthread.php?221567-Edit-strip-with-compositor&p=2236974&viewfull=1#post2236974

This time I made the tutorial with a voice over!

Jump to video tutorial

 

Editing: Some thoughts on music

I was just speaking to a colleague who asked how I select music tracks for the short films we make at work.

To be clear, I work at a TV station making news stories, the short 5-7 minute factual pieces that are referred to as Current Affairs (or Caff). These stories are characterised by a journalist's voice over interspersed with brief interviews (i.e. parts of longer interviews). All of this is coloured with overlay (often called B-roll). In the background there is NatSOT (Natural Sound Off Tape), and occasionally there will be an UpSOT (Up Sound Off Tape, i.e. increased volume) to feature a motivated sound recorded in the field.

Sometimes the talent, or the footage collected, will not carry the intent sufficiently. The interviewee may be a poor speaker or the overlay might not reproduce a dramatic event. The story may even be… boring. Yes this can happen, especially when the story is related to finance or politics. At these times something extra may be required to lift the emotional connection with the audience. The easiest way to achieve that is via music.

Where I work I am lucky to have access to a range of licensed production music that my employer pays for each year. This is high quality copyrighted music, which can be selected to fit easily into any emotional hole that we may create. I have terabytes worth of cues (music tracks) to choose from. But this presents a problem… one of choice.

Anyway my suggestion to my co-worker was this, “break it down to Verbs”.

Try to infer a motivation for your music by describing a way to get there, not just an outcome. So if you have some footage of aircraft getting ready to fly, don't search keywords like "triumph" or "confidence"; try "soaring" or "countdown". In this way you narrow down your needs based on an activity instead of a feeling. The problem with feelings is that they can be too broad, which compounds the choice issue.

From there I suggested listening to the results of the first pass and searching for mood. Do you want to convey weight or worthiness? Is it sombre or bright? Perhaps it should be positive and cheery. These effects are often tied to the style of the music, from acoustic to techno, classical to rock n roll. Period choice can substantially influence tempo and feeling.

Finally, I think that you need to consider tempo. That is the speed, or a change in speed/timing (or emotion). Look for sustained passages that serve as a fairly neutral constant; they can fill long passages of dry content without distracting from the key message. Of course, look out for interesting changes too; think UpSOTs.

Before I wrap this up, there is one actual rule that is worth considering: do you even need the music at all (perhaps you can cheat in NatSOT from somewhere else)? If you do, then Know When to Lose It!
