
Introduction

DE:Noise is a tool that removes excessive noise from an image sequence so it looks better when you play back the video.

DE:Noise handles a wide range of spurious frame-to-frame incoherences, ranging from fine digital/electronic noise to blotchy spots (e.g. dirt on the film), in one simple, easy-to-use tool. DE:Noise combines motion estimation techniques with state-of-the-art feature-sensitive (edge-preserving) spatial filtering methods to reduce the visual impact of various moving-picture problems such as: noisy video (as seen in many low-light capture contexts: sensor noise plus compression noise), excessive film grain, computer graphics renders affected by ray-tracing sampling artifacts, fingerprints and dust captured during film scan/transfer and printing, electronic snow and drop-outs.

Unlike other tools, there are no content-dependent windows to set to profile the noise, nor other similarly intensive per-shot user interaction. Once the handling of a particular type of shot noise is set up, the tool can be used in a pipeline and applied to a large number of similar shots without too much worry.

This is the common manual for the After Effects-compatible version (including After Effects and Premiere), the FxPlug version (for Final Cut Pro and Motion), as well as OpenFX (OFX) hosts, which include Blackmagic Fusion, The Foundry's Nuke and Sony Vegas Pro, for example. The differences between the versions are minor and are highlighted where they are not the same. Internally it is the same processing engine in all versions.

So our tutorials often apply to all of them: http://help.revisionfx.com/search/?p=65

Internally, DE:Noise uses RE:Vision Effects' Academy Award-winning optical flow technology to estimate the motion of each pixel between two frames. To create a noise-reduced result, DE:Noise uses the estimated per-pixel motion to warp the surrounding frames to match the current frame (i.e. to locally motion-compensate the surrounding frames), and then combines these intermediate frames with the current frame using one of 8 included temporal mode options to merge the intermediate results. For special contexts where temporal coherence is not sufficient, DE:Noise also provides robust spatial (single-frame) denoising algorithms. Spatial and temporal processing can be combined and proportionally fine-tuned for better results. DE:Noise supports field-based material where the host application cooperates, and it allows you to mark cut points so that the temporal processing does not attempt to work across cut boundaries (or even be affected by a flash frame).
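
For readers who think in code, here is a minimal Python sketch of the motion-compensated temporal averaging just described. The estimate_flow and warp functions are hypothetical placeholders standing in for RE:Vision's proprietary optical flow and warping; this is an illustration of the idea, not the actual implementation.

    import numpy as np

    def temporal_average(prev, cur, nxt, estimate_flow, warp):
        """Warp the neighboring frames onto the current frame, then mix the three.
        prev/cur/nxt are float image arrays; estimate_flow and warp are placeholders."""
        flow_prev = estimate_flow(src=prev, dst=cur)  # per-pixel motion prev -> cur
        flow_next = estimate_flow(src=nxt, dst=cur)   # per-pixel motion next -> cur
        prev_warped = warp(prev, flow_prev)           # motion-compensated previous frame
        next_warped = warp(nxt, flow_next)            # motion-compensated next frame
        # "Average" temporal mode: blend the three aligned frames equally
        return (prev_warped.astype(np.float32) + cur + next_warped) / 3.0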

Combining motion-compensated images in this way usually produces results that are sharper than processes that rely only on spatial filtering, and it is usually also resistant to some inter-frame flickering. And yes, there are times when spatial filtering is necessary, for example when a sequence (or a part of it) is problematic to track. As such, we also provide feature-sensitive filtering options that attempt to smooth small artifacts while at the same time preserving the integrity of object edges.

After all these feature-preserving noise-reduction methods are used, the final image may be too soft for your taste. As such, DE:Noise also integrates contrast and sharpening enhancement for more complete, one-stop control of the final look.

DE:Noise can even be useful for generating better-looking compressed video, for helping other image processing tasks that are noise sensitive (e.g. resizing an image), and for generating a cleaner static still image from video.

Release Notes: Major Landmarks


This is a short reminder, for people who used this tool early on, that much has changed. We regularly post new updates. Please consult our website (particularly the download section) for a history of all changes if you happen to load an old project.
http://revisionfx.com/products/denoise/downloads/

NEW in 2.0: V2 adds an additional input that allows you to apply DE:Noise to one clip but use an alternative clip as the tracking source.
NEW in 2.1: Added controls to help suppress residual salt-and-pepper type noise.
NEW in 2.2 (except FxPlug version): Added a new Temporal Threshold mode. Note you will get a slightly different result by default. If you are loading an old project (done with a previous version), you need to set Temporal Threshold Mode to Ignore Pixels over Threshold.
NEW in 3.0: Added GPU support, improved post sharpening, and added a new, more robust spatial denoiser. New post options in Vegas and FCP X. Now properly works in floating point in Premiere (maximum bit depth).

Although we try to keep this tool compatible over time so that old projects work without issue, note that there can still be small rendering differences between versions as we improve the accuracy of the tool.
Additional Useful Notes
DE:Noise supports 8 and 16 bits per channel and floating-point processing.
The DE:Noise V3 plugin has been tested on Mac, Windows and Linux. 32-bit applications are no longer supported, but one can download the previous version and contact techsupport@revisionfx.com for additional info. This applies to users of Final Cut Pro 6/7, Combustion, Toxik/Max/Maya Composite and other applications that have been discontinued.

Not all hosts and versions are supported. As we continuously upgrade our software, please consult the following page for the latest supported applications and versions:
http://revisionfx.com/products/denoise/

Installation Folders
If you have special concerns about where the plugins actually live (or can live), see:
http://revisionfx.com/support/faqs/purchasing_faqs/where-are-my-plugins-installed/
DE:Noise Controls

Input Source:
This is the main variation between versions, as we support multiple applications and not all applications work the same in that regard. You will get a red frame if the input is not connected. You will get a green frame if there is an issue with the GPU settings. Consult our FAQ page if an issue comes up with your application.

In some applications we have chosen to allow an optional ALT Track Source input, which will use that clip to compute the motion.

GPU Processing:
Click here to read about our GPU support.

Pre Processing Controls:


PRE-PROCESSING SECTION CURRENTLY DOES NOT APPLY TO SOME HOSTS, INCLUDING SONY VEGAS AND FINAL CUT PRO. FOR HOST APPS WHERE DE:NOISE DOES NOT DISPLAY PREPROCESSING CONTROLS, YOU CAN JUMP TO THE SPATIAL NOISE REDUCTION SECTION.

When shooting in lower-illumination contexts, it can sometimes help to increase the dynamic range of the image or redistribute the values in some way in order for DE:Noise to see the random noise or other similar artifacts. Note that we provide some simple contrast enhancement options, and that these pre-processing controls are reversible (using the appropriate selection in the post-processing controls). The basic idea is that you can use contrast enhancement to help the tracking and the noise-reduction process, and then undo the preprocessing enhancements post denoising.

However, sometimes it's not the proper thing to do. See the pre-processing discussion in the Discussion section.

Pre Processing:
We provide the following methods to adjust contrast.
None: Performs no preprocessing. None also provides a quick way to turn off preprocessing without having to set the Pre Contrast % to 0.0.
Contrast using global avg: Contracts or expands colors towards or away from
the image average color.
Contrast using mid-grey: Contracts or expands colors towards 0.5 mid-grey.
This is what traditional color contrast tools do.
Contrast using global avg is usually more useful, except in some particular
cases such as a big white object appearing in a dark scene. In such shots, the
traditional contrast (to mid-grey) can be used.

Of course, changing the contrast also changes the noise characteristics.

Note that setting Pre-Processing to anything other than None forces DE:Noise to work internally at 32-bit floating point per channel (even if the source is 8 or 16 bits per channel). This will increase the memory requirements in some applications.

Pre Contrast %:
A value under 0 contracts the colors where a value of -100% produces a flat
color. A value over 0% will expand the contrast range (moving some values
towards black or some towards white).
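
As a point of reference, the behavior described above can be approximated with a formula like the following Python sketch. This is an assumption on our part; the exact curve DE:Noise uses internally may differ. Values are assumed to be normalized to 0..1.

    import numpy as np

    def pre_contrast(img, amount_pct, mode="global_avg"):
        """Contract (amount_pct < 0) or expand (amount_pct > 0) colors around a reference.
        -100% collapses everything onto the reference color; image values in 0..1."""
        ref = img.mean(axis=(0, 1)) if mode == "global_avg" else 0.5  # image average or mid-grey
        scale = 1.0 + amount_pct / 100.0  # -100% -> 0 (flat color), +100% -> 2x spread
        return ref + (img - ref) * scale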

Source footage (left), contrast-enhanced version (right).

When you enhance the contrast, you also typically make the noise much more noticeable, which might not be visually pleasant (and this tool is DE:Noise, not RE:Noise). Note DE:Noise has post-processing controls (described later) whose purpose is to: a) undo the pre-processing contrast enhancement applied to help find the noise, and/or b) help give more punch back to the result image if the DE:Noise result is judged too blurry.
Spatial Noise Reduction

Example: On the left is without Spatial Noise Reduction, on the right is with the Smarter Blur
mode. This is from a test sequence provided by Grant Davis (vjculture@yahoo.com)

Spatial Noise Reduction:


This control helps you spatially filter the residual noise that the temporal noise reduction component cannot deal with. Note the Spatial Threshold control greatly affects the result. Internally, spatial noise reduction is applied first, then temporal noise processing. If for some reason you want to change that order, just apply the effect twice in a row and simply turn OFF the appropriate component in each instance.

None: USE THIS TO TURN OFF Spatial Noise Reduction


Diffuse: Averages values within the area defined by Spatial Radius. Pixels are averaged with the center pixel of the blur only if they vary from the center pixel by less than the Spatial Threshold % (a parameter described below; see also the sketch after this list). In this way, edges and feature details are more likely to be preserved, because pixels are only blurred together if they are similar (where similar is defined by the Spatial Threshold %).
Smarter blur: Performs a blur constrained by the actual luminance. Similar to
the Diffuse setting, however pixels are Gaussian blurred together instead of
averaged. Pixels are Gaussian blurred and the resulting blurred pixel is only
allowed to change by Spatial Threshold % or less. In this way, edges and
feature details are more likely to be preserved.
Blur biased towards darks: Performs a blur biased towards the darks (that is, darker regions get blurred more than lighter regions). This mode is probably appropriate for footage that has large black areas (e.g. shot on black, or in daylight with deep shadows).
Blur biased towards lights: Performs a blur biased towards the light areas. This mode is probably appropriate for footage that has large, noisy white areas (more typical with heavily manipulated HDR-type sources).
Directional: Performs a more directional blur that follows feature edges in an attempt to avoid blurring across them.
Variational: This is a hybrid filter. It is unique in that it tries to find similar texture elsewhere in the image to average with, instead of looking at just the neighboring pixels. The Spatial Threshold setting used for the Variational method significantly controls the amount of change in the result (see the example in the discussion of Spatial Threshold %, below). The Variational method can provide better results than the other spatial methods on very degraded images. This filter is particularly useful on image sequences where temporal processing (discussed below) leaves artifacts due to tracking issues and you have to rely predominantly on spatial processing. Note this filter is significantly faster on the GPU.
Temporal, then variational: This is a version of Variational that performs a first pass of temporal processing (described below) and THEN applies the Variational spatial mode. This can be useful for fine noise that is often difficult to reduce without overblurring the image.
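
To make the Spatial Radius / Spatial Threshold % interaction concrete, here is a rough single-channel Python sketch of the Diffuse idea: a neighbor only contributes if it is within the threshold of the center pixel. It is a slow reference illustration of the principle, not the actual DE:Noise kernel.

    import numpy as np

    def diffuse_like(img, radius, threshold_pct):
        """Edge-preserving box average on a single-channel float image (values 0..1):
        a neighbor is averaged in only if it differs from the center pixel by
        less than the threshold."""
        thresh = threshold_pct / 100.0
        h, w = img.shape
        out = img.astype(np.float32).copy()
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                patch = img[y0:y1, x0:x1].astype(np.float32)
                center = img[y, x]
                mask = np.abs(patch - center) < thresh  # keep only similar neighbors
                out[y, x] = patch[mask].mean() if mask.any() else center
        return out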

Spatial Radius:
This controls the kernel size for spatial denoising (so the maximum amount of spatial filtering that can take place). A value of 0 effectively turns spatial denoising off. It's recommended to use spatial denoising when you have large patches of near-constant values in your images (where noise is the most perceptible).

As you know, the larger the radius of a blur, the more pixels are used to calculate each output pixel. This affects both render time and the softness of the image. Notice in the source on the left that the noise is very fine, so we probably don't need a large radius here.

Spatial Threshold %:
This control defines how much difference is allowed between neighboring pixels when performing the spatial filtering (the neighborhood considered is set by the Spatial Radius setting). The logic varies a bit between modes, but think of it this way: pixels that differ by more than the Spatial Threshold % are either thrown out or weighted less when filtering. A value of 0 essentially turns off spatial filtering, and a value toward 100% will include pixels across a greater range of pixel values. Keeping the same large Radius as shown on the right in the previous figure, we now threshold it with a succession of small to large Spatial Threshold values.

Here's an example of spatial noise reduction in action (sequence provided by Ami Sun). On the top left we have the grainy film look. The Diffuse mode usually works pretty well for this sort of fine white noise, as shown to the right of it (particularly if you temporally denoise as well), with a smallish radius and a small threshold. However, as per the image on the lower left, as you increase the radius and raise the threshold (trying to reduce more noise), the spatial filter tends to create overly smooth areas (look at the hair on the right of the frame, for example). Then you probably want to switch to a blur method, as on the bottom right. Note that if after this stage the result is a bit too soft for your taste, we provide post-processing sharpening to help you restore details. The post-processing controls are described later.

Effect of Spatial Threshold with Variational mode

Here is a crop window into a frame. Notice how the image is noisy and blocky.

Now we add DE:Noise (just the Spatial Mode set to Variational here). We used a radius of 3; what works depends on the footage. If you play with the Spatial Threshold control you will see that this slider makes the image go from more cartoony (milky) to softer. With a spatial threshold of 12, we still see some artifacts, particularly on the left of the dancer.
Now we raise the threshold to 20: much better! If you intend to further process the image, how much denoising is enough can sometimes be determined by using the post sharpening: if you can post-sharpen without artifacts reappearing, you are usually in a good place.
HDR and Noise

When you deal with high-dynamic range (HDR) images you benefit from a large color range, which is very helpful when color-matching graphics to live action and when offloading some of the rendering process to compositing.

However, when dealing with both computer graphics and HDR photography, merging the color spaces in a non-linear manner can reveal artifacts that were not initially visible (as shown in the top picture insert and then in the middle).

You can usually help out (as in the bottom picture) with DE:Noise; however, you might prefer to denoise while looking at color-corrected results in the ballpark of the final look wanted.

Test provided by Chad Capeland, ccapeland@gmail.com

Temporal Processing
Temporal Process Mode:
The following modes are available. Note that each mode (except for None) uses
optical flow to match up features between the current frame and the surrounding
frames, and then performs per-pixel operations to reduce noise once the
surrounding images are tracked and warped to match the current frame being
processed.

None (Copy): This allows you to ignore temporal processing. USE THIS TO
TURN OFF TEMPORAL PROCESSING.
Average: Warps the previous frame to the current frame AND the next frame to the current frame, and mixes the 3 equally. This is the normal mode for sensor-like noise.
Median: Warps the previous frame to the current AND the next frame to the
current frame and at each pixel keeps the median of the 3 values. This is often
useful as a first application, followed by an Average pass. See Useful Tricks
section.
Average 2 most similar pixels: Warps the previous frame to the current frame AND the next frame to the current frame, discards the most different pixel, and averages the remaining 2. This should work pretty well with drop-outs and similar single-frame (non-recurrent) noise.
Motion-weighted average: Warps the previous frame to the current frame AND the next frame to the current frame. This mode does not blindly blend the 3 frames, but attempts to preserve the sharpness in the image. It takes longer to process as it does a lot more computation.
Average with prev: Warps the previous frame to the current frame and blends the two equally.
Average with next: Warps the next frame to the current and blends the two
equally. One reason to provide the two directions (to Previous and To Next) is for
shots when you zoom in or out. Ideally you would like the other frame to be the
wider field of view (zoom-in use previous, zoom-out use next).
Min: Warps the previous frame and the next frame to the current frame and keeps the minimum value of the 3. This is usually good on things like little white dots, e.g. a shot with too much rain specularity; it should do OK on shots with sporadic flash frames as well.
Max: Warps the previous and next frames to the current frame and keeps the maximum value of the 3. Good for removing black dots.

Some practical examples are provided at the end. See the website gallery for
more.
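
For illustration, here is a rough Python sketch of how these per-pixel combinations could look once the previous and next frames have already been motion-compensated (warped) onto the current frame. It illustrates the idea of the modes, not the actual implementation.

    import numpy as np

    def combine_temporal(prev_w, cur, next_w, mode="Average"):
        """Per-pixel combination of the warped previous/next frames with the current
        frame (all three already aligned, float arrays of the same shape)."""
        stack = np.stack([prev_w, cur, next_w]).astype(np.float32)
        if mode == "Average":                    # blend the 3 equally
            return stack.mean(axis=0)
        if mode == "Median":                     # per-pixel median of the 3
            return np.median(stack, axis=0)
        if mode == "Min":                        # good for little white dots / snow
            return stack.min(axis=0)
        if mode == "Max":                        # good for black dots
            return stack.max(axis=0)
        if mode == "Average 2 most similar":     # drop the outlier, average the other two
            d_pc = np.abs(prev_w - cur)          # pairwise differences
            d_nc = np.abs(next_w - cur)
            d_pn = np.abs(prev_w - next_w)
            best = np.where(d_pc <= d_nc, (prev_w + cur) / 2.0, (next_w + cur) / 2.0)
            return np.where(d_pn < np.minimum(d_pc, d_nc), (prev_w + next_w) / 2.0, best)
        raise ValueError("unknown mode: " + mode)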

Temporal Quality
No MV: Turns off Motion Estimation completely
Best - Forward Warp: High Quality Motion Estimation and high quality warp
filtering (slower but more precise)
Best - Inv Warp (faster): Same High Quality Motion Estimation but with a much
faster warping technique. This setting will often prove to have enough quality.
Medium: Faster Motion Estimation.
Fast: Sloppy result. It actually turns out that sometimes you want the motion estimation to be a bit sloppy (a relatively static scene with falling rain would be such a case, for example). Basically, we are interested here in internally separating the areas of the frame without motion from those with motion, so they are properly differentiated.

Temporal Threshold %:
This controls how much a pixel is allowed to vary after it is processed. A value of 0% will in effect turn off the temporal denoising; a value of 100% allows the absolute maximum deviation (inter-frame difference) from the source that any particular pixel may change by. Controlling this slider is sometimes completely crucial in obtaining the look you want. Note that such a threshold-based cutoff can cause unnatural ghosting if you don't pay attention.

There are some contexts where you want this slider at a high value, like 100%. At other times you may wish to limit the amount of change to a really tiny number, even as low as 5% (note this is not 5% in luminance variation; internally we use the equivalent of the cube root of the 0-to-1 value, since in practice we are interested in tiny amounts). If you see double edges in the result you may want to lower the threshold amount; this often happens in part because the internal optical flow tracking does not work perfectly.

Be careful about the setting of this value: if you see no difference before and after, make sure this value is not set too low. Low values are most useful for cases where you have white noise, that is, where the noise is at each pixel and varies +/- some small percent from the expected real value. Higher values are typically useful for defects that are not typical digital/electronic sensor noise (like compression artifacts).
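
The sketch below shows one plausible way to read the "limit the change" behavior, with the comparison done in a cube-root (roughly perceptual) space as hinted above. The exact internal transfer curve and clamping logic are our assumptions; this is only meant to give intuition for what the slider does.

    import numpy as np

    def apply_temporal_threshold(src, denoised, threshold_pct):
        """Clamp how far the temporally denoised pixel may drift from the source.
        Values assumed in 0..1; cube root used as a stand-in perceptual transfer."""
        t = threshold_pct / 100.0
        src_p, den_p = np.cbrt(src), np.cbrt(denoised)   # compare in perceptual-ish space
        limited = np.clip(den_p, src_p - t, src_p + t)   # allow at most +/- threshold of change
        return limited ** 3                              # back to the original value range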

In the following example (below), a laser animation appears instantaneously (some pixels don't really have a correspondence in the next or previous frame), so a larger threshold like 60% in this case could create ghosting, as in the top-right image. A lower value, as on the lower right, helps to reduce the ghosting, because pixels in the neighboring frames in time are discarded if they are too different. The proper value will depend on the actual content.

Temporal Threshold Mode


THIS OPTION IS NOT AVAILABLE IN ALL HOSTS.

There are 2 options. Ignore pixels over threshold is meant more for additive white noise ("real" noise): values too different from the original image are ignored, the premise being that when comparing to other frames, if the difference is too large then it's not noise. However, this sort of thresholding can leave salt-and-pepper noise. The other mode instead limits the change from the original to a certain percent, and it is usually more what one wants.
The different temporal process modes can produce dramatically different results. In this example we have another complication, which is that between certain frames in the sequence there is large luminance variance in certain areas. The variance comes from aging emulsion and is sometimes referred to as flickering.

Note the original image: in that particular frame there are 3 little blobs that we wish to remove. Note the circled blotch on the lower right: Average mode (below) only attenuates it, while the 2 Most Similar mode ("Best 2" at the bottom) completely eliminates it. However, because the previous frame has a sudden luminance shift, the Best 2 mode creates some artifacts in the hat (see Best 2, right side of the hat).

More discussion (not all modes pictured): Average, Motion-weighted Average, Average with Prev and Average with Next are average-based processes (they mix surrounding frames), while Median, Max, Min and Average 2 Most Similar use logical operators that might allow you to remove more noise but may be more sensitive to large luminance shifts.

Sometimes the best result will be to apply a mode with the appropriate logical condition (e.g. Min to remove, say, the extra specularity of a rain machine gone wild) and then apply DE:Noise a second time to smooth the blending with the Average mode.

This image is from Edwin S. Porter's The Great Train Robbery.
http://www.archive.org/details/CEP146
Fields (After Effects-compatible version) Or Compare Adjacent
Fields (Vegas and FxPlug version):
NOTE: THIS FEATURE IS NOT AVAILABLE (OR EVEN NECESSARY) IN ALL
VERSIONS OF DE:NOISE.

When the source material is interlaced, this control determines whether DE:Noise should compare images that are 1 field (half-frame time) apart, or 1 full frame time apart. With some interlaced footage you might get better results when this button is NOT checked, and with other footage you might get better results by checking this setting.

Why is this true? Let's give an example: if the camera is locked off it can be better to turn this setting OFF so that DE:Noise compares the first field in one frame to the first field in the NEXT frame (skipping the second field of the first frame), so that DE:Noise is always comparing upper fields to upper fields, and similarly for lower fields. If there is a lot of motion in the source sequence, turning this setting ON (so that adjacent fields in time are compared) can often produce better tracking results, because the image being compared is nearer IN TIME. Your results may vary, and when you toggle this setting on and off it will often be obvious which setting is better for the footage you are working with.

Normally, if you go from fields in to fields out, you will want to turn this setting on. Please note we do have a full-blown deinterlacing tool (FieldsKit) for AE- and FxPlug-compatible applications. You might consider using it in critical contexts, particularly when dealing with locked-off shots, in part because the deinterlacing process somewhat stretches a tiny speckle of noise horizontally, perhaps making it harder to isolate.

After Effects-compatible version: In the After Effects-compatible version (for AE, Premiere Pro) this setting defaults to OFF, and should remain OFF for progressive footage. You may wish to turn this setting ON when the source material is interlaced.

FxPlug version: This setting defaults to On. For progressive footage, this
setting is disabled and is always ignored (as a result frames are compared to the
next and previous frames in time), and for interlaced footage this setting is
adhered to when comparing next and previous images (using a full frame time
to retrieve the next image when this setting is NOT turned on, and using a half-
frame time to retrieve the next image when this setting is turned on).

Vegas version: When using a video fields project in Vegas, make sure the Project Settings for fields is set to Blend Fields, otherwise you might generate new artifacts.
Note: if you are within After Effects and place your footage into a 59.94 (NTSC) or 50 (PAL) fps composition so that you can view each field, and place DE:Noise in the 59.94 (50) fps comp, then you should leave this checkbox UNCHECKED. Basically, the checkbox determines whether or not the plugin should get the previous image from half a frame time away. In a 59.94 comp, there is no other unique frame between the current frame and the previous frame in the comp (because at twice the frame rate the previous frame IS a field of the original sequence).

DE:Noise works most intuitively on progressive material and in projects, compositions or sequences that are specified to be progressive. If you have 3:2 pulldown in your source footage, you will want to remove the 3:2 pulldown before processing with DE:Noise. The same goes for animation done on "2"s, etc.: the temporal denoising will not yield the wanted result. DE:Noise usually works best on material with fields when the material has been deinterlaced with field blending. For example, Toxik provides a deinterlacer operator which allows you to perform that operation prior to applying DE:Noise.

ALT Track Source:


In After Effects 7.0 and above, and in some OFX hosts, you have the option to calculate the motion using an alternative clip. This can be useful, for example, if you work in linear or log space, as you can calculate the motion in perceptual space by simply inserting a gamma node prior to that input.

Mark Segments:
After Effects, Premiere Pro, Fusion, genQ (Quantel) and most OFX hosts
(not Nuke):
If a menu is displayed for Mark Segments, it's because the host application supports the animation of pop-up menu choices. This setting allows you to specify edit (cut point) information you know about the source material. The purpose is to avoid artifacts at cut points; for example, a frame with splicing tape is a large disruption of the inter-frame visual flow. Basically, this setting allows you to identify the first frame of each shot in a clip where the shots are not already segmented, so that a frame from another shot is not mixed in with a frame from the current shot.

Cut A
Cut B
Cut C
Spatial Only
No MV
DE:Noise will not blend frames between two segments marked with different Cut settings. For example, if one frame is marked as part of Cut A and the next frame is marked as part of Cut B, then DE:Noise for that frame will not interpolate between the two and so will only use 2 frames. If your application does not support animated menus we might have turned this parameter into an integer slider (see below).

Spatial Only and No MV are special modes meant to better match Twixtor's Mark Segments menu. When Mark Segments is set to Spatial Only, no temporal processing is done and only the spatial denoising is performed. If Mark Segments is set to No MV, the temporal processing is performed, but no motion estimation (optical flow) is calculated or used to match up pixels. These special cases are there to help streamline longer sequences where you might have dissolves or other effects that make optical flow tracking not very useful.

Final Cut Pro X and Motion:


This setting allows you to specify edit (cut point) information you know about the source material. The purpose is to avoid artifacts at cut points; for example, a frame with splicing tape is a large disruption of the inter-frame visual flow. Basically, this setting allows you to identify the first frame of each shot in a clip where the shots are not already segmented, so that a frame from another shot is not mixed in with a frame from the current shot.

Scene A
Scene B
Scene C

DE:Noise will not blend frames between two segments marked with different Scene settings. For example, if one frame is marked as part of Scene A and the next frame is marked as part of Scene B, then DE:Noise for that frame will not interpolate between the two and so will only use 2 frames. If your application does not support animated menus we might have turned this parameter into an integer slider (see below).

Nuke (and other hosts that don't support animating popup menu choices):
An integer setting is used if the host application does not properly support the animation of menu items. In this case, Mark Segments can range from 0 to 4, where each integer value matches one of the settings described above:

0 is equivalent to Cut A
1 is equivalent to Cut B
2 is equivalent to Cut C
3 is equivalent to Spatial Only
4 is equivalent to No MV
In Nuke, because we can't create an animated menu, we have to expose an integer slider. Note that at least you can associate the menu values with a tidbit describing the equivalence. In a host where you animate integers as if it were a state menu, make sure you are not using an animation curve mode that interpolates the values.

Example:

The picture below is the source footage: on one frame there is a camera flash. The Mark Segments setting can be used to eliminate the spillover of the flash into the surrounding frames by marking them as different cuts of the sequence.

The flash completely breaks the inter-frame comparison; look below at how the bottom right of the image is pulled away from the frame.

By animating the Mark Segments menu the problem is removed (see below).
Post Processing Controls
If the denoising process produces results that are too smooth for your taste, we provide some post-process enhancements to bring back details or remove some residual haze.

PostProcess:
POST PROCESS MENU IS ONLY INCLUDED IN VERSIONS OF DE:NOISE
THAT HAVE THE PREPROCESSING SETTINGS AVAILABLE.
We provide two common methods to adjust contrast.
None (no post contrast, no sharpen): Turns off post-processing altogether. Allows you to quickly visualize the result with or without post-processing.
Undo Pre-contrast: Inverts the contrast enhancement performed in pre-processing.
Contrast using global avg: Contracts or expands colors towards average
color.
Contrast using mid-grey: Contracts or expands colors towards 0.5 mid-grey.

Post Contrast:
A value under 0 contracts the colors; a value of -100% produces a flat color. A value over 0% will expand the contrast range. Try, for example, 10% and see if you like it.

Sharpen Amount:
A value of 0 turns sharpening off. The sharpening used here is a special form of unsharp masking. This control scales the amount of sharpening. Although the slider allows you to go up to 500% (5X), note that this can lead to a bit of ringing.

Sharpen Radius:
Like unsharp masking, this is the radius used for the sharpening process. You typically want to use as large a radius as you can get away with, while avoiding restoring the small noise you worked so hard to remove.
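
As an illustration of the principle (not the exact filter DE:Noise uses), an unsharp-mask style sharpen in Python looks like this; gaussian_filter comes from SciPy and the image is assumed to be a single-channel float array.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def post_sharpen(img, amount_pct, radius):
        """Unsharp-mask style sharpening: add back a scaled high-pass of the image."""
        blurred = gaussian_filter(img.astype(np.float32), sigma=radius)
        high_pass = img - blurred                       # the detail removed by the blur
        return img + (amount_pct / 100.0) * high_pass   # Sharpen Amount scales the detail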

Suppress Salt and Pepper:


You can further reduce residual salt-and-pepper type noise (little dots) using this control. Salt-and-pepper noise might remain when you attempt to compromise between thresholding to preserve details and smoothing flat areas. This typically happens in low-light situations as you push the exposure in an attempt to capture low-light detail. You typically use this control as a final adjustment, often with a really small value.
Discussion
DE:Noise is a temporal noise reducer complemented by some robust spatial noise reduction functionality. Oftentimes you might need to complement such a process with other filters.

Note DE:Noise does not address more static defects such as film scratches, hair in the gate, sensor banding, wire or larger object removal and background filling, complete or partial damaged-frame replacement, dust on the lens, or missing or partially defective (in a constant manner) pixels in a CCD array. Other RE:Vision products such as RE:Fill might be useful for missing-pixel problems. Our RE:Flex motion morphing tool can sometimes be used for whole-frame replacement tasks.

DE:Noise also does not capture the signature of the noise so that you can use it later, for example in a re-graining process as some film re-graining software allows you to do. This used to be important in the days when effects movies were shot in 35mm and different shots needed to be composited, particularly computer graphics renders. It is much less of an issue these days. Note DE:Noise does allow you to make really clean, super-smooth frames, and in some cases you might want, for look purposes, to add some noise back. In the digital video domain it's relatively simple to use a simple noise generator to approximate sensor noise. For look purposes, if you do a super-smooth render with DE:Noise, adding a bit of fine Gaussian noise afterwards can help. This is sometimes the case where the image is not just noisy but also has significant compression artifacts.

DE:Noise is meant to work on a sequence of images (the temporal components won't work on a still frame). For video sequences with no (or almost no) actual motion, we provide an assistant plugin, Frame Average, which is described below.

Because the optical flow process matches up features between frames, DE:Noise's temporal noise reduction has the ability to remove noise without overblurring features. This is unlike spatial noise reduction, which can oversmooth important features. The best practice may often entail using a bit of spatial noise reduction with a bit of temporal noise reduction. Internally the filter first applies the spatial filtering to the 3 input frames and then temporally denoises. If for some reason you wish to reverse that order, you can always apply the effect/filter twice, turning the spatial and temporal components on and off appropriately.

Also, sometimes the footage is so degraded that even with a Spatial Threshold of 100% the only way to completely remove visible noise is to overblur the result. Then your option is to sharpen and play with contrast using the post-processing controls, or with another tool after this effect. Also, remember to look at the result in motion: often you want to reduce the noise a bit rather than completely eliminate it, and looking just at still images can be misleading, as one naturally tends to over-denoise the image when looking at stills.

Pre-Processing and Order of Operations

What helps often depends on the actual distribution of colors in the image. In many cases we (the plugin) did not get the memo; we basically just get pixels.

On the left we have a gamma 2.2 ramp, in the middle a linearized (thus darker) version, and on the right a log distribution of the image on the left. In the left image the data is said to be in perceptual space: what you see is what the data is. The other two are meant to be looked at via a display transform (e.g. a LUT that makes them perceptual again at the display stage).

3D lighting TDs and compositors assembling render elements like to work in linear space as a target. This has become a popular/standard way of working with the advent of the capacity to do image processing at floating-point precision. Before the advent of floating-point processing, log space was a popular way to maintain a pipeline at a lower bit depth (to avoid blacks being all crushed/flattened out). Log encoding is still widely used in digital video cameras, as they can't sustain recording with more than 8, 10 or 12 bits of data; hence all the C-Log/S-Log special curve options in pro cameras.

So let's think about this; here again are Rec. 709, linear and log.

To be fair, the log encoding and decoding probably involve some white and black point definitions, etc., so in practice it might internally be more like the example on the right below.

In a better-lit context, a log encoding into a lower-bandwidth space usually looks very flat contrast-wise. Below is direct output from a Blackmagic Pocket camera, 10-bit ProRes in S-Log. If you have a GoPro, the flat ProTune mode is similar. The pre-processing controls can prove useful for this.

When the image at this stage of the process does not look like how it looked when you shot it, it can be hard to judge how much denoising is really necessary before grading it with the proper look. On ingestion, these images are miles away from an sRGB JPEG you might have captured from the same camera at the same time (with a process whose objective is to maintain something that looks like how it was shot).

The order of operations also matters here, for example if you suddenly add a color correction before this effect (including changing your load settings in camera-raw type contexts). If the process involves changing the gamma profile/space (sRGB, linear or log, wide log, ...), it can completely change (for better or for worse) what a particular setup (a particular configuration of parameters here) will produce. Why? For one, because it changes all the thresholds.

The bottom line is that restoration image processing likes pixels to be as close as possible, in terms of color space, gamut, gamma curves, distribution, etc., to how the viewer will see them. There is always a trade-off between too much and not enough here; such filtering is often a compromise.

This might sound counter-intuitive to someone who does compositing of 3D renders, where a linear color pipeline is practical in that it creates a single target, one that can be rationalized mathematically (linear light) and implemented via a display LUT that transforms such images into perceptual space. Still, restoration is not like that. So in applications like The Foundry's Nuke, where you need to work in linear color space and look at the image using sRGB or Rec. 709 in the viewer, we do sometimes recommend that you try bracketing the tool with, for example, a gamma 2.2 before and a gamma 0.454 after (so the tool effectively processes a de-linearized image). The result can be very different, as the data is then processed with steps between levels that give more separation to the part of the range where we are most sensitive to noise.

The most common digital video noise is essentially a deviation (a variation +/-) from what the value should ideally be. We could say that values that change frame to frame by more than a certain amount (threshold) are probably not noise; for example, firecracker sparkles are not noise in that sense. Simply put, mid-grey in perceptually encoded space is 50% away from zero black, but perhaps only 18% in the linear-light domain. So in perceptual encoding we have a lot more effect on the part of the distribution where noise is most perceptible.

Noise is also typically much more perceivable in the darker areas, as per the phenomenon of simultaneous contrast: we perceive the difference between 2 and 15 (on a 0-255 basis) more than the difference between 242 and 255, because the ratio of 15 to 2 is much larger than the ratio of 255 to 242. And we are also very sensitive to temporal differences when watching a moving image sequence.
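
The arithmetic behind those two observations, using gamma 2.2 as a stand-in for the perceptual encoding (real camera curves differ), is simply:

    # Gamma 2.2 used as a stand-in perceptual curve (actual encodings differ).
    mid_grey_linear = 0.5 ** 2.2   # ~0.22 in linear light, in the ballpark of the 18% figure above
    dark_ratio = 15 / 2            # 7.5x relative difference between two dark code values
    bright_ratio = 255 / 242       # ~1.05x relative difference between two bright code values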

Performance Optimization
Understand as well that if you use multi-frame temporal processing, any effects applied before this effect will receive multiple frame requests, so as you render the sequence, the effects before might be asked 3 times to render the same input frame. What to do here is very app-dependent; not all applications are equal with regards to caching (i.e. calculating the same thing 3 times versus once). All we are doing here is warning you so you remember this. For example, it might be that one day you want to apply DE:Noise twice in a row (applying it to the first instance's result). Since the first instance makes 3 frame requests and the second one also 3, you might then end up (3 x 3 = 9) requesting the same frame render in a non-efficient way. You might want to organize your project to avoid longer render times, using nesting (precomposition). Rendering a first pass to disk is always another option.
Useful Tricks
A. Removing Snow: Min and Max mode.
Here, on the left, we covered the image with snow. On the right we used the Min temporal mode. This will remove the white speckles. If it were dark spots on a light background, you would use Max instead. Note you then have to raise the Inter-Frame Difference (the Temporal Threshold) to 100% to completely remove the white speckles. Note as well that in certain cases you might want to reduce the excessive rain specularity, etc., rather than try to eliminate it, in which case less than 100% Difference can be appropriate. Also note the first and last frames will not have 3 frames to compare, so they might not do as well. Try not to make your setup while parked on those frames.

This should work relatively well if the motion pattern is fast moving or completely random. It will not work with static patterns like scratches or dust on the lens (see our other tool RE:Fill for that).
B. Handling Random Drop-Outs with 2 Most Similar Mode

On the left you see the frame before, the current frame and the frame after. On the current frame we have two unwanted black bars.

In a case like this, where the drop-outs happen sporadically on a single frame, the Average 2 Most Similar Pixels mode should work pretty well, as it will discard the frame with the large difference compared to the other two. If the drop-outs appear only here and there in your sequence, you might want to animate the effect off on the frames without a problem, to preserve the most overall quality. This technique should work as well for scanned film prints with dust on the film.

C. Isolated Problem: Dots in one area of the frame


With computer graphics ray-tracing programs, there are problems that can occur where a single object ends up producing little white dots here and there (whatever the reason: reflections, bad normals, not enough sampling, bad filtering). That noise is usually not of the same nature as video or film-grain noise. It can look like hard white dots that come and go. In that case, it's often possible to handle this with temporal processing (e.g. set the mode to Min if they are white dots, and remember to set the Temporal Threshold slider to 100%), and then, to avoid affecting other areas of the image sequence too much, you might consider making a rough track matte with a large feather and overlaying that result over the original. By track matte here we mean a matte that is applied after the effect, over the original. In general, when the defects are localized to an area of the frame, you can just run DE:Noise on the whole frame and then composite the result back over the original through a very feathered matte. The matte feather's purpose is to mask any visible transition between the processed and unprocessed areas. After Effects users might consider using our PV Feather.

D. Dust-Busting:
Since film printing, inter-negatives, and film-to-video transfer handling often end up with small pieces of dust on the celluloid that block the optical path (and are then scanned/recorded), and since that sort of artifact is typically a single-frame problem, you can usually deal with it with DE:Noise. Often Average 2 most similar pixels is the mode you want to remove the dust artifacts. Larger debris (like hair) and scratches might not be removed by DE:Noise. In such cases, DE:Noise will be a first automatic pass that saves tons of time in your restoration process before a more manual clean-up pass. Our own internal test footage sometimes also had emulsion deterioration producing some luminance flicker over time. You then have the option of applying the Average mode instead, OR, if you have a render-time budget, you can actually apply DE:Noise two times in a row: first with Average 2 most similar pixels to remove the dust (again probably with a Temporal Threshold, the maximum inter-frame difference, of 100%), AND then with the Temporal Process Mode set to Average to eliminate flicker artifacts that might be locally amplified by Average 2 most similar pixels.

E. Noise Localized in a Channel:


It can happen that the noise is localized in one color channel. In that case, you can turn the image sequence into black and white by using that channel as the RGB of the source to denoise, apply DE:Noise, and then put the result back into that channel of the source. Alternatively, if the noise is just in the chroma, you can always use the transfer (blend) modes of your compositing application to blend the result back over the original. If you decompose the image into intrinsic channels like that, the ALT Track Source (using the original RGB footage), if available to you, might again be your friend.

F. Additive White Noise:


Sometimes you can have a very defective source with massive white noise added to the image.

Here (see image below), to restore the readability of the image (since it's probably past the point of resulting in a perfectly nice image), we apply DE:Noise 2 times in a row.
First pass (top right): a) we add a bit of contrast; b) we blur using Blur biased towards darks.
Second pass (bottom left): c) we now blur again using Blur biased towards lights, and then we turn the temporal denoising on to help reduce the blotchy look of oversmoothing. (OK, a post color correction could be needed here.)

G. Dealing with Global Illumination Noise:


Computer-generated images (CGI) using global illumination might require a lot of samples (i.e. long renders, too long for your render budget) to completely remove noise. Aside from noise, sometimes you can also get flickering on low-polygon-count objects due to how importance sampling works. DE:Noise's temporal modes will remove a bit of this, but for some flickering cases our DE:Flicker tools will be more useful.

We are using a pretty bad example here to be very obvious. Note that if you receive a render like this, we expect you to fire the lighting TD/render guy. Also, the following explanation assumes the noise is time dependent (it does not stay in the same place). In V-Ray, for example, this is in the render settings (under DMC Settings).
Picture provided by Matt Estela

There are 3 kinds of controls to consider here. The theory is that some temporal denoising will remove part of the noise, but in a case like this there are a lot of sharp tiny dots, so spatial filtering is also necessary, and for the white dots the salt-and-pepper filter parameter is useful as well. If it applies (is available in your version) and it's possible, use the ALT Track Source with a non-GI RGB render (a pass with no noise; sometimes that would be the diffuse pass).

Here our basic setup is Spatial Radius = 10 and the Temporal mode set to OFF. On the left is the original image. We set the Spatial mode to Variational here. With a threshold of 20, there are still some white and saturated dots.
Even at 30 (on the left) there are still some white dots. We have to go up to 35 to see the dots go, but then we might be overblurring the image. So for the isolated white dots, a better strategy is to apply a small amount of the salt-and-pepper filter and leave the Spatial Threshold at a smaller value.

Now we turn off spatial denoising and turn on temporal processing. We set a smaller value for salt and pepper, just enough to diffuse the dots a bit. And then we turn the spatial denoising back ON, but now we can set the threshold lower.
DE:Noise Frame Average
THIS TOOL IS NOT AVAILABLE IN ALL VERSIONS OF DE:NOISE.

DE:Noise comes with a companion tool that can be particularly useful for shots with no motion (or with very little motion); if there is no motion in the shot, it's probable you won't gain much from motion estimation. The frame averaging performed by this plugin is not your normal frame averaging, because DE:Noise Frame Average contains a threshold parameter: pixels from other frames are only averaged in if they are within the tolerance of that threshold value. You often want a small threshold like 10-15 or 20 over a large number of frames (like 10 frames on each side) to completely cancel out the noise. This is a simple way to make a good-quality still from a video sequence shot in a low-light context.
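
A rough Python sketch of this thresholded averaging (single-channel frames, values in 0..1) might look like the following; it only illustrates the behavior described above, not the actual implementation.

    import numpy as np

    def frame_average(frames, center, before, after, threshold_pct):
        """Average frames around `center`, but a pixel from another frame only
        contributes if it is within the threshold of the center frame's pixel."""
        ref = frames[center].astype(np.float32)
        t = threshold_pct / 100.0
        total, count = ref.copy(), np.ones_like(ref)
        for i in range(center - before, center + after + 1):
            if i == center or i < 0 or i >= len(frames):
                continue
            f = frames[i].astype(np.float32)
            mask = np.abs(f - ref) <= t           # within tolerance of the center pixel
            total += np.where(mask, f, 0.0)
            count += mask                         # booleans add as 0/1
        return total / count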

Frames Before and After


A value of 1 before and 1 after becomes a 3-frame average (as it includes the current frame).

Auto Adjust In-Out


Let's say you set Frames After to 5. What happens at the last frame? When this setting is off, the last frame is used 5 times in the averaging process (because there are no more frames after the last frame). When checked, this setting grabs more frames from the before side of the sequence so that all unique frames are used in the averaging process. Said another way: when checked, the plugin adjusts the number of before and after frames at the beginning and end of the sequence so that all unique frames are used.
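
Said in code, the windowing could look like this rough Python sketch (an illustration of the behavior described above, not the plugin's actual logic):

    def averaging_window(frame, before, after, first, last, auto_adjust):
        """Return the frame indices used in the average at a given frame.
        Without auto_adjust, indices past the clip ends clamp to the end frame
        (so the last frame gets reused); with it, the window shifts to keep
        the same number of unique frames."""
        start, end = frame - before, frame + after
        if auto_adjust:
            if end > last:                    # shift the window earlier at the tail
                start -= end - last
                end = last
            if start < first:                 # or later at the head of the clip
                end += first - start
                start = first
        return [min(max(i, first), last) for i in range(start, end + 1)]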

Copyright 1999-2014 RE:Vision Effects, Inc. DE:Noise, RE:Fill, RE:Flex, PV Feather, SmoothKit and FieldsKit are trademarks of RE:Vision Effects. Other trademarks are owned by their respective owners.
