https://youtu.be/2qd2mq2bp9s ←
Reddit discussion thread is here.
Due to vandalism, I had to turn commenting off for the general public. This also means that
most of you can’t SEE any of the comments. (Google’s system is SUPER DUMB.)
So I’ve been adding their text directly into the document, when I have time.
If you are well-versed in these things, and want to be able to view and make comments,
and suggest edits, click “👁 View only” (top left) and then click “request access.”
(No promises)
1. How would you color correct this shot of Linus sitting in front of a bright window? Which
effect(s) did you choose to use, and why? Did you decide to use a mask? Can it be done
“good enough” without having to use a mask?
(https://www.dropbox.com/sh/dqu5z3ap0r0mmjn/AABs4aC2r36I-kUoa1UnICjwa?dl=0)
Answer:
The human eye is naturally attracted to bright things; that's evolution. As a rule of thumb, a bright
background can distract from your subject in a case like this, because there is so much background and it is so
bright. It's a large portion of the frame and near white. Yes, it is outside and yes, it is bright, but don't let that trick
you into thinking that it "has" to be a brighter value on your waveform simply because it's outside. This shot is
exposed that way, and it'll likely have to stay a bit brighter in color correction so as not to break the shot, but
the outside viewed through a window doesn't have to be at 100 IRE.
-John Romero
Answer:
The key is reducing the attention drawn by that bright
window and re-centering it on the subject. For that I:
a) Did a general correction, reducing the highlights some
and improving contrast.
b) Used a quick Qualifier to grab Linus's skin, then
brightened it and gave it more color.
c) Increased the greens in the sofa and gave them more blue with
Hue vs Hue and Hue vs Sat. I duplicated that node again for
more punch.
d) Turned the highlights a bit more pink/yellow for a more
pleasant look. (Subjective.)
-(Lucas Sanczyk's approach)
Answer:
This is a situation where you want to ETTR (expose to the right) in camera and use the knee function,
a picture profile with the highlights rolled off, or shoot it in a really nice codec / Log / RAW. Since the
important information in this scene (Linus) is less luminous and should be, optimize the exposure as such, and
let the highlights blow if need be. This also prevents the viewer from noticing an exposure change if
you capture a certain piece of furniture (or Linus) in a wide shot or other alternate angle. As you can
see from the waveform, the camera operator did a good job.
To fix this in post I simply brought the Highlights slider in the Lumetri Color panel down to -50 (and
later I bumped the Contrast +10 so his skin doesn't look so plasticky; new on right). -John Pooley
↓ ↓ Keep scrolling ↓ ↓
2. How would you color correct this shot of the inside of the Oneplus factory? Which effect(s)
did you choose to use, and why?
(https://www.dropbox.com/sh/dqu5z3ap0r0mmjn/AABs4aC2r36I-kUoa1UnICjwa?dl=0)
(Not Taran’s grade used in the video) (Taran’s proposed grade)
Answer:
I like your correction of the shot at the OnePlus factory. Unlike in question #1, those highlights are small
ceiling lights and they are totally blown (they have little "hats" on the waveform no matter what you do). Linus's
white clothing can start to blend in with them, but there's not much avoiding that.
To step forward in time to a later question: when color correcting, I do agree with Ansel Adams's zone
system, and I believe in putting caucasian faces at about 70 IRE where possible. The reasoning harkens back to
the eye being attracted to lighter objects. It may be tough to do this if the talent is in shadow relative to a bright
background as in Q#1, but as long as the shot isn't totally run-and-gun like that one, this rule is generally a
good one.
-John Romero
Answer:
1. I added a little contrast to the image again, using
clarity/midtone detail.
2. I added saturation to the oranges in the image, and
slightly adjusted the hues of the red/orange range to
even out skin tones. I also did this to the blues,
to change the blue to taste. I kept it kind of
subtle in this case, as I didn't feel a more heavy-handed
look was necessary for this shot.
3. I added a small contrast adjustment to the image
using curves, in the range which applies to Linus's face.
4. I then lowered the shadows just a little bit to give the
image a hair more depth/contrast.
-Spencer Lantz
Answer:
In the first node I added some contrast to the shot using
LOG controls.
On the second node I played with the saturation a little
bit.
On the third node I played with the curves a little bit
(Hue vs Lum, Hue vs Sat, Lum vs Sat).
On the fourth node I keyed out the highlights using a
Luma key and I brought them down using the LOG
controls.
-Ido Simchoni
Here is the Lynda course that left me with far more questions than answers:
https://www.lynda.com/course-tutorials/Color-Video-Editors/711831-2.html
3. Why did Robbie use the gain control to fix the blue shot at 2:54 of this video:
https://www.lynda.com/Premiere-Pro-tutorials/Using-RGB-Parade-RGB-Overlay-waveforms-
judge-color-balance/711831/752722-4.html
…rather than using offset, lift, or gamma?
Answer: Because the gain control affects the brightest values the most, and that is where the imbalance was. (Lift
mostly affects the darkest values, gamma affects the middle values, and offset affects all values equally.)
I should have been using Premiere’s “highlights” control to change the sky, not the “midtones.” -Taran
4. DaVinci Resolve has “Color Wheels” of lift, gamma, gain, and offset, which affect the values
in a linear way. (Robbie Carman seems to use them a lot) Does Premiere have controls that
work the same way?
Answer:
Premiere’s Color Wheels have shadows, midtones, and highlights, which don’t quite work the same
way as Resolve’s Lift, Gamma, and Gain. I believe they “curve” the values as they approach the
whites and blacks, rather than moving them linearly… but I still need to try it out in both programs to
know exactly how they differ.
There may be a plugin for Premiere that gives you controls that work exactly the same way.
Premiere’s RGB curves tool can achieve the same results, but the curves are very finicky and difficult
to handle.
-Taran
Answer:
I am not a Premiere expert so take this with a grain of salt, but from the 2 min of testing that I did, it
looks like Premiere's color wheels have a function which is trying to anchor the white point when you
are using the Color Wheels (In theory this could be helpful in look development as it would allow you
to tint the highlights while still keeping a neutral white point?). It looks like you can affect the whole
image in Premiere if you do your correction in curves.
-NM Resolve
Answer:
They are "equivalent" in that Lift affects shadows, gamma affects midtones, and gain affects
highlights. However, they respond differently.
DaVinci Resolve's method is the proper method, also used by the sensor response curve.
-Ian S.
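To make the "linear" behavior concrete, here is a minimal Python sketch of one common way these four controls are modeled on a normalized 0-1 signal. The formulas are illustrative only, NOT Resolve's or Premiere's exact internal math:

def lift_gamma_gain_offset(v, lift=0.0, gamma=1.0, gain=1.0, offset=0.0):
    v = v * (gain - lift) + lift      # lift moves black fully but leaves white anchored; gain is the reverse
    v = max(v, 0.0) ** (1.0 / gamma)  # gamma is a power curve that mostly bends the midtones
    return v + offset                 # offset shifts every value by the same amount

print(lift_gamma_gain_offset(0.0, lift=0.1))  # 0.1 (black moved up by the full lift amount)
print(lift_gamma_gain_offset(1.0, lift=0.1))  # 1.0 (white untouched by lift)

This also answers Q3 in code: gain scales values in proportion to their brightness, so it corrects an imbalance that lives mostly in the highlights while leaving black in place.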
5. How did Robbie achieve the look shown at 3:33 of this video?
https://www.lynda.com/Premiere-Pro-tutorials/What-would-you-do-Creative-evaluation-examples
/711831/752713-4.html
(I couldn't get my floor to match his.)
Is it not possible for me to fully achieve that look, since I only have a PNG screenshot to work
from?
Answer: Yes. Because you’re using a screenshot of a video, rather than the original, uncompressed file,
your results will always be more blotchy, posterized, and inaccurate, due to the compression and lesser
bit depth.
6. Why couldn’t I get the classic “teal and orange” look to work for the shot inside of the LIGO
building? Was I going “too far?”
(https://www.dropbox.com/sh/dqu5z3ap0r0mmjn/AABs4aC2r36I-kUoa1UnICjwa?dl=0)
Why does this same technique seem to work so well for these shots?
https://www.youtube.com/watch?v=g1J1DKRScDY
https://youtu.be/_m-9R1oqvh0?t=450
Is it because there’s not much inherently teal or orange in the original shot?
Answer: You can actually get the teal and orange in pretty much any shot, even if there isn't much of the color
originally. Of course the other colors might look a bit off, as you'd add/remove teal and orange from all colors.
-Bart Kuipers
Is it because the white balance was really far off in the original shot?
Answer: It’s less of a question of white balance, and more about the type of light.
If you're shooting with shitty ambient lights that aren't fairly full spectrum, you can't really fix that with a white
balance adjustment. You can't turn sodium vapor light into high CRI daylight light no matter how much you white
balance a scene -Ian Servin
Does bad white balance mean that there will be less leeway in color correction / color
grading?
Answer: Yes. If you save to JPG (for stills) or any compressed format, your pixel values are computed according to
the set white balance. Therefore, in theory, some information might get lost if the white balance is really far off. In
practice, this doesn't really limit you a whole lot, except when grading quite extremely. -Bart Kuipers
Taran note: The most valuable thing I’ve learned from Ido so far is that he seems to always key or
mask the skin tones, so that they can be adjusted independently from the rest of the grade.
Another answer:
1. Brighten up the image using curves: I pull the midtones up towards the highlights in the curves,
which compresses the highlights a bit without blowing them out altogether.
2. White balance with Temp and Tint adjustments, then adjust further using the color wheels to
remove tint and temperature issues that may appear in the midtones and highlights.
3. Added some "pop" using clarity in Lumetri, or the midtone detail / contrast pop effect in Resolve.
4. Using a secondary key, select the skin tones and then invert the selection; desaturate the new key
and then add blue/green tint to taste. (See the sketch after this answer.)
5. I selected the yellow railing using another key and added contrast and adjusted the color to taste.
This is not totally necessary, but I felt the shot could use it.
6. I also felt the ceiling was a bit too dark, so I added a linear power window to the top of the image
and brightened the midtones, to brighten without clipping.
7. I lowered the midtones a bit to just add contrast to taste.
-Spencer Lantz
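Step 4 of Spencer's list (key the skin, invert the key, then desaturate and cool everything else) is the heart of the teal-and-orange move. Below is a rough Python sketch of the idea; the skin-tone hue range and the tint multipliers are illustrative guesses, not Spencer's actual settings:

import colorsys

def tint_non_skin(pixels, skin_hue=(0.02, 0.12), desat=0.5, cool=(0.95, 1.0, 1.08)):
    # pixels: list of (r, g, b) floats in 0-1. Desaturate and cool every pixel
    # whose hue falls outside the (assumed) skin-tone hue range.
    out = []
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        if not (skin_hue[0] <= h <= skin_hue[1]):
            r, g, b = colorsys.hls_to_rgb(h, l, s * desat)
            r, g, b = r * cool[0], g * cool[1], b * cool[2]
        out.append((min(r, 1.0), min(g, 1.0), min(b, 1.0)))
    return out

# a skin-ish orange is left alone; a neutral grey-blue gets pushed toward teal
print(tint_non_skin([(0.80, 0.50, 0.35), (0.50, 0.55, 0.60)]))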
More Answers:
Taran note: I admit that teal and orange is probably NOT the best choice for this factory
shot. Still, it’s been very interesting to see how different people would handle such an
(apparently, challenging!) request.
I feel that Ido’s grade is the best result (if you really WANT teal and orange!), and that
Spencer Lantz’s white/sterile grade seems to be the most appropriate for the scene. Though, I
would probably make both of them a bit brighter.
I can tell now that mine is not particularly good!
9. What sorts of things should be done or avoided when filming, to ensure the most options for
color correcting/grading?
Answer: Proper lighting is very important. (Though, that’s a different discipline entirely…)
Answer: The 'only' thing I'd care about a lot is not losing detail in either highlights or shadows. So try to expose
properly, and have a 'right-ish' color balance so you can go anywhere with it. In other words, try to squeeze the
information into the available range of the camera.
-Bart Kuipers
Answer: As often as possible, when not shooting RAW, make sure to get the white balance close to accurate in
camera and expose the image as brightly as possible without clipping highlights that you deem important.
-Spencer Balliet
10. Also, even if it is very difficult to get GoPro footage to look like Skyfall, once you figure out
how to do it, couldn’t you just create a LUT, and apply that to all the (color corrected) GoPro
shots? If not, why not?
Answer: No.
Reason: Theoretically, a user-generated LUT could work in some instances... but it would mostly depend on
shooting scenes with a consistent look. E.g. say there's soft outdoor lighting in every scene... In this instance you
would white balance and neutralise each shot, so that your shadows/mids/highlights are fairly natural and match
the shots before and after.
Then you'd create a Skyfall look on top of this, create a LUT from it, and apply it to all clips throughout. To
reiterate, this only works if all shots and scenes are quite consistent with each other.
-Connor Ayliffe
Taran note: Here's how I've come to understand it: Although you might wish to go TO the same look for several
different scenes, the picture that you’re coming FROM is always going to be different. Therefore, the results
achieved by using the same LUT, would also be different.
Ido explained to me that LUTs always have to have very soft roll-offs on all their effects, because the LUT maker
does not know what the footage will look like. So they have to make it general, not specific.
12. Why do bias lights need to have a high CRI? (I’m not color correcting the wall behind the monitor!)
Answer:
You want "neutral" light that without color casts that could mess with how you perceive what's on
the screen. It really only matters if your entire space is also treated with special neutral tone paint.
-Ian Servin
Answer:
One good reason to have a high spectral similarity between your bias lighting and natural lighting
is that if your camera calibration is quite bad, you can correct for that by keying to the bias lighting.
-Tucker Downs
13. Is a high CRI important for the lights used on a set? Is higher always better?
Answer: "Yes," is the general consensus. (But, low CRI can sometimes be compensated for if you
really know what you're doing.)
14. If it’s important for a light source to have a continuous, rather than a discrete spectrum, but a
camera only records the frequencies of R, G, and B anyway, then why does it even matter?
Answer:
The "Yellow LED" in this picture has bad CRI. The middle, which is a normal light + filter has good CRI. -Tucker Downs
Under daylight (a continuous blackbody-like spectrum), the duck will reflect a distinct distribution of
light in the visible spectrum that corresponds to the wavelengths around yellow, and absorb the rest,
thus appearing yellow.
Under 'yellow light' the duck has much the same response, as the 'yellow light' is really a mixture of
photons in the yellow part of the spectrum. Since these were the only ones being reflected anyway,
the duck looks mostly the same (but slightly different due to the difference in distributions of the
yellow part of the spectrum between the white and yellow lights).
However, a "yellow" LED light will be producing a set of specific wavelengths depending on its type. If
it's an RGB LED (or set of LEDs) then the 'yellow light' will actually be a combination of red and green
wavelengths. Although it probably appears yellow when reflecting off white surfaces to us (and to a
camera sensor), there are actually no photons being produced with wavelengths that are in the yellow
part of the spectrum! Thus, the duck appears darker and redder, as the yellow light it was previously
reflecting is no longer there, and it reflects red light slightly better than green (not an inherent fact,
but it looks like it does).
-Samuel Reynolds
15. If a light source with a very low CRI is desirable for a specific shot, due to the monochromatic look
that it would create, wouldn't it give you more control to just go with a higher CRI when filming,
and then make the shot more monochromatic during color correction/grading?
Answer:
It may give you more flexibility in post, but a DP is striving to achieve a look in-camera and wants to be
precise in their on-set choices.
This is why a proper viewing pipeline on set is so critical so the camera team can be confident that
what they're capturing is conducive to what they want the final product to look like.
The oversight of this on-set preview pipeline is often done with a dedicated DIT person/team.
On smaller sets, it's just about having properly calibrated equipment and using LUTs on set to
preview grades.
-Ian Servin
FILM vs VIDEO
“Film” or “film stock” is a strip of photographs that can be played in a sequence, using an analog
projector.
“Video” is electronic data that can be stored, transmitted, and decoded into a series of images.
Movies, also known as “films,” used to be shot using film cameras, but these days, most are shot with
digital cameras, which means they are actually videos.
Everything in this document is concerned with video, and not film.
I wrote this video, which explains it, kinda.
Veritasium - The History of Video
LUMINOSITY vs LUMINANCE
Luminosity is NOT relevant to digital video at all. In astronomy, luminosity is the total amount of energy
emitted from an object per unit of time. I will not refer to “luminosity” in this document again.
Luminance is relevant, however. It is a measurement of the intensity (or, brightness) of light that is
reflected and/or emitted from an object. (It is measured in nits, but that’s not important right now.)
WAVELENGTH vs HUE
I don’t know of a simple way to explain this...
But, basically, as a video editor or colorist, you’ll always be talking about “hue,” NOT “wavelength.”
https://en.wikipedia.org/wiki/Visible_spectrum
https://en.wikipedia.org/wiki/Hue
PIXEL:
A digital “picture element.” They are not necessarily square, or even RGB.
https://en.wikipedia.org/wiki/Pixel
These Are Not Pixels: Revisited
If you’re already confused about this, here are some fantastic articles that explain it in terms anyone
can understand: https://medium.com/the-hitchhikers-guide-to-digital-colour
BITS PER PIXEL / BIT DEPTH / COLOR DEPTH
https://en.wikipedia.org/wiki/Color_depth
I feel like a distinction should be made between “bit depth” and “color depth…” because you can have
an image that uses only pure black, and pure white pixels,, and save it as a 24-bit png. Technically, that
file is still 24-bit, even though only 1 bit is represented. In this case, I’d call that “low color depth” rather
than “low bit depth.” Or, perhaps there is a better term?
CRUSHED BLACKS:
When the R, G, and/or B channel is at or below 0 IRE on a waveform.
Taran note: In broadcasting, pure black is actually 7.5 IRE. This is not relevant here.
BLOWN HIGHLIGHTS:
When the R, G, and/or B channel is at or above 100 IRE on a waveform.
CLIPPING:
For video, when the R, G, and/or B channel is at or beyond 0 IRE or 100 IRE, on a
waveform. (So, it refers to crushed blacks as well as blown highlights.)
For audio, when the waveform tries to go “above” -0db, and can sound terrible
as a result.
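In code terms, clipping is just clamping. Once two different out-of-range values have been truncated to the same boundary value, the difference between them is gone for good. A trivial sketch:

def clamp(v, lo=0.0, hi=1.0):
    # any value beyond the representable range collapses to the boundary
    return max(lo, min(v, hi))

print(clamp(1.3), clamp(1.7))  # both 1.0 -- the highlight detail is unrecoverable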
OVEREXPOSURE:
This seems to be a subjective judgment call for when an image appears brighter than it “should” be.
This does NOT necessarily involve blown highlights. (But some tutorials will say that it does…)
LOG, S-Log:
LOG (short for "logarithmic," the kind of curve these profiles were originally based on) refers to a specific curve applied to the footage
during capture. These curves are applied to pack a wider dynamic range into the limited data range available. Traditionally
these curves boost the low-end signal to reduce noise and provide more shadow detail, and often include a 'knee' in the highlights
which further extends the total range.
These LOG profiles need to be converted in the editor to look 'normal' again, and many manufacturers provide standard LUTs to
achieve this.
LOG profiles are often combined with "exposing to the right," which is the practice of overexposing LOG footage to get the more
important information higher in the IRE range, where more data is allocated for recording it. This also needs to be adjusted for in
editing.
-Spencer Balliet
S-Log is the log flavor specific to Sony cameras. Canon, Sony, RED, Arri, Blackmagic, etc. all have their own flavors of log. They all act a little
different due to each company's color science, which is why each company provides the LUTs to transform their log to Rec. 709, but they all
work basically the same.
-Spencer Lantz
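As a toy illustration of what such a curve does, here is a generic logarithmic encode and its inverse in Python. This is NOT any manufacturer's actual transfer function, just the general shape: notice how the darkest 5% of scene light gets roughly a third of the output range.

import math

def log_encode(x, a=64.0):
    # x is linear scene light in 0-1; output is the "flat" encoded value
    return math.log1p(a * x) / math.log1p(a)

def log_decode(y, a=64.0):
    # exact inverse -- conceptually what a "log to Rec. 709" conversion starts from
    return math.expm1(y * math.log1p(a)) / a

print(log_encode(0.05))  # ~0.34
print(log_encode(0.50))  # ~0.84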
GAMUT:
I have no idea how to explain this one simply, since I do not understand it well enough.
https://en.wikipedia.org/wiki/Gamut
Simple definition: "The range of colors that can be reproduced by a system." A gamut can be measured (e.g. the gamut of your
monitor) or specified by a standard (e.g. the sRGB gamut). The latter are typically defined by a set of three primary colors.
-Jason Gerecke
CHROMATICITY DIAGRAM:
https://en.wikipedia.org/wiki/Chromaticity
A 2-dimensional representation of all colors. There are many different ways to
draw them, but it’s important to note that the exact colors are probably not
accurate. Your screen cannot show you colors beyond its own gamut, and
therefore, others have been substituted. A true diagram would have
significantly more saturated colors on the edges.
A gamut is drawn on top of a chromaticity diagram. One diagram might contain
several gamuts, like in the image to the right.
Tucker note:
Everyone should be using the 1976 chromaticity diagram, NOT the VERY COMMON 1931
diagram. Harald Brendel explains it best.
DEAR GOD YES. LEARN GAMMA (especially in context to LOG). It is one of the foundations of colour science. It
should be one of the first things someone picks up - not judging you - for example; I still find some of the more
straightforward concepts of maths, in general, [difficult] to grasp, but have no issues with the complicated stuff. When
I slip up on the complicated stuff, it's always due to my skipping or never learning the more straightforward parts.
Had you known more about Gamma, the questions would be completely different. Start by looking at what gamma
means in terms of Log. Then look at HDR again. However, meanwhile, if you want your brain fried, watch all parts of
this: https://www.youtube.com/watch?v=yZKDzT8pwTI
-Caspar Brown
Here’s another great link for learning about gamma by the people who make the chart that everyone uses - JP:
http://dsclabs.com/wp-content/uploads/2018/12/DSC-LABS-SETUP-Ver-2.pdf
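In its simplest form, gamma is just a power curve applied on the way into the file and undone on the way out. The sketch below uses a plain 2.2 exponent; real standards such as sRGB add a small linear segment near black, so treat this as an approximation:

def gamma_encode(linear, gamma=2.2):
    # compress linear light so more code values are spent on the shadows
    return linear ** (1.0 / gamma)

def gamma_decode(encoded, gamma=2.2):
    # expand back to (approximately) linear light for display or math
    return encoded ** gamma

print(gamma_encode(0.18))  # ~0.46 -- 18% grey lands near the middle of the encoded range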
Color spaces and gamuts are usually used interchangeably in the context of the range of colors that
are available.
A color model is how you define or calculate colors.
CMYK (Cyan Magenta Yellow and Key(black)) is the color model usually used in professional print for
instance.
RGB is usually used for emissive displays and is additive.
(CIE)LAB expresses color as three numerical values, L* for the lightness and a* and b* for the
green–red and blue–yellow color components. It was made to better mimic human vision.
Y-UV defines a color space in terms of one luma component (Y′) and two chrominance (UV)
components. It was made to better mask compression artifacts or errors, by taking human vision into account.
HLS is best explained as follows:
Hue is a degree on the color wheel; 0 (or 360) is red, 120 is green, 240 is blue. Numbers in between
reflect different shades.
Saturation is a percentage value; 100% is the full colour.
Lightness is also a percentage; 0% is dark (black), 100% is light (white), and 50% is the average.
A colorist can take advantage of the way certain color models or color spaces work, in order to do
certain operations a lot more easily. A very basic example of this would be using HLS to drop lightness
without affecting saturation, or pivoting colors in certain ways that are not easily reproduced in RGB.
-Espen Flagtvedt Olsen
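A quick illustration of Espen's last point, using Python's built-in colorsys module (which spells the model "HLS" and orders the values hue, lightness, saturation):

import colorsys

r, g, b = 0.8, 0.4, 0.2                      # an orange
h, l, s = colorsys.rgb_to_hls(r, g, b)
darker = colorsys.hls_to_rgb(h, l * 0.5, s)  # halve lightness; hue and saturation untouched
print(round(h * 360), s)                     # 20 0.6 -- hue in degrees, saturation as a fraction
print(darker)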
If you make video content for the internet, you're probably working in the 24-bit sRGB color space.
Now, sRGB uses the same "primaries" or "chromaticities" as REC.709 (also known as ITU-R BT.709),
which means that the reddest red in REC.709 is the exact same reddest red as in sRGB, and so on for green
and blue. However, I believe that REC.709 has a lower color depth than sRGB, since it can only represent
approximately 220 values of gray (16 through 235), rather than 256. (It looks to me like every 7th value is
skipped when converting from sRGB to REC.709, because of some horrible thing called "studio swing.")
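Here's a tiny Python sketch of that conversion (often called "full range" to "legal" or "studio" range), and of where the skipped values come from:

def full_to_legal(v):
    # map full-range 8-bit (0-255) into the studio-swing range (16-235)
    return round(16 + (v / 255) * 219)

merged = sum(1 for v in range(1, 256) if full_to_legal(v) == full_to_legal(v - 1))
print(merged)  # 36: of 256 input greys, roughly every 7th collides with its neighbour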
RGB, sRGB:
RGB is any additive color model that uses red, green, and blue.
sRGB (Standard red green blue) is the default color space used on the internet, and on most devices. It is
usually, but not always, 24-bit. (8bpc)
Many tutorials and articles will use these terms interchangeably, and/or assume that sRGB is always
24-bit (8bpc), which can all get extremely confusing!
COLOR TEMPERATURE:
The color of a light source, described by comparing it to an ideal black-body radiator heated to a given
temperature, and measured in Kelvin.
Now, in the real world, iron melts at 1811 Kelvin, and boils (turns into a gas!) at
3135 Kelvin.
And, even though light bulbs will usually list a “color temperature” on the box, that
doesn’t mean that the light bulb actually gets that hot. (Unless it’s an incandescent
bulb, in which case, that’s exactly how hot the tungsten filament gets!)
DYNAMIC RANGE:
The ratio of the brightest to the darkest parts of an image. This is typically measured in "stops," also
known as "exposure stops," where each stop is a doubling of light.
https://www.premiumbeat.com/blog/what-is-dynamic-range/
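The math behind the unit is simple: since one stop is a doubling of light, the number of stops is the base-2 logarithm of the contrast ratio.

import math

def stops(brightest, darkest):
    return math.log2(brightest / darkest)

print(stops(64, 1))      # 6.0  -- about what an SDR monitor shows (see SDR, below)
print(stops(200000, 1))  # ~17.6 -- about what an HDR monitor shows (see HDR, below)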
I've taught photography for years, and in my humble opinion... dynamic range (or the lack thereof) is a prime problem in
photography. I describe it as a piano keyboard. We understand that there are tones (audio) above and beyond the range of
tones a piano keyboard can play. But the dynamic range of tones a piano can play is limited. Great music has been made within
this limited range of tones, but it was all limited to the range that the piano can play.
Your eyes have a broad range of tones in which they can see detail. But a digital camera, compared to our eyes, has a keyboard
(range of tones) slightly more than half of what our eyes do. Even though our eyes can see it, everything above or below the
narrow range of tones that the digital camera can record, will be lost... truncated to that deepest tone, or the highest pitch of
the camera's very narrow keyboard.
The interesting thing about a camera "keyboard" is we can move the narrow range of tones lower than what our eyes can see,
or above what we can see, via manual exposure. The keyboard isn't any broader, but we can put it exactly where we want it.
Thus with a digital camera, we can photograph the surface of the sun, or the darkest sky... beyond where the fixed dynamic
range of our eyes can record tones.
-Douglas Henderson
SDR:
Standard Dynamic Range. If the box your monitor came in isn’t gloating that it’s HDR (High Dynamic
Range), then it’s SDR. An SDR monitor can show about 6 stops of dynamic range.
Typically, the brightest that an SDR monitor is expected to get, is about 120 nits.
HDR VIDEO vs HDR STILL FRAME vs HDR PHOTO:
“HDR” stands for “High Dynamic Range.”
For HDR video, the idea is that the brightest white that a typical (SDR) monitor can produce… is not
nearly bright enough! The real world can get so much brighter!
Imagine an SDR video of a person wearing a white shirt, with sunglasses that show a reflection of the
sun. (A “specular highlight.”)
If the video is HDR rather than SDR, the only difference should be that
the specular highlight is much brighter. The brightness of the shirt, and
everything else, will be exactly the same.
So, HDR video isn’t just a brighter version of an SDR video. It simply
allows for brighter colors to be used if and when you need them.
HDR monitors typically start at ~500 nits, and some professional models
can get brighter than 10,000 nits! (For reference, the sun is about 1.6
billion nits, if you stare straight at it. A welding arc can be even
brighter.)
Because video is a sequence of individual photographs, you might think
that if you pause an HDR video, that you’d be looking at an “HDR
photograph.” But, that’s not what most people mean when they’re talking about HDR photography.
Instead, I have coined the term "HDR still frame" to describe that. (If there is an existing term, let me
know!)
HDR photography actually involves the careful compositing of multiple exposures into one “SDR” image.
These images do NOT need to be viewed on an HDR display - they can be viewed on an SDR display or
even printed on paper. HDR videos (and by extension, HDR still frames) must be viewed on an HDR
display with its own built-in illumination.
This distinction can be quite confusing sometimes.
An HDR monitor can display approximately 17.6 stops of dynamic range. (Compared to 6 stops on an
SDR monitor)
https://skylum.com/blog/hdr-photography-vs-hdr-tv
https://www.digitaltrends.com/photography/what-is-hdr-photography/
http://files.spectracal.com/Documents/White%20Papers/HDR_Demystified.pdf
LUT:
“Lookup Table.”
Lookup tables are used to map from one colorspace to another.
They simply take an input value (say, 42% gray in the Red channel) and re-map it to a new output (55%
gray in the Red channel).
It's a lot like grading, just automatic, and unless it's very specific (S-Log to Rec. 709, for instance) the
results can be extremely unpredictable.
This is why most creative LUTs are very generic and soft.
Note that there are input and output LUTs.
-Espen Flagtvedt Olsen
https://en.wikipedia.org/wiki/Lookup_table
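Here is a minimal sketch of the lookup idea for a single channel, with linear interpolation between the table's entries. Real grading LUTs are usually 3D (they map whole RGB triplets), but the principle is the same:

def apply_lut(value, table):
    # value in 0-1; table holds outputs at evenly spaced input positions
    pos = value * (len(table) - 1)
    i = int(pos)
    if i >= len(table) - 1:
        return table[-1]
    frac = pos - i
    return table[i] * (1 - frac) + table[i + 1] * frac

# an illustrative 5-point table that lifts shadows and rolls off highlights:
film_ish = [0.05, 0.30, 0.55, 0.80, 0.95]
print(apply_lut(0.42, film_ish))  # ~0.47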
18. (Question) On this un-corrected, un-clamped footage from the Canon XA20, why is the brightest
part of the image (the sun, or a bright light) set at 275 (110 IRE), rather than 255? (100 IRE)
Download the video clip if you need to.
Or, you can get it from here: https://youtu.be/m0D2H0s-TMo
(The spot I was looking at is timecode 18:25:48:19, or 27 seconds from the start.)
(You can also find a very brief example of pure white light at timecode 18:25:35:10, or 00:00:13:27
from the start)
ANSWER: Because the Canon XA20 was designed with 10 IRE of extra headroom, to give more flexibility in
post. This is set specifically by Canon and is not necessarily consistent across all manufacturers.
19. (Question) At that exact same timecode, why is the pure white blown highlight
NOT the "highest" part of the RGB waveform?
ANSWER (Taran): If you view “luma” on the scopes, you can see that the white highlight
is in fact the highest part of the waveform, and nothing gets higher than 110 in this case.
Because chrominance is relative to the luminance, it is “added” or “subtracted” from
there.
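A sketch of that relationship, using the classic luma and color-difference definitions (Rec. 601 weights shown here; Rec. 709 uses slightly different ones). A pure white pixel has zero chroma, so there is nothing left to push it above the luma trace:

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma: a weighted sum of R, G, B
    u = b - y                              # blue color difference
    v = r - y                              # red color difference
    return y, u, v

print(rgb_to_yuv(1.0, 1.0, 1.0))  # ~(1.0, 0.0, 0.0): white is all luma, no chroma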
20. (Question) Why does the XA20 also put its “100% zebras” at 110 IRE, rather than 100 IRE? (The spot
I am talking about is at timecode 18:27:18:14, or 1:57 from the start.)
Answer: “Because the XA20 clips at 110IRE, not 100IRE, and the 100% zebra is supposed to convey the
part of an image that is clipping.” -Ian Servin
21. (Question) If you have “YUV” video where the information goes above 100 IRE, but it is able to be
recovered, is that really “clipping”? Or, is there a different term for that? (If not, I propose the term
“pre-clipping”)
Answer: (Which might be wrong/inaccurate) Those are called “unsafe colors,” “illegal colors,” or
“whiter-than-white.” If your video is going to be broadcast over the air, they would need to be
manually clipped (truncated) using a video limiter, to make the video “broadcast safe.” For YouTube,
it’s not a problem to have them.
22. (Question) When “clamp signal” is OFF, why does my Lumetri Scopes waveform still only
SOMETIMES show the data above 100 IRE?
Answer: Might just be a bug, or poor implementation...
23. (Question) It’s widely proclaimed that shooting in LOG (which results in a “flatter” image) gives you
much greater ability to color correct (and color grade) later. For 10-bit footage, that makes sense to
me. But if you’re shooting in 8-bit, wouldn’t shooting in LOG just massively reduce your available
color depth? This article claims that it’s not a problem.
I think that this footage of a kitten was shot using 10-bit. But, if it was shot in 8-bit, would it be a
mistake to use S-Log2?
Excellent article on sLOG.
Here is an example of 8 bit vs 10 bit LOG -Link provided by Stephan
Answer:
“You are trading bit depth for some extra dynamic range. Having LOG footage does make it easy to influence
colors more (seeing how they're robbed of much of their original color information), but its primary purpose is to
create more dynamic range. It's actually recommended (on 8-bit cameras, and, while less so, on 10-bit ones too)
that if you can control the dynamic range in the room and you don't need the extra dynamic range, you shoot in
a standard or natural-looking profile. The codec simply doesn't have as much information for you to move stuff
around.
This is why many professionals jumped for joy when 10-bit depth with 4:2:2 chroma subsampling started making
its way to smaller cameras: the masses were getting better tools for using LOG profiles.
-Koto-Kun
Answer:
In your tutorial, you're right. You don't want your picture to look as flat as possible. People totally went overboard
with that whole concept. Someone invented LOG files, but people didn't get it. The whole idea of a Log file is to get
as much detail as possible in the shot, not to bunch all the information together. Log will prevent information from
getting clipped out, because it curves off the top end and lower end. The scope you see there is actually worse than
what you'd want. You want information as spread out as possible, without getting clipped. This gives you the ability
to have the greatest freedom in grading, as there'll be a distinct difference between a white wall and a white light
for example, allowing you to grade those 'separately'. If you'd shot like the crunched scope shown there, there
would probably not be enough value difference to grade those two separately.
-Bart Kuipers
Answer:
Rule of thumb, do not shoot 8-bit log footage.
-Jean Paul Sneider
24. (Question) Does HDR really have blacker blacks than SDR, as this article claims? Or is it just the type
of display? If so, it seems to me that SDR footage can also be shown on that kind of display, and
therefore achieve the same level of blackness.
Answer:
HDR is always at least 10-bit, so HDR displays have far more possible shades of near-black,
compared to SDR monitors. So, HDR can show more DETAIL in the darkest blacks. But this article
seems to be implying that the absolute blackest black (zero) will always be MORE BLACK on an HDR
display, compared to an SDR display. That is NOT true.
...
“LCDs are backlit, and even a signal value of zero will produce some light.
I don't work on LCD displays; the LED display tech I work on is truly black. But it can also be used as
an SDR monitor. So my SDR monitor has a darker black (and higher contrast ratio) than any HDR LCD
monitor I am aware of.
HDR monitors are not guaranteed to have an absolutely darker black.
In fact, they aren't even guaranteed to have a brighter white either, in the case of cinema standards.
In cinema, SDR goes up to about 48 nits. Dolby vision is 108 nits and is considered to be HDR but my
SDR macbook pro has a screen luminance of around 250 nits."
-Tucker Downs
Taran note (info from Tucker):
The reason Dolby vision is still considered HDR even at “only” 108 nits, is because the theatre room
itself is almost completely black. So, your eyes have adapted to that.
For the same reason, at 100% brightness, your smartphone screen can be pitifully dim in broad
daylight, but seems blindingly bright in a pitch-black room.
25. (INFO) Taran’s theory of why it is important to care about fundamentals for your chosen art form
Totally agree about learning fundamentals. Also for color science/theory. You should learn what everything
does and is supposed to do. You want to know all the rules so you can choose which ones to break instead of
just breaking them because you didn't know. Breaking rules is fine, if done on purpose :) -Bart Kuipers
26. (INFO) How to best convert to grayscale / black and white
Just lowering the saturation to 0 (“averaging”) can be the WORST solution.
Premiere’s Black & White effect is probably your best bet.
You can also try the Channel Mixer effect, and check “monochrome.”
By default, it just uses the red channel. Not great.
You can also try using just the green, or just the blue channel.
These values give almost the same result as the B&W effect: 28, 58, 14, 0.
(Notice that they add up to 100.)
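In code, the Channel Mixer's monochrome mode is just a weighted sum per pixel. With the values above (which are close to the Rec. 601 luma weights of 0.299/0.587/0.114):

def to_gray(r, g, b, wr=0.28, wg=0.58, wb=0.14):
    # the weights sum to 1.0, so pure white stays pure white
    return wr * r + wg * g + wb * b

print(to_gray(1.0, 0.0, 0.0))  # 0.28 -- a pure red pixel becomes 28% grey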
In Photoshop, use Adjustments> Black & White. It’s far more powerful than what
Premiere has.
“Converting to black and white is 100% subjective. There is no "wrong" way. What gives you the
best results just depends on what you're trying to accomplish. Using the newish Hue vs Luma in Lumetri to
adjust the luma of specific colours before a desaturate should actually give you even more control than
Photoshop's Black & White because you can target any specific hue: not just Reds, Yellows, etc.”
-ThisIsTeeKay
↓ ↓ Keep scrolling ↓ ↓
30. (INFO) This video poses a problem, but does not give a complete or correct solution:
Computer Color is Broken, by MinutePhysics
How to REALLY fix Photoshop’s blur:
For existing documents, go to Edit > Convert to Profile > Profile > Lab Color
When creating a new document, use Color Mode: Lab Color
However, you still need to convert it to RGB when it’s time to export as a .png, .jpeg, etc.
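The problem the video demonstrates is that naive blurs average gamma-encoded values, which makes color boundaries too dark; averaging in linear light behaves the way light actually mixes. A sketch of the difference, assuming a simple 2.2 power curve:

def average_encoded(a, b):
    # what a naive blur does: average the stored, gamma-encoded values
    return (a + b) / 2

def average_linear(a, b, g=2.2):
    # decode to linear light, average, then re-encode
    return (((a ** g) + (b ** g)) / 2) ** (1 / g)

print(average_encoded(0.0, 1.0))  # 0.50 -- the overly dark seam
print(average_linear(0.0, 1.0))   # ~0.73 -- the physically plausible mix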
31. (Question) In Photoshop, can I get color scopes that are as good or better than Premiere’s?
Answer:
For Photoshop: open a new window (dual monitor; make the second monitor
output full screen), take an HDMI-to-SDI converter (like the one from BMD, for pretty
cheap), and plug the SDI into a waveform/vectorscope-capable monitor, or into a
DeckLink in a PCIe slot, which will take the input into Drastic 4KScope or ScopeBox.
-Ian S.
a. Is there a plugin?
b. (INFO) Photoshop’s histogram can expand to show more info at once, but it’s not much
better.
32. (Question) In Premiere and Photoshop, can I get bezier handles on my curves tool? That would be a
lot easier to control…
Answer: Probably not. But Resolve has them...
The following questions are concerned with SDR, 24-bit (8-bpc) color.
33. (Questions) Do you agree with these fundamentals of mine for color CORRECTION? If not, why?
-A pure white blown highlight (100+ IRE on R, G and B) cannot be recovered, and should
be left at 100.
-If you have crushed blacks on 1 or 2 channels, you should leave them at 0.
-If you have crushed blacks on all 3 channels, definitely leave them at 0. In fact, if that
blackness has some digital noise in it, you should lower your blacks until all the noise is
gone.
Yes, but ensure that blackness ranges don't affect the image negatively for the sake of reducing noise - look
into digital noise reduction before pushing into shadows more than you need to. -Caspar Brown
34. (Questions) Do you agree with these fundamentals of mine for color GRADING? If not, why?
You could do all sorts of creative stuff when grading, there are no rules about right or wrong in grading. -Bart
Kuipers
-A pure white blown highlight (100 IRE on R, G and B) should probably still be left at 100.
Answer: No. http://lookbook.colorist.us/?p=64 (Link from VYZ Studios)
-Crushed blacks CAN be brought up – for example, old film has noticeably dark-red “blacks.”
-There are very few reasons to ever let the "curve" slope downwards/backwards
(I have only ever done this to create a chrome effect https://youtu.be/jrISZ5jdmIs?t=163)
(Can you think of any other reasons to do it?)
-It is okay to deliberately blow the highlights, or crush the blacks. But you really need to
know when and where that would be appropriate. (And I do not… do you?)
1. If there simply isn't enough highlight information... you might have a little bit of detail, but pulling that
back can often reveal chunks of clipped white. In this instance, blowing the highlights might be more
visually pleasing.
2. If your shadows and blacks are super noisy with very little detail and beyond saving with temporal
Noise Reduction then crushing these can be your only option.
3. Crushing the blacks is also very common as a stylistic choice for dramatic/moody looks. The term
'crush the blacks' also gets used a lot when referring to crushing the shadows while raising the black level
- to emulate a film-stock look.
-Connor Ayliffe
-Always have your waveforms/scopes visible when color grading
But only look at them to make sure you're not clipping after you've done your creative grade. If you keep to
your scopes too tightly, you might think something is going to look bad while it could actually be really
cool what you're doing. - Bart Kuipers
35. (QUESTION) Do you agree with these as GUIDELINES for color CORRECTING? (in SDR!)
Taran note: There is some confusion over “IRE” here. I am only speaking about the numbers as they
appear on the (Premiere) Lumetri waveform scopes. Not how they are used in cameras and
broadcasting.
100 IRE: Bare bulbs, the sun, welding torch, specular highlights, etc. Do not let anything else touch
100.
80-90 IRE: Our set's lit windows with white curtains on them
~70 IRE: White skin tone (I center the red mass around 70. The actual luminance
channel is around 60. So, maybe I should have it even brighter??)
~65 IRE: Asian/hispanic/black skin tone (According to this book)
~60-65 IRE: Black skin tone
All skin tones should line up with +i on the vectorscope (The skin tone line)
20-1 IRE: Black objects, lit. Clothing, PC components, etc. Just DO NOT let it touch 0 IRE.
0 IRE: Parts of the scene that are deliberately unlit CAN be this dark. Also, if there is digital noise
close to 0, you may wish to lower it until the noise vanishes.
These guidelines are much better for a camera operator/DIT during filming. Exposing for these levels should
fall more on them than on you as the colourist. I wouldn't try to follow them to the letter every time.
-Caspar Brown
Taran note: Most people seem to think that these guidelines are fairly decent.
And, what do you think about Ansel Adams’ “Zone system” as described here?
https://www.cinema5d.com/primary-colour-correction/
Comment: Fascinating. But I don’t really find it helpful for digital cinematography. -John Pooley
Comment: Yes. - Caspar Brown
Taran note: I am seeing Ansel’s zone system referenced in a lot of tutorials in regards to digital video, and many
commenters seem to like it. I’m not sure what John doesn’t find helpful about it.
(Captain Disillusion goes even further, using vignettes, textures, virtual 3D camera moves, and faux
lens blur.)
The broadcast networks will often shoot the screen with a camera (which looks terrible and takes
more time) or often they’ll just recreate the graphics themselves. I recommend blurring irrelevant
content, cropping the web page, and layering that on top of your own logo loop. If you’re
producing for HDR then it would be advisable to bring the whites down a couple %. -John Pooley
37. (Question) Do you know of any other fundamentals of color correction or color grading? And, do
you have any useful “tricks?” Please share them with me!
Feel free to make a copy/backup of this document. I don’t plan on ever taking it down, but you never
know what might happen. It’s always a shame to follow a link to a great resource only to find that it’s
no longer there!
Many people have suggested that we switch to DaVinci Resolve, because the color correcting/grading
options are much better than Premiere’s. This… is a bad idea for many reasons. None of us have the
knowledge required to utilize those color tools most effectively - and even if we did, “properly” grading
footage will take more time than we have, since we make daily videos. And although Premiere certainly
does have its issues, Resolve is still lacking in essential NLE features that we DO need - it doesn’t even
have Q and W (Ripple trim to previous/next edit)
This is like recommending a $500K Rolls Royce to a guy who just needs a $5,000 Toyota to commute to
work, when he previously just took the bus. Getting a car and learning to drive it, is plenty. We really
don’t need all the fancy stuff.
If we do another commercial or other large budget project, then we’ll probably use Resolve for the
color.
The Hitchhiker’s Guide to Digital Colour 1 2 3 4 5 6 ← I feel like these articles were written just for me.
https://www.khanacademy.org/partner-content/pixar/color
https://blog.frame.io/2018/05/21/premiere-lumetri-guide/
Bias lighting
https://cinematiccolor.org/
https://filmsimplified.com/
http://color-artist.blogspot.com/2015/12/
https://www.fxphd.com/details/304/
https://www.fxphd.com/details/380/
Videos:
“On LOG: go watch Filmmaker IQ’s fantastic technical breakdown of dynamic range here”
Insider Knowledge - An easier way to grade log footage
How NOT to balance Highlights and Shadows | Davinci Resolve
White balance: Important or Overrated?
DaVinci Resolve 15 - The Art of Color Grading (1:56:31)
DaVinci Resolve 12 - 52b Curves Palette - Soft Clip, S-Curve and Splines
Books:
https://www.amazon.ca/Color-Correction-Handbook-Professional-Techniques/dp/0321713117
“Read it and don’t touch color again until you have.” -Caspar Brown
http://www.moderncolorworkflow.com/
A Broadcast Engineering Tutorial For Non Engineers
Another decent color grading tutorial – I was most surprised by how much better her face looked after it
was masked and corrected separately.
https://www.youtube.com/watch?v=_m-9R1oqvh0
NOTE that the way he does this, by using the same footage stacked on top of itself, seems quite
inelegant to me now. I can see why Resolve’s node system is preferable.
How our eyes see chrominance and luminance, and how the Bayer filter pattern is designed to take
advantage of that
Note that, although rod cells are for detecting luminance, not
chrominance, they still have a frequency response - highest at
498nm -- blueish-green. This blew my mind - most diagrams you’ll
see like the one to the right, will only show the cone cells, with a big
gap between the blue and green response curves. Adding the rod
information makes the whole thing make a lot more sense.
Beware that even your graphics card might be modifying the colors that you see:
https://forums.adobe.com/message/8818586#8818586
Here are my Nvidia settings:
And here are my display settings in Windows:
You should definitely only be using your monitor’s native resolution when editing videos.
The 150% UI scaling works fine, and is important so that things aren’t so small on my screen that I
cannot see them… but many programs have issues with it, including Premiere.
Extremely detailed article on gamuts and color spaces, much of which is still over my head:
http://www.tftcentral.co.uk/articles/pointers_gamut.htm
In this article, I learned the term “temporal dithering,” which blew my mind, because I had never heard
or thought about it before…
Video that explains the difference between primary and log wheels in Resolve:
www.youtube.com/watch?v=M9OyxO8EqWM
Some definitions which I feel are not important enough to go in the main list:
LUX vs LX:
They both mean the same thing. Lux.
Other important bits of information that I didn’t know, or wasn’t sure about:
“You may also notice that your camera has a menu selection to choose a color space of either SRGB Or
Adobe RGB. This is a big point of confusion. Understand that this choice only applies to JPEG captures.”
“Also if you’re shooting RAW, there is no white balance built into the file. But if it’s a JPEG, the white
balance becomes part of the file immediately.”
Source
Furthermore, if you have a JPEG file, the white balance is "baked in"; it can't be easily changed using the
most accurate white-balance mathematics. With a raw image, the source xy values (in the CIE sense) can
be more correctly adjusted for a white balance change.
Apparently, some LUTs are so sophisticated, it’s nearly impossible for most people to recreate them
Here’s an interesting exception
Here’s just one example of an article that gets things woefully wrong:
http://changingminds.org/explanations/perception/visual/hsl.htm
They refer to the “L” in HSL as “luminosity,” when they actually mean “luminance,” but even that is
incorrect. The “L” stands for “lightness,” a very different concept.
[THE END]