
The tutorial is here: → https://youtu.be/2qd2mq2bp9s ←
Reddit discussion thread is here.

Due to vandalism, I had to turn commenting off for the general public. This also means that
most of you can’t SEE any of the comments. (Google’s system is SUPER DUMB.)

So I’ve been adding their text directly into the document, when I have time.

If you are well-versed in these things, and want to be able to view and make comments,
and suggest edits, click “👁 View only” (top left) and then click “request access.”
(No promises)

(This document is best viewed on desktop, not mobile.)

1. How would you color correct this shot of Linus sitting in front of a bright window? Which
effect(s) did you choose to use, and why? Did you decide to use a mask? Can it be done
“good enough” without having to use a mask?
(https://www.dropbox.com/sh/dqu5z3ap0r0mmjn/AABs4aC2r36I-kUoa1UnICjwa?dl=0)

A screenshot of the raw footage →

Note that this is a shot from ​this video​, where the


camera was exposed for a constantly moving interior
shot -- a “run and gun” situation. People are saying
that the camera operator should have reduced the
exposure, but I disagree. I’m glad they didn’t try to
change it every time someone appeared in front of the
window. If this was a shot from a movie, then yes, they
should have reduced the exposure.

Answer:​
The human eye is naturally attracted to bright things based on human evolution. As a rule of thumb, a bright
background can distract from your subject in a case like this because there is so much background and it is so
bright. It’s a large portion of the frame and near white. Yes, it is outside and yes it is bright, but don’t let that trick
you into thinking that it “has” to be a brighter value on your waveform simply because it’s outside. This shot is
exposed that way and it’ll likely have to be a bit brighter in color correction so as not to break the shot, but
outside viewed through a window doesn’t have to be at 100 IRE.
-John Romero

Answer:
The key is reducing attention on that bright
window and re-centering it on the subject. For that I:
a) Did a general correction, reducing the highlights a bit
and improving contrast.
b) Used a quick qualifier to grab Linus's skin, then brightened it and
gave it more color.
c) Increased the greens in the sofa and pushed them more blue with
Hue vs Hue and Hue vs Sat. I duplicated that node for
more punch.
d) Turned the highlights a bit more pink/yellow for a more
pleasant look. (Subjective.)
-(Lucas Sanczyk's approach)
Answer:
This is a situation where you want to ETTR (expose to the right) in camera and use the knee function, a picture profile
with the highlights rolled off, or shoot it in a really nice codec / Log / RAW. Since the important
information in this scene (Linus) is less luminous and should be, optimize the exposure for it, and
let the highlights blow if need be. This also prevents the viewer from noticing an exposure change if
you capture a certain piece of furniture (or Linus) in a wide shot or other alternate angle. As you can
see from the waveform, the camera operator did a good job.

To fix this in post I simply brought the Highlights slider in the Lumetri Color panel down to -50 (and
later I bumped the Contrast +10 so his skin doesn't look so plasticky; new version on the right). -John Pooley

Answer: (No masking needed)


1. First, I brought down the shadows a bit, as I felt the shot
needed a bit more contrast.
2. Using the keyer in Resolve (Secondary Correction in
Lumetri should achieve the same results), select only the
brightest parts of the image, and massively soften that key.
Some of the key will bleed onto other parts of the image,
and that's OK here: because of how soft the key is, it should
be unnoticeable in the final product.
3. I pulled down the key using the midtones so that it retains
some of the highlights and does not grey out the brightest
whites.
4. I then added some contrast to the image using an effect
called Contrast Pop in Resolve (Clarity is similar in Lumetri) to
pop the image slightly.
5. The final tweaks were minor, made in an effort to balance out his skin tones and the highlights to make
them appear natural, or as close as possible considering the lighting appears to be mixed color temperatures.
This step is to taste, and whatever will best match the other shots in the video.
-Spencer Lantz

Answer:​ (Uses a mask)


If I didn't have time to add masks, I would first max the scope
as much as possible without blowing anything out. Then
simply lift the highlights and let them be blown out, and then
drag down the shadows to add more contrast. The goal is
just to get Linus looking decent; the highlights are fine even if
they are blown out.
If I were to use masks, I would first max the scope as much as I could without clipping.
Then I would isolate the face and brighten it a bit to match the rest of the shot.
After that I would add a weak vignette around the entire image, but keep the highlight.
This is just to darken the corners a bit.
This is followed by a very soft mask that excludes the subject. I would increase the highlights even more and
decrease the shadows. Since Linus is masked out, we've got a lot more room to play with the luminance without
risking the subject.
COLOR GRADE:
I would give this shot a weak teal-and-orange color cast:
a bit of teal in the shadows, yellow-orange in the midtones, and a hint of orange in the highlights.
- ​Espen Flagtvedt Olsen

↓ ↓ Keep scrolling ↓ ↓
2. How would you color correct this shot of the inside of the Oneplus factory? Which effect(s)
did you choose to use, and why?
(​https://www.dropbox.com/sh/dqu5z3ap0r0mmjn/AABs4aC2r36I-kUoa1UnICjwa?dl=0​)

(Not Taran’s grade used ​in the video​) (Taran’s proposed grade)
Answer:​
I like your correction of the shot at the OnePlus factory. Unlike question #1, those highlights are small
ceiling lights and they are totally blown (they have little "hats" on the waveform no matter what you do). Linus's
white clothing can start to blend in with them, though, but there's not much avoiding that.
To step forward in time to a later question: when color correcting, I do agree with Ansel Adams's zone
system, and I believe in putting caucasian faces at about 70 IRE where possible. The reasoning harkens back to
the eye being attracted to lighter objects. It may be tough to do this if the talent is in shadow relative to a bright
background as in Q#1, but as long as the shot isn't totally run and gun like that one, this rule is generally a
good one.
-John Romero

Answer:​
1. I added a little contrast again, using
clarity/midtone detail on the image.
2. I added saturation to the oranges in the image, and
slightly adjusted the hues of the red/orange range to
even out skin tones. I also did this to the blues,
to slightly change the blue to taste. I kept it kind of
subtle in this case, as I didn't feel a more heavy-handed
look was necessary for this shot.
3. I added a small contrast adjustment to the image
using curves, over the range that applies to Linus's face.
4. I then lowered the shadows just a little bit to give the
image a hair more depth/contrast.
-Spencer Lantz

Answer:​
In the first node I added some contrast to the shot using
LOG controls.
On the second node I played with the saturation a little
bit.
On the third node I played with the curves a little bit
(Hue vs Lum, Hue vs Sat, Lum vs Sat).
On the fourth node I keyed out the highlights using a
Luma key and I brought them down using the LOG
controls.
-Ido Simchoni
Here is the Lynda course that left me with far more questions than answers:
https://www.lynda.com/course-tutorials/Color-Video-Editors/711831-2.html

3. Why did Robbie use the ​gain​ control to fix the blue shot at 2:54 of this video:
https://www.lynda.com/Premiere-Pro-tutorials/Using-RGB-Parade-RGB-Overlay-waveforms-
judge-color-balance/711831/752722-4.html
…rather than using offset, lift, or gamma?

Answer:​ Because the gain control affects the brightest values the most, and that is where the imbalance was. (Lift
mostly affects the darkest values, gamma affects the middle values, and offset affects all values equally.)
I should have been using Premiere’s “highlights” control to change the sky, not the “midtones.” -Taran
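For the curious, here's a rough Python sketch of how those four controls typically map onto the math. These are common textbook approximations, not Resolve's or Premiere's exact (undocumented) formulas:

    # Rough sketch of lift/gamma/gain/offset (textbook approximation only --
    # each application implements its own exact variant of this math).
    def apply_wheels(v, lift=0.0, gamma=1.0, gain=1.0, offset=0.0):
        # v is one pixel value, normalized to 0.0 (black) .. 1.0 (white)
        v = v * gain                # scales everything; biggest effect near white
        v = v + lift * (1.0 - v)    # raises the blacks while leaving white pinned
        v = v + offset              # shifts ALL values equally
        v = max(v, 0.0) ** (1.0 / gamma)  # bends the midtones (no clamp here)
        return v

    # Gain barely moves a dark pixel, but strongly moves a bright one:
    print(round(apply_wheels(0.1, gain=1.2), 3))  # 0.12
    print(round(apply_wheels(0.9, gain=1.2), 3))  # 1.08 (would clip at 1.0)

Which is exactly why a cast that lives mostly in a bright sky is a job for gain.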

Why didn’t he use another tool like “curves?”


Answer: I think most colourists just like the simplicity and speed of using the wheels to do primary corrections. Why
pick up a mouse and start tweaking curves on a screen when you can just turn a wheel without taking your eyes off
the grade monitor? One of the biggest qualities looked for in a colourist is speed.
Also, on large productions, colour correction (not grading) is done on set, before the footage makes it anywhere near
a post facility. The file that is used to carry that colour correction information is called a CDL, and it can only carry Lift,
Gamma, Gain, and Saturation information. -Christy Kail
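(For reference, the ASC CDL's standardized names for those per-channel controls are slope, offset, and power -- roughly gain, offset, and gamma -- plus one overall saturation value. The per-channel math is tiny:)

    # ASC CDL per-channel transfer: out = clamp(in * slope + offset) ** power
    def cdl(v, slope=1.0, offset=0.0, power=1.0):
        v = v * slope + offset
        v = min(max(v, 0.0), 1.0)   # clamp to 0-1 before the power function
        return v ** power

    print(round(cdl(0.5, slope=1.1, offset=0.02, power=1.2), 3))  # ~0.51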

Why not just use the temperature/tint sliders?


Answer: ​Colorists usually prefer to use the wheels when they can. It's often faster and more accurate 
than messing with tint and temperature sliders. -Espen Flagtvedt Olsen 
 
Taran follow-up question: 
But I have temperature and tint mapped to the same ball on the Ripple Tangent, making it very 
easy and precise for me to adjust them as needed. Is it still better to use the wheels, in theory? 
Answer: The Ripple Tangent is a proper color grading surface; it really depends on your personal 
preference. For example, if you are a writer, do you go with an ergonomic keyboard or not? Wheels move you 
in one direction at a time, rather than throughout a space like a ball does. But if you don't want to choose, see 
https://www.blackmagicdesign.com/products/davinciresolve/panels. Resolve is known for its color 
grading features (like node-based compositing) and its tight integration with the hardware panels. It 
may be a simple matter of pushing your workflow towards that and away from Adobe. -Ian S. 
Answer: On a digital waveform you can set up a YRGB overlay, so you just find something that should 
be white and adjust the gains to line all of the channels up. If you're working with lights that aren't 
perfectly on the CCT scale (meaning anything LED or fluorescent), it can be faster to work this 
way. -JP 

4. DaVinci Resolve has “Color Wheels” of lift, gamma, gain, and offset, which affect the values
in a linear way. (Robbie Carman seems to use them a lot) Does Premiere have controls that
work the same way?
Answer:
Premiere’s Color Wheels have shadows, midtones, and highlights, which don’t quite work the same
way as Resolve’s Lift, Gamma, and Gain. I believe they “curve” the values as they approach the
whites and blacks, rather than moving them linearly… but I still need to try it out in both programs to
know exactly how they differ.
There may be a plugin for Premiere that gives you controls that work exactly the same way.
Premiere’s RGB curves tool can achieve the same results, but the curves are very finicky and difficult
to handle.
-Taran

Answer:
I am not a Premiere expert so take this with a grain of salt, but from the 2 min of testing that I did, it
looks like Premiere's color wheels have a function which is trying to anchor the white point when you
are using the Color Wheels (In theory this could be helpful in look development as it would allow you
to tint the highlights while still keeping a neutral white point?). It looks like you can affect the whole
image in Premiere if you do your correction in curves.
-NM Resolve

Answer:
They are "equivalent" in that Lift affects shadows, gamma affects midtones, and gain affects
highlights. However, they respond differently.
DaVinci Resolve's method is the proper method, also used by the sensor response curve.
-Ian S.
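Since neither program documents its exact math, here's a toy Python illustration (the weighting functions are made up, purely to show the kind of difference being described): a linear lift moves every value below white proportionally, while a curve-weighted "shadows" control fades out as values rise:

    import numpy as np

    ramp = np.linspace(0.0, 1.0, 6)                # a gradient from black to white

    linear_lift    = ramp + 0.1 * (1.0 - ramp)     # linear: everything below white moves
    curved_shadows = ramp + 0.1 * (1.0 - ramp)**4  # hypothetical curve: dies off quickly

    print(np.round(linear_lift, 3))    # [0.1   0.28  0.46  0.64  0.82  1.   ]
    print(np.round(curved_shadows, 3)) # [0.1   0.241 0.413 0.603 0.8   1.   ]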

5. How did Robbie achieve the look shown at 3:33 of this video?
https://www.lynda.com/Premiere-Pro-tutorials/What-would-you-do-Creative-evaluation-examples
/711831/752713-4.html
(I couldn't get my floor to match his.)
Is it not possible for me to fully achieve that look, since I only have a PNG screenshot to work
from?
Answer: Yes. Because you're using a screenshot of a video, rather than the original, uncompressed file,
your results will always be more blotchy, posterized, and inaccurate, due to the compression and lower
bit depth.

6. Why couldn’t I get the classic “teal and orange” look to work for the shot inside of the LIGO
building? Was I going “too far?”
(​https://www.dropbox.com/sh/dqu5z3ap0r0mmjn/AABs4aC2r36I-kUoa1UnICjwa?dl=0​)

Why does this same technique seem to work so well for these shots?
https://www.youtube.com/watch?v=g1J1DKRScDY
https://youtu.be/_m-9R1oqvh0?t=450

Is it because there’s not much inherently teal or orange in the original shot?
Answer:​ You can actually get the teal and orange in pretty much any shot, even if there isn't much of the color
originally. Of course the other colors might look a bit off, as you'd add/remove teal and orange from all colors.
-Bart Kuipers

Is it because the white balance was really far off in the original shot?
Answer:​ It’s less of a question of white balance, and more about the type of light.
If you're shooting with shitty ambient lights that aren't fairly full spectrum, you can't really fix that with a white
balance adjustment. You can't turn sodium vapor light into high CRI daylight light no matter how much you white
balance a scene -Ian Servin

Does bad white balance mean there will be less leeway in color correction / color
grading?
Answer: Yes. If you save to JPG (for stills) or any compressed format, your pixel values will be computed with the
set white balance baked in. Therefore, in theory, some information might get lost if the white balance is really far
off. In practice, this doesn't really limit you a whole lot, except when grading quite extremely. -Bart Kuipers

Is that also true if you are shooting in a raw format?


Answer:​ ​No​. If you are shooting in a RAW format, there is no inherent white balance yet, as the camera just
stores the amount of light that hit the R,G,B filters of the sensor. You can then 'interpret' these numbers with the
white balance in post for almost all cameras that shoot RAW. -Bart Kuipers

One of many ​answers ​I’ve received:

(Original shot) Ido Simchoni’s grade


My result isn't very good, but it is possible to get a better result with the original lossless footage. The
way you achieve the look is by giving the whole image a blue/teal tint, and then overlaying the
original skin tones and orange/yellow tones by using an HSL key. This is a pretty great tutorial about
this technique.
-Ido Simchoni

Taran note:​ The most valuable thing I’ve learned from Ido so far is that he seems to always key or
mask the skin tones, so that they can be adjusted independently from the rest of the grade.

Another answer:
1. Brighten up the image using curves: I pull the midtones up towards the highlights, which
will compress the highlights a bit without blowing them out altogether.
2. White balance with a Temp and Tint adjustment, then adjust further using the color wheels to
remove tint and temperature issues that may appear in the midtones and highlights.
3. Added some "pop" using clarity in Lumetri, or midtone detail / the Contrast Pop effect in Resolve.
4. Using a secondary key, select the skin tones and then invert the selection; desaturate the new key
and then add blue/green tint to taste.
5. I selected the yellow railing using another key, and added contrast and adjusted the color to taste.
This is not totally necessary, but I felt the shot could use it.
6. I also felt the ceiling was a bit too dark, so I added a linear power window to the top of the image
and brightened the midtones to brighten without clipping.
7. I lowered the midtones a bit to just add contrast to taste.
-Spencer Lantz
More Answers (the screenshots here show each contributor's grade):
Yanzl, Spencer Lantz, YVZ Studios, Adrian Larsson, Dinamis, Spencer Balliet, Connor Ayliffe, Felix Detore, Lucas Sanczyk's approach, Jean Paul Sneider, Taran Van Hemert (Here's the video), Dmytro Chaika, John Pooley

Taran note: I admit that teal and orange is probably NOT the best choice for this factory
shot. Still, it’s been very interesting to see how different people would handle such an
(apparently, challenging!) request.
I feel that Ido’s grade is the best result (if you really WANT teal and orange!), and that
Spencer Lantz’s white/sterile grade seems to be the most appropriate for the scene. Though, I
would probably make both of them a bit brighter.
I can tell now that mine is not particularly good!

7. At 4:13 of this video:


https://www.lynda.com/Premiere-Pro-tutorials/How-long-how-much-effort-project-take-color-
correct/711831/752715-4.html
Robbie says that GoPro footage would be very difficult to color grade to look like Skyfall (2012),
but he does not explain why. Is it the color space? The compression? The 8 bits (per channel)?
Something else?
ANSWER: Consensus seems to be that the compression is the biggest issue. And the 8 bits
isn't helping things.

8. What kind of footage IS best for color grading?


Answers:​
“Raw” footage is better than… not raw. (But the files are much larger.)
Less compression is always better (But again, the files will be bigger.)
More bit depth is better (8 bit < 10 bit < 12 bit, etc.) (Much bigger files.)
More dynamic range is better
Codec plays a role, but there is no general rule.
The lens/glass can make a difference, in terms of depth of field, chromatic aberration,
sharpness, and color cast.
Big discussion here about when to use LOG

9. What sorts of things should be done or avoided when filming, to ensure the most options for
color correcting/grading?
Answer:​ Proper lighting is very important. (Though, that’s a different discipline entirely…)
Answer: The 'only' thing I'd care about a lot is not losing detail in either highlights or shadows. So try to expose
properly and have a 'right-ish' color balance so you can go anywhere with it: squeeze the information into
the available range of the camera.
-Bart Kuipers
Answer: As often as possible, when not shooting RAW, make sure to get the white balance close to accurate in
camera, and expose the image as brightly as possible without clipping highlights that you deem important.
-Spencer Balliet
-S

10. Also, even if it is very difficult to get GoPro footage to look like Skyfall, once you figure out
how to do it, couldn’t you just create a LUT, and apply that to all the (color corrected) GoPro
shots? If not, why not?
Answer: No.
Reason: Theoretically, a user-generated LUT could work in some instances... but it would mostly depend on
shooting scenes with a consistent look. E.g., say, soft outdoor lighting in every scene. In this instance you would
white balance and neutralise each shot, so that your shadows/mids/highlights are fairly natural and match the
shots before and after.
Then you'd create a Skyfall look on top of this, create a LUT from it, and apply it to all clips throughout. To
reiterate, this only works if all shots and scenes are quite consistent with each other.
-Connor Ayliffe

Taran note: Here's how I've come to understand it: Although you might wish to go TO the same look for several
different scenes, the picture that you're coming FROM is always going to be different. Therefore, the results
achieved by using the same LUT would also be different.
Ido explained to me that LUTs always have to have very soft roll-offs on all their effects, because the LUT maker
does not know what the footage will look like. So they have to make it general, not specific.

At 4:15 of this video:


https://www.lynda.com/Premiere-Pro-tutorials/Controlled-correcting-lighting-neutral-environment/711
831/752707-4.html​ Robbie talks about the “bias lights” behind color correction monitors, saying “They
usually have a very high CRI value - color rendering index. That’s the quality of light.” But he does not
explain any further.
11. What is a good resource to learn more about CRI or “quality of light” as it relates to video
production?
Answers:
Link from Ian Servin
Link from Jonathan Kokotajlo
Link from Bart Kuipers
Link from Tucker Downs​ (SSI)
Link from Ian S​ (TLCI)

12. Why do bias lights need to have a high CRI? (I’m not color correcting the wall behind the monitor!)
Answer:
You want "neutral" light without color casts that could mess with how you perceive what's on 
the screen. It really only matters if your entire space is also treated with special neutral-tone paint. 
-Ian Servin 
Answer: 
One good reason to have a high spectral similarity between your bias lighting and natural lighting 
is that if your camera calibration is quite bad, you can correct for that by keying to the bias lighting. 
-Tucker Downs 
 
13. Is a high CRI important for the lights used on a set? Is higher always better?
Answer​: “Y​ es​,” is the general consensus. (But, low CRI can sometimes be compensated for if you
really know what you’re doing.)

14. If it’s important for a light source to have a continuous, rather than a discrete spectrum, but a
camera only records the frequencies of R, G, and B anyway, then why does it even matter?
Answer:
The "Yellow LED" in this picture has bad CRI. The middle, which is a normal light + filter, has good CRI. -Tucker Downs

Under daylight (a continuous blackbody-like spectrum), the duck will reflect a distinct distribution of light in the
visible spectrum that corresponds to the wavelengths around yellow, and absorb the rest, thus appearing yellow.
Under 'yellow light' the duck has much the same response, as the 'yellow light' is really a mixture of photons in
the yellow part of the spectrum. Since these were the only ones being reflected anyway, the duck looks mostly
the same (but slightly different, probably due to the difference in distributions of the yellow part of the spectrum
between the white and yellow lights).
However, a "yellow" LED light will be producing a set of specific wavelengths depending on its type. If it's an
RGB LED (or set of LEDs), then the 'yellow light' will actually be a combination of red and green wavelengths.
Although it appears yellow when reflecting off white surfaces to us (and to a camera sensor), there are actually
no photons being produced with wavelengths that are in the yellow part of the spectrum! Thus, the duck
appears darker and redder, as the yellow light it was previously reflecting is no longer there, and it reflects red
light slightly better than green (not an inherent fact, but it looks like it does).
-Samuel Reynolds

15. If a light source with a very low CRI is desirable for a specific shot, due to the monochromatic look
that it would create, wouldn't it give you more control to just go with a higher CRI when filming,
and then make the shot more monochromatic during color correction/grading?
Answer: 
It may give you more flexibility in post, but a DP is striving to achieve a look in-camera and wants to be 
precise in their on-set choices. 
This is why a proper viewing pipeline on set is so critical: so the camera team can be confident that 
what they're capturing is conducive to what they want the final product to look like. 
The oversight of this on-set preview pipeline is often done by a dedicated DIT person/team. 
On smaller sets, it's just about having properly calibrated equipment and using LUTs on set to 
preview grades. 
-Ian Servin 

Alternatives to CRI, which are apparently better:


TLCI ​(Television Lighting Consistency Index)
SSI ​(Spectral Similarity Index)

DEFINING OUR TERMS


SUPER IMPORTANT!
Many tutorials will never do this, or worse, they will mix these up or wrongly define them.
16. Let me know if you think any of my definitions here are wrong or incomplete:

FILM vs VIDEO
“Film” or “film stock” is a strip of photographs that can be played in a sequence, using an analog
projector.
“Video” is electronic data that can be stored, transmitted, and decoded into a series of images.
Movies, also known as “films,” used to be shot using​ film cameras​, but these days, most are shot with
digital cameras, which means they are actually ​videos.​
Everything in this document is concerned with ​video,​ and not ​film.
I wrote this video, which explains it, kinda.
Veritasium - The History of Video

LOUDNESS vs VOLUME vs LEVEL vs (GAIN vs AMPLIFICATION)


I have no idea. But here’s an article from someone who does.
My point is that while it’s okay for the general public to use these terms incorrectly and interchangeably,
it is NOT okay for audio professionals to do so. (And, by extension, it is not okay for audio tutorials to get
it wrong, either.)

LUMINOSITY vs LUMINANCE
Luminosity ​is NOT relevant to digital video at all. In astronomy, luminosity is the total amount of energy
emitted from an object per unit of time. I will not refer to “luminosity” in this document again.
Luminance ​is relevant, however. It is a measurement of the intensity (or, brightness) of light that is
reflected and/or emitted from an object. (It is measured in ​nits​,​ ​but that’s not important right now.)

LUMINANCE vs​ ​ILLUMINANCE​.


It is easiest to think of these terms from an object's point of view. Imagine a white shirt on a black table.
All incoming light is ​illuminance​, and all outgoing light is ​luminance​. The ​illuminance f​ or both objects is
the same, but the white shirt has more ​luminance ​than the black table.

BRIGHTNESS vs LIGHTNESS vs VALUE vs LUMINANCE vs LUMA


Most tutorials/articles use the term “lightness” incorrectly. It is NOT the same as
“brightness.”
Here is how I have come to understand it:
Think of “​lightness​” in terms of paint or ink. It is a part of the object itself.
The “lightest” paint you can get would be a very pure white… but it is wrong to call it a
“BRIGHT” paint, because paint does not emit light. So, your wall can only be “​bright​” if
you shine a bright light onto it. A “lighter” paint ​does r​ esult in a “brighter” wall, but the
paint itself is NOT “bright.”
A physical swatch of ​Pantone 11-4001 TPG Brilliant White​ is still white even when there
is no visible light bouncing off of it. - It still retains its “lightness” or “whiteness.”
This means that “lightness” is independent from illumination.
When you shine a light onto the swatch, it now has “brightness,” which can be
measured. (In terms of ​lux,​ but that’s not important right now.) And, the lighter the
paint, the brighter it will be.
This means that "brightness" IS dependent upon "lightness."
In a similar way, the colors on your screen have their own ​lightness,​ but the backlight can be turned up,
which makes all of the colors ​brighter,​ but does NOT make all the colors ​lighter.
If you make video content that is distributed online, you do not know how ​bright​ the screen will be for
anyone in your audience. You can only hope that they haven’t set it to be too bright or too dark.
Or, think of it this way: The editor controls the ​lightness​, and the audience controls the ​brightness​.
Similarly, for audio, the editor controls the ​loudness​, but the audience controls the ​volume​.
...
In this document, and the associated video tutorial, I use the word “​value​” to refer to the numbers
associated with a color in the sRGB color space. If there is a better word, let me know!
I still don’t understand what exactly the difference is between ​luma ​and ​luminance​, but apparently,
when we talk about “luminance,” we should often be saying “luma” instead. I do not yet know if all
references to “luminance” in this document are accurate.
http://poynton.ca/PDFs/YUV_and_luminance_harmful.pdf
https://en.wikipedia.org/wiki/Luminance

COLORFULNESS vs CHROMA vs SATURATION


Yes, these are three different things! I don’t understand it well enough to give a simple explanation.
Please leave a comment if you do. (Instructions at top of document.)
https://en.wikipedia.org/wiki/Colorfulness

COLOR vs CHROMATICITY vs CHROMINANCE


Color​ is a combination of 3 properties: Hue, saturation, and luminance.
Chromaticity ​is a combination of 2 properties: Hue and saturation. It does not consider luminance.
“Chromaticity is in fact the property that the average person thinks of as the “color” of light (not realizing
that luminance is one aspect), the cause of much confusion in the technical discussion of color.”
- Douglas A. Kerr, P.E.
Here’s a video I made on the subject: ​https://www.youtube.com/watch?v=aFxx4jVRHME
Chrominance​… I still don’t understand. It has something to do with the fact that different hues have
different inherent luminance.

WAVELENGTH vs HUE
I don’t know of a simple way to explain this...
But, basically, as a video editor or colorist, you’ll always be talking about “hue,” NOT “wavelength.”
https://en.wikipedia.org/wiki/Visible_spectrum
https://en.wikipedia.org/wiki/Hue

BITS and BYTES


In computer science, a “bit” is either a 0 or a 1. There are 8 ​bits ​in a “​byte​.”
Here’s a video I made about this: ​https://youtu.be/LpuPe81bc2w

DECIMAL and HEXADECIMAL


You might notice that 255 can also be written as “FF.” It’s just a different way of writing the same
number. But don’t worry about it. In this document, and the tutorial, I’ve only used decimal (base 10)
(the system you’re already used to) in order to avoid any confusion.
https://en.wikipedia.org/wiki/Hexadecimal
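If you want to see the equivalence for yourself, any programming language will oblige; in Python:

    print(0xFF)              # 255 -- hexadecimal literal in, decimal out
    print(format(255, 'X'))  # FF  -- decimal in, hexadecimal out
    print(int('FF', 16))     # 255 -- parse a hex string back into decimal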

PIXEL:
A digital “picture element.” They are not necessarily square, or even RGB.
https://en.wikipedia.org/wiki/Pixel
These Are Not Pixels: Revisited

DISPLAY vs SCREEN vs MONITOR


The ​screen ​is the part of a ​monitor ​that has the pixels on it.
A projection screen is a special surface used to
display (verb) an image beamed from a projector.
I use "display" to talk about the image that could be on
either a monitor screen or a projector screen.
Note to self: need to update these pictures.

8-BIT COLOR vs “8-BIT COLOR”


Classic 8-bit color​ allows for one byte per pixel: 3 bits for red,
3 bits for green, and 2 bits for blue. That allows a total of 256
colors. You can see all of those colors in this image:
These days, when someone says “8-bit color,” they are probably talking about “8-bit ​PER CHANNEL
color,” which is actually “​24-bit color​,” where red, green, and blue are each given 1 byte (8 bits) of
information, which allows for 16,777,216 colors. It is more accurate to write this as “8bpc” (8 bits per
channel.)
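The arithmetic behind those two color counts, in Python, if you want to verify them:

    print(2**3 * 2**3 * 2**2)  # classic 8-bit color: 8 reds x 8 greens x 4 blues = 256
    print(2**24)               # 24-bit ("8bpc") color: 16,777,216
    print(256 ** 3)            # same thing: 256 values per channel, 3 channels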

If you’re already confused about this, here are some fantastic articles that explain it in terms anyone
can understand:​ ​https://medium.com/the-hitchhikers-guide-to-digital-colour
BITS PER PIXEL / BIT DEPTH / COLOR DEPTH
https://en.wikipedia.org/wiki/Color_depth
I feel like a distinction should be made between "bit depth" and "color depth"... because you can have
an image that uses only pure black and pure white pixels, and save it as a 24-bit PNG. Technically, that
file is still 24-bit, even though only 1 bit is represented. In this case, I'd call that "low color depth" rather
than "low bit depth." Or, perhaps there is a better term?

BLACK-AND-WHITE vs GRAYSCALE vs MONOCHROME


Usually, when people talk about a "black and white" image, they don't mean that the image literally has
ONLY black and ONLY white... (that would be bi-tonal). Rather, they are talking about an image that has
black, white, and many shades of gray in between. I believe that it is more accurate to just call this
"grayscale."
“Monochrome” is best used to refer to an image that has different values of a single hue. I don’t like the
ambiguity when someone refers to a ​grayscale​ image as “monochrome,” even though it technically does
fit the definition.
https://en.wikipedia.org/wiki/Grayscale
https://en.wikipedia.org/wiki/Monochrome
CHANNEL vs “COLOR”
In the RGB ​color space,​ a digital image has 3 channels: red, green, and blue. Each channel
consists of monochromatic information. Some formats (like .png) also have a 4th channel:
transparency. You can easily see all of them in Photoshop by going to Window > Channels.
Please don’t use the word “color” when you actually mean “channel!”
https://en.wikipedia.org/wiki/Channel_(digital_image)
An image in the CMYK color space has four channels: Cyan, Magenta, Yellow, and Key.
(“Key” is usually black.)

“0 IRE” to “100 IRE”


The numbers that appear along the left side of Premiere Pro’s Lumetri Scopes,
when the Waveform or Parade is visible, are "IRE" numbers. (IRE stands for
Institute of Radio Engineers, but that's not important.)
In 24-bit color (8 bits per channel), 100 IRE correlates to sRGB pixel value 255.
(As seen along the right side)
For our purposes, you can probably just think of 100 IRE as “100% lightness”

CRUSHED BLACKS:
When the R, G, and/or B channel is at or below 0 IRE on a waveform.
Taran note: In broadcasting, pure black is actually 7.5 IRE. This is not relevant here.

BLOWN HIGHLIGHTS:
When the R, G, and/or B channel is at or above 100 IRE on a waveform.

CLIPPING:
For video, when the R, G, and/or B channel is at or beyond 0 IRE or 100 IRE, on a
waveform. (So, it refers to crushed blacks as well as blown highlights.)
For audio, when the waveform tries to go "above" 0 dBFS, which can sound terrible
as a result.

OVEREXPOSURE:
This seems to be a subjective judgment call for when an image appears brighter than it “should” be.
This does NOT necessarily involve blown highlights. (But some tutorials will say that it does…)

COLOR CORRECTING vs COLOR GRADING:


Color correction​ is “fixing the problems” with the original footage, to make things look more like they
would in real life to the human eye. This means fixing the ​exposure​, ​white balance​, ​contrast​, ​saturation​,
etc. This is also where you would try to fix noisy or clipped footage. This also involves making sure that
all the shots of a scene look the same, especially if the footage is from different cameras.
Color grading​ is the fancy process of altering the image to create a tone/mood/aesthetic. The Matrix
(1999) was very green and dark, 300 (2006) used lots of red and brown with very high contrast, and
most horror films use dark blues. It requires years of study and practice to get good at doing this.

a “LOOK” vs. a “GRADE”


I have no idea, but I hear that maybe these are different things?? Someone clear this up for me plz...

LOG, S-Log​:
LOG (short for logarithmic, which most of the curves were originally based on) references a specific curve applied to the footage
during capture. These curves are applied to pack wider dynamic ranges into the limited data range available. Traditionally,
these curves boost the low-end signal to reduce noise and provide more shadow detail, and often include a 'knee' in the highlights
which further extends the total range.
These LOG profiles need to be converted in the editor to look ‘normal’ again, and many manufacturers provide standard LUTS to
achieve this.
LOG profiles are often combined with "exposing to the right," which is the practice of overexposing LOG footage to get the more
important information higher in the IRE range, where more data is allocated for recording it. This also needs to be adjusted for in
editing.
-Spencer Balliet

S-Log is Sony's specific flavor of log. Canon, Sony, RED, Arri, Blackmagic, etc. all have their own flavors of log. They all act a little
different due to the color science, which is why each company gives out the LUTs to transform their log to Rec. 709, but they all
work basically the same.
-Spencer Lantz

Lots of info from Amit Parekh
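To make the "pack more dynamic range into the data" idea concrete, here's a toy log-style encode in Python. This is NOT any vendor's real S-Log/C-Log/LogC math, just a generic logarithmic curve; the point is that the encode spends far more of the output range on the shadows than a straight-line mapping would:

    import math

    # Toy log curve -- illustrative only, not a real camera curve.
    def log_encode(linear, a=15.0):
        return math.log(1 + a * linear) / math.log(1 + a)

    def log_decode(encoded, a=15.0):
        return (math.exp(encoded * math.log(1 + a)) - 1) / a

    # A deep shadow, middle gray, and a bright highlight (fractions of full scale):
    for lin in (0.02, 0.18, 0.90):
        print(f"linear {lin:.2f} -> encoded {log_encode(lin):.2f}")
    # linear 0.02 -> encoded 0.09
    # linear 0.18 -> encoded 0.47   (the bottom 18% gets ~47% of the codes)
    # linear 0.90 -> encoded 0.96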

GAMUT:
I have no idea how to explain this one simply, since I do not understand it well enough.
https://en.wikipedia.org/wiki/Gamut
Simple definition: "The range of colors that can be reproduced by a system." A gamut can be measured (e.g. the gamut of your
monitor) or specified by a standard (e.g. the sRGB gamut). The latter are typically defined by a set of three primary colors.
-Jason Gerecke

CHROMATICITY DIAGRAM:
https://en.wikipedia.org/wiki/Chromaticity
A 2-dimensional representation of all colors. There are many different ways to
draw them, but it’s important to note that the exact colors are probably not
accurate. Your screen cannot show you colors beyond its own gamut, and
therefore, others have been substituted. A true diagram would have
significantly more saturated colors on the edges.
A ​gamut i​ s drawn on top of a chromaticity diagram. One diagram might contain
several gamuts, like in the image to the right.
Tucker note:
Everyone should be using the 1976 chromaticity diagram, NOT the VERY COMMON 1931
diagram. Harald Brendel explains it best.

GAMMA or GAMMA CURVE or TRANSFER FUNCTION:


I have no idea how to explain this one simply, since I do not understand it well enough.
http://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/
https://www.provideocoalition.com/the_not_so_technical_guide_to_s_log_and_log_gamma_curves/

DEAR GOD YES. LEARN GAMMA (especially in context to LOG). It is one of the foundations of colour science. It
should be one of the first things someone picks up - not judging you - for example; I still find some of the more
straightforward concepts of maths, in general, [difficult] to grasp, but have no issues with the complicated stuff. When
I slip up on the complicated stuff, it's always due to my skipping or never learning the more straightforward parts.
Had you known more about Gamma, the questions would be completely different. Start by looking at what gamma
means in terms of Log. Then look at HDR again. However, meanwhile, if you want your brain fried, watch all parts of
this: https://www.youtube.com/watch?v=yZKDzT8pwTI
-Caspar Brown
Here’s another great link for learning about gamma by the people who make the chart that everyone uses - JP:
http://dsclabs.com/wp-content/uploads/2018/12/DSC-LABS-SETUP-Ver-2.pdf
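While those links go deep, the core mechanic fits in a few lines. A minimal sketch using a plain power-law gamma of 2.2 (real standards like sRGB add a small linear segment near black, which this ignores):

    # Simple power-law gamma; ignores sRGB's linear toe near black.
    GAMMA = 2.2

    def encode(linear):   # linear light -> stored/display code value (0-1)
        return linear ** (1 / GAMMA)

    def decode(stored):   # stored code value -> linear light
        return stored ** GAMMA

    # 18% gray in linear light is stored at roughly 46% of the code range,
    # which is roughly why it looks like "middle" gray to us:
    print(round(encode(0.18), 3))  # 0.459
    print(round(decode(0.5), 3))   # 0.218 -- half the code range is ~22% of the light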

COLOR SPACE vs COLOR MODEL


https://en.wikipedia.org/wiki/Color_space
No one, not even Wikipedia, uses the terms color space and color model consistently, so I still
have no idea which is which, and what the differences are supposed to be. ...Send help.

Color spaces​ and gamuts are usually used interchangeably in the context of the range of colors that 
are available.  
A color model is how you define or calculate colors. 
CMYK (Cyan Magenta Yellow and Key(black)) is the color model usually used in professional print for 
instance. 
RGB is usually used for emissive displays and is additive. 
(CIE)LAB expresses color as three numerical values, L* for the lightness and a* and b* for the 
green–red and blue–yellow color components. It was made to better mimic human vision. 
Y-UV defines color space in terms of one luma component (Y′) and two chrominance (UV) 
components. It was made to better mask compression or errors by taking human vision into account. 
HLS is best explained as follows: 
Hue is a degree on the color wheel; 0 (or 360) is red, 120 is green, 240 is blue. Numbers in between 
reflect different shades. 
Saturation is a percentage value; 100% is the full colour. 
Lightness is also a percentage; 0% is dark (black), 100% is light (white), and 50% is the average. 
A colorist can take advantage of the way certain color models or color spaces work in order to do 
certain operations a lot more easily. A very basic example of this would be to use HLS to drop lightness 
without affecting saturation, or to pivot colors in certain ways that are not easily reproduced in RGB. 
-Espen Flagtvedt Olsen 
If you make video content for the internet, you’re probably working in the ​24-bit​ ​sRGB​ color space.
Now, sRGB uses the same “primaries” or “chromaticities” as REC.709, (also known as ​ITU-R BT.709​)
which means that the reddest red in REC.709 is the exact same reddest red as sRGB, and so on for green
and blue. However, I believe that REC.709 has a lower color depth than sRGB, since it can only represent
220 values of gray (codes 16 through 235), rather than 256. (It looks to me like roughly every 7th value is skipped
when converting from sRGB to REC.709, because of some horrible thing called "studio swing.")
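A quick Python sketch of that full-range to "studio swing" mapping (using the standard 8-bit video levels of 16 for black and 235 for white), so you can count the resulting grays yourself:

    # Full-range 8-bit (0-255) to studio-swing 8-bit (black = 16, white = 235):
    def full_to_studio(v):
        return 16 + round(v * 219 / 255)

    print(full_to_studio(0))    # 16
    print(full_to_studio(255))  # 235
    print(len({full_to_studio(v) for v in range(256)}))  # 220 distinct gray values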

CIE L*A*B*, CIELAB, (CIE)LAB, LAB, Lab color


These terms all refer to the same thing. “​CIE L*A*B*​” is the correct way to write it, but that is kind of
annoying, so people will often write it as “​CIELAB​,” “​LAB color​,” or just “​Lab.​”
I have no idea if it is a ​color space​ or a ​color model.​ It might be both? someone help.
https://en.wikipedia.org/wiki/CIELAB_color_space
CIELAB is the basis of lots of color theory, so it’s important to know about. sRGB, REC.709, Adobe RGB,
and so on, are all defined from CIELAB.
It expresses color as three values: ​L*​ for the lightness from black (0) to white (100), ​A*​ from green (-) to
red (+), and ​B*​ from blue (-) to yellow (+).
LAB is an “absolute” color space, but only if you specify the ​white point​. (Defined further down.)
However, the exact meaning of “absolute” was not defined in any of the explanations I read.
At first, I believed it meant that CIELAB included values for all possible luminosities, like fire, welding
torches, the sun, red and blue hypergiant stars, and even supernova… but I was wrong. The “​L*​” is for
lightness,​ not ​luminance. ​CIELAB seems to have been developed for paint, which has lightness, but does
not have its own luminance, as I explained earlier.
This means that “​L*​” values above 100 (maximum lightness) are ​not possible.​ Any source that tells you
otherwise, is wrong.
But, hold on. The values of 255,255,255 in sRGB 8bpc SDR correspond to L*=100, a*=0, b*=0 in the CIELAB
color space. So you might wonder what this means for HDR, which is supposed to have brighter whites than
SDR.
That was the source of much of my confusion on this subject. ​The answer is… I’m still not sure.
I have to assume that all HDR videos will have metadata as to exactly how many nits the brightest white
is supposed to be, and that that white level will be treated as L* = 100 in CIELAB. This does mean that
direct conversion from HDR to SDR through CIELAB (using the same values) would look awful. (Dark,
with extremely high contrast.) However, it can be done with tone mapping.
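For the curious, the standard CIE formula for L* makes the "can't exceed 100" point obvious: you feed it luminance relative to your chosen reference white, so the input can never exceed 1.0. A sketch (standard formula; the variable names are mine):

    def lab_lightness(Y, Y_white=1.0):
        # CIE L* from relative luminance Y; Y_white is the reference white.
        t = Y / Y_white
        d = 6 / 29
        f = t ** (1 / 3) if t > d ** 3 else t / (3 * d ** 2) + 4 / 29
        return 116 * f - 16

    print(lab_lightness(1.0))            # 100.0 -- the reference white itself
    print(round(lab_lightness(0.18), 1)) # 49.5  -- 18% gray lands near mid-scale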

RGB, sRGB:
RGB​ is ​any​ additive color model that uses ​r​ed, ​g​reen, and ​b​lue.
sRGB​ (Standard red green blue) is the default color space used on the internet, and on most devices. It is
usually​, but ​not always,​ 24-bit. (8bpc)
Many tutorials and articles will use these terms interchangeably, and/or assume that sRGB is always
24-bit (8bpc), which can all get extremely confusing!

WHITE CHROMATICITY, WHITE BALANCE, COLOR BALANCE, COLOR TEMPERATURE


Although I totally understand how this works, I have no idea how to simply explain the difference
between these. Some of them might have the exact same definition. IDK, man.
Our eyes automatically adapt to differently colored light, but a camera has to be told what ​color
temperature​ to use. (Even if that just means telling it to use an automatic setting.)
COLOR TEMPERATURE
Imagine that you place a rod of iron into a fire. If it reached these temperatures (Kelvin) then these are
the colors that the rod would glow:

Now, in the real world, iron melts at 1811 Kelvin, and boils (turns into a gas!) at
3135 Kelvin.
And, even though light bulbs will usually list a “color temperature” on the box, that
doesn’t mean that the light bulb actually gets that hot. (Unless it’s an incandescent
bulb, in which case, that’s exactly how hot the tungsten filament gets!)

WHITE POINT, WHITE LEVEL


According to Wikipedia, ​white point​ only has to do with chromaticity, and nothing to do with luminance.
According to Tucker Downs:
To me, when someone says white point, I'm thinking in 3D space. So that includes chromaticity and luminance. 
But if someone said white level, the word "level" would mean just the luminance. 
But a LOT of photographers think white point means only the chromaticity, since when you set a camera white 
point you are only affecting the color. Other controls, like aperture and exposure time, control the white level in that 
medium. 
Which I think is what leads to the confusion. 
But if I see the words "white point" followed only by a chromaticity value, I don't take up arms. I just use 1.0 for the 
luminance and scale everything. Not great. But it can do a lot of the color stuff that way. 
(Tucker is some genius guy who works on million dollar displays for a living, so I’ll believe him on this.)

DYNAMIC RANGE:
The ratio between the brightest and darkest parts of an image. This is typically measured in "stops,"
where each stop is a doubling (or halving) of the amount of light.
https://www.premiumbeat.com/blog/what-is-dynamic-range/
I've taught photography for years, and in my humble opinion... dynamic range (or the lack thereof) is a prime problem in
photography. I describe it as a piano keyboard. We understand that there are tones (audio) above and beyond the range of
tones a piano keyboard can play. But the dynamic range of tones a piano can play is limited. Great music has been made within
this limited range of tones, but it was all limited to the range that the piano can play.
Your eyes have a broad range of tones in which they can see detail. But a digital camera, compared to our eyes, has a keyboard
(range of tones) slightly more than half of what our eyes do. Even though our eyes can see it, everything above or below the
narrow range of tones that the digital camera can record, will be lost... truncated to that deepest tone, or the highest pitch of
the camera's very narrow keyboard.
The interesting thing about a camera "keyboard" is we can move the narrow range of tones lower than what our eyes can see,
or above what we can see, via manual exposure. The keyboard isn't any broader, but we can put it exactly where we want it.
Thus with a digital camera, we can photograph the surface of the sun, or the darkest sky... beyond where the fixed dynamic
range of our eyes can record tones.
-Douglas Henderson
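Since a "stop" is just a doubling of light, converting between a contrast ratio and a stop count is a one-liner. A small sketch (the 6-stop and 17.6-stop figures are the SDR/HDR numbers this document quotes below):

    import math

    def stops(contrast_ratio):
        return math.log2(contrast_ratio)

    print(stops(64))                 # 6.0  -- a ~64:1 ratio, roughly SDR
    print(round(stops(200_000), 1))  # 17.6 -- roughly an HDR display
    print(2 ** 14)                   # 16384 -- a "14-stop" camera's contrast ratio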

SDR:
Standard Dynamic Range.​ If the box your monitor came in isn’t gloating that it’s HDR (High Dynamic
Range), then it’s SDR. An SDR monitor can show about 6 stops of dynamic range.
Typically, the brightest that an SDR monitor is expected to get, is about ​120 nits​.
HDR VIDEO vs HDR STILL FRAME vs HDR PHOTO:
“​HDR​” stands for “​High Dynamic Range​.”
For ​HDR video​, the idea is that the brightest white that a typical (SDR) monitor can produce… is not
nearly bright enough! The real world can get so much brighter!
Imagine an SDR video of a person wearing a white shirt, with sunglasses that show a reflection of the
sun. (A “specular highlight.”)
If the video is HDR rather than SDR, the only difference should be that
the specular highlight is much brighter. ​The brightness of the shirt, and
everything else, will be exactly the same.
So, HDR video isn’t just a brighter version of an SDR video. It simply
allows for brighter colors to be used if and when you need them.
HDR monitors typically start at ~​500 nits,​ and some professional models
can get brighter than ​10,000 nits​! (For reference, the sun is about 1.6
billion nits, if you stare straight at it. A welding arc can be even
brighter.)
Because video is a sequence of individual photographs, you might think
that if you pause an HDR video, that you’d be looking at an “HDR
photograph.” But, that’s not what most people mean when they’re talking about HDR photography.
Instead, I have coined the term "HDR still frame" to describe that. (If there is an existing term, let me
know!)
HDR photography​ actually involves the careful compositing of multiple exposures into one “SDR” image.
These images do NOT need to be viewed on an HDR display - they can be viewed on an SDR display or
even printed on paper. ​HDR videos​, (and by extension, ​HDR still frames​) ​must b ​ e viewed on an HDR
display with its own built-in illumination.
This distinction can be quite confusing sometimes.
An HDR monitor can display approximately ​17.6 stops​ of dynamic range. (Compared to 6 stops on an
SDR monitor)

Some great info from Espen Flagtvedt Olsen

https://skylum.com/blog/hdr-photography-vs-hdr-tv
https://www.digitaltrends.com/photography/what-is-hdr-photography/
http://files.spectracal.com/Documents/White%20Papers/HDR_Demystified.pdf

LUMEN, CANDELA, NIT, LUX, FOOT-CANDLE:


1 lumen is the TOTAL amount of light emitted by a bulb, as in a projector rated at 10,000 lumens.
1 candela is the intensity of light emitted in a particular direction (1 lumen per steradian of solid angle).
1 nit is the same as 1 candela per square meter (cd/m2). Nits are used to describe display brightness.
1 lux is the same as 1 lumen per square meter (lm/m2). Used for light falling on an area, like a bulb
illuminating a space.
1 foot-candle is the same as 1 lumen per square foot (lm/ft2). Typically used with light meters when
properly lighting a scene or a theater stage.

LUT:
“Lookup Table.”
Lookup tables are used to map from one colorspace to another.  
They simply take an input value (say, 42% gray in the Red channel) and re-map it to a new output (55% 
gray in the Red channel).  
It's a lot like grading, just automatic, and unless it's very specific (s-log to rec. 709 for instance) the 
results can be extremely unpredictable. 
This is why most creative luts are very generic and soft. 
Note that there are input and output LUTs. 
-Espen Flagtvedt Olsen 
https://en.wikipedia.org/wiki/Lookup_table
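A minimal sketch of the mechanic Espen describes, using a made-up five-point 1D LUT for one channel (real LUTs have many more points, and 3D LUTs interpolate inside a cube rather than along a line):

    # A made-up 1D LUT: five (input -> output) points for one channel, 0.0-1.0.
    LUT_IN  = [0.00, 0.25, 0.50, 0.75, 1.00]
    LUT_OUT = [0.00, 0.30, 0.55, 0.80, 1.00]  # a gentle brightening curve

    def apply_lut(v):
        # Find the surrounding LUT points and linearly interpolate between them.
        for i in range(len(LUT_IN) - 1):
            if v <= LUT_IN[i + 1]:
                frac = (v - LUT_IN[i]) / (LUT_IN[i + 1] - LUT_IN[i])
                return LUT_OUT[i] + frac * (LUT_OUT[i + 1] - LUT_OUT[i])
        return LUT_OUT[-1]

    print(round(apply_lut(0.42), 2))  # 0.47 -- 42% gray in, 47% gray out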

COLOR CALIBRATION vs COLOR PROFILING:


Calibration can be a bit confusing at first, but it's actually fairly straightforward. 
The ICC standardized this system: 
Characterize. Every color-managed device requires a personalized table, or "color profile," which 
characterizes the color response of that particular device. 
Standardize. Each color profile describes these colors relative to a standardized set of reference 
colors (the "Profile Connection Space"). 
Translate. Color-managed software then uses these standardized profiles to translate color from 
one device to another. This is usually performed by a Color Management Module (CMM). 
Basically you start by calibrating the device to a certain white balance and luminance.  
Then you profile the display to see how it matches up to the standardized reference.  
The process of calibrating doesn't set the display to match the gamut of any standard. It just helps 
set the device to match some aspects, and creates a profile that certain color sensitive applications 
can use to map the colors to a certain color space. 
I say that it doesn't set any color space, but for devices that feature hardware calibration that might 
not be true. These devices can do these transformations internally and can get more accurate 
results by doing so. 
(Note that you still need color aware software in order to show the correct colors. Just because your 
device is outputting to a certain standard does not mean the software automatically knows this and 
acts accordingly.) 
-Espen Flagtvedt Olsen 
 
...I still don’t get it. 
-Taran 
 
According to Joe Brady,​ “They're part of the same process. What calibration does is, it sets the monitor
to a default. The software's going to pick a color temperature and a brightness. The profiling is the color
correction part.”
BACK TO THE QUESTIONS, NOW
17. (Question) Why does some footage seem to have information that goes “above” 100 IRE (or, 255 on
the right side) on the video waveform?
ANSWER ​(Taran): That footage was encoded in Y'CbCr, which is unfortunately also referred to as “YUV.” Please
watch the video​ for the full explanation, as this is very important.
https://en.wikipedia.org/wiki/YCbCr
In Premiere, only effects with a “YUV” icon can properly affect these video clips.
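Here's a sketch of the actual numbers, assuming standard 8-bit "video range" Y′ levels (code 16 = black, code 235 = reference white), which is where that above-100 headroom comes from:

    # 8-bit video-range luma: code 16 sits at 0 IRE, code 235 at 100 IRE.
    # Codes 236-255 are legal in the file and decode to "whiter than white."
    def luma_code_to_ire(code):
        return (code - 16) / (235 - 16) * 100

    print(round(luma_code_to_ire(235), 1))  # 100.0 -- reference white
    print(round(luma_code_to_ire(255), 1))  # 109.1 -- the data "above" 100 IRE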

18. (Question) On this un-corrected, un-clamped footage from the Canon XA20, why is the brightest
part of the image (the sun, or a bright light) set at 275 (110 IRE), rather than 255? (100 IRE)
Download the video clip if you need to.
Or, you can get it from here: ​https://youtu.be/m0D2H0s-TMo
(The spot I was looking at is timecode ​18:25:48:19,​ or 27 seconds from the start.)
(You can also find a very brief example of pure white light at timecode 18:25:35:10, or 00:00:13:27
from the start)
ANSWER​: Because the Canon XA20 was designed with 10 IRE of extra headroom, to give more flexibility in
post. This is set specifically by Canon and is not necessarily consistent across all manufacturers.

19. (Question) At that exact same timecode, why is the pure white blown highlight
NOT the "highest" part of the RGB waveform?
ANSWER ​(Taran): If you view “luma” on the scopes, you can see that the white highlight
is in fact the highest part of the waveform, and nothing gets higher than 110 in this case.
Because chrominance is relative to the luminance, it is “added” or “subtracted” from
there.

20. (Question) Why does the XA20 also put its “100% zebras” at 110 IRE, rather than 100 IRE? (The spot
I am talking about is at timecode ​18:27:18:14,​ or 1:57 from the start.)
Answer:​ “​Because the XA20 clips at 110IRE, not 100IRE, and the 100% zebra is supposed to convey the 
part of an image that is ​clipping​.” -Ian Servin

21. (Question) If you have “YUV” video where the information goes above 100 IRE, but it is able to be
recovered, is that really “clipping”? Or, is there a different term for that? (If not, I propose the term
“pre-clipping”)
Answer:​ (Which might be wrong/inaccurate) Those are called “unsafe colors,” “illegal colors,” or
“whiter-than-white.” If your video is going to be broadcast over the air, they would need to be
manually clipped (truncated) using a ​video limiter​, to make the video “broadcast safe.” For YouTube,
it’s not a problem to have them.

22. (Question) When “clamp signal” is OFF, why does my Lumetri Scopes waveform still only
SOMETIMES show the data above 100 IRE?
Answer: ​Might just be a bug, or poor implementation...

23. (Question) It’s widely proclaimed that shooting in LOG (which results in a “flatter” image) gives you
much greater ability to color correct (and color grade) later. For 10-bit footage, that makes sense to
me. But if you’re shooting in 8-bit, wouldn’t shooting in LOG just massively reduce your available
color depth?​ ​This article​ claims that it’s not a problem.

I think that ​this footage of a kitten​ was shot using 10-bit. But, if it was shot in 8-bit, would it be a
mistake to use S-Log2?
Excellent article on sLOG.
Here is an example of 8 bit vs 10 bit LOG​ -Link provided by Stephan 
 
Answer: 
"You are trading bit depth for some extra dynamic range. Having LOG footage does make it easy to influence 
colors more (seeing how they're robbed of much of their original color information), but its primary purpose is to 
create more dynamic range. It's actually recommended (strongly for 8-bit cameras, and still recommended, if less 
so, for 10-bit ones) that if you can control the dynamic range in the room and you don't need the extra dynamic 
range, you shoot in a standard or natural-looking profile. The codec simply doesn't have as much information for 
you to move stuff around. 
This is why many professionals jumped for joy when 10-bit depth with 4:2:2 chroma subsampling started making 
its way to smaller cameras: the masses were getting better tools for using LOG profiles." 
-Koto-Kun 
 
Answer: 
In your tutorial, you're right. You don't want your picture to look as flat as possible. People totally went overboard 
with that whole concept. Someone invented LOG files, but people didn't get it. The whole idea of a Log file is to get 
as much detail as possible in the shot, not to bunch all the information together. Log will prevent information from 
getting clipped out, because it curves off the top end and lower end. The scope you see there is actually worse than 
what you'd want. You want information as spread out as possible, without getting clipped. This gives you the ability 
to have the greatest freedom in grading, as there'll be a distinct difference between a white wall and a white light 
for example, allowing you to grade those 'separately'. If you'd shoot like the crunched scope shown there, there 
would probably not be enough value difference to grade those two separately. 
-Bart Kuipers 
 
Answer: 
Rule of thumb, do not shoot 8-bit log footage. 
-​Jean Paul Sneider 

24. (Question) Does HDR really have blacker blacks than SDR, as ​this article​ claims? Or is it just the type
of display? If so, it seems to me that SDR footage can also be shown on that kind of display, and
therefore achieve the same level of blackness.

Answer​:
HDR is always at least 10-bit, so HDR displays have far more possible shades of near-black,
compared to SDR monitors. So, HDR can show more DETAIL in the darkest blacks. But this article
seems to be implying that the absolute blackest black (zero) will always be MORE BLACK on an HDR
display, compared to an SDR display. That is NOT true.
...
“LCDs are backlit, and even a signal value of zero will produce some light.
I don't work on LCD displays; the LED display tech I work on is truly black. But it can also be used as
an SDR monitor. So my SDR monitor has a darker black (and higher contrast ratio) than any HDR LCD
monitor I am aware of.
HDR monitors are not guaranteed to have an absolutely darker black.
In fact, they aren't even guaranteed to have a brighter white either, in the case of cinema standards.
In cinema, SDR goes up to about 48 nits. Dolby Vision is 108 nits and is considered to be HDR, but my SDR MacBook Pro has a screen luminance of around 250 nits.”
-Tucker Downs
Taran note (info from Tucker):
The reason Dolby Vision is still considered HDR even at “only” 108 nits is because the theatre room itself is almost completely black. So, your eyes have adapted to that.
For the same reason, at 100% brightness, your smartphone screen can be pitifully dim in broad
daylight, but seems blindingly bright in a pitch-black room.

25. (INFO) Taran’s theory of why it is important to care about fundamentals for your chosen art form
Totally agree about learning fundamentals. Also for color science/theory. You should learn what everything 
does and is supposed to do. You want to know all the rules so you can choose which ones to break instead of 
just breaking them because you didn't know. Breaking rules is fine, if done on purpose :) -Bart Kuipers 
 
26. (INFO) How to best convert to grayscale / black and white
Just lowering the saturation to 0 (“averaging”) can be the WORST solution.
Premiere’s Black & White effect is probably your best bet.
You can also try the Channel Mixer effect, and check “monochrome.”
By default, it just uses the red channel. Not great.
You can also try using just the green, or just the blue channel.
These values give almost the same result as the B&W effect: 28, 58, 14, 0.
(Notice that they add up to 100; see the quick check below.)
In Photoshop, use Adjustments> Black & White. It’s far more powerful than what
Premiere has.
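A quick check on those Channel Mixer numbers: 28/58/14 is essentially the classic Rec. 601 luma weighting (roughly 30/59/11), which is why green dominates. Here is a minimal Python sketch contrasting that weighted mix with the plain average (real Black & White effects may do more under the hood, so treat this as the idea, not Premiere's exact math):

def to_gray(r, g, b):
    # weighted grayscale, matching Channel Mixer set to 28 / 58 / 14
    return round(0.28 * r + 0.58 * g + 0.14 * b)

def to_gray_average(r, g, b):
    # plain average: roughly the "saturation to 0" look warned about above
    return round((r + g + b) / 3)

# Green looks far brighter to the eye than blue; only the weighted mix reflects that.
print(to_gray(0, 255, 0), to_gray(0, 0, 255))                  # 148 36
print(to_gray_average(0, 255, 0), to_gray_average(0, 0, 255))  # 85 85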

Excellent article on grayscale conversion methods:


http://www.tannerhelland.com/3643/grayscale-image-algorithm-vb6/

“​Converting to black and white is 100% subjective. There is no "wrong" way. What gives you the 
best results just depends on what you're trying to accomplish. Using the newish Hue vs Luma in Lumetri to 
adjust the luma of specific colours before a desaturate should actually give you even more control than 
Photoshop's Black & White because you can target any specific hue: not just Reds, Yellows, etc.” 
-ThisIsTeeKay 

27. (INFO) Different hues actually have different values:


a. Color Conundrum: How It All Works! (Sinix Design)
b. So you know how I was using the word “value” constantly during
this tutorial? Turns out that not even that is entirely correct. But
we have to pretend that it is, most of the time…
c. So, a 3D chromaticity diagram should not be symmetrical. Yellow
should be brightest/highest…
28. (INFO?) Crushed blacks:
https://photofocus.com/2017/09/26/the-term-crushed-blacks-has-got-people-confused/
https://www.rocketstock.com/blog/crush-the-blacks-in-color-grading/

29. (INFO) How to salvage blown highlights


For photographs: ​https://www.youtube.com/watch?v=kh8ced75BHc
For video: ​https://www.premiumbeat.com/blog/how-to-salvage-overexposed-footage/
http://juanmelara.com.au/blog/recovering-highlights-with-davinci-resolve

30. (INFO) This video poses a problem, but does not give a complete or correct solution:
Computer Color is Broken, by MinutePhysics
How to REALLY fix Photoshop’s blur:
For existing documents, go to Edit > Convert to Profile > Profile > Lab Color
When creating a new document, use Color Mode: Lab Color
However, you still need to convert it to RGB when it’s time to export as a .png, .jpeg, etc.
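The math behind that video's complaint is easy to demonstrate: averaging gamma-encoded sRGB values gives a result that is too dark, while averaging in linear light does not (the Lab workaround above sidesteps the same problem inside Photoshop). A minimal Python sketch using the standard sRGB transfer functions:

def srgb_to_linear(c):
    # standard sRGB decode, c in 0..1
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # standard sRGB encode
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Blend a full-intensity channel (1.0) with a zero channel (0.0):
naive = (1.0 + 0.0) / 2  # math done directly on the encoded values
correct = linear_to_srgb((srgb_to_linear(1.0) + srgb_to_linear(0.0)) / 2)

print(round(naive * 255))    # 128 -- the too-dark result
print(round(correct * 255))  # 188 -- the physically correct blend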

31. (Question) In Photoshop, can I get color scopes that are as good or better than Premiere’s?

Answer:  
For Photoshop, open a new window (with a dual-monitor setup, have the second monitor output full screen), take an HDMI-to-SDI converter (like the pretty cheap one from BMD), and plug the SDI into a waveform/vectorscope-capable monitor, or into a DeckLink card in a PCIe slot, which will take the input into Drastic 4KScope or ScopeBox.
-Ian S.
a. Is there a plugin?
b. (INFO) Photoshop’s histogram can expand to show more info at once, but it’s not much
better.

32. (Question) In Premiere and Photoshop, can I get bezier handles on my curves tool? That would be a
lot easier to control…
Answer​: Probably not. But ​Resolve ​has them...

The following questions are concerned with SDR, 24-bit (8-bpc) color.

33. (Questions) Do you agree with these fundamentals of mine for color CORRECTION? If not, why?

-A blown highlight on just one channel can/should potentially be recovered.

-A pure white blown highlight (100+ IRE on R, G and B) cannot be recovered, and should
be left at 100.

-If you have crushed blacks on 1 or 2 channels, you should leave them at 0.

-If you have crushed blacks on all 3 channels, definitely leave them at 0. In fact, if that
blackness has some digital noise in it, you should lower your blacks until all the noise is
gone.
Yes, but ensure that blackness ranges don't affect the image negatively for the sake of reducing noise - look
into digital noise reduction before pushing into shadows more than you need to. -Caspar Brown

-The “Curves” effect can do everything that adjustments to blacks/shadows/midtones/highlights/whites can do… but with much finer control.
“Yes. The slider controls do more damage than good. Pity the Lumetri curve is dogshite. The different Hue/luma graphs are useful though, and I would recommend trying them, as they're also present in Resolve.” -Caspar Brown
“No. The curves are inherently linked, and even with control points, they cannot adjust specific ranges of luminance the same way that the slider controls can.” -John Pooley
Taran note:​ Well, I don’t know who to believe. Can someone else weigh in on this? Also, I’d like
to know exactly what it is about the Lumetri curve that is “dogshite” compared to Resolve.
ANSWER: ???
JP:
https://forums.adobe.com/message/5642295#5642295
https://forums.adobe.com/message/4259091#4259091#4259091

-NEVER let your curve go downwards… always upwards!

-Always have your ​waveforms ​visible when color correcting.


“Yes” is the consensus.
The YRGB Parade or Waveform, depending on which style you find more readable,
allows you to confirm that you are not clipping each channel while also monitoring overall luma. -John
Pooley

-Always have your vectorscope visible when color correcting.

“Not always” is the consensus.
The biggest use for the vectorscope is traditionally checking skin tones. The other applications aren't
as mainstream. -John Romero
Vectorscopes can also be used when using green screens to help white balance the image and improve
the color keying. Here's captain D using it:​ h​ ttps://youtu.be/aO3JgPUJ6iQ?t=614​ -Joao Martins

34. (Questions) Do you agree with these fundamentals of mine for color GRADING? If not, why?
You could do all sorts of creative stuff when grading; there are no rules about right or wrong in grading. -Bart Kuipers

-A pure white blown highlight (100 IRE on R, G and B) should probably still be left at 100.
Answer​: ​No.​ ​http://lookbook.colorist.us/?p=64​ (Link from VYZ Studios)

Example: “cream highlights”

-Crushed blacks CAN be brought up – for example, old film has noticeably dark-red “blacks.”

-Fully saturated colors often look amateur.


I disagree; it depends on the look. Modern Hollywood desaturates color these days, so that's the style, but if you're going for a retro aesthetic, or a rom-com, or commercial applications, then saturated color is definitely applicable. However, I'm talking about camera colors, not colors in graphic design. I'm no graphic designer, so I can't help you there. -John Romero
(I was mostly talking about graphic design/animation here. EposVox and Tucker Downs
have both recommended using a texture in that case.)

-There are very few reasons to ever let the “curve” slope downwards/backwards
(I have only ever done this to create a chrome effect​ ​https://youtu.be/jrISZ5jdmIs?t=163)​
(Can you think of any other reasons to do it?)

-It is okay to deliberately blow the highlights, or crush the blacks. But you really need to
know when and where that would be appropriate. (And I do not… do you?)
1. If there simply isn't enough highlight information... you might have a little bit of detail, but pulling that 
back can often reveal chunks of clipped white. In this instance, blowing the highlights might be more 
visually pleasing.
2. If your shadows and blacks are super noisy with very little detail and beyond saving with temporal 
Noise Reduction then crushing these can be your only option. 
3. Crushing the blacks is also very common as a stylistic choice for dramatic/moody looks. The term 'crush the blacks' also gets used a lot when referring to crushing the shadows while raising the black level - to emulate a film-stock look.
-Connor Ayliffe 
-Always have your waveforms/scopes visible when color grading
But only look at them to make sure you're not clipping after you've done your creative grade. If you keep to 
your scopes too tightly, you might think something is going to look bad while it could actually be really 
cool what you're doing. - Bart Kuipers

35. (QUESTION) Do you agree with these as GUIDELINES for color CORRECTING? (in SDR!)

Taran note: There is some confusion over “IRE” here. I am only speaking about the numbers as they
appear on the (Premiere) Lumetri waveform scopes. Not how they are used in cameras and
broadcasting.

100 IRE: Bare bulbs, the sun, welding torch, specular highlights, etc. Do not let anything else touch
100.

100 and below: Lit lampshades, diffusers, etc.

80-90 IRE: Our set's lit windows with white curtains on them

~80 IRE: LTT kitchen set - white cabinets

75 IRE: “True white.” The level at which most broadcast/video textbooks suggest we white balance our cameras when using a white card such as DSCLabs CamWhite.

~70 IRE: White skin tone (I center the red mass around 70. The actual ​luminance
channel ​is around 60. So, maybe I should have it even brighter??)
~65 IRE: Asian/Hispanic/Black skin tone (According to this book)
~60-65 IRE: Black skin tone

All skin tones should line up with +i on the vectorscope (The skin tone line)
20-1 IRE: Black objects, lit. Clothing, PC components, etc. Just DO NOT let it touch 0 IRE.

0 IRE: Parts of the scene that are deliberately unlit CAN be this dark. Also, if there is digital noise
close to 0, you may wish to lower it until the noise vanishes.

These guidelines are much better for a camera operator/DIT during filming. Exposing for these levels should
fall more on them than on you as the colourist. I wouldn't try to follow them to the letter every time.
-Caspar Brown
Taran note:​ Most people seem to think that these guidelines are fairly decent.

And, what do you think about Ansel Adams’ “Zone system” as described here?
https://www.cinema5d.com/primary-colour-correction/

Comment:​ ​Fascinating​. But I don’t really find it helpful for digital cinematography. -John Pooley
Comment: Yes. -​ Caspar Brown
Taran note:​ I am seeing Ansel’s zone system referenced in a lot of tutorials in regards to digital video, and many
commenters seem to like it. I’m not sure what John doesn’t find helpful about it.

36. I’ve started to color correct / color grade screenshots and screen capture footage.
For websites that heavily use 100% white (in 24-bit sRGB, this is 255,255,255, or #FFFFFF), I prefer to reduce it down to about 85 IRE, or 85%.
(Note that the white of this very document is
also at 100% white).

My rationale is that if the viewer is adapted to the brightness of a live action scene, hitting
them with a full screen of pure white is just as
rude as HAVING SOME SCENES WITH MUCH
LOUDER VOLUME!
I started thinking about this, because Linus wants to make videos in HDR. I realized that there's no
way I can allow for screen capture to continue to be the brightest possible value. Those values must
be reserved for truly bright things. So, screen capture has to have its brightness capped
SOMEWHERE. The most logical place, would be the same as a screen appears in the live action
footage: About 85 IRE. Still bright, but not as bright as the surface of the sun is represented to be.
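For what it's worth, the math of that cap is trivial: pulling the whites down is, to a first approximation, a linear rescale of the code values. A hypothetical Python sketch of capping full white at 85%:

TARGET = 0.85  # cap screen-capture white at roughly 85 IRE / 85%

def cap_white(code_value):
    # linearly rescale an 8-bit value so 255 lands around 217
    return round(code_value * TARGET)

print(cap_white(255))  # 217 -- full-screen white no longer "shouts"
print(cap_white(128))  # 109 -- everything else scales down proportionally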

...What do you think about this?


Strongly agree. The analogy you make with audio is spot on and I would go further to say that 
color correction can be compared to audio mastering, your main job is to make sure that 
different scenes don't clash with each other (unless purposefully done so). -​Joao Martins
Sounds good to me. Very sound reasoning. -Espen

(​Captain Disillusion​ goes even further, using vignettes, textures, virtual 3D camera moves, and faux
lens blur.)
The broadcast networks will often shoot the screen with a camera (which looks terrible and takes
more time) or often they’ll just recreate the graphics themselves. I recommend blurring irrelevant
content, cropping the web page, and layering that on top of your own logo loop. If you’re
producing for HDR then it would be advisable to bring the whites down a couple %. -John Pooley

37. (Question) Do you know of any other fundamentals of color correction or color grading? And, do
you have any useful “tricks?” Please share them with me!

38. (Extra question.)


In this document, John Romero​ says in the comments (which you can’t see):
“In a YCbCr format, the bitstream is already encoding the luminance channel with about 70% of the total information stream if you look at the formulas, but it is still within an 8-bit file.”
 
...which reveals just how bizarre it is to call 8-bit video, “8-bit video.” N ​ one of the channels are 
actually 8 bits. 70% of 24 is 16.8. So it seems like the Y channel is 16.8 bit, and the Cb and Cr 
channels have to divide the remaining 7.2 bits amongst themselves. (maybe not evenly.) 
I don’t know if it even makes any sense to talk about fractions of bits, in this manner. Maybe the 
divide is actually something like 17, 4, 3. 
So again, if none of the channels are actually 8 bits, why do we call it “8-bit video”?
...like, maybe it becomes 8-bit per channel once it's converted to RGB, but because of studio swing, I'm not sure even THAT is correct.
 
Answer:​ This question conflates two types of resolutions; spatial, and intensity. The luminance 
channel gets a large portion of the spatial resolution, but all three channels are quantized with the 
same possible set of intensities (0-255, 0-1023…) -John Pooley 
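To put rough numbers on that distinction, here is a sketch using the full-range BT.601 conversion formulas, with 4:2:0 subsampling assumed for the sample count. Every Y, Cb, and Cr sample is still quantized to 8 bits of intensity; what subsampling changes is how many chroma SAMPLES exist, and the luma plane holding two-thirds of all samples is roughly where a figure like “about 70%” comes from:

def rgb_to_ycbcr(r, g, b):
    # full-range BT.601 conversion; each output value still fits in 8 bits
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 128, 0))  # an orange: (151, 43, 202)

# Sample counts for a 1920x1080 frame with 4:2:0 subsampling:
w, h = 1920, 1080
luma_samples = w * h
chroma_samples = 2 * (w // 2) * (h // 2)  # Cb plane + Cr plane, each quarter-size
print(luma_samples / (luma_samples + chroma_samples))  # 0.666... of all samples are luma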
Taran question: ​Which three channels? Are you talking about Y, Cb, and Cr? Or... R, G, and B? 
Taran note: I believe John is talking about chroma subsampling.
Thanks to everyone who has contributed their knowledge! I’m still working my way through it all.

Feel free to make a copy/backup of this document. I don’t plan on ever taking it down, but you never
know what might happen. It’s always a shame to follow a link to a great resource only to find that it’s
no longer there!

Many people have suggested that we switch to DaVinci Resolve, because the color correcting/grading
options are much better than Premiere’s. This… is a bad idea for many reasons. None of us have the
knowledge required to utilize those color tools most effectively - and even if we did, “properly” grading
footage will take more time than we have, since we make daily videos. And although ​Premiere certainly
does have its issues​, Resolve is still lacking in essential NLE features that we DO need - it doesn’t even
have Q and W (Ripple trim to previous/next edit).
This is like recommending a $500K Rolls Royce to a guy who just needs a $5,000 Toyota to commute to
work, when he previously just took the bus. Getting a car and learning to drive it is plenty. We really
don’t need all the fancy stuff.
If we do another commercial or other large budget project, then we’ll probably use Resolve for the
color.

Useful links people have submitted:

The Hitchhiker’s Guide to Digital Colour​ ​1​ ​2​ ​3​ ​4​ 5 6 ← I feel like these articles were written just for me.

https://www.khanacademy.org/partner-content/pixar/color
https://blog.frame.io/2018/05/21/premiere-lumetri-guide/
Bias lighting
https://cinematiccolor.org/
https://filmsimplified.com/
http://color-artist.blogspot.com/2015/12/
https://www.fxphd.com/details/304/
https://www.fxphd.com/details/380/

Videos:

“On LOG: go watch Filmmaker IQ’s fantastic technical breakdown of dynamic range here”
Insider Knowledge - An easier way to grade log footage
How NOT to balance Highlights and Shadows | Davinci Resolve
White balance: Important or Overrated?
DaVinci Resolve 15 - The Art of Color Grading (1:56:31)
DaVinci Resolve 12 - 52b Curves Palette - Soft Clip, S-Curve and Splines
Books:
https://www.amazon.ca/Color-Correction-Handbook-Professional-Techniques/dp/0321713117
“Read it and don’t touch color again until you have.” -Caspar Brown
http://www.moderncolorworkflow.com/
A Broadcast Engineering Tutorial For Non Engineers

Direct video responses:


https://www.youtube.com/watch?v=Z2rVys342FQ
https://www.youtube.com/watch?v=qTvbk2sM3DA&feature=youtu.be

Google doc responses:


John Romero
Caspar Brown
Spencer Balliet
Espen -- HDR and SDR
Camon -- Slog vs Cine
Bonus material:
Really helpful color grading tutorial, quite in-depth, and shows several techniques I hadn’t seen before:
https://www.youtube.com/watch?v=-3j30sT8VAw

Another decent color grading tutorial – I was most surprised by how much better her face looked after it
was masked and corrected separately.
https://www.youtube.com/watch?v=_m-9R1oqvh0
NOTE that the way he does this, by using the same footage stacked on top of itself, seems quite
inelegant to me now. I can see why Resolve’s node system is preferable.

How our eyes see chrominance and luminance, and how the Bayer filter pattern is designed to take
advantage of that

Note that, although rod cells are for detecting luminance, not
chrominance, they still have a frequency response - highest at
498nm -- blueish-green. This blew my mind - most diagrams you’ll
see like the one to the right, will only show the ​cone cells​, with a big
gap between the blue and green response curves. Adding the ​rod
information makes the whole thing make a lot more sense.

This is not yellow


There’s no purple light

Fun fact: ​Magenta​ is an “extra spectral” color.


https://en.wikipedia.org/wiki/Spectral_color#Non-spectral_colors
Subpixel rendering
https://en.wikipedia.org/wiki/Subpixel_rendering
(This is why black and white text is sometimes multicolored if you take a
screenshot and zoom in really far.)

Some stuff about “full” vs “limited:”


https://forum.kodi.tv/showthread.php?tid=252023
These two ranges are also known as “full swing” and “studio swing,” respectively.
https://wolfcrow.com/blog/what-is-full-swing-studio-swing-and-how-to-work-with-video-levels-in-adob
e-premiere-pro-part-one/
“There is no broadcast standard that accepts a full range”
Comment: Many broadcasters work in full range and don’t even worry about clamping because
it happens on the airchain encoder and they want full for the web. -JP
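The mapping between the two ranges is simple. A minimal sketch for 8-bit luma (black at 16, white at 235 in studio swing; note that chroma actually uses 16-240, which this ignores):

def full_to_studio(y):
    # squeeze full-range 0-255 into the 16-235 studio-swing luma range
    return round(16 + y * (235 - 16) / 255)

def studio_to_full(y):
    # stretch studio-swing 16-235 back out to 0-255
    return round((y - 16) * 255 / (235 - 16))

print(full_to_studio(0), full_to_studio(255))   # 16 235
print(studio_to_full(16), studio_to_full(235))  # 0 255

Interpreting one range as the other is the classic failure mode: studio-swing video decoded as full range looks washed out (black sits at code 16), and the reverse looks crushed and clipped.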

Beware that even your graphics card might be modifying the colors that you see:
https://forums.adobe.com/message/8818586#8818586
Here are my Nvidia settings:
And here are my display settings in Windows:

You should ​definitely​ only be using your monitor’s native resolution when editing videos.
The 150% UI scaling works fine, and is important so that things aren’t so small on my screen that I
cannot see them… but many programs have issues with it, including Premiere.

Good info on gamma curves:


https://www.provideocoalition.com/the_not_so_technical_guide_to_s_log_and_log_gamma_curves/

Lots of test charts that are much better than mine:


https://obsproject.com/forum/resources/obs-studio-color-space-color-format-color-range-settings-guid
e-test-charts.442/

Extremely detailed article on gamuts and color spaces, much of which is still over my head:
http://www.tftcentral.co.uk/articles/pointers_gamut.htm
In this article, I learned the term “temporal dithering,” which blew my mind, because I had never heard
or thought about it before…

Lynda video series on color management (for photographers):


https://www.lynda.com/Design-Color-tutorials/Understanding-tools-required-color-management/1353
61/149854-4.html?utm_campaign=KKX08oOTMkk&utm_medium=viral&utm_source=youtube

Video that explains the difference between primary and log wheels in Resolve:
www.youtube.com/watch?v=M9OyxO8EqWM

Some definitions which I feel are not important enough to go in the main list:
LUX vs LX:
They both mean the same thing; “lx” is simply the SI symbol for the unit lux.

TINT, TONE, SHADE


Adding white, adding grey, and adding black, respectively.
https://en.wikipedia.org/wiki/Tints_and_shades
I never knew that there was a clear distinction between these. I imagine that this matters a lot more
with subtractive color, like paint.
I simply refer to all of these as being different “values.”

For the lols:


https://www.reddit.com/r/shittyHDR/top/?t=all

Other important bits of information that I didn’t know, or wasn’t sure about:
“You may also notice that your camera has a menu selection to choose a color space of either SRGB Or
Adobe RGB. This is a big point of confusion. Understand that this choice only applies to JPEG captures.”
“Also if you’re shooting RAW, there is no white balance built into the file. But if it’s a JPEG, the white
balance becomes part of the file immediately.”
Source
Furthermore, if you have a JPEG file, the white balance is “baked in”; it can't be easily changed using the
most accurate white-balance mathematics. With a raw image, the source xy values (in the CIE sense) can
be more correctly adjusted for a white balance change.

Apparently, some LUTs are so sophisticated, it’s nearly impossible for most people to recreate them
Here’s an interesting exception

Here’s just one example of an article that gets things woefully wrong:
http://changingminds.org/explanations/perception/visual/hsl.htm
They refer to the “L” in HSL as “luminosity,” when they actually mean “luminance,” but even that is
incorrect. The “L” stands for “lightness,” a very different concept.
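The difference is easy to show in code: HSL lightness is just the midpoint of the largest and smallest RGB channels, so it ignores how differently the eye weighs red, green, and blue, while a luma-style weighted sum does not. (A sketch; strictly speaking, true luminance would be computed on linear RGB rather than on gamma-encoded values.)

def hsl_lightness(r, g, b):
    # the "L" in HSL: midpoint of the max and min channels
    return (max(r, g, b) + min(r, g, b)) / 2

def rec601_luma(r, g, b):
    # perceptually weighted sum (Rec. 601 weights)
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure blue and pure green: identical HSL lightness, wildly different luma.
print(hsl_lightness(0, 0, 255), hsl_lightness(0, 255, 0))  # 127.5 127.5
print(rec601_luma(0, 0, 255), rec601_luma(0, 255, 0))      # about 29.1 and 149.7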

Remember, the enemy’s gate is ​down​.


(Ender’s Game reference. His team played better when they all oriented themselves in the same way.)

[THE END]
