Wednesday, July 21, 2010

Filed under: Cameras, Post Production, Production, Tips, Training

Step into the Matrix: What I Learned from Examining RED’s Build 30 Color Science

Art Adams | 07/21

RED says the MX sensor uses the same colorimetry as their old M sensor. Others say the improvements are so dramatic that this can’t be. A search for the truth led me deep into the heart of The Matrix…

 

 

Anyone who has traversed the menu structure of a Sony or Panasonic camcorder has likely seen the matrix. Like the movies, this matrix can be a bit confusing until you’ve played with it for a while. I’m nowhere near being adept at manipulating it, but I learned enough from building my own pseudo matrix in Apple’s Color that I have a better grasp of what it’s for and what it’s meant to do.

The bottom line is this:

A color matrix, or series of color matrices, massages the raw sensor color data into a form that looks correct on a specific viewing device, and allows some customization of how the camera responds to color at a very deep level.

DIT Peter Gray has a page on his web site that deals with various settings, including the matrix, in the Sony F900, so if you haven’t seen a matrix before then click here and scroll down the page for a rough explanation. There’s also an interesting article about this under DSC Labs’s Tech Tips column. Look for Dave Adams of Sky Television.

Every camera has at least one matrix, and typically two or three. Based on what I’ve seen in my experiments, matrices are fundamental to the proper functioning of a camera at a very, very deep level. The matrix most of us have seen is Sony’s User Matrix, which in many of their cameras offers settings that read as follows: R-G, R-B, G-R, G-B, B-R and B-G. Many engineers I’ve spoken to understand that these controls “push” colors around on a vectorscope, but my experiments have shown me that the matrix is more complicated than that:

Matrix settings either

(a) mix a color channel signal into another color channel signal, or

(b) subtract a color channel signal from another color channel signal.

For example, (b) is how I eliminated green spill from the RED’s Camera RGB color space on the previous page: the blue channel saw a lot of blue and a little bit of green, so I told Color to subtract some of the green signal from the blue signal until all that was left in the blue signal was a response to the color blue only.

Except that it wasn’t that simple. With modern cameras it never is. Subtracting the green signal from the blue channel solved part of the problem, but not nearly all of it. Clearly there was more that had to be done.
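
If it helps to see that as arithmetic, here is a minimal sketch of a color matrix as a plain 3x3 multiply. The coefficients are invented for illustration; they are not RED’s or anyone else’s real numbers. Subtracting green from blue is nothing more than a negative off-diagonal coefficient in the blue row:

```python
# A color matrix as a 3x3 multiply: each output channel is a weighted mix of
# the input channels, and a negative off-diagonal weight subtracts one
# channel's signal from another's. Coefficients are invented for illustration.
import numpy as np

matrix = np.array([
    [ 1.00,  0.00,  0.00],   # R_out = R_in
    [ 0.00,  1.00,  0.00],   # G_out = G_in
    [ 0.00, -0.20,  1.20],   # B_out = 1.2 * B_in - 0.2 * G_in
])

def apply_matrix(rgb, m):
    """Apply a 3x3 color matrix to an (H, W, 3) float image."""
    return np.clip(rgb @ m.T, 0.0, 1.0)

# A pixel that is mostly blue but carries some green contamination:
pixel = np.array([[[0.05, 0.30, 0.80]]])
print(apply_matrix(pixel, matrix))   # blue output now responds less to green
```

Roughly speaking, the diagonal terms are each channel’s response to itself, and the off-diagonal terms are the kind of thing controls like R-G and B-G adjust.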

And with that thought, I decided to build my own matrix (or matrix emulator) in Apple’s Color. I was curious as to whether I could build a node tree in the Color FX room that would add and subtract color channels from each other in such a way that I could make a Camera RGB image look like RedColor.

Here’s my starting vectorscope showing Camera RGB under tungsten light:

Here’s RedColor:

And here’s the look I was able to build using a node tree in Color:

I’ve got the basic shape correct, although some of my colors are a bit oversaturated compared to RedColor. The important thing is that the colors fall onto their proper vectors, even if they don’t fall into their designated boxes. (An electronically-generated color bar signal will put all the colors into their boxes, but this rarely works when shooting in the real world. A “vector” is a line that passes from the center of the display through each color box, and as long as each color falls somewhere along its vector it’s usually okay even if it doesn’t fit neatly into its little box.)
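
For the curious, here is a rough sketch of how a vectorscope places a color along a vector. It throws away brightness and plots only the two chroma components, Cb horizontally and Cr vertically; Rec 709 luma weights are assumed here, which may not match what any given scope or camera uses internally:

```python
# A minimal sketch of how a vectorscope places a color: it ignores brightness
# and plots only the chroma components (Cb horizontally, Cr vertically).
import numpy as np

def vectorscope_point(r, g, b):
    """Return (Cb, Cr, angle_in_degrees) for an RGB value in 0..1."""
    y  = 0.2126 * r + 0.7152 * g + 0.0722 * b      # Rec 709 luma
    cb = (b - y) / 1.8556                          # blue-difference chroma
    cr = (r - y) / 1.5748                          # red-difference chroma
    angle = np.degrees(np.arctan2(cr, cb))
    return cb, cr, angle

# A saturated blue and a desaturated blue land on the same vector (same
# angle), just closer to or farther from the center of the scope.
print(vectorscope_point(0.1, 0.1, 0.9))
print(vectorscope_point(0.3, 0.3, 0.6))
```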

There are some other problems as well, so let’s look at what the actual charts look like. Here’s RedColor:

And here’s my look:

Close, but no cigar. Well, not an expensive cigar, anyway.

Lest you think this was easy, here’s the node tree I created in Color over the course of several hours:

This may look complicated, but it’s really not—at least not at this level. What I did was isolate the three color channels—red, green and blue—and then create branches where I could add or subtract the signals from the other color channels. For example, after isolating the red channel I set up branches where I could add or subtract blue and green; for green I set up branches that could add or subtract red and blue; etc.

Mostly I subtracted colors, which meant that I was compensating for the colored filters on the photosites seeing colors other than their own. (A little of this is essential, otherwise secondary colors like cyan, yellow and magenta will suffer horribly—as they do on many older cameras.)

It’s important to note that we are talking about color signals, not actual colors. On the previous page I didn’t really subtract green from blue; instead I subtracted a green signal from a blue signal. The signal itself isn’t a color, it’s just information that lets the camera’s processor know how much color is present at points in the image so it can build an RGB picture. When you subtract a signal from a signal you’re merely removing one signal’s influence from another’s, as opposed to subtracting one color from another color which results in a new color.

Think of a color signal on its own as a monochrome image. To illustrate, I created some imperfect but good-enough examples in Photoshop. Here’s blue:

And here’s green:

Bright tones mean that a color channel sees its color in the image, and dark tones mean it doesn’t. If we take the values in the green channel and subtract them from the blue values, the parts where the green values are highest will wipe out the areas where blue overlaps, because the blue values there are lower. That has the effect of eliminating green contamination from the blue signal, making blue much more pure.

Conversely, as green doesn’t see much blue if it sees any at all, green’s values will be very low in the blue areas of the chart, and subtracting low green values from high blue values leaves the blue portions of the image alone.

Since we’re doing all this work only in the blue channel, blue is the only color affected.
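
Here is the same idea as a quick sketch, with numpy arrays standing in for the Photoshop channels above. The response values are invented, but the arithmetic is the whole trick:

```python
# Channel-as-monochrome-image: each value is how strongly that channel
# responded at one patch of the chart (1.0 = fully sees its color).
# The numbers are illustrative only.
import numpy as np

blue_signal  = np.array([0.9, 0.8, 0.3, 0.1])   # blue channel's response
green_signal = np.array([0.2, 0.3, 0.9, 0.8])   # green channel's response

# Subtract a fraction of the green signal from the blue signal. Where green
# responds strongly (the last two patches), blue's contaminated response is
# pulled down; where green is weak, blue is left essentially alone.
cleaned_blue = np.clip(blue_signal - 0.3 * green_signal, 0.0, 1.0)

print(cleaned_blue)   # [0.84 0.71 0.03 0.  ] -> green's influence removed
```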

This delicate balance is what differentiates one make or model of camera from another. It’s all about which filters are used on the photosites to detect red, green and blue, and how their signals are mixed together to form pure and pleasing colors. There’s no such thing as a pure green, pure blue or pure red filter, because colors are made up of a range of wavelengths, not a single wavelength. The filters often pass a range of wavelengths through to the photosite, and occasionally have sensitivities, or “leaks,” where they shouldn’t. (Infrared comes to mind.)

In addition, it’s beneficial for each color channel to be at least a little sensitive to other colors as that’s how secondary colors are made. If photosites only see very narrow wavelengths of red, green and blue then they’ll never respond to yellow or magenta or cyan. Some overlap is necessary.

On this web page you’ll see charts of the spectral response of a number of popular DSLR cameras. Note how all the colors overlap somewhat in spectral response, and yet these cameras reproduce very pretty and accurate colors. That’s because every image is being processed through a color matrix, either in the camera (JPEG) or later through an import plugin or some other software tool (raw). There’s a formula that these manufacturers developed that allows them to selectively add and subtract color channels from each other so that all these different signals blend into a very pleasing and accurate color palette. This matrix is specific to the spectral properties of the red, green and blue filters used on their sensor. (This is true of any camera.)
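
How might such a formula be found? A common textbook approach, and not necessarily what any particular manufacturer actually does, is to photograph known reference patches and then solve for the 3x3 matrix that best maps the sensor’s readings onto the reference values in a least-squares sense. A sketch, with invented patch data:

```python
# Fit a 3x3 matrix that maps measured camera RGB onto reference RGB.
# Patch values below are hypothetical, for illustration only.
import numpy as np

camera_rgb = np.array([      # what the sensor reported for each patch
    [0.80, 0.25, 0.10],
    [0.20, 0.75, 0.30],
    [0.10, 0.30, 0.70],
    [0.50, 0.50, 0.50],
])
reference_rgb = np.array([   # what those patches should look like
    [0.90, 0.10, 0.05],
    [0.10, 0.85, 0.15],
    [0.05, 0.15, 0.85],
    [0.50, 0.50, 0.50],
])

# Solve camera_rgb @ M.T ~= reference_rgb for the 3x3 matrix M.
m_transposed, *_ = np.linalg.lstsq(camera_rgb, reference_rgb, rcond=None)
matrix = m_transposed.T
print(np.round(matrix, 3))
```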

When you look at matrix numbers you are seeing deep into the heart of the sensor, right down to the way the dye filters on each photosite respond to their designated wavelengths of light. That’s pretty amazing, and very powerful.

In a camcorder there are often multiple matrices. In the Sony F900, for example, there are four obvious matrices:

(1) The OHB matrix. This adjusts for color differences between this camera’s optical head block and any other F900’s optical head block. At a very basic level the OHB matrix strives to make all F900 cameras look the same in spite of subtle differences in their prisms.

(2) The preset matrix. This is how you specify a color space for viewing. If you’re shooting for broadcast you may set this to ITU (Rec) 709 to make sure that all the colors you capture are “legal” and look correct on a standard HD monitor.

(3) The user matrix. This is where you can customize how the camera responds to color in a manner that you choose. Adjusting this is not for the faint of heart, as channel mixing requires an interesting blend of technical know-how, experience and voodoo.

(4) The multi matrix. This allows the user to select a swath of color and affect it exclusively. This is handy if you need to make a product a very specific shade of color that the camera doesn’t automatically reproduce accurately. This is the simplest matrix and allows the user to grab a “pie slice” of the vectorscope and drag it one way or another.

In the RED ONE MX there’s matrixing that happens as part of the de-Bayer process, and a matrix that is applied in-camera to the monitor outputs (RedColor or “raw”). Multiple matrices are available in Red-Cine X. By selecting RedColor, RedSpace, Camera RGB or any other option you are choosing to interpret the recorded RGB signals through a specific color matrix. (RedColor is the matrix that produces ITU (Rec) 709-compliant color for viewing on any standardized HD monitor.)

I saw an interesting demonstration of the matrix years ago when watching an F900 demo at Bexel in Burbank. The engineer aimed the camera at a gray card and then dialed a number of very extreme numbers into the User matrix. The gray card didn’t change at all, but when he zoomed out and revealed the room every color in it was severely messed up. You can’t adjust a color matrix by looking at a grey card or grey scale chart: you must observe known color references during the process. Matrices respond to very specific colors, so you have to look at those very specific colors in order to see how they change.
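
There’s a simple mathematical reason a gray card can survive extreme settings like that. One way a matrix can leave grays alone, no matter how wild its coefficients, is for each row to sum to 1.0: neutral values then pass through untouched while anything colored gets pushed around. A quick sketch, with coefficients invented for illustration:

```python
# A matrix whose rows each sum to 1.0 leaves neutral (R = G = B) values
# unchanged, even with extreme individual coefficients. Illustrative only.
import numpy as np

extreme = np.array([
    [ 1.6, -0.9,  0.3],
    [-0.5,  1.9, -0.4],
    [ 0.2, -0.7,  1.5],
])
print(extreme.sum(axis=1))    # [1. 1. 1.] -> every row sums to 1

gray  = np.array([0.5, 0.5, 0.5])
color = np.array([0.7, 0.4, 0.2])
print(extreme @ gray)         # [0.5 0.5 0.5] -> the gray card doesn't move
print(extreme @ color)        # noticeably different from the input color
```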

That’s where our old friend the Chroma-Du-Monde chart comes in.

Here’s RedColor’s waveform from the above experiment:

And here’s the look I created:

I got close, but it’s not a perfect match. The important thing, though, is that all the color channels have “arms.” See how the left and right sides of each color have notches that show how they are responding to colors on the chart? The downward blue notch on the left of the blue channel shows that it isn’t seeing green, and the upward notch on the right shows that it is seeing blue. The green waveform shows that green, too, is only seeing green in parts of the chart that contain green, as does the red waveform (although red shows up differently as both of the chart’s side columns show decreasing amounts of red from top to bottom).

These “arms” show color separation, which is very important for pure and accurate color: not only do the colors need to fall onto the proper vector on the vectorscope, but they need to have broad distinct “arms” on the waveform in order to show that they are only responding when they see their own color in the chart (and in the real world). A Chroma-du-Monde chart and a parade waveform monitor are the only way to see this accurately.

While working only with a vectorscope I discovered that I could get the colors onto the right vectors and still have a lot of crossover between colors. Remember the blue notch that started all this?

When I created my own matrix I discovered that if I only watched the vectorscope I could get all the colors lined up with their little boxes but STILL not create that notch. I could get blue and green to look proper on a vectorscope, but green was still contaminating blue. In order to eliminate that contamination and recreate that notch I had to watch both the vectorscope AND the parade waveform, because the vectorscope didn’t tell me where colors were crossing over. My process went like this:

(1) Separate Camera RGB into its red, green and blue components

(2) Look at the chart for obvious color contamination, such as blue in the greens. Work on one channel at a time.

(3) Add and subtract colors from that color channel to get that color onto its vector (in line with its color box on the vectorscope), taking my cues visually from the chart. (Does green look bluish? Then subtract some of the green channel from blue and see what happens…)

(4) Look at the parade waveform and see if the color I’m working on has big “arms” or not. If it doesn’t, go back to step 3 and try subtracting and adding other colors. If it does have big arms, and it lands on the right vector on the vectorscope, move on to the next channel and repeat.

(5) Do the channels still fall along their vectors on the vectorscope? Tweaking one color channel affects all the others, so if I change one I frequently have to go back and tweak the others. If the colors line up on the vectorscope, move on. If not, go back to step (2) and try to place each color in its vector without sacrificing the “arms” on the waveform.

(6) Repeat endlessly until done.

Adding and subtracting color signals from a color does two things: it affects where that color falls on a vectorscope, which shows color accuracy; and it affects color separation, making sure that each color channel is pure when viewed on its own. The vectorscope shows color accuracy, while the parade waveform shows color separation.
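
If you want to poke at this yourself, a crude parade waveform is easy to build in software. This sketch assumes a float RGB image with values from 0 to 1, and it is nowhere near a broadcast-grade scope; it simply plots every pixel’s value against its column position, one pane per channel, which is enough to see whether a channel has “arms” only where its own color lives:

```python
# A crude RGB parade waveform: for every column of the image, plot the
# distribution of that column's R, G and B values in three side-by-side panes.
import numpy as np
import matplotlib.pyplot as plt

def parade(image):
    """image: (H, W, 3) float array in 0..1. Draw a rough parade waveform."""
    h, w, _ = image.shape
    fig, axes = plt.subplots(1, 3, sharey=True, figsize=(9, 3))
    for ch, (ax, name) in enumerate(zip(axes, "RGB")):
        cols = np.repeat(np.arange(w), h)        # x: column index
        vals = image[:, :, ch].T.reshape(-1)     # y: every pixel in that column
        ax.plot(cols, vals, ",", alpha=0.2)
        ax.set_title(name)
        ax.set_ylim(0, 1)
    plt.show()

# Example: a synthetic chart with a pure-blue band on the left and a band of
# blue plus green contamination on the right. The green pane shows an "arm"
# only over the right-hand band, while blue stays high across both.
chart = np.zeros((50, 300, 3))
chart[:, :150, 2] = 0.8                  # pure blue region
chart[:, 150:, 1:3] = [0.3, 0.8]         # blue with green contamination
parade(chart)
```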

Back at the beginning of this article I mentioned that this journey started as an experiment to determine whether the RED ONE M and RED ONE MX sensors shared the same color filters. Here are some waveform/vectorscope images from Color that show the results of the tests. Remember that I didn’t tamper with these images in any way, using white balance presets in both cameras, so the images are not perfectly white balanced. This reveals itself in a few ways:

(1) When white balanced the center of the vectorscope will show a tight white dot. In these images you’ll notice that the center of the vectorscope is not a tight white dot and that the chart doesn’t always look perfectly accurate. The direction of the white dot tells you what the color bias is, so if the dot is skewed toward blue then expect the chart to look blue.

(2) When the white dot is skewed that means the overall vectorscope pattern is skewed in the same direction. This does not change its shape. Compare the shapes of the vectorscope patterns, not where they fall on the scope.

(3) Same with the waveforms: they won’t match perfectly due to the different white balances, but you’ll be able to see where the “arms” fall and that tells you a lot about color purity.

All of these tests were shot with the cameras set at ASA 400.


M sensor, daylight, RedColor. White balance is a little on the cyan side.


MX sensor, daylight, RedColor. White balance is cyan-blue, but the basic shape of the vectorscope matches the M sensor. Notice how tight the dots are on the MX sensor, which shows that it displays considerably less chroma noise than the M sensor.


M sensor, tungsten, RedColor. White balance is a little cyan.


MX sensor, tungsten, RedColor. White balance is a little cyan. Once again the shapes are roughly the same, including the way the dots smear on the blue-yellow axis. We saw that earlier in the experiment where I subtracted green from the blue channel.

Based on this comparison my best guess is that RED is using the same colorimetry in both cameras, and the only significant difference between the two is that the MX sensor is much, much, much quieter than the M.

I’ve learned a lot from this process. Specifically, I learned that what I thought was a RED-specific flaw is actually representative of what every camera manufacturer has to deal with in one way or the other. While the RED’s path has been fraught with growing pains, those pains have allowed us to see inside a process that is normally hidden from view. One thing is for sure: from a color and noise perspective the RED ONE is finally a mature camera.

Thanks to Adam Wilt and Meets the Eye Productions for their help in shooting the tests used in this article. Thanks also to Gary Adcock and Bill Hogan for spot-checking this article, and to David Corley and Michael Wiegand of DSC Labs for their insights.

Disclosure: DSC Labs sends me free charts for my tests and so far hasn’t asked for any back.

Art Adams is a DP who makes frequent trips to the heart of the matrix. His website is at www.artadams.net.

 

 

 




Great article; it really helped me understand more about how the in-camera processing works.
But… could this also be applied in post production?
As in, is this possible with all cameras? It seems to me this would be a great way to get more/better color out of any footage.
Or is that a general concept of grading I don’t know about?

Posted by BBuijsman  on  07/22  at  03:52 AM


The RED ONE was the first camera that offered a “digital negative” at an affordable price point. Up till then, if you’d asked any established company if this was something they’d be willing to offer, they’d have said “No; unless you want to shoot with an F35 in S-Log, or maybe a Varicam in Film Rec mode, no one wants that unless they have a lot of money.”

Traditionally video/HD has been the “affordable” medium. You could turn it around quickly as the image was baked in and ready to go—there was no need for pricey color grading. I spent many years working in the corporate and broadcast markets knowing that whatever I shot that day was never going to be touched again beyond basic editing.

RED proved that people do want a “digital negative” and are willing to jump through hoops to get it.

Even now, though, there are few cameras that will give you a “flat” image you can grade in post. Varicams offer “Film Rec” gamma, which is reasonably flat for grading, but it also works brilliantly as a WYSIWYG adjustable gamma curve; I’ve used it many times and I’ve never graded any of that footage. Alexa will offer LogC; Sony offers S-Log but only on their highest-end cameras; and even DSLRs, which you’d think would offer you some sort of raw imaging, don’t—probably because the compression is so high that they don’t want you to push the image around much in post, as it wouldn’t hold up.
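
For anyone wondering what “flat” buys you in numbers, here is a toy log-style curve. It is purely generic and illustrative; it is not Sony’s S-Log, ARRI’s LogC or Panasonic’s Film Rec formula:

```python
# A generic log-style encode: it compresses highlights and lifts shadows so
# more of the scene's range survives an 8- or 10-bit recording. Illustrative
# only; no vendor's actual curve.
import numpy as np

def generic_log_encode(linear, a=5.0):
    """Map linear scene values (0..1) to a flatter 0..1 code value."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.log1p(a * linear) / np.log1p(a)

stops = np.array([0.01, 0.05, 0.18, 0.50, 1.00])   # deep shadows to clip
print(np.round(generic_log_encode(stops), 3))
# Shadows get a larger share of the code range than they would with a
# straight linear or Rec 709 transfer, at the cost of a flat-looking image.
```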

I think you’ll see more and more of this going forward, and I think Alexa will set the trend: you can record Rec 709 to ProRes and get WYSIWYG images, or you can record LogC and grade it later. Or you can watch Rec 709 on your monitor and record LogC elsewhere, or record Rec 709 to the CF cards for editing and then conform and grade the LogC files later…

I think Sony and Panasonic will be slow to adopt this attitude, as historically they’ve worked really hard to make their cameras beautiful right out of the box, but eventually every camera should work this way. There’s no reason for it not to. Meanwhile, unless you’re shooting with a RED, a Phantom or a really high-end Sony camera, you’re probably stuck with images processed through a couple of baked-in matrices. This is mostly due to a corporate culture that thinks the greatest benefit of video and HD is immediacy.

There have been, and always will be, people who want to see what they are getting as they get it and who don’t want to spend the money and time to grade it later.

When you get access to raw material you can learn an awful lot about how cameras work. Manufacturers don’t always want you to know what’s going on under the hood.

Posted by Art Adams  on  07/22  at  09:02 PM

