As far as I know, a LUT is meant to be applied to the color channels (RGB), since we are doing a colorspace conversion. But Nuke's viewer LUT settings affect the alpha channel too. I am aware that the viewer LUT doesn't alter the original pixel values and only displays them according to the LUT settings, but shouldn't we turn off the viewer LUT while working on the alpha channel, for instance when pulling a key or doing roto?
Shouldn't we be viewing alpha in linear color space? Am I missing something here?
You're right: NUKE Viewer's lookup table doesn't change the alpha values at all, but it does affect how alpha is displayed. To change the appearance of your alpha while keying, use the Viewer's f/8 (gain) slider and y (gamma) slider. NUKE's working color space is linear, but the default LUT settings for the monitor and for 8-bit and 16-bit files are sRGB corrected.
Read an article about NUKE colorspaces and color transformations for more background.
To compensate for the sRGB gamma and keep the working color space linear, NUKE uses the mirrored (inverse) gamma curve.
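As a rough illustration, here is the standard sRGB transfer pair written out in plain Python (a sketch for reference, not NUKE's internal code):
# the standard sRGB transfer functions: a piecewise curve, not a plain 2.2 gamma
def srgb_to_linear(c):
    """Decode one sRGB-encoded component (0-1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear component (0-1) back to sRGB."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055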
You can change any of the default LUT settings, or turn off the alpha channel (or RGB, or any channel you want), for your convenience at any time.
Execute this code and then create a new Viewer node with the Ctrl+I shortcut (Cmd+I on a Mac):
import nuke
# to change Viewers' properties globally
nuke.knobDefault('Viewer.channels', 'rgb')
nuke.knobDefault('Viewer.viewerProcess', 'rec709') # use rec709, for instance
Or simply execute this code for Viewer1, changing viewerProcess to None (linear color space):
nuke.toNode('Viewer1').knob('channels').setValue('alpha')
nuke.toNode('Viewer1').knob('viewerProcess').setValue('None')
# then add these 3 lines to your menu.py file (they'll take effect after a restart)
nuke.knobDefault("Root.monitorLut", "linear") # monitor LUT
nuke.knobDefault("Root.int8Lut", "linear") # 8-bit files LUT
nuke.knobDefault("Root.int16Lut", "linear") # 16-bit files LUT
Additionally, to actually apply a LUT to your pixels (rather than just changing how they're displayed), you can use the OpenColorIO LUT and 3D LUT nodes from the Color menu in the Toolbar.
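For example, from Python (a minimal sketch; OCIOFileTransform is a standard Nuke node, but the LUT path here is just a placeholder):
import nuke
# create an OCIOFileTransform node and point it at a LUT file on disk
ocio = nuke.createNode('OCIOFileTransform')
ocio['file'].setValue('/path/to/my_lut.cube')  # placeholder path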
And a few words about the Pixel Analyzer panel:
The Current, Min, Max, Average and Median operations in the Pixel Analyzer panel apply to whichever channels you pick from its dropdown menu. If you need only the alpha value or only the RGB values, just choose that from the menu.
That said, there's no mistake if you use rgba mode. Check it: apply a Keyer node to the image and you'll see the RGB values are the same with or without alpha (but only if the rgba isn't premultiplied).
Sadly, there is still no access to the Pixel Analyzer panel through the Python API.
Related
I have two textures in DDS format, one original and the other with different colors (changed brightness, color saturation, etc.). Unfortunately, I don't know which color settings were altered in the other texture, so I can't simply transfer them to another, similar DDS texture.
I would like to compare the two textures, the original and the changed one, and extract the difference between them, creating a so-called mask. Then I want to apply this mask to another texture with the same dimensions as the original but the wrong colors, so as to get the same color saturation, brightness, etc. as in the changed image.
I tried to do this with Compressonator. I managed to get a difference, but I can't combine my image with the difference to get the same effect as in the changed image.
Below is a link to the original and modified textures and the difference I was able to extract from them; I would like to get the same effect as in the changed texture after applying the difference to the original:
https://mega.nz/folder/G5AXjSAB#7NXZ0CVMJTs4mo0YgKwy3g
Thanks for the help.
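For reference, the basic diff-and-reapply idea looks something like this with NumPy and Pillow (a rough sketch; file names are placeholders, Pillow must be able to decode your particular DDS variant, and the key point is keeping the difference signed so it can be added back):
import numpy as np
from PIL import Image

# load as signed integers so the difference can be negative
orig = np.asarray(Image.open('original.dds').convert('RGB'), dtype=np.int16)
changed = np.asarray(Image.open('changed.dds').convert('RGB'), dtype=np.int16)
diff = changed - orig  # the signed per-pixel "mask"

# apply the same difference to another texture of identical dimensions
target = np.asarray(Image.open('other.dds').convert('RGB'), dtype=np.int16)
result = np.clip(target + diff, 0, 255).astype(np.uint8)
Image.fromarray(result).save('other_recolored.png')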
I want to create a function that determines how colorblind-friendly an image is (on a scale from 0 to 1). I have several color-related functions that can perform the following tasks:
Take an image (as a PIL image, filename, or RGB array) and transform it into an image representative of what a colorblind person would see (for the different types of colorblindness)
Take an image and determine the rgb colors associated with each pixel of the image (transform into numpy array of rgb colors)
Determine the color palette associated with an image
Find the similarity between two RGB arrays (using CIELAB; see the colormath package)
My first instinct was to transform the image and the colorblind version of the image into RGB arrays and then use the CIELAB function to determine the similarity between the two images. However, that doesn't really solve the problem, since it can't pick out things like readability (e.g. if the text and background colors end up very similar after adjusting for colorblindness).
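Here is a rough sketch of that first instinct, with simulate_colorblind and rgb_similarity standing in as placeholders for the functions described above:
import numpy as np

def colorblind_friendliness(image, simulate_colorblind, rgb_similarity):
    # simulate what a colorblind viewer would see (placeholder for my function)
    simulated = simulate_colorblind(image)
    # rgb_similarity is assumed to return a mean CIELAB distance (0 = identical)
    distance = rgb_similarity(np.asarray(image), np.asarray(simulated))
    # map distance to a 0-1 score: identical images score 1.0
    return 1.0 / (1.0 + distance)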
Any ideas for how to determine how colorblind-friendly an image is?
I am using tkinter and PIL to make a basic photo viewer (mostly for learning purposes). I have the bg color of all of my widgets set to the default, which is "systemfacebutton", whatever that means.
I am using the PIL.Image module to view and rotate my images. When an image is rotated, you have to choose a fillcolor for the area behind it. I want this fill color to be the same as the default system color, but I have no idea how to get the RGB value or a supported color name for it. It has to be calculated by Python at run time so that it is consistent on anyone's OS.
Does anyone know how I can do this?
You can use w.winfo_rgb("systembuttonface") to turn any color name into a tuple of (R, G, B). (Here w is any Tkinter widget, the root window perhaps. Note that you had the color name scrambled.) The values returned are 16-bit (a holdover from X11), so you'll need to shift them right by 8 bits to get the 0-255 values commonly used for specifying colors.
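A minimal sketch of the idea (the commented rotate call assumes img is your PIL.Image):
import tkinter as tk

root = tk.Tk()
# resolve whatever the default background actually is on this OS
name = root.cget('background')  # e.g. 'systembuttonface' on Windows
r16, g16, b16 = root.winfo_rgb(name)  # 16-bit components, 0-65535
fill = (r16 >> 8, g16 >> 8, b16 >> 8)  # shift down to the usual 0-255 range
# rotated = img.rotate(30, expand=True, fillcolor=fill)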
Assume we read an image with OpenCV from a specific location on our drive and then look at some pixel values and colors; let's assume this is a scanned image.
Usually if we open a scanned image we will notice some differences between the printed image (before scanning) and the image as we see it on the display screen.
The question is:
Are the pixel color values we get from OpenCV relative to our display screen's color space, or do we get exactly the same colors as in the scanned image (the printed version)?
I'm not sure what you want to do or achieve, but here's one thing worth mentioning about color profiles.
The most common color profile for cameras, screens and printers is sRGB, a limited color spectrum that does not cover the whole RGB range (because cheap hardware can't reproduce it anyway).
Some cameras (and probably scanners) allow you to use different color profiles like AdobeRGB, which increases the color space and "allows" more colors.
The problem is, if you capture (e.g. scan) an image with the AdobeRGB color profile but the system (browser/screen/printer) interprets it as sRGB, you'll probably get washed-out colors, purely because of the wrong interpretation (just as you'll get blue faces in your image if you interpret BGR images as RGB images).
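The BGR/RGB point in particular is easy to hit in practice; when handing an OpenCV image to an RGB-based library, convert the channel order explicitly (a minimal sketch; the file name is a placeholder):
import cv2

img_bgr = cv2.imread('photo.jpg')  # OpenCV loads 8-bit images as BGR
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)  # reorder for RGB consumers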
OpenCV and many browsers, printers, etc. always interpret images as sRGB images, according to http://fotovideotec.de/adobe_rgb/
The pixel values themselves don't change as long as you don't modify the image file: they're stored in the file, and your display or printer only changes the way you see the image. You often don't get exactly the same result because it depends on the technology and on the different filters applied to your image before it is displayed or printed.
The pixel values are the ones you read in with imread.
It depends on the flags you set for it: the original image may have a greater bit depth (depending on your scanner) than the one you loaded.
Also, the real file format is determined from the first bytes of the file, not by the file name extension.
So the values may not match the scanned image's pixel values if the bit depths differ.
Please have a look at the imread documentation.
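For instance, to keep the file's original bit depth instead of the default 8-bit BGR conversion (a minimal sketch; the file name is a placeholder):
import cv2

# IMREAD_UNCHANGED preserves the stored bit depth and channel count;
# the default IMREAD_COLOR converts everything to 8-bit BGR
img = cv2.imread('scan.png', cv2.IMREAD_UNCHANGED)
print(img.dtype, img.shape)  # e.g. uint16 for a 16-bit scan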
I use the imshow function with interpolation='nearest' on a grayscale image and get a nice color picture as a result. It looks like it does some sort of color segmentation for me; what exactly is going on there?
I would also like to get something like this for image processing. Is there some function on numpy arrays like interpolate('nearest') out there?
EDIT: Please correct me if I'm wrong, but it looks like it does simple pixel clustering (the clusters are the colors of the corresponding colormap), and the word 'nearest' means that it takes the nearest colormap color (probably in RGB space) to decide which cluster the pixel belongs to.
interpolation='nearest' simply displays the image without trying to interpolate between pixels when the display resolution is not the same as the image resolution (which is most often the case). The result is an image in which each image pixel is displayed as a square of multiple screen pixels.
There is no relation between interpolation='nearest' and the grayscale image being displayed in color. By default, imshow uses the jet colormap to display an image. If you want it displayed in grayscale, call the gray() method to select the gray colormap.
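A minimal sketch showing the difference side by side (the test image is just random data):
import matplotlib.pyplot as plt
import numpy as np

img = np.random.rand(8, 8)  # small grayscale test image

plt.subplot(1, 2, 1)
plt.imshow(img, interpolation='nearest')  # default colormap maps values to colors
plt.subplot(1, 2, 2)
plt.imshow(img, cmap='gray', interpolation='nearest')  # true grayscale display
plt.show()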